# Import _thread instead of threading to reduce startup cost
# Hardwired values
# open() uses st_blksize whenever we can
# bytes
# NOTE: Base classes defined here are registered with the "official" ABCs
# defined in io.py. We don't use real inheritance though, because we don't want
# to inherit the C implementations.
# Rebind for compatibility
# Does open() check its 'errors' argument?
# Wrapper for builtins.open
# Trick so that open() won't become a bound method when stored
# as a class variable (as dbm.dumb does).
# See init_set_builtins_open() in Python/pylifecycle.c.
# Define a default pure-Python implementation for open_code()
# that does not allow hooks. Warn on first use. Defined for tests.
# In normal operation, both `UnsupportedOperation`s should be bound to the
# same object.
### Internal ###
### Positioning ###
### Flush and close ###
# XXX Should this return the number of bytes written???
# If getting closed fails, then the object is probably
# in an unusable state, so ignore.
# If close() fails, the caller logs the exception with
# sys.unraisablehook. close() must be called at the end of __del__().
### Inquiries ###
### Context manager ###
# That's a forward reference
### Lower-level APIs ###
# XXX Should these be present even if unimplemented?
### Readline[s] and writelines ###
# For backwards compatibility, a (slowish) readline().
# The read() method is implemented by calling readinto(); derived
# classes that want to support read() only need to implement
# readinto() as a primitive operation.  In general, readinto() can be
# more efficient than read().
# (It would be tempting to also provide an implementation of
# readinto() in terms of read(), in case the latter is a more suitable
# primitive operation, but that would lead to nasty recursion in case
# a subclass doesn't implement either.)
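A minimal sketch of that arrangement (class names here are hypothetical; the real base class is `RawIOBase` in `_pyio`): the base derives `read()` from `readinto()`, and a subclass supplies only `readinto()`.

```python
import io

class ReadIntoBase(io.RawIOBase):
    # read() implemented on top of readinto(); subclasses only need to
    # provide readinto() as the primitive operation
    def read(self, size=-1):
        if size < 0:
            # naive read-everything loop, good enough for a sketch
            chunks = []
            while True:
                b = self.read(65536)
                if not b:
                    return b"".join(chunks)
                chunks.append(b)
        buf = bytearray(size)
        n = self.readinto(buf)
        if n is None:
            return None          # the raw stream would block
        del buf[n:]              # trim to what was actually read
        return bytes(buf)

class Source(ReadIntoBase):
    # a trivial in-memory raw stream implementing only readinto()
    def __init__(self, data):
        self._data = data
        self._pos = 0

    def readinto(self, b):
        n = min(len(b), len(self._data) - self._pos)
        b[:n] = self._data[self._pos:self._pos + n]
        self._pos += n
        return n
```

Note the deliberate absence of the reverse default (readinto() in terms of read()), for exactly the recursion reason given above.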
# b'' or None
# Flush the stream.  We're mixing buffered I/O with lower-level I/O,
# and a flush may be necessary to synch both views of the current
# file state.
# XXX: Should seek() be used, instead of passing the position
# XXX  directly to truncate?
# may raise BlockingIOError or BrokenPipeError etc
# Initialize _buffer as soon as possible since it's used by __del__()
# which calls close()
# Size of any bytes-like object
# Inserts null bytes between the current end of the file
# and the new write position.
# Special case for when the number of bytes to read is unspecified.
# Strip the consumed bytes.
# Read until EOF or until read() would block.
# The number of bytes to read is specified, return at most n bytes.
# Length of the available buffered data.
# Fast path: the data to read is fully buffered.
# Slow path: read from the stream until enough bytes are read,
# or until an EOF occurs or until read() would block.
# n is more than avail only when an EOF occurred or when
# read() would have blocked.
# Save the extra data in the buffer.
# Returns up to size bytes.  If at least one byte is buffered, we
# only return buffered bytes.  Otherwise, we do one raw read.
# Implementing readinto() and readinto1() is not strictly necessary (we
# could rely on the base class that provides an implementation in terms of
# read() and read1()). We do it anyway to keep the _pyio implementation
# similar to the io implementation (which implements the methods for
# performance reasons).
# Need to create a memoryview object of type 'b', otherwise
# we may not be able to assign bytes to it, and slicing it
# would create a new object.
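For instance (illustrative only, not the library's code), a buffer whose native format isn't single bytes must be re-cast before bytes can be assigned into it:

```python
import array
import sys

a = array.array("I", [0])          # format 'I': native unsigned ints
m = memoryview(a)                  # this view's format is 'I', not bytes
mb = m.cast("B")                   # re-cast to a flat unsigned-byte view
# byte assignment now works, without creating a new object
mb[:a.itemsize] = bytes(range(1, a.itemsize + 1))
```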
# First try to read from internal buffer
# If remaining space in the caller's buffer is larger than the
# internal buffer, read directly into the caller's buffer
# eof
# Otherwise refill internal buffer - unless we're
# in read1 mode and already got some data
# In readinto1 mode, return as soon as we have some data
# GH-95782: Keep return value non-negative
# XXX we can implement some more tricks to try and avoid
# partial writes
# We're full, so let's pre-flush the buffer.  (This may
# raise BlockingIOError with characters_written == 0.)
# We've hit the buffer_size. We have to accept a partial
# write and cut back our buffer.
# We have to release the lock and call self.flush() (which will
# probably just re-take the lock) in case flush has been overridden in
# a subclass or the user set self.flush to something. This is the same
# behavior as the C implementation.
# XXX The usefulness of this (compared to having two separate IO
# objects) is questionable.
# Undo read ahead.
# First do the raw seek, then empty the read buffer, so that
# if the raw seek fails, we don't lose buffered data forever.
# Use seek to flush the read buffer.
# Undo readahead
# Have to close the existing file first.
# bpo-27066: Raise a ValueError for bad value.
# Ignore the AttributeError if stat.S_ISDIR or errno.EISDIR
# don't exist.
# don't translate newlines (\r\n <=> \n)
# For consistent behaviour, we explicitly seek to the
# end of file (otherwise, it might be done only on the
# first write()).
# reached the end of the file
# decode input (with the eventual \r from a previous pass)
# retain last \r even when not translating data:
# then readline() is sure to get \r\n in one pass
# Record which newlines are read
# The write_through argument has no effect here since this
# implementation always writes through.  The argument is present only
# so that the signature can match the signature of the C version.
# buffer for text returned from decoder
# offset into _decoded_chars for read()
# info for reconstructing decoder state
# don't write a BOM in the middle of a file
# Sometimes the encoder doesn't exist
# self._snapshot is either None, or a tuple (dec_flags, next_input)
# where dec_flags is the second (integer) item of the decoder state
# and next_input is the chunk of input bytes that comes next after the
# snapshot point.  We use this to reconstruct decoder states in tell().
# Naming convention:
# XXX What if we were just reading?
# The following three methods implement an ADT for _decoded_chars.
# Text returned from the decoder is buffered here until the client
# requests it by calling our read() or readline() method.
# Importing locale may fail if Python is being built
# The return value is True unless EOF was reached.  The decoded
# string is placed in self._decoded_chars (replacing its previous
# value).  The entire input chunk is sent to the decoder, though
# some of it may remain buffered in the decoder, yet to be
# converted.
# To prepare for tell(), we need to snapshot a point in the
# file where the decoder's input buffer is empty.
# Given this, we know there was a valid snapshot point
# len(dec_buffer) bytes ago with decoder state (b'', dec_flags).
# Read a chunk, decode it, and put the result in self._decoded_chars.
# At the snapshot point, len(dec_buffer) bytes before the read,
# the next input to be decoded is dec_buffer + input_chunk.
# The meaning of a tell() cookie is: seek to position, set the
# decoder flags to dec_flags, read bytes_to_feed bytes, feed them
# into the decoder with need_eof as the EOF flag, then skip
# chars_to_skip characters of the decoded result.  For most simple
# decoders, tell() will often just give a byte offset in the file.
# This should never happen.
# Skip backward to the snapshot point (see _read_chunk).
# How many decoded characters have been used up since the snapshot?
# We haven't moved from the snapshot point.
# Starting from the snapshot position, we will walk the decoder
# forward until it gives us enough decoded characters.
# Fast search for an acceptable start point, close to our
# current pos.
# Rationale: calling decoder.decode() has a large overhead
# regardless of chunk size; we want the number of such calls to
# be O(1) in most situations (common decoders, sensible input).
# Actually, it will be exactly 1 for fixed-size codecs (all
# 8-bit codecs, also UTF-16 and UTF-32).
# Decode up to tentative start point
# Before pos and no bytes buffered in decoder => OK
# Skip back by buffered amount and reset heuristic
# We're too far ahead, skip back a bit
# Note our initial start point.
# We haven't moved from the start point.
# Feed the decoder one byte at a time.  As we go, note the
# nearest "safe start point" before the current location
# (a point where the decoder has nothing buffered, so seek()
# can safely start from there and advance to this location).
# Chars decoded since `start_pos`
# Decoder buffer is empty, so this is a safe start point.
# We didn't get enough decoded data; signal EOF to get more.
# The returned cookie corresponds to the last safe start point.
# Seeking to the current position should attempt to
# sync the underlying buffer with the current position.
# The strategy of seek() is to go back to the safe start point
# and replay the effect of read(chars_to_skip) from there.
# Seek back to the safe start point.
# Restore the decoder to its state from the safe start point.
# Just like _read_chunk, feed the decoder and save a snapshot.
# Skip chars_to_skip of the decoded characters.
# Read everything.
# Keep reading chunks until we have size characters to return.
# Grab all the decoded text (we will rewind any extra bits later).
# Make the decoder if it doesn't already exist.
# Newlines are already translated, only search for \n
# Universal newline search. Find any of \r, \r\n, \n
# The decoder ensures that \r\n are not split in two pieces
# In C we'd look for these in parallel of course.
# Nothing found
# Found \n
# Found lone \r
# Found \r\n
# Found \r
# non-universal
# reached length size
# No line ending seen yet - get more data
# end of file
# don't exceed size
# Rewind _decoded_chars to just after the line ending we found.
# Issue #5645: make universal newlines semantics the same as in the
# C version, even under Windows.
# TextIOWrapper tells the encoding in its repr. In StringIO,
# that's an implementation detail.
# This doesn't make sense on StringIO.
# Note, the comparison uses "<" to match the
# __lt__() logic in list.sort() and in heapq.
# Overwrite above definitions with a fast C implementation
# Create aliases
# Testing is done through test_urllib.
# e.g.
# and
# become
# URL has an empty authority section, so the path begins on the third
# character.
# Skip past 'localhost' authority.
# Skip past extra slash before UNC drive in URL path.
# Windows itself uses ":" even in URLs.
# No drive specifier, just convert slashes
# make sure not to convert quoted slashes :-)
# becomes
# First, clean up some special forms. We are going to sacrifice
# the additional information anyway
# No DOS drive specified, just quote the pathname
# configuration variables that may contain universal build flags,
# like "-arch" or "-isysroot", that may need customization for
# the user environment
# configuration variables that may contain compiler calls
# prefix added to original configuration variable names
# the file exists, we have a shot at spawn working
# Similar to os.popen(commandstring, "r").read(),
# but without actually using os.popen because that
# function is not usable during python bootstrap.
# tempfile is also not available then.
# Reading this plist is a documented way to get the system
# version (see the documentation for the Gestalt Manager)
# We avoid using platform.mac_ver to avoid possible bootstrap issues during
# the build of Python itself (distutils is used to build standard library
# extensions).
# We're on a plain darwin box, fall back to the default
# behaviour.
# else: fall back to the default behaviour
# This is needed for higher-level cross-platform tests of get_platform.
# As an approximation, we assume that if we are running on 10.4 or above,
# then we are running with an Xcode environment that supports universal
# builds, in particular -isysroot and -arch arguments to the compiler. This
# is in support of allowing 10.4 universal builds to run on 10.3.x systems.
# There are two sets of systems supporting macOS/arm64 builds:
# 1. macOS 11 and later, unconditionally
# 2. macOS 10.15 with Xcode 12.2 or later
# For now the second category is ignored.
# Issue #13590:
# skip checks if the compiler was overridden with a CC env variable
# The CC config var might contain additional arguments.
# Ignore them while searching.
# Compiler is not found on the shell search PATH.
# Now search for clang, first on PATH (if the Command Line
# Tools have been installed in / or if the user has provided
# another location via CC).  If not found, try using xcrun
# to find an uninstalled clang (within a selected Xcode).
# NOTE: Cannot use subprocess here because of bootstrap
# issues when building Python itself (and os.popen is
# implemented on top of subprocess and is therefore not
# usable as well)
# Compiler is GCC, check if it is LLVM-GCC
# Found LLVM-GCC, fall back to clang
# Found a replacement compiler.
# Modify config vars using new compiler, if not already explicitly
# overridden by an env variable, preserving additional arguments.
# Do not alter a config var explicitly overridden by env var
# Different Xcode releases support different sets for '-arch'
# flags. In particular, Xcode 4.x no longer supports the
# PPC architectures.
# This code automatically removes '-arch ppc' and '-arch ppc64'
# when these are not supported. That makes it possible to
# build extensions on OSX 10.7 and later with the prebuilt
# 32-bit installer on the python.org website.
# issues when building Python itself
# The compile failed for some reason.  Because of differences
# across Xcode and compiler versions, there is no reliable way
# to be sure why it failed.  Assume here it was due to lack of
# PPC support and remove the related '-arch' flags from each
# config variables not explicitly overridden by an environment
# variable.  If the error was for some other reason, we hope the
# failure will show up again when trying to compile an extension
# module.
# NOTE: This name was introduced by Apple in OSX 10.5 and
# is used by several scripting languages distributed with
# that OS release.
# If we're on OSX 10.5 or later and the user tries to
# compile an extension using an SDK that is not present
# on the current machine it is better to not use an SDK
# than to fail.  This is particularly important with
# the standalone Command Line Tools alternative to a
# full-blown Xcode install since the CLT packages do not
# provide SDKs.  If the SDK is not present, it is assumed
# that the header files and dev libs have been installed
# to /usr and /System/Library by either a standalone CLT
# package or the CLT component within Xcode.
# OSX releases before 10.4.0 don't support -arch and -isysroot at
# all.
# Strip this argument and the next one:
# Look for "-arch arm64" and drop that
# User specified different -arch flags in the environ,
# see also distutils.sysconfig
# It's '-isysroot/some/path' in one arg
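The flag-stripping described above ("strip this argument and the next one") can be sketched as a small helper (name hypothetical) that drops each `-arch` and its value from an argument list:

```python
def strip_arch_flags(args):
    # remove every "-arch <value>" pair from a compiler flag list
    out = []
    it = iter(args)
    for a in it:
        if a == "-arch":
            next(it, None)   # strip this argument and the next one
        else:
            out.append(a)
    return out
```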
# Check if the SDK that is used during compilation actually exists,
# the universal build requires the usage of a universal SDK and not all
# users have that installed by default.
# On Mac OS X before 10.4, check if -arch and -isysroot
# are in CFLAGS or LDFLAGS and remove them if they are.
# This is needed when building extensions on a 10.3 system
# using a universal build of python.
# Allow user to override all archs with ARCHFLAGS env var
# Remove references to sdks that are not found
# Find a compiler to use for extension module builds
# Remove ppc arch flags if not supported here
# called from get_platform() in sysconfig and distutils.util
# For our purposes, we'll assume that the system version from
# distutils' perspective is what MACOSX_DEPLOYMENT_TARGET is set
# to. This makes the compatibility story a bit more sane because the
# machine is going to compile and link as if it were
# MACOSX_DEPLOYMENT_TARGET.
# Ensure that the version includes at least a major
# and minor version, even if MACOSX_DEPLOYMENT_TARGET
# is set to a single-label version like "14".
# Use the original CFLAGS value, if available, so that we
# return the same machine type for the platform string.
# Otherwise, distutils may consider this a cross-compiling
# case and disallow installs.
# assume no universal support
# The universal build will build fat binaries, but not on
# systems before 10.4
# On OSX the machine type returned by uname is always the
# 32-bit variant, even if the executable architecture is
# the 64-bit variant
# Pick a sane name for the PPC architecture.
# See 'i386' case
# based on Andrew Kuchling's minigzip.py distributed with the zlib module
# The L format writes the bit pattern correctly whether signed
# or unsigned.
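In Python 3 `struct` range-checks "L", so a gzip-style helper masks to the low 32 bits first; the same bit pattern is then stored regardless of signedness. A sketch (helper name hypothetical):

```python
import struct

def write32u(value):
    # keep only the low 32 bits, so the stored bit pattern is the same
    # whether the caller handed us a signed or an unsigned value
    return struct.pack("<L", value & 0xFFFFFFFF)
```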
# Assume data was read since the last prepend() call
# Allows fast-forwarding even in unseekable streams
# Overridden with internal file object to be closed, if only a filename
# is passed in
# Avoid a ResourceWarning if the write fails,
# eg read-only file or KeyboardInterrupt
# Current file offset for seek(), tell(), etc
# magic header
# compression method
# RFC 1952 requires the FNAME field to be Latin-1. Do not
# include filenames that cannot be represented that way.
# Called by our self._buffer underlying WriteBufferStream.
# accept any data that supports the buffer protocol
# self.size may exceed 2 GiB, or even 4 GiB
# Ensure the compressor's buffer is flushed
# Flush buffer to ensure validity of self.offset
# Read & discard the extra field, if present
# Read and discard a null-terminated string containing the filename
# Read and discard a null-terminated string containing a comment
# Read & discard the 16-bit header CRC
# Set flag indicating start of a new member
# Decompressed size of unconcatenated stream
# size=0 is special because decompress(max_length=0) is not supported
# For certain input data, a single call to decompress() may not
# return any data. In this case, retry until we get some data or
# reach EOF.
# Ending case: we've come to the end of a member in the file,
# so finish up this member, and read a new gzip header.
# Check the CRC and file size, and set the flag so we read
# a new member
# If the _new_member flag is set, we have to
# jump to the next member, if there is one.
# Read a chunk of data from the file
# Prepend the already read bytes to the fileobj so they can
# be seen by _read_eof() and _read_gzip_header()
# We've read to the end of the file
# We check that the computed CRC and size of the
# uncompressed data matches the stored values.  Note that the size
# stored is the true file size mod 2**32.
# Gzip files can be padded with zeroes and still be valid archives.
# Consume all zero bytes and set the file position to the first
# non-zero byte. See http://www.gzip.org/#faq8
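One way to do that scan (a sketch, not the library's code): strip leading zeros from each chunk and seek back to the first non-zero byte.

```python
import io

def skip_zero_padding(f):
    # consume zero bytes and leave the file positioned at the first
    # non-zero byte (or at EOF if only padding remains)
    while True:
        chunk = f.read(1024)
        stripped = chunk.lstrip(b"\x00")
        if stripped:
            f.seek(-len(stripped), 1)   # back up to the first non-zero byte
            return
        if not chunk:
            return                      # EOF: nothing but padding
```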
# Wbits=31 automatically includes a gzip header and trailer.
# Reuse gzip header created by zlib, replace mtime and OS byte for
# consistency.
# Use a zlib raw deflate compressor
# Read all the data except the header
# https://xkcd.com/426/
# A number of functions have this form, where `w` is a desired number of
# digits in base `base`:
# They all had some on-the-fly scheme to cache `base**lo` results for reuse.
# Power is costly.
# This routine aims to compute all and only the needed powers in advance, as
# efficiently as reasonably possible. This isn't trivial, and all the
# on-the-fly methods did needless work in many cases. The driving code above
# changes to:
# and `mycache[lo]` replaces `base**lo` in the inner function.
# While this does give minor speedups (a few percent at best), the primary
# intent is to simplify the functions using this, by eliminating the need for
# them to craft their own ad-hoc caching schemes.
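A sketch of the idea (names assumed; the real helper in `_pylong` is considerably more refined): walk the same `w -> (w - w//2, w//2)` splits the recursion will take, and record which `base**lo` values it needs.

```python
def compute_powers(w, base, more_than):
    # return {lo: base**lo} for every lo that splitting w down to widths
    # of at most `more_than` will need in the recursion sketched above
    seen = set()
    need = set()
    stack = [w]
    while stack:
        w = stack.pop()
        if w in seen or w <= more_than:
            continue
        seen.add(w)
        lo = w // 2
        need.add(lo)
        stack.extend((w - lo, lo))
    # a real version would build larger powers from smaller ones instead
    # of exponentiating each from scratch
    return {lo: base ** lo for lo in sorted(need)}
```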
# any element is fine to use next
# only _need_ lo here; some other path may, or may not, need hi
# cheap
# Multiplying a bigint by itself (same object!) is about twice
# as fast in CPython.
# sanity check
# Function due to Tim Peters.  See GH issue #90716 for details.
# https://github.com/python/cpython/issues/90716
# The implementation in longobject.c of base conversion algorithms
# between power-of-2 and non-power-of-2 bases are quadratic time.
# This function implements a divide-and-conquer algorithm that is
# faster for large numbers.  Builds an equal decimal.Decimal in a
# "clever" recursive way.  If we want a string representation, we
# apply str to _that_.
# Don't bother caching the "lo" mask in this; the time to compute it is
# tiny compared to the multiply.
# It is only usable with the C decimal implementation.
# _pydecimal.py calls str() on very large integers, which in its
# turn calls int_to_decimal_string(), causing very deep recursion.
# Fallback algorithm for the case when the C decimal module isn't
# available.  This algorithm is asymptotically worse than the algorithm
# using the decimal module, but better than the quadratic time
# implementation in longobject.c.
# The estimation of the number of decimal digits.
# There is no harm in small error.  If we guess too large, there may
# be leading 0's that need to be stripped.  If we guess too small, we
# may need to call str() recursively for the remaining highest digits,
# which can still potentially be a large integer. This is manifested
# only if the number has way more than 10**15 digits, which exceeds
# the 52-bit physical address limit in both Intel64 and AMD64.
# log10(2)
# 5**k << k == 5**k * 2**k == 10**k
# If our guess of w is too large, there may be leading 0's that
# need to be stripped.
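Both tricks above are easy to check numerically (a sketch; the helper name is hypothetical, the constant is log10(2)):

```python
import math

LOG10_2 = 0.3010299956639812        # log10(2)

def guess_decimal_digits(n):
    # rough digit-count estimate from the bit length; as noted above,
    # a small error in either direction is harmless
    return math.floor(n.bit_length() * LOG10_2) + 1

# 5**k << k == 5**k * 2**k == 10**k
k = 13
identity_holds = (5**k << k) == 10**k
```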
# Function due to Bjorn Martinsson.  See GH issue #90716 for details.
# This function implements a divide-and-conquer algorithm making use
# of Python's built in big int multiplication. Since Python uses the
# Karatsuba algorithm for multiplication, the time complexity
# of this function is O(len(s)**1.58).
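The shape of such a divide-and-conquer conversion can be sketched as follows (a simplified illustration, not the tuned implementation): split the digit string in half, convert each half recursively, and combine with one big multiplication.

```python
def str_to_int(s):
    # divide and conquer: the expensive work is one large multiply per
    # node of the recursion tree, which big-int multiplication handles
    # subquadratically
    def inner(a, b):               # value of the digits s[a:b]
        if b - a <= 16:            # small enough for the builtin
            return int(s[a:b] or "0")
        mid = (a + b) // 2
        return inner(a, mid) * 10 ** (b - mid) + inner(mid, b)
    return inner(0, len(s))
```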
# PyLong_FromString() has already removed leading +/-, checked for invalid
# use of underscore characters, checked that string consists of only digits
# and underscores, and stripped leading whitespace.  The input can still
# contain underscores and have trailing whitespace.
# FIXME: this doesn't support the full syntax that int() supports.
# Fast integer division, based on code from Mark Dickinson, fast_div.py
# GH-47701. Additional refinements and optimizations by Bjorn Martinsson.  The
# algorithm is due to Burnikel and Ziegler, in their paper "Fast Recursive
# Division".
# Use grade-school algorithm in base 2**n, n = nbits(b)
# The constructor_ob function is a vestige of the safe-for-unpickling
# protocol.  There is no reason for the caller to pass it anymore.
# Example: provide pickling support for complex numbers.
# Support for pickling new-style objects
# Python code for object.__reduce_ex__ for protocols 0 and 1
# not really reachable
# Helper for __reduce_ex__ protocol 2
# Get the value from a cache in the class if possible
# Not cached -- calculate the value
# This class has no slots
# Slots found -- gather slot names from all base classes
# if class has a single slot, it can be given as a string
# special descriptors
# mangled names
# Cache the outcome in the class if at all possible
# But don't die if we can't
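The quirks above can be sketched like this (simplified from what `copyreg._slotnames` does; caching omitted): walk the MRO, accept a bare string slot, skip the special descriptors, and mangle private names.

```python
def slotnames(cls):
    # gather slot names from all base classes
    names = []
    for c in cls.__mro__:
        slots = c.__dict__.get("__slots__")
        if slots is None:
            continue
        if isinstance(slots, str):       # a single slot given as a string
            slots = (slots,)
        for name in slots:
            if name in ("__dict__", "__weakref__"):   # special descriptors
                continue
            if name.startswith("__") and not name.endswith("__"):
                # private name: apply the class's mangling
                names.append("_%s%s" % (c.__name__.lstrip("_"), name))
            else:
                names.append(name)
    return names
```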
# A registry of extension codes.  This is an ad-hoc compression
# mechanism.  Whenever a global reference to <module>, <name> is about
# to be pickled, the (<module>, <name>) tuple is looked up here to see
# if it is a registered extension code for it.  Extension codes are
# universal, so that the meaning of a pickle does not depend on
# context.  (There are also some codes reserved for local use that
# don't have this restriction.)  Codes are positive ints; 0 is
# reserved.
# key -> code
# code -> key
# code -> object
# Don't ever rebind those names:  pickling grabs a reference to them when
# it's initialized, and won't see a rebinding.
# Redundant registrations are benign
# Standard extension code assignments
# Reserved ranges
# First  Last Count  Purpose
# Extension codes are assigned by the Python Software Foundation.
# Old imp constants:
# Modulefinder does a good job at simulating Python's import machinery,
# but it cannot handle __path__ modifications packages make at runtime.
# Therefore there
# is a mechanism whereby you can register extra paths in this map for a
# package, and it will be honored.
# Note this is a mapping from package name to a list of paths.
# A public interface
# This ReplacePackage mechanism allows modulefinder to work around
# situations in which a package injects itself under the name
# of another package into sys.modules at runtime by calling
# ReplacePackage("real_package_name", "faked_package_name")
# before running ModuleFinder.
# It's necessary to clear the caches for our Finder first, in case any
# modules are being added/deleted/modified at runtime. In particular,
# test_modulefinder.py changes file tree contents in a cache-breaking way:
# Some special cases:
# Should never happen.
# The set of global names that are assigned to in the module.
# This includes those names imported through starimports of
# Python modules.
# The set of starimports this module did that could not be
# resolved, i.e. a starimport from a non-Python module.
# Used in debugging only
# relative import
# 'suffixes' used to be a list hardcoded to [".py", ".pyc"].
# But we must also collect Python extension modules - although
# we cannot separate normal dlls from Python extensions.
# wrapper for self.import_hook() that won't raise ImportError
# Scan the code, and yield 'interesting' opcode combinations
# absolute import
# We've encountered an "import *". If it is a Python module,
# the code has already been parsed and we can suck out the
# global names.
# At this point we don't know whether 'name' is a
# submodule of 'm' or a global module. Let's just try
# the full name first.
# We don't expect anything else from the generator.
# As per comment at top of file, simulate runtime __path__ additions.
# assert path is not None
# Print modules found
# Print missing modules
# Print modules that may be missing, but then again, maybe not...
# The package tried to import this module itself and
# failed. It's definitely missing.
# It's a global in the package: definitely not missing.
# It could be missing, but the package did an "import *"
# from a non-Python module, so we simply can't be sure.
# It's not a global in the package, the package didn't
# do funny star imports, it's very likely to be missing.
# The symbol could be inserted into the package from the
# outside, but since that's not good style we simply list
# it missing.
# Parse command line
# Process options
# Provide default arguments
# Set the path based on sys.path and the script directory
# Create the module finder and turn its crank
# for -i debugging
# Catch errors that may happen when close is called from __del__
# because CPython is in interpreter shutdown.
# __init__ didn't succeed, so don't bother closing
# see http://bugs.python.org/issue1339007 for details
# Call through to the clear method on dbm-backed shelves.
# see https://github.com/python/cpython/issues/107089
# Does a path exist?
# This is false for dangling symbolic links on systems that support them.
# Being true for dangling symbolic links is also useful.
# This follows symbolic links, so both islink() and isdir() can be true
# for the same path on systems that support symlinks
# Is a path a directory?
# This follows symbolic links, so both islink() and isdir()
# can be true for the same path on systems that support symlinks
# Is a path a symbolic link?
# This will always return false on systems where os.lstat doesn't exist.
# Is a path a junction?
# Return the longest prefix of all list elements.
# Some people pass in a list of pathname parts to operate in an OS-agnostic
# fashion; don't try to translate in that case as that's an abuse of the
# API and they are already doing what they need to be OS-agnostic and so
# they most likely won't be using an os.PathLike object in the sublists.
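The character-wise prefix can be sketched as below (the min/max trick: the lexicographically smallest and largest elements bound all the others, so comparing just those two suffices):

```python
def commonprefix(m):
    # longest common character prefix; NOT a path-aware operation
    if not m:
        return ""
    s1, s2 = min(m), max(m)
    for i, c in enumerate(s1):
        if c != s2[i]:
            return s1[:i]
    return s1
```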
# Are two stat buffers (obtained from stat, fstat or lstat)
# describing the same file?
# Are two filenames really pointing to the same file?
# Are two open files really referencing the same file?
# (Not necessarily the same file descriptor!)
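Two stat results describe the same file exactly when both the inode number and the device match; a sketch of both predicates:

```python
import os

def samestat(s1, s2):
    # same file iff device and inode numbers both match
    return s1.st_ino == s2.st_ino and s1.st_dev == s2.st_dev

def samefile(f1, f2):
    # same file even if the two names (or descriptors) differ
    return samestat(os.stat(f1), os.stat(f2))
```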
# Split a path in root and extension.
# The extension is everything starting at the last dot in the last
# pathname component; the root is everything before that.
# It is always true that root + ext == p.
# Generic implementation of splitext, to be parametrized with
# the separators
# NOTE: This code must work for text and bytes strings.
# skip all leading dots
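A sketch of that generic implementation, parametrized with the separators (simplified: `altsep` handling omitted):

```python
def splitext(p, sep="/", extsep="."):
    # the extension starts at the last dot in the last component, but a
    # component made only of leading dots (".bashrc") has no extension
    sep_index = p.rfind(sep)
    dot_index = p.rfind(extsep)
    if dot_index > sep_index:
        # skip all leading dots of the final component
        filename_index = sep_index + 1
        while filename_index < dot_index:
            if p[filename_index] != extsep:
                return p[:dot_index], p[dot_index:]
            filename_index += 1
    return p, ""
```

Note the invariant from the comment above holds: the two parts always concatenate back to `p`.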
# A singleton with a true boolean value.
# Should be a 2-tuple.
# Else it should be an int giving the minor version for 3.x.
# end_lineno and end_col_offset are optional attributes, and they
# should be copied whether the value is None or not.
# TypeIgnore is a special case where lineno is not an attribute
# but rather a field of the node itself.
# If the ast module is loaded more than once, only add deprecated methods once
# The following code is for backward compatibility.
# It will be removed in future.
# arbitrary keyword arguments are accepted
# Keep another reference to Ellipsis in the global namespace
# so it can be referenced in Ellipsis.__new__
# (The original "Ellipsis" name is removed from the global namespace later on)
# should be before int
# Large float and imaginary literals get turned into infinities in the AST.
# We unparse those infinities to INFSTR.
# <target> := <expr1>
# <expr1>, <expr2>
# 'yield', 'yield from'
# 'if'-'else', 'lambda'
# 'or'
# 'and'
# 'not'
# '<', '>', '==', '>=', '<=', '!=',
# 'in', 'not in', 'is', 'is not'
# '|'
# '^'
# '&'
# '<<', '>>'
# '+', '-'
# '*', '@', '/', '%', '//'
# unary '+', '-', '~'
# '**'
# 'await'
# Note: as visit() resets the output text, do NOT rely on
# NodeVisitor.generic_visit to handle any nodes (as it calls back in to
# the subclass visit() method, which resets self._source to an empty list)
# collapse nested ifs into equivalent elifs.
# final else
# \n and \t are non-printable, but we only escape them if
# escape_special_whitespace is True
# Always escape backslashes and other non-printable characters
# If there aren't any possible_quotes, fallback to using repr
# on the original string. Try to use a quote from quote_types,
# e.g., so that we use triple quotes for docstrings.
# Sort so that we prefer '''"''' over """\""""
# If we're using triple quotes and we'd need to escape a final
# quote, escape it
# If we weren't able to find a quote type that works for all parts
# of the JoinedStr, fallback to using repr and triple single quotes.
# force repr to use single quotes
# for both the f-string itself, and format_spec
# Separate pair of opening brackets as "{ {"
# Substitute overflowing decimal literal for AST infinities,
# and inf - inf for NaNs.
# `{}` would be interpreted as a dictionary literal, and
# `set` might be shadowed. Thus:
# for dictionary unpacking operator in dicts {**{'y': 2}}
# see PEP 448 for details
# factor prefixes (+, -, ~) shouldn't be separated
# from the value they belong, (e.g: +1 instead of + 1)
# Special case: 3.__abs__() is a syntax error, so if node.value
# is an integer literal then we need to either parenthesize
# it or add an extra space to get 3 .__abs__().
# parentheses can be omitted if the tuple isn't empty
# normal arguments
# varargs, or bare '*' if no varargs but keyword-only arguments present
# keyword-only arguments
# kwargs
# Translated by Guido van Rossum from C source provided by
# Adrian Baddeley.  Adapted by Raymond Hettinger for use with
# the Mersenne Twister  and os.urandom() core generators.
# Number of bits in a float
# used by getstate/setstate
# hashlib is pretty heavy to load, try lean internal
# module first
# fallback to official implementation
# In version 2, the state was saved as signed ints, which causes
## -------------------------------------------------------
## ---- Methods below this point do not need to be overridden or extended
## ---- when subclassing for the purpose of using a different core generator.
## -------------------- pickle support  -------------------
# Issue 17489: Since __reduce__ was defined to fix #759889 this is no
# longer called; we leave it here because it has been here since random was
# rewritten back in 2001 and why risk breaking something.
# for pickle
## ---- internal support method for evenly distributed integers ----
# just inherit it
# 0 <= r < 2**k
# int(limit * maxsize) % n == 0
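The rejection loop behind the `0 <= r < 2**k` invariant can be sketched roughly as follows (a simplified stand-in for `Random._randbelow()`; the name `randbelow` is illustrative):

```python
import random

def randbelow(n, getrandbits=random.getrandbits):
    """Return a random int in [0, n) without modulo bias."""
    k = n.bit_length()       # so 2**(k-1) <= n <= 2**k
    r = getrandbits(k)       # 0 <= r < 2**k
    while r >= n:            # rejected: keeping it would bias the result
        r = getrandbits(k)
    return r
```

Rejecting and redrawing, rather than taking `r % n`, is what keeps every value in `[0, n)` equally likely.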
## --------------------------------------------------------
## ---- Methods below this point generate custom distributions
## ---- based on the methods defined above.  They do not
## ---- directly touch the underlying generator and only
## ---- access randomness through the methods:  random(),
## ---- getrandbits(), or _randbelow().
## -------------------- bytes methods ---------------------
## -------------------- integer methods  -------------------
# This code is a bit messy to make it fast for the
# common case while still doing adequate error checking.
# We don't check for "step != 1" because it hasn't been
# type checked and converted to an integer yet.
# Stop argument supplied.
# Fast path.
# Non-unit step argument supplied.
## -------------------- sequence methods  -------------------
# As an accommodation for NumPy, we don't use "if not seq"
# because bool(numpy.array()) raises a ValueError.
# pick an element in x[:i+1] with which to exchange x[i]
# Sampling without replacement entails tracking either potential
# selections (the pool) in a list or previous selections in a set.
# When the number of selections is small compared to the
# population, then tracking selections is efficient, requiring
# only a small set and an occasional reselection.  For
# a larger number of selections, the pool tracking method is
# preferred since the list takes less space than the
# set and it doesn't suffer from frequent reselections.
# The number of calls to _randbelow() is kept at or near k, the
# theoretical minimum.  This is important because running time
# is dominated by _randbelow() and because it extracts the
# least entropy from the underlying random number generators.
# Memory requirements are kept to the smaller of a k-length
# set or an n-length list.
# There are other sampling algorithms that do not require
# auxiliary memory, but they were rejected because they made
# too many calls to _randbelow(), making them slower and
# causing them to eat more entropy than necessary.
# size of a small set minus size of an empty list
# table size for big sets
# An n-length list is smaller than a k-length set.
# Invariant:  non-selected at pool[0 : n-i]
# move non-selected item into vacancy
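A minimal sketch of the two bookkeeping strategies described above, assuming a simplified cutoff (`k > n // 3`) in place of the real set-size heuristics; `sample_sketch` is an illustrative name, not the stdlib implementation:

```python
import random

def sample_sketch(population, k):
    """Choose k unique elements; assumes 0 <= k <= len(population)."""
    n = len(population)
    if k > n // 3:                      # many selections: track the pool
        pool = list(population)
        result = [None] * k
        for i in range(k):
            # invariant: non-selected items live at pool[0 : n-i]
            j = random.randrange(n - i)
            result[i] = pool[j]
            pool[j] = pool[n - i - 1]   # move non-selected item into vacancy
        return result
    selected = set()                    # few selections: track what we chose
    result = []
    while len(result) < k:
        j = random.randrange(n)
        if j not in selected:           # occasional reselection is cheap here
            selected.add(j)
            result.append(population[j])
    return result
```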
# convert to float for a small speed improvement
# convert to float
## -------------------- real-valued distributions  -------------------
# Uses Kinderman and Monahan method. Reference: Kinderman,
# A.J. and Monahan, J.F., "Computer generation of random
# variables using the ratio of uniform deviates", ACM Trans
# Math Software, 3, (1977), pp257-260.
# When x and y are two variables from [0, 1), uniformly
# distributed, then
#
#    cos(2*pi*x)*sqrt(-2*log(1-y))
#    sin(2*pi*x)*sqrt(-2*log(1-y))
#
# are two *independent* variables with normal distribution
# (mu = 0, sigma = 1).
# (Lambert Meertens)
# (corrected version; bug discovered by Mike Miller, fixed by LM)
# Multithreading note: When two threads call this function
# simultaneously, it is possible that they will receive the
# same return value.  The window is very small though.  To
# avoid this, you have to use a lock around all calls.  (I
# didn't want to slow this down in the serial case by using a
# lock here.)
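The Box-Muller identity quoted above can be sketched directly; `gauss_pair` is a hypothetical helper that returns both normals at once (the stdlib caches the second one for the next call, which is what the multithreading note is about):

```python
from math import cos, sin, sqrt, log, pi
import random

def gauss_pair(mu=0.0, sigma=1.0):
    # x and y are independent uniforms on [0, 1)
    x = random.random()
    y = random.random()
    g = sqrt(-2.0 * log(1.0 - y))   # 1-y keeps log() away from zero
    return (mu + cos(2 * pi * x) * g * sigma,
            mu + sin(2 * pi * x) * g * sigma)
```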
# we use 1-random() instead of random() to preclude the
# possibility of taking the log of zero.
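A one-line sketch of the `1-random()` trick for the exponential distribution (`expovariate_sketch` is an illustrative name, not the stdlib code):

```python
from math import log
import random

def expovariate_sketch(lambd):
    # random() is in [0, 1), so 1.0 - random() is in (0, 1] and log() is safe
    return -log(1.0 - random.random()) / lambd
```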
# Based upon an algorithm published in: Fisher, N.I.,
# "Statistical Analysis of Circular Data", Cambridge
# University Press, 1993.
# Thanks to Magnus Kessler for a correction to the
# implementation of step 4.
# Warning: a few older sources define the gamma distribution in terms
# of alpha > -1.0
# Uses R.C.H. Cheng, "The generation of Gamma
# variables with non-integral shape parameters",
# Applied Statistics, (1977), 26, No. 1, p71-74
# expovariate(1/beta)
# alpha is between 0 and 1 (exclusive)
# Uses ALGORITHM GS of Statistical Computing - Kennedy & Gentle
## See
## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html
## for Ivan Frohne's insightful analysis of why the original implementation:
##
## was dead wrong, and how it probably got that way.
# This version due to Janne Sinkkonen, and matches all the std
# texts (e.g., Knuth Vol 2 Ed 3 pg 134 "the beta distribution").
# Jain, pg. 495
# Jain, pg. 499; bug fix courtesy Bill Arms
## -------------------- discrete  distributions  ---------------------
# Error check inputs and handle edge cases
# Fast path for a common case
# Exploit symmetry to establish:  p <= 0.5
# BG: Geometric method by Devroye with running time of O(np).
# https://dl.acm.org/doi/pdf/10.1145/42372.42381
# BTRS: Transformed rejection with squeeze method by Wolfgang Hörmann
# https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.47.8407&rep=rep1&type=pdf
# Standard deviation of the distribution
# The early-out "squeeze" test substantially reduces
# the number of acceptance condition evaluations.
# Acceptance-rejection test.
# Note, the original paper erroneously omits the call to log(v)
# when comparing to the log of the rescaled binomial distribution.
# Mode of the distribution
# Only needs to be done once
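The BG (geometric) branch can be sketched as below; `binomial_bg` is an illustrative name, and the small-`n*p`, `p <= 0.5` precondition is assumed rather than checked:

```python
from math import floor, log2
import random

def binomial_bg(n, p):
    # Count how many geometric(p) gaps fit before exceeding n trials;
    # assumes 0 <= p <= 0.5 and small n*p (the regime described above).
    x = y = 0
    c = log2(1.0 - p)
    if not c:               # p == 0.0: no successes possible
        return x
    while True:
        y += floor(log2(random.random()) / c) + 1
        if y > n:
            return x
        x += 1
```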
## ------------------------------------------------------------------
## --------------- Operating System Random Source  ------------------
# bits / 8 and rounded up
# trim excess bits
# os.urandom(n) fails with ValueError for n < 0
# and returns an empty bytes string for n == 0.
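A sketch of how an OS-backed `getrandbits()` applies the two notes above (round bits up to whole bytes, then shift off the excess):

```python
import os

def getrandbits_sketch(k):
    numbytes = (k + 7) // 8                        # bits / 8 and rounded up
    x = int.from_bytes(os.urandom(numbytes), 'big')
    return x >> (numbytes * 8 - k)                 # trim excess bits
```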
# ----------------------------------------------------------------------
# Create one instance, seeded from current time, and export its methods
# as module-level functions.  The functions share state across all uses
# (both in the user's code and in the Python libraries), but that's fine
# for most programs and is easier for the casual user than making them
# instantiate their own Random() instance.
## ------------------------------------------------------
## ----------------- test program -----------------------
## ------------------ fork support  ---------------------
# ------------------------------------------------------
# -------------- command-line interface ----------------
# Explicit arguments
# No explicit argument, select based on input
# Is it an integer?
# Is it a float?
# Split in case of space-separated string: "a b c"
#! /usr/bin/env python3
# Don't change the indentation of the template; the reindent() calls
# in Timer.__init__() depend on setup being indented 4 spaces and stmt
# being indented 8 spaces.
# Check that the code can be compiled outside a function
# Save for traceback display
# else the source is already stored somewhere else
# auto-determine
# Include the current directory, so that local imports work (sys.path
# contains the directory of this script, rather than the current
# directory)
# determine number so that 0.2 <= total time < 2.0
# Some strings for ctype-style character classification
# Functions which aren't available as string methods.
# Capitalize the words in a string, e.g. " aBc  dEf " -> "Abc Def".
####################################################################
# r'[a-z]' matches to non-ASCII letters when used with IGNORECASE, but
# without the ASCII flag.  We can't add re.ASCII to flags because of
# backward compatibility.  So we use the ?a local flag and [a-z] pattern.
# See https://bugs.python.org/issue31672
# Search for $$, $identifier, ${identifier}, and any bare $'s
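The pattern described above can be sketched as a standalone regex (a simplified version of `string.Template.pattern`, without the configurable delimiter and idpattern):

```python
import re

# Matches $$, $identifier, ${identifier}, and any bare $.
# The (?a:...) local flag keeps [a-z] ASCII-only despite IGNORECASE.
pattern = re.compile(r"""
    \$(?:
      (?P<escaped>\$)                      |  # $$: an escaped dollar sign
      (?P<named>(?a:[_a-z][_a-z0-9]*))     |  # $identifier
      {(?P<braced>(?a:[_a-z][_a-z0-9]*))}  |  # ${identifier}
      (?P<invalid>)                           # any bare $
    )
""", re.IGNORECASE | re.VERBOSE)
```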
# Helper function for .sub()
# Check the most common path first.
# If all the groups are None, there must be
# another group we're not expecting
# add a named group only the first time it appears
# Initialize Template.pattern.  __init_subclass__() is automatically called
# only for subclasses, not for the Template class itself.
########################################################################
# the Formatter class
# see PEP 3101 for details and purpose of this class
# The hard parts are reused from the C implementation.  They're exposed as "_"
# prefixed methods of str.
# The overall parser is implemented in _string.formatter_parser.
# The field name parser is implemented in _string.formatter_field_name_split
# output the literal text
# if there's a field, output it
# this is some markup, find the object and do
# handle arg indexing when empty field_names are given.
# disable auto arg incrementing, if it gets
# used later on, then an exception will be raised
# given the field_name, find the object it references
# do any conversion on the resulting object
# expand the format spec, if needed
# format the object and append to the result
# returns an iterable that contains tuples of the form:
# (literal_text, field_name, format_spec, conversion)
# literal_text can be zero length
# field_name can be None, in which case there's no
#  object to format and output
# if field_name is not None, it is looked up, formatted
#  with format_spec and conversion and then used
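For reference, the tuples yielded by `Formatter.parse()` look like this in practice:

```python
from string import Formatter

# Each tuple is (literal_text, field_name, format_spec, conversion);
# a field with no spec gets '' and no conversion gets None.
parts = list(Formatter().parse("x={0!r:>8} and {name}"))
# → [('x=', '0', '>8', 'r'), (' and ', 'name', '', None)]
```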
# given a field_name, find the object it references.
# loop through the rest of the field_name, doing
# Default values for instance variables
# pick the function-like symbols that are local identifiers
# generators are of type TYPE_FUNCTION with a ".0"
# parameter as a first parameter (which makes them
# distinguishable from a function named 'genexpr')
# Get the function-def block in the annotation
# scope 'st' with the same identifier, if any.
# A generic generator of type TYPE_FUNCTION
# cannot be a direct child of 'st' (but it
# can be a descendant), e.g.:
# class A:
# like PyST_GetScope()
# Value 0 no longer used
# Value 2 no longer used
# Relies on the undocumented fact that BufferedReader.peek() always
# returns at least one byte (except at EOF)
# Leftover data is not a valid LZMA/XZ stream; ignore it.
# Error on the first iteration; bail out.
# Formatting and printing lists of traceback lines.
# Printing and Extracting Tracebacks.
# Exception formatting and output.
# -- not official API but folk probably use these two functions.
# --
# Printing and Extracting Stacks.
# Ignore the exception raised if the frame is still executing.
# treat errors (empty string) and empty lines (newline) as the same
# Returns the line as-is from the source, without modifying whitespace.
# Returns _original_lines, but dedented
# return only the first line, stripped
# Internal version of walk_tb that yields full code positions including
# end line and column information.
# Yield tb_lineno when co_positions does not have a line number to
# maintain behavior with walk_tb.
# Also hardcoded in traceback.c.
# Same as extract but operates on a frame generator that yields
# (frame, (lineno, end_lineno, colno, end_colno)) in the stack.
# Only lineno is required, the remaining fields can be None if the
# information is not available.
# Must defer line lookups until we have called checkcache.
# If immediate lookup was desired, trigger lookups now.
# While doing a fast-path check for isinstance(a_list, StackSummary) is
# appealing, idlelib.run.cleanup_traceback and other similar code may
# break this by making arbitrary frames plain tuples, so we need to
# check on a frame by frame basis.
# only output first line if column information is missing
# get first and last line
# assume all_lines_original has enough lines (since we constructed it)
# character index of the start/end of the instruction
# adjust start/end offset based on dedent
# When showing this on a terminal, some of the non-ASCII characters
# might be rendered as double-width characters, so we need to take
# that into account when calculating the length of the line.
# get exact code segment corresponding to the instruction
# attempt to parse for anchors
# only display first line, last line, and lines around anchor start/end
# computed anchor positions do not take start_offset into account,
# so account for it here
# account for display width
# remove bad line numbers
# compute caret character for each position
# before first non-ws char of the line, or before start of instruction
# within anchors
# Replace the previous line with a red version of it only in the parts covered
# by the carets.
# display significant lines
# 1 line in between - just output it
# > 1 line in between - abbreviate
# Without parentheses, `segment` is parsed as a statement.
# Binary ops, subscripts, and calls are expressions, so
# we can wrap them with parentheses to parse them as
# (possibly multi-line) expressions.
# e.g. if we try to highlight the addition in
# x = (
#     a +
#     b
# )
# then we would ast.parse
#     a +
#     b
# which is not a valid statement because of the newline.
# Adding brackets makes it a valid expression.
# (
#     a +
#     b
# )
# Line locations will be different than the original,
# which is taken into account later on.
# -2 since end_lineno is 1-indexed and because we added an extra
# bracket + newline to `segment` when calling ast.parse
# ast gives these locations for BinOp subexpressions
# ( left_expr ) + ( right_expr )
# First operator character is the first non-space/')' character
# binary op is 1 or 2 characters long, on the same line,
# before the right subexpression
# operator char should not be in the right subexpression
# right_col can be invalid since it is exclusive
# ast gives these locations for value and slice subexpressions
# ( value_expr ) [ slice_expr ]
# subscript^^^^^^^^^^^^^^^^^^^^
# find left bracket
# find right bracket (final character of expression)
# ast gives these locations for function call expressions
# ( func_expr ) (args, kwargs)
# call^^^^^^^^^^^^^^^^^^^^^^^^
# Fast track for ASCII-only strings
# NB: we need to accept exc_type, exc_value, exc_traceback to
# permit backwards compat with the existing API, otherwise we
# need stub thunk objects just to glue it together.
# Handle loops in __cause__ or __context__.
# Capture now to permit freeing resources: only complication is in the
# unofficial API _format_final_exc_line
# Handle SyntaxError's specially
# Convert __cause__ and __context__ to `TracebackExceptions`s, use a
# queue to avoid recursion (only the top-level call gets _seen == None)
# Nested exceptions need correct handling of multiline messages.
# Show exactly where the problem was found.
# text  = "   foo\n"
# rtext = "   foo"
# ltext =    "foo"
# Convert 1-based column offset to 0-based index into stripped text
# non-space whitespace (like tabs) must be kept for alignment
# colorize from colno to end_colno
# exception group, but depth exceeds limit
# format exception group
# The closing frame may be added by a recursive call
# Attributes are unsortable, e.g. int and str
# find most recent frame
# Check first if we are in a method and the instance
# has the wrong name as attribute
# Compute closest match
# A missing attribute is "found". Don't suggest it (see GH-88821).
# No more than 1/3 of the involved characters should need to be changed.
# Don't take matches we've already beaten.
# A Python implementation of Python/suggestions.c:levenshtein_distance.
# Both strings are the same
# Trim away common affixes
# Prefer shorter buffer
# Quick fail when a match is impossible
# Instead of producing the whole traditional len(a)-by-len(b)
# matrix, we can update just one row in place.
# Initialize the buffer row
# 1) Previous distance in this row is cost(b[:b_index], a[:index])
# 2) cost(b[:b_index], a[:index+1]) from previous row
# 3) existing result is cost(b[:b_index+1], a[index])
# cost(b[:b_index+1], a[:index+1])
# Everything in this row is too big, so bail early.
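The single-row update described above can be sketched as follows (simplified: no affix trimming, no early-exit threshold, unit move cost):

```python
def levenshtein(a, b):
    """Edit distance; updates one row in place instead of the
    full len(a)-by-len(b) matrix."""
    if len(b) < len(a):
        a, b = b, a                       # prefer the shorter buffer
    row = list(range(1, len(a) + 1))      # distances against the empty b prefix
    result = 0
    for b_index, b_char in enumerate(b):
        result = b_index + 1              # cost(b[:b_index+1], "")
        distance = b_index                # cost(b[:b_index], a[:index])
        for index, a_char in enumerate(a):
            substitute = distance + (a_char != b_char)
            distance = row[index]         # cost(b[:b_index], a[:index+1])
            result = min(result + 1, distance + 1, substitute)
            row[index] = result           # cost(b[:b_index+1], a[:index+1])
    return result
```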
# generic events, that must be mapped to implementation-specific ones
# this maps file descriptors to keys
# read-only mapping returned by get_map()
# Do an exhaustive search.
# Raise ValueError after all.
# Use a shortcut to update the data.
# This can happen if the FD was closed since it
# was registered.
# This is shared between poll() and epoll().
# epoll() has a different signature and handling of timeout parameter.
# poll() has a resolution of 1 millisecond, round away from
# zero to wait *at least* timeout seconds.
# epoll_wait() has a resolution of 1 millisecond, round away
# from zero to wait *at least* timeout seconds.
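The rounding rule can be sketched as a small conversion helper (`to_millis` is an illustrative name):

```python
import math

def to_millis(timeout):
    # poll()/epoll_wait() take whole milliseconds; round away from zero
    # so we wait *at least* the requested time.
    if timeout is None:
        return None          # block forever
    if timeout <= 0:
        return 0             # non-blocking
    return math.ceil(timeout * 1e3)
```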
# epoll_wait() expects `maxevents` to be greater than zero;
# we want to make sure that `select()` can be called when no
# FD is registered.
# See comment above.
# If max_ev is 0, kqueue will ignore the timeout. For consistent
# behavior with the other selector classes, we prevent that here
# (using max). See https://bugs.python.org/issue29255
# Implementation based upon https://github.com/sethmlarson/selectors2/blob/master/selectors2.py
# select module does not implement method
# check if the OS and Kernel actually support the method. Call may fail with
# OSError: [Errno 38] Function not implemented
# check that poll actually works
# close epoll, kqueue, and devpoll fd
# Choose the best implementation, roughly:
# select() also can't accept a FD > FD_SETSIZE (usually around 1024)
# Check if this is a system where ProcessPoolExecutor can function.
# If workers == 0, let ProcessPoolExecutor choose
# Use set() to remove duplicates.
# Use sorted() to create pyc files in a deterministic order.
# escape non-printable characters in msg
# if flist is provided then load it
# This helper is needed in order for the PEP 302 emulation to
# correctly handle compiled files
# Skip rest of the header
# don't traverse path items we've seen before
# Implement a file walker for the normal importlib path hook
# ignore unreadable directories like import does
# handle packages before same-named modules
# not a package
# Get the containing package's __path__
# This hack fixes an impedance mismatch between pkgutil and
# importlib, where the latter raises other errors for cases where
# pkgutil previously raised ImportError
# This could happen e.g. when this is called from inside a
# frozen package.  Return the path unchanged in that case.
# Start with a copy of the existing path
# We can't do anything: find_loader() returns None when
# passed a dotted name.
# Is this finder PEP 420 compliant?
# XXX This may still add duplicate entries to path on
# case-insensitive filesystems
# XXX Is this the right thing for subpackages like zope.app?
# It looks for a file named "zope.app.pkg"
# Don't check for existence!
# XXX needs test
# Modify the resource name to be compatible with the loader.get_data
# signature - an os.path format "filename" starting with the dirname of
# the package's __file__
# Lazy import to speedup Python startup time
# there is a colon - a one-step import is all that's needed
# no colon - have to iterate to find the package boundary
# first part *must* be a module/package.
# if we reach this point, mod is the module, already imported, and
# parts is the list of parts in the object hierarchy to be traversed, or
# an empty list if just the module is wanted.
# The ID
# localize variable access to minimize overhead
# and to improve thread safety
# Let other threads run
# Use heapq to sort the queue rather than using 'sorted(self._queue)'.
# With heapq, two events scheduled at the same time will show in
# the actual order they would be retrieved.
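A minimal sketch of the heapq-based ordering described above (priorities omitted; an `itertools.count` sequence number breaks ties in scheduling order):

```python
import heapq
from itertools import count

queue = []
sequence = count()     # equal times pop in the order they were scheduled

def enter(time, action):
    heapq.heappush(queue, (time, next(sequence), action))

def run_next():
    time, _, action = heapq.heappop(queue)
    return action
```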
# Exception raised for bad input (with string parameter for details)
# Exceptions raised for bad input
# This is a trick for backward compatibility. Since 3.13, we raise IllegalMonthError instead of
# IndexError for bad month numbers (outside 1-12). But we can't remove IndexError for backward compatibility.
# Constants for months
# Constants for days
# Number of days per month (except for February in leap years)
# This module used to have hard-coded lists of day and month names, as
# English strings.  The classes following emulate a read-only version of
# that, but supply localized names.  Note that the values are computed
# fresh on each call, in case the user changes locale between calls.
# January 1, 2001, was a Monday.
# Full and abbreviated names of weekdays
# Full and abbreviated names of months (1-based arrays!!!)
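The Monday anchor is what lets the localized names be computed fresh: formatting seven consecutive days starting at 2001-01-01 yields Monday through Sunday in order (a sketch of the idea; `strftime("%A")` output depends on the current locale):

```python
import datetime

def day_names(fmt="%A"):
    # January 1, 2001 was a Monday, so days 1..7 of that month
    # run Monday through Sunday.
    return [datetime.date(2001, 1, i + 1).strftime(fmt) for i in range(7)]
```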
# 0 = Monday, 6 = Sunday
# right-align single-digit days
# months in this row
# max number of weeks for this row
# CSS classes for the day <td>s
# CSS classes for the day <th>s
# CSS class for the days before and after current month
# CSS class for the month's head
# CSS class for the month
# CSS class for the year's table head
# CSS class for the whole year table
# day outside month
# The LC_TIME locale does not seem to be configured:
# get the user preferred locale.
# Support for old module level interface
# Spacing of month columns for multi-column year calendar
# Amount printed by prweek()
# Number of spaces between columns
# high level safe interfaces
# low level safe interfaces
# deprecated unsafe interface
# constants
# Imports.
# This variable _was_ unused for legacy reasons, see issue 10354.
# But as of 3.5 we actually use it at runtime so changing it would
# have a possibly desirable side effect...  But we do not want to support
# that as an API.  It is undocumented on purpose.  Do not depend on this.
# Internal routines.
# tempfile APIs return a str by default.
# we could check for bytes but it'll fail later on anyway
# First, try the environment.
# Failing that, try OS-specific locations.
# As a last resort, the current directory.
# Try only a few names per directory.
# This exception is thrown when a directory with the chosen name
# already exists on Windows.
# no point trying more names in this directory
# try again
# Pass follow_symlinks=False, unless not supported on this platform.
# User visible interfaces.
# Windows provides delete-on-close as a primitive, in which
# case the file was deleted by self.file.close().
# Attribute lookups are delegated to the underlying file
# and cached for non-numeric results
# (i.e. methods are cached, closed and friends are not)
# Avoid closing the file as long as the wrapper is alive,
# see issue #18879.
# The underlying __enter__ method returns the wrong object
# (self.file) so override it to return the wrapper
# Need to trap __exit__ as well to ensure the file gets
# deleted when used in a with statement
# iter() doesn't use __getattr__ to find the __iter__ method
# Don't return iter(self.file), but yield from it to avoid closing
# file as long as it's being used as iterator (see issue #23700).  We
# can't use 'yield from' here because iter(file) returns the file
# object itself, which has a close method, and thus the file would get
# closed when the generator is finalized, due to PEP380 semantics.
# Setting O_TEMPORARY in the flags causes the OS to delete
# the file when it is closed.  This is only supported by Windows.
# On non-POSIX and Cygwin systems, assume that we cannot unlink a file
# while it is open.
# Is the O_TMPFILE flag available and does it work?
# The flag is set to False if os.open(dir, os.O_TMPFILE) raises an
# IsADirectoryError exception
# Linux kernel older than 3.11 ignores the O_TMPFILE flag:
# O_TMPFILE is read as O_DIRECTORY. Trying to open a directory
# with O_RDWR|O_DIRECTORY fails with IsADirectoryError, a
# directory cannot be open to write. Set flag to False to not
# try again.
# The filesystem of the directory does not support O_TMPFILE.
# For example, OSError(95, 'Operation not supported').
# On Linux kernel older than 3.11, trying to open a regular
# file (or a symbolic link to a regular file) with O_TMPFILE
# fails with NotADirectoryError, because O_TMPFILE is read as
# O_DIRECTORY.
# Fallback to _mkstemp_inner().
# The method caching trick from NamedTemporaryFile
# won't work here, because _file may change from a
# BytesIO/StringIO instance to a real file. So we list
# all the methods directly.
# Context management protocol
# file protocol
# The PermissionError handler was originally added for
# FreeBSD in directories, but it seems that it is raised
# on Windows too.
# bpo-43153: Calling _rmtree again may
# raise NotADirectoryError and mask the PermissionError.
# So we must re-raise the current PermissionError if
# path is not a directory.
# Do not import dataclasses; overhead is unacceptable (gh-117703)
# exception classes
# Used in parser getters to indicate that the default behaviour, when a
# specific option is not found, is to raise an exception. Created to enable
# `None` as a valid fallback value.
# escaped percent signs
# valid syntax
# p is no longer used
# escaped dollar signs
# match nothing if no prefixes
# Regular expressions for parsing section headers and options
# Interpolation algorithm to be used if the user does not specify another
# Compiled regular expression for matching sections
# Compiled regular expression for matching options with typical separators
# Compiled regular expression for matching options with optional values
# delimited using typical separators
# Compiled regular expression for matching leading whitespace in a line
# Possible boolean values in the configuration.
# self._sections will never have [DEFAULT] in it
# getint, getfloat and getboolean provided directly for backwards compat
# Update with the entry specific variables
# To conform with the mapping protocol, overwrites existing values in
# the section.
# XXX this is not atomic if read_dict fails at any point. Then again,
# no update method in configparser is atomic in this implementation.
# the default section
# XXX does it break when underlying container state changed?
# add empty line to the value, but only if there was no
# comment on the line
# newlines added at join
# empty line marks end of value
# continuation line?
# a section header or option header?
# is it a section header?
# So sections can't start with a continuation line
# an option line?
# a non-fatal parsing error occurred. set up the
# exception but keep going. the exception will be
# raised at the end of the file and will contain a
# list of all bogus lines
# This check is fine because the OPTCRE cannot
# match if it would set optval to None
# valueless option handling
# The parser object of the proxy is read-only.
# The name of the section on a proxy is read-only.
# If `_impl` is provided, it should be a getter method on the parser
# object that provides the desired type conversion.
# See class docstring.
# don't raise since the entry was present in _data, silently
# clean up
# Note regarding PEP 8 compliant names
#  This threading model was originally inspired by Java, and inherited
# the convention of camelCase function and method names from that
# language. Those original names are not in any imminent danger of
# being deprecated (even for Py3k), so this module provides them as an
# alias for the PEP 8 compliant names
# Note that using the new PEP 8 compliant names facilitates substitution
# with the multiprocessing module, which doesn't provide the old
# Java inspired names.
# Rename some stuff so "from threading import *" is safe
# get thread-local implementation, either from the thread
# module, or from the python fallback
# Support for profile and trace hooks
# Synchronization classes
# Internal methods used by condition variables
# Internal method used for reentrancy checks
# Export the lock's acquire() and release() methods
# If the lock defines _release_save() and/or _acquire_restore(),
# these override the default implementations (which just call
# release() and acquire() on the lock).  Ditto for _is_owned().
# No state to save
# Ignore saved state
# Return True if lock is owned by current_thread.
# This method is called only if _lock doesn't have _is_owned().
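The delegation described above can be sketched as a stripped-down condition class (`MiniCondition` is illustrative, not the stdlib implementation):

```python
import threading

class MiniCondition:
    """Stripped-down Condition showing the lock-delegation trick."""
    def __init__(self, lock=None):
        self._lock = lock if lock is not None else threading.RLock()
        # Export the lock's acquire() and release() methods
        self.acquire = self._lock.acquire
        self.release = self._lock.release
        # If the lock defines these hooks, they override the default
        # implementations below (instance attributes shadow the methods).
        for name in ('_release_save', '_acquire_restore', '_is_owned'):
            try:
                setattr(self, name, getattr(self._lock, name))
            except AttributeError:
                pass

    def _release_save(self):
        self._lock.release()        # No state to save

    def _acquire_restore(self, x):
        self._lock.acquire()        # Ignore saved state

    def _is_owned(self):
        # Only used when _lock has no _is_owned(): a plain Lock counts
        # as "owned" if a non-blocking acquire fails.
        if self._lock.acquire(blocking=False):
            self._lock.release()
            return False
        return True
```

A plain `threading.Lock` has none of the hooks, so the fallbacks run; an `RLock` provides all three, so its own implementations win.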
# restore state no matter what (e.g., KeyboardInterrupt)
# gh-92530: The previous call of notify() released the lock,
# but was interrupted before removing it from the queue.
# It can happen if a signal handler raises an exception,
# like CTRL+C which raises KeyboardInterrupt.
# After Tim Peters' semaphore class, but not quite the same (no maximum)
# After Tim Peters' event class (without is_posted())
# Private method called by Thread._after_fork()
# A barrier class.  Inspired in part by the pthread_barrier_* api and
# the CyclicBarrier class from Java.  See
# http://sourceware.org/pthreads-win32/manual/pthread_barrier_init.html and
# http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/
# for information.
# We maintain two main states, 'filling' and 'draining' enabling the barrier
# to be cyclic.  Threads are not allowed into it until it has fully drained
# since the previous cycle.  In addition, a 'resetting' state exists which is
# similar to 'draining' except that threads leave with a BrokenBarrierError,
# and a 'broken' state in which all threads get the exception.
# 0 filling, 1 draining, -1 resetting, -2 broken
# Block while the barrier drains.
# We release the barrier
# We wait until someone releases us
# Wake up any threads waiting for barrier to drain.
# Block until the barrier is ready for us, or raise an exception
# if it is broken.
# It is draining or resetting, wait until done
# see if the barrier is in a broken state
# Optionally run the 'action' and release the threads waiting
# in the barrier.
# enter draining state
# an exception during the _action handler.  Break and reraise
# Wait in the barrier until we are released.  Raise an exception
# if the barrier is reset or broken.
# timed out.  Break the barrier
# If we are the last thread to exit the barrier, signal any threads
# waiting for the barrier to drain.
# resetting or draining
# reset the barrier, waking up threads
# was broken, set it to reset state
# which clears when the last thread exits
# An internal error was detected.  The barrier is set to
# a broken state, and all parties are awakened.
# We don't need synchronization here since this is an ephemeral result
# anyway.  It returns the correct value in the steady state.
# exception raised by the Barrier class
# Helper to generate new thread names
# Active thread administration.
# bpo-44422: Use a reentrant lock to allow reentrant calls to functions like
# threading.enumerate().
# maps thread id to Thread object
# Main class for threads
# Copy of sys.stderr used by self._invoke_excepthook()
# For debugging and _after_fork()
# Private!  Called by threading._after_fork().
# This thread is alive.
# Otherwise, the thread is dead, Jim.  _PyThread_AfterFork()
# already marked our handle done.
# Start joinable thread
# Will set ident and native_id
# Avoid a refcycle if the thread is running a function with
# an argument that has a member that points to the thread.
# Wrapper around the real bootstrap code that ignores
# exceptions during interpreter cleanup.  Those typically
# happen when a daemon thread wakes up at an unfortunate
# moment, finds the world around it destroyed, and raises some
# random exception *** while trying to report the exception in
# _bootstrap_inner() below ***.  Those random exceptions
# don't help anybody, and they confuse users, so we suppress
# them.  We suppress them only when it appears that the world
# indeed has already been destroyed, so that exceptions in
# _bootstrap_inner() during normal business hours are properly
# reported.  Also, we only suppress them for daemonic threads;
# if a non-daemonic thread encounters this, something else is wrong.
# There must not be any python code between the previous line
# and after the lock is released.  Otherwise a tracing function
# could try to acquire the lock again in the same thread (in
# current_thread()), and would block.
# the behavior of a negative timeout isn't documented, but
# historically .join(timeout=x) for x<0 has acted as if timeout=0
# Simple Python implementation if _thread._excepthook() is not available
# silently ignore SystemExit
# do nothing if sys.stderr is None and sys.stderr was None
# when the thread was created
# do nothing if sys.stderr is None and args.thread is None
# Original value of threading.excepthook
# Create a local namespace to ensure that variables remain alive
# when _invoke_excepthook() is called, even if it is called late during
# Python shutdown. It is mostly needed for daemon threads.
# Break reference cycle (exception stored in a variable)
# The timer class was contributed by Itamar Shtull-Trauring
# Special thread class to represent the main thread
# Helper thread-local instance to detect when a _DummyThread
# is collected. Not a part of the public API.
# Put the thread on a thread local variable so that when
# the related thread finishes this instance is collected.
# Note: no other references to this instance may be created.
# If any client code creates a reference to this instance,
# the related _DummyThread will be kept forever!
# Dummy thread class to represent threads not started here.
# These should be added to `_active` and removed automatically
# when they die, although they can't be waited for.
# Their purpose is to return *something* from current_thread().
# They are marked as daemon threads so we won't wait for them
# when we exit (conforming to previous semantics).
# Global API functions
# NOTE: if the logic in here ever changes, update Modules/posixmodule.c
# warn_about_fork_with_threads() to match.
# Same as enumerate(), but without the lock. Internal use only.
# Create the main thread object,
# and make it available for the interpreter
# (Py_Main) as threading._shutdown.
# Obscure: other threads may be waiting to join _main_thread.  That's
# dubious, but some code does it. We can't wait for it to be marked as done
# normally - that won't happen until the interpreter is nearly dead. So
# mark it done here.
# _shutdown() was already called
# Call registered threading atexit functions before threads are joined.
# Order is reversed, similar to atexit.
# Wait for all non-daemon threads to exit.
# XXX Figure this out for subinterpreters.  (See gh-75698.)
# Reset _active_limbo_lock, in case we forked while the lock was held
# by another (non-forked) thread.  http://bugs.python.org/issue874900
# fork() only copied the current thread; clear references to others.
# fork() was called in a thread which was not spawned
# by threading.Thread. For example, a thread spawned
# by thread.start_new_thread().
# Dangling thread instances must still have their locks reset,
# because someone may join() them.
# Any lock/condition variable may be currently locked or in an
# invalid state, so we reinitialize them.
# This is the one and only active thread.
# All the others are already stopped.
# sys.stderr is None when run with pythonw.exe:
# warnings get lost
# the file (probably stderr) is invalid - this warning gets lost.
# When a warning is logged during Python shutdown, linecache
# and the import machinery don't work anymore
# Logging a warning should not raise a new exception:
# catch Exception, not only ImportError and RecursionError.
# don't suggest to enable tracemalloc if it's not available
# When a warning is logged during Python shutdown, tracemalloc
# Keep a reference to check if the function was replaced
# warnings.showwarning() was replaced
# warnings.formatwarning() was replaced
# Remove possible duplicate filters, so new one will be placed
# in correct place. If append=True and duplicate exists, do nothing.
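The dedup behavior is observable through the public API: registering the same filter twice leaves a single entry at the front of `warnings.filters`.

```python
import warnings

with warnings.catch_warnings():        # restores the filter list on exit
    warnings.resetwarnings()           # start from an empty filter list
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    # the duplicate was removed before the new filter was re-inserted
    n = len(warnings.filters)
```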
# Helper to process -W options passed via sys.warnoptions
# Helper for _processoptions()
# Helper for _setoption()
# Alias
# Code typically replaced by _warnings
# Check if message is already a Warning object
# Check category argument
# The C version demands a tuple for implementation performance.
# Get context information
# If frame is too small to care or if the warning originated in
# internal code, then do not try to hide any frames.
# Look for one frame less since the above line starts us off.
# XXX What about leading pathname?
# Quick test for common case
# Search the filters
# Early exit actions
# Prime the linecache for formatting, in case the
# "file" is actually in a zipfile or something.
# Other actions
# Unrecognized actions are errors
# Print message and context
# Reset showwarning() to the default implementation to make sure
# that _showwarnmsg() calls _showwarnmsg_impl()
# Make sure the inner functions created below don't
# retain a reference to self.
# Mirrors a similar check in object.__new__.
# We need slightly different behavior if __init_subclass__
# is a bound method (likely if it was implemented in Python)
# Or otherwise, which likely means it's a builtin such as
# object's implementation of __init_subclass__.
# Private utility function called by _PyErr_WarnUnawaitedCoroutine
# Passing source= here means that if the user happens to have tracemalloc
# enabled and tracking where the coroutine was created, the warning will
# contain that traceback. This does mean that if they have *both*
# coroutine origin tracking *and* tracemalloc enabled, they'll get two
# partially-redundant tracebacks. If we wanted to be clever we could
# probably detect this case and avoid it, but for now we don't bother.
# filters contains a sequence of filter 5-tuples
# The components of the 5-tuple are:
# - an action: error, ignore, always, default, module, or once
# - a compiled regex that must match the warning message
# - a class representing the warning category
# - a compiled regex that must match the module that is being warned
# - a line number for the line being warned about, or 0 to mean any line
# If either of the compiled regexes is None, it matches anything.
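A sketch of how one 5-tuple is matched against a warning (`filter_matches` is a hypothetical helper; the real check lives inside `warnings.warn_explicit()`):

```python
import re

def filter_matches(filt, message, category, module, lineno):
    # Each component must match; a None regex matches anything,
    # and lineno 0 in the filter means "any line".
    action, msg_re, cat, mod_re, ln = filt
    return ((msg_re is None or msg_re.match(message) is not None)
            and issubclass(category, cat)
            and (mod_re is None or mod_re.match(module) is not None)
            and (ln == 0 or ln == lineno))

filt = ("ignore", re.compile("deprecated"), DeprecationWarning, None, 0)
matched = filter_matches(filt, "deprecated call", DeprecationWarning, "m", 10)
```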
# Module initialization
# Several warning categories are ignored by default in regular builds
# placeholders
# Re-raise to get a traceback showing more user code.
# list of keys for the dict
# key to catch long rows
# default value for short rows
# Used only for its side effect.
# unlike the basic reader, we prefer not to return blanks,
# because we will typically wind up with a dict full of None
# values
# for writing short dicts
# in case there is more than one possible delimiter
# escapechar = ''
# _csv.reader won't accept a quotechar of ''
# ,".*?",
# ,".*?"
# (quotechar, doublequote, delimiter, skipinitialspace)
# most likely a file with a single column
# there is *no* delimiter, it's a single column of quoted data
# if we see an extra quote between delimiters, we've got a
# double quoted format
# 7-bit ASCII
# build frequency tables
# must count even if frequency is 0
# value is the mode
# get the mode of the frequencies
# adjust the mode - subtract the sum of all
# other frequencies
# build a list of possible delimiters
# (rows of consistent data) / (number of rows) = 100%
# minimum consistency threshold
# analyze another chunkLength lines
# if there's more than one, fall back to a 'preferred' list
# nothing else indicates a preference, pick the character that
# dominates(?)
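The frequency-table approach above is what `csv.Sniffer` implements, and it can be exercised directly through the public API:

```python
import csv

# No quoted fields here, so the sniffer falls back to the
# character-frequency analysis; ';' recurs consistently per row.
sample = "name;age;city\nAlice;30;Oslo\nBob;25;Turku\n"
dialect = csv.Sniffer().sniff(sample)
```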
# Creates a dictionary of types of data in each column. If any
# column is of a single type (say, integers), *except* for the first
# row, then the first row is presumed to be labels. If the type
# can't be determined, it is assumed to be a string in which case
# the length of the string is the determining factor: if all of the
# rows except for the first are the same length, it's a header.
# Finally, a 'vote' is taken at the end for each column, adding or
# subtracting from the likelihood of the first row being a header.
# assume first row is header
# arbitrary number of rows to check, to keep it sane
# skip rows that have irregular number of columns
# fallback to length of string
# add new column type
# type is inconsistent, remove column from
# consideration
# finally, compare results against first row and "vote"
# on whether it's a header
# it's a length
# attempt typecast
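This voting heuristic is exposed as `csv.Sniffer().has_header()`; note it is a heuristic, not a guarantee:

```python
import csv

# Rows after the first are consistently numeric in the second column,
# while the first row is not, so the vote favors a header row.
sample = "name,age\nAlice,30\nBob,25\nCarol,41\n"
has_hdr = csv.Sniffer().has_header(sample)
```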
#!/usr/bin/env python3
#-------------------------------------------------------------------
# tarfile.py
# Copyright (C) 2002 Lars Gustaebel <lars@gustaebel.de>
# All rights reserved.
# Permission  is  hereby granted,  free  of charge,  to  any person
# obtaining a  copy of  this software  and associated documentation
# files  (the  "Software"),  to   deal  in  the  Software   without
# restriction,  including  without limitation  the  rights to  use,
# copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies  of  the  Software,  and to  permit  persons  to  whom the
# Software  is  furnished  to  do  so,  subject  to  the  following
# conditions:
# The above copyright  notice and this  permission notice shall  be
# included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS  IS", WITHOUT WARRANTY OF ANY  KIND,
# EXPRESS OR IMPLIED, INCLUDING  BUT NOT LIMITED TO  THE WARRANTIES
# OF  MERCHANTABILITY,  FITNESS   FOR  A  PARTICULAR   PURPOSE  AND
# NONINFRINGEMENT.  IN  NO  EVENT SHALL  THE  AUTHORS  OR COPYRIGHT
# HOLDERS  BE LIABLE  FOR ANY  CLAIM, DAMAGES  OR OTHER  LIABILITY,
# WHETHER  IN AN  ACTION OF  CONTRACT, TORT  OR OTHERWISE,  ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
#---------
# Imports
# os.symlink on Windows prior to 6.0 raises NotImplementedError
# OSError (winerror=1314) will be raised if the caller does not hold the
# SeCreateSymbolicLinkPrivilege privilege
# from tarfile import *
#---------------------------------------------------------
# tar constants
# the null character
# length of processing blocks
# length of records
# magic gnu tar string
# magic posix tar string
# maximum length of a filename
# maximum length of a linkname
# maximum length of the prefix field
# regular file
# link (inside tarfile)
# symbolic link
# character special device
# block special device
# directory
# fifo special device
# contiguous file
# GNU tar longname
# GNU tar longlink
# GNU tar sparse file
# POSIX.1-2001 extended header
# POSIX.1-2001 global header
# Solaris extended header
# POSIX.1-1988 (ustar) format
# GNU tar format
# POSIX.1-2001 (pax) format
# tarfile constants
# File types that tarfile supports:
# File types that will be treated as a regular file.
# File types that are part of the GNU tar format.
# Fields from a pax header that override a TarInfo attribute.
# Fields from a pax header that are affected by hdrcharset.
# Fields in a pax header that are numbers, all other fields
# are treated as strings.
# initialization
# Some useful functions
# There are two possible encodings for a number field, see
# itn() below.
# POSIX 1003.1-1988 requires numbers to be encoded as a string of
# octal digits followed by a null-byte, this allows values up to
# (8**(digits-1))-1. GNU tar allows storing numbers greater than
# that if necessary. A leading 0o200 or 0o377 byte indicates this
# particular encoding, the following digits-1 bytes are a big-endian
# base-256 representation. This allows values up to (256**(digits-1))-1.
# A 0o200 byte indicates a positive number, a 0o377 byte a negative
# number.
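A sketch of the two encodings (`itn_sketch`/`nti_sketch` are hypothetical names, simplified from tarfile's `itn()`/`nti()`; negative numbers and the 0o377 marker are omitted here):

```python
def itn_sketch(n, digits=8):
    if 0 <= n < 8 ** (digits - 1):
        # POSIX encoding: octal digits followed by a null byte
        return ("%0*o" % (digits - 1, n)).encode("ascii") + b"\0"
    # GNU extension: leading 0o200 byte, then a big-endian base-256 value
    return bytes([0o200]) + n.to_bytes(digits - 1, "big")

def nti_sketch(s):
    if s[0] == 0o200:
        return int.from_bytes(s[1:], "big")
    return int(s.rstrip(b"\0"), 8)

small = itn_sketch(511)      # fits in 7 octal digits
big = itn_sketch(8 ** 7)     # exceeds the octal range, stored base-256
```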
#---------------------------
# internal stream interface
# Enable transparent compression detection for the
# stream interface
# Honor "directory components removed" from RFC1952
# RFC1952 says we must use ISO-8859-1 for the FNAME field.
# taken from gzip.GzipFile with some alterations
# Skip underlying buffer to avoid unaligned double buffering.
# class _Stream
# class StreamProxy
#------------------------
# Extraction file object
# Construct a map with data and zero blocks.
#class _FileInFile
#class ExFileObject
#-----------------------------
# extraction filters (PEP 706)
# Errors caused by filters -- both "fatal" and "non-fatal" -- that
# we consider to be issues with the argument, rather than a bug in the
# filter function
# Strip leading / (tar's directory separator) from filenames.
# Include os.sep (target OS directory separator) as well.
# Path is absolute even after stripping.
# For example, 'C:/foo' on Windows.
# Ensure we stay in the destination
# Limit permissions (no high bits, and go-w)
# Strip high bits & group/other write bits
# For data, handle permissions & file types
# Clear executable bits if not executable by user
# Ensure owner can read & write
# Ignore mode for directories & symlinks
# Reject special files
# Ignore ownership for 'data'
# Check link destination for 'data'
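A simplified sketch of the path checks above (the real logic is tarfile's PEP 706 `tar_filter`/`data_filter`; this hypothetical helper covers only the name handling, not modes, ownership, or link targets):

```python
import os
import posixpath

def sanitize_member_name(name):
    # Strip leading / (tar's separator) and the local OS separator.
    name = name.lstrip("/" + os.sep)
    # Still absolute after stripping (e.g. 'C:/foo' on Windows)? Reject.
    if posixpath.isabs(name) or os.path.isabs(name):
        raise ValueError("absolute path: %r" % name)
    # Ensure the member stays inside the destination directory.
    if ".." in posixpath.normpath(name).split("/"):
        raise ValueError("path escapes the destination: %r" % name)
    return name
```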
#------------------
# Exported Classes
# Sentinel for replace() defaults, meaning "don't change the attribute"
# Header length is digits followed by a space.
# member name
# file permissions
# user id
# group id
# file size
# modification time
# header checksum
# member type
# link name
# user name
# group name
# device major number
# device minor number
# the tar header starts here
# the file's data starts here
# sparse member information
# pax header information
# Test string fields for values that exceed the field length or cannot
# be represented in ASCII encoding.
# The pax header has priority.
# Try to encode the string as ASCII.
# Test number fields for values that exceed the field limit or values
# that need to be stored as floats.
# Avoid overflow.
# Put rounded value in ustar header, and full
# precision value in pax header.
# The existing pax header has priority.
# Create a pax extended header if necessary.
# None values in metadata should cause ValueError.
# itn()/stn() do this for all fields except type.
# checksum field
# create extended header + name blocks.
# Check if one of the fields contains surrogate characters and thereby
# forces hdrcharset=BINARY, see _proc_pax() for more information.
# Put the hdrcharset field at the beginning of the header.
# Try to restore the original byte representation of `value'.
# Needless to say, the encoding must match the string.
# ' ' + '=' + '\n'
# We use a hardcoded "././@PaxHeader" name like star does
# instead of the one that POSIX recommends.
# Create pax header + record blocks.
# Old V7 tar format represents a directory as a regular
# file with a trailing slash.
# The old GNU sparse format occupies some of the unused
# space in the buffer for up to 4 sparse structures.
# Save them for later processing in _proc_sparse().
# Remove redundant slashes from directories.
# Reconstruct a ustar longname.
#--------------------------------------------------------------------------
# The following are methods that are called depending on the type of a
# member. The entry point is _proc_member() which can be overridden in a
# subclass to add custom _proc_*() methods. A _proc_*() method MUST
# implement the following operations:
# 1. Set self.offset_data to the position where the data blocks begin,
#    if there is data that follows.
# 2. Set tarfile.offset to the position where the next member's header will
#    begin.
# 3. Return self or another valid TarInfo object.
# Skip the following data blocks.
# Patch the TarInfo object with saved global
# header information.
# Remove redundant slashes from directories. This is to be consistent
# with frombuf().
# Fetch the next header and process it.
# Patch the TarInfo object from the next header with
# the longname information.
# We already collected some sparse structures in frombuf().
# Collect sparse structures from extended header blocks.
# Read the header information.
# A pax header stores supplemental information for either
# the following file (extended) or all following files
# (global).
# Parse pax header information. A record looks like this:
# "%d %s=%s\n" % (length, keyword, value). length is the size
# of the complete record including the length field itself and
# the newline.
# Headers must be at least 5 bytes, shortest being '5 x=\n'.
# Value is allowed to be empty.
# Last byte of the header
# Check the framing of the header. The last character must be '\n' (0x0A)
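The self-including length field makes record construction slightly tricky; a sketch with hypothetical helpers (not tarfile's internal `_proc_pax`):

```python
def make_pax_record(keyword, value):
    body = " %s=%s\n" % (keyword, value)
    # The length field counts the whole record, including its own
    # digits, so iterate until the digit count stabilizes.
    n = len(body)
    while n != len(str(n)) + len(body):
        n = len(str(n)) + len(body)
    return "%d%s" % (n, body)

def parse_pax_record(record):
    length = int(record.split(" ", 1)[0])
    if record[length - 1] != "\n":       # framing check: must end with '\n'
        raise ValueError("bad pax record framing")
    keyword, _, value = record[len(str(length)) + 1:length - 1].partition("=")
    return keyword, value

rec = make_pax_record("path", "foo")
```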
# Check if the pax header contains a hdrcharset field. This tells us
# the encoding of the path, linkpath, uname and gname fields. Normally,
# these fields are UTF-8 encoded but since POSIX.1-2008 tar
# implementations are allowed to store them as raw binary strings if
# the translation to UTF-8 fails. For the time being, we don't care about
# anything other than "BINARY". The only other value that is currently
# allowed by the standard is "ISO-IR 10646 2000 UTF-8", in other words UTF-8.
# Note that we only follow the initial 'hdrcharset' setting to preserve
# the initial behavior of the 'tarfile' module.
# This branch ensures only the first 'hdrcharset' header is used.
# If no explicit hdrcharset is set, we use UTF-8 as a default.
# After parsing the raw headers we can decode them to text.
# Normally, we could just use "utf-8" as the encoding and "strict"
# as the error handler, but we better not take the risk. For
# example, GNU tar <= 1.23 is known to store filenames it cannot
# translate to UTF-8 as raw strings (unfortunately without a
# hdrcharset=BINARY header).
# We first try the strict standard encoding, and if that fails we
# fall back on the user's encoding and error handler.
# Fetch the next header.
# Process GNU sparse information.
# GNU extended sparse format version 0.1.
# GNU extended sparse format version 0.0.
# GNU extended sparse format version 1.0.
# Patch the TarInfo object with the extended header info.
# If the extended header replaces the size field,
# we need to recalculate the offset where the next
# header starts.
# Only non-negative offsets are allowed
# class TarInfo
# May be set from 0 (no msgs) to 3 (all msgs)
# If true, add content of linked file to the
# tar file, else the link.
# If true, skips empty or invalid blocks and
# continues processing.
# If 0, fatal errors only appear in debug
# messages (if debug >= 0). If > 0, errors
# are passed to the caller as exceptions.
# The format to use when creating an archive.
# Encoding for 8-bit character strings.
# Error handler for unicode conversion.
# The default TarInfo class to use.
# The file-object for extractfile().
# The default filter for extraction.
# Create nonexistent files in append mode.
# Init attributes.
# Init datastructures.
# list of members as TarInfo objects
# flag if all members have been read
# current position in the archive file
# dictionary caching the inodes of
# archive members already added
# Move to the end of the archive,
# before the first empty block.
# Below are the classmethods which act as alternate constructors to the
# TarFile class. The open() method is the only one that is needed for
# public use; it is the "super"-constructor and is able to select an
# adequate "sub"-constructor for a particular compression using the mapping
# from OPEN_METH.
# This concept allows one to subclass TarFile without losing the comfort of
# the super-constructor. A sub-constructor is registered and made available
# by adding it to the mapping in OPEN_METH.
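In practice the mode string selects the sub-constructor (e.g. `"w:gz"` dispatches to `gzopen()`); a round-trip in memory using the standard `tarfile` API:

```python
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:   # dispatches to gzopen()
    data = b"hello"
    info = tarfile.TarInfo(name="hello.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:*") as tar:    # autodetects compression
    member = tar.extractfile("hello.txt").read()
```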
# Find out which *open() is appropriate for opening the file.
# Select the *open() function according to
# given compression.
# All *open() methods are registered here.
# uncompressed tar
# gzip compressed tar
# bzip2 compressed tar
# lzma compressed tar
# The public methods which TarFile provides:
# fill up the end with zero-blocks
# (like option -b20 for tar does)
# if we want to obtain a list of
# all members, we first have to
# scan the whole archive.
# When fileobj is given, replace name by
# fileobj's real name.
# Building the name of the member in the archive.
# Backward slashes are converted to forward slashes,
# Absolute paths are turned to relative paths.
# Now, fill the TarInfo object with
# information specific for the file.
# To be removed in 3.16.
# Use os.stat or os.lstat, depending on whether symlinks shall be resolved.
# Is it a hardlink to an already
# archived file?
# The inode is added only if it's valid.
# For win32 it is always 0.
# Fill the TarInfo object with all
# information we can get.
# Convert tarinfo type to stat type.
# Skip if somebody tries to archive the archive...
# Create a TarInfo object from the file.
# Change or exclude the TarInfo object.
# Append the tar header and data to the archive.
# If there's data to follow, append it.
# For directories, delay setting attributes until later,
# since permissions can interfere with extraction and
# extracting contents can reset mtime.
# Reverse sort directories.
# Set correct owner, mtime and filemode on directories.
# Need to re-apply any filter, to take the *current* filesystem
# state into account.
# This is no longer a directory; presumably a later
# member overwrote the entry.
# Prepare the link target for makelink().
# Members with unknown types are treated as regular files.
# A small but ugly workaround for the case that someone tries
# to extract a (sym)link as a file-object from a non-seekable
# stream of tar blocks.
# A (sym)link's file object is its target's file object.
# If there's no data associated with the member (directory, chrdev,
# blkdev, etc.), return None instead of a file object.
# Fetch the TarInfo object for the given name
# and build the destination pathname, replacing
# forward slashes with platform-specific separators.
# Create all upper directories.
# Create directories that are not part of the archive with
# default permissions.
# Below are the different file methods. They are called via
# _extract_member() when extract() is called. They can be replaced in a
# subclass to implement other functionality.
# Use the system's default mode
# Use a safe mode for the directory, the real mode is set
# later in _extract_member().
# Use mknod's default
# For systems that support symbolic and hard links.
# Avoid FileExistsError on following os.symlink.
# We have to be root to do so.
# OverflowError can be raised if an ID doesn't fit in `id_t`
# Advance the file pointer.
# Read the next block.
# if streaming the file we do not want to cache the tarinfo
# Little helper methods:
# Ensure that all members have been loaded.
# Limit the member search list up to tarinfo.
# The given starting point might be a (modified) copy.
# We'll later skip members until we find an equivalent.
# Happy fast path
# Starting point was not found
# Always search the entire archive.
# Search the archive before the link, because a hard link is
# just a reference to an already archived file.
# Yield items using TarFile's next() method.
# When all members have been read, set TarFile as _loaded.
# Fix for SF #1100429: Under rare circumstances it can
# happen that getmembers() is called during iteration,
# which will have already exhausted the next() method.
# An exception occurred. We must not call close() because
# it would try to write end-of-archive blocks and padding.
#--------------------
# exported functions
# gz
# xz
# bz2
# Authors: Piers Lauder (original)
# Always try reading and writing directly on the tty first.
# If that fails, see if stdin can be controlled.
# a copy to save
# 3 == 'lflags'
# issue7208
# _raw_input succeeded.  The final tcsetattr failed.  Reraise
# instead of leaving the terminal in an unknown state.
# We can't control the tty or stdin.  Give up and use normal IO.
# fallback_getpass() raises an appropriate warning.
# clean up unused file objects before blocking
# This doesn't save the string in the GNU readline history.
# Use replace error handler to get as much as possible printed.
# NOTE: The Python C API calls flockfile() (and unlock) during readline.
# Bind the name getpass to the appropriate function
# it's possible there is an incompatible termios from the
# McMillan Installer, make sure we have a UNIX-compatible termios
# libedit uses "^I" instead of "tab"
# This method used to pull in base class attributes
# at a time when dir() didn't do it yet.
# XXX check arg syntax
# There can be duplicates if routines overridden
# Try every row count from 1 upwards
# The node this class is augmenting.
# Number of predecessors, generally >= 0. When this value falls to 0
# and the node is returned by get_ready(), it is set to _NODE_OUT; when
# the node is marked done by a call to done(), it is set to _NODE_DONE.
# List of successor nodes. The list can contain duplicated elements as
# long as they're all reflected in the successor's npredecessors attribute.
# Create the node -> predecessor edges
# Create the predecessor -> node edges
# ready_nodes is set before we look for cycles on purpose:
# if the user wants to catch the CycleError, that's fine,
# they can continue using the instance to grab as many
# nodes as possible before cycles block more progress
# Get the nodes that are ready and mark them
# Clean the list of nodes that are ready and update
# the counter of nodes that we have returned.
# Check if we know about this node (it was added previously using add())
# If the node has not been returned (marked as ready) previously, inform the user.
# Mark the node as processed
# Go to all the successors and reduce the number of predecessors, collecting all the ones
# that are ready to be returned in the next get_ready() call.
# If we have already seen the node and it is in the
# current stack, we have found a cycle.
# else go on to get next successor
# Backtrack to the topmost stack entry with
# at least another successor.
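These notes describe `graphlib.TopologicalSorter` (Python 3.9+); the prepare/get_ready/done cycle in use:

```python
from graphlib import TopologicalSorter

ts = TopologicalSorter()
ts.add("compile", "configure")   # "compile" has predecessor "configure"
ts.add("test", "compile")
ts.prepare()                     # freezes the graph and detects cycles

order = []
while ts.is_active():
    ready = ts.get_ready()       # nodes whose predecessor count reached 0
    order.extend(ready)
    for node in ready:
        ts.done(node)            # unblocks each node's successors
```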
# mutex must be held whenever the queue is mutating.  All methods
# that acquire mutex must release it before returning.  mutex
# is shared between the three conditions, so acquiring and
# releasing the conditions also acquires and releases mutex.
# Notify not_empty whenever an item is added to the queue; a
# thread waiting to get is notified then.
# Notify not_full whenever an item is removed from the queue;
# a thread waiting to put is notified then.
# Notify all_tasks_done whenever the number of unfinished tasks
# drops to zero; thread waiting to join() is notified to resume
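A minimal sketch of that locking scheme (`MiniQueue` is hypothetical, not the stdlib implementation): both condition variables share one mutex, so waiting on either condition releases the same lock.

```python
import threading
from collections import deque

class MiniQueue:
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self._items = deque()
        self.mutex = threading.Lock()
        # Both conditions share the same underlying mutex.
        self.not_empty = threading.Condition(self.mutex)
        self.not_full = threading.Condition(self.mutex)

    def put(self, item):
        with self.not_full:                   # acquires the shared mutex
            while len(self._items) >= self.maxsize:
                self.not_full.wait()
            self._items.append(item)
            self.not_empty.notify()           # wake one blocked getter

    def get(self):
        with self.not_empty:
            while not self._items:
                self.not_empty.wait()
            item = self._items.popleft()
            self.not_full.notify()            # wake one blocked putter
            return item

q = MiniQueue(maxsize=2)
results = []

def consumer():
    for _ in range(3):
        results.append(q.get())

t = threading.Thread(target=consumer)
t.start()
for i in range(3):
    q.put(i)
t.join()
```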
# Queue shutdown state
# release all blocked threads in `join()`
# All getters need to re-check queue-empty to raise ShutDown
# Override these methods to implement other queue organizations
# (e.g. stack or priority queue).
# These will only be called with appropriate locks held
# Initialize the queue representation
# Put a new item in the queue
# Get an item from the queue
# Note: while this pure Python version provides fairness
# (by using a threading.Semaphore which is itself fair, being based
# on a threading.Condition), fairness is not part of the API contract.
# This allows the C version to use a different implementation.
# Inspired by similar code by Jeff Epler and Fredrik Lundh.
# Case 1
# Case 2
# Case 3
# Set the line of text that the exception refers to
# If someone has set sys.excepthook, we let that take precedence
# over self.write
# This method is overridden in
# _pyrepl.console.InteractiveColoredConsole
# When the user uses exit() or quit() in their interactive shell
# they probably just want to exit the created shell, not the whole
# process. exit and quit in builtins closes sys.stdin which makes
# it super difficult to restore
# When self.local_exit is True, we overwrite the builtins so
# exit() and quit() only raise SystemExit and we can catch that
# to only exit the interactive shell
# restore exit and quit in builtins if they were modified
# Mac OS X
# Apache
# Apache 1
# Apache 2
# Apache 1.2
# Apache 1.3
# dict for (non-strict, strict)
# TODO: Deprecate accepting file paths (in particular path-like objects).
# syntax of data URLs:
# dataurl   := "data:" [ mediatype ] [ ";base64" ] "," data
# mediatype := [ type "/" subtype ] *( ";" parameter )
# data      := *urlchar
# parameter := attribute "=" value
# type/subtype defaults to "text/plain"
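`urllib.request` resolves `data:` URLs locally with exactly this grammar, no network involved:

```python
from urllib.request import urlopen

# mediatype omitted: defaults to text/plain, data is percent-encoded
with urlopen("data:,Hello%2C%20World%21") as resp:
    plain = resp.read()

# explicit mediatype with the ;base64 marker
with urlopen("data:text/plain;base64,SGVsbG8=") as resp:
    decoded = resp.read()
```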
# bad data URL
# never compressed, so encoding is None
# encodings_map is case sensitive
# Accelerated function if it is available
# Only check file extensions
# raises OSError if no 'Content Type' value
# so that MimeTypes.__init__() doesn't call us again
# Quick return if not supported
# Make the DB a global variable now that it is fully initialized
# Before adding new types, make sure they are either registered with IANA,
# at http://www.iana.org/assignments/media-types
# or extensions, i.e. using the x- prefix
# If you add to these, please keep them sorted by mime type.
# Make sure the entry with the preferred file extension for a particular mime type
# appears before any others of the same mimetype.
# These are non-standard types, commonly found in the wild.  They will
# only match if strict=0 flag is given to the API methods.
# Please sort these too
# Known bugs that can't be fixed here:
# --------------------------------------------------------- old names
# --------------------------------------------------------- common routines
# classmethod
# Should be tested before isdatadescriptor().
# <lambda> functions are always single-line and should not be formatted
# Strip the bound argument.
# The behaviour of %p is implementation-dependent in terms of case.
# all your base are belong to us
# Certain special names are redundant or internal.
# XXX Remove __initializing__?
# Private names are hidden, but special names are displayed.
# Namedtuples have public fields and methods with a single leading underscore
# Ignore __future__ imports.
# only document that which the programmer exported in __all__
# This allows data descriptors to be ordered according
# to a _fields attribute if present.
# ----------------------------------------------------- module manipulation
# Ignore the "invalid escape sequence" warning.
# Look for binary suffixes first, falling back to source.
# Now handle the choice.
# Must be a source file.
# module can't be opened, so skip it
# text modules can be directly examined
# Must be a binary module, which has to be imported.
# XXX We probably don't need to pass in the loader here.
# Cache the result.
# If forceload is 1 and the module has been previously loaded from
# disk, we always have to reload the module.  Checking the file's
# mtime isn't good enough (e.g. the module could contain a class
# that inherits from another module that has changed).
# Remove the module from sys.modules and re-import to try
# and avoid problems with partially loaded modules.
# Also remove any submodules because they won't appear
# in the newly loaded module's namespace if they're already
# in sys.modules.
# Prevent garbage collection.
# Did the error occur before or after the module was found?
# An error occurred while executing the imported module.
# A SyntaxError occurred before we could execute the module.
# No such module in the path.
# Some other error occurred during the importing process.
# ---------------------------------------------------- formatter base class
# 'try' clause is to attempt to handle the possibility that inspect
# identifies something in a way that pydoc itself has issues handling;
# think 'super' and how it is a descriptor (which raises the exception
# by lacking a __name__ attribute) and an instance.
# -------------------------------------------- HTML documentation generator
# Backslashes are only literal in the string and are never
# needed to make any special characters, so show a raw string.
# ------------------------------------------- HTML formatting utilities
# Create a link for methods like 'self.method(...)'
# and use <strong> for attributes like 'self.attr'
# ---------------------------------------------- type-specific routines
# ignore the passed-in name
# if __all__ exists, believe it.  Otherwise use old heuristic.
# if __all__ exists, believe it.  Otherwise use a heuristic.
# Cute little class to pump out a horizontal rule between sections.
# List the mro, if non-trivial.
# Some descriptors may meet a failure in their __get__.
# (bug #1785)
# The value may not be hashable (e.g., a data attr with
# a dict or list value).
# Pump out the attrs, segregated by kind.
# XXX lambdas won't usually have func_annotations['return']
# since the syntax doesn't support it, but it is possible.
# So removing parentheses isn't truly safe.
# remove parentheses
# ignore a module if its name contains a surrogate character
# -------------------------------------------- text documentation generator
# ------------------------------------------- text formatting utilities
# Detect submodules as sometimes created by C extensions
# List the built-in subclasses, if any:
# --------------------------------------------------------- user interfaces
# --------------------------------------- interactive interpreter interface
# If the passed object is a piece of data or an instance,
# document its available methods instead of its value.
# These dictionaries map a topic name to either an alias, or a tuple
# (label, seealso-items).  The "label" is the label of the corresponding
# section in the .rst file under Doc/ and an index into the dictionary
# in pydoc_data/topics.py.
# CAUTION: if you change one of these dictionaries, be sure to adapt the
# Either add symbols to this dictionary or to the symbols dictionary
# directly: Whichever is easier. They are merged later.
# Make sure significant trailing quotation marks of literals don't
# get deleted while cleaning input
# special case these keywords since they are objects too
# raised by tests for bad coding cookies or BOM
# ignore problems during import
# --------------------------------------- enhanced web browser interface
# Don't log messages.
# explicitly break a reference cycle: DocServer.callback
# indirectly has a reference to ServerThread.
# Wait until thread.serving is True and thread.docserver is set
# to make sure we are really up before returning.
# scan for modules
# format page
# try topics first, then objects.
# try objects first, then topics.
# Catch any errors and display them in an error page.
# Errors outside the url handler are caught by the server.
# -------------------------------------------------- command-line interface
# Scripts may get the current directory in their path by default if they're
# run with the -m switch, or directly from the current directory.
# The interactive prompt also allows imports from the current directory.
# Accordingly, if the current directory is already present, don't make
# any changes to the given_path
# Otherwise, add the current directory to the given path, and remove the
# script directory (as long as the latter isn't also pydoc's directory).
# Note: the tests only cover _get_revised_path, not _adjust_cli_path itself
# NOTE: the actual command documentation is collected from docstrings of the
# commands and is appended to __doc__ after the class has been defined.
# consumer of this info expects the first line to be 1
# We should always be able to find the code object here
# If safe_path(-P) is not set, sys.path[0] is the directory
# of pdb, and we should replace it with the directory of the script
# Open the file each time because the file may be modified
# The interaction prompt line separates file and call info from the code
# text, using the value of the line_prefix string.  A newline and arrow
# may be to your liking.  You can set it once pdb is imported using the
# command "pdb.line_prefix = '\n% '".
# line_prefix = ': '    # Use this to get the old situation back
# Probably a better default
# Limit the maximum depth of chained exceptions.  We should be handling
# cycles already, but in case of unexpected recursion we stop at 999.
# Try to load readline if it exists
# remove some common file name delimiters
# Consider these characters part of the command, so that when the user
# types c.a or c['a'] it isn't recognized as a c(ontinue) command
# Read ~/.pdbrc and ./.pdbrc
# associates a command list to breakpoint numbers
# for each bp num, tells if the prompt
# must be displayed after execing the cmd list
# for each bp num, tells if the stack trace
# must be displayed after execing the cmd list
# True while in the process of defining
# a command list
# The breakpoint number for which we are
# defining a list
# when setting up post-mortem debugging with a traceback, save all
# the original line numbers to be displayed along the current line
# numbers (which can be different, e.g. due to finally clauses)
# The f_locals dictionary used to be updated from the actual frame
# locals whenever the .f_locals accessor was called, so it was
# cached here to ensure that modifications were not overwritten. While
# the caching is no longer required now that f_locals is a direct proxy
# on optimized frames, it's also harmless, so the code structure has
# been left unchanged.
# Override Bdb methods
# GH-127321
# We want to avoid stopping at an opcode that does not have
# an associated line number because pdb does not like it
# self.currentbp is set in bdb in Bdb.break_here if a breakpoint was hit
# An 'Internal StopIteration' exception is an exception debug event
# issued by the interpreter when handling a subgenerator run with
# 'yield from' or a generator controlled by a for loop. No exception has
# actually occurred in this case. The debugger uses this debug event to
# stop when the debuggee is returning from such generators.
# General interaction function
# keyboard interrupts allow for an easy way to cancel
# the current command, so allow them during interactive input
# Called before loop, handles display expressions
# Set up convenience variable containers
# check for identity first; this prevents a custom __eq__ from
# being called at every loop, and also prevents instances whose
# fields are changed from being displayed
# we can't put those in forget as otherwise they would
# be cleared on exception change
# Restore the previous signal handler at the Pdb prompt.
# ValueError: signal only works in main thread
# We should print the stack entry if and only if the user input
# is expected, and we should print it right before the user input.
# We achieve this by appending _pdbcmd_print_frame_status to the
# command queue. If cmdqueue is not exhausted, the user input is
# not expected and we will not print the stack entry.
# If _pdbcmd_print_frame_status is not used, pop it out
# reproduce the behavior of the standard displayhook, not printing None
# Determine if the source should be executed in a closure. We should use
# this feature only when the source compiles to multiple code objects.
# Otherwise, we can just raise an exception and normal exec will be used.
# locals could be a proxy which does not support pop
# copy it first to avoid modifying the original locals
# If the source is an expression, we need to print its value
# Add write-back to update the locals
# Build a closure source code with freevars from locals like:
# def __pdb_outer():
# Get the code object of __pdb_scope()
# The exec fills locals_copy with the __pdb_outer() function and we can call
# that to get the code object of __pdb_scope()
# get the data we need from the statement
# __pdb_eval__ should not be updated back to locals
# Write all local variables back to locals
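The copy-then-write-back idea can be sketched in isolation. This is a simplified illustration, not pdb's actual implementation (which additionally builds a synthetic closure when the source compiles to multiple code objects); `exec_with_writeback` is an illustrative name.

```python
def exec_with_writeback(source, frame_locals):
    """Run source against a copy of a frame's locals, then write back.

    exec() mutates the mapping it is given, so we run against a copy
    (the original may be a proxy that doesn't support arbitrary
    mutation mid-exec) and then update the original, keeping edits
    made at the debugger prompt visible afterwards.
    """
    locals_copy = dict(frame_locals)   # copy first, as described above
    exec(source, {}, locals_copy)
    frame_locals.update(locals_copy)   # write all local variables back
    return frame_locals

scope = {'x': 1}
exec_with_writeback("y = x + 1", scope)
assert scope == {'x': 1, 'y': 2}
```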
# Multi-line mode
# line is a one-line command so we only care about column
# This is a no-op
# split into ';;' separated commands
# unless it's an alias command
# queue up everything after marker
# Replace all the convenience variables
# continue to handle other cmd def in the cmd list
# end of cmd list
# Determine if we must stop
# one of the resuming commands
# interface abstraction functions
# convenience variables
# Generic completion functions.  Individual complete_foo methods can be
# assigned below to one of these functions.
# Overwrite completenames() of cmd so for the command completion,
# if no current command matches, check for expressions as well
# Complete a file/module/function location for break/tbreak/clear.
# Here comes a line number or a condition which we can't complete.
# First, try to find matching functions (i.e. expressions).
# Then, try to complete file names as well.
# Complete a breakpoint number.  (This would be more helpful if we could
# display additional info along with the completions, such as file/line
# of the breakpoint.)
# Complete an arbitrary expression.
# Collect globals and locals.  It is usually not really sensible to also
# complete builtins, and they clutter the namespace quite heavily, so we
# leave them out.
# Complete convenience variables
# Walk an attribute chain up to the last part, similar to what
# rlcompleter does.  This will bail if any of the parts are not
# simple attribute access, which is what we want.
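The attribute-chain walk can be sketched as follows. `complete_expression` is an illustrative name (rlcompleter implements the same idea internally); the real completer also handles more edge cases.

```python
import os

def complete_expression(text, namespace):
    """Complete a dotted expression such as 'os.path.joi'.

    Walks every segment before the last dot with getattr() and bails
    (returns []) if any intermediate part is not simple attribute
    access; the final fragment is matched against dir() of the
    resolved object.
    """
    if '.' not in text:
        return sorted(n for n in namespace if n.startswith(text))
    parts = text.split('.')
    if parts[0] not in namespace:
        return []
    obj = namespace[parts[0]]
    for part in parts[1:-1]:          # walk up to the last part
        try:
            obj = getattr(obj, part)
        except AttributeError:
            return []                 # bail on anything non-trivial
    prefix = '.'.join(parts[:-1]) + '.'
    return sorted(prefix + attr for attr in dir(obj)
                  if attr.startswith(parts[-1]))

assert 'os.path.join' in complete_expression('os.path.joi', {'os': os})
assert complete_expression('os.no_such_attr.x', {'os': os}) == []
```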
# Complete a simple name.
# Use rlcompleter to do the completion
# Pdb meta commands, only intended to be used internally by pdb
# Command definitions, called by cmdloop()
# The argument is the remaining string on the command line
# Return true to exit from the command loop
# Save old definitions for the case of a keyboard interrupt.
# Restore old definitions.
# There's at least one
# parse arguments; comma has lowest precedence
# and cannot occur in filename
# parse stuff after comma: "condition"
# parse stuff before comma: [filename:]lineno | function
# no colon; can be lineno or function
# use co_name to identify the bkpt (function names
# could be aliased, but co_name is invariant)
# last thing to try
# ok contains a function name
# Check for reasonable breakpoint
# now set the break point
# To be overridden in derived debuggers
# Input is identifier, may be in single quotes
# not in single quotes
# quoted
# Protection for derived debuggers
# Best first guess at file to look at
# More than one part.
# First is module, second is method/class
# this method should be callable before starting debugging, so default
# to "no globals" if there is no current frame
# Don't allow setting breakpoint at a blank line
# Make sure it works for "clear C:\foo\bar.py:12"
# 'c' is already an abbreviation for 'continue'
# this is caught in the main debugger loop
# ValueError happens when do_continue() is invoked from
# a non-main thread in which case we just continue without
# SIGINT set. Would printing a message here (once) make
# sense?
# Do the jump, fix up our copy of the stack, and display the
# new position
# _getval() has displayed the error
# assume it's a count
# gh-93696: stdlib frozen modules provide a useful __file__
# this workaround can be removed with the closure of gh-89815
# _getval() already printed the error
# Is it an instance method?
# Is it a function?
# Is it a class?
# None of the above...
# Do a validation check to make sure no replaceable parameters
# are skipped if %* is not used.
# List of all the commands making the program resume execution.
# Print a traceback starting at the top stack frame.
# The most recently entered frame is printed last;
# this is different from dbx and gdb, but consistent with
# the Python interpreter's stack trace.
# It is also consistent with the up/down commands (which are
# compatible with dbx and gdb: up moves towards 'main()'
# and down moves towards the most recent stack frame).
# Provide help
# other helper functions
# A module was passed in, so convert it to the equivalent file
# When bdb sets tracing, a number of call and line events happen
# BEFORE debugger even reaches user's code (and the exact sequence of
# events depends on python version). Take special measures to
# avoid stopping before reaching the main script (see user_line and
# user_call for details).
# The target has to run in __main__ namespace (or imports from
# __main__ will break). Clear __main__ and replace with
# the target namespace.
# Clear the mtime table for program reruns; assume all the files
# are up to date.
# GH-103319
# inspect.getsourcelines() returns lineno = 0 for
# module-level frame which breaks our code print line number
# This method should be replaced by inspect.getsourcelines(obj)
# once this bug is fixed in inspect
# Yes it's a bit hacky. Get the caller name, get the method based on
# that name, and get the docstring from that method.
# This should NOT fail if the caller is a method of this class.
# Collect all command help into docstring, if not run with -OO
# unfortunately we can't guess this order from the class definition
# Simplified interface
# B/W compatibility
# Post-Mortem interface
# handling the default
# Main program for testing
# print help
# We need to manually get the script from args, because the first positional
# arguments could be either the script we need to debug, or the argument
# to the -m module
# If no arguments were given (python -m pdb), print the whole help message.
# Without this check, argparse would only complain about missing required arguments.
# If a module is being debugged, we consider the arguments after "-m module" to
# be potential arguments to the module itself. We need to parse the arguments
# before "-m" to check if there is any invalid argument.
# e.g. "python -m pdb -m foo --spam" means passing "--spam" to "foo"
# This will raise an error if there are invalid arguments
# If a script is being debugged, then pdb expects the script name as the first argument.
# Anything before the script is considered an argument to pdb itself, which would
# be invalid because it's not parsed by argparse.
# Hide "pdb.py" and pdb options from argument list
# Note on saving/restoring sys.argv: it's a good idea when sys.argv was
# modified by the script being debugged. It's a bad idea when it was
# changed by the user from the command line. There is a "restart" command
# which allows explicit specification of command line arguments.
# In most cases SystemExit does not warrant a post-mortem session.
# When invoked as main program, invoke the debugger on a script
# ctypes is an optional module. If it's not present, we're limited in what
# we can tell about the system, but we don't want to prevent the module
# from working.
# ctypes is available. Load the ObjC library, and wrap the objc_getClass,
# sel_registerName methods
# Failed to load the objc library
# Determine if this is a simulator using the multiarch value
# We can't use ctypes; abort
# Most of the methods return ObjC objects
# All the methods used have no arguments.
# Equivalent of:
# UTF8String returns a const char*;
# Author of the BaseServer patch: Luke Kenneth Casson Leighton
# poll/select have the advantage of not requiring any extra file descriptor,
# unlike epoll/kqueue (also, they require a single syscall).
# XXX: Consider using another file descriptor or connecting to the
# socket to wake this up instead of polling. Polling reduces our
# responsiveness to a shutdown request and wastes cpu at all other
# times.
# bpo-35017: shutdown() called during select(), exit immediately.
# The distinction between handling, getting, processing and finishing a
# request is fairly arbitrary.  Remember:
# - handle_request() is the top-level call.  It calls selector.select(),
# - get_request() is different for stream or datagram sockets
# - process_request() is the place that may fork a new process or create a
# - finish_request() instantiates the request handler class; this
# Support people who used socket.settimeout() to escape
# handle_request before self.timeout was available.
# Wait until a request arrives or the timeout expires - the loop is
# necessary to accommodate early wakeups due to EINTR.
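The deadline loop described above can be sketched with the `selectors` module. Since PEP 475, Python itself retries `select()` after EINTR, but the recompute-the-remaining-time pattern is unchanged; `wait_readable` is an illustrative name, not a stdlib function.

```python
import selectors
import socket
import time

def wait_readable(sock, timeout):
    """Wait until sock is readable or timeout seconds elapse.

    select() may wake up early (historically due to EINTR), so the
    remaining time is recomputed and the wait retried until the
    deadline passes.
    """
    deadline = time.monotonic() + timeout
    with selectors.DefaultSelector() as sel:
        sel.register(sock, selectors.EVENT_READ)
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                return False               # timed out
            if sel.select(remaining):      # non-empty list: readable
                return True

a, b = socket.socketpair()
b.sendall(b"x")
assert wait_readable(a, 1.0) is True       # data pending: returns quickly
c, d = socket.socketpair()
assert wait_readable(c, 0.05) is False     # nothing arrives: times out
for s in (a, b, c, d):
    s.close()
```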
# Since Linux 6.12.9, SO_REUSEPORT is not allowed
# on other address families than AF_INET/AF_INET6.
# explicitly shutdown.  socket.close() merely releases
# the socket and waits for GC to perform the actual close.
# some platforms may raise ENOTCONN here
# No need to call listen() for UDP.
# No need to shutdown anything.
# No need to close anything.
# If true, server_close() waits until all child processes complete.
# If we're above the max number of children, wait and reap them until
# we go back below threshold. Note that we use waitpid(-1) below to be
# able to collect children in size(<defunct children>) syscalls instead
# of size(<children>): the downside is that this might reap children
# which we didn't spawn, which is why we only resort to this when we're
# above max_children.
# we don't have any children, we're done
# Now reap all defunct children.
# if the child hasn't exited yet, pid will be 0 and ignored by
# discard() below
# someone else reaped it
# Parent process
# Child process.
# This must never return, hence os._exit()!
# Decides how threads will act upon termination of the
# main process
# If true, server_close() waits until all non-daemonic threads terminate.
# Threads object
# used by server_close() to wait for all threads to complete.
# The following two classes make it possible to use the same service
# class for stream or datagram servers.
# Each class sets up these instance variables:
# - rfile: a file object from which the request is read
# - wfile: a file object to which the reply is written
# When the handle() method returns, wfile is flushed properly
# Default buffer sizes for rfile, wfile.
# We default rfile to buffered because otherwise it could be
# really slow for large data (a getc() call per byte); we make
# wfile unbuffered because (a) often after a write() we want to
# read and we need to flush the line; (b) big writes to unbuffered
# files are typically optimized by stdio even when big reads
# aren't.
# A timeout to apply to the request socket, if not None.
# Disable nagle algorithm for this socket, if True.
# Use only when wbufsize != 0, to avoid small packets.
# A final socket error may have occurred here, such as
# the local error ECONNABORTED.
# References:
# http://en.wikipedia.org/wiki/YIQ
# http://en.wikipedia.org/wiki/HLS_color_space
# http://en.wikipedia.org/wiki/HSV_color_space
# Some floating-point constants
# YIQ: used by composite video signals (linear combinations of RGB)
# Y: perceived grey level (0.0 == black, 1.0 == white)
# I, Q: color components
# There are a great many versions of the constants used in these formulae.
# The ones in this library use constants from the FCC version of NTSC.
# r = y + (0.27*q + 0.41*i) / (0.74*0.41 + 0.27*0.48)
# b = y + (0.74*q - 0.48*i) / (0.74*0.41 + 0.27*0.48)
# g = y - (0.30*(r-y) + 0.11*(b-y)) / 0.59
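The Y/I/Q relationships above are exposed through the public `colorsys` API; a quick round-trip sketch (the tolerance is chosen arbitrarily):

```python
import colorsys

# Convert an in-gamut RGB triple to YIQ and back. Y is the perceived
# grey level; I and Q carry the color information. colorsys uses the
# FCC NTSC constants mentioned above.
expected = (0.5, 0.25, 0.75)
result = colorsys.yiq_to_rgb(*colorsys.rgb_to_yiq(*expected))

# The round trip is exact up to floating-point error for in-gamut colors.
assert max(abs(got - want) for got, want in zip(result, expected)) < 1e-6
```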
# HLS: Hue, Luminance, Saturation
# H: position in the spectrum
# L: color lightness
# S: color saturation
# Not always 2.0-sumc: gh-106498.
# HSV: Hue, Saturation, Value
# S: color saturation ("purity")
# V: color brightness
# XXX assume int() truncates!
# Cannot get here
# Author: Steven J. Bethard <steven.bethard@gmail.com>.
# New maintainer as of 29 August 2019:  Raymond Hettinger <raymond.hettinger@gmail.com>
# =============================
# Utility functions and classes
# The copy module is used only in the 'append' and 'append_const'
# actions, and it is needed only when the default value isn't a list.
# Delay its import for speeding up the common case.
# ===============
# Formatting Help
# default setting for width
# ===============================
# Section and indentation methods
# format the indented section
# return nothing if the section was empty
# add the heading if the section was non-empty
# join the section-initial newline, the heading and the help
# ========================
# Message building methods
# find all invocations
# update the maximum item length
# add the item to the list
# =======================
# Help-formatting methods
# if usage is specified, use that
# if no optionals or positionals are available, usage is just prog
# if optionals and positionals are available, calculate usage
# split optionals from positionals
# build full usage string
# wrap the usage parts if it's too long
# break usage into wrappable parts
# helper for wrapping lines
# if prog is short, follow it with optionals or positionals
# if prog is long, put it on its own line
# join lines into usage
# prefix with 'usage:'
# find group indices and identify actions in groups
# collect all actions format strings
# suppressed arguments are marked with None
# produce all arg strings
# if it's in a group, strip the outer []
# produce the first way to invoke the option in brackets
# if the Optional doesn't take a value, format is:
# if the Optional takes a value, format is:
# make it look optional if it's not required or in a group
# add the action string to the list
# group mutually exclusive actions
# insert a separator if not already done in a nested group
# return the usage parts
# determine the required width and the entry label
# no help; start on same line and add a final newline
# short action name; start on the same line and pad two spaces
# long action name; start on the next line
# collect the pieces of the action help
# if there was help for the action, add lines of help text
# or add a newline if the description doesn't end with one
# if there are any sub-actions, add their help as well
# return a single string
# The textwrap module is used only for formatting help.
# Delay its import for speeding up the common usage of argparse.
# =====================
# Options and Arguments
# ==============
# Action classes
# FIXME: remove together with `BooleanOptionalAction` deprecated arguments.
# We need the `_deprecated` special value to ban explicit arguments that
# match the default value. Like:
# set prog from the existing prefix
# create a pseudo-action to hold the choice help
# create the parser and add it to the map
# make parser available under aliases also
# set the parser name if requested
# select the parser
# parse all the remaining options into the namespace
# store any unrecognized options on the object, so that the top
# level parser can decide what to do with them
# In case this subparser defines new defaults, we parse them
# in a new namespace object and then update the original
# namespace for the relevant parts.
# Type classes
# the special argument "-" means sys.std{in,out}
# all other arguments are used as file names
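The '-' convention above is the documented behavior of `argparse.FileType`; a sketch of the readable case:

```python
import argparse
import sys

parser = argparse.ArgumentParser()
# '-' is translated to sys.stdin for readable modes (and sys.stdout for
# writable ones); every other argument is opened as a file name.
parser.add_argument('infile', type=argparse.FileType('r'))

ns = parser.parse_args(['-'])
assert ns.infile is sys.stdin
```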
# ===========================
# Optional and Positional Parsing
# set up registries
# register actions
# raise an exception if the conflict handler is invalid
# action storage
# groups
# defaults storage
# determines whether an "option" looks like a negative number
# whether or not there are any optionals that look like negative
# numbers -- uses a list so it can be shared and edited
# ====================
# Registration methods
# ==================================
# Namespace default accessor methods
# if these defaults match any existing arguments, replace
# the previous default on the object with the new one
# Adding argument actions
# if no positional args are supplied or only one is supplied and
# it doesn't look like an option string, parse a positional
# argument
# otherwise, we're adding an optional argument
# if no default was supplied, use the parser-level default
# create the action object, and add it to the parser
# raise an error if the action type is not callable
# raise an error if the metavar does not match the type
# resolve any conflicts
# add to actions list
# index the action by any option strings it has
# set the flag if any option strings look like negative numbers
# return the created action
# collect groups by titles
# This branch could happen if a derived class added
# groups with duplicated titles in __init__
# map each action to its group
# if a group with the title exists, use that, otherwise
# create a new group matching the container's group
# map the actions to their new group
# add container's mutually exclusive groups
# NOTE: if add_mutually_exclusive_group ever gains title= and
# description= then this code will need to be expanded as above
# map the actions to their new mutex group
# add all actions to this container or their group
# make sure required is not specified
# mark positional arguments as required if at least one is
# always required
# return the keyword arguments with no option strings
# determine short and long option strings
# error on strings that don't start with an appropriate prefix
# strings starting with two prefix characters are long options
# infer destination, '--foo-bar' -> 'foo_bar' and '-x' -> 'x'
# return the updated keyword arguments
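The destination inference mentioned above is observable through the public argparse API; a minimal sketch:

```python
import argparse

parser = argparse.ArgumentParser()
# dest is inferred from the first long option string, with '-' mapped
# to '_'; for a lone short option, the single letter is used.
parser.add_argument('--foo-bar')
parser.add_argument('-x')

ns = parser.parse_args(['--foo-bar', 'spam', '-x', 'eggs'])
assert ns.foo_bar == 'spam'
assert ns.x == 'eggs'
```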
# determine function from conflict handler string
# find all options that conflict with this option
# remove all conflicting options
# remove the conflicting option
# if the option now has no option string, remove it from the
# container holding it
# add any missing keyword arguments by checking the container
# group attributes
# share most attributes with the container
# default setting for prog
# register types
# add help argument if necessary
# (using explicit default to override global argument_default)
# add parent arguments and defaults
# Pretty __repr__ methods
# Optional/Positional adding methods
# add the parser class to the arguments if it's not present
# prog defaults to the usage message of this parser, skipping
# optional arguments and with no "usage:" prefix
# create the parsers action and add it to the positionals list
# return the created parsers action
# =====================================
# Command line argument parsing methods
# args default to the system args
# make sure that args are mutable
# default Namespace built from parser defaults
# add any action defaults that aren't present
# add any parser defaults that aren't present
# parse the arguments and exit if there are any errors
# replace arg strings that are file references
# map all mutually exclusive arguments to the other arguments
# they can't occur with
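The mutual-exclusion bookkeeping described above is visible from the user side; a small sketch using the public argparse API:

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('--fast', action='store_true')
group.add_argument('--slow', action='store_true')

# Using one option of the group is fine...
ns = parser.parse_args(['--fast'])
assert ns.fast and not ns.slow

# ...but combining both is rejected (argparse calls parser.error(),
# which raises SystemExit).
try:
    parser.parse_args(['--fast', '--slow'])
except SystemExit:
    pass
else:
    raise AssertionError('expected SystemExit')
```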
# find all option indices, and determine the arg_string_pattern
# which has an 'O' if there is an option at an index,
# an 'A' if there is an argument, or a '-' if there is a '--'
# all args after -- are non-options
# otherwise, add the arg to the arg strings
# and note the index if it was an option
# join the pieces together to form the pattern
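A simplified sketch of the pattern construction described above. Real argparse also special-cases arguments that look like negative numbers; `build_pattern` is an illustrative name.

```python
def build_pattern(arg_strings, prefix_chars='-'):
    """Mark each argument: 'O' for an option, 'A' for a plain argument,
    and '-' for the first bare '--'; everything after '--' is 'A'."""
    pattern = []
    seen_double_dash = False
    for s in arg_strings:
        if s == '--' and not seen_double_dash:
            seen_double_dash = True
            pattern.append('-')
        elif not seen_double_dash and len(s) > 1 and s[0] in prefix_chars:
            pattern.append('O')
        else:
            pattern.append('A')
    return ''.join(pattern)

assert build_pattern(['-x', '1', '--', '-y']) == 'OA-A'
assert build_pattern(['spam', 'eggs']) == 'AA'
```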
# converts arg strings to the appropriate type and then takes the action
# error if this argument is not allowed with other previously
# seen arguments
# take the action if we didn't receive a SUPPRESS value
# (e.g. from a default)
# function to convert arg_strings into an optional action
# get the optional identified at this index
# if multiple actions match, the option string was ambiguous
# identify additional optionals in the same arg string
# (e.g. -xyz is the same as -x -y -z if no args are required)
# if we found no optional action, skip it
# if there is an explicit argument, try to match the
# optional's string arguments to only this
# if the action is a single-dash option and takes no
# arguments, try to parse more single-dash options out
# of the tail of the option string
# if the action expects exactly one argument, we've
# successfully matched the option; exit the loop
# error if a double-dash option did not use the
# explicit argument
# if there is no explicit argument, try to match the
# optional's string arguments with the following strings
# if successful, exit the loop
# add the Optional to the list and return the index at which
# the Optional's string args stopped
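The single-dash concatenation the comments describe ('-xyz' parsed as '-x -y -z') is visible from the outside; a small sketch:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-x', action='store_true')
parser.add_argument('-y', action='store_true')
parser.add_argument('-z', action='store_true')

# Because none of these options takes an argument, a single-dash group
# is exploded into the individual flags:
ns = parser.parse_args(['-xyz'])
assert ns.x and ns.y and ns.z
```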
# the list of Positionals left to be parsed; this is modified
# by consume_positionals()
# function to convert arg_strings into positional actions
# match as many Positionals as possible
# slice off the appropriate arg strings for each Positional
# and add the Positional and its args to the list
# Strip out the first '--' if it is not in REMAINDER arg.
# slice off the Positionals that we just parsed and return the
# index at which the Positionals' string args stopped
# consume Positionals and Optionals alternately, until we have
# passed the last option string
# consume any Positionals preceding the next option
# only try to parse the next optional if we didn't consume
# the option string during the positionals parsing
# if we consumed all the positionals we could and we're not
# at the index of an option string, there were extra arguments
# consume the next optional and any arguments for it
# consume any positionals following the last Optional
# if we didn't consume all the argument strings, there were extras
# consume all positionals
# leave unknown optionals and non-consumed positionals in extras
# make sure all required actions were present and also convert
# action defaults which were not given as arguments
# Convert action default now instead of doing it before
# parsing arguments to avoid calling convert functions
# twice (which may fail) if the argument was given, but
# only if it was defined already in the namespace
# make sure all required groups had one option present
# if no actions were used, report the error
# return the updated namespace and the extra arguments
# expand arguments referencing files
# for regular arguments, just add them back into the list
# replace arguments referencing files with the file content
# return the modified argument list
# match the pattern for this action to the arg strings
# raise an exception if we weren't able to find a match
# return the number of arguments matched
# progressively shorten the actions list by slicing off the
# final actions until we find a match
# if it's an empty string, it was meant to be a positional
# if it doesn't start with a prefix, it was meant to be positional
# if the option string is present in the parser, return the action
# if it's just a single character, it was meant to be positional
# if the option string before the "=" is present, return the action
# search through all possible prefixes of the option string
# and all actions in the parser for possible interpretations
# if it was not found as an option, but it looks like a negative
# number, it was meant to be positional
# unless there are negative-number-like options
# if it contains a space, it was meant to be a positional
# it was meant to be an optional but there is no such option
# in this parser (though it might be a valid option in a subparser)
# option strings starting with two prefix characters are only
# split at the '='
# single character options can be concatenated with their arguments
# but multiple character options always have to have their argument
# separate
# shouldn't ever get here
# return the collected option tuples
# in all examples below, we have to allow for '--' args
# which are represented as '-' in the pattern
# if this is an optional action, -- is not allowed
# the default (None) is assumed to be a single argument
# allow zero or one arguments
# allow zero or more arguments
# allow one or more arguments
# allow any number of options or arguments
# allow one argument followed by any number of options or arguments
# suppress action, like nargs=0
# all others should be integers
# return the pattern
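The nargs variants enumerated above behave as follows from the caller's side; a compact sketch of two of them:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--opt', nargs='?', const='C', default='D')  # zero or one
parser.add_argument('rest', nargs='*')                           # zero or more

ns = parser.parse_args(['--opt', 'a', 'b'])
assert ns.opt == 'a' and ns.rest == ['b']       # one value, rest positional
assert parser.parse_args(['--opt']).opt == 'C'  # flag with no value: const
assert parser.parse_args([]).opt == 'D'         # flag absent: default
```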
# Alt command line argument parsing, allowing free intermix
# returns a namespace and list of extras
# positionals can be freely intermixed with optionals.  optionals are
# first parsed with all positional arguments deactivated.  The 'extras'
# are then parsed.  If the parser definition is incompatible with the
# intermixed assumptions (e.g. use of REMAINDER, subparsers) a
# TypeError is raised.
# Value conversion methods
# optional argument produces a default when not present
# when nargs='*' on a positional, if there were no command-line
# args, use the default if it is anything other than None
# since arg_strings is always [] at this point
# there is no need to use self._check_value(action, value)
# single argument or optional argument produces a single value
# REMAINDER arguments convert all values, checking none
# PARSER arguments convert all values, but check only the first
# SUPPRESS argument does not put anything in the namespace
# all other types of nargs produce a list
# return the converted value
# convert the value to the appropriate type
# ArgumentTypeErrors indicate errors
# TypeErrors or ValueErrors also indicate errors
# converted value must be one of the choices (if specified)
# usage
# description
# positionals, optionals and user-defined groups
# epilog
# determine help from format above
# Help-printing methods
# Exiting methods
# This file was generated from:
# computed later
# The help for each option consists of two parts:
# If possible, we write both of these on the same line:
# But if the opt string list is too long, we put the help
# string on a second line, indented to the same column it would
# start in if it fit on the first line.
# start help on same line as opts
# hexadecimal
# binary
# have to remove "0b" prefix
# octal
# decimal
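The radix-detection comments above correspond to a helper along the lines of optparse's internal number parser; a sketch with an illustrative name (`parse_num`):

```python
def parse_num(val):
    """Parse an integer literal, detecting the radix from its prefix."""
    lowered = val.lower()
    if lowered.startswith("0x"):        # hexadecimal
        return int(val, 16)
    elif lowered.startswith("0b"):      # binary
        return int(val[2:] or "0", 2)   # have to remove "0b" prefix
    elif val.startswith("0"):           # octal (leading-zero convention)
        return int(val, 8)
    else:                               # decimal
        return int(val)

assert parse_num("0x1f") == 31
assert parse_num("0b101") == 5
assert parse_num("010") == 8
assert parse_num("42") == 42
```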
# Not supplying a default is different from a default of None,
# so we need an explicit "not supplied" value.
# The list of instance attributes that may be set through
# keyword args to the constructor.
# The set of actions allowed by option parsers.  Explicitly listed
# here so the constructor can validate its arguments.
# The set of actions that involve storing a value somewhere;
# also listed just for constructor argument validation.  (If
# the action is one of these, there must be a destination.)
# The set of actions for which it makes sense to supply a value
# type, ie. which may consume an argument from the command line.
# The set of actions which *require* a value type, ie. that
# always consume an argument from the command line.
# The set of actions which take a 'const' attribute.
# The set of known types for option parsers.  Again, listed here for
# constructor argument validation.
# Dictionary of argument checking functions, which convert and
# validate option arguments according to the option type.
# Signature of checking functions is:
# where
# The return value should be in the appropriate Python type
# for option.type -- eg. an integer if option.type == "int".
# If no checker is defined for a type, arguments will be
# unchecked and remain strings.
# CHECK_METHODS is a list of unbound method objects; they are called
# by the constructor, in order, after all attributes are
# initialized.  The list is created and filled in later, after all
# the methods are actually defined.  (I just put it here because I
# like to define and document all class attributes in the same
# place.)  Subclasses that add another _check_*() method should
# define their own CHECK_METHODS list that adds their check method
# to those from this class.
# -- Constructor/initialization methods ----------------------------
# Set _short_opts, _long_opts attrs from 'opts' tuple.
# Have to be set now, in case no option strings are supplied.
# Set all other attrs (action, type, etc.) from 'attrs' dict
# Check all the attributes we just set.  There are lots of
# complicated interdependencies, but luckily they can be farmed
# out to the _check_*() methods listed in CHECK_METHODS -- which
# could be handy for subclasses!  The one thing these all share
# is that they raise OptionError if they discover a problem.
# Filter out None because early versions of Optik had exactly
# one short option and one long option, either of which
# could be None.
# -- Constructor validation methods --------------------------------
# The "choices" attribute implies "choice" type.
# No type given?  "string" is the most sensible default.
# Allow type objects or builtin type conversion functions
# (int, str, etc.) as an alternative to their names.
# No destination given, and we need one for this action.  The
# self.type check is for callbacks that take a value.
# Glean a destination from the first long option string,
# or from the first short option string if no long options.
# eg. "--foo-bar" -> "foo_bar"
# -- Miscellaneous methods -----------------------------------------
# -- Processing methods --------------------------------------------
# First, convert the value(s) to the right type.  Howl if any
# value(s) are bogus.
# And then take whatever action is expected of us.
# This is a separate method to make life easier for
# subclasses to add new actions.
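The convert-then-act flow described above can be exercised end to end (the option names are illustrative):

```python
import optparse

parser = optparse.OptionParser()
# type="int" makes the checker convert the argument before the
# "store" action records it on the destination attribute.
parser.add_option("-n", "--num", type="int", dest="num", action="store")

opts, args = parser.parse_args(["-n", "42", "rest"])
print(opts.num, args)  # 42 ['rest']
```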
# class Option
# Initialize the option list and related data structures.
# This method must be provided by subclasses, and it must
# initialize at least the following instance attributes:
# option_list, _short_opt, _long_opt, defaults.
# For use by OptionParser constructor -- create the main
# option mappings used by this OptionParser and all
# OptionGroups that it owns.
# single letter -> Option instance
# long option -> Option instance
# maps option dest -> default value
# For use by OptionGroup constructor -- use shared option
# mappings from the OptionParser that owns this OptionGroup.
# -- Option-adding methods -----------------------------------------
# option has a dest, we need a default
# -- Option query/removal methods ----------------------------------
# -- Help-formatting methods ---------------------------------------
# Populate the option list; initial sources are the
# standard_option_list class attribute, the 'option_list'
# argument, and (if applicable) the _add_version_option() and
# _add_help_option() methods.
# -- Private methods -----------------------------------------------
# (used by our or OptionContainer's constructor)
# These are set in parse_args() for the convenience of callbacks.
# -- Simple modifier methods ---------------------------------------
# For backwards compatibility with Optik 1.3 and earlier.
# Old, pre-Optik 1.5 behaviour.
# -- OptionGroup methods -------------------------------------------
# XXX lots of overlap with OptionContainer.add_option()
# -- Option-parsing methods ----------------------------------------
# don't modify caller's list
# Store the halves of the argument list as attributes for the
# convenience of callbacks:
# We handle bare "--" explicitly, and bare "-" is handled by the
# standard arg handler since the short arg case ensures that the
# len of the opt string is greater than 1.
# process a single long option (possibly with value(s))
# process a cluster of short options (possibly with
# value(s) for the last one only)
# stop now, leave this arg in rargs
# Say this is the original argument list:
# [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)]
# (we are about to process arg(i)).
# Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of
# [arg0, ..., arg(i-1)] (any options and their arguments will have
# been removed from largs).
# The while loop will usually consume 1 or more arguments per pass.
# If it consumes 1 (eg. arg is an option that takes no arguments),
# then after _process_arg() is done the situation is:
# If allow_interspersed_args is false, largs will always be
# *empty* -- still a subset of [arg0, ..., arg(i-1)], but
# not a very interesting subset!
# Value explicitly attached to arg?  Pretend it's the next
# argument.
# we have consumed a character
# Any characters left in arg?  Pretend they're the
# next arg, and stop consuming characters of arg.
# option doesn't take a value
# -- Feedback methods ----------------------------------------------
# Drop the last "\n", or the header if no options or option groups:
# class OptionParser
# Is there an exact match?
# Isolate all words with s as a prefix.
# No exact match, so there had better be just one possibility.
# More than one possible completion: ambiguous prefix.
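The exact-match/unique-prefix logic can be sketched as a standalone helper (a hypothetical `match_abbrev`, modeled loosely on optparse's internal `_match_abbrev`):

```python
def match_abbrev(s, words):
    """Return the unique word that s abbreviates, or raise ValueError."""
    if s in words:          # exact match wins immediately
        return s
    # Isolate all words with s as a prefix.
    candidates = [w for w in words if w.startswith(s)]
    if len(candidates) == 1:
        return candidates[0]
    # Zero candidates: unknown; more than one: ambiguous prefix.
    raise ValueError(f"ambiguous or unknown abbreviation: {s!r}")

print(match_abbrev("--verb", ["--verbose", "--quiet"]))  # --verbose
```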
# Some day, there might be many Option classes.  As of Optik 1.3, the
# preferred way to instantiate Options is indirectly, via make_option(),
# which will become a factory function when there are many Option
# classes.
# This file is generated by Tools/cases_generator/py_metadata_generator.py
# from:
# Do not edit!
# limit the maximum size of the cache
# Directory comparison class.
# Initialize
# Names never to be shown
# Compare everything except common subdirectories
# Compute common names
# Distinguish files, directories, funnies
# See https://github.com/python/cpython/issues/122400
# for the rationale for protecting against ValueError.
# print('Can\'t stat', a_path, ':', why.args[1])
# print('Can\'t stat', b_path, ':', why.args[1])
# Find out differences between common files
# Find out differences between common subdirectories
# A new dircmp (or MyDirCmp if dircmp was subclassed) object is created
# for each common subdirectory;
# these are stored in a dictionary indexed by filename.
# The hide and ignore properties are inherited from the parent
# Recursively call phase4() on subdirectories
# Print a report on the differences between a and b
# Output format is purposely lousy
# Print reports on self and on subdirs
# Report on self and subdirs recursively
# Compare two files.
# Return:
# Return a copy with items that occur in skip removed.
# Demonstration and testing.
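A minimal demonstration of the file comparison described above, using throwaway temporary files (the filenames are illustrative):

```python
import filecmp
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "a.txt")
    b = os.path.join(d, "b.txt")
    for p in (a, b):
        with open(p, "w") as f:
            f.write("same contents\n")
    # shallow=False forces a byte-by-byte comparison instead of
    # comparing only the os.stat() signatures.
    print(filecmp.cmp(a, b, shallow=False))  # True
```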
# Module 'ntpath' -- common operations on WinNT/Win95 pathnames
# strings representing various path-related bits and pieces
# These are primarily for export; internally, they are hardcoded.
# Should be set before imports for resolving cyclic dependency.
# Normalize the case of a pathname and map slashes to backslashes.
# Other normalizations (such as optimizing '../' away) are not done
# (this is done by normpath).
# Absolute: UNC, device, and paths with a drive and root.
# Join two (or more) paths.
# Second path is absolute
# Different drives => ignore the first path entirely
# Same drive in different case
# Second path is relative to the first
## add separator between UNC and non-absolute path
# Split a path in a drive specification (a drive letter followed by a
# colon) and the path specification.
# It is always true that drivespec + pathspec == p
# UNC drives, e.g. \\server\share or \\?\UNC\server\share
# Device drives, e.g. \\.\device or \\?\device
# Relative path with root, e.g. \Windows
# Absolute drive-letter path, e.g. X:\Windows
# Relative path with drive, e.g. X:Windows
# Relative path, e.g. Windows
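The cases above can be checked against `ntpath.splitdrive` directly (the example paths are illustrative):

```python
import ntpath

# Drive-letter path: drive is "C:", the rest is the path spec.
print(ntpath.splitdrive("C:\\Windows\\notepad.exe"))
# ('C:', '\\Windows\\notepad.exe')

# UNC path: the drive is \\server\share.
print(ntpath.splitdrive("\\\\server\\share\\dir"))
# ('\\\\server\\share', '\\dir')

# The invariant drivespec + pathspec == p always holds.
d, t = ntpath.splitdrive("X:Windows")
assert d + t == "X:Windows"
```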
# Split a path in head (everything up to the last '/') and tail (the
# rest).  After the trailing '/' is stripped, the invariant
# join(head, tail) == p holds.
# The resulting head won't end in '/' unless it is the root.
# set i to index beyond p's last slash
# now tail has no slashes
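The join(head, tail) invariant can be checked with `ntpath.split`:

```python
import ntpath

head, tail = ntpath.split("C:\\dir\\sub\\file.txt")
print(head, tail)  # C:\dir\sub file.txt
assert ntpath.join(head, tail) == "C:\\dir\\sub\\file.txt"

# head keeps its trailing separator only when it is the root:
print(ntpath.split("C:\\file"))  # ('C:\\', 'file')
```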
# Return the tail (basename) part of a path.
# Return the head (dirname) part of a path.
# Is a path a mount point?
# Any drive letter root (eg c:\)
# Any share UNC (eg \\server\share)
# Any volume mounted on a filesystem folder
# No one method detects all three situations. Historically we've lexically
# detected drive letter roots and share UNCs. The canonical approach to
# detecting mounted volumes (querying the reparse tag) fails for the most
# common case: drive letter roots. The alternative which uses GetVolumePathName
# fails if the drive letter is the result of a SUBST.
# Refer to "Naming Files, Paths, and Namespaces":
# https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file
# Trailing dots and spaces are reserved.
# Wildcards, separators, colon, and pipe (*?"<>/\:|) are reserved.
# ASCII control characters (0-31) are reserved.
# Colon is reserved for file streams (e.g. "name:stream[:type]").
# DOS device names are reserved (e.g. "nul" or "nul .txt"). The rules
# are complex and vary across Windows versions. To err on the side of
# caution, return True for names that may not be reserved.
# Expand paths beginning with '~' or '~user'.
# '~' means $HOME; '~user' means that user's home directory.
# If the path doesn't begin with '~', or if the user or $HOME is unknown,
# the path is returned unchanged (leaving error reporting to whatever
# function is called with the expanded path as argument).
# See also module 'glob' for expansion of *, ? and [...] in pathnames.
# (A function should also be defined to do full *sh-style environment
# variable expansion.)
# ~user
# Try to guess user home directory.  By default all user
# profile directories are located in the same place and are
# named by corresponding usernames.  If userhome isn't a
# normal profile directory, this guess is likely wrong,
# so we bail out.
# Expand paths containing shell variable substitutions.
# The following rules apply:
# XXX With COMMAND.COM you can use any characters in a variable name,
# XXX except '^|<>='.
# no expansion within single quotes
# variable or '%'
# variable or '$$'
# Normalize a path, e.g. A//B, A/./B and A/foo/../B all become A\B.
# Previously, this function also truncated pathnames to 8+3 format,
# but as this module is called "ntpath", that's obviously wrong!
# If the path is now empty, substitute '.'
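The normalizations above can be seen directly:

```python
import ntpath

print(ntpath.normpath("A//B"))        # A\B
print(ntpath.normpath("A/./B"))       # A\B
print(ntpath.normpath("A/foo/../B"))  # A\B
print(ntpath.normpath(""))            # .  (empty path becomes '.')
```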
# Return an absolute path.
# not running on Windows - mock up something sensible
# use native Windows method on Windows
# See gh-75230, handle outside for cleaner traceback
# Either drive or root can be nonempty, but not both.
# Drive "\0:" cannot exist; use the root directory.
# realpath is a no-op on systems without _getfinalpathname support.
# These error codes indicate that we should stop reading links and
# return the path we currently have.
# 1: ERROR_INVALID_FUNCTION
# 2: ERROR_FILE_NOT_FOUND
# 3: ERROR_DIRECTORY_NOT_FOUND
# 5: ERROR_ACCESS_DENIED
# 21: ERROR_NOT_READY (implies drive with no media)
# 32: ERROR_SHARING_VIOLATION (probably an NTFS paging file)
# 50: ERROR_NOT_SUPPORTED (implies no support for reparse points)
# 67: ERROR_BAD_NET_NAME (implies remote server unavailable)
# 87: ERROR_INVALID_PARAMETER
# 4390: ERROR_NOT_A_REPARSE_POINT
# 4392: ERROR_INVALID_REPARSE_DATA
# 4393: ERROR_REPARSE_TAG_INVALID
# Links may be relative, so resolve them against their
# own location
# If it's something other than a symlink, we don't know
# what it's actually going to be resolved against, so
# just return the old path.
# Stop on reparse points that are not symlinks
# These error codes indicate that we should stop resolving the path
# and return the value we currently have.
# 50: ERROR_NOT_SUPPORTED
# 53: ERROR_BAD_NETPATH
# 65: ERROR_NETWORK_ACCESS_DENIED
# 123: ERROR_INVALID_NAME
# 161: ERROR_BAD_PATHNAME
# 1005: ERROR_UNRECOGNIZED_VOLUME
# 1920: ERROR_CANT_ACCESS_FILE
# 1921: ERROR_CANT_RESOLVE_FILENAME (implies unfollowable symlink)
# Non-strict algorithm is to find as much of the target directory
# as we can and join the rest.
# The OS could not resolve this path fully, so we attempt
# to follow the link ourselves. If we succeed, join the tail
# and return.
# If we fail to readlink(), let's keep traversing
# If we get these errors, try to get the real name of the file without accessing it.
# bpo-38081: Special case for realpath(b'nul')
# bpo-38081: Special case for realpath('nul')
# gh-106242: Raised for embedded null characters
# In strict modes, we convert into an OSError.
# Non-strict mode returns the path as-is, since we've already
# made it absolute.
# The path returned by _getfinalpathname will always start with \\?\ -
# strip off that prefix unless it was already provided on the original
# path.
# For UNC paths, the prefix will actually be \\?\UNC\
# Handle that case as well.
# Ensure that the non-prefixed path resolves to the same path
# Unexpected, as an invalid path should not have gained a prefix
# at any point, but we ignore this error just in case.
# If the path does not exist and originally did not exist, then
# strip the prefix anyway.
# All supported version have Unicode filename support.
# Work out how much of the filepath is shared by start and path.
# Return the longest common sub-path of the iterable of paths given as input.
# The function is case-insensitive and 'separator-insensitive', i.e. if the
# only difference between two paths is the use of '\' versus '/' as separator,
# they are deemed to be equal.
# However, the returned path will have the standard '\' separator (even if the
# given paths had the alternative '/' separator) and will have the case of the
# first path given in the iterable. Additionally, any trailing separator is
# stripped from the returned path.
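The case- and separator-insensitive behaviour described above can be observed with `ntpath.commonpath` (the paths are illustrative):

```python
import ntpath

# Result uses the '\' separator and the case of the first path given.
paths = ["C:\\Program Files\\App", "c:/program files/Other"]
print(ntpath.commonpath(paths))  # C:\Program Files
```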
# Check that all drive letters or UNC paths match. The check is made only
# now, otherwise type errors for mixing strings and bytes would not be
# caught.
# The isdir(), isfile(), islink(), exists() and lexists() implementations
# in genericpath use os.stat(). This is overkill on Windows. Use simpler
# builtin functions if they are available.
# Use genericpath.* as imported above
# Use genericpath.isdevdrive as imported above
# Please keep __all__ alphabetized within each category.
# Super-special typing primitives.
# ABCs (from collections.abc).
# collections.abc.Set.
# Structural checks, a.k.a. protocols.
# Concrete collection types.
# Not really a type.
# Other concrete types.
# One-off things.
# When changing this function, don't forget about
# `_collections_abc._type_repr`, which does the same thing
# and must be consistent with this one.
# Special case for `repr` of types with `ParamSpec`:
# required type parameter cannot appear after parameter with default
# or after TypeVarTuple
# We don't want __parameters__ descriptor of a bare Python class.
# `t` might be a tuple, when `ParamSpec` is substituted with
# `[T, int]`, or `[int, *Ts]`, etc.
# deal with defaults
# If the parameter at index `actual_len` in the parameters list
# has a default, then all parameters after it must also have
# one, because we validated as much in _collect_type_parameters().
# That means that no error needs to be raised here, despite
# the number of arguments being passed not matching the number
# of parameters: all parameters that aren't explicitly
# specialized in this call are parameters with default values.
# Weed out strict duplicates, preserving the first of each occurrence.
# Happens for cases like `Annotated[dict, {'x': IntValidator()}]`
# Flatten out Union[Union[...], ...].
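Both behaviours, duplicate removal and union flattening, are observable at runtime:

```python
from typing import Union

# Nested unions are flattened...
assert Union[Union[int, str], float] == Union[int, str, float]
# ...and strict duplicates are weeded out, keeping first occurrences.
assert Union[int, str, int] == Union[int, str]
```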
# The callback 'inner' references the newly created lru_cache
# indirectly by performing a lookup in the global '_caches' dictionary.
# This breaks a reference that can be problematic when combined with
# C API extensions that leak references to types. See GH-98253.
# All real errors (not unhashable args) are raised below.
# Internal indicator of special typing constructs.
# See __doc__ instance attribute for specific docs.
# respect to subclasses
# This is semantically identical to NoReturn, but it is implemented
# separately so that type checkers can distinguish between the two
# if they want.
# There is no '_type_check' call because arguments to Literal[...] are
# values, not types.
# unhashable parameters
# If we do `def f(*args: *Ts)`, then we'll have `arg = '*Ts'`.
# Unfortunately, this isn't a valid expression on its own, so we
# do the unpacking manually.
# E.g. (*Ts,)[0] or (*tuple[int, int],)[0]
# type parameters require some special handling,
# as they exist in their own scope
# but `eval()` does not have a dedicated parameter for that scope.
# For classes, names in type parameter scopes should override
# names in the global scope (which here are called `localns`!),
# but should in turn be overridden by names in the class scope
# (which here are called `globalns`!)
# Special case where Z[[int, str, bool]] == Z[int, str, bool] in PEP 612.
# Convert lists to tuples to help other libraries cache the results.
# Generic and Protocol can only be subscripted with unique type variables.
# Subscripting a regular Generic subclass.
# Look for Generic[T1, ..., Tn].
# If found, tvars must be a subset of it.
# If not found, tvars is it.
# Also check for and reject plain Generic,
# and reject multiple Generic[...].
# This is not documented.
# Some objects raise TypeError (or something even more exotic)
# if you try to set attributes on them; we guard against that here
# Check if any base that occurs after us in `bases` is either itself a
# subclass of Generic, or something which will add a subclass of Generic
# to `__bases__` via its `__mro_entries__`. If not, add Generic
# ourselves. The goal is to ensure that Generic (or a subclass) will
# appear exactly once in the final bases tuple. If we let it appear
# multiple times, we risk "can't form a consistent MRO" errors.
# We are careful for copy and pickle.
# Also for simplicity we don't relay any dunder names
# Special typing constructs Union, Optional, Generic, Callable and Tuple
# use three special attributes for internal bookkeeping of generic types:
# * __parameters__ is a tuple of unique free type parameters of a generic
# * __origin__ keeps a reference to a type that was subscripted,
# * __args__ is a tuple of all arguments used in subscripting,
# The type of parameterized generics.
# That is, for example, `type(List[int])` is `_GenericAlias`.
# Objects which are instances of this class include:
# * Parameterized container types, e.g. `Tuple[int]`, `List[int]`.
# * Parameterized classes:
# * `Callable` aliases, generic `Callable` aliases, and
# * Parameterized `Final`, `ClassVar`, `TypeGuard`, and `TypeIs`:
# Parameterizes an already-parameterized object.
# For example, we arrive here doing something like:
# We also arrive here when parameterizing a generic `Callable` alias:
# Can't subscript Generic[...] or Protocol[...].
# Preprocess `args`.
# Determines new __args__ for __getitem__.
# For example, suppose we had:
# `B.__args__` is `(int, T3)`, so `C.__args__` should be `(int, str)`.
# Unfortunately, this is harder than it looks, because if `T3` is
# anything more exotic than a plain `TypeVar`, we need to consider
# edge cases.
# In the example above, this would be {T3: str}
# Consider the following `Callable`.
# Here, `C.__args__` should be (int, str) - NOT ([int], str).
# That means that if we had something like...
# ...we need to be careful; `new_args` should end up as
# `(int, str, float)` rather than `([int, str], float)`.
# Consider the following `_GenericAlias`, `B`:
# If we then do:
# The `new_arg` corresponding to `T` will be `float`, and the
# `new_arg` corresponding to `*Ts` will be `(int, str)`. We
# should join all these types together in a flat list
# `(float, int, str)` - so again, we should `extend`.
# Corner case:
# Can be substituted like this:
# In this case, `old_arg` will be a tuple:
# To ensure the repr is eval-able.
# generic version of an ABC or built-in class
# _nparams is the number of accepted parameters, e.g. 0 for Hashable,
# 1 for List and 2 for Dict.  It may be -1 if variable number of
# parameters are accepted (needs custom __getitem__).
# This relaxes what args can be on purpose to allow things like
# PEP 612 ParamSpec.  Responsibility for whether a user is using
# Callable[...] properly is deferred to static type checkers.
# fast path
# not hashable, slow path
# `Unpack` only takes one argument, so __args__ should contain only
# a single item.
# These special attributes will be not collected as protocol members.
# without object
# Already using a custom `__init__`. No need to calculate correct
# `__init__` to call. This can lead to RecursionError. See bpo-45121.
# Initially, `__init__` of a protocol subclass is set to `_no_init_or_replace_init`.
# The first instantiation of the subclass will call `_no_init_or_replace_init` which
# searches for a proper new `__init__` in the MRO. The new `__init__`
# replaces the subclass' old `__init__` (ie `_no_init_or_replace_init`). Subsequent
# instantiation of the protocol subclass will thus use the new
# `__init__` and no longer call `_no_init_or_replace_init`.
# should not happen
# For platforms without _getframemodulename()
# For platforms without _getframe()
# Import getattr_static lazily so as not to slow down the import of typing.py
# Cache the result so we don't slow down _ProtocolMeta.__instancecheck__ unnecessarily
# Preload these once, as globals, as a micro-optimisation.
# This makes a significant difference to the time it takes
# to do `isinstance()`/`issubclass()` checks
# against runtime-checkable protocols with only one callable member.
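A minimal runtime-checkable protocol exercising these checks (the class names are illustrative):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsClose(Protocol):
    def close(self) -> None: ...

class Resource:
    def close(self) -> None:
        pass

print(isinstance(Resource(), SupportsClose))  # True
print(isinstance(42, SupportsClose))          # False
```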
# Same error message as for issubclass(1, int).
# This metaclass is somewhat unfortunate,
# but is necessary for several reasons...
# this attribute is set by @runtime_checkable:
# We need this method for situations where attributes are
# assigned in __init__.
# i.e., it's a concrete subclass of a protocol
# Check if the member appears in the class dictionary...
# ...or in annotations, if it is a sub-protocol.
# Determine if this is a protocol or a concrete subclass.
# Set (or override) the protocol subclass hook.
# Prohibit instantiation for protocol classes
# PEP 544 prohibits using issubclass()
# with protocols that have non-method members.
# See gh-113320 for why we compute this attribute here,
# rather than in `_ProtocolMeta.__init__`
# Classes require a special treatment.
# This is surprising, but required.  Before Python 3.10,
# get_type_hints only evaluated the globalns of
# a class.  To maintain backwards compatibility, we reverse
# the globalns and localns order so that eval() looks into
# *base_globals* first rather than *base_locals*.
# This only affects ForwardRefs.
# Find globalns for the unwrapped object.
# Return empty annotations for something that _could_ have them.
# class-level forward refs were handled above, this must be either
# a module-level annotation or a function argument annotation
# We only modify objects that are defined in this type directly.
# If classes / methods are nested in multiple layers,
# we will modify them when processing their direct holders.
# Instance, class, and static methods:
# Nested types:
# built-in classes
# {module: {qualname: {firstlineno: func}}}
# classmethod and staticmethod
# Not a normal function; ignore.
# Skip the attribute silently if it is not writable.
# AttributeError happens if the object has __slots__ or a
# read-only property, TypeError if it's a builtin class.
# Some unconstrained type variables.  These were initially used by the container types.
# They were never meant for export and are now unused, but we keep them around to
# avoid breaking compatibility with users who import them.
# Any type.
# Key type.
# Value type.
# Any type covariant containers.
# Value type covariant containers.
# Ditto contravariant.
# Internal type variable used for Type[].
# A useful type variable with constraints.  This represents string types.
# (This one *is* for export!)
# Various ABCs mimicking those in collections.abc.
# Not generic.
# NOTE: Mapping is only covariant in the value type.
# Tuple accepts variable number of parameters.
# attributes prohibited to set in NamedTuple class syntax
# update from user namespace without overriding special namedtuple attributes
# static method
# Typed dicts are only for static structural subtyping.
# Setting correct module is necessary to make typed dict classes pickleable.
# We defined __mro_entries__ to get a better error message
# if a user attempts to subclass a NewType instance. bpo-46170
# Python-version-specific alias (Python 2: unicode; Python 3: str)
# Constant that's True when type checking, but False here.
# fill opname and opmap
# Extract functions from methods.
# Extract compiled code objects from...
# ...a function, or
# ...a generator object, or
# ...an asynchronous generator object, or
# ...a coroutine.
# Perform the disassembly.
# Class or module
# Code object
# Raw bytecode
# Source code
# The inspect module interrogates this dictionary to build its
# list of CO_* constants. It is also used by pretty_flags to
# turn the co_flags field into a human readable list.
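For example, `inspect` re-exports these CO_* constants, so the bitmask can be checked directly:

```python
import inspect

def gen():
    yield 1

def plain():
    return 1

# co_flags is a bitmask of compiler flags on the code object.
print(bool(gen.__code__.co_flags & inspect.CO_GENERATOR))    # True
print(bool(plain.__code__.co_flags & inspect.CO_GENERATOR))  # False
```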
# Sentinel to represent values that cannot be calculated
# Handle source code.
# By now, if we don't have a code object, we can't disassemble x.
# Only show the fancy argrepr for a CACHE instruction when it's
# the first entry for a particular cache value:
# Column: Source code line number
# Column: Label
# Column: Instruction offset from start of code sequence
# Column: Current instruction indicator
# Column: Opcode name
# Column: Opcode argument
# If opname is longer than _OPNAME_WIDTH, we allow it to overflow into
# the space reserved for oparg. This results in fewer misaligned opargs
# in the disassembly output.
# Column: Opcode argument details
# Use the basic, unadaptive code for finding labels and actually walking the
# bytecode, since replacements like ENTER_EXECUTOR and INSTRUMENTED_* can
# mess that logic up pretty badly:
# Advance the co_positions iterator:
# Omit the line number column entirely if we have no line number info
# Each CACHE takes 2 bytes
# XXX For backwards compatibility
# Rely on C `int` being 32 bits for oparg
# Value for c int when it overflows
# Number of EXTENDED_ARG instructions preceding the current instruction
# Skip inline CACHE entries:
# The oparg is stored as a signed integer
# If the value exceeds its upper limit, it will overflow and wrap
# to a negative integer
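A simplified sketch of that accumulation and wraparound (a hypothetical helper; the real logic lives inside `dis`'s oparg-unpacking code):

```python
def combine_oparg(ext_args, arg):
    """Fold EXTENDED_ARG bytes into a final oparg, with 32-bit signed wrap."""
    oparg = 0
    for b in ext_args:          # each EXTENDED_ARG contributes 8 high bits
        oparg = (oparg << 8) | b
    oparg = (oparg << 8) | arg
    oparg &= 0xFFFFFFFF         # rely on C `int` being 32 bits
    if oparg >= 0x80000000:     # exceeding the upper limit wraps negative
        oparg -= 0x100000000
    return oparg

print(combine_oparg([0x01], 0x02))          # 258
print(combine_oparg([0xFF, 0xFF, 0xFF], 0xFF))  # -1
```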
# None is a valid line number
# None
# XXX 'arg' is no longer used
# First call of dispatch since reset()
# (CT) Note that this may also be None!
# No need to trace this function
# Ignore call events in generator except when stepping.
# Ignore return events in generator except when stepping.
# The user issued a 'next' or 'until' command.
# The previous frame might not have f_trace set, unless we are
# issuing a command that does not expect to stop, we should set
# f_trace
# When stepping with next/until/return in a generator frame, skip
# the internal StopIteration exception (with no traceback)
# triggered by a subiterator run with the 'yield from' statement.
# Stop at the StopIteration or GeneratorExit exception when the user
# has set stopframe in a generator by issuing a return command, or a
# next/until command at the last statement in the generator before the
# exception.
# Normally derived classes don't override the following
# methods, but they may if they want to redefine the
# definition of stopping and breakpoints.
# some modules do not have names
# (CT) stopframe may now also be None, see dispatch_call.
# (CT) the former test for None is therefore removed from here.
# The line itself has no breakpoint, but maybe the line is the
# first line of a function with breakpoint set by function name.
# flag says ok to delete temp. bp
# Derived classes should override the user_* methods
# to gain control.
# stoplineno >= 0 means: stop at line >= the stoplineno
# stoplineno -1 means: don't stop at all
# Issue #13183: pdb skips frames after hitting a breakpoint and running
# step commands.
# Restore the trace function in the caller (that may not have been set
# for performance reasons) when returning from the current frame, unless
# the caller is the botframe.
# Derived classes and clients can call the following methods
# to affect the stepping state.
# the name "until" is borrowed from gdb
# We need f_trace_lines == True for the debugger to work
# Don't stop except at breakpoints or when finished
# no breakpoints; run without debugger overhead
# to manipulate breakpoints.  These methods return an
# error message if something went wrong, None if all is well.
# Set_break prints out the breakpoint line and file:lineno.
# Call self.get_*break*() to see the breakpoints or better
# for bp in Breakpoint.bpbynumber: if bp: bp.bpprint().
# Import as late as possible
# After we set a new breakpoint, we need to search through all frames
# and set f_trace to trace_dispatch if there could be a breakpoint in
# that frame.
# If there's only one bp in the list for that file,line
# pair, then remove the breaks entry
# Derived classes and clients can call the following method
# to get a data structure representing a stack trace.
# The following methods can be called by clients to use
# a debugger to debug a statement or an expression.
# Both can be given as a string, or a code object.
# This method is more useful to debug a single function call.
# XXX Keeping state in the class is a mistake -- this means
# you cannot have more than one active Bdb instance.
# Next bp to be assigned
# indexed by (file, lineno) tuple
# Each entry is None or an instance of Bpt
# index 0 is unused, except for marking an
# effective break .... see effective()
# Needed if funcname is not None.
# This better be in canonical form!
# Build the two lists
# No longer in list
# No more bp for this f:l combo
# -----------end of Breakpoint class----------
# Breakpoint was set via line number.
# Breakpoint was set at a line with a def statement and the function
# defined is called: don't break.
# Breakpoint set via function name.
# It's not a function call, but rather execution of def statement.
# We are in the right frame.
# The function is entered for the 1st time.
# But we are not at the first line number: don't break.
# Count every hit when bp is enabled
# If unconditional, and ignoring go on to next, else break
# breakpoint and marker that it's ok to delete if temporary
# Conditional bp.
# Ignore count applies only to those bpt hits where the
# condition evaluates to true.
# continue
# else:
# if eval fails, most conservative thing is to stop on
# breakpoint regardless of ignore count.  Don't delete
# temporary, as another hint to user.
# -------------------- testing --------------------
# Copyright (C) 1999-2001 Gregory P. Ward.
# Copyright (C) 2002, 2003 Python Software Foundation.
# Written by Greg Ward <gward@python.net>
# Hardcode the recognized whitespace characters to the US-ASCII
# whitespace characters.  The main reason for doing this is that
# some Unicode spaces (like \u00a0) are non-breaking whitespaces.
# This funky little regex is just the trick for splitting
# text up into word-wrappable chunks.  E.g.
# splits into
# (after stripping out empty strings).
# This less funky little regex just splits on recognized spaces. E.g.
# XXX this is not locale- or charset-aware -- string.lowercase
# is US-ASCII only (and therefore English-only)
# lowercase letter
# sentence-ending punct.
# optional end-of-quote
# end of chunk
# (possibly useful for subclasses to override)
# Figure out when indent is larger than the specified width, and make
# sure at least one character is stripped off on every pass
# If we're allowed to break long words, then do so: put as much
# of the next chunk onto the current line as will fit.
# break after last hyphen, but only if there are
# non-hyphens before it
# Otherwise, we have to preserve the long word intact.  Only add
# it to the current line if there's nothing already there --
# that minimizes how much we violate the width constraint.
# If we're not allowed to break long words, and there's already
# text on the current line, do nothing.  Next time through the
# main loop of _wrap_chunks(), we'll wind up here again, but
# cur_len will be zero, so the next line will be entirely
# devoted to the long word that we can't handle right now.
# Arrange in reverse order so items can be efficiently popped
# from a stack of chunks.
# Start the list of chunks that will make up the current line.
# cur_len is just the length of all the chunks in cur_line.
# Figure out which static string will prefix this line.
# Maximum width for this line.
# First chunk on line is whitespace -- drop it, unless this
# is the very beginning of the text (ie. no lines started yet).
# Can at least squeeze this chunk onto the current line.
# Nope, this line is full.
# The current line is full, and the next chunk is too big to
# fit on *any* line (not just this one).
# If the last chunk on this line is all whitespace, drop it.
# Convert current line back to a string and store it in
# list of all lines (return value).
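The greedy line-filling loop described above (squeeze chunks onto the current line until one no longer fits, preserving over-long words intact) can be illustrated with this simplified word-based sketch; it ignores indents, whitespace chunks, and hyphen breaking:

```python
def wrap_greedy(text, width):
    # Greedy filling: pack words until the next one would overflow.
    words = text.split()
    lines, cur, cur_len = [], [], 0
    for w in words:
        # +1 accounts for the joining space when the line is non-empty
        extra = len(w) + (1 if cur else 0)
        if cur and cur_len + extra > width:
            lines.append(' '.join(cur))   # this line is full
            cur, cur_len = [], 0
            extra = len(w)
        # an over-long word still goes on an empty line, intact --
        # that minimizes how much the width constraint is violated
        cur.append(w)
        cur_len += extra
    if cur:
        lines.append(' '.join(cur))
    return lines
```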
# -- Public interface ----------------------------------------------
# -- Convenience interface ---------------------------------------------
# -- Loosely related functionality -------------------------------------
# Look for the longest leading string of spaces and tabs common to
# all lines.
# Current line more deeply indented than previous winner:
# no change (previous winner is still on top).
# Current line consistent with and no deeper than previous winner:
# it's the new winner.
# Find the largest common whitespace between current line and previous
# winner.
# sanity check (testing/debugging only)
# str.splitlines(True) doesn't produce empty string.
# So we can use just `not s.isspace()` here.
#print dedent("\tfoo\n\tbar")
#print dedent("  \thello there\n  \t  how are you?")
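The "longest common leading whitespace" search described above can be sketched like this (a minimal illustration, not the stdlib's `textwrap.dedent` itself):

```python
def common_margin(text):
    # Longest leading run of spaces/tabs common to all non-blank lines.
    margin = None
    for line in text.splitlines():
        if not line.strip():
            continue                   # blank lines don't affect the margin
        indent = line[:len(line) - len(line.lstrip(' \t'))]
        if margin is None:
            margin = indent            # first candidate
        elif not indent.startswith(margin):
            # Shrink to the largest common prefix; a line indented more
            # deeply than the current winner leaves the winner on top.
            while not indent.startswith(margin):
                margin = margin[:-1]
    return margin or ''
```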
# A global counter that is incremented each time a class is
# registered as a virtual subclass of anything.  It forces the
# negative cache to be cleared before its next use.
# Note: this counter is private. Use `abc.get_cache_token()` for external code.
# Compute set of abstract method names
# Set up inheritance registry
# Already a subclass
# Subtle: test for cycles *after* testing for "already a subclass";
# this means we allow X.register(X) and interpret it as a no-op.
# This would create a cycle, which is bad for the algorithm below
# Invalidate negative cache
# Inline the cache checking
# Fall back to the subclass check.
# Check cache
# Check negative cache; may have to invalidate
# Invalidate the negative cache
# Check the subclass hook
# Check if it's a direct subclass
# Check if it's a subclass of a registered class (recursive)
# Check if it's a subclass of a subclass (recursive)
# No dice; update negative cache
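The registration machinery described above is observable through the public `abc` module: `register()` makes a class a *virtual* subclass, so `issubclass()`/`isinstance()` succeed without inheritance. The class names here are hypothetical examples:

```python
from abc import ABC

class Serializer(ABC):       # hypothetical ABC for illustration
    pass

class JsonThing:             # unrelated class, no inheritance
    pass

# register() records JsonThing in Serializer's registry
Serializer.register(JsonThing)
# Registering a class with itself is allowed and treated as a no-op,
# because the "already a subclass" test runs before the cycle check.
Serializer.register(Serializer)
```

A failed check (e.g. `issubclass(int, Serializer)`) lands in the negative cache, which is what the cache token invalidates after a later `register()` call.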
### Registry and builtin stateless codec functions
### Constants
# Byte Order Mark (BOM = ZERO WIDTH NO-BREAK SPACE = U+FEFF)
# and its possible byte string values
# for UTF8/UTF16/UTF32 output and little/big endian machines
# UTF-8
# UTF-16, little endian
# UTF-16, big endian
# UTF-32, little endian
# UTF-32, big endian
# UTF-16, native endianness
# UTF-32, native endianness
# Old broken names (don't use in new code)
### Codec base classes (defining the API)
# Private API to allow Python 3.4 to denylist the known non-Unicode
# codecs in the standard library. A more general mechanism to
# reliably distinguish test encodings from other codecs will hopefully
# be defined for Python 3.5
# See http://bugs.python.org/issue19619
# Assume codecs are text encodings by default
# unencoded input that is kept between calls to encode()
# Overwrite this method in subclasses: It must encode input
# and return an (output, length consumed) tuple
# encode input (taking the buffer into account)
# keep unencoded input until the next call
# undecoded input that is kept between calls to decode()
# Overwrite this method in subclasses: It must decode input
# decode input (taking the buffer into account)
# keep undecoded input until the next call
# additional state info is always 0
# ignore additional state info
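The buffering scheme described above — unencoded/undecoded input kept between calls — is what lets an incremental codec handle a multi-byte sequence that arrives split across chunks:

```python
import codecs

# The incremental decoder buffers the undecoded trailing byte between
# calls instead of raising or emitting a replacement character.
dec = codecs.getincrementaldecoder('utf-8')()
part1 = dec.decode(b'caf\xc3')            # b'\xc3' starts a 2-byte sequence
part2 = dec.decode(b'\xa9', final=True)   # completes it
assert part1 == 'caf'
assert part1 + part2 == 'café'
```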
# The StreamWriter and StreamReader class provide generic working
# interfaces which can be used to implement new encoding submodules
# very easily. See encodings/utf_8.py for an example on how this is
# done.
###
# If we have lines cached, first merge them back into characters
# For compatibility with other read() methods that take a
# single argument
# read until we get the required number of characters (if available)
# can the request be satisfied from the character buffer?
# we need more data
# decode bytes (those remaining from the last call included)
# keep undecoded bytes until the next call
# put new characters in the character buffer
# there was no data available
# Return everything we've got
# Return the first chars characters
# If we have lines cached from an earlier read, return
# them unconditionally
# revert to charbuffer mode; we might need more data
# next time
# If size is given, we call read() only once
# If we're at a "\r" read one extra character (which might
# be a "\n") to get a proper line ending. If the stream is
# temporarily exhausted we return the wrong line ending.
# More than one line result; the first line is a full line
# to return
# cache the remaining lines
# only one remaining line, put it back into charbuffer
# We really have a line end
# Put the rest back together and keep it until the next call
# we didn't get anything or this was our only try
# Optional attributes set by the file wrappers below
# these are needed to make "with StreamReaderWriter(...)" work properly
# Seeks must be propagated to both the readers and writers
# as they might need to reset their internal buffers.
### Shortcuts
# Force opening of the file in binary mode
# Add attributes to simplify introspection
### Helpers for codec lookup
### Helpers for charmap-based codecs
### error handlers
# In --disable-unicode builds, these error handlers are missing
# Tell modulefinder that using codecs probably needs the encodings
# package
# Author: David Ascher <david_ascher@brown.edu>
# Updated: Piers Lauder <piers@cs.su.oz.au> [Jul '97]
# String method conversion and test jig improvements by ESR, February 2001.
# Added the POP3_SSL class. Methods loosely based on IMAP_SSL. Hector Urtubia <urtubia@mrbook.org> Aug 2003
# Example (see the test function at the end of this file)
# Exception raised when an error or invalid response is received:
# Standard Port
# POP SSL PORT
# Line terminators (we always output CRLF, but accept any of CRLF, LFCR, LF)
# maximal line length when calling readline(). This is to prevent
# reading arbitrary length lines. RFC 1939 limits POP3 line length to
# 512 characters, including CRLF. We have selected 2048 just to be on
# the safe side.
# Internal: send one command to the server (through _putline())
# Internal: return one line from the server, stripping CRLF.
# This is where all the CPU time of this module is consumed.
# Raise error_proto('-ERR EOF') if the connection is closed.
# server can send any combination of CR & LF
# however, 'readline()' returns lines ending in LF
# so only possibilities are ...LF, ...CRLF, CR...LF
# Internal: get a response from the server.
# Raise 'error_proto' if the response doesn't start with '+'.
# Internal: get a response plus following text from the server.
# Internal: send a command and get the response
# Internal: send a command and get the response plus following text
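The terminator handling described above can be sketched as follows (`strip_terminator` is an illustrative name, not poplib's actual `_getline`, and error handling for malformed lines is omitted):

```python
CRLF = b'\r\n'

def strip_terminator(line):
    # readline() always returns lines ending in LF, so after reading
    # the only possibilities are ...LF, ...CRLF and CR...LF.
    if line.endswith(CRLF):
        return line[:-2]
    if line.startswith(b'\r'):    # CR...LF: CR migrated to the front
        return line[1:-1]
    return line[:-1]              # plain ...LF
```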
# These can be useful:
# Here are all the POP commands:
# Check if the response has enough elements
# RFC 1939 requires at least 3 elements (+OK, message count, mailbox size)
# but allows additional data after the required fields
# The server might already have closed the connection.
# On Windows, this may result in WSAEINVAL (error 10022):
# An invalid operation was attempted.
#__del__ = quit
# optional commands:
# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger
# raises appropriate IndexError if heap is empty
# Transform bottom-up.  The largest index there's any point to looking at
# is the largest with a child index in-range, so must have 2*i + 1 < n,
# or i < (n-1)/2.  If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so
# j-1 is the largest, which is n//2 - 1.  If n is odd = 2*j+1, this is
# (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1.
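As a concrete check of the bound derived above (the largest index worth sifting is `n//2 - 1`), this sketch heapifies random data and verifies the heap invariant at exactly the parent indices `range(n//2)`:

```python
import heapq
import random

def check_heapify(n):
    # Only indices with at least one child need sifting; for i < n//2
    # we are guaranteed 2*i + 1 < n, i.e. a left child exists.
    heap = random.sample(range(1000), n)
    heapq.heapify(heap)
    return all(heap[i] <= heap[2*i + 1] and
               (2*i + 2 >= n or heap[i] <= heap[2*i + 2])
               for i in range(n // 2))
```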
# 'heap' is a heap at all indices >= startpos, except possibly for pos.  pos
# is the index of a leaf with a possibly out-of-order value.  Restore the
# heap invariant.
# Follow the path to the root, moving parents down until finding a place
# newitem fits.
# The child indices of heap index pos are already heaps, and we want to make
# a heap at index pos too.  We do this by bubbling the smaller child of
# pos up (and so on with that child's children, etc) until hitting a leaf,
# then using _siftdown to move the oddball originally at index pos into place.
# We *could* break out of the loop as soon as we find a pos where newitem <=
# both its children, but turns out that's not a good idea, and despite that
# many books write the algorithm that way.  During a heap pop, the last array
# element is sifted in, and that tends to be large, so that comparing it
# against values starting from the root usually doesn't pay (= usually doesn't
# get us out of the loop early).  See Knuth, Volume 3, where this is
# explained and quantified in an exercise.
# Cutting the # of comparisons is important, since these routines have no
# way to extract "the priority" from an array element, so that intelligence
# is likely to be hiding in custom comparison methods, or in array elements
# storing (priority, record) tuples.  Comparisons are thus potentially
# expensive.
# On random arrays of length 1000, making this change cut the number of
# comparisons made by heapify() a little, and those made by exhaustive
# heappop() a lot, in accord with theory.  Here are typical results from 3
# runs (3 just to demonstrate how small the variance is):
# Compares needed by heapify     Compares needed by 1000 heappops
# --------------------------     --------------------------------
# 1837 cut to 1663               14996 cut to 8680
# 1855 cut to 1659               14966 cut to 8678
# 1847 cut to 1660               15024 cut to 8703
# Building the heap by using heappush() 1000 times instead required
# 2198, 2148, and 2219 compares:  heapify() is more efficient, when
# you can use it.
# The total compares needed by list.sort() on the same lists were 8627,
# 8627, and 8632 (this should be compared to the sum of heapify() and
# heappop() compares):  list.sort() is (unsurprisingly!) more efficient
# for sorting.
# Bubble up the smaller child until hitting a leaf.
# leftmost child position
# Set childpos to index of smaller child.
# Move the smaller child up.
# The leaf at pos is empty now.  Put newitem there, and bubble it up
# to its final resting place (by sifting its parents down).
# Bubble up the larger child until hitting a leaf.
# Set childpos to index of larger child.
# Move the larger child up.
# raises StopIteration when exhausted
# restore heap condition
# remove empty iterator
# fast case when only a single iterator remains
# Algorithm notes for nlargest() and nsmallest()
# ==============================================
# Make a single pass over the data while keeping the k most extreme values
# in a heap.  Memory consumption is limited to keeping k values in a list.
# Measured performance for random inputs:
#    n inputs     k-extreme values   comparisons needed    % more than min()
# -------------   ----------------  ---------------------   -----------------
# 10,000,000           100             10,009,401                 0.1%
# Theoretical number of comparisons for k smallest of n random inputs:
# Step   Comparisons                  Action
# ----   --------------------------   ---------------------------
# Combining and simplifying for a rough estimate gives:
# Computing the number of comparisons for step 3:
# -----------------------------------------------
# * For the i-th new value from the iterable, the probability of being in the
# * If the value is a new extreme value, the cost of inserting it into the
# * The probability times the cost gives:
# * Summing across the remaining n-k elements gives:
# * This reduces to:
# * Where H(n) is the n-th harmonic number estimated by:
# * Substituting the H(n) formula:
# Worst-case for step 3:
# ----------------------
# In the worst case, the input data is reverse-sorted, so that every new
# element must be inserted in the heap:
# Alternative Algorithms
# Other algorithms were not used because they:
# 1) Took much more auxiliary memory,
# 2) Made multiple passes over the data, or
# 3) Made more comparisons in common cases (small k, large n, semi-random input).
# See the more detailed comparison of approach at:
# http://code.activestate.com/recipes/577573-compare-algorithms-for-heapqsmallest
# Short-cut for n==1 is to use min()
# When n>=size, it's faster to use sorted()
# When key is None, use simpler decoration
# put the range(n) first so that zip() doesn't
# consume one too many elements from the iterator
# General case, slowest method
# Short-cut for n==1 is to use max()
# If available, use C implementation
# pragma: no cover
# Those objects are almost immortal and they keep a reference to their module
# globals.  Defining them in the site module would keep too many references
# alive.
# Note this means this module should also avoid keep things alive in its
# globals.
# Shells like IDLE catch the SystemExit, but listen when their
# stdin wrapper is closed.
# Compressed data read chunk size
# Current offset in decompressed stream
# Set to size of decompressed stream once it is known, for SEEK_END
# Save the decompressor factory and arguments.
# If the file contains multiple compressed streams, each
# stream will need a separate decompressor object. A new decompressor
# object is also needed when implementing a backwards seek().
# Exception class to catch from decompressor signifying invalid
# trailing data to ignore
# Default if EOF is encountered
# Depending on the input data, our call to the decompressor may not
# return any data. In this case, try again after reading another block.
# Continue to next stream.
# Trailing data isn't a valid compressed stream; ignore it.
# sys.maxsize means the max length of output buffer is unlimited,
# so that the whole input buffer can be decompressed within one
# .decompress() call.
# Rewind the file to the beginning of the data stream.
# Recalculate offset as an absolute file position.
# Seeking relative to EOF - we need to know the file's size.
# Make it so that offset is the number of bytes to skip forward.
# Read and discard data until we reach the desired position.
# backward compatibility
# treat it as a regular class:
# If it is its own copy, don't memoize.
# Make sure x lives at least as long as d
# We're not going to put the tuple in the memo, but it's still important we
# check for it, in case the tuple contains recursive mutable structures.
# Copy instance methods
# aha, this is the first one :-)
# normcase on posix is NOP. Optimize it away from the loop.
# compress consecutive `*` into one
# Remove empty ranges -- invalid in RE.
# Escape backslashes and hyphens for set difference (--).
# Hyphens that create ranges shouldn't be escaped.
# Escape set operations (&&, ~~ and ||).
# Empty range: never match.
# Negated empty range: match any character.
# Deal with STARs.
# Fixed pieces at the start?
# Now deal with STAR fixed STAR fixed ...
# For an interior `STAR fixed` pairing, we want to do a minimal
# .*? match followed by `fixed`, with no possibility of backtracking.
# Atomic groups ("(?>...)") allow us to spell that directly.
# Note: people rely on the undocumented ability to join multiple
# translate() results together via "|" to build large regexps matching
# "one of many" shell patterns.
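The star handling described above is visible through the public `fnmatch.translate()` API: consecutive `*` collapse into one wildcard, and in recent CPython interior `STAR fixed` pairs compile to non-backtracking matches (the exact regex text varies by version, so this sketch only checks matching behaviour):

```python
import fnmatch
import re

rx = re.compile(fnmatch.translate('data_**_v?.csv'))
assert rx.match('data_2024_v1.csv')          # '**' behaves like a single '*'
assert not rx.match('data_2024_v10.csv')     # '?' matches exactly one char
```

Joining several `translate()` results with `|` is the (undocumented but relied-upon) way to match "one of many" shell patterns in a single regex.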
# Note: we use unicode matching for names ("\w") but ascii matching for
# number literals.
# Return the empty string, plus all of the valid string prefixes.
# The valid string prefixes. Only contain the lower case versions,
# if we add binary f-strings, add: ['fb', 'fbr']
# create a list with upper and lower versions of each
# Note that since _all_string_prefixes includes the empty string,
# StringPrefix can be the empty string (making it optional).
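The upper/lower expansion described above can be sketched as follows; `all_prefix_forms` and its default prefix list are a hypothetical reconstruction (the real module also permutes prefix orderings, which this sketch skips):

```python
import itertools

def all_prefix_forms(prefixes=('b', 'r', 'u', 'f', 'br', 'rb', 'fr', 'rf')):
    # Expand each lowercase prefix into every upper/lower-case
    # combination, plus the empty string (so the prefix is optional
    # when the results are joined into a regex alternation).
    result = {''}
    for prefix in prefixes:
        for cased in itertools.product(*[(c, c.upper()) for c in prefix]):
            result.add(''.join(cased))
    return result
```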
# Tail end of ' string.
# Tail end of " string.
# Tail end of ''' string.
# Tail end of """ string.
# Single-line ' or " string.
# Sorting in reverse order puts the long operators before their prefixes.
# Otherwise if = came before ==, == would get recognized as two instances
# of =.
# First (or only) line of ' or " string.
# For a given string prefix plus quotes, endpats maps it to a regex
# A set of all of the single and triple quoted string prefixes, including
# the opening quotes.
# Insert a space between two consecutive strings
# Insert a space between two consecutive brackets if we are in an f-string
# Insert a space between two consecutive f-strings
# Only care about the first 12 characters.
# Decode as UTF-8. Either the line is an encoding declaration,
# in which case it should be pure ASCII, or it must be UTF-8
# per default encoding.
# This behaviour mimics the Python interpreter
# BOM will already have been stripped.
# Helper error handling routines
# Parse the arguments and options
# Tokenize the input
# Output the tokenization
# (Dec 1991 version).
# if header, we have to escape _ because _ is used to escape space
# RFC 1521 requires that the line ending in a space or tab must have
# that trailing character encoded.
# Strip off any readline induced trailing newline
# Calculate the un-length-limited encoded line
# First, write out the previous line
# Now see if we need any soft line breaks because of RFC-imposed
# length limitations.  Then do the thisline->prevline dance.
# Don't forget to include the soft line break `=' sign in the
# length calculation!
# Write out the current line
# Write out the last line, without a trailing newline
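The soft-line-break behaviour described above can be exercised through the stdlib `quopri` module; lines are kept within the RFC 1521 limit of 76 characters by inserting breaks that end in `=`:

```python
import quopri

data = b'caf\xe9 ' * 30            # non-ASCII bytes force =XX escapes
encoded = quopri.encodestring(data)
# soft line breaks (trailing '=') keep every line within the limit
assert all(len(line) <= 76 for line in encoded.split(b'\n'))
# decoding removes the soft breaks and restores the input exactly
assert quopri.decodestring(encoded) == data
```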
# Strip trailing whitespace
# Bad escape sequence -- leave it in
# Other helper functions
# Wrapper module for _socket, providing some additional facilities
# implemented in Python.
# Set up the socket.AF_* socket.SOCK_* constants as members of IntEnums for
# nicer string representations.
# Note that _socket only knows about the integer values. The public interface
# in this module understands the enums and translates them back from integers
# where needed (e.g. .family property of a socket object).
# WSA error codes
# WSAEFAULT
# For user code address family and type values are IntEnum members, but
# for the underlying _socket.socket they're just integers. The
# constructor of _socket.socket converts the given argument to an
# integer automatically.
# getsockname and getpeername may not be available on WASI.
# Issue #7995: if no default timeout is set and the listening
# socket had a (non-zero) timeout, force the new socket in blocking
# mode to override platform-specific socket flags inheritance.
# XXX refactor to share code?
# not a regular file
# empty file
# Truncate to 1GiB to avoid OverflowError, see bpo-38319.
# poll/select have the advantage of not requiring any
# extra file descriptor, contrarily to epoll/kqueue
# (also, they require a single syscall).
# Block until the socket is ready to send some
# data; avoids hogging CPU resources.
# We can get here for different reasons, the main
# one being 'file' is not a regular mmap(2)-like
# file, in which case we'll fall back on using
# plain send().
# EOF
# This function should not reference any globals. See issue #808164.
# Array of ints
# Origin: https://gist.github.com/4325783, by Geert Jansen.  Public domain.
# This is used if _socket doesn't natively provide socketpair. It's
# always defined so that it can be patched in for testing purposes.
# We create a connected TCP socket. Note the trick with
# setblocking(False) that prevents us from having to create a thread.
# On IPv6, ignore flow_info and scope_id
# Authenticating avoids using a connection from something else
# able to connect to {host}:{port} instead of us.
# We expect only AF_INET and AF_INET6 families.
# getsockname() and getpeername() can fail
# if either socket isn't connected.
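The `setblocking(False)` trick described above can be sketched as follows (`tcp_socketpair` is an illustrative name; the real fallback also authenticates the peer and handles IPv6, which this minimal sketch omits):

```python
import socket

def tcp_socketpair():
    # Emulate socketpair() with a connected loopback TCP connection.
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.bind(('127.0.0.1', 0))
    lsock.listen(1)
    addr = lsock.getsockname()
    csock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    csock.setblocking(False)         # avoids needing a helper thread
    try:
        csock.connect(addr)
    except BlockingIOError:
        pass                         # connect is in progress
    ssock, _ = lsock.accept()        # completes the handshake
    csock.setblocking(True)
    lsock.close()
    return ssock, csock
```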
# One might wonder why not let FileIO do the job instead.  There are two
# main reasons why FileIO is not adapted:
# - it wouldn't work under Windows (where you can't use read() and
#   write() on a socket handle)
# - it wouldn't work with socket timeouts (FileIO would ignore the
#   timeout and consider the socket non-blocking)
# XXX More docs
# XXX what about EINTR?
# Break explicitly a reference cycle
# raise only the last error
# Note about Windows. We don't set SO_REUSEADDR because:
# 1) It's unnecessary: bind() will succeed even in case of a
# previous closed socket on the same address and still in
# TIME_WAIT state.
# 2) If set, another socket is free to bind() on the same
# address, effectively preventing this one from accepting
# connections. Also, it may set the process in a state where
# it'll no longer respond to any signals or graceful kills.
# See: https://learn.microsoft.com/windows/win32/winsock/using-so-reuseaddr-and-so-exclusiveaddruse
# Fail later on bind(), for platforms which may not
# support this option.
# We override this function since we want to translate the numeric family
# and socket type values to enum constants.
# The cache. Maps filenames to either a thunk which will provide source code,
# or a tuple (size, mtime, lines, fullname) once loaded.
# get keys atomically
# lazy cache entry, leave it lazy.
# no-op for files loaded via a __loader__
# This import can fail if the interpreter is shutting down
# These imports are not at top level because linecache is in the critical
# path of the interpreter startup and importing os and sys take a lot of time
# and slows down the startup sequence.
# These imports can fail if the interpreter is shutting down
# Realise a lazy loader based lookup if there is one
# otherwise try to lookup right now.
# No luck, the PEP302 loader cannot find the source
# for this module.
# Try looking through the module search path, which is only useful
# when handling a relative filename.
# Not sufficiently string-like to do anything useful with.
# may be raised by os.stat()
# Try for a __loader__, if available
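The cache behaviour described above is easiest to see through linecache's public API: `getline()` returns the empty string rather than raising for a missing file or an out-of-range line number, which is what makes it safe to call from traceback formatting:

```python
import linecache
import os
import tempfile

with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write('first\nsecond\n')
    path = f.name
try:
    assert linecache.getline(path, 2) == 'second\n'   # 1-indexed
    assert linecache.getline(path, 99) == ''          # out of range
    assert linecache.getline('no-such-file.py', 1) == ''
finally:
    os.unlink(path)
    linecache.checkcache(path)        # drop the now-stale cache entry
```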
# This module represents the integration of work, contributions, feedback, and
# suggestions from the following people:
# Martin von Loewis, who wrote the initial implementation of the underlying
# C-based libintlmodule (later renamed _gettext), along with a skeletal
# gettext.py implementation.
# Peter Funk, who wrote fintl.py, a fairly complete wrapper around intlmodule,
# which also included a pure-Python implementation to read .mo files if
# intlmodule wasn't available.
# James Henstridge, who also wrote a gettext.py module, which has some
# interesting, but currently unsupported experimental features: the notion of
# a Catalog class and instances, and the ability to add to a catalog file via
# a Python API.
# Barry Warsaw integrated these modules, wrote the .install() API and code,
# and conformed all C and Python code to Python's coding standards.
# Francois Pinard and Marc-Andre Lemburg also contributed valuably to this
# module.
# J. David Ibanez implemented plural forms. Bruno Haible fixed some bugs.
# TODO:
# - Lazy loading of .mo files.  Currently the entire catalog is loaded into
#   memory.
# - Support Solaris .mo file formats.  Unfortunately, we've been unable to
#   get our hands on one of these for testing.
# Expression parsing for plural form selection.
# The gettext library supports a small subset of C syntax.  The only
# incompatible difference is that integer literals starting with zero are
# decimal.
# https://www.gnu.org/software/gettext/manual/gettext.html#Plural-forms
# http://git.savannah.gnu.org/cgit/gettext.git/tree/gettext-runtime/intl/plural.y
# Break chained comparisons
# '==', '!=', '<', '>', '<=', '>='
# Replace some C operators by their Python equivalents
# '<', '>', '<=', '>='
# Python compiler limit is about 90.
# The most complex example has 2.
# Recursion error can be raised in _parse() or exec().
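The C-to-Python translation described above is performed by `gettext.c2py()`, which compiles a C-style plural expression into a Python function mapping `n` to a plural-form index (note: `c2py` is importable but treated as an internal helper in some versions):

```python
from gettext import c2py

plural = c2py('n != 1')
assert plural(1) == 0 and plural(2) == 1

# C's &&, ||, % and ?: operators are all supported, e.g. a typical
# three-form Slavic rule:
slavic = c2py('n%10==1 && n%100!=11 ? 0 : '
              'n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2')
assert slavic(1) == 0 and slavic(3) == 1 and slavic(5) == 2
```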
# split up the locale into its base components
# if all components for this combo exist ...
# Magic number of .mo files
# The encoding of a msgctxt and a msgid in a .mo file is
# msgctxt + "\x04" + msgid (gettext version >= 0.15)
# Acceptable .mo versions
# Delay struct import for speeding up gettext import when .mo files
# are not used.
# Parse the .mo file header, which consists of 5 little endian 32
# bit words.
# germanic plural by default
# Are we big endian or little endian?
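The header layout and endianness check described above can be sketched as follows; `parse_mo_header` is a hypothetical helper, but the magic-number constants are the standard .mo values:

```python
import struct

LE_MAGIC = 0x950412de    # .mo magic number read with native ordering
BE_MAGIC = 0xde120495    # the same four bytes read the other way round

def parse_mo_header(buf):
    # The header is five 32-bit words: magic, version, message count,
    # msgid table offset, msgstr table offset.  The value we get for
    # the magic number tells us which endianness the file uses.
    magic = struct.unpack('<I', buf[:4])[0]
    if magic == LE_MAGIC:
        fmt = '<5I'
    elif magic == BE_MAGIC:
        fmt = '>5I'
    else:
        raise ValueError('not a .mo file: bad magic number')
    return struct.unpack(fmt, buf[:20])
```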
# Now put all messages from the .mo file buffer into the catalog
# dictionary.
# See if we're looking at GNU .mo conventions for metadata
# Catalog description
# Skip over comment lines:
# Note: we unconditionally convert both msgids and msgstrs to
# Unicode using the character encoding specified in the charset
# parameter of the Content-Type header.  The gettext documentation
# strongly encourages msgids to be us-ascii, but some applications
# require alternative encodings (e.g. Zope's ZCML and ZPT).  For
# traditional gettext applications, the msgid conversion will
# cause no problems since us-ascii should always be a subset of
# the charset encoding.  We may want to fall back to 8-bit msgids
# if the Unicode conversion fails.
# Plural forms
# advance to next entry in the seek tables
# Locate a .mo file using the gettext strategy
# Get some reasonable defaults for arguments that were not supplied
# now normalize and expand the languages
# select a language
# a mapping between absolute .mo file path and Translation object
# Avoid opening, reading, and parsing the .mo file after it's been done
# once.
# Copy the translation object to allow setting fallbacks and
# output charset. All other instance data is shared with the
# cached object.
# Delay copy import for speeding up gettext import when .mo files
# a mapping b/w domains and locale directories
# current global domain, `messages' used for compatibility w/ GNU gettext
# dcgettext() has been deemed unnecessary and is not implemented.
# James Henstridge's Catalog constructor from GNOME gettext.  Documented usage
# was:
# The resulting catalog object currently doesn't support access through a
# dictionary API, which was supported (but apparently unused) in GNOME
# gettext.
# Figure out what the current language is set to.
# Find all positions of needle in haystack.
# The lower case of 'İ' ('\u0130') is 'i\u0307'.
# The re module only supports 1-to-1 character matching in
# case-insensitive mode.
# 〇:一:二:三:四:五:六:七:八:九
# 十:十一:十二:十三:十四:十五:十六:十七:十八:十九
# 廿:廿一:廿二:廿三:廿四:廿五:廿六:廿七:廿八:廿九
# 卅:卅一
# Set self.a_weekday and self.f_weekday using the calendar
# Set self.f_month and self.a_month using the calendar module.
# Set self.am_pm by using time.strftime().
# The magic date (1999,3,17,hour,44,55,2,76,0) is not really that
# magical; just happened to have used it everywhere else where a
# static date was needed.
# br_FR has AM/PM info (' ',' ').
# Set self.LC_alt_digits by using time.strftime().
# The magic data should contain all decimal digits.
# Fast path -- all digits are ASCII.
# All 10 decimal digits from the same set.
# All digits are ASCII.
# Test whether the numbers contain leading zero.
# Either non-Gregorian calendar or non-decimal numbers.
# lzh_TW
# Set self.LC_date_time, self.LC_date, self.LC_time and
# self.LC_time_ampm by using time.strftime().
# Use (1999,3,17,22,44,55,2,76,0) for magic date because the amount of
# overloaded numbers is minimized.  The order in which searches for
# values within the format string is very important; it eliminates
# possible ambiguity for what something represents.
# Non-ASCII digits
# '3' needed for when no leading zero.
# The month and the day of the week formats are treated specially
# because of a possible ambiguity in some locales where the full
# and abbreviated names are equal or names of different types
# are equal. See doc of __find_month_format for more details.
# Must deal with possible lack of locale info
# manifesting itself as the empty string (e.g., Swedish's
# lack of AM/PM info) or a platform returning a tuple of empty
# strings (e.g., MacOS 9 having timezone as ('','')).
# Transform all non-ASCII digits to digits in range U+0660 to U+0669.
# If %W is used, then Sunday, 2005-01-03 will fall on week 0 since
# 2005-01-03 occurs before the first Monday of the year.  Otherwise
# %U is used.
# Set self.timezone by using time.tzname.
# Do not worry about possibility of time.tzname[0] == time.tzname[1]
# and time.daylight; handle that in strptime.
# The " [1-9]" part of the regex is to make %c from ANSI C work
# W is set below by using 'U'
# The sub() call escapes all characters that might be misconstrued
# as regex syntax.  Cannot use re.escape since we have to deal with
# format directives (%m, etc.).
# needed for br_FR
# DO NOT modify _TimeRE_cache or _regex_cache without acquiring the cache lock
# first!
# Max number of regexes stored in _regex_cache
# If we are dealing with the %U directive (week starts on Sunday), it's
# easier to just shift the view to Sunday being the first day of the
# week.
# Need to watch out for a week 0 (when the first day of the year is not
# the same as that specified by %U or %W).
# KeyError raised when a bad format is found; can be specified as
# \\, in which case it was a stray % but with a space after it
# weekday and julian defaulted to None so as to signal need to calculate
# Directives not explicitly handled below:
# Open Group specification for strptime() states that a %y
# value in the range of [00, 68] is in the century 2000, while
# [69, 99] is in the century 1900
# If there was no AM/PM indicator, we'll treat this like AM
# We're in AM so the hour is correct unless we're
# looking at 12 midnight.
# 12 midnight == 12 AM == hour 0
# We're in PM so we need to add 12 to the hour unless
# we're looking at 12 noon.
# 12 noon == 12 PM == hour 12
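The two special cases above (12 midnight and 12 noon) make the 12-hour to 24-hour conversion slightly asymmetric; `to_24_hour` is an illustrative helper, with `am_pm` as 0 for AM and 1 for PM, mirroring how the locale strings are indexed:

```python
def to_24_hour(hour, am_pm):
    # am_pm: 0 = AM, 1 = PM
    if am_pm == 0:
        return 0 if hour == 12 else hour     # 12 midnight == hour 0
    return 12 if hour == 12 else hour + 12   # 12 noon == hour 12
```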
# Pad to always return microseconds.
# U starts week on Sunday.
# W starts week on Monday.
# Since -1 is default value only need to worry about setting tz if
# it can be something other than -1.
# Deal with bad locale setup where timezone names are the
# same and yet time.daylight is true; too ambiguous to
# be able to tell what timezone has daylight savings
# Deal with the cases where ambiguities arise
# don't assume default values for ISO week/year
# 1904 is first leap year of 20th century
# If we know the week of the year and what day of that week, we can figure
# out the Julian day of the year.
# Cannot pre-calculate datetime_date() since can change in Julian
# calculation and thus could have different value for the day of
# the week calculation.
# Need to add 1 to result since first day of the year is 1, not 0.
# Assume that if they bothered to include Julian day (or if it was
# calculated above with year/week/weekday) it will be accurate.
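The year/week/weekday-to-date calculation described above is exposed through `strptime` itself: `%W` numbers weeks starting on Monday, with the days before the year's first Monday falling in week 0:

```python
from datetime import datetime

d = datetime.strptime('2005 1 1', '%Y %W %w')    # week 1, %w=1 is Monday
assert d == datetime(2005, 1, 3)                 # first Monday of 2005

d0 = datetime.strptime('2005 0 6', '%Y %W %w')   # week 0, %w=6 is Saturday
assert d0 == datetime(2005, 1, 1)                # Jan 1, 2005 was a Saturday
```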
# Add timezone info
# the caller didn't supply a year but asked for Feb 29th. We couldn't
# use the default of 1900 for computations. We set it back to ensure
# that February 29th is smaller than March 1st.
# Bugs: No signal handling.  Doesn't set slave termios and window size.
# See:  W. Richard Stevens. 1992.  Advanced Programming in the
#       UNIX Environment.
# Author: Steen Lumholt -- with additions by Guido.
# names imported directly for test mocking purposes
# Remove API in 3.14
# os.forkpty() already set us session leader
# Parent and child process.
# If we write more than tty/ndisc is willing to buffer, we may block
# indefinitely. So we set master_fd to non-blocking temporarily during
# the copy operation.
# restore blocking mode for backwards compatibility
# Some OSes signal EOF by returning an empty byte string,
# some throw OSErrors.
# Reached EOF.
# Assume the child process has exited and is
# unreachable, so we clean up.
# This is the same as termios.error
# types
# more like LIGHT GRAY
# DARK GRAY
# actual WHITE
# intense = like bold but without being bold
# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
############ Maintenance notes #########################################
# ABCs are different from other standard library modules in that they
# specify compliance tests.  In general, once an ABC has been published,
# new methods (either abstract or concrete) cannot be added.
# Though classes that inherit from an ABC would automatically receive a
# new mixin method, registered classes would become non-compliant and
# violate the contract promised by ``isinstance(someobj, SomeABC)``.
# Though irritating, the correct procedure for adding new abstract or
# mixin methods is to create a new ABC as a subclass of the previous
# ABC.  For example, union(), intersection(), and difference() cannot
# be added to Set but could go into a new ABC that extends Set.
# Because they are so hard to change, new ABCs should have their APIs
# carefully thought through prior to publication.
# Since ABCMeta only checks for the presence of methods, it is possible
# to alter the signature of a method by adding optional arguments
# or changing parameters names.  This is still a bit dubious but at
# least it won't cause isinstance() to return an incorrect result.
#######################################################################
# This module has been renamed from collections.abc to _collections_abc to
# speed up interpreter startup. Some of the types such as MutableMapping are
# required early but collections module imports a lot of other modules.
# See issue #19218
# Private list of types that we want to register with the various ABCs
# so that they will pass tests like:
# Note:  in other implementations, these types might not be distinct
# and they may have their own implementation specific types that
# are not included on this list.
#callable_iterator = ???
## views ##
## misc ##
## coroutine ##
# Prevent ResourceWarning
## asynchronous generator ##
### ONE-TRICK PONIES ###
#Iterator.register(callable_iterator)
# Called during TypeVar substitution, returns the custom subclass
# rather than the default types.GenericAlias object.  Most of the
# code is copied from typing's _GenericAlias and the builtin
# types.GenericAlias.
# args[0] occurs due to things like Z[[int, str, bool]] from PEP 612
### SETS ###
### MAPPINGS ###
# Tell ABCMeta.__new__ that this class should have TPFLAGS_MAPPING set.
# Py_TPFLAGS_MAPPING
### SEQUENCES ###
# Tell ABCMeta.__new__ that this class should have TPFLAGS_SEQUENCE set.
# Py_TPFLAGS_SEQUENCE
# Multiply inheriting, see ByteString
# Released to the public domain, by Tim Peters, 15 April 1998.
# XXX Note: this is now a standard library module.
# XXX The API needs to undergo changes however; the current code is too
# XXX script-like.  This will be addressed later.
# the characters used for space and tab
# members:
# return length of longest contiguous run of spaces (whether or not
# preceding a tab)
# count, il = self.norm
# for i in range(len(count)):
# return il
# quicker:
# il = trailing + sum (i//ts + 1)*ts*count[i] =
# trailing + ts * sum (i//ts + 1)*count[i] =
# trailing + ts * sum i//ts*count[i] + count[i] =
# trailing + ts * [(sum i//ts*count[i]) + (sum count[i])] =
# trailing + ts * [(sum i//ts*count[i]) + num_tabs]
# and note that i//ts*count[i] is 0 when i < ts
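The closed-form derivation above computes the same value as straightforward tab expansion. A minimal sketch (`indent_level` is an illustrative name) that expands tabs column by column:

```python
def indent_level(ws, ts):
    # Expand a whitespace prefix at tab size ts and return the resulting
    # column count; a tab jumps to the next multiple of ts.
    col = 0
    for ch in ws:
        if ch == '\t':
            col = (col // ts + 1) * ts
        else:
            col += 1
    return col
```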
# return true iff self.indent_level(t) == other.indent_level(t)
# for all t >= 1
# return a list of tuples (ts, i1, i2) such that
# i1 == self.indent_level(ts) != other.indent_level(ts) == i2.
# Intended to be used after not self.equal(other) is known, in which
# case it will return at least one witnessing tab size.
# Return True iff self.indent_level(t) < other.indent_level(t)
# for all t >= 1.
# The algorithm is due to Vincent Broman.
# Easy to prove it's correct.
# XXXpost that.
# Trivial to prove n is sharp (consider T vs ST).
# Unknown whether there's a faster general way.  I suspected so at
# first, but no longer.
# For the special (but common!) case where M and N are both of the
# form (T*)(S*), M.less(N) iff M.len() < N.len() and
# M.num_tabs() <= N.num_tabs(). Proof is easy but kinda long-winded.
# XXXwrite that up.
# Note that M is of the form (T*)(S*) iff len(M.norm[0]) <= 1.
# the self.n >= other.n test already did it for ts=1
# i1 == self.indent_level(ts) >= other.indent_level(ts) == i2.
# Intended to be used after not self.less(other) is known, in which
# case it will return at least one witnessing tab size.
# a program statement, or ENDMARKER, will eventually follow,
# after some (possibly empty) run of tokens of the form
#     (NL | COMMENT)* (INDENT | DEDENT+)?
# If an INDENT appears, setting check_equal is wrong, and will
# be undone when we see the INDENT.
# there's nothing we need to check here!  what's important is
# that when the run of DEDENTs ends, the indentation of the
# program statement (or ENDMARKER) that triggered the run is
# equal to what's left at the top of the indents stack
# Ouch!  This assert triggers if the last line of the source
# is indented *and* lacks a newline -- then DEDENTs pop out
# of thin air.
# assert check_equal  # else no earlier NEWLINE, or an earlier INDENT
# this is the first "real token" following a NEWLINE, so it
# must be the first token of the next program statement, or an
# ENDMARKER; the "line" argument exposes the leading whitespace
# for this statement; in the case of ENDMARKER, line is an empty
# string, so will properly match the empty string with which the
# "indents" stack was seeded
# Written by James Roskind
# Based on prior profile module by Sjoerd Mullender...
# Copyright Disney Enterprises, Inc.  All Rights Reserved.
# Licensed to PSF under a Contributor Agreement
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied.  See the License for the specific language
# governing permissions and limitations under the License.
# calc only if needed
# in case this is not unix
# list the tuple indices and directions for sorting,
# along with some printable description
# Be compatible with old profiler
#******************************************************************
# The following functions support actual printing of reports
# Optional "amount" is either a line count, or a percentage of lines.
# time spent in this function alone
# time spent in the function plus all functions that this function called
# (cumulative time)
# print sub-header only if we have new-style callers
# hack: should print percentages
#**************************************************************************
# func_name is a triple (file:string, line:int, name:string)
# match what old profile produced
# special case for built-in functions
# The following functions combine statistics for pairs of functions.
# The bulk of the processing involves correctly handling "call" lists,
# such as callers and callees.
# format used by cProfile
# format used by profile
# The following functions support printing of reports
# Statistics browser added by ESR, April 2001
# That's all, folks.
# skip empty string
# Patterns ending with a slash should match only directories
# `os.path.split()` returns the argument itself as a dirname if it is a
# drive or UNC path.  Prevent an infinite recursion if a drive or UNC path
# contains magic characters (i.e. r'\\?\C:').
# These 2 helper functions non-recursively glob inside a literal directory.
# They return a list of basenames.  _glob1 accepts a pattern while _glob0
# takes a literal basename (so it only has to check for its existence).
# `os.path.split()` returns an empty basename for paths ending with a
# directory separator.  'q*x/' should match only directories.
# This helper function recursively yields relative pathnames inside a literal
# directory.
# If dironly is false, yields all file names inside a directory.
# If dironly is true, yields only directory names.
# Recursively yields relative pathnames inside a literal directory.
# Same as os.path.lexists(), but with dir_fd
# Same as os.path.isdir(), but with dir_fd
# It is common if dirname or basename is empty
# Escaping is done by wrapping any of "*?[" between square brackets.
# Metacharacters do not work in the drive part and shouldn't be escaped.
# Low-level methods
# High-level methods
# Optimization: consume and join any subsequent literal parts here,
# rather than leaving them for the next selector. This reduces the
# number of string concatenation operations and calls to add_slash().
# We must close the scandir() object before proceeding to
# avoid exhausting file descriptors when globbing deep trees.
# Optimization: consume following '**' parts, which have no effect.
# Optimization: consume and join any following non-special parts here,
# rather than leaving them for the next selector. They're used to
# build a regular expression, which we use to filter the results of
# the recursive walk. As a result, non-special pattern segments
# following a '**' wildcard don't require additional filesystem access
# to expand.
# Optimization: directly yield the path if this is
# last pattern part.
# Optimization: this path is already known to exist, e.g. because
# it was returned from os.scandir(), so we skip calling lstat().
# Indices for stat struct members in the tuple returned by os.stat()
# Extract bits from the mode
# Constants used as S_IFMT() for various file types
# (not all are implemented on all systems)
# character device
# block device
# fifo (named pipe)
# socket file
# Fallbacks for uncommon platform-specific constants
# Functions to test for each file type
# Names for permission bits
# set UID bit
# set GID bit
# file locking enforcement
# sticky bit
# Unix V7 synonym for S_IRUSR
# Unix V7 synonym for S_IWUSR
# Unix V7 synonym for S_IXUSR
# mask for owner permissions
# read by owner
# write by owner
# execute by owner
# mask for group permissions
# read by group
# write by group
# execute by group
# mask for others (not in group) permissions
# read by others
# write by others
# execute by others
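The file-type tests and permission masks listed above combine naturally. A small usage sketch (the `describe` helper is hypothetical):

```python
import os
import stat

def describe(path):
    # Apply the stat file-type tests and permission names shown above.
    mode = os.stat(path).st_mode
    if stat.S_ISDIR(mode):
        kind = 'directory'
    elif stat.S_ISREG(mode):
        kind = 'regular file'
    else:
        kind = 'other'
    perms = stat.filemode(mode)                 # e.g. 'drwxr-xr-x'
    owner_can_write = bool(mode & stat.S_IWUSR)  # "write by owner" bit
    return kind, perms, owner_can_write
```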
# Names for file flags
# owner settable flags
# do not dump file
# file may not be changed
# file may only be appended to
# directory is opaque when viewed through a union stack
# file may not be renamed or deleted
# macOS: file is compressed
# macOS: used for handling document IDs
# macOS: entitlement needed for I/O
# macOS: file should not be displayed
# superuser settable flags
# file may be archived
# macOS: entitlement needed for writing
# file is a snapshot file
# macOS: file is a firmlink
# macOS: file is a dataless object
# File type chars according to:
# http://en.wikibooks.org/wiki/C_Programming/POSIX_Reference/sys/stat.h
# Must appear before IFREG and IFDIR as IFSOCK == IFREG | IFDIR
# Unknown filetype
# Windows FILE_ATTRIBUTE constants for interpreting os.stat()'s
# "st_file_attributes" member
# Author: Piers Lauder <piers@cs.su.oz.au> December 1997.
# Authentication code contributed by Donn Cave <donn@u.washington.edu> June 1998.
# String method conversion by ESR, February 2001.
# GET/SETACL contributed by Anthony Baxter <anthony@interlink.com.au> April 2001.
# IMAP4_SSL contributed by Tino Lange <Tino.Lange@isg.de> March 2002.
# GET/SETQUOTA contributed by Andreas Zeidler <az@kreativkombinat.de> June 2002.
# PROXYAUTH contributed by Rick Holbert <holbert.13@osu.edu> November 2002.
# GET/SETANNOTATION contributed by Tomas Lindroos <skitta@abo.fi> June 2005.
# Most recent first
# Maximal line length when calling readline(). This is to prevent
# reading arbitrary length lines. RFC 3501 and 2060 (IMAP 4rev1)
# don't specify a line length. RFC 2683 suggests limiting client
# command lines to 1000 octets and that servers should be prepared
# to accept command lines up to 8000 octets, so we used to use 10K here.
# In the modern world (eg: gmail) the response to, for example, a
# search command can be quite large, so we now use 1M.
# Data larger than this will be read in chunks, to prevent extreme
# overallocation.
# name            valid states
# NB: obsolete
# Literal is no longer used; kept for backward compatibility.
# We no longer exclude the ']' character from the data portion of the response
# code, even though it violates the RFC.  Popular IMAP servers such as Gmail
# allow flags with ']', and there are programs (including imaplib!) that can
# produce them.  The problem with this is if the 'text' portion of the response
# includes a ']' we'll parse the response wrong (which is the point of the RFC
# restriction).  However, that seems less likely to be a problem in practice
# than being unable to correctly parse flags that include ']' chars, which
# was reported as a real-world problem in issue #21815.
# Untagged_status is no longer used; kept for backward compatibility
# We compile these in _mode_xxx.
# Logical errors - debug required
# Service errors - close and retry
# Mailbox status changed to READ-ONLY
# A literal argument to a command
# Tagged commands awaiting response
# {typ: [data, ...], ...}
# Last continuation response
# READ-ONLY desired state
# Open socket to server.
# Create unique tag for this session,
# and compile tagged response matcher.
# Get server welcome message,
# request and store CAPABILITY response.
# Last `_cmd_log_len' interactions
# Default value of IMAP4.host is '', but socket.getaddrinfo()
# (which is used by socket.create_connection()) expects None
# as a default value for host.
# Prod server for response
# XXX: shouldn't this code be removed, not commented out?
#cap = 'AUTH=%s' % mech
#if not cap in self.capabilities:       # Let the server decide!
# Flush old responses.
# Might have been 'SELECTED'
#if not name in self.capabilities:      # Let the server decide!
# Generate a default SSL context if none was passed.
#if self.PROTOCOL_VERSION == 'IMAP4':   # Let the server decide!
# Avoid quoting the flags
# Wait for continuation response
# BAD/NO?
# Send literal
# BYE is expected after LOGOUT
# Read response and store.
# Returns None for continuation responses,
# otherwise first response line received.
# Command completion response?
# '*' (untagged) responses?
# Only other possibility is '+' (continuation) response...
# NB: indicates continuation
# Null untagged response
# Is there a literal to come?
# Read literal direct from connection.
# Store response with literal as tuple
# Read trailer - possibly containing another literal
# Bracketed response information?
# Server replies to the "LOGOUT" command with "BYE"
# If we've seen a BYE at this point, the socket will be
# closed, so report the BYE now.
# Some have reported "unexpected response" exceptions.
# Note that ignoring them here causes loops.
# Instead, send me details of the unexpected response and
# I'll update the code in `_get_response()'.
# Protocol mandates all lines terminated by CRLF
# Run compiled regular expression match method on 's'.
# Save result, return success.
# Keep log of last `_cmd_log_len' interactions for debugging.
# For compatibility with parent class
# Callable object to provide/process data
# Abort conversation
# INTERNALDATE timezone must be subtracted to get UT
# Assume in correct format
# To test: invoke either as 'python imaplib.py [IMAP4_server_hostname]'
# or 'python imaplib.py -s "rsh IMAP4_server_hostname exec /etc/rimapd"'
# to test the IMAP4_stream class
# Login not needed
### Globals & Constants
# Helper for comparing two version number strings.
# Based on the description of the PHP's version_compare():
# http://php.net/manual/en/function.version-compare.php
# any string not found in this dict will get 0 assigned;
# a number will get 100 assigned
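A hedged sketch of the PHP-style ordering described above: pre-release suffixes sort below a plain number. The stage weights are illustrative, not the module's internal table.

```python
import re

# Illustrative stage weights: anything not listed gets 0, numbers get 100.
_stages = {'dev': -4, 'alpha': -3, 'a': -3, 'beta': -2, 'b': -2, 'rc': -1, 'c': -1}

def _key(version):
    parts = []
    for piece in re.split(r'[._+\-]', version):
        if piece.isdigit():
            parts.append((100, int(piece)))           # a number gets 100
        else:
            parts.append((_stages.get(piece, 0), 0))  # unknown strings get 0
    return parts

def compare_versions(a, b):
    # Pad the shorter key so "1.0" compares above "1.0.beta".
    ka, kb = _key(a), _key(b)
    n = max(len(ka), len(kb))
    ka += [(0, 0)] * (n - len(ka))
    kb += [(0, 0)] * (n - len(kb))
    return (ka > kb) - (ka < kb)
```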
### Platform specific APIs
# parse 'glibc 2.28' as ('glibc', '2.28')
# os.confstr() or CS_GNU_LIBC_VERSION value not available
# sys.executable is not set.
# We use os.path.realpath()
# here to work around problems with Cygwin not being
# able to open symlinks for reading
# Examples of VER command output:
# Note that the "Version" string gets localized on different
# Windows versions.
# Try some common cmd strings
#print('Command %s failed: %s' % (cmd, why))
# Parse the output
# Strip trailing dots from version and release
# Normalize the version and build strings (eliminating additional
# zeros)
# Try using WMI first, as this is the canonical source of data
# Fall back to a combination of sys.getwindowsversion and "ver"
# getwindowsversion() reflects the compatibility mode Python is
# running under, and so the service pack value is only going to be
# valid if the versions match.
# Canonical name
# First try reading the information from an XML file which should
# always be present
# If that also doesn't work return the default values
# A namedtuple for iOS version information.
# Import the needed APIs
# An NDK developer confirmed that this is an officially-supported
# API (https://stackoverflow.com/a/28416743). Use `getattr` to avoid
# private name mangling.
# https://android.googlesource.com/platform/bionic/+/refs/tags/android-5.0.0_r1/libc/include/sys/system_properties.h#39
# This API doesn’t distinguish between an empty property and
# a missing one.
### System name aliasing
# Sun's OS
# These releases use the old name SunOS
# Modify release (marketing release = SunOS release - 3)
# XXX Whatever the new SunOS marketing name is...
# In case one of the other tricks
# bpo-35516: Don't replace Darwin with macOS since input release and
# version arguments can be different than the currently running version.
### Various internal helpers
# Format the platform string
# Cleanup some possible filename obstacles...
# No need to report 'unknown' information...
# Fold '--'s and remove trailing '-'
# No sockets...
# Still not working...
# XXX Others too ?
# "file" output is locale dependent: force the usage of the C locale
# to get deterministic behavior.
# -b: do not prepend filenames to output lines (brief mode)
# With the C locale, the output should be mostly ASCII-compatible.
# Decode from Latin-1 to prevent Unicode decode error.
### Information about the used architecture
# Default values for architecture; non-empty strings override the
# defaults given as parameters
# Use the sizeof(pointer) as default number of bits if nothing
# else is given as default.
# Get data from the 'file' system command
# "file" command did not return anything; we'll try to provide
# some sensible defaults then...
# Format not supported
# Bits
# Linkage
# E.g. Windows uses this format
# XXX the A.OUT format also falls under this class...
# Try to use the PROCESSOR_* environment variables
# available on Win XP and later; see
# http://support.microsoft.com/kb/888731 and
# http://www.geocities.com/rick_lively/MANUALS/ENV/MSWIN/PROCESSI.HTM
# WOW64 processes mask the native architecture
# On the iOS simulator, os.uname returns the architecture as uname.machine.
# On device it returns the model name for some reason; but there's only one
# CPU architecture for iOS devices, so we know the right answer.
### Portable uname() interface
# override factory to affect length check
# Get some infos from the builtin os.uname API...
# uname is not available
# Try win32_ver() on win32 platforms
# Try the 'ver' system command available on some
# platforms
# Normalize system to what win32_ver() normally returns
# (_syscmd_ver() tends to return the vendor name as well)
# Under Windows Vista and Windows Server 2008,
# Microsoft changed the output of the ver command. The
# release is no longer printed.  This causes the
# system and release to be misidentified.
# In case we still don't know anything useful, we'll try to
# help ourselves
# System specific extensions
# OpenVMS seems to have release and version mixed up
# On Android, return the name and version of the OS rather than the kernel.
# Normalize responses on iOS
# Replace 'unknown' values with the more portable ''
### Direct interfaces to some of the uname() return values
### Various APIs for extracting information from sys.version
# Get the Python version
# Try the cache first
# Jython
# "version<space>"
# "(#buildno"
# ", builddate"
# ", buildtime)<space>"
# "[compiler]"
# PyPy
# CPython
# "free-threading-build<space>"
# Add the patchlevel version if missing
# Build and cache the result
### The Opus Magnum of platform strings :-)
# Get uname information and then apply platform specific cosmetics
# to it...
# macOS and iOS both report as a "Darwin" kernel
# MS platforms
# check for libc vs. glibc
# Java platforms
# Generic handler
### freedesktop.org os-release standard
# https://www.freedesktop.org/software/systemd/man/os-release.html
# /etc takes precedence over /usr/lib
# These fields are mandatory fields with well-known defaults
# in practice all Linux distributions override NAME, ID, and PRETTY_NAME.
# NAME=value with optional quotes (' or "). The regular expression is less
# strict than shell lexer, but that's ok.
# unescape five special characters mentioned in the standard
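A hypothetical parser for the NAME=value lines described above. The regex accepts optional matching quotes and is deliberately laxer than a shell lexer; the five escape sequences named in the standard (`\\`, `\$`, `\"`, `\'`, backtick) are then undone.

```python
import re

_line = re.compile(r'^(?P<name>[A-Za-z0-9_]+)=(?P<q>["\']?)(?P<value>.*)(?P=q)$')
_unescape = re.compile(r'\\([\\\$"\'`])')

def parse_os_release(lines):
    # Mandatory fields with their well-known spec defaults; in practice
    # every distribution overrides NAME, ID and PRETTY_NAME.
    info = {'NAME': 'Linux', 'ID': 'linux', 'PRETTY_NAME': 'Linux'}
    for raw in lines:
        m = _line.match(raw.strip())
        if m:
            info[m.group('name')] = _unescape.sub(r'\1', m.group('value'))
    return info
```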
### Command line interface
# Default is to print the aliased verbose platform string
# Author: The Dragon De Monsyne <dragondm@integral.org>
# ESMTP support, test code and doc fixes added by
# Better RFC 821 compliance (MAIL and RCPT, and CRLF in data)
# RFC 2554 (authentication) support by Gerhard Haering <gerhard@bigfoot.de>.
# This was modified from the Python 1.5 library HTTP lib.
# more than 8 times larger than RFC 821, 4.5.3
# Maximum number of AUTH challenges sent
# Exception classes used by this module.
# parseaddr couldn't parse it, use it as is and hope for the best.
# parseaddr couldn't parse it, so use it as is.
# Legacy method kept for backward compatibility.
# RFC 2821 says we should use the fqdn in the EHLO/HELO verb, and
# if that can't be calculated, that we should use a domain literal
# instead (essentially an encoded IP address like [A.B.C.D]).
# We can't find an fqdn hostname, so use a domain literal
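A sketch of the RFC 2821 rule above: prefer the FQDN, otherwise fall back to a domain literal such as `[192.0.2.1]`. `ehlo_name` is an illustrative helper, not smtplib's method.

```python
import socket

def ehlo_name():
    # RFC 2821 prefers the fqdn in EHLO/HELO; if none can be found,
    # fall back to an encoded IP address literal like [A.B.C.D].
    fqdn = socket.getfqdn()
    if '.' in fqdn:
        return fqdn
    addr = '127.0.0.1'
    try:
        addr = socket.gethostbyname(socket.gethostname())
    except socket.gaierror:
        pass
    return '[%s]' % addr
```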
# This makes it simpler for SMTP_SSL to use the SMTP connect code
# and just alter the socket connection bit.
# send is used by the 'data' command, where command_encoding
# should not be used, but 'data' needs to convert the string to
# binary itself anyway, so that's not a problem.
# Check that the error code is syntactically correct.
# Don't attempt to read a continuation line if it is broken.
# Check if multiline response.
# std smtp commands
# According to RFC1869 some (badly written)
# MTAs will disconnect on an ehlo. Toss an exception if
# that happens -ddm
# Parse the ehlo response -ddm
# To be able to communicate with as many SMTP servers as possible,
# we have to take the old-style auth advertisement into account,
# because:
# 1) Else our SMTP feature parser gets confused.
# 2) There are some servers that only advertise the auth methods we
# This doesn't remove duplicates, but that's no problem
# RFC 1869 requires a space between ehlo keyword and parameters.
# It's actually stricter, in that only spaces are allowed between
# parameters, but we're not going to check for that here.  Note
# that the space isn't present if there are no parameters.
# a.k.a.
# some useful methods
# RFC 4954 allows auth methods to provide an initial response.  Not all
# methods support it.  By definition, if they return something other
# than None when challenge is None, then they do.  See issue #15014.
# If server responds with a challenge, send the response.
# If server keeps sending challenges, something is wrong.
# CRAM-MD5 does not support initial-response.
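A sketch of why CRAM-MD5 cannot send an initial response: the client reply is an HMAC-MD5 of the server's challenge, so nothing can be computed before the challenge arrives. `cram_md5_response` is an illustrative helper.

```python
import hmac

def cram_md5_response(user, password, challenge):
    # The reply is "user hexdigest", where the digest keys the server's
    # challenge bytes with the shared password.
    digest = hmac.new(password.encode('ascii'), challenge, 'md5').hexdigest()
    return '%s %s' % (user, digest)
```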
# Authentication methods the server claims to support
# Authentication methods we can handle in our preferred order:
# We try the supported authentications in our preferred order, if
# the server supports them.
# Some servers advertise authentication methods they don't really
# support, so if authentication fails, we continue until we've tried
# all methods.
# 235 == 'Authentication successful'
# 503 == 'Error: already authenticated'
# We could not login successfully.  Return result of last attempt.
# RFC 3207:
# The client MUST discard any knowledge obtained from
# the server, such as the list of SMTP service extensions,
# which was not obtained from the TLS negotiation itself.
# 501 Syntax error (no parameters allowed)
# 454 TLS not available due to temporary reason
# the server refused all our recipients
# If we got here, then somebody got our mail
# 'Resent-Date' is a mandatory field if the Message is resent (RFC 2822
# Section 3.6.6). In such a case, we use the 'Resent-*' fields.  However,
# if there is more than one 'Resent-' block there's no way to
# unambiguously determine which one is the most recent in all cases,
# so rather than guess we raise a ValueError in that case.
# TODO implement heuristics to guess the correct Resent-* block with an
# option allowing the user to enable the heuristics.  (It should be
# possible to guess correctly almost all of the time.)
# Prefer the sender field per RFC 2822:3.6.2.
# Make a local copy so we can delete the bcc headers.
# A new EHLO is required after reconnecting with connect()
# LMTP extension
# Handle Unix-domain sockets.
# Test the sendmail method, which tests most of the others.
# Note: This always sends to localhost.
# Python module wrapper for _functools C module
# to allow utilities written in Python to be added
# to the functools module.
# Written by Nick Coghlan <ncoghlan at gmail.com>,
# Raymond Hettinger <python at rcn.com>,
# and Łukasz Langa <lukasz at langa.pl>.
# See C source code for _functools credits/copyright
# import types, weakref  # Deferred to single_dispatch()
# Avoid importing types, so we can speedup import time
################################################################################
### update_wrapper() and wraps() decorator
# update_wrapper() and wraps() are tools to help write
# wrapper functions that can handle naive introspection
# Issue #17482: set __wrapped__ last so we don't inadvertently copy it
# from the wrapped function when updating __dict__
# Return the wrapper so this can be used as a decorator via partial()
### total_ordering class decorator
# The total ordering functions all invoke the root magic method directly
# rather than using the corresponding operator.  This avoids possible
# infinite recursion that could occur when the operator dispatch logic
# detects a NotImplemented result and then calls a reflected method.
# Find user-defined comparisons (not those inherited from object).
# prefer __lt__ to __le__ to __gt__ to __ge__
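A usage example for the decorator discussed above: define `__eq__` and one ordering method, and the remaining comparisons are filled in (invoking the root method directly, as the comment explains).

```python
from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, parts):
        self.parts = tuple(parts)
    def __eq__(self, other):
        return self.parts == other.parts
    def __lt__(self, other):
        return self.parts < other.parts

# __le__, __gt__ and __ge__ are derived from __lt__ and __eq__.
```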
### cmp_to_key() function converter
### reduce() sequence to a single item
### partial() argument application
# Purely functional, no descriptor behaviour
# just in case it's a subclass
# XXX does it need to be *exactly* dict?
# Descriptor version
# func could be a descriptor like classmethod which isn't callable,
# so we can't inherit from partial (it verifies func is callable)
# flattening is mandatory in order to place cls/self before all
# other arguments
# it's also more efficient since only one function will be called
# Assume __get__ returning something new indicates the
# creation of an appropriate callable
# If the underlying descriptor didn't do anything, treat this
# like an instance method
# Helper functions
### LRU Cache function decorator
# All of the code below relies on kwds preserving the order input by the user.
# Formerly, we sorted() the kwds before looping.  The new way is *much*
# faster; however, it means that f(x=1, y=2) will now be treated as a
# distinct call from f(y=2, x=1) which will be cached separately.
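The behaviour described above can be observed directly: keyword order is part of the cache key, so the same logical call spelled two ways produces two cache entries.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(*, x, y):
    return x + y

f(x=1, y=2)
f(y=2, x=1)   # same result, but a distinct cache entry
```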
# Users should only access the lru_cache through its public API:
# The internals of the lru_cache are encapsulated for thread safety and
# to allow the implementation to change (including a possible C version).
# Negative maxsize is treated as 0
# The user_function was passed in directly via the maxsize argument
# Constants shared by all lru cache instances:
# unique object used to signal cache misses
# build a key from the function arguments
# names for the link fields
# bound method to lookup a key or return None
# get cache size without calling len()
# because linkedlist updates aren't threadsafe
# root of the circular doubly linked list
# initialize by pointing to self
# No caching -- just a statistics update
# Simple caching without ordering or size limit
# Size limited caching that tracks accesses by recency
# Move the link to the front of the circular queue
# Getting here means that this same key was added to the
# cache while the lock was released.  Since the link
# update is already done, we need only return the
# computed result and update the count of misses.
# Use the old root to store the new key and result.
# Empty the oldest link and make it the new root.
# Keep a reference to the old key and old result to
# prevent their ref counts from going to zero during the
# update. That will prevent potentially arbitrary object
# clean-up code (i.e. __del__) from running while we're
# still adjusting the links.
# Now update the cache dictionary.
# Save the potentially reentrant cache[key] assignment
# for last, after the root and links have been put in
# a consistent state.
# Put result in a new link at the front of the queue.
# Use the cache_len bound method instead of the len() function
# which could potentially be wrapped in an lru_cache itself.
### cache -- simplified access to the infinity cache
### singledispatch() - single-dispatch generic function decorator
# purge empty sequences
# find merge candidates among seq heads
# reject the current head, it appears later
# remove the chosen candidate
# Bases up to the last explicit ABC are considered first.
# If *cls* is the class that introduces behaviour described by
# an ABC *base*, insert said ABC to its MRO.
# Remove entries which are already present in the __mro__ or unrelated.
# Remove entries which are strict bases of other entries (they will end up
# in the MRO anyway).
# Subclasses of the ABCs in *types* which are also implemented by
# *cls* can be used to stabilize ABC ordering.
# Favor subclasses with the biggest number of useful bases
# If *match* is an implicit ABC but there is another unrelated,
# equally matching implicit ABC, refuse the temptation to guess.
# There are many programs that use functools without singledispatch, so we
# trade-off making singledispatch marginally slower for the benefit of
# making start-up of such applications slightly faster.
# only import typing if annotation parsing is necessary
### cached_property() - property result cached as instance attribute
# not all objects have __dict__ (e.g. class defines slots)
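A usage example for the point above: the computed value is stored in the instance `__dict__`, which is why a class defining `__slots__` without `__dict__` cannot use `cached_property`.

```python
from functools import cached_property

class Circle:
    def __init__(self, r):
        self.r = r
    @cached_property
    def area(self):
        # Computed once, then cached in the instance __dict__.
        return 3.141592653589793 * self.r ** 2

c = Circle(2)
_ = c.area
```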
# Wrapper module for _ssl, providing some additional facilities
# implemented in Python.  Written by Bill Janssen.
# if we can't import it, let the error propagate
# RAND_egd is not supported on some platforms
# pseudo content types
# for DER-to-PEM translation
# keep that public name in module namespace
# speed up common case w/o wildcards
# Only match wildcard in leftmost segment.
# no right side
# no partial wildcard matching
# wildcard must match at least one char
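The rules above can be modelled with a simplified matcher (this is an illustration, not ssl's actual hostname-matching implementation, which also handles IDNA and certificate fields): exact comparison when there is no wildcard, and otherwise a single `*` covering the entire leftmost label that must match at least one character.

```python
def dnsname_match(pattern, hostname):
    # Common case: no wildcard, plain case-insensitive comparison.
    if '*' not in pattern:
        return pattern.lower() == hostname.lower()
    head, _, rest = pattern.partition('.')
    host_head, _, host_rest = hostname.partition('.')
    # Only a full '*' in the leftmost label; no partial wildcard matching,
    # and no wildcards to the right of the first label.
    if head != '*' or '*' in rest:
        return False
    # The wildcard must match at least one character.
    if not host_head:
        return False
    return rest.lower() == host_rest.lower()
```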
# inet_aton() also accepts strings like '1', '127.1', some also trailing
# data like '127.0.0.1 whatever'.
# not an IPv4 address
# only accept injective ipnames
# refuse for short IPv4 notation and additional trailing data
# AF_INET6 not available
# OpenSSL may add a trailing newline to a subjectAltName's IP address,
# commonly with IPv6 addresses. Strip off trailing \n.
# environment vars shadow paths
# SSLSocket is assigned later.
# SSLObject is assigned later.
# SSLSocket class handles server_hostname encoding before it calls
# ctx._wrap_socket()
# Need to encode server_hostname here because _wrap_bio() can only
# handle ASCII str.
# CA certs are never PKCS#7 encoded
# SSLContext sets OP_NO_SSLv2, OP_NO_SSLv3, OP_NO_COMPRESSION,
# OP_CIPHER_SERVER_PREFERENCE, OP_SINGLE_DH_USE and OP_SINGLE_ECDH_USE
# by default.
# verify certs and host name in client mode
# `VERIFY_X509_PARTIAL_CHAIN` makes OpenSSL's chain building behave more
# like RFC 3280 and 5280, which specify that chain building stops with the
# first trust anchor, even if that anchor is not self-signed.
# `VERIFY_X509_STRICT` makes OpenSSL more conservative about the
# certificates it accepts, including "disabling workarounds for
# some broken certificates."
# no explicit cafile, capath or cadata but the verify mode is
# CERT_OPTIONAL or CERT_REQUIRED. Let's try to load default system
# root CA certificates for the given purpose. This may fail silently.
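These defaults are what `ssl.create_default_context()` produces: a client-side context verifies certificates and host names, while a server-side context does not demand client certificates:

```python
import ssl

# Client-side context (Purpose.SERVER_AUTH is the default).
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# Server-side context: no client-certificate verification by default.
srv = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
assert srv.check_hostname is False
assert srv.verify_mode == ssl.CERT_NONE
```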
# OpenSSL 1.1.1 keylog file
# load CA root certs
# Used by http.client if no context is explicitly passed.
# Backwards compatibility alias, even though it's not a public name.
# Now SSLSocket is responsible for closing the file descriptor.
# See if we are connected
# We are not connected so this is not supposed to block, but
# testing revealed otherwise on macOS and Windows so we do
# the non-blocking dance regardless. Since we raise whenever any
# data is found, consuming that data is harmless.
# EINVAL occurs for recv(1) on non-connected on unix sockets.
# This prevents pending data sent to the socket before it was
# closed from escaping to the caller who could otherwise
# presume it came through a successful TLS connection.
# Add the SSLError attributes that _ssl.c always adds.
# Explicitly break the reference cycle.
# Must come after setblocking() calls.
# create the SSL object
# non-blocking
# raise an exception here if you wish to check for spurious closes
# getpeername() will raise ENOTCONN if the socket is really
# not connected; note that we can be connected even without
# _connected being set, e.g. if connect() first returned
# EAGAIN.
# Ensure programs don't send data unencrypted if they try to
# use this method.
# os.sendfile() works with plain sockets only
# Here we assume that the socket is client-side, and not
# connected at the time of the call.  We connect it, then wrap it.
# Python does not support forward declaration of types.
# some utility functions
# NOTE: no month, fixed GMT
# found valid month
# Return an integer; the previous mktime()-based implementation
# returned a float (fractional seconds are always zero here).
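The fixed-GMT parsing described here is `ssl.cert_time_to_seconds()`, which takes a certificate's `notBefore`/`notAfter` string and returns an integer epoch value:

```python
import ssl
import time

secs = ssl.cert_time_to_seconds("Jun 26 21:41:46 2025 GMT")
assert isinstance(secs, int)                        # no longer a float
assert time.gmtime(secs)[:6] == (2025, 6, 26, 21, 41, 46)
```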
# Comparison Operations *******************************************************#
# Logical Operations **********************************************************#
# Mathematical/Bitwise Operations *********************************************#
# Sequence Operations *********************************************************#
# Other Operations ************************************************************#
# Generalized Lookup Objects **************************************************#
# In-place Operations *********************************************************#
# All of these "__func__ = func" assignments have to happen after importing
# from _operator to make sure they're set to the right function
# The __main__.py used if the user specifies "-m module:fn".
# Note that this will always be written as UTF-8 (module and
# function names can be non-ASCII in Python 3).
# We add a coding cookie even though UTF-8 is the default in Python 3
# because the resulting archive may be intended to be run under Python 2.
# The Windows launcher defaults to UTF-8 when parsing shebang lines if the
# file has no BOM. So use UTF-8 on Windows.
# On Unix, use the filesystem encoding.
# Skip the shebang line from the source.
# Read 2 bytes of the source and check if they are #!.
# Discard the initial 2 bytes and the rest of the shebang line.
# If there was no shebang, "first_2" contains the first 2 bytes
# of the source file, so write them before copying the rest
# of the file.
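The shebang-skipping logic reads like this (a sketch with names of my own; zipapp's internal helper differs in detail):

```python
import io

def copy_stripping_shebang(src, dst):
    """Copy src to dst, dropping a leading '#!' line if present."""
    first_2 = src.read(2)
    if first_2 == b'#!':
        src.readline()        # discard the rest of the shebang line
    else:
        dst.write(first_2)    # no shebang: keep the first two bytes
    dst.write(src.read())
```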
# Are we copying an existing archive?
# We are creating a new archive from a directory.
# Check that main has the right format.
# Create the list of files to add to the archive now, in case
# the target is being created in the source directory - we
# don't want the target being added to itself
# The target cannot be in the list of files to add. If it were, we'd
# end up overwriting the source file and writing the archive into
# itself, which is an error. We therefore check for that case and
# provide a helpful message for the user.
# Note that we only do a simple path equality check. This won't
# catch every case, but it will catch the common case where the
# source is the CWD and the target is a file in the CWD. More
# thorough checks don't provide enough value to justify the extra
# cost.
# If target is a file-like object, it will simply fail to compare
# equal to any of the entries in files_to_add, so there's no need
# to add a special check for that.
# Handle `python -m zipapp archive.pyz --info`.
# Module and documentation by Eric S. Raymond, 21 Dec 1998
# Input stacking and error message cleanup added by ESR, March 2000
# push_source() and pop_source() made explicit by ESR, January 2001.
# Posix compliance, split(), string arguments, and
# iterator interface by Gustavo Niemeyer, April 2003.
# changes to tokenize more like Posix shells by Vinay Sajip, July 2016.
# _pushback_chars is a push back queue used by lookahead logic
# these chars added because allowed in file names, args, wildcards
# remove any punctuation chars from wordchars
# No pushback.  Get a token.
# Handle inclusions
# Maybe we got EOF instead?
# Neither inclusion nor EOF
# past end of file
# emit current token
# XXX what error should be raised here?
# In posix shells, only the quote itself or the escape
# character may be escaped within quotes.
# This implements cpp-like semantics for relative-path inclusion.
# use single quotes, and put single quotes into double quotes
# the string $'b is then quoted as '$'"'"'b'
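That quoting strategy is visible in `shlex.quote()`: the whole string is wrapped in single quotes, and each embedded single quote becomes the sequence close-quote, double-quoted quote, reopen-quote:

```python
import shlex

quoted = shlex.quote("$'b")
# The result is: '$'"'"'b'
assert quoted == "'$'\"'\"'b'"
```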
# Can't use functools.wraps() here because of bootstrap issues
# not defined in this class
# defined in this class and is the module intended
# Those imports must be deferred due to Python's build system
# where the reprlib module is imported before the math module.
# Integers with more than sys.get_int_max_str_digits() digits
# are rendered differently as their repr() raises a ValueError.
# See https://github.com/python/cpython/issues/135487.
# Note: math.log10(abs(x)) may be overestimated or underestimated,
# but for simplicity, we do not compute the exact number of digits.
# Bugs in x.__repr__() can cause arbitrary
# exceptions -- then make up something
# Since not all sequences of items can be sorted and comparison
# functions may raise arbitrary exceptions, return an unsorted
# sequence in that case.
# Prefixes for site-packages; add additional prefixes like /usr/local here
# Enable per user site-packages directory
# set it to False to disable the feature or True to force the feature
# for distutils.commands.install
# These values are initialized by the getuserbase() and getusersitepackages()
# functions, through the main() function when Python starts.
# don't mess with a PEP 302-supplied __file__
# This ensures that the initial path provided by the interpreter contains
# only absolute pathnames, even if we're running from the build directory.
# Filter out duplicate paths (on case-insensitive file systems also
# if they only differ in case); turn relative paths into absolute
# paths.
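A sketch of that filtering (the function name is mine; `site.removeduppaths()` is the real thing):

```python
import os

def dedupe_paths(paths):
    """Absolutize entries and drop duplicates, comparing case-insensitively
    where the platform's filesystem is case-insensitive."""
    seen = set()
    result = []
    for dir in paths:
        dir = os.path.abspath(dir)
        key = os.path.normcase(dir)   # folds case only where it matters
        if key not in seen:
            seen.add(key)
            result.append(dir)
    return result
```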
# Accept BOM markers in .pth files as we do in source files
# (Windows PowerShell 5.1 makes it hard to emit UTF-8 files without a BOM)
# Fallback to locale encoding for backward compatibility.
# We will deprecate this fallback in the future.
# Add path component
# check process uid == effective uid
# check process gid == effective gid
# NOTE: sysconfig and its dependencies are relatively large, but the site
# module needs only a small part of them.
# To speed up startup, we keep a copy of the needed pieces here.
# See https://bugs.python.org/issue29585
# Copy of sysconfig._get_implementation()
# Copy of sysconfig._getuserbase()
# Emscripten, iOS, tvOS, VxWorks, WASI, and watchOS have no home directories
# Same as sysconfig.get_path('purelib', os.name+'_user')
# this will also set USER_BASE
# disable user site and return None
# get the per user site-package path
# this call will also make sure USER_BASE and USER_SITE are set
# Not all modules are required to have a __file__ attribute.  See
# PEP 420 for more details.
# noqa: F401
# Reading the initialization (config) file may not be enough to set a
# completion key, so we set one first and then read the file.
# An OSError here could have many causes, but the most likely one
# is that there's no .inputrc file (or .editrc file in the case of
# Mac OS X + libedit) in the expected location.  In that case, we
# want to ignore the exception.
# If no history was loaded, default to .python_history,
# or PYTHON_HISTORY.
# The guard is necessary to avoid doubling history size at
# each interpreter exit when readline was already configured
# through a PYTHONSTARTUP hook, see:
# http://bugs.python.org/issue5845#msg198636
# home directory does not exist or is not writable
# https://bugs.python.org/issue19891
# gh-128066: read-only file system
# Issue 25185: Use UTF-8, as that's what the venv module uses when
# writing the file.
# Doing this here ensures venv takes precedence over user-site
# addsitepackages will process site_prefix again if it's in PREFIXES,
# but that's ok; known_paths will prevent anything being added twice
# removeduppaths() might make sys.path absolute.
# fix __file__ and __cached__ of already imported modules too.
# Prevent extending of sys.path when python was started with -S and
# site is imported later.
# Relies on the undocumented fact that BufferedReader.peek()
# always returns at least one byte (except at EOF), independent
# of the value of n
# Leftover data is not a valid bzip2 stream; ignore it.
# Notes for authors of new mailbox subclasses:
# Remember to fsync() changes to disk before closing a modified file
# or returning from a flush() method.  See functions _sync_flush() and
# _sync_close().
# This is only run once.
# If a message is not 7bit clean, we refuse to handle it since it
# likely came from reading invalid messages in text mode, and that way
# lies mojibake.
# Whether each message must end in a newline
# This assumes the target file is open in binary mode.
# Make sure the message ends with a newline
# Universal newline support.
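The newline guarantee can be sketched like this (the helper name is mine), assuming the target file is open in binary mode as noted:

```python
import io

def append_message(f, data):
    """Write one message, guaranteeing it ends with a newline."""
    f.write(data)
    if not data.endswith(b'\n'):
        f.write(b'\n')
```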
# Records last time we read cur/new
# Adjust if os/fs clocks are skewing
# No file modification should be done after the file is moved to its
# final position in order to prevent race conditions with changes
# from other programs
# This overrides an inapplicable implementation in the superclass.
# temp's subdir and suffix were specified by message.
# temp's subdir and suffix were defaults from add().
# TODO: check if flags are valid standard flag characters?
# TODO: check that flag is a valid standard flag character?
# Maildir changes are always written immediately, so there's nothing
# to do.
# 60 * 60 * 36
# This is used to generate unique file names.
# Fall through to here if stat succeeded or open raised EEXIST.
# If it has been less than two seconds since the last _refresh() call,
# we have to unconditionally re-read the mailbox just in case it has
# been modified, because os.path.getmtime() has a 2 sec resolution in the
# most common worst case (FAT) and a 1 sec resolution typically.  This
# results in a few unnecessary re-reads when _refresh() is called
# multiple times in that interval, but once the clock ticks over, we
# will only re-read as needed.  Because the filesystem might be being
# served by an independent system with its own clock, we record and
# compare with the mtimes from the filesystem.  Because the other
# system's clock might be skewing relative to our clock, we add an
# extra delta to our wait.  The default is one tenth second, but is an
# instance variable and so can be adjusted if dealing with a
# particularly skewed or irregular system.
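The decision described in this paragraph can be sketched as follows (the helper and its names are mine; Maildir keeps equivalent state in instance attributes):

```python
import time

def needs_reread(last_read, dir_mtime, skew_delta=0.1):
    """Should the Maildir be re-scanned?

    last_read  -- time.time() recorded at the previous cur/new scan
    dir_mtime  -- newest st_mtime of the cur/new directories
    skew_delta -- allowance for a skewed filesystem clock
    """
    if time.time() - last_read <= 2:        # inside FAT's 2 s resolution
        return True
    return dir_mtime + skew_delta >= last_read
```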
# Refresh toc
# This method is for backward compatibility only.
# No changes require rewriting the file.
# No need to sync the file
# Used to record mailbox size
# _append_message appends the message to the mailbox file. We
# don't need a full rewrite + rename, sync is enough.
# Messages have only been added, so syncing the file
# is enough.
# In order to be writing anything out at all, self._toc must
# already have been generated (and presumably has been modified
# by adding or deleting an item).
# Check length of self._file; if it's changed, some other process
# has modified the mailbox since we scanned it.
# self._file is about to get replaced, so no need to sync.
# Make sure the new file's mode and owner are the same as the old file's
# Sync has been done by self.flush() above.
# This is the first message, and the _pre_mailbox_hook
# hasn't yet been called. If self._pending is True,
# messages have been removed, so _pre_mailbox_hook must
# have been called already.
# Record current length of mailbox
# May be None.
# All messages must end in a newline character, and
# _post_message_hooks outputs an empty line between messages.
# The last line before the "From " line wasn't
# blank, but we consider it a start of a
# message anyway.
# Unlock and close so it can be deleted on Windows
# Skip b'1,' line specifying labels.
# Read up to the stop, or to the end
# Buffer size is arbitrary.
# There's nothing format-specific to explain.
# do *not* close the underlying file object for partial files,
# since it's global to the mailbox object
# Without write access, just skip dotlocking.
# Class for profiling python code. rev 1.0  6/2/94
# Sample timer for use with
# i_count = 0
# def integer_timer():
# itimes = integer_timer  # replace with C-coded timer returning integers
# The following are the static member functions for the profiler class
# Note that an instance of Profile() is *not* needed to call them.
# calibration constant
# Materialize in local dict for lookup speed.
# test out timer function
# This get_time() implementation needs to be defined
# here to capture the passed-in timer in the parameter
# list (for performance).  Note that we can't assume
# the timer() result contains two values in all
# cases.
# Heavily optimized dispatch routine for time.process_time() timer
# put back unrecorded delta
# Dispatch routine for best timer program (return = scalar, fastest if
# an integer but float works too -- and time.process_time() relies on that).
# Dispatch routine for macintosh (timer returns time in ticks of
# 1/60th second)
# SLOW generic dispatch routine for timer returning lists of numbers
# In the event handlers, the first 3 elements of self.cur are unpacked
# into vrbls w/ 3-letter names.  The last two characters are meant to be
# mnemonic:
# Prefix "r" means part of the Returning or exiting frame.
# Prefix "p" means part of the Previous or Parent or older frame.
# This is the only occurrence of the function on the stack.
# Else this is a (directly or indirectly) recursive call, and
# its cumulative time will get updated when the topmost call to
# it returns.
# hack: gather more
# stats such as the amount of time added to ct courtesy
# of this specific call, and the contribution to cc
# courtesy of this call.
# the C function returned
# The next few functions play with self.cmd. By carefully preloading
# our parallel stack, we can force the profiled result to include
# an arbitrary string as the name of the calling function.
# We use self.cmd as that string, and the resulting stats look
# very nice :-).
# already set
# collect stats from pending stack, including getting final
# timings for self.cmd frame.
# We *can* cause assertion errors here if
# dispatch_trace_return checks for a frame match!
# The following two methods can be called by clients to use
# a profiler to profile a statement, given as a string.
# This method is more useful to profile a single function call.
# The following calculates the overhead for using a profiler.  The
# problem is that it takes a fair amount of time for the profiler
# to stop the stopwatch (from the time it receives an event).
# Similarly, there is a delay from the time that the profiler
# re-starts the stopwatch before the user's code really gets to
# continue.  The following code tries to measure the difference on
# a per-event basis.
# Note that this difference is only significant if there are a lot of
# events, and relatively little user code per event.  For example,
# code with small functions will typically benefit from having the
# profiler calibrated for the current platform.  This *could* be
# done on the fly during init() time, but it is not worth the
# effort.  Also note that if too large a value is specified, then
# execution time on some functions will actually appear as a
# negative number.  It is *normal* for some functions (with very
# low call counts) to have such negative stats, even if the
# calibration figure is "correct."
# One alternative to profile-time calibration adjustments (i.e.,
# adding in the magic little delta during each event) is to track
# more carefully the number of events (and cumulatively, the number
# of events during sub functions) that are seen.  If this were
# done, then the arithmetic could be done after the fact (i.e., at
# display time).  Currently, we track only call/return events.
# These values can be deduced by examining the callees and callers
# vectors for each function.  Hence we *can* almost correct the
# internal time figure at print time (note that we currently don't
# track exception event processing counts).  Unfortunately, there
# is currently no similar information for cumulative sub-function
# time.  It would not be hard to "get all this info" at profiler
# time.  Specifically, we would have to extend the tuples to keep
# counts of this in each frame, and then extend the defs of timing
# tuples to include the significant two figures. I'm a bit fearful
# that this additional feature will slow the heavily optimized
# event/time ratio (i.e., the profiler would run slower, for a very
# low "value added" feature.)
#**************************************************************
# Set up a test case to be run with and without profiling.  Include
# lots of calls, because we're trying to quantify stopwatch overhead.
# Do not raise any exceptions, though, because we want to know
# exactly how many profile events are generated (one call event, +
# one return event, per Python-level call).
# warm up the cache
# elapsed_noprofile <- time f(m) takes without profiling.
# elapsed_profile <- time f(m) takes with profiling.  The difference
# is profiling overhead, only some of which the profiler subtracts
# out on its own.
# reported_time <- "CPU seconds" the profiler charged to f and f1.
# reported_time - elapsed_noprofile = overhead the profiler wasn't
# able to measure.  Divide by twice the number of calls (since there
# are two profiler events per call in this test) to get the hidden
# overhead per event.
#****************************************************************************
# The script that we're profiling may chdir, so capture the absolute path
# to the output file at startup.
# Prevent "Exception ignored" during interpreter shutdown.
# When invoked as main program, invoke the profiler on a script
#from importlib import _bootstrap_external
#from importlib import _bootstrap  # for _verbose_message
# for _verbose_message
# for check_hash_based_pycs
# for open
# for loads
# for modules
# for mktime
# For warn()
# _read_directory() cache
# standard EOCD signature
# Zip64 EOCD Locator signature
# Zip64 EOCD signature
# Split the "subdirectory" from the Zip archive path, lookup a matching
# entry in sys.path_importer_cache, fetch the file directory from there
# if found, or else read it from the archive.
# On Windows a ValueError is raised for too long paths.
# Back up one path element.
# it exists
# stat.S_ISREG
# it's not a file
# a prefix directory following the ZIP file path.
# Not a module or regular package. See if this is a directory, and
# therefore possibly a portion of a namespace package.
# We're only interested in the last path component of fullname
# earlier components are recorded in self.prefix.
# This is possibly a portion of a namespace
# package. Return the string representing its path,
# without a trailing separator.
# Return a string matching __file__ for the named module
# Deciding the filename requires working out where the code
# would come from if the module was actually loaded
# we have the module, but no source
# Return a bool signifying whether the module is a package or not.
# Load and return the module named by 'fullname'.
# add __path__ to the module *before* the code gets
# executed
# _zip_searchorder defines how we search for a module in the Zip
# archive: we first search for a package __init__, then for
# non-package .pyc, and .py entries. The .pyc entries
# are swapped by initzipimport() if we run in optimized mode. Also,
# '/' is replaced by path_sep there.
# Given a module name, return the potential file path in the
# archive (without extension).
# Does this path represent a directory?
# See if this is a "directory". If so, it's eligible to be part
# of a namespace package. We test by seeing if the name, with an
# appended path separator, exists.
# If dirpath is present in self._get_files(), we have a directory.
# Return some information about a module.
# implementation
# _read_directory(archive) -> files dict (new reference)
# Given a path to a Zip archive, build a dict, mapping file names
# (local to the archive, using SEP as a separator) to toc entries.
# A toc_entry is a tuple:
# (__file__,        # value to use for __file__, available for all files,
# Directories can be recognized by the trailing path_sep in the name,
# data_size and file_offset are 0.
# GH-87235: On macOS all file descriptors for /dev/fd/N share the same
# file offset, reset the file offset after scanning the zipfile directory
# to not cause problems when someone runs 'python3 /dev/fd/9 9<some_script'
# Check if there's a comment.
# Zip64 at "correct" offset from standard EOCD
# Buffer now contains a valid EOCD, and header_position gives the
# starting position of it.
# N.b. if someday you want to prefer the standard (non-zip64) EOCD,
# you need to adjust position by 76 for arc to be 0.
# XXX: These are cursory checks but are not as exact or strict as they
# could be.  Checking the arc-adjusted value is probably good too.
# On just-a-zipfile these values are the same and arc_offset is zero; if
# the file has some bytes prepended, `arc_offset` is the number of such
# bytes.  This is used for pex as well as self-extracting .exe.
# Start of Central Directory
# Start of file header
# Bad: Central Dir File Header
# On Windows, calling fseek to skip over the fields we don't use is
# slower than reading the data because fseek flushes stdio's
# internal buffers.    See issue #8745.
# UTF-8 file names extension
# Historical ZIP filename encoding
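Decoding a member name therefore depends on the UTF-8 general-purpose flag bit (0x800); a sketch (the function name is mine):

```python
def decode_zip_name(raw, flags):
    """Decode a ZIP member name from its raw bytes."""
    if flags & 0x800:              # UTF-8 file names extension
        return raw.decode('utf-8')
    return raw.decode('cp437')     # historical ZIP filename encoding
```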
# Ordering matches unpacking below.
# need to decode extra_data looking for a zip64 extra (which might not
# be present)
# N.b. Here be dragons: the ordering of these is different than
# the header fields, and it's really easy to get it wrong since
# naturally-occurring zips that use all 3 are >4GB
# For a typical zip, this bytes-slicing only happens 2-3 times, on
# small data like timestamps and filesizes.
# XXX These two statements seem swapped because `central_directory_position`
# is a position within the actual file, but `file_offset` (when compared) is
# as encoded in the entry, not adjusted for this file.
# N.b. this must be after we've potentially read the zip64 extra which can
# change `file_offset`.
# During bootstrap, we may need to load the encodings
# package from a ZIP file. But the cp437 encoding is implemented
# in Python in the encodings package.
# Break out of this dependency by using the translation table for
# the cp437 encoding.
# ASCII part, 8 rows x 16 chars
# non-ASCII part, 16 rows x 8 chars
# Return the zlib.decompress function object, or NULL if zlib couldn't
# be imported. The function is cached when found, so subsequent calls
# don't import zlib again.
# Someone has a zlib.py[co] in their Zip file
# let's avoid a stack overflow.
# Given a path to a Zip file and a toc_entry, return the (uncompressed) data.
# Check to make sure the local file header is correct
# Bad: Local File Header
# Start of file data
# data is not compressed
# Decompress with zlib
# Lenient date/time comparison function. The precision of the mtime
# in the archive is lower than the mtime stored in a .pyc: we
# must allow a difference of at most one second.
# dostime only stores even seconds, so be lenient
# Given the contents of a .py[co] file, unmarshal the data
# and return the code object. Raises ImportError if the magic word doesn't
# match, or if the recorded .py[co] metadata does not match the source.
# We don't use _bootstrap_external._validate_timestamp_pyc
# to allow for a more lenient timestamp check.
# Replace any occurrences of '\r\n?' in the input string with '\n'.
# This converts DOS and Mac line endings to Unix line endings.
# Given a string buffer containing Python source code, compile it
# and return a code object.
# Convert the date/time values found in the Zip archive to a value
# that's compatible with the time stamp stored in .pyc files.
# bits 9..15: year
# bits 5..8: month
# bits 0..4: day
# bits 11..15: hours
# bits 5..10: minutes
# bits 0..4: seconds / 2
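Unpacking the two 16-bit words per the standard DOS date/time layout (year biased by 1980; seconds stored halved, so only even values survive); the helper name is mine:

```python
def parse_dos_datetime(d, t):
    """Return (year, month, day, hours, minutes, seconds) from the
    DOS date and time words stored in a ZIP entry."""
    return (
        ((d >> 9) & 0x7f) + 1980,   # bits 9..15: years since 1980
        (d >> 5) & 0x0f,            # bits 5..8:  month
        d & 0x1f,                   # bits 0..4:  day
        (t >> 11) & 0x1f,           # bits 11..15: hours
        (t >> 5) & 0x3f,            # bits 5..10:  minutes
        (t & 0x1f) * 2,             # bits 0..4:   seconds / 2
    )
```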
# Given a path to a .pyc file in the archive, return the
# modification time of the matching .py file and its size,
# or (0, 0) if no source is available.
# strip 'c' or 'o' from *.py[co]
# fetch the time stamp of the .py file for comparison
# with an embedded pyc time stamp
# contents of the matching .py file, or None if no source
# is available.
# Get the code object associated with the module specified by
# 'fullname'.
# bad magic number or non-matching mtime
# in byte code, try next
# The following flags match the values from Include/cpython/compile.h
# Caveat emptor: These flags are undocumented on purpose and depending
# on their effect outside the standard library is **unsupported**.
# Check for source consisting of only blank lines and comments.
# Leave it alone.
# Replace it with a 'pass' statement
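A sketch of that check (the helper is mine; `codeop` does this while preparing input for `compile()`):

```python
def normalize_source(source):
    """Return 'pass' for input that is only blank lines and comments,
    so compiling it still yields a complete code object."""
    for line in source.split("\n"):
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            return source          # real code: leave it alone
    return "pass"
```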
# Disable compiler warnings when checking for incomplete input.
# Let other compile() errors propagate.
# fallthrough
# this is an ast.Module in this case
# The CO_xxx symbols are defined here under the same names defined in
# code.h and used by compile.h, so that an editor search will find them here.
# However, they're not exported in __all__, because they don't really belong to
# this module.
# nested_scopes
# generators (obsolete, was 0x1000)
# division
# perform absolute imports by default
# with statement
# print function
# unicode string literals
# StopIteration becomes RuntimeError in generators
# annotations become strings at runtime
# Functions
# Classes
# Exceptions
# We need to use objects from the threading module, but the threading
# module may also want to use our `local` class, if support for locals
# isn't compiled in to the `thread` module.  This creates potential problems
# with circular imports.  For that reason, we don't import `threading`
# until the bottom of this file (a hack sufficient to worm around the
# potential problems).  Note that all platforms on CPython do have support
# for locals in the `thread` module, and there is no circular import problem
# then, so problems introduced by fiddling the order of imports here won't
# manifest.
# The key used in the Thread objects' attribute dicts.
# We keep it a string for speed but make it unlikely to clash with
# a "real" attribute.
# { id(Thread) -> (ref(Thread), thread-local dict) }
# When the localimpl is deleted, remove the thread attribute.
# When the thread is deleted, remove the local dict.
# Note that this is suboptimal if the thread object gets
# caught in a reference loop. We would like to be called
# as soon as the OS-level thread ends instead.
# We need to create the thread dict in anticipation of
# __init__ being called, to make sure we don't call it
# again ourselves.
# Other ideas:
# - A pickle verifier:  read a pickle and check it exhaustively for
# - A protocol identifier:  examine a pickle and return its protocol number
# - A pickle optimizer:  for example, tuple-building code is sometimes more
# "A pickle" is a program for a virtual pickle machine (PM, but more accurately
# called an unpickling machine).  It's a sequence of opcodes, interpreted by the
# PM, building an arbitrarily complex Python object.
# For the most part, the PM is very simple:  there are no looping, testing, or
# conditional instructions, no arithmetic and no function calls.  Opcodes are
# executed once each, from first to last, until a STOP opcode is reached.
# The PM has two data areas, "the stack" and "the memo".
# Many opcodes push Python objects onto the stack; e.g., INT pushes a Python
# integer object on the stack, whose value is gotten from a decimal string
# literal immediately following the INT opcode in the pickle bytestream.  Other
# opcodes take Python objects off the stack.  The result of unpickling is
# whatever object is left on the stack when the final STOP opcode is executed.
# The memo is simply an array of objects, or it can be implemented as a dict
# mapping little integers to objects.  The memo serves as the PM's "long term
# memory", and the little integers indexing the memo are akin to variable
# names.  Some opcodes pop a stack object into the memo at a given index,
# and others push a memo object at a given index onto the stack again.
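The opcode programs described above can be inspected with `pickletools.genops()`. For example, the protocol-0 pickle of the integer 1 is the four-byte program `b'I1\n.'`: INT pushes 1 onto the stack from its decimal argument, and STOP ends execution.

```python
import pickle
import pickletools

data = pickle.dumps(1, protocol=0)
assert data == b"I1\n."            # INT opcode, decimal argument, STOP

ops = [op.name for op, arg, pos in pickletools.genops(data)]
assert ops == ["INT", "STOP"]
```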
# At heart, that's all the PM has.  Subtleties arise for these reasons:
# + Object identity.  Objects can be arbitrarily complex, and subobjects
# + Recursive objects.  For example, after "L = []; L.append(L)", L is a
# + Things pickle doesn't know everything about.  Examples of things pickle
# + Backward compatibility and micro-optimization.  As explained below,
# Pickle protocols:
# For compatibility, the meaning of a pickle opcode never changes.  Instead new
# pickle opcodes get added, and each version's unpickler can handle all the
# pickle opcodes in all protocol versions to date.  So old pickles continue to
# be readable forever.  The pickler can generally be told to restrict itself to
# the subset of opcodes available under previous protocol versions too, so that
# users can create pickles under the current version readable by older
# versions.  However, a pickle does not contain its version number embedded
# within it.  If an older unpickler tries to read a pickle using a later
# protocol, the result is most likely an exception due to seeing an unknown (in
# the older unpickler) opcode.
# The original pickle used what's now called "protocol 0", and what was called
# "text mode" before Python 2.3.  The entire pickle bytestream is made up of
# printable 7-bit ASCII characters, plus the newline character, in protocol 0.
# That's why it was called text mode.  Protocol 0 is small and elegant, but
# sometimes painfully inefficient.
# The second major set of additions is now called "protocol 1", and was called
# "binary mode" before Python 2.3.  This added many opcodes with arguments
# consisting of arbitrary bytes, including NUL bytes and unprintable "high bit"
# bytes.  Binary mode pickles can be substantially smaller than equivalent
# text mode pickles, and sometimes faster too; e.g., BININT represents a 4-byte
# int as 4 bytes following the opcode, which is cheaper to unpickle than the
# (perhaps) 11-character decimal string attached to INT.  Protocol 1 also added
# a number of opcodes that operate on many stack elements at once (like APPENDS
# and SETITEMS), and "shortcut" opcodes (like EMPTY_DICT and EMPTY_TUPLE).
# The third major set of additions came in Python 2.3, and is called "protocol
# 2".  This added:
# - A better way to pickle instances of new-style classes (NEWOBJ).
# - A way for a pickle to identify its protocol (PROTO).
# - Time- and space- efficient pickling of long ints (LONG{1,4}).
# - Shortcuts for small tuples (TUPLE{1,2,3}).
# - Dedicated opcodes for bools (NEWTRUE, NEWFALSE).
# - The "extension registry", a vector of popular objects that can be pushed
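The protocol-versioning rules above can be observed directly. A minimal sketch: protocol 0 output is printable ASCII plus newlines, while protocol 2 and later announce themselves with a leading PROTO opcode, and every protocol round-trips the same value.

```python
import pickle

data = (1, 2, True)
p0 = pickle.dumps(data, protocol=0)
# protocol 0 ("text mode") is printable 7-bit ASCII plus newlines
assert all(b == 0x0A or 0x20 <= b < 0x7F for b in p0)
# protocol 2 and later start with the PROTO opcode and a version byte
p2 = pickle.dumps(data, protocol=2)
assert p2[:2] == b'\x80\x02'
# every protocol reconstructs the same object
assert pickle.loads(p0) == pickle.loads(p2) == data
```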
# Another independent change with Python 2.3 is the abandonment of any
# pretense that it might be safe to load pickles received from untrusted
# parties -- no sufficient security analysis has been done to guarantee
# this and there isn't a use case that warrants the expense of such an
# analysis.
# To this end, all tests for __safe_for_unpickling__ or for
# copyreg.safe_constructors are removed from the unpickling code.
# References to these variables in the descriptions below are to be seen
# as describing unpickling in Python 2.2 and before.
# Meta-rule:  Descriptions are stored in instances of descriptor objects,
# with plain constructors.  No meta-language is defined from which
# descriptors could be constructed.  If you want, e.g., XML, write a little
# program to generate XML from the objects.
##############################################################################
# Some pickle opcodes have an argument, following the opcode in the
# bytestream.  An argument is of a specific type, described by an instance
# of ArgumentDescriptor.  These are not to be confused with arguments taken
# off the stack -- ArgumentDescriptor applies only to arguments embedded in
# the opcode stream, immediately following an opcode.
# Represents the number of bytes consumed by an argument delimited by the
# next newline character.
# Represents the number of bytes consumed by a two-argument opcode where
# the first argument gives the number of bytes in the second argument.
# num bytes is 1-byte unsigned int
# num bytes is 4-byte signed little-endian int
# num bytes is 4-byte unsigned little-endian int
# num bytes is 8-byte unsigned little-endian int
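The embedded-argument encodings above can be sketched with `struct` over a hypothetical byte stream (the stream contents here are made up for illustration): a 1-byte unsigned length prefix followed by that many payload bytes, then a 4-byte unsigned little-endian int.

```python
import io
import struct

# hypothetical stream: 1-byte length prefix, payload, then a 4-byte uint
f = io.BytesIO(b'\x03abc\x2a\x00\x00\x00')
n = f.read(1)[0]                     # num bytes is a 1-byte unsigned int
payload = f.read(n)                  # second argument: n raw bytes
assert payload == b'abc'
(val,) = struct.unpack('<I', f.read(4))   # 4-byte unsigned little-endian
assert val == 42
```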
# name of descriptor record, also a module global name; a string
# length of argument, in bytes; an int; UP_TO_NEWLINE and
# TAKEN_FROM_ARGUMENT{1,4,8} are negative values for variable-length
# cases
# a function taking a file-like object, reading this kind of argument
# from the object at the current position, advancing the current
# position by n bytes, and returning the value of the argument
# human-readable docs for this arg descriptor; a string
# lose the newline
# There's a hack for True and False here.
# Protocol 2 formats
# Object descriptors.  The stack used by the pickle machine holds objects,
# and in the stack_before and stack_after attributes of OpcodeInfo
# descriptors we need names to describe the various types of objects that can
# appear on the stack.
# name of descriptor record, for info only
# type of object, or tuple of type objects (meaning the object can
# be of any type in the tuple)
# human-readable docs for this kind of stack object; a string
# Descriptors for pickle opcodes.
# symbolic name of opcode; a string
# the code used in a bytestream to represent the opcode; a
# one-character string
# If the opcode has an argument embedded in the byte string, an
# instance of ArgumentDescriptor specifying its type.  Note that
# arg.reader(s) can be used to read and decode the argument from
# the bytestream s, and arg.doc documents the format of the raw
# argument bytes.  If the opcode doesn't have an argument embedded
# in the bytestream, arg should be None.
# what the stack looks like before this opcode runs; a list
# what the stack looks like after this opcode runs; a list
# the protocol number in which this opcode was introduced; an int
# human-readable docs for this opcode; a string
# Ways to spell integers.
# Ways to spell strings (8-bit, not Unicode).
# Bytes (protocol 3 and higher)
# Bytearray (protocol 5 and higher)
# Out-of-band buffer (protocol 5 and higher)
# Ways to spell None.
# Ways to spell bools, starting with proto 2.  See INT for how this was
# done before proto 2.
# Ways to spell Unicode strings.
# this may be pure-text, but it's a later addition
# Ways to spell floats.
# Ways to build lists.
# Ways to build tuples.
# Ways to build dicts.
# Ways to build sets
# Way to build frozensets
# Stack manipulation.
# Memo manipulation.  There are really only two operations (get and put),
# each in all-text, "short binary", and "long binary" flavors.
# Access the extension registry (predefined objects).  Akin to the GET
# family.
# Push a class object, or module function, on the stack, via its module
# and name.
# Ways to build objects of classes pickle doesn't know about directly
# (user-defined classes).  I despair of documenting this accurately
# and comprehensibly -- you really have to read the pickle code to
# find all the special cases.
# Machine control.
# Framing support.
# Ways to deal with persistent IDs.
# Verify uniqueness of .name and .code members.
# Build a code2op dict, mapping opcode characters to OpcodeInfo records.
# Also ensure we've got the same stuff as pickle.py, although the
# introspection here is dicey.
# Forget this one.  Any left over in copy at the end are a problem
# of a different kind.
# A pickle opcode generator.
# A pickle optimizer.
# set of all PUT ids
# set of ids used by a GET opcode
# (op, idx) or (pos, end_pos)
# Copy the opcodes except for PUTS without a corresponding GET
# Write the PROTO header before any framing
# A symbolic pickle disassembler.
# Most of the hair here is for sanity checks, but most of it is needed
# anyway to detect when a protocol 0 POP takes a MARK off the stack
# (which in turn is needed to indent MARK blocks correctly).
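The disassembler can be exercised on a small pickle; `pickletools.dis` prints one symbolic opcode per line and indents the span between a MARK and the opcode that consumes it (APPENDS here).

```python
import io
import pickle
import pickletools

buf = io.StringIO()
pickletools.dis(pickle.dumps([1, 2]), out=buf)
listing = buf.getvalue()
# the list items sit between MARK and the bulk APPENDS
assert 'MARK' in listing and 'APPENDS' in listing
```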
# crude emulation of unpickler stack
# crude emulation of unpickler memo
# max protocol number seen
# bytecode positions of MARK opcodes
# column hint for annotations
# don't mutate
# See whether a MARK should be popped.
# Pop everything at and after the topmost markobject.
# Stop later code from popping too much.
# Check for correct memo usage.
# for better stack emulation
# make a mild effort to align arguments
# make a mild effort to align annotations
# Note that we delayed complaining until the offending opcode
# was printed.
# Emulate the stack effects.
# For use in the doctest, simply as an example of a class to pickle.
# Copyright 2007 Google Inc.
# big endian
# First merge
# Merge consecutive subnets
# Then iterate over resulting networks, skipping subsumed subnets
# Since they are sorted, last.network_address <= net.network_address
# is a given.
# split IP addresses and networks
# sort and dedup
# find consecutive address ranges in the sorted sequence and summarize them
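The merge-and-subsume steps above are what `ipaddress.collapse_addresses` performs: consecutive sibling subnets merge upward, and any network contained in an earlier (sorted) network is skipped.

```python
import ipaddress

nets = [ipaddress.ip_network('192.0.2.0/25'),
        ipaddress.ip_network('192.0.2.128/25'),   # consecutive: merges up
        ipaddress.ip_network('192.0.2.0/26')]     # subsumed by the first
merged = list(ipaddress.collapse_addresses(nets))
assert merged == [ipaddress.ip_network('192.0.2.0/24')]
```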
# int allows a leading +/- as well as surrounding whitespace,
# so we ensure that isn't the case
# Parse the netmask/hostmask like an IP address.
# Try matching a netmask (this would be /1*0*/ as a bitwise regexp).
# Note that the two ambiguous cases (all-ones and all-zeroes) are
# treated as netmasks.
# Invert the bits, and try matching a /0+1+/ hostmask instead.
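Both spellings described above parse to the same network: the /1*0* netmask form and its bitwise-inverted /0*1* hostmask form.

```python
import ipaddress

a = ipaddress.ip_network('192.0.2.0/255.255.255.0')   # netmask form
b = ipaddress.ip_network('192.0.2.0/0.0.0.255')       # hostmask form
assert a == b == ipaddress.ip_network('192.0.2.0/24')
```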
# a packed address or integer
# Assume input argument to be string or any object representation
# which converts into a formatted IP prefix string.
# Constructing from a tuple (addr, [mask])
# Shorthand for Integer addition and subtraction. This is not
# meant to ever support addition/subtraction of addresses.
# Support string formatting
# From here on down, support for 'bnXx'
# Set some defaults
# Binary is default for ipv4
# Hex is default for ipv6
# 0b or 0x
# always false if one is v4 and the other is v6.
# dealing with another network.
# dealing with another address
# address
# Returning bare address objects (rather than interfaces) allows for
# more consistent behaviour across the network address, broadcast
# address and individual host addresses.
# Make sure we're comparing the network of other.
# If we got here, there's a bug somewhere.
# does this need to raise a ValueError?
# self._version == other._version below here:
# self.network_address == other.network_address below here:
# Always false if one is v4 and the other is v6.
# Equivalent to 255.255.255.255 or 32 bits of 1's.
# There are only a handful of valid v4 netmasks, so we cache them all
# when constructed (see _make_netmask()).
# Check for a netmask in prefix length form
# Check for a netmask or hostmask in dotted-quad form.
# This may raise NetmaskValueError.
# Reject non-ASCII digits.
# We do the length check second, since the invalid character error
# is likely to be more informative for the user
# Handle leading zeros as strict as glibc's inet_pton()
# See security bug bpo-36384
# Convert to integer (we know digits are legal)
# Efficient constructor from integer.
# Constructing from a packed address
# Assume the input argument is a string or any object representation
# which converts into a formatted IP string.
# An interface with an associated network is NOT the
# same as an unassociated address. That's why the hash
# takes the extra info into account.
# We *do* allow addresses and interfaces to be sorted. The
# unassociated address is considered less than all interfaces.
# Class to use when creating address objects
# Not globally reachable address blocks listed on
# https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
# There are only a bunch of valid v6 netmasks, so we cache them all
# We want to allow more parts than the max to be 'split'
# to preserve the correct error message when there are
# too many parts combined with '::'
# An IPv6 address needs at least 2 colons (3 parts).
# If the address has an IPv4-style suffix, convert it to hexadecimal.
# An IPv6 address can't have more than 8 colons (9 parts).
# The extra colon comes from using the "::" notation for a single
# leading or trailing zero part.
# Disregarding the endpoints, find '::' with nothing in between.
# This indicates that a run of zeroes has been skipped.
# Can't have more than one '::'
# parts_hi is the number of parts to copy from above/before the '::'
# parts_lo is the number of parts to copy from below/after the '::'
# If we found a '::', then check if it also covers the endpoints.
# ^: requires ^::
# :$ requires ::$
# Otherwise, allocate the entire address to parts_hi.  The
# endpoints could still be empty, but _parse_hextet() will check
# for that.
# Now, parse the hextets into a 128-bit integer.
# Length check means we can skip checking the integer value
# Start of a sequence of zeros.
# This is the longest sequence of zeros so far.
# For zeros at the end of the address.
# For zeros at the beginning of the address.
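The '::' rules above are observable from the public API: '::' stands for exactly one run of zero hextets on input, the longest zero run is the one compressed on output, and a second '::' is rejected.

```python
import ipaddress

addr = ipaddress.ip_address('2001:db8::1')
# '::' expanded to a single run of zero hextets
assert addr.exploded == '2001:0db8:0000:0000:0000:0000:0000:0001'
# the longest zero run is compressed back on output
assert addr.compressed == '2001:db8::1'
try:
    ipaddress.ip_address('1::2::3')   # can't have more than one '::'
except ValueError:
    pass
else:
    raise AssertionError('expected ValueError')
```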
# ipv4 encoded using hexadecimal nibbles instead of decimals
# https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
# IANA says N/A, let's consider it not globally reachable to be safe
# RFC 9637: https://www.rfc-editor.org/rfc/rfc9637.html#section-6-2.2
# Author: Steen Lumholt.
# Indices for termios list.
# Clear all POSIX.1-2017 input mode flags.
# See chapter 11 "General Terminal Interface"
# of POSIX.1-2017 Base Definitions.
# Do not post-process output.
# Disable parity generation and detection; clear character size mask;
# let character size be 8 bits.
# Clear all POSIX.1-2017 local mode flags.
# POSIX.1-2017, 11.1.7 Non-Canonical Mode Input Processing,
# Case B: MIN>0, TIME=0
# A pending read shall block until MIN (here 1) bytes are received,
# or a signal is received.
# Do not echo characters; disable canonical input.
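The flag clearing described above can be sketched on plain integers using the `termios` constants, without opening a real terminal (note `termios` is POSIX-only and unavailable on Windows):

```python
import termios

# start from a local-mode word with echo, canonical input, and signals on
lflag = termios.ECHO | termios.ICANON | termios.ISIG
# do not echo characters; disable canonical input
lflag &= ~(termios.ECHO | termios.ICANON)
assert not (lflag & termios.ECHO)
assert not (lflag & termios.ICANON)
assert lflag & termios.ISIG           # unrelated bits are left alone
```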
# Initialize cache of modules we've seen.
# Odd Function and Class signatures are for back-compatibility.
# These 2 functions are used in these tests
# Lib/test/test_pyclbr, Lib/idlelib/idle_test/test_browser.py
# Compute the full module name (prepending inpackage if set).
# Check in the cache.
# Initialize the dict for this module's contents.
# Check if it is a built-in module; we don't do much for these.
# Check for a dotted module name.
# Search the path for the module.
# Is module a package?
# If module is not Python source, we cannot do anything.
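The module-browsing flow above is exposed by `pyclbr.readmodule_ex`, which reports both classes and functions found in a module's Python source (built-in or non-source modules yield little or nothing):

```python
import pyclbr

info = pyclbr.readmodule_ex('textwrap')   # a pure-Python stdlib module
assert 'TextWrapper' in info              # a class defined in textwrap
assert info['TextWrapper'].module == 'textwrap'
```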
# We know this super class.
# Super class form is module.class:
# look in module for class.
# If we can't find or parse the imported module,
# too bad -- don't die here.
# Value is a __path__ key.
# Module doctest.
# Released to the public domain 16-Jan-2001, by Tim Peters (tim@python.org).
# Major enhancements and refactoring by:
# Provided as-is; use at your own risk; no warranty; no promises; enjoy!
# 0, Option Flags
# 1. Utility Functions
# 2. Example & DocTest
# 3. Doctest Parser
# 4. Doctest Finder
# 5. Doctest Runner
# 6. Test Functions
# 7. Unittest Support
# 8. Debugging Support
# Used in doctests
# Leave the repr() unchanged for backward compatibility
# if skipped is zero
# There are 4 basic classes:
# So the basic picture is:
# +------+                   +---------+                   +-------+
# |object| --DocTestFinder-> | DocTest | --DocTestRunner-> |results|
# Option constants.
# Create a new flag unless `name` is already known.
# Special string markers for use in `want` strings:
######################################################################
## Table of Contents
## 1. Utility Functions
# The IO module provides a handy decoder for universal newline conversion
# get_data() opens files as 'rb', so one must do the equivalent
# conversion as universal newlines would do.
# This regexp matches the start of non-blank lines:
# Get a traceback message.
# Override some StringIO methods.
# If anything at all was written, make sure there's a trailing
# newline.  There's no way for the expected output to indicate
# that a trailing newline is missing.
# Worst-case linear-time ellipsis matching.
# Find "the real" strings.
# Deal with exact matches possibly needed at one or both ends.
# starts with exact match
# ends with exact match
# Exact end matches required more characters than we have, as in
# _ellipsis_match('aa...aa', 'aaa')
# For the rest, we only need to find the leftmost non-overlapping
# match for each piece.  If there's no overall match that way alone,
# there's no overall match period.
# w may be '' at times, if there are consecutive ellipses, or
# due to an ellipsis at the start or end of `want`.  That's OK.
# Search for an empty string succeeds, and doesn't change startpos.
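The linear-time ellipsis algorithm above can be sketched as a standalone function (a simplification of doctest's internal `_ellipsis_match`, written here from the description): pin down exact matches at the ends, then take the leftmost non-overlapping match for each interior piece.

```python
def ellipsis_match(want, got, marker='...'):
    """Minimal sketch of worst-case linear-time ellipsis matching."""
    pieces = want.split(marker)
    if len(pieces) == 1:                   # no ellipsis at all
        return want == got
    startpos, endpos = 0, len(got)
    if pieces[0]:                          # starts with an exact match
        if not got.startswith(pieces[0]):
            return False
        startpos = len(pieces[0])
    if pieces[-1]:                         # ends with an exact match
        if not got.endswith(pieces[-1]):
            return False
        endpos -= len(pieces[-1])
        if startpos > endpos:              # ends overlap, as in 'aa...aa' vs 'aaa'
            return False
    for w in pieces[1:-1]:                 # leftmost non-overlapping matches;
        startpos = got.find(w, startpos, endpos)   # w may be '' -- that's OK
        if startpos < 0:
            return False
        startpos += len(w)
    return True

assert ellipsis_match('a ... e', 'a b c d e')
assert not ellipsis_match('aa...aa', 'aaa')
```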
# Support for IGNORE_EXCEPTION_DETAIL.
# Get rid of everything except the exception name; in particular, drop
# the possibly dotted module path (if any) and the exception message (if
# any).  We assume that a colon is never part of a dotted name, or of an
# exception name.
# E.g., given
# return "MyError"
# Or for "abc.def" or "abc.def:\n" return "def".
# The exception name must appear on the first line.
# retain up to the first colon (if any)
# retain just the exception name
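The stripping described above can be sketched as a small helper (a hypothetical name, mirroring the documented behavior): keep only the first line, drop everything after the first colon, then drop any dotted module path.

```python
def exc_name(want):
    """Sketch: reduce a `want` string to the bare exception class name."""
    line = want.split('\n', 1)[0]     # the name must be on the first line
    line = line.split(':')[0]         # retain up to the first colon (if any)
    return line.split('.')[-1]        # retain just the exception name

assert exc_name('abc.def:\n') == 'def'
assert exc_name('MyError: message here\n') == 'MyError'
```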
# do not play signal games in the pdb
# still use input() to get user input
# Calling set_continue unconditionally would break unit test
# coverage reporting, as Bdb.set_continue calls sys.settrace(None).
# Redirect stdout to the given stream.
# Call Pdb's trace dispatch method.
# [XX] Normalize with respect to os.path.pardir?
# Normalize the path. On Windows, replace "/" with "\".
# Find the base directory for the path.
# A normal module/package
# An interactive session.
# A module w/o __file__ (this includes builtins)
# Combine the base directory and the test path.
## 2. Example & DocTest
## - An "example" is a <source, want> pair, where "source" is a
## - A "doctest" is a collection of examples, typically extracted from
### Normalize inputs.
# Store properties.
# This lets us sort tests by name:
## 3. DocTestParser
# This regular expression is used to find doctest examples in a
# string.  It defines three groups: `source` is the source code
# (including leading indentation and prompts); `indent` is the
# indentation of the first (PS1) line of the source code; and
# `want` is the expected output (including leading indentation).
# A regular expression for handling `want` strings that contain
# expected exceptions.  It divides `want` into three pieces:
# `msg` may have multiple lines.  We assume/require that the
# exception message is the first non-indented line starting with a word
# character following the traceback header line.
# A callable returning a true value iff its argument is a blank line
# or contains a single comment.
# If all lines begin with the same indentation, then strip it.
# Find all doctest examples in the string:
# Add the pre-example text to `output`.
# Update lineno (lines before this example)
# Extract info from the regexp match.
# Create an Example, and add it to the list.
# Update lineno (lines inside this example)
# Update charno.
# Add any remaining post-example text to `output`.
# Get the example's indentation level.
# Divide source into lines; check that they're properly
# indented; and then strip their indentation & prompts.
# Divide want into lines; check that it's properly indented; and
# then strip the indentation.  Spaces before the last newline should
# be preserved, so plain rstrip() isn't good enough.
# forget final newline & spaces after it
# If `want` contains a traceback message, then extract it.
# Extract options from the source.
# This regular expression looks for option directives in the
# source code of an example.  Option directives are comments
# starting with "doctest:".  Warning: this may give false
# positives for string-literals that contain the string
# "#doctest:".  Eliminating these false positives would require
# actually parsing the string; but we limit them by ignoring any
# line containing "#doctest:" that is *followed* by a quote mark.
# (note: with the current regexp, this will match at most once:)
# This regular expression finds the indentation of every non-blank
# line in a string.
## 4. DocTest Finder
# If name was not specified, then extract it from the object.
# Find the module that contains the given object (if obj is
# a module, then module=obj.).  Note: this may fail, in which
# case module will be None.
# Read the module's source code.  This is used by
# DocTestFinder._find_lineno to find the line number for a
# given object's docstring.
# Check to see if it's one of our special internal "files"
# (see __patched_linecache_getlines).
# Supply the module globals in case the module was
# originally loaded via a PEP 302 loader and
# file is not a valid filesystem path
# No access to a loader, so assume it's a normal
# filesystem path
# Initialize globals, and merge in extraglobs.
# provide a default module name
# Recursively explore `obj`, extracting DocTests.
# Sort the tests by alpha order of names, for consistency in
# verbose-mode output.  This was a feature of doctest in Pythons
# <= 2.3 that got lost by accident in 2.4.  It was repaired in
# 2.4.4 and 2.5.
# [XX] no easy way to tell otherwise
# [XX] no way to be sure.
# If we've already processed this object, then ignore it.
# Find a test for this object, and add it to the list of tests.
# Look for tests in a module's contained objects.
# Recurse to functions & classes.
# Look for tests in a module's __test__ dictionary.
# Look for tests in a class's contained objects.
# Special handling for staticmethod/classmethod.
# Recurse to methods, properties, and nested classes.
# Extract the object's docstring.  If it doesn't have one,
# then return None (no test for this object).
# Find the docstring's location in the file.
# Don't bother if the docstring is empty.
# Return a DocTest for this object.
# __file__ can be None for namespace packages.
# Find the line number for modules.
# Find the line number for classes.
# Note: this could be fooled if a class is defined multiple
# times in a single file.
# Find the line number for functions & methods.
# We don't use `docstring` var here, because `obj` can be changed.
# Functions implemented in C don't necessarily
# have a __code__ attribute.
# If there's no code, there's no lineno
# Find the line number where the docstring starts.  Assume
# that it's the first line that begins with a quote mark.
# Note: this could be fooled by a multiline function
# signature, where a continuation line begins with a quote
# mark.
# We couldn't find the line number.
## 5. DocTest Runner
# This divider string is used to separate failure messages, and to
# separate sections of the summary.
# Keep track of the examples we've run.
# Create a fake output target for capturing doctest output.
#/////////////////////////////////////////////////////////////////
# Reporting methods
# DocTest Running
# Keep track of the number of failed, attempted, skipped examples.
# Save the option flags (since option directives can be used
# to modify them).
# `outcome` state
# Process each example.
# If REPORT_ONLY_FIRST_FAILURE is set, then suppress
# reporting after the first failure.
# Merge in the example's options.
# If 'SKIP' is set, then skip this example.
# Record that we started this example.
# Use a special filename for compile(), so we can retrieve
# the source code during interactive debugging (see
# __patched_linecache_getlines).
# Run the example in the given context (globs), and record
# any exception that gets raised.  (But don't intercept
# keyboard interrupts.)
# Don't blink!  This is where the user's code gets run.
# ==== Example Finished ====
# the actual output
# guilty until proved innocent or insane
# If the example executed without raising any exceptions,
# verify its output.
# The example raised an exception:  check if it was expected.
# SyntaxError / IndentationError is special:
# we don't care about the carets / suggestions / etc
# We only care about the error message and notes.
# They start with `SyntaxError:` (or any other class name)
# If `example.exc_msg` is None, then we weren't expecting
# an exception.
# We expected an exception:  see whether it matches.
# Another chance if they didn't care about the detail.
# Report the outcome.
# Restore the option flags (in case they were modified)
# Record and return the number of failures and attempted.
# Use backslashreplace error handling on write
# Patch pdb.set_trace to restore sys.stdout during interactive
# debugging (so it's not still redirected to self._fakeout).
# Note that the interactive output will go to *our*
# save_stdout, even if that's not the real sys.stdout; this
# allows us to write test cases for the set_trace behavior.
# Patch linecache.getlines, so we can see the example's source
# when we're inside the debugger.
# Make sure sys.displayhook just prints the value to stdout
# Summarization
# Backward compatibility cruft to maintain doctest.master.
# If `want` contains hex-escaped character such as "\u1234",
# then `want` is a string of six characters (e.g. [\,u,1,2,3,4]).
# On the other hand, `got` could be another sequence of
# characters such as [\u1234], so `want` and `got` should
# be folded to hex-escaped ASCII string to compare.
# Handle the common case first, for efficiency:
# if they're string-identical, always return true.
# The values True and False replaced 1 and 0 as the return
# value for boolean comparisons in Python 2.3.
# <BLANKLINE> can be used as a special sequence to signify a
# blank line, unless the DONT_ACCEPT_BLANKLINE flag is used.
# Replace <BLANKLINE> in want with a blank line.
# If a line in got contains only spaces, then remove the
# spaces.
# This flag causes doctest to ignore any differences in the
# contents of whitespace strings.  Note that this can be used
# in conjunction with the ELLIPSIS flag.
# The ELLIPSIS flag says to let the sequence "..." in `want`
# match any substring in `got`.
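Both flags described above can be exercised through the public `doctest.OutputChecker.check_output(want, got, optionflags)` API:

```python
import doctest

checker = doctest.OutputChecker()
# ELLIPSIS: "..." in `want` matches any substring of `got`
assert checker.check_output('a ... e\n', 'a b c d e\n', doctest.ELLIPSIS)
# <BLANKLINE> stands for a blank line unless DONT_ACCEPT_BLANKLINE is set
assert checker.check_output('1\n<BLANKLINE>\n2\n', '1\n\n2\n', 0)
```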
# We didn't find any match; return false.
# Should we do a fancy diff?
# Not unless they asked for a fancy diff.
# If expected output uses ellipsis, a meaningful fancy diff is
# too hard ... or maybe not.  In two real-life failures Tim saw,
# a diff was a major help anyway, so this is commented out.
# [todo] _ellipsis_match() knows which pieces do and don't match,
# and could be the basis for a kick-ass diff in this case.
##if optionflags & ELLIPSIS and ELLIPSIS_MARKER in want:
## ndiff does intraline difference marking, so can be useful even
# for 1-line differences.
# The other diff types need at least a few lines to be helpful.
# If <BLANKLINE>s are being used, then replace blank lines
# with <BLANKLINE> in the actual output string.
# Check if we should use diff.
# Split want & got into lines.
# Use difflib to find their differences.
# strip the diff header
# If we're not using diff, then simply list the expected
# output followed by the actual output.
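The diff path above can be sketched with `difflib` directly: split `want` and `got` into lines, diff them, and strip the two-line `---`/`+++` header.

```python
import difflib

want = ['1\n', '2\n', '3\n']
got = ['1\n', 'two\n', '3\n']
diff = list(difflib.unified_diff(want, got))
body = diff[2:]                       # strip the '---' / '+++' header lines
assert any(line.startswith('-2') for line in body)
assert any(line.startswith('+two') for line in body)
```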
## 6. Test Functions
# These should be backwards compatible.
# For backward compatibility, a global instance of a DocTestRunner
# class, updated by testmod.
# If no module was given, then use __main__.
# DWA - m will still be None if this wasn't invoked from the command
# line, in which case the following TypeError is about as good an error
# as we should expect
# Check that we were actually given a module.
# If no name was given, then use the module's name.
# Find, parse, and run all tests in the given module.
# Relativize the path
# If no name was given, then use the file's name.
# Assemble the globals.
# Read the file, convert it to a test, and run it.
## 7. Unittest Support
# restore the original globs
# The option flags don't include any reporting flags,
# so add the default reporting flags
# Skip doctests when running with -O2
# Relativize the path.
# Find the file and read it.
# Convert it to a test, and wrap it in a DocFileCase.
# We do this here so that _normalize_module is called at the right
# level.  If it were called in DocFileTest, then this function
# would be the caller and we might guess the package incorrectly.
## 8. Debugging Support
# Add the example's source code (strip trailing NL)
# Add the expected output:
# Add non-example text.
# Trim junk on both ends.
# Combine the output, and return it.
# Add a courtesy newline to prevent exec from choking (see bug #1172785)
## 9. Example Usage
# Verbose used to be handled by the "inspect argv" magic in DocTestRunner,
# but since we are using argparse we are passing it manually now.
# It is a module -- insert its dir into sys.path and try to
# import it. If it is part of a package, that possibly
# won't work because of package imports.
# Maintained by Georg Brandl.
# Dictionary of available browser controllers
# Preference order of available browsers
# The preferred browser
# Preferred browsers go to the front of the list.
# Need to match to the default browser returned by xdg-settings, which
# may be of the form e.g. "firefox.desktop".
# User gave us a command line, split it into name and args
# User gave us a browser name or path.
# Please note: the following definition hides a builtin function.
# It is recommended one does "import webbrowser" and uses webbrowser.open(url)
# instead of "from webbrowser import *".
# now attempt to clone to fit the new name:
# General parent classes
# name should be a list with arguments
# In remote_args, %s will be replaced with the requested URL.  %action will
# be replaced depending on the value of 'new' passed to open.
# remote_action is used for new=0 (open).  If newwin is not None, it is
# used for new=1 (open_new).  If newtab is not None, it is used for
# new=3 (open_new_tab).  After both substitutions are made, any empty
# strings in the transformed remote_args list will be removed.
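The substitution rules above can be sketched on plain lists (the `remote_args`, URL, and action values here are hypothetical illustrations, not any real browser's configuration):

```python
remote_args = ['%action', '%s']       # hypothetical template
url = 'https://example.com'
action = '-new-tab'                   # e.g. what new=3 would select
args = [arg.replace('%s', url).replace('%action', action)
        for arg in remote_args]
args = [arg for arg in args if arg]   # remove any now-empty strings
assert args == ['-new-tab', 'https://example.com']
```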
# use autoraise argument only for remote invocation
# for TTY browsers, we need stdin/out
# wait at most five seconds. If the subprocess is not finished, the
# remote invocation has (hopefully) started a new instance.
# if remote call failed, open() will try direct invocation
# remote invocation failed, try straight way
# elinks doesn't like its stdout to be redirected -
# it uses redirected stdout as a signal to do -dump
# XXX Currently I know no way to prevent KFM from opening a new win.
# fall through to next variant
# kfmclient's return code unfortunately seems to carry no meaning
# Should be running now.
# Platform support for Unix
# These are the right tests because all these Unix browsers require either
# a console terminal or an X display to run.
# use xdg-open if around
# Opens an appropriate browser for the URL scheme according to
# freedesktop.org settings (GNOME, KDE, XFCE, etc.)
# The default GNOME3 browser
# The default KDE browser
# Common symbolic link for the default X11 browser
# The Mozilla browsers
# Konqueror/kfm, the KDE browser.
# Gnome's Epiphany
# Google Chrome/Chromium browsers
# Opera, quite popular
# OS X can use below Unix support (but we prefer using the OS X
# specific stuff)
# SerenityOS webbrowser, simply called "Browser".
# First try to use the default Windows browser
# Detect some common Windows browsers, fallback to Microsoft Edge
# location in 64-bit Windows
# location in 32-bit Windows
# Prefer X browsers if present
# NOTE: Do not check for X11 browser on macOS,
# XQuartz installation sets a DISPLAY environment variable and will
# autostart when someone tries to access the display. Mac users in
# general don't need an X11 browser.
# Also try console browsers
# Common symbolic link for the default text-based browser
# The Links/elinks browsers <http://links.twibright.com/>
# The Lynx browser <https://lynx.invisible-island.net/>, <http://lynx.browser.org/>
# The w3m browser <http://w3m.sourceforge.net/>
# OK, now that we know what the default preference orders for each
# platform are, allow user to override them with the BROWSER variable.
# Treat choices in same way as if passed into get() but do register
# and prepend to _tryorder
# what to do if _tryorder is now empty?
# Platform support for Windows
# [Error 22] No application is associated with the specified
# file for this operation: '<URL>'
# Platform support for macOS
# opens in default browser
# Platform support for iOS
# If objc exists, we know ctypes is also importable.
# If ctypes isn't available, we can't open a browser
# All the messages in this call return object references.
# This is the equivalent of:
# NSUTF8StringEncoding = 4
# Create an NSURL object representing the URL
# Get the shared UIApplication instance
# This code is the equivalent of:
# UIApplication shared_app = [UIApplication sharedApplication]
# Open the URL on the shared application
# Method returns void
# This file is generated by mkstringprep.py. DO NOT EDIT.
# === Exceptions ===
# === Private utilities ===
# The sum will be a NAN or INF. We can ignore all the finite
# partials, and just look at this special one.
# Sum all the partial sums using builtin sum.
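The special-value shortcut above can be sketched as follows (a simplification written from the description, not the statistics module's actual helper): when any partial is a NAN or INF, the finite partials cannot change the result, so only the specials need summing.

```python
import math

partials = [1e300, float('inf'), -1e300]
specials = [x for x in partials if not math.isfinite(x)]
# ignore finite partials when a NAN or INF is present
total = sum(specials) if specials else math.fsum(partials)
assert math.isinf(total)
```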
# or raise TypeError
# This formula has poor numeric properties for floats,
# but with fractions it is exact.
# Likely a Decimal.
# Coerces to float first.
# See http://bugs.python.org/issue24068.
# If the types are the same, no need to coerce anything. Put this
# first, so that the usual case (no coercion needed) happens as soon
# as possible.
# Mixed int & other coerce to the other type.
# If one is a (strict) subclass of the other, coerce to the subclass.
# Ints coerce to the other type.
# Mixed fraction & float coerces to float (or float subclass).
# Any other combination is disallowed.
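The coercion rules above can be sketched as a standalone function (a hypothetical helper written from the rules, not the module's actual `_coerce`):

```python
from fractions import Fraction

def coerce_types(T, S):
    """Sketch of the numeric type-coercion rules described above."""
    if T is S:
        return T                      # same type: no coercion needed
    if S is int or S is bool:
        return T                      # ints coerce to the other type
    if T is int:
        return S
    if issubclass(S, T):
        return S                      # prefer the (strict) subclass
    if issubclass(T, S):
        return T
    if issubclass(T, Fraction) and issubclass(S, float):
        return S                      # fraction & float -> float
    if issubclass(S, Fraction) and issubclass(T, float):
        return T
    raise TypeError(f"don't know how to coerce {T} and {S}")

assert coerce_types(int, float) is float
assert coerce_types(Fraction, float) is float
```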
# XXX We should revisit whether using fractions to accumulate exact
# ratios is the right way to go.
# The integer ratios for binary floats can have numerators or
# denominators with over 300 decimal digits.  The problem is more
# acute with decimal floats where the default decimal context
# supports a huge range of exponents from Emin=-999999 to
# Emax=999999.  When expanded with as_integer_ratio(), numbers like
# Decimal('3.14E+5000') and Decimal('3.14E-5000') have large
# numerators or denominators that will slow computation.
# When the integer ratios are accumulated as fractions, the size
# grows to cover the full range from the smallest magnitude to the
# largest.  For example, Fraction(3.14E+300) + Fraction(3.14E-300),
# has a 616 digit numerator.  Likewise,
# Fraction(Decimal('3.14E+5000')) + Fraction(Decimal('3.14E-5000'))
# has 10,003 digit numerator.
# This doesn't seem to have been problem in practice, but it is a
# potential pitfall.
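The growth described above is easy to reproduce. A minimal sketch (the magnitudes mirror the Fraction(3.14E+300) + Fraction(3.14E-300) example in the comments):

```python
from fractions import Fraction

# Adding two floats of wildly different magnitude as exact ratios:
# the sum's numerator must span the full range between them.
total = Fraction(3.14e300) + Fraction(3.14e-300)
digits = len(str(total.numerator))
```

Each float individually has a compact ratio; only their exact sum is enormous, which is the pitfall noted above.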
# float NAN or INF.
# x may be an Integral ABC.
# This covers the cases where T is Fraction, or where value is
# a NAN or INF (Decimal or float).
# FIXME: what do we do if this overflows?
# If this function becomes public at some point, more thought
# needs to be given to the signature.  A list of ints is
# plausible when ties is "min" or "max".  When ties is "average",
# either list[float] or list[Fraction] is plausible.
# Default handling of ties matches scipy.stats.mstats.spearmanr.
# Reference: https://www.lri.fr/~melquion/doc/05-imacs17_1-expose.pdf
# For 53 bit precision floats, the bit width used in
# _float_sqrt_of_frac() is 109.
# See principle and proof sketch at: https://bugs.python.org/msg407078
# Convert to float
# Premise:  For decimal, computing (n/m).sqrt() can be off by 1 ulp.
# Method:   Check the result, moving up or down a step if needed.
# test: n / m > ((root + plus) / 2) ** 2
# test: n / m < ((root + minus) / 2) ** 2
# === Measures of central tendency (averages) ===
# Handle iterators that do not define __len__().
# FIXME: investigate ways to calculate medians without sorting? Quickselect?
# Find the value at the midpoint. Remember this corresponds to the
# midpoint of the class interval.
# Using O(log n) bisection, find where all the x values occur in the data.
# All x will lie within data[i:j].
# Coerce to floats, raising a TypeError if not possible
# Interpolate the median using the formula found at:
# https://www.cuemath.com/data/median-of-grouped-data/
# Lower limit of the median interval
# Cumulative frequency of the preceding interval
# Number of elements in the median interval
# 1.0 / (exp(t) + 2.0 + exp(-t))
# (2/pi) / (exp(t) + exp(-t))
# Notes on methods for computing quantiles
# ----------------------------------------
# There is no one perfect way to compute quantiles.  Here we offer
# two methods that serve common needs.  Most other packages
# surveyed offered at least one or both of these two, making them
# "standard" in the sense of "widely-adopted and reproducible".
# They are also easy to explain, easy to compute manually, and have
# straightforward interpretations that aren't surprising.
# The default method is known as "R6", "PERCENTILE.EXC", or "expected
# value of rank order statistics". The alternative method is known as
# "R7", "PERCENTILE.INC", or "mode of rank order statistics".
# For sample data where there is a positive probability for values
# beyond the range of the data, the R6 exclusive method is a
# reasonable choice.  Consider a random sample of nine values from a
# population with a uniform distribution from 0.0 to 1.0.  The
# distribution of the third ranked sample point is described by
# betavariate(alpha=3, beta=7) which has mode=0.250, median=0.286, and
# mean=0.300.  Only the latter (which corresponds with R6) gives the
# desired cut point with 30% of the population falling below that
# value, making it comparable to a result from an inv_cdf() function.
# The R6 exclusive method is also idempotent.
# For describing population data where the end points are known to
# be included in the data, the R7 inclusive method is a reasonable
# choice.  Instead of the mean, it uses the mode of the beta
# distribution for the interior points.  Per Hyndman & Fan, "One nice
# property is that the vertices of Q7(p) divide the range into n - 1
# intervals, and exactly 100p% of the intervals lie to the left of
# Q7(p) and 100(1 - p)% of the intervals lie to the right of Q7(p)."
# If needed, other methods could be added.  However, for now, the
# position is that fewer options make for easier choices and that
# external packages can be used for anything more advanced.
# rescale i to m/n
# clamp to 1 .. ld-1
# exact integer math
# === Measures of spread ===
# See http://mathworld.wolfram.com/Variance.html
# Handle Nans and Infs gracefully
# Finite inputs overflowed, so scale down, and recompute.
# sqrt(1 / sys.float_info.max)
# Non-zero inputs underflowed, so scale up, and recompute.
# Scale:  1 / sqrt(sys.float_info.min * sys.float_info.epsilon)
# Improve accuracy with a differential correction.
# https://www.wolframalpha.com/input/?i=Maclaurin+series+sqrt%28h**2+%2B+x%29+at+x%3D0
# === Statistics for relations between two inputs ===
# See https://en.wikipedia.org/wiki/Covariance
# Center rankings around zero
# List because used three times below
# Generator because only used once below
# Add zero to coerce result to a float
# equivalent to:  covariance(x, y) / variance(x)
## Normal Distribution #####################################################
# There is no closed-form solution to the inverse CDF for the normal
# distribution, so we use a rational approximation instead:
# Wichura, M.J. (1988). "Algorithm AS241: The Percentage Points of the
# Normal Distribution".  Applied Statistics. Blackwell Publishing. 37
# (3): 477–484. doi:10.2307/2347330. JSTOR 2347330.
# Hash sum: 55.88319_28806_14901_4439
# Hash sum: 49.33206_50330_16102_89036
# Hash sum: 47.52583_31754_92896_71629
# https://en.wikipedia.org/wiki/Normal_distribution
# https://en.wikipedia.org/wiki/Variance#Properties
# See: "The overlapping coefficient as a measure of agreement between
# probability distributions and point estimation of the overlap of two
# normal densities" -- Henry F. Inman and Edwin L. Bradley Jr
# http://dx.doi.org/10.1080/03610928908830127
# sort to assure commutativity
# https://www.statisticshowto.com/probability-and-statistics/z-score/
## kde_random() ##############################################################
# Modified 04-Oct-1995 by Jack Jansen to use binascii module
# Modified 30-Dec-2003 by Barry Warsaw to add full RFC 3548 support
# Modified 22-May-2007 by Guido van Rossum to use bytes everywhere
# Legacy interface exports traditional RFC 2045 Base64 encodings
# Generalized interface for other encodings
# Base85 and Ascii85 encodings
# Standard Base64 encoding
# Some common Base64 alternatives.  As referenced by RFC 3458, see thread
# starting at:
# http://zgp.org/pipermail/p2p-hackers/2001-September/000316.html
# Types acceptable as binary data
# Base64 encoding/decoding uses binascii
# Base32 encoding/decoding must be done in Python
# Delay the initialization of the table to not waste memory
# if the function is never called
# Pad the last quantum with zero bits if necessary
# Don't use += !
# bits 1 - 10
# bits 11 - 20
# bits 21 - 30
# bits 31 - 40
# Adjust for any leftover partial quanta
# Handle section 2.4 zero and one mapping.  The flag map01 will be either
# False, or the character to map the digit 1 (one) to.  It should be
# either L (el) or I (eye).
# Strip off pad characters from the right.  We need to count the pad
# characters because this will tell us how many null bytes to remove from
# the end of the decoded string.
# Now decode the full quanta
# Process the last, partial quanta
# 1: 4, 3: 3, 4: 2, 6: 1
# base32hex does not have the 01 mapping
# RFC 3548, Base 16 Alphabet specifies uppercase, but hexlify() returns
# lowercase.  The RFC also recommends against accepting input case
# insensitively.
# Ascii85 encoding/decoding
# Helper function for a85encode and b85encode
# Delay the initialization of tables to not waste memory
# Strip off start/end markers
# We have to go through this stepwise, so as to ignore spaces and handle
# special short sequences
# Skip whitespace
# Throw away the extra padding
# The following code is originally taken (with permission) from Mercurial
# Translating b85 valid but z85 invalid chars to b'\x00' is required
# to prevent them from being decoded as b85 valid chars.
# Legacy interface.  This code could be cleaned up since I don't believe
# binascii has any line length limitations.  It just doesn't seem worth it
# though.  The files should be opened in binary mode.
# Excluding the CRLF
# Usable as a script...
# Note:  more names are added to __all__ later.
# Any new dependencies of the os module and/or changes in path separator
# requires updating importlib as well.
# fstat always works
# Mac OS X 10.3
# Some platforms don't support lchmod().  Often the function exists
# anyway, as a stub that always returns ENOTSUP or perhaps EOPNOTSUPP.
# (No, I don't know why that's a good design.)  ./configure will detect
# this and reject it--so HAVE_LCHMOD still won't be defined on such
# platforms.  This is Very Helpful.
# However, sometimes platforms without a working lchmod() *do* have
# fchmodat().  (Examples: Linux kernel 3.2 with glibc 2.15,
# OpenIndiana 3.x.)  And fchmodat() has a flag that theoretically makes
# it behave like lchmod().  So in theory it would be a suitable
# replacement for lchmod().  But when lchmod() doesn't work, fchmodat()'s
# flag doesn't work *either*.  Sadly ./configure isn't sophisticated
# enough to detect this condition--it only determines whether or not
# fchmodat() minimally works.
# Therefore we simply ignore fchmodat() when deciding whether or not
# os.chmod supports follow_symlinks.  Just checking lchmod() is
# sufficient.  After all--if you have a working fchmodat(), your
# lchmod() almost certainly works too.
# _add("HAVE_FCHMODAT",   "chmod")
# Python uses fixed values for the SEEK_ constants; they are mapped
# to native constants if necessary in posixmodule.c
# Other possible SEEK values are directly imported from posixmodule.c
# Super directory utilities.
# (Inspired by Eric Raymond; the doc strings are mostly his)
# Defeats race condition when another thread created the path
# xxx/newdir/. exists if xxx/newdir exists
# Cannot rely on checking for EEXIST, since the operating system
# could give priority to other errors like EACCES or EROFS
# Private sentinel that makes walk() classify all symlinks and junctions as
# regular files.
# We may not have read permission for top, in which case we can't
# get a list of the files the directory contains.
# We suppress the exception here, rather than blow up for a
# minor reason when (say) a thousand readable directories are still
# left to visit.
# If is_dir() raises an OSError, consider the entry not to
# be a directory, same behaviour as os.path.isdir().
# Bottom-up: traverse into sub-directory, but exclude
# symlinks to directories if followlinks is False
# If is_symlink() raises an OSError, consider the
# entry not to be a symbolic link, same behaviour
# as os.path.islink().
# Yield before sub-directory traversal if going top down
# Traverse into sub-directories
# bpo-23605: os.path.islink() is used instead of caching
# entry.is_symlink() result during the loop on os.scandir() because
# the caller can replace the directory entry during the "yield"
# above.
# Yield after sub-directory traversal if going bottom up
# Close any file descriptors still on the stack.
# Each item in the _fwalk() stack is a pair (action, args).
# args: (isroot, dirfd, toppath, topname, entry)
# args: (toppath, dirnames, filenames, topfd)
# args: dirfd
# Note: This uses O(depth of the directory tree) file descriptors: if
# necessary, it can be adapted to only require O(1) FDs, see issue
# #13734.
# Note: To guard against symlink races, we use the standard
# lstat()/open()/fstat() trick.
# Add dangling symlinks, ignore disappeared files
# Add trailing slash.
# Use a local import instead of a global import to limit the number of
# modules loaded at startup: the os module is always loaded at startup by
# Python. It may also avoid a bootstrap issue.
# {b'PATH': ...}.get('PATH') and {'PATH': ...}.get(b'PATH') emit a
# BytesWarning when using python -b or python -bb: ignore the warning
# Change environ to automatically call putenv() and unsetenv()
# raise KeyError with the original key value
# list() from dict object is an atomic operation
# Where Env Var Names Must Be UPPERCASE
# Where Env Var Names Can Be Mixed Case
# unicode environ
# bytes environ
# Does type-checking of `filename`.
# Supply spawn*() (probably only for Unix)
# XXX Should we support P_DETACH?  I suppose it could fork()**2
# and close the std I/O streams.  Also, P_OVERLAY is the same
# as execv*()?
# Internal helper; func is the exec*() function to use
# Child
# Parent
# Caller is responsible for waiting!
# Note: spawnvp[e] isn't currently supported on Windows
# These aren't supplied by the basic Windows code
# but can be easily implemented in Python
# At the moment, Windows doesn't implement spawnvp[e],
# so it won't have spawnlp[e] either.
# VxWorks has no user space shell provided. As a result, running
# command in a shell can't be supported.
# Supply os.popen()
# Helper for popen() -- a proxy for a file whose close waits for the process
# Shift left to match old behavior
# Supply os.fdopen()
# For testing purposes, make sure the function is available when the C
# implementation exists.
# Work from the object's type to match method resolution of other magic
# methods.
# If there is no C implementation, make the pure Python version the
# implementation as transparently as possible.
# Just an alias to cpu_count() (same docstring)
# The recognized platforms - known behaviors
# The built-in int type
# The built-in bytes type
# Set the variant to RFC 4122.
# Set the version number.
# is_safe is a SafeUUID instance.  Return just its value, so that
# it can be un-pickled in older Python versions without SafeUUID.
# is_safe was added in 3.7; it is also omitted when it is "unknown"
# Q. What's the value of being able to sort UUIDs?
# A. Use them as keys in a B-Tree or similar mapping.
# The version bits are only meaningful for RFC 4122 UUIDs.
# LC_ALL=C to ensure English output, stderr=DEVNULL to prevent output
# on stderr (Note: we don't have an example where the words we search
# for are actually localized, but in theory some system could do so.)
# Empty strings will be quoted by popen so we should just omit it
# For MAC (a.k.a. IEEE 802, or EUI-48) addresses, the second least significant
# bit of the first octet signifies whether the MAC address is universally (0)
# or locally (1) administered.  Network cards from hardware manufacturers will
# always be universally administered to guarantee global uniqueness of the MAC
# address, but any particular machine may have other interfaces which are
# locally administered.  An example of the latter is the bridge interface to
# the Touch Bar on MacBook Pros.
# This bit works out to be the 42nd bit counting from 1 being the least
# significant, or 1<<41.  We'll prefer universally administered MAC addresses
# over locally administered ones since the former are globally unique, but
# we'll return the first of the latter found if that's all the machine has.
# See https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local_(U/L_bit)
# Virtual interfaces, such as those provided by
# VPNs, do not have a colon-delimited MAC address
# as expected, but a 16-byte HWAddr separated by
# dashes. These should be ignored in favor of a
# real MAC address
# Accept 'HH:HH:HH:HH:HH:HH' MAC address (ex: '52:54:00:9d:0e:67'),
# but reject IPv6 address (ex: 'fe80::5054:ff:fe9' or '123:2:3:4:5:6:7:8').
# Virtual interfaces, such as those provided by VPNs, do not have a
# colon-delimited MAC address as expected, but a 16-byte HWAddr separated
# by dashes. These should be ignored in favor of a real MAC address
# (Only) on AIX the macaddr value given is not prefixed by 0, e.g.
# en0   1500  link#2      fa.bc.de.f7.62.4 110854824     0 160133733     0     0
# not
# en0   1500  link#2      fa.bc.de.f7.62.04 110854824     0 160133733     0     0
# The following functions call external programs to 'get' a macaddr value to
# be used as basis for an uuid
# This works on Linux ('' or '-a'), Tru64 ('-av'), but not all Unixes.
# This works on Linux with iproute2.
# Try getting the MAC addr from arp based on our IP address (Solaris).
# This works on OpenBSD
# This works on Linux, FreeBSD and NetBSD
# Return None instead of 0.
# This might work on HP-UX.
# This works on AIX and might work on Tru64 UNIX.
# Import optional C extension at toplevel, to help disabling it when testing
# RFC 9562, §6.10-3 says that
# The "multicast bit" of a MAC address is defined to be "the least
# significant bit of the first octet".  This works out to be the 41st bit
# counting from 1 being the least significant bit, or 1<<40.
# See https://en.wikipedia.org/w/index.php?title=MAC_address&oldid=1128764812#Universal_vs._local_(U/L_bit)
# _OS_GETTERS, when known, are targeted for a specific OS or platform.
# The order is by 'common practice' on the specified platform.
# Note: 'posix' and 'windows' _OS_GETTERS are prefixed by a dll/dlload() method
# which, when successful, means none of these "external" methods are called.
# _GETTERS is (also) used by test_uuid.py to SkipUnless(), e.g.,
# bpo-40201: _windll_getnode will always succeed, so these are not needed
# When the system provides a version-1 UUID generator, use it (but don't
# use UuidCreate here because its UUIDs don't conform to RFC 4122).
# 0x01b21dd213814000 is the number of 100-ns intervals between the
# UUID epoch 1582-10-15 00:00:00 and the Unix epoch 1970-01-01 00:00:00.
# instead of stable storage
# The following standard UUIDs are for use with uuid3() or uuid5().
# Dummy value for Enum and Flag as there are explicit checks for them
# before they have been created.
# This is also why there are checks in EnumType like `if Enum is not None`
# do not use `re` as `re` imports `enum`
# num must be a positive integer
# use previous enum.property
# look up previous attribute
# use previous descriptor
# look for a member by this name.
# first step: remove ourself from enum_class
# second step: create member based on enum_class
# special case for tuple enums
# wrap it one more time
# If another member with the same value was already defined, the
# new member becomes an alias to the existing one.
# try to do a fast lookup to avoid the quadratic loop
# this could still be an alias if the value is multi-bit and the
# class is a flag class
# no other instances found, record this member in _member_names_
# This may fail if value is not hashable. We can't add the value
# to the map, and by-value lookups for this value will be
# linear.
# keep track of the value in a list so containment checks are quick
# use a dict -- faster look-up than a list, and keeps insertion order since 3.7
# do nothing, name will be a normal attribute
# While not in use internally, those are common for pretty
# printing and thus excluded from Enum's reservation of
# _sunder_ names
# check if members already defined as auto()
# descriptor overwriting an enum?
# unwrap value here; it won't be processed by the below `else`
# enum overwriting a descriptor?
# unwrap value here -- it will become a member
# insist on an actual tuple, no subclasses, in keeping with only supporting
# top-level auto() usage (not contained in any other data structure)
# accepts iterable as multiple arguments?
# then pass them in singly
# keep private name for backwards compatibility
# check that previous enum members do not exist
# create the namespace dict
# inherit previous flags and _generate_next_value_ function
# an Enum class is final once enumeration items have been defined; it
# cannot be mixed with other types (int, float, etc.) if it has an
# inherited __new__ unless a new __new__ is defined (or the resulting
# class will fail).
# remove any keys listed in _ignore_
# grab member names
# check for illegal enum names (any others?)
# adjust the sunders
# convert to normal dict
# data type of member and the controlling Enum class
# convert future enum members into temporary _proto_members
# house-keeping structures
# for comparing with non-hashable types
# e.g. frozenset() with set()
# now set the __repr__ for the value
# Flag structures (will be removed if final class is not a Flag)
# since 3.12 the note "Error calling __set_name__ on '_proto_member' instance ..."
# is tacked on to the error instead of raising a RuntimeError, so discard it
# update classdict with any changes made by __init_subclass__
# double check that repr and friends are not the mixin's or various
# things break (such as pickle)
# however, if the method is defined in the Enum itself, don't replace
# it
# Also, special handling for ReprEnum
# if member_type does not define __str__, object.__str__ will use
# its __repr__ instead, so we'll also use its __repr__
# check for mixin overrides before replacing
# for Flag, add __or__, __and__, __xor__, and __invert__
# replace any other __new__ with our own (as long as Enum is not None,
# anyway) -- again, this is to support pickle
# if the user defined their own __new__, save it before it gets
# clobbered in case they subclass later
# py3 support for definition order (helps keep py2/py3 code in sync)
# _order_ checking is spread out into three/four steps
# - if enum_class is a Flag:
#   - remove any non-single-bit flags from _order_
# - remove any aliases from _order_
# - check that _order_ and _member_names_ match
# step 1: ensure we have a list
# remove Flag structures if final class is not a Flag
# set correct __iter__
# _order_ step 2: remove any items from _order_ that are not single-bit
# _order_ step 3: remove aliases from _order_
# _order_ step 4: verify that _order_ and _member_names_ match
# simple value lookup if members exist
# otherwise, functional API: we're creating a new Enum type
# no body? no data-type? possibly wrong usage
# both structures are lists
# nicer error message when someone tries to delete an attribute
# (see issue19025).
# return whatever mixed-in data type has
# special processing needed for names?
# Here, names is either an iterable of (name, value) or a mapping.
# Fall back on _getframe if _getframemodulename is missing
# convert all constants from source (or module) that pass filter() to
# a new Enum called name, and export the enum and its members back to
# module;
# also, replace the __reduce_ex__ method so unpickling works in
# previous Python versions
# _value2member_map_ is populated in the same order every time
# for a consistent reverse mapping of number to name when there
# are multiple names for the same number.
# sort by value
# unless some values aren't comparable, in which case sort by name
# ensure final parent class is an Enum derivative, find any concrete
# data type, and check that Enum has no members
# if we hit an Enum, use its _value_repr_
# this is our data repr
# double-check if a dataclass with a default __repr__
# a datatype has a __new__ method, or a __dataclass_fields__ attribute
# now find the correct __new__, checking to see if one was defined
# by the user; also check earlier enum classes in case a __new__ was
# saved as __new_member__
# should __new__ be saved as __new_member__ later?
# check all possibles for __new_member__ before falling back to
# __new__
# if a non-object.__new__ is used then whatever value/tuple was
# assigned to the enum member name will be passed to __new__ and to the
# new enum member's __init__
# _value_ structures are not updated
# if necessary, get redirect in place and then add it to _member_map_
# earlier descriptor found; copy fget, fset, fdel to this one.
# now add to _member_map_ (even aliases)
# keep EnumMeta name for backwards compatibility
# all enum instances are actually created during class construction
# without calling this method; this method is called by the metaclass'
# __call__ (i.e. Color(3) ), and by pickle
# For lookups like Color(Color.RED)
# by-value search for a matching enum member
# see if it's in the reverse mapping (for hashable values)
# Not found, no need to do long O(n) search
# not there, now do long search -- O(n) behavior
# still not found -- verify that members exist, in case somebody got here mistakenly
# (such as via super when trying to override __new__)
# still not found -- try _missing_ hook
# ensure all variables that could hold an exception are destroyed
# unhashable value, do long search
# that's an enum.property
# in case it was added by `dir(self)`
# enum.property is used to provide access to the `name` and
# `value` attributes of enum members while keeping some measure of
# protection from modification, while still allowing for an enumeration
# to have members named `name` and `value`.  This works because each
# instance of enum.property saves its companion member, which it returns
# on class lookup; on instance lookup it either executes a provided function
# or raises an AttributeError.
# it must be a string
# check that encoding argument is a string
# check that errors argument is a string
# should not be used with Flag-type enums
# check boundaries
# - value must be in range (e.g. -16 <-> +15, i.e. ~15 <-> 15)
# - value must not include any skipped flags (e.g. if bit 2 is
#   skipped, then 0b0101 is invalid)
# get members and unknown
# normal Flag?
# construct a singleton enum pseudo-member
# use setdefault in case another thread already created a composite
# with this value
# note: zero is a special case -- always add it
# Flag / IntFlag
# create basic member (possibly isolate value for alias check)
# now check if alias
# an alias to an existing member
# finish creating member
# not a multi-bit alias, record in _member_names_ and _flag_mask_
# Enum / IntEnum / StrEnum
# check for duplicate names
# check for powers of two
# check for missing consecutive integers
# limit max length to protect against DoS attacks
# examine each alias and check for unnamed flags
# not an alias
# negative numbers are not checked
# keys known to be different, or very long
# members are checked below
# remove all spaces/tabs
# keys known to be different or absent
# cannot compare functions, and it exists in both, so we're good
# method is inherited -- check it out
# if the method existed in only one of the enums, it will have been caught
# in the first checks above
# Auto-generated by Tools/build/generate_token.py
# These aren't used by the C tokenizer but are needed for tokenize.py
# Special definitions for cooperation with parser
# Check dataclass has generated repr method.
# A list of alternating (non-space, space) strings
# drop empty last part
# The SimpleNamespace repr is "namespace" instead of the class
# name, so we do the same here. For subclasses; use the class name.
# Special-case representation of recursion to match standard
# recursive dataclass repr.
# Return triple (repr_string, isreadable, isrecursive).
# This should never be removed, see rationale in:
# https://bugs.python.org/issue43743#msg393429
# macOS
# CMD defaults in Windows 10
# disk_usage is added later, if available on the platform
# Note: copyfileobj() is left alone in order to not introduce any
# unexpected breakage. Possible risks by using zero-copy calls
# in copyfileobj() are:
# - fdst cannot be open in "a"(ppend) mode
# - fsrc and fdst may be open in "t"(ext) mode
# - fsrc may be a BufferedReader (which hides unread data in a buffer),
# - possibly others (e.g. encrypted fs/partition?)
# Hopefully the whole file will be copied in a single call.
# sendfile() is called in a loop until EOF is reached (0 return)
# so a bufsize smaller or bigger than the actual file size
# should not make any difference, also in case the file content
# changes while being copied.
# min 8MiB
# 128MiB
# On 32-bit architectures truncate to 1GiB to avoid OverflowError,
# see bpo-38319.
# ...in order to have a more informative exception.
# sendfile() on this platform (probably Linux < 2.6.33)
# does not support copies between regular files (only
# sockets).
# filesystem is full
# Give up on first call and if no data was copied.
# Localize variable access to minimize overhead.
# Macintosh, Unix.
# All other platforms: check for same pathname.
# File most likely does not exist
# XXX What about other special files? (sockets, devices...)
# Linux
# Windows, see:
# https://github.com/python/cpython/pull/7160#discussion_r195405230
# Issue 43219, raise a less confusing exception
# follow symlinks (aka don't not follow symlinks)
# use the real function if it exists
# use the real function only if it exists
# *and* it supports follow_symlinks
# We must copy extended attributes before the file is (potentially)
# chmod()'ed read-only, otherwise setxattr() will error with -EACCES.
# if we got a NotImplementedError, it's because
# therefore we're out of options--we simply cannot chown the
# symlink.  give up, suppress the error.
# (which is what shutil always did in this circumstance.)
# for compat
# Likely encountered a symlink we aren't allowed to create.
# Fall back on the old code
# Possibly encountered a hidden or readonly file we can't
# overwrite. Fall back on old code
# Special check for directory junctions, which appear as
# symlinks but we want to recurse.
# We can't just leave it to `copy_function` because legacy
# code with a custom `copy_function` may rely on copytree
# doing the right thing.
# ignore dangling symlink if the flag is on
# otherwise let the copy occur. copy2 will raise an error
# Will raise a SpecialFileError for unsupported file types
# catch the Error from the recursive copytree so that we can
# continue with other files
# Copying file access times may fail on Windows
# version vulnerable to race conditions
# Version using fd-based APIs to protect against races
# Each stack item has four elements:
# * func: The first operation to perform: os.lstat, os.close or os.rmdir.
# * dirfd: Open file descriptor, or None if we're processing the top-level
# * path: Path of file to operate upon. This is passed to onexc() if an
# * orig_entry: os.DirEntry, or None if we're processing the top-level
# For error reporting.
# Symlinks to directories are forbidden, see GH-46010.
# Traverse into sub-directory.
# delegate to onerror
# While the unsafe rmtree works fine on bytes, the fd based does not.
# symlinks to directories are forbidden, see bug #1669
# can't continue even if onexc hook returns
# Allow introspection of whether or not the hardening against symlink
# attacks is supported on the current platform
# We might be on a case insensitive filesystem,
# perform the rename anyway.
# Using _basename instead of os.path.basename is important, as we must
# ignore any trailing slash to avoid the basename returning ''
# late import for breaking circular dependency
# creating the tarball
# Maps the name of the archive format to a tuple containing:
# * the archiving function
# * extra keyword arguments
# * description
# Support path-like base_name here for backwards-compatibility.
# first make sure no other unpacker is registered for this extension
# don't extract absolute paths or ones with .. in them
# file
# Maps the name of the unpack format to a tuple containing:
# * extensions
# * the unpacking function
# we need to look at the registered unpackers supported extensions
# -1 means don't change it
# user can either be an int (the uid) or a string (the system username)
# columns, lines are the working values
# only query if necessary
# stdout is None, closed, detached, or not a terminal, or
# os.get_terminal_size() is unsupported
# Check that a given file can be accessed with the correct mode.
# Additionally check that `file` is not a directory, as on Windows
# directories pass the os.access check.
# If we're given a path with a directory part, look it up directly rather
# than referring to PATH directories. This includes checking relative to
# the current directory, e.g. ./script
# os.confstr() or CS_PATH is not available
# bpo-35755: Don't use os.defpath if the PATH environment variable
# is set to an empty string
# PATH='' doesn't match, whereas PATH=':' looks in the current
# PATHEXT is necessary to check on Windows.
# If X_OK in mode, simulate the cmd.exe behavior: look at direct
# match if and only if the extension is in PATHEXT.
# If X_OK not in mode, simulate the first result of where.exe:
# always look at direct match before a PATHEXT match.
# On other platforms you don't have things like PATHEXT to tell you
# what file suffixes are executable, so just pass on cmd as-is.
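The lookup rules above can be seen from the public API; a hedged illustration: an argument with a directory part (here, an absolute path) is checked directly, while a bare name is searched along PATH (and matched against PATHEXT suffixes on Windows). The nonexistent command name is an assumption chosen to miss on any sane PATH.

```python
import shutil, sys

# Direct lookup: sys.executable has a directory part, so PATH is not searched.
assert shutil.which(sys.executable) is not None
# PATH search: a name that should not exist anywhere on PATH.
assert shutil.which('no-such-command-xyz42') is None
```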
# ABC.
# or changing parameter names.  This is still a bit dubious but at
# Concrete numeric types must provide their own hash implementation
## Notes on Decimal
## ----------------
## Decimal has all of the methods specified by the Real abc, but it should
## not be registered as a Real because decimals do not interoperate with
## binary floats (i.e.  Decimal('3.14') + 2.71828 is undefined).  But,
## abstract reals are expected to interoperate (i.e. R1 + R2 should be
## expected to work if R1 and R2 are both Reals).
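The non-interoperability claimed above is observable directly: both operand types return NotImplemented for mixed Decimal/float arithmetic, so Python raises TypeError.

```python
from decimal import Decimal

# Decimal('3.14') + 2.71828 is undefined by design.
try:
    Decimal('3.14') + 2.71828
    mixed_ok = True
except TypeError:
    mixed_ok = False
assert mixed_ok is False
```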
# Concrete implementations of Complex abstract methods.
# Concrete implementation of Real's conversion to float.
# Concrete implementations of Rational and Real abstract methods.
# Don't bind to namespace quite yet, but flag whether the user wants a
# specific namespace or to use __main__.__dict__. This will allow us
# to bind to __main__.__dict__ at completion time, not now.
# get the content of the object, except __builtins__
# bpo-44752: thisobject.word is a method decorated by
# `@property`. What follows applies a postfix if
# thisobject.word is callable, but now we know that
# this is not callable (because it is a property).
# Also, getattr(thisobject, word) will evaluate the
# property method, which is not desirable.
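The hazard described above — `getattr` evaluating a property just to inspect it — is exactly what `inspect.getattr_static` avoids; a small demonstration (the `Box` class is a made-up example):

```python
import inspect

class Box:
    @property
    def value(self):
        raise RuntimeError("side effect we must avoid")

# getattr(Box(), "value") would run the property body and raise;
# getattr_static fetches the descriptor from the class without invoking it.
descr = inspect.getattr_static(Box(), "value")
assert isinstance(descr, property)
```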
# Release references early at shutdown (the readline module's
# contents are quasi-immortal, and the completer function holds a
# reference to globals).
# Written by Nick Coghlan <ncoghlan at gmail.com>
# importlib first so we can test #15386 via -m
# avoid 'import types' just for ModuleType
# TODO: Replace these helpers with importlib._bootstrap_external functions.
# Copy the globals of the temporary module, as they
# may be cleared when the temporary module goes away
# Helper to get the full name, spec and code for a module
# Try importing the parent to avoid catching initialization errors
# If the parent or higher ancestor package is missing, let the
# error be raised by find_spec() below and then be caught. But do
# not allow other errors to be caught.
# Warn if the module has already been imported under its normal name
# No module loaded; being a package is irrelevant
# XXX ncoghlan: Should this be documented and made public?
# (Current thoughts: don't repeat the mistake that led to its
# creation when run_module() no longer met the needs of
# mainmodule.c, but couldn't be changed because it was public)
# i.e. -m switch
# i.e. directory or zipfile execution
# Leave the sys module alone
# Helper that gives a nicer error message when attempting to
# execute a zipfile or directory by invoking __main__.py
# Also moves the standard __main__ out of the way so that the
# preexisting __loader__ entry doesn't cause issues
# Check for a compiled file first
# That didn't work, so try it as normal source code
# Not a valid sys.path entry, so run the code directly
# execfile() doesn't help as we want to allow compiled files
# Finder is defined for path, so add it to
# the start of sys.path
# Here's where things are a little different from the run_module
# case. There, we only had to replace the module in sys while the
# code was running and doing so was somewhat optional. Here, we
# have no choice and we have to remove it even while we read the
# code. If we don't do this, a __loader__ attribute in the
# existing __main__ module may prevent location of the new module.
# Run the module specified as the next command line argument
# Make the requested module sys.argv[0]
# number of bytes to return by default
# An analysis of the MS-Word extensions is available at
# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf
# Internal -- update line number and offset.  This should be
# called for each piece of data exactly once, in order -- in other
# words the concatenation of all the input strings to this
# function should be exactly the entire input.
# Should not fail
# Internal -- parse declaration (for use by subclasses).
# This is some sort of declaration; in "HTML as
# deployed," this should only be the document type
# declaration ("<!DOCTYPE html...>").
# ISO 8879:1986, however, has more complex
# declaration syntax for elements in <!...>, including:
# --comment--
# [marked section]
# name in the following list: ENTITY, DOCTYPE, ELEMENT,
# ATTLIST, NOTATION, SHORTREF, USEMAP,
# LINKTYPE, LINK, IDLINK, USELINK, SYSTEM
# the empty comment <!>
# Start of comment followed by buffer boundary,
# or just a buffer boundary.
# A simple, practical version could look like: ((name|stringlit) S*) + '>'
#comment
# Locate --.*-- as the body of the comment
#marked section
# Locate [statusWord [...arbitrary SGML...]] as the body of the marked section
# Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA
# Note that this is extended by Microsoft Office "Save as Web" function
# to include [if...] and [endif].
#all other declaration elements
# end of declaration syntax
# According to the HTML5 specs sections "8.2.4.44 Bogus
# comment state" and "8.2.4.45 Markup declaration open
# state", a comment token should be emitted.
# Calling unknown_decl provides more flexibility though.
# incomplete
# this could be handled in a separate doctype parser
# must tolerate []'d groups in a content model in an element declaration
# also in data attribute specifications of attlist declaration
# also link type declaration subsets in linktype declarations
# also link attribute specification lists in link declarations
# Internal -- parse a marked section
# Override this to handle MS-word extension syntax <![if word]>content<![endif]>
# look for standard ]]> ending
# look for MS Office ]> ending
# Internal -- parse comment, return length or -1 if not terminated
# Internal -- scan past the internal subset in a <!DOCTYPE declaration,
# returning the index just past any whitespace following the trailing ']'.
# end of buffer; incomplete
# handle the individual names
# parameter entity reference
# end of buffer reached
# Internal -- scan past <!ELEMENT declarations
# style content model; just skip until '>'
# Internal -- scan past <!ATTLIST declarations
# scan a series of attribute descriptions; simplified:
# an enumerated type; look for ')'
# end of buffer, incomplete
# end of buffer
# all done
# Internal -- scan past <!NOTATION declarations
# Internal -- scan past <!ENTITY declarations
# Internal -- scan a name token; return the new position and the token, or
# return -1 if we've reached the end of the buffer.
# To be overridden -- handlers for unknown objects
# turtle.py: a Tkinter based turtle graphics module for Python
# Version 1.1b - 4. 5. 2009
# Copyright (C) 2006 - 2010  Gregor Lingl
# email: glingl@aon.at
# This software is provided 'as-is', without any express or implied
# warranty.  In no event will the authors be held liable for any damages
# arising from the use of this software.
# Permission is granted to anyone to use this software for any purpose,
# including commercial applications, and to alter it and redistribute it
# freely, subject to the following restrictions:
# 1. The origin of this software must not be misrepresented; you must not
# 2. Altered source versions must be plainly marked as such, and must not be
# 3. This notice may not be removed or altered from any source distribution.
# Screen
# TurtleScreen
# RawTurtle
# docstrings
# value need not be converted
### From here up to line    : Tkinter - Interface for turtle.py            ###
### May be replaced by an interface to some different graphics toolkit     ###
## helper functions for Scrolled Canvas, to forward Canvas-methods
## to ScrolledCanvas class
### MANY CHANGES ###
### NEWU!
# expected: ordinary TK.Canvas
# needs amendment
# the window isn't managed by a geometry manager
### else data assumed to be PhotoImage
# Force Turtle window to the front on OS X. This is needed because
# the Turtle window will show behind the Terminal window when you
# start the demo from the command line.
# image
## else shape assumed to be Shape-instance
# mode == "logo":
## three dummy methods to be implemented by child class:
# or "user" or "noresize"
# TurtleScreenBase
# to make self deepcopy-able
# else return None
############################## Delete stampitem from undobuffer if necessary
# if clearstamp is called directly.
## Version with undo-stuff
# Turtle now at end,
# now update currentLine
###### 42! answer to the ultimate question
# of life, the universe and everything
#count=True)
# restore former situation
# Turtle now at position old,
##### If screen were to gain a dot function, see GH #104218.
## check if there is any poly?
################################################################
### screen oriented methods recurring to methods of TurtleScreen
### bit of a hack for methods - turn it into a function
# but we drop the "self" param.
# Try and build one for Python defined functions
## The following mechanism makes all methods of RawTurtle and Turtle available
## as functions. So we can enhance, change, add, delete methods to these
## classes and do not need to change anything here.
# draw 3 squares; the last filled
# move out of the way
# some text
# staircase
# filled staircase
# more text
#end_fill()
# Helper functions.
# Conditions for adding methods.  The boxes indicate what action the
# dataclass decorator takes.  For all of these tables, when I talk
# about init=, repr=, eq=, order=, unsafe_hash=, or frozen=, I'm
# referring to the arguments to the @dataclass decorator.  When
# checking if a dunder method already exists, I mean check for an
# entry in the class's __dict__.  I never check to see if an attribute
# is defined in a base class.
# Key:
# +=========+=========================================+
# | Value   | Meaning                                 |
# +=========+=========================================+
# | <blank> | No action: no method is added.          |
# +---------+-----------------------------------------+
# | add     | Generated method is added.              |
# +---------+-----------------------------------------+
# | raise   | TypeError is raised.                    |
# +---------+-----------------------------------------+
# | None    | Attribute is set to None.               |
# +=========+=========================================+
# __init__
#
#  +--- init= parameter
#  |
#  v     |       |       |
#        |  no   |  yes  |  <--- class has __init__ in __dict__?
# +=======+=======+=======+
# | False |       |       |
# +-------+-------+-------+
# | True  | add   |       |  <- the default
# +=======+=======+=======+
# __repr__
# __setattr__
# __delattr__
# | False |       |       |  <- the default
# | True  | add   | raise |
# Raise because not adding these methods would break the "frozen-ness"
# of the class.
# __eq__
# __lt__
# __le__
# __gt__
# __ge__
# Raise because to allow this case would interfere with using
# functools.total_ordering.
# __hash__
# +------------------- unsafe_hash= parameter
# |       +----------- eq= parameter
# |       |       +--- frozen= parameter
# |       |       |
# v       v       v    |        |        |
#                      |   no   |  yes   |  <--- class has explicitly defined __hash__
# +=======+=======+=======+========+========+
# | False | False | False |        |        | No __eq__, use the base class __hash__
# +-------+-------+-------+--------+--------+
# | False | False | True  |        |        | No __eq__, use the base class __hash__
# +-------+-------+-------+--------+--------+
# | False | True  | False | None   |        | <-- the default, not hashable
# +-------+-------+-------+--------+--------+
# | False | True  | True  | add    |        | Frozen, so hashable, allows override
# +-------+-------+-------+--------+--------+
# | True  | False | False | add    | raise  | Has no __eq__, but hashable
# +-------+-------+-------+--------+--------+
# | True  | False | True  | add    | raise  | Has no __eq__, but hashable
# +-------+-------+-------+--------+--------+
# | True  | True  | False | add    | raise  | Not frozen, but hashable
# +-------+-------+-------+--------+--------+
# | True  | True  | True  | add    | raise  | Frozen, so hashable
# +=======+=======+=======+========+========+
# For boxes that are blank, __hash__ is untouched and therefore
# inherited from the base class.  If the base is object, then
# id-based hashing is used.
# Note that a class may already have __hash__=None if it specified an
# __eq__ method in the class body (not one that was created by
# @dataclass).
# See _hash_action (below) for a coded version of this table.
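Two rows of the table above are easy to verify from user code: the default (`eq=True, frozen=False`) sets `__hash__` to None, while a frozen dataclass gets a generated, value-based `__hash__`.

```python
from dataclasses import dataclass

@dataclass                 # eq=True, frozen=False: the default row
class P:
    x: int

@dataclass(frozen=True)    # frozen, so hashable
class Q:
    x: int

assert P.__hash__ is None          # not hashable by default
assert hash(Q(1)) == hash(Q(1))    # equal values hash equally
```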
# __match_args__
# __match_args__ is always added unless the class already defines it. It is a
# tuple of __init__ parameter names; non-init fields must be matched by keyword.
# Raised when an attempt is made to modify a frozen class.
# A sentinel object for default values to signal that a default
# factory will be used.  This is given a nice repr() which will appear
# in the function signature of dataclasses' constructors.
# A sentinel object to detect if a parameter is supplied or not.  Use
# a class to give it a better repr.
# A sentinel object to indicate that following fields are keyword-only by
# default.  Use a class to give it a better repr.
# Since most per-field metadata will be unused, create an empty
# read-only proxy that can be shared among all fields.
# Markers for the various kinds of fields and pseudo-fields.
# The name of an attribute on the class where we store the Field
# objects.  Also used to check if a class is a Data Class.
# The name of an attribute on the class that stores the parameters to
# @dataclass.
# The name of the function, that if it exists, is called at the end of
# __init__.
# String regex that string annotations for ClassVar or InitVar must match.
# Allows "identifier.identifier[" or "identifier[".
# See https://bugs.python.org/issue33453 for details.
# Atomic immutable types which don't require any recursive handling and for which deepcopy
# returns the same object. We can provide a fast-path for these types in asdict and astuple.
# Common JSON Serializable types
# Other common types
# Other types that are also unaffected by deepcopy
# typing objects, e.g. List[int]
# Instances of Field are only ever created from within this module,
# and only from the field() function, although Field instances are
# exposed externally as (conceptually) read-only objects.
# name and type are filled in after the fact, not in __init__.
# They're not known at the time this class is instantiated, but it's
# convenient if they're available later.
# When cls._FIELDS is filled in with a list of Field objects, the name
# and type fields will have been populated.
# Private: not to be used by user code.
# This is used to support the PEP 487 __set_name__ protocol in the
# case where we're using a field that contains a descriptor as a
# default value.  For details on __set_name__, see
# https://peps.python.org/pep-0487/#implementation-details.
# Note that in _process_class, this Field object is overwritten
# with the default value, so the end result is a descriptor that
# had __set_name__ called on it at the right time.
# There is a __set_name__ method on the descriptor, call
# it.
# This function is used instead of exposing Field creation directly,
# so that a type checker can be told (via overloads) that this is a
# function whose type depends on its parameters.
# Returns the fields as __init__ will output them.  It returns 2 tuples:
# the first for normal args, and the second for keyword args.
# Return a string representing each field of obj_name as a tuple
# member.  So, if fields is ['x', 'y'] and obj_name is "self",
# return "(self.x,self.y)".
# Special case for the 0-tuple.
# Note the trailing comma, needed if this turns out to be a 1-tuple.
# Keep track if this method is allowed to be overwritten if it already
# exists in the class.  The error is method-specific, so keep it with
# the name.  We'll use this when we generate all of the functions in
# the add_fns_to_class call.  overwrite_error is either True, in which
# case we'll raise an error, or it's a string, in which case we'll
# raise an error and append this string.
# Should this function always overwrite anything that's already in the
# class?  The default is to not overwrite a function that already
# exists.
# Compute the text of the entire function, add it to the text we're generating.
# The source to all of the functions we're generating.
# The locals they use.
# The names of all of the functions, used for the return value of the
# outer function.  Need to handle the 0-tuple specially.
# txt is the entire function we're going to execute, including the
# bodies of the functions we're defining.  Here's a greatly simplified
# version:
# def __create_fn__():
# return __init__,__repr__
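The `__create_fn__` pattern sketched above can be reduced to a tiny working version (a simplification of what dataclasses does, not its actual code): build source text for an outer function, `exec` it, then call the outer function to obtain the inner one.

```python
def create_fn(name, args, body):
    # Build the source of an outer function that defines and returns
    # the function we actually want, then exec it.
    locals_ = {}
    txt = "def __create_fn__():\n"
    txt += f" def {name}({', '.join(args)}):\n"
    for line in body:
        txt += f"  {line}\n"
    txt += f" return {name}\n"
    exec(txt, {}, locals_)
    return locals_['__create_fn__']()

add = create_fn('add', ['a', 'b'], ['return a + b'])
assert add(2, 3) == 5
```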
# Now that we've generated the functions, assign them into cls.
# See if it's an error to overwrite this particular function.
# If we're a frozen class, then assign to our fields in __init__
# via object.__setattr__.  Otherwise, just use a simple
# assignment.
# self_name is what "self" is called in this function: don't
# hard-code "self", since that might be a field name.
# Return the text of the line in the body of __init__ that will
# initialize this field.
# This field has a default factory.  If a parameter is
# given, use it.  If not, call the factory.
# This is a field that's not in the __init__ params, but
# has a default factory function.  It needs to be
# initialized here by calling the factory function,
# because there's no other way to initialize it.
# For a field initialized with a default=defaultvalue, the
# class dict just has the default value
# (cls.fieldname=defaultvalue).  But that won't work for a
# default factory, the factory must be called in __init__
# and we must assign that to self.fieldname.  We can't
# fall back to the class dict's value, both because it's
# not set, and because it might be different per-class
# (which, after all, is why we have a factory function!).
# No default factory.
# There's no default, just do an assignment.
# If the class has slots, then initialize this field.
# This field does not need initialization: reading from it will
# just use the class attribute that contains the default.
# Signify that to the caller by returning None.
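The factory-vs-class-attribute distinction explained above is visible from the outside: with `default_factory`, each `__init__` call produces a fresh object rather than sharing one class-level value.

```python
from dataclasses import dataclass, field

@dataclass
class Bag:
    items: list = field(default_factory=list)   # factory runs in __init__

a, b = Bag(), Bag()
a.items.append(1)
assert a.items == [1] and b.items == []   # each instance gets its own list
```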
# Only test this now, so that we can create variables for the
# default.  However, return None to signify that we're not going
# to actually do the assignment statement for InitVars.
# Now, actually generate the field assignment.
# Return the __init__ parameter string for this field.  For
# example, the equivalent of 'x:int=3' (except instead of 'int',
# reference a variable set to int, and instead of '3', reference a
# variable set to 3).
# There's no default, and no default_factory, just output the
# variable name and type.
# There's a default, this will be the name that's used to look
# it up.
# There's a factory function.  Set a marker.
# fields contains both real fields and InitVar pseudo-fields.
# Make sure we don't have fields without defaults following fields
# with defaults.  This actually would be caught when exec-ing the
# function source code, but catching it here gives a better error
# message, and future-proofs us in case we build up the function
# using ast.
# Only consider the non-kw-only fields in the __init__ call.
# line is None means that this field doesn't require
# initialization (it's a pseudo-field).  Just skip it.
# Does this class have a post-init function?
# If no body lines, use 'pass'.
# Add the keyword-only args.  Because the * can only be added if
# there's at least one keyword-only arg, there needs to be a test here
# (instead of just concatenating the lists together).
# This test uses a typing internal class, but it's the best way to
# test if this is a ClassVar.
# The module we're checking against is the module we're
# currently in (dataclasses.py).
# Given a type annotation string, does it refer to a_type in
# a_module?  For example, when checking that annotation denotes a
# ClassVar, then a_module is typing, and a_type is
# typing.ClassVar.
# It's possible to look up a_module given a_type, but it involves
# looking in sys.modules (again!), and seems like a waste since
# the caller already knows a_module.
# - annotation is a string type annotation
# - cls is the class that this annotation was found in
# - a_module is the module we want to match
# - a_type is the type in that module we want to match
# - is_type_predicate is a function called with (obj, a_module)
# Since this test does not do a local namespace lookup (and
# instead only a module (global) lookup), there are some things it
# gets wrong.
# With string annotations, cv0 will be detected as a ClassVar:
# But in this example cv1 will not be detected as a ClassVar:
# In C1, the code in this function (_is_type) will look up "CV" in
# the module and not find it, so it will not consider cv1 as a
# ClassVar.  This is a fairly obscure corner case, and the best
# way to fix it would be to eval() the string "CV" with the
# correct global and local namespaces.  However that would involve
# an eval() penalty for every single field of every dataclass
# that's defined.  It was judged not worth it.
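Regardless of the best-effort regex handling of string annotations discussed above, a real `ClassVar` annotation is always detected; this example uses the non-string form, which is the reliable case:

```python
from typing import ClassVar
from dataclasses import dataclass, fields

@dataclass
class C0:
    cv0: ClassVar[int] = 0   # a real ClassVar: reliably detected
    x: int = 1

assert [f.name for f in fields(C0)] == ['x']   # cv0 is not a field
assert C0().x == 1
```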
# No module name, assume the class's module did
# "from dataclasses import InitVar".
# Look up module_name in the class's module.
# Return a Field object for this field name and type.  ClassVars and
# InitVars are also returned, but marked as such (see f._field_type).
# default_kw_only is the value of kw_only to use if there isn't a field()
# that defines it.
# If the default value isn't derived from Field, then it's only a
# normal default value.  Convert it to a Field().
# This is a field in __slots__, so it has no default value.
# Only at this point do we know the name and the type.  Set them.
# Assume it's a normal field until proven otherwise.  We're next
# going to decide if it's a ClassVar or InitVar, everything else
# is just a normal field.
# In addition to checking for actual types here, also check for
# string annotations.  get_type_hints() won't always work for us
# (see https://github.com/python/typing/issues/508 for example),
# plus it's expensive and would require an eval for every string
# annotation.  So, make a best effort to see if this is a ClassVar
# or InitVar using regex's and checking that the thing referenced
# is actually of the correct type.
# For the complete discussion, see https://bugs.python.org/issue33453
# If typing has not been imported, then it's impossible for any
# annotation to be a ClassVar.  So, only look for ClassVar if
# typing has been imported by any module (not necessarily cls's
# module).
# If the type is InitVar, or if it's a matching string annotation,
# then it's an InitVar.
# Validations for individual fields.  This is delayed until now,
# instead of in the Field() constructor, since only here do we
# know the field name, which allows for better error reporting.
# Special restrictions for ClassVar and InitVar.
# Should I check for other field settings? default_factory
# seems the most serious to check for.  Maybe add others.  For
# example, how about init=False (or really,
# init=<not-the-default-init-value>)?  It makes no sense for
# ClassVar and InitVar to specify init=<anything>.
# kw_only validation and assignment.
# For real and InitVar fields, if kw_only wasn't specified use the
# default value.
# Make sure kw_only isn't set for ClassVars
# For real fields, disallow mutable defaults.  Use unhashable as a proxy
# indicator for mutability.  Read the __hash__ attribute from the class,
# not the instance.
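That unhashability check rejects mutable defaults at class-creation time; a list default therefore fails immediately with ValueError:

```python
from dataclasses import dataclass

try:
    @dataclass
    class Bad:
        items: list = []   # list is unhashable, hence treated as mutable
    raised = False
except ValueError:
    raised = True          # "mutable default ... use default_factory"
assert raised
```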
# Never overwrites an existing attribute.  Returns True if the
# attribute already exists.
# Decide if/how we're going to create a hash function.  Key is
# (unsafe_hash, eq, frozen, does-hash-exist).  Value is the action to
# take.  The common case is to do nothing, so instead of providing a
# function that is a no-op, use None to signify that.
# It's sort of a hack that I'm setting this here, instead of at
# func_builder.add_fns_to_class time, but since this is an exceptional case
# (it's not setting an attribute to a function, but to a scalar value),
# just do it directly here.  I might come to regret this.
# Raise an exception.
# See https://bugs.python.org/issue32929#msg312829 for an if-statement
# version of this table.
# Now that dicts retain insertion order, there's no reason to use
# an ordered dict.  I am leveraging that ordering here, because
# derived class fields overwrite base class fields, but the order
# is defined by the base class, which is found first.
# Theoretically this can happen if someone writes
# a custom string to cls.__module__.  In which case
# such a dataclass won't be fully introspectable
# (w.r.t. typing.get_type_hints) but will still function
# correctly.
# Find our base classes in reverse MRO order, and exclude
# ourselves.  In reversed order so that more derived classes
# override earlier field definitions in base classes.  As long as
# we're iterating over them, see if all or any of them are frozen.
# By default `all_frozen_bases` is `None` to represent the case
# where a dataclass does not have any bases with `_FIELDS`.
# Only process classes that have been processed by our
# decorator.  That is, they have a _FIELDS attribute.
# Annotations defined specifically in this class (not in base classes).
# Fields are found from cls_annotations, which is guaranteed to be
# ordered.  Default values are from class attributes, if a field
# has a default.  If the default value is a Field(), then it
# contains additional info beyond (and possibly including) the
# actual default value.  Pseudo-fields ClassVars and InitVars are
# included, despite the fact that they're not real fields.  That's
# dealt with later.
# Now find fields in our class.  While doing so, validate some
# things, and set the default values (as class attributes) where
# we can.
# Get a reference to this module for the _is_kw_only() test.
# See if this is a marker to change the value of kw_only.
# Switch the default to kw_only=True, and ignore this
# annotation: it's not a real field.
# Otherwise it's a field of some type.
# If the class attribute (which is the default value for this
# field) exists and is of type 'Field', replace it with the
# real default.  This is so that normal class introspection
# sees a real default value, not a Field.
# If there's no default, delete the class attribute.
# This happens if we specify field(repr=False), for
# example (that is, we specified a field object, but
# no default value).  Also if we're using a default
# factory.  The class attribute should not be set at
# all in the post-processed class.
# Do we have any Field members that don't also have annotations?
# Check rules that apply if we are derived from any dataclasses.
# Raise an exception if any of our bases are frozen, but we're not.
# Raise an exception if we're frozen, but none of our bases are.
# Remember all of the fields on our class (including bases).  This
# also marks this class as being a dataclass.
# Was this class defined with an explicit __hash__?  Note that if
# __eq__ is defined in this class, then python will automatically
# set __hash__ to None.  This is a heuristic, as it's possible
# that such a __hash__ == None was not auto-generated, but it's
# close enough.
# If we're generating ordering methods, we must be generating the
# eq methods.
# Include InitVars and regular fields (so, not ClassVars).  This is
# initialized here, outside of the "if init:" test, because std_init_fields
# is used with match_args, below.
# The name to use for the "self"
# param in __init__.  Use "self"
# if possible.
# Get the fields as a list, and include only real fields.  This is
# used in all of the following methods.
# Create __eq__ method.  There's no need for a __ne__ method,
# since python will call __eq__ and negate it.
# Create and set the ordering methods.
# Create a comparison function.  If the fields in the object are
# named 'x' and 'y', then self_tuple is the string
# '(self.x,self.y)' and other_tuple is the string
# '(other.x,other.y)'.
# Decide if/how we're going to create a hash function.
# Generate the methods and add them to the class.  This needs to be done
# before the __doc__ logic below, since inspect will look at the __init__
# signature.
# Create a class doc-string.
# In some cases fetching a signature is not possible.
# But, we surely should not fail in this case.
# I could probably compute this once.
# It's an error to specify weakref_slot if slots is False.
# _dataclass_getstate and _dataclass_setstate are needed for pickling frozen
# classes with slots.  These could be slightly more performant if we generated
# the code instead of iterating over fields.  But that can be a project for
# another day, if performance becomes an issue.
# use setattr because dataclass may be frozen
# `__dictoffset__` and `__weakrefoffset__` can tell us whether
# the base type has dict/weakref slots, in a way that works correctly
# for both Python classes and C extension types. Extension types
# don't use `__slots__` for slot creation
# Slots may be any iterable, but we cannot handle an iterator
# because it will already be (partially) consumed.
# Need to create a new class, since we can't set __slots__
# Make sure __slots__ isn't already set.
# Create a new dict for our new class.
# Make sure slots don't overlap with those in base classes.
# The slots for our class.  Remove slots from our base classes.  Add
# '__weakref__' if weakref_slot was given, unless it is already present.
# gh-93521: '__weakref__' also needs to be filtered out if
# already present in inherited_slots
# Remove our attributes, if present. They'll still be
# Remove __dict__ itself.
# Clear existing `__weakref__` descriptor, it belongs to a previous type:
# gh-102069
# And finally create the class.
# Need this for pickling frozen classes with slots.
# See if we're being called as @dataclass or @dataclass().
# We're called with parens.
# We're called as @dataclass without parens.
# Might it be worth caching this, per class?
# Exclude pseudo-fields.  Note that fields is sorted by insertion
# order, so the order of the tuple is as the fields were defined.
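The pseudo-field exclusion described above is easy to confirm: `fields()` reports only real fields, while `ClassVar` and `InitVar` entries are filtered out (the `InitVar` still reaches `__post_init__`).

```python
from dataclasses import dataclass, fields, InitVar
from typing import ClassVar

@dataclass
class C:
    x: int
    kind: ClassVar[str] = 'c'   # pseudo-field: excluded from fields()
    scale: InitVar[int] = 1     # pseudo-field: excluded from fields()

    def __post_init__(self, scale):
        self.x *= scale

assert [f.name for f in fields(C)] == ['x']
assert C(2, scale=3).x == 6
```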
# dataclass instance: fast path for the common case
# handle the builtin types first for speed; subclasses handled below
# obj is a namedtuple.  Recurse into it, but the returned
# object is another namedtuple of the same type.  This is
# similar to how other list- or tuple-derived classes are
# treated (see below), but we just need to create them
# differently because a namedtuple's __init__ needs to be
# called differently (see bpo-34363).
# I'm not using namedtuple's _asdict()
# method, because:
# - it does not recurse in to the namedtuple fields and
# - I don't actually want to return a dict here.  The main
# obj is a defaultdict, which has a different constructor from
# dict as it requires the default_factory as its first arg.
# Assume we can create an object of this type by passing in a
# generator
# generator (which is not true for namedtuples, handled
# above).
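The namedtuple special case above is observable through `asdict`: the recursed namedtuple keeps its own type, while ordinary containers are rebuilt as copies.

```python
from dataclasses import dataclass, asdict
from collections import namedtuple

Pt = namedtuple('Pt', ['x', 'y'])

@dataclass
class C:
    pos: Pt
    tags: list

c = C(Pt(1, 2), ['a'])
d = asdict(c)
assert d == {'pos': Pt(1, 2), 'tags': ['a']}
assert isinstance(d['pos'], Pt)   # namedtuple type is preserved
assert d['tags'] is not c.tags    # containers are copied, not shared
```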
# While we're looking through the field names, validate that they
# are identifiers, are not keywords, and not duplicates.
# Update 'ns' with the user-supplied namespace plus our calculated values.
# We use `types.new_class()` instead of simply `type()` to allow dynamic creation
# of generic dataclasses.
# For pickling to work, the __module__ variable needs to be set to the frame
# where the dataclass is created.
# Apply the normal decorator.
# We're going to mutate 'changes', but that's okay because it's a
# new dict, even if called with 'replace(self, **my_changes)'.
# It's an error to have init=False fields in 'changes'.
# If a field is not in 'changes', read its value from the provided 'self'.
# Only consider normal fields or InitVars.
# Error if this field is specified in changes.
# Create the new object, which calls __init__() and
# __post_init__() (if defined), using all of the init fields we've
# added and/or left in 'changes'.  If there are values supplied in
# changes that aren't fields, this will correctly raise a
# TypeError.
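The `replace()` flow described above, in miniature: unchanged fields are read from the original, overrides come from `changes`, and `__init__` runs on the result — which is why it works even on frozen instances.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
q = replace(p, y=5)   # builds Point(x=p.x, y=5) via __init__
assert (q.x, q.y) == (1, 5)
assert p.y == 2       # the original is untouched
```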
# Access WeakSet through the weakref module.
# This code is separated-out because it is needed
# by abc.py to load everything else at startup.
# This context manager registers itself in the current iterators of the
# weak container, such as to delay all removals until the context manager
# exits.
# This technique should be relatively thread-safe (since sets are).
# Don't create cycles
# A list of keys to be removed
# Caveat: the iterator will keep a strong reference to
# `item` until it is resumed or closed.
# We cannot use io.text_encoding() here because an old openhook may
# not take an encoding parameter.
# restrict mode argument to reading modes
# repeat with next file
# restore FileInput._readline
# EncodingWarning is emitted in __init__() already
# The next few lines may raise OSError
# This may raise OSError
# Custom hooks made previous to Python 3.10 didn't have
# encoding argument
# hide FileInput._readline
# EncodingWarning is emitted in FileInput() already.
# gzip and bz2 are binary mode by default.
# Shortcut for use in isinstance testing
# These are purely informational; no code uses these.
# File format version we write
# Original protocol 0
# Protocol 0 with INST added
# Original protocol 1
# Protocol 1 with BINFLOAT added
# Protocol 2
# Protocol 3
# Protocol 4
# Protocol 5
# Old format versions we can read
# This is the highest protocol number we know how to read.
# The protocol we write by default.  May be less than HIGHEST_PROTOCOL.
# Only bump this if the oldest still supported version of Python already
# includes it.
# An instance of _Stop is raised by Unpickler.load_stop() in response to
# the STOP opcode, passing the object that is the result of unpickling.
# Pickle opcodes.  See pickletools.py for extensive docs.  The listing
# here is in kind-of alphabetical order of 1-character pickle code.
# pickletools groups them by purpose.
# push special markobject on stack
# every pickle ends with STOP
# discard topmost stack item
# discard stack top through topmost markobject
# duplicate top stack item
# push float object; decimal string argument
# push integer or bool; decimal string argument
# push four-byte signed int
# push 1-byte unsigned int
# push long; decimal string argument
# push 2-byte unsigned int
# push None
# push persistent object; id is taken from string arg
# apply callable to argtuple, both on stack
# push string; NL-terminated string argument
# push string; counted binary string argument
# push Unicode string; raw-unicode-escaped argument
# append stack top to list below it
# call __setstate__ or __dict__.update()
# push self.find_class(modname, name); 2 string args
# build a dict from stack items
# push empty dict
# extend list on stack by topmost stack slice
# push item from memo on stack; index is string arg
# build & push class instance
# push item from memo on stack; index is 4-byte arg
# build list from topmost stack items
# push empty list
# store stack top in memo; index is string arg
# add key+value pair to dict
# build tuple from topmost stack items
# push empty tuple
# modify dict by adding topmost key+value pairs
# push float; arg is 8-byte float encoding
# not an opcode; see INT docs in pickletools.py
# identify pickle protocol
# build object by applying cls.__new__ to argtuple
# push object from extension registry; 1-byte index
# ditto, but 2-byte index
# ditto, but 4-byte index
# build 1-tuple from stack top
# build 2-tuple from two topmost stack items
# build 3-tuple from three topmost stack items
# push True
# push False
# push long from < 256 bytes
# push really big long
# Protocol 3 (Python 3.x)
# push bytes; counted binary string argument
# push short string; UTF-8 length < 256 bytes
# push very long string
# push very long bytes string
# push empty set on the stack
# modify set by adding topmost stack items
# build frozenset from topmost stack items
# like NEWOBJ but work with keyword only arguments
# same as GLOBAL but using names on the stacks
# store top of the stack in memo
# indicate the beginning of a new frame
# push bytearray
# push next out-of-band buffer
# make top of stack readonly
# Issue a single call to the write method of the underlying
# file object for the frame opcode with the size of the
# frame. The concatenation is expected to be less expensive
# than issuing an additional call to write.
# Issue a separate call to write to append the frame
# contents, without concatenating with the above, to avoid a
# memory copy.
# Start the new frame with a new io.BytesIO instance so that
# the file object can have delayed access to the previous frame
# contents via an unreleased memoryview of the previous
# io.BytesIO instance.
# Terminate the current frame and flush it to the file.
# Perform direct write of the header and payload of the large binary
# object. Be careful not to concatenate the header and the payload
# prior to calling 'write' as we do not want to allocate a large
# temporary bytes object.
# We intentionally do not insert a protocol 4 frame opcode to make
# it possible to optimize file.read calls in the loader.
# Tools used for pickling.
# Protect the iteration by using a list copy of sys.modules against dynamic
# modules that trigger imports of other modules upon calls to getattr.
# bpo-42406
# Pickling machinery
# Check whether Pickler was initialized correctly. This is
# only needed to mimic the behavior of _pickle.Pickler.dump().
# The Pickler memo is a dictionary mapping object ids to 2-tuples
# that contain the Unpickler memo key and the object being memoized.
# The memo key is written to the pickle and will become
# the key in the Unpickler's memo.  The object is stored in the
# Pickler memo so that transient objects are kept alive during
# pickling.
# The use of the Unpickler memo length as the memo key is just a
# convention.  The only requirement is that the memo values be unique.
# But there appears to be no advantage to any other scheme, and this
# scheme allows the Unpickler memo to be implemented as a plain (but
# growable) array, indexed by memo key.
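A sketch of the memoization convention described above; `memoize` here is a simplified stand-in for the Pickler method, not the real implementation:

```python
# The memo maps id(obj) -> (memo_key, obj); keeping obj in the tuple
# guarantees its id() stays unique for the duration of the pickling.
memo = {}

def memoize(obj):
    memo[id(obj)] = (len(memo), obj)  # memo key == current memo length

a, b = [1], [2]
memoize(a)
memoize(b)
assert memo[id(a)] == (0, a)
assert memo[id(b)] == (1, b)
```

Because the key is always the current memo length, keys are dense integers starting at 0, which is what lets the Unpickler use a plain growable array.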
# Return a PUT (BINPUT, LONG_BINPUT) opcode string, with argument i.
# Return a GET (BINGET, LONG_BINGET) opcode string, with argument i.
# Check for persistent id (defined by a subclass)
# Check the memo
# Check the type dispatch table
# Call unbound method with explicit self
# Check private dispatch table if any, or else
# copyreg.dispatch_table
# Check for a class with a custom metaclass; treat as regular
# class
# Check for a __reduce_ex__ method, fall back to __reduce__
# Check for string returned by reduce(), meaning "save as global"
# Assert that reduce() returned a tuple
# Assert that it returned an appropriately sized tuple
# Save the reduce() output and finally memoize the object
# This exists so a subclass can override it
# Save a persistent id reference
# This API is called by some subclasses
# A __reduce__ implementation can direct protocol 2 or newer to
# use the more efficient NEWOBJ opcode, while still
# allowing protocol 0 and 1 to work normally.  For this to
# work, the function returned by __reduce__ should be
# called __newobj__, and its first argument should be a
# class.  The implementation for __newobj__
# should be as follows, although pickle has no way to
# verify this:
# def __newobj__(cls, *args):
#     return cls.__new__(cls, *args)
# Protocols 0 and 1 will pickle a reference to __newobj__,
# while protocol 2 (and above) will pickle a reference to
# cls, the remaining args tuple, and the NEWOBJ code,
# which calls cls.__new__(cls, *args) at unpickling time
# (see load_newobj below).  If __reduce__ returns a
# three-tuple, the state from the third tuple item will be
# pickled regardless of the protocol, calling __setstate__
# at unpickling time (see load_build below).
# Note that no standard __newobj__ implementation exists;
# you have to provide your own.  This is to enforce
# compatibility with Python 2.2 (pickles written using
# protocol 0 or 1 in Python 2.3 should be unpicklable by
# Python 2.2).
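A sketch of the contract described above, with a user-supplied `__newobj__`; the class `Spam` is a made-up example, and the last two lines simulate what the unpickler effectively does rather than running a real round trip:

```python
def __newobj__(cls, *args):
    # The implementation pickle expects (but, as noted, cannot verify).
    return cls.__new__(cls, *args)

class Spam:
    def __init__(self, n):
        self.n = n
    def __reduce__(self):
        # Protocol 2+ recognizes the function *name* "__newobj__" and
        # emits NEWOBJ; protocols 0/1 pickle a reference to the function.
        return (__newobj__, (Spam,), {"n": self.n})

# What the unpickler effectively does for NEWOBJ followed by BUILD:
obj = __newobj__(Spam)          # cls.__new__(cls) -- __init__ is skipped
obj.__dict__.update({"n": 7})   # load_build applies the third-item state
assert isinstance(obj, Spam) and obj.n == 7
```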
# If the object is already in the memo, this means it is
# recursive. In this case, throw away everything we put on the
# stack, and fetch the object back from the memo.
# More new special cases (that work with older protocols as
# well): when __reduce__ returns a tuple with 4 or 5 items,
# the 4th and 5th item should be iterators that provide list
# items and dict items (as (key, value) tuples), or None.
# If a state_setter is specified, call it instead of load_build
# to update obj with its previous state.
# First, push state_setter and its tuple of expected arguments
# (obj, state) onto the stack.
# simple BINGET opcode as obj is already memoized.
# Trigger a state_setter(obj, state) function call.
# The purpose of state_setter is to carry out an
# in-place modification of obj.  We do not care what the
# method might return, so its output is eventually removed from
# the stack.
# Methods below this point are dispatched through the dispatch table
# If the int is small enough to fit in a signed 4-byte 2's-comp
# format, we can store it more efficiently than the general
# case.
# First one- and two-byte unsigned ints:
# Next check for 4-byte signed ints:
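The size checks described above can be observed with pickletools, which names the opcode chosen for each integer; `first_real_opcode` is a small helper written for this illustration:

```python
import pickle
import pickletools

def first_real_opcode(data):
    # Skip the PROTO (and, at protocol 4+, FRAME) bookkeeping opcodes.
    for op, arg, pos in pickletools.genops(data):
        if op.name not in ("PROTO", "FRAME"):
            return op.name

assert first_real_opcode(pickle.dumps(200, 2)) == "BININT1"    # 1-byte unsigned
assert first_real_opcode(pickle.dumps(60000, 2)) == "BININT2"  # 2-byte unsigned
assert first_real_opcode(pickle.dumps(-5, 2)) == "BININT"      # 4-byte signed
assert first_real_opcode(pickle.dumps(2**40, 2)) == "LONG1"    # general long
```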
# helper for writing bytes objects for protocol >= 3
# without memoizing them
# bytes object is empty
# helper for writing bytearray objects for protocol >= 5
# bytearray is empty
# Write data in-band
# XXX The C implementation avoids a copy here
# Write data out-of-band
# Escape what raw-unicode-escape doesn't, but memoize the original.
# EOF on DOS
# tuple is empty
# Subtle.  Same as in the big comment below.
# proto 0 or proto 1 and tuple isn't empty, or proto > 1 and tuple
# has more than 3 elements.
# Subtle.  d was not in memo when we entered save_tuple(), so
# the process of saving the tuple's elements must have saved
# the tuple itself:  the tuple is recursive.  The proper action
# now is to throw away everything we put on the stack, and
# simply GET the tuple (it's already constructed).  This check
# could have been done in the "for element" loop instead, but
# recursive tuples are a rare thing.
# proto 0 -- POP_MARK not available
# No recursion.
# proto 0 -- can't use EMPTY_LIST
# Helper to batch up APPENDS sequences
# else tmp is empty, and we're done
# proto 0 -- can't use EMPTY_DICT
# Helper to batch up SETITEMS sequences; proto >= 1 only
# Should never happen in normal circumstances,
# since the type and the value of the code are
# checked in copyreg.add_extension().
# Non-ASCII identifiers are supported only with protocols >= 3.
# In protocol < 4, objects with multi-part __qualname__
# are represented as
# getattr(getattr(..., attrname1), attrname2).
# Unpickling machinery
# Check whether Unpickler was initialized correctly. This is
# only needed to mimic the behavior of _pickle.Unpickler.load().
# Return a list of items pushed in the stack after last MARK instruction.
# Corrupt or hostile pickle -- we never write one like this
# Used to allow strings from Python 2 to be decoded either as
# bytes or Unicode strings.  This should be used only with the
# STRING, BINSTRING and SHORT_BINSTRING opcodes.
# Strip outermost quotes
# Deprecated BINSTRING uses signed 32-bit length
# INST and OBJ differ only in how they get a class object.  It's not
# only sensible to do the rest in a common routine; the two routines
# previously diverged and grew different bugs.
# klass is the class to instantiate, and k points to the topmost mark
# object, following which are the arguments for klass.__init__.
# Stack is ... markobject classobject arg1 arg2 ...
# note that 0 is forbidden
# Corrupt or hostile pickle.
# Subclasses may override this.
# Even though PEP 307 requires extend() and append() methods,
# fall back on append() if the object has no extend() method
# for backward compatibility.
# Shorthands
# Use the faster _pickle if possible
# Doctest
# Members:
# a
# b
# b2j
# fullbcount
# matching_blocks
# opcodes
# isjunk
# bjunk
# bpopular
# For each element x in b, set b2j[x] to a list of the indices in
# b where x appears; the indices are in increasing order; note that
# the number of times x appears in b is len(b2j[x]) ...
# when self.isjunk is defined, junk elements don't show up in this
# map at all, which stops the central find_longest_match method
# from starting any matching block at a junk element ...
# b2j also does not contain entries for "popular" elements, meaning
# elements that account for more than 1 + 1% of the total elements, and
# when the sequence is reasonably large (>= 200 elements); this can
# be viewed as an adaptive notion of semi-junk, and yields an enormous
# speedup when, e.g., comparing program files with hundreds of
# instances of "return NULL;" ...
# note that this is only called when b changes; so for cross-product
# kinds of matches, it's best to call set_seq2 once, then set_seq1
# repeatedly
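The core of the b2j mapping described above can be sketched in a few lines (the junk and "popular" pruning steps are omitted; this is only the index-building step):

```python
# For each element of b, record the indices where it appears,
# in increasing order.
b = "abcab"
b2j = {}
for i, elt in enumerate(b):
    b2j.setdefault(elt, []).append(i)

assert b2j == {"a": [0, 3], "b": [1, 4], "c": [2]}
```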
# Because isjunk is a user-defined (not C) function, and we test
# for junk a LOT, it's important to minimize the number of calls.
# Before the tricks described here, __chain_b was by far the most
# time-consuming routine in the whole module!  If anyone sees
# Jim Roskind, thank him again for profile.py -- I never would
# have guessed that.
# The first trick is to build b2j ignoring the possibility
# of junk.  I.e., we don't call isjunk at all yet.  Throwing
# out the junk later is much cheaper than building b2j "right"
# from the start.
# Purge junk elements
# separate loop avoids separate list of keys
# Purge popular elements that are not junk
# ditto; as fast for 1% deletion
# CAUTION:  stripping common prefix or suffix would be incorrect.
# E.g.,
# Longest matching block is "ab", but if common prefix is
# stripped, it's "a" (tied with "b").  UNIX(tm) diff does so
# strip, so ends up claiming that ab is changed to acab by
# inserting "ca" in the middle.  That's minimal but unintuitive:
# "it's obvious" that someone inserted "ac" at the front.
# Windiff ends up at the same place as diff, but by pairing up
# the unique 'b's and then matching the first two 'a's.
# find longest junk-free match
# during an iteration of the loop, j2len[j] = length of longest
# junk-free match ending with a[i-1] and b[j]
# look at all instances of a[i] in b; note that because
# b2j has no junk keys, the loop is skipped if a[i] is junk
# a[i] matches b[j]
# Extend the best by non-junk elements on each end.  In particular,
# "popular" non-junk elements aren't in b2j, which greatly speeds
# the inner loop above, but also means "the best" match so far
# doesn't contain any junk *or* popular non-junk elements.
# Now that we have a wholly interesting match (albeit possibly
# empty!), we may as well suck up the matching junk on each
# side of it too.  Can't think of a good reason not to, and it
# saves post-processing the (possibly considerable) expense of
# figuring out what to do with it.  In the case of an empty
# interesting match, this is clearly the right thing to do,
# because no other kind of match is possible in the regions.
# This is most naturally expressed as a recursive algorithm, but
# at least one user bumped into extreme use cases that exceeded
# the recursion limit on their box.  So, now we maintain a list
# (`queue`) of blocks we still need to look at, and append partial
# results to `matching_blocks` in a loop; the matches are sorted
# at the end.
# a[alo:i] vs b[blo:j] unknown
# a[i:i+k] same as b[j:j+k]
# a[i+k:ahi] vs b[j+k:bhi] unknown
# if k is 0, there was no matching block
# It's possible that we have adjacent equal blocks in the
# matching_blocks list now.  Starting with 2.5, this code was added
# to collapse them.
# Is this block adjacent to i1, j1, k1?
# Yes, so collapse them -- this just increases the length of
# the first block by the length of the second, and the first
# block so lengthened remains the block to compare against.
# Not adjacent.  Remember the first block (k1==0 means it's
# the dummy we started with), and make the second block the
# new block to compare against.
# invariant:  we've pumped out correct diffs to change
# a[:i] into b[:j], and the next matching block is
# a[ai:ai+size] == b[bj:bj+size].  So we need to pump
# out a diff to change a[i:ai] into b[j:bj], pump out
# the matching block, and move (i,j) beyond the match
# the list of matching blocks is terminated by a
# sentinel with size 0
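The terminating sentinel described above can be observed directly through the public API:

```python
from difflib import SequenceMatcher

sm = SequenceMatcher(None, "abxcd", "abcd")
blocks = sm.get_matching_blocks()
last = blocks[-1]
# The list always ends with the dummy Match(a=len(a), b=len(b), size=0).
assert (last.a, last.b, last.size) == (5, 4, 0)
```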
# Fixup leading and trailing groups if they show no changes.
# End the current group and start a new one whenever
# there is a large range with no changes.
# viewing a and b as multisets, set matches to the cardinality
# of their intersection; this counts the number of matches
# without regard to order, so is clearly an upper bound
# avail[x] is the number of times x appears in 'b' less the
# number of times we've seen it in 'a' so far ... kinda
# can't have more matches than the number of elements in the
# shorter sequence
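This multiset bound is what quick_ratio() computes, so it can never be smaller than the exact ratio():

```python
from difflib import SequenceMatcher

sm = SequenceMatcher(None, "abcd", "bcda")
# Same multiset of characters, so the order-insensitive bound is 1.0 ...
assert sm.quick_ratio() == 1.0
# ... while the order-sensitive ratio() is lower; the bound always holds.
assert sm.quick_ratio() >= sm.ratio()
```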
# Move the best scorers to head of list
# Strip scores for the best n matches
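The score-then-strip flow above is what produces get_close_matches() results, e.g. the example from the difflib documentation:

```python
from difflib import get_close_matches

# Candidates are scored with ratio(), the best scorers moved to the
# head of the list, and the scores stripped before returning.
matches = get_close_matches("appel", ["ape", "apple", "peach", "puppy"])
assert matches == ["apple", "ape"]
```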
# dump the shorter block first -- reduces the burden on short-term
# memory if the blocks are of very different sizes
# don't synch up unless the lines have a similarity score of at
# least cutoff; best_ratio tracks the best score seen so far
# 1st indices of equal lines (if any)
# search for the pair that matches best without being identical
# (identical lines must be junk lines, & we don't want to synch up
# on junk -- unless we have to)
# computing similarity is expensive, so use the quick
# upper bounds first -- have seen this speed up messy
# compares by a factor of 3.
# note that ratio() is only expensive to compute the first
# time it's called on a sequence pair; the expensive part
# of the computation is cached by cruncher
# no non-identical "pretty close" pair
# no identical pair either -- treat it as a straight replace
# no close pair, but an identical pair -- synch up on that
# there's a close pair, so forget the identical pair (if any)
# a[best_i] very similar to b[best_j]; eqi is None iff they're not
# identical
# pump out diffs from before the synch point
# do intraline marking on the synch pair
# pump out a '-', '?', '+', '?' quad for the synched lines
# the synch pair is identical
# pump out diffs from after the synch point
# With respect to junk, an earlier version of ndiff simply refused to
# *start* a match with a junk element.  The result was cases like this:
# If you consider whitespace to be junk, the longest contiguous match
# not starting with junk is "e Thread currentThread".  So ndiff reported
# that "e volatil" was inserted between the 't' and the 'e' in "private".
# While an accurate view, to people that's absurd.  The current version
# looks for matching blocks that are entirely junk-free, then extends the
# longest one of those as far as possible but only with matching junk.
# So now "currentThread" is matched, then extended to suck up the
# preceding blank; then "private" is matched, and extended to suck up the
# following blank; then "Thread" is matched; and finally ndiff reports
# that "volatile " was inserted before "Thread".  The only quibble
# remaining is that perhaps it was really the case that " volatile"
# was inserted after "private".  I can live with that <wink>.
### Per the diff spec at http://www.unix.org/single_unix_specification/
# lines start numbering with one
# empty ranges begin at line just before the range
### See http://www.unix.org/single_unix_specification/
# Checking types is weird, but the alternative is garbled output when
# someone passes mixed bytes and str to {unified,context}_diff(). E.g.
# without this check, passing filenames as bytes results in output like
# because of how str.format() incorporates bytes objects.
# regular expression for finding intraline change indices
# create the difference iterator to generate the differences
# Handle case where no user markup is to be added, just return line of
# text with user's line format to allow for usage of the line number.
# Handle case of intraline changes
# find intraline changes (store change type and indices in tuples)
# process each tuple inserting our special marks that won't be
# noticed by an xml/html escaper.
# Handle case of add/delete entire line
# if line of text is just a newline, insert a space so there is
# something for the user to highlight and see.
# insert marks that won't be noticed by an xml/html escaper.
# Return line of text, first allow user's line formatter to do its
# thing (such as adding the line number) then replace the special
# marks with the user's change markup.
# Load up next 4 lines so we can look ahead, create strings which
# are a concatenation of the first character of each of the 4 lines
# so we can do some very readable comparisons.
# When no more lines, pump out any remaining blank lines so the
# corresponding add/delete lines get a matching blank line so
# all line pairs get yielded at the next level.
# simple intraline change
# in delete block, add block coming: we do NOT want to get
# caught up on blank lines yet, just process the delete line
# in delete block and see an intraline change or unchanged line
# coming: yield the delete line and then blanks
# intraline change
# delete FROM line
# in add block, delete block coming: we do NOT want to get
# caught up on blank lines yet, just process the add line
# will be leaving an add block: yield blanks then add line
# inside an add block, yield the add line
# unchanged text, yield it to both sides
# Catch up on the blank lines so when we yield the next from/to
# pair, they are lined up.
# Collecting lines of text until we have a from/to pair
# Once we have a pair, remove them from the collection and yield it
# Handle case where user does not want context differencing, just yield
# them up without doing anything else with them.
# Handle case where user wants context differencing.  We must do some
# storage of lines until we know for sure that they are to be yielded.
# Store lines up until we find a difference, note use of a
# circular queue because we only need to keep around what
# we need for context.
# Yield lines that we have collected so far, but first yield
# the user's separator.
# Now yield the context lines after the change
# If another change within the context, extend the context
# Catch exception from next() and return normally
# hide real spaces
# expand tabs into spaces
# replace spaces from expanded tabs back into tab characters
# (we'll replace them with markup after we do differencing)
# if blank line or context separator, just add it to the output list
# if line text doesn't need wrapping, just add it to the output list
# scan text looking for the wrap point, keeping track if the wrap
# point is inside markers
# wrap point is inside text, break it up into separate lines
# if wrap point is inside markers, place end marker at end of first
# line and start marker at beginning of second line because each
# line will have its own table tag markup around it.
# tack on first line onto the output list
# use this routine again to wrap the remaining text
# pull from/to data and flags from mdiff iterator
# check for context separators and pass them through
# for each from/to line split it at the wrap column to form
# list of text lines.
# yield from/to line in pairs inserting blank lines as
# necessary when one side has more wrapped lines
# pull from/to data and flags from mdiff style iterator
# store HTML markup of the lines into the lists
# exceptions occur for lines where context separators go
# handle blank lines where linenum is '>' or ''
# replace those things that would get confused with HTML symbols
# make space non-breakable so they don't get compressed or line wrapped
# Generate a unique anchor prefix so multiple tables
# can exist on the same HTML page without conflicts.
# store prefixes so line format method has access
# all anchor names will be generated using the unique "to" prefix
# process change flags, generating middle column of next anchors/links
# at the beginning of a change, drop an anchor a few lines
# (the context lines) before the change for the previous
# link
# at the beginning of a change, drop a link to the next
# change
# check for cases where there is no content to avoid exceptions
# if not a change on first line, drop a link
# redo the last link to link to the top
# make unique anchor prefixes so that multiple tables may exist
# on the same page without conflict.
# change tabs to spaces before it gets more difficult after we insert
# markup
# create diffs iterator which generates side by side from/to data
# set up iterator to wrap lines that exceed desired width
# collect up from/to lines and flags into lists (also format the lines)
# mdiff yields None on separator lines; skip the bogus ones
# generated for the first line
# subprocess - Subprocesses with accessible I/O streams
# For more information about this module, see PEP 324.
# Copyright (c) 2003-2005 by Peter Astrand <astrand@lysator.liu.se>
# NOTE: We intentionally exclude list2cmdline as it is
# considered an internal implementation detail.  issue10838.
# use presence of msvcrt to detect Windows-like platforms (see bpo-8110)
# some platforms do not support subprocesses
# used in methods that are called by __del__
# There's no obvious reason to set this, but allow it anyway so
# .stdout is a transparent alias for .output
# When select or poll has indicated that the file is writable,
# we can write up to _PIPE_BUF bytes without risk of blocking.
# POSIX defines PIPE_BUF as >= 512.
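The PIPE_BUF guarantee mentioned above is exposed by the select module on POSIX platforms (the attribute is absent on Windows):

```python
import select

# POSIX guarantees PIPE_BUF >= 512; writes of at most this many bytes
# to a pipe that polls writable will not block.
if hasattr(select, "PIPE_BUF"):
    assert select.PIPE_BUF >= 512
```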
# poll/select have the advantage of not requiring any extra file
# descriptor, unlike epoll/kqueue (also, they require only a single
# syscall).
# On Windows we just need to close `Popen._handle` when we no longer need
# it, so that the kernel can free it. `Popen._handle` gets closed
# implicitly when the `Popen` instance is finalized (see `Handle.__del__`,
# which is calling `CloseHandle` as requested in [1]), so there is nothing
# for `_cleanup` to do.
# [1] https://docs.microsoft.com/en-us/windows/desktop/ProcThread/
# creating-processes
# This list holds Popen instances for which the underlying process had not
# exited at the time its __del__ method got called: those processes are
# wait()ed for synchronously from _cleanup() when a new Popen object is
# created, to avoid zombie processes.
# This can happen if two threads create a new Popen instance.
# It's harmless that it was already removed, so ignore.
# XXX This function is only used by multiprocessing and the test suite,
# but it's here so that it can be imported when Python is compiled without
# threads.
# 'inspect': 'i',
# 'interactive': 'i',
# -O is handled in _optim_args_from_interpreter_flags()
# -W options
# -X options
# Return default text encoding and emit EncodingWarning if
# sys.flags.warn_default_encoding is true.
# Including KeyboardInterrupt, wait handled that.
# We don't call p.wait() again as p.__exit__ does that for us.
# Explicitly passing input=None was previously equivalent to passing an
# empty string. That is maintained here for backwards compatibility.
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads.  communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
# Including KeyboardInterrupt, communicate handled that.
# We don't call process.wait() as .__exit__ does that for us.
# See
# http://msdn.microsoft.com/en-us/library/17w5ykft.aspx
# or search http://msdn.microsoft.com for
# "Parsing C++ Command-Line Arguments"
# Add a space to separate this argument from the others
# Don't know if we need to double yet.
# Double backslashes.
# Normal char
# Add remaining backslashes, if any.
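The quoting rules referenced above can be observed through subprocess.list2cmdline, which (as noted near the top of the module) is an internal helper but is importable:

```python
import subprocess

# An argument containing a space is surrounded with double quotes.
assert subprocess.list2cmdline(["a b", "c"]) == '"a b" c'
# An embedded double quote is backslash-escaped.
assert subprocess.list2cmdline(['a"b']) == 'a\\"b'
# A trailing backslash inside a quoted argument is doubled so it does
# not escape the closing quote.
assert subprocess.list2cmdline(["a b\\"]) == '"a b\\\\"'
```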
# Various tools for executing commands and looking at their output and status.
# os.posix_spawn() is not available
# posix_spawn() is a syscall on both macOS and Solaris,
# and properly reports errors
# Check libc name and runtime libc version
# parse 'glibc 2.28' as ('glibc', (2, 28))
# reject unknown format
# glibc 2.24 has a new Linux posix_spawn implementation using vfork
# which properly reports errors to the parent process.
# Note: Don't use the implementation in earlier glibc because it doesn't
# use vfork (even if glibc 2.26 added a pipe to properly report errors
# to the parent process).
# By default, assume that posix_spawn() does not properly report errors.
# These are primarily fail-safe knobs for negatives. A True value does not
# guarantee the given libc/syscall API will be used.
# Set here since __del__ checks it
# Held while anything is calling waitpid before returncode has been
# updated to prevent clobbering returncode if wait() or poll() are
# called from multiple threads at once.  After acquiring the lock,
# code must re-check self.returncode to see if another thread just
# finished a waitpid() call.
# Restore default
# POSIX
# Validate the combinations of text and universal_newlines
# How long to resume waiting on a child after the first ^C.
# There is no right value for this.  The purpose is to be polite
# yet remain good for interactive users trying to exit a tool.
# 1/xkcd221.getRandomNumber()
# Use the default buffer size for the underlying binary streams
# since they don't support line buffering.
# The internal APIs are int-only
# make sure that the gids are all positive here so we can do less
# checking in the C code
# Input and output objects. The general principle is like
# this:
# Parent                   Child
# ------                   -----
# p2cwrite   ---stdin--->  p2cread
# c2pread    <--stdout---  c2pwrite
# errread    <--stderr---  errwrite
# On POSIX, the child objects are file descriptors.  On
# Windows, these are Windows file handles.  The parent objects
# are file descriptors on both platforms.  The parent objects
# are -1 when not using PIPEs. The child objects are -1
# when not redirecting.
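The pipe layout above, exercised through the public API with a trivial child process that echoes stdin back on stdout:

```python
import subprocess
import sys

# The parent holds p2cwrite (p.stdin) and c2pread (p.stdout); the child
# receives the opposite ends as its fds 0 and 1.
p = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
out, _ = p.communicate(b"hello")
assert out == b"hello"
assert p.returncode == 0
```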
# From here on, raising exceptions may cause file descriptor leakage
# We wrap OS handles *before* launching the child, otherwise a
# quickly terminating child could make our fds unwrappable
# (see #8458).
# Cleanup if the child failed starting.
# Ignore EBADF or other errors.
# universal_newlines is retained as an alias of text_mode for API
# compatibility. bpo-31756
# Flushing a BufferedWriter may raise an error
# https://bugs.python.org/issue25942
# In the case of a KeyboardInterrupt we assume the SIGINT
# was also already sent to our child processes.  We can't
# block indefinitely as that is not user friendly.
# If we have not already waited a brief amount of time in
# an interrupted .wait() or .communicate() call, do so here
# for consistency.
# Note that this has been done.
# resume the KeyboardInterrupt
# Wait for the process to terminate, to avoid zombies.
# We didn't get to successfully create a child process.
# Not reading subprocess exit status creates a zombie process which
# is only destroyed at the parent python process exit
# In case the child hasn't been waited on, check if it's done.
# Child is still running, keep us alive until we can wait on it.
# communicate() must ignore broken pipe errors.
# bpo-19612, bpo-30418: On Windows, stdin.write() fails
# with EINVAL if the child process exited or if the child
# process is still running but closed the pipe.
# Optimization: If we are not worried about timeouts, we haven't
# started communicating, and we have one or zero pipes, using select()
# or threads is unnecessary.
# See the detailed comment in .wait().
# nothing else should wait.
# The first keyboard interrupt waits briefly for the child to
# exit under the common assumption that it also received the ^C
# generated SIGINT and will exit rapidly.
# self._devnull is not always defined.
# Prevent a double close of these handles/fds from __init__ on error.
# Windows methods
# Assuming file-like object
# A handle with its lowest two bits set might be a special console
# handle that, if passed in lpAttributeList["handle_list"], will
# cause it to fail.
# Process startup details
# bpo-34044: Copy STARTUPINFO since it is modified above,
# so the caller can reuse it multiple times.
# If we were given a handle_list or need to create one
# When using the handle_list we always request to inherit
# handles but the only handles that will be inherited are
# the ones in the handle_list
# gh-101283: without a fully-qualified path, before Windows
# checks the system directories, it first looks in the
# application directory, and also the current directory if
# NeedCurrentDirectoryForExePathW(ExeName) is true, so try
# to avoid executing unqualified "cmd.exe".
# Start the process
# no special security
# Child is launched. Close the parent's copy of those pipe
# handles that only the child should have open.  You need
# to make sure that no handles to the write end of the
# output pipe are maintained in this process or else the
# pipe will not close when the child process exits and the
# ReadFile will hang.
# Retain the process handle, but close the thread handle
# API note: Returns immediately if timeout_millis == 0.
# Start reader threads feeding into a list hanging off of this
# object, unless they've already been started.
# Wait for the reader threads, or time out.  If we time out, the
# threads remain reading and the fds left open in case the user
# calls communicate again.
# Collect the output from and close both pipes, now that we know
# both have been read successfully.
# All data exchanged.  Translate lists into strings.
# Don't signal a process that we know has already died.
# Don't terminate a process that we know has already died.
# ERROR_ACCESS_DENIED (winerror 5) is received when the
# process already died.
# POSIX methods
# child's stdout is not set, use parent's stdout
# See _Py_RestoreSignals() in Python/pylifecycle.c
# On Android the default shell is at '/system/bin/sh'.
# For transferring possible exec failure from child to parent.
# Data format: "exception name:hex errno:description"
# Pickle is not used; it is complex and involves memory allocation.
# errpipe_write must not be in the standard io 0, 1, or 2 fd range.
# We must avoid complex work that could involve
# malloc or free in the child process to avoid
# potential deadlocks, thus we do all this here.
# and pass it to fork_exec()
# Use execv instead of execve.
# This matches the behavior of os._execvpe().
# be sure the FD is closed no matter what
# Wait for exec to fail or succeed; possibly raising an
# exception (limited in size)
# The encoding here should match the encoding
# written in by the subprocess implementations
# like _posixsubprocess
# The error must be from chdir(cwd).
# This method is called (indirectly) by __del__, so it cannot
# refer to anything outside of its local scope.
# Something else is busy calling waitpid.  Don't allow two
# at once.  We know nothing yet.
# Another thread waited.
# This happens if SIGCLD is set to be ignored or
# waiting for child processes has otherwise been
# disabled for our process.  This child is dead, we
# can't get the status.
# http://bugs.python.org/issue15756
# This happens if SIGCLD is set to be ignored or waiting
# for child processes has otherwise been disabled for our
# process.  This child is dead, we can't get the status.
# Enter a busy loop if we have a timeout.  This busy loop was
# cribbed from Lib/threading.py in Thread.wait() at r71065.
# 500 us -> initial delay of 1 ms
# Check the pid and loop as waitpid has been known to
# return 0 even without WNOHANG in odd situations.
# http://bugs.python.org/issue14396.
# Flush stdio buffer.  This might block, if the user has
# been writing to .stdin in an uncontrolled fashion.
# communicate() must ignore BrokenPipeError.
# Only create this mapping if we haven't already.
# Impossible :)
# XXX Rewrite these to use non-blocking I/O on the file
# objects; they are no longer using C stdio!
# Translate newlines, if requested.
# This also turns bytes into strings.
# This method is called from the _communicate_with_*() methods
# so that if we time out while communicating, we can continue
# sending input if we retry.
# bpo-38630: Polling reduces the risk of sending a signal to the
# wrong process if the process completed, the Popen.returncode
# attribute is still None, and the pid has been reassigned
# (recycled) to a new different process. This race condition can
# happen in two cases.
# Case 1. Thread A calls Popen.poll(), thread B calls
# Popen.send_signal(). In thread A, waitpid() succeeds and returns
# the exit status. Thread B calls kill() because poll() in thread A
# did not set returncode yet. Calling poll() in thread B prevents
# the race condition thanks to Popen._waitpid_lock.
# Case 2. waitpid(pid, 0) has been called directly, without
# using Popen methods: returncode is still None in this case.
# Calling Popen.poll() will set returncode to a default value,
# since waitpid() fails with ProcessLookupError.
# Skip signalling a process that we know has already died.
# The race condition can still happen if the race condition
# described above happens between the returncode test
# and the kill() call.
# Suppress the race condition error; bpo-40550.
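The poll-before-signal pattern described above can be sketched as follows (a minimal illustration of the idea, not the actual Popen.send_signal() implementation):

```python
import signal
import subprocess
import sys

# Minimal sketch: check whether the child has already exited before sending
# it a signal, so a recycled pid belonging to an unrelated process is not
# signalled by mistake.
proc = subprocess.Popen([sys.executable, "-c", "pass"])
proc.wait()                      # reap the child; sets returncode
if proc.poll() is None:          # only signal a child we know is alive
    proc.send_signal(signal.SIGTERM)
assert proc.returncode == 0
```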
# builtin type
# The size of the digests returned by HMAC depends on the underlying
# hashing module used.  Use digest_size from the instance of HMAC instead.
# 512-bit HMAC; can be changed in subclasses.
# self.blocksize is the default blocksize. self.block_size is
# effective block size as well as the public API attribute.
# Call __new__ directly to avoid the expensive __init__.
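The hash-dependent digest and block sizes can be checked directly (a small illustration, assuming the standard sha256/sha512 names are available):

```python
import hmac

# The HMAC digest size follows the underlying hash, not a fixed constant;
# the effective block size likewise comes from the hash instance.
h256 = hmac.new(b"key", b"message", digestmod="sha256")
h512 = hmac.new(b"key", b"message", digestmod="sha512")
assert h256.digest_size == 32    # SHA-256 -> 32-byte digest
assert h512.digest_size == 64    # SHA-512 -> 64-byte digest
assert h512.block_size == 128    # SHA-512 uses a 128-byte block
```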
# Taken from _osx_support _read_output function
# type: (List[int], int) -> str
# Infer the ABI bitwidth from maxsize (assuming 64 bit as the default)
# vrtl[version, release, technology_level]
# extract version, release and technology level from a VRMF string
# type: (str) -> List[int]
# type: () -> Tuple[str, int]
# All AIX systems have lslpp installed in this location
# subprocess may not be available during python bootstrap
# type: ignore
# type: () -> str
# extract vrtl from the BUILD_GNU_TYPE as an int
# type: () -> List[int]
# AIX_BUILDDATE is defined by configure with:
# lslpp -Lcq bos.rte | awk -F:  '{ print $NF }'
# ____________________________________________________________
# Simple interface
# Most of the functionality is in the base class.
# This subclass only adds convenient and backward-compatible methods.
# call information
# ncalls column of pstats (before '/')
# ncalls column of pstats (after '/')
# tottime column of pstats
# cumtime column of pstats
# subcall information
# built-in functions ('~' sorts at the end)
#.  Copyright (C) 2005-2010   Gregory P. Smith (greg@krypto.org)
# This tuple and __get_builtin_constructor() must be modified if a new
# always available algorithm is added.
# Prefer our blake2 implementation
# OpenSSL 1.1.0 comes with a limited implementation of blake2b/s. The OpenSSL
# implementations neither support keyed blake2 (blake2 MAC) nor advanced
# features like salt, personalization, or tree hashing. OpenSSL hash-only
# variants are available as 'blake2b512' and 'blake2s256', though.
# no extension module, this hash is unsupported.
# Prefer our builtin blake2 implementation.
# MD5, SHA1, and SHA2 are in all supported OpenSSL versions
# SHA3/shake are available in OpenSSL 1.1.1+
# Allow the C module to raise ValueError.  The function will be
# defined but the hash not actually available.  Don't fall back to
# builtin if the current security policy blocks a digest, bpo#40695.
# Use the C function directly (very fast)
# If the _hashlib module (OpenSSL) doesn't support the named
# hash, try using our builtin implementations.
# This allows for SHA224/256 and SHA384/512 support even though
# the OpenSSL library prior to 0.9.8 doesn't provide them.
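A small example of the constructor lookup (assuming sha256, which all supported builds provide):

```python
import hashlib

# hashlib.new() first tries the OpenSSL-backed constructor and only falls
# back to the builtin implementation when OpenSSL lacks the named hash;
# either way the result is the same digest.
via_new = hashlib.new("sha256", b"abc").hexdigest()
direct = hashlib.sha256(b"abc").hexdigest()
assert via_new == direct
assert len(via_new) == 64   # 32 bytes, hex-encoded
```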
# OpenSSL's PKCS5_PBKDF2_HMAC requires OpenSSL 1.0+ with HMAC and SHA
# OpenSSL's scrypt requires OpenSSL 1.1+
# On Linux we could use AF_ALG sockets and sendfile() to achieve zero-copy
# hashing with hardware acceleration.
# io.BytesIO object, use zero-copy buffer
# Only binary files implement readinto().
# binary file, socket.SocketIO object
# Note: socket I/O uses different syscalls than file I/O.
# Reusable buffer to reduce allocations.
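The readinto()-with-a-reusable-buffer idea can be sketched as follows (a hypothetical helper for illustration, not hashlib.file_digest itself):

```python
import hashlib
import io

def hash_stream(fileobj, name="sha256", bufsize=64 * 1024):
    # Sketch of buffered hashing with a single reusable buffer: readinto()
    # fills the same bytearray on every pass instead of allocating a new
    # bytes object per chunk.
    h = hashlib.new(name)
    buf = bytearray(bufsize)
    view = memoryview(buf)
    while True:
        n = fileobj.readinto(buf)
        if not n:
            break
        h.update(view[:n])
    return h.hexdigest()

data = b"abc" * 100000
assert hash_stream(io.BytesIO(data)) == hashlib.sha256(data).hexdigest()
```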
# try them all, some may not work due to the OpenSSL
# version not supporting that algorithm.
# Cleanup locals()
# Copyright (c) 2004 Python Software Foundation.
# Written by Eric Price <eprice at tjhsst.edu>
# This module should be kept in sync with the latest updates of the
# IBM specification as it evolves.  Those updates will be treated
# as bug fixes (deviation from the spec is a compatibility, usability
# bug) and will be backported.  At this point the spec is stabilizing
# and the updates are becoming fewer, smaller, and less significant.
# Two major classes
# Named tuple representation
# Contexts
# Exceptional conditions that trigger InvalidOperation
# Constants for use in setting up contexts
# Functions for manipulating contexts
# Limits for the C version for compatibility
# C version: compile time choice that enables the thread local context (deprecated, now always true)
# C version: compile time choice that enables the coroutine local context
# sys.modules lookup (--without-threads)
# For pickling
# Highest version of the spec this complies with
# See http://speleotrove.com/decimal/
# compatible libmpdec version
# Rounding
# Compatibility with the C version
# Errors
# List of public traps and flags
# Map conditions (per the spec) to signals
# Valid rounding modes
##### Context Functions ##################################################
# The getcontext() and setcontext() functions manage access to a thread-local
# current context.
# Don't contaminate the namespace
##### Decimal class #######################################################
# Do not subclass Decimal from numbers.Real and do not register it as such
# (because Decimals are not interoperable with floats).  See the notes in
# numbers.py for more detail.
# Generally, the value of the Decimal instance is given by
# Special values are signified by _is_special == True
# We're immutable, so use __new__ not __init__
# Note that the coefficient, self._int, is actually stored as
# a string rather than as a tuple of digits.  This speeds up
# the "digits to integer" and "integer to digits" conversions
# that are used in almost every arithmetic operation on
# Decimals.  This is an internal detail: the as_tuple function
# and the Decimal constructor still deal with tuples of
# digits.
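The tuple-facing API described above can be seen directly (a small illustration):

```python
from decimal import Decimal

# The public view is (sign, digits, exponent); internally the pure-Python
# implementation keeps the digits as a string for speed, but as_tuple() and
# the constructor still speak tuples of digits.
t = Decimal("-12.345").as_tuple()
assert t.sign == 1
assert t.digits == (1, 2, 3, 4, 5)
assert t.exponent == -3
assert Decimal((1, (1, 2, 3, 4, 5), -3)) == Decimal("-12.345")
```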
# From a string
# REs insist on real strings, so we can too.
# finite number
# NaN
# infinity
# From an integer
# From another decimal
# From an internal working value
# tuple/list conversion (possibly from as_tuple())
# process sign.  The isinstance test rejects floats
# infinity: value[1] is ignored
# process and validate the digits in value[1]
# skip leading zeros
# NaN: digits form the diagnostic
# finite number: digits give the coefficient
# handle integer inputs
# check for zeros;  Decimal('0') == Decimal('-0')
# If different signs, neg one is less
# self_adjusted < other_adjusted
# Note: The Decimal standard doesn't cover rich comparisons for
# Decimals.  In particular, the specification is silent on the
# subject of what should happen for a comparison involving a NaN.
# We take the following approach:
# This behavior is designed to conform as closely as possible to
# that specified by IEEE 754.
# Compare(NaN, NaN) = NaN
# In order to make sure that the hash of a Decimal instance
# agrees with the hash of a numerically equal integer, float
# or Fraction, we follow the rules for numeric hashes outlined
# in the documentation.  (See library docs, 'Built-in Types').
# Find n, d in lowest terms such that abs(self) == n / d;
# we'll deal with the sign later.
# self is an integer.
# Find d2, d5 such that abs(self) = n / (2**d2 * 5**d5).
# (n & -n).bit_length() - 1 counts trailing zeros in binary
# representation of n (provided n is nonzero).
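Both points above can be checked directly (a small illustration):

```python
from decimal import Decimal
from fractions import Fraction

# Numerically equal values hash equal across the numeric tower.
assert hash(Decimal("1.5")) == hash(1.5) == hash(Fraction(3, 2))

# (n & -n) isolates the lowest set bit of n, so bit_length() - 1 of that
# value counts the trailing zeros in n's binary representation (n nonzero).
def trailing_zeros(n):
    return (n & -n).bit_length() - 1

assert trailing_zeros(0b1100) == 2
assert trailing_zeros(1) == 0
```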
# Invariant:  eval(repr(d)) == d
# self._exp == 'N'
# number of digits of self._int to left of decimal point
# dotplace is number of digits of self._int to the left of the
# decimal point in the mantissa of the output string (that is,
# after adjusting the exponent)
# no exponent required
# usual scientific notation: 1 digit on left of the point
# engineering notation, zero
# engineering notation, nonzero
# -Decimal('0') is Decimal('0'), not Decimal('-0'), except
# in ROUND_FLOOR rounding mode.
# + (-0) = 0, except in ROUND_FLOOR rounding mode.
# If both INF, same sign => same as both, opposite => error.
# Can't both be infinity here
# If the answer is 0, the sign should be negative in this case.
# Equal and opposite
# OK, now abs(op1) > abs(op2)
# So we know the sign, and op1 > 0.
# Now, op1 > abs(op2) > 0
# self - other is computed as self + other.copy_negate()
# Special case for multiplying by zero
# Fixing in case the exponent is out of bounds
# Special case for multiplying by power of 10
# Special cases for zeroes
# OK, so neither = 0, INF or NaN
# result is not exact; adjust to ensure correct rounding
# result is exact; get as close to ideal exponent as possible
# Here the quotient is too large to be representable
# self == +/-infinity -> InvalidOperation
# other == 0 -> either InvalidOperation or DivisionUndefined
# other = +/-infinity -> remainder = self
# self = 0 -> remainder = self, with ideal exponent
# catch most cases of large or small quotient
# expdiff >= prec+1 => abs(self/other) > 10**prec
# expdiff <= -2 => abs(self/other) < 0.1
# adjust both arguments to have the same exponent, then divide
# remainder is r*10**ideal_exponent; other is +/-op2.int *
# 10**ideal_exponent.   Apply correction to ensure that
# abs(remainder) <= abs(other)/2
# result has same sign as self unless r is negative
# maximum length of payload is precision if clamp=0,
# precision-1 if clamp=1.
# decapitate payload if necessary
# self is +/-Infinity; return unaltered
# if self is zero then exponent should be between Etiny and
# Emax if clamp==0, and between Etiny and Etop if clamp==1.
# exp_min is the smallest allowable exponent of the result,
# equal to max(self.adjusted()-context.prec+1, Etiny)
# overflow: exp_min > Etop iff self.adjusted() > Emax
# round if self has too many digits
# check whether the rounding pushed the exponent out of range
# raise the appropriate signals, taking care to respect
# the precedence described in the specification
# raise Clamped on underflow to 0
# fold down if clamp == 1 and self has too few digits
# here self was representable to begin with; return unchanged
# for each of the rounding functions below:
# each function returns either -1, 0, or 1, as follows:
# two-argument form: use the equivalent quantize call
# one-argument form
# compute product; raise InvalidOperation if either operand is
# a signaling NaN or if the product is zero times infinity.
# deal with NaNs: if there are any sNaNs then first one wins,
# (i.e. behaviour for NaNs is identical to that of fma)
# check inputs: we apply same restrictions as Python's pow()
# additional restriction for decimal: the modulus must be less
# than 10**prec in absolute value
# define 0**0 == NaN, for consistency with two-argument pow
# (even though it hurts!)
# compute sign of result
# convert modulo to a Python integer, and self and other to
# Decimal integers (i.e. force their exponents to be >= 0)
# compute result using integer pow()
# In the comments below, we write x for the value of self and y for the
# value of other.  Write x = xc*10**xe and abs(y) = yc*10**ye, with xc
# and yc positive integers not divisible by 10.
# The main purpose of this method is to identify the *failure*
# of x**y to be exactly representable with as little effort as
# possible.  So we look for cheap and easy tests that
# eliminate the possibility of x**y being exact.  Only if all
# these tests are passed do we go on to actually compute x**y.
# Here's the main idea.  Express y as a rational number m/n, with m and
# n relatively prime and n>0.  Then for x**y to be exactly
# representable (at *any* precision), xc must be the nth power of a
# positive integer and xe must be divisible by n.  If y is negative
# then additionally xc must be a power of either 2 or 5, hence a power
# of 2**n or 5**n.
# There's a limit to how small |y| can be: if y=m/n as above
# then:
# Note that since x is not equal to 1, at least one of (1) and
# (2) must apply.  Now |y| < 1/nbits(xc) iff |yc|*nbits(xc) <
# 10**-ye iff len(str(|yc|*nbits(xc))) <= -ye.
# There's also a limit to how large y can be, at least if it's
# positive: the normalized result will have coefficient xc**y,
# so if it's representable then xc**y < 10**p, and y <
# p/log10(xc).  Hence if y*log10(xc) >= p then the result is
# not exactly representable.
# if len(str(abs(yc*xe))) <= -ye then abs(yc*xe) < 10**-ye,
# so |y| < 1/xe and the result is not representable.
# Similarly, len(str(abs(yc)*xc_bits)) <= -ye implies |y|
# < 1/nbits(xc).
# case where xc == 1: result is 10**(xe*y), with xe*y
# required to be an integer
# result is now 10**(xe * 10**ye);  xe * 10**ye must be integral
# if other is a nonnegative integer, use ideal exponent
# case where y is negative: xc must be either a power
# of 2 or a power of 5.
# quick test for power of 2
# now xc is a power of 2; e is its exponent
# We now have:
# The exact result is:
# provided that both e*y and xe*y are integers.  Note that if
# 5**(-e*y) >= 10**p, then the result can't be expressed
# exactly with p digits of precision.
# Using the above, we can guard against large values of ye.
# 93/65 is an upper bound for log(10)/log(5), so if
# then
# so 5**(-e*y) >= 10**p, and the coefficient of the result
# can't be expressed in p digits.
# emax >= largest e such that 5**e < 10**p.
# Find -e*y and -xe*y; both must be integers
# e >= log_5(xc) if xc is a power of 5; we have
# equality all the way up to xc=5**2658
# Guard against large values of ye, using the same logic as in
# the 'xc is a power of 2' branch.  10/3 is an upper bound for
# log(10)/log(2).
# An exact power of 10 is representable, but can convert to a
# string of any length. But an exact power of 10 shouldn't be
# possible at this point.
# now y is positive; find m and n such that y = m/n
# compute nth root of xc*10**xe
# if 1 < xc < 2**n then xc isn't an nth power
# compute nth root of xc using Newton's method
# initial estimate
# now xc*10**xe is the nth root of the original xc*10**xe
# compute mth power of xc*10**xe
# if m > p*100//_log10_lb(xc) then m > p/log10(xc), hence xc**m >
# 10**p and the result is not representable.
# An exact power of 10 is representable, but can convert to a string
# of any length. But an exact power of 10 shouldn't be possible at
# this point.
# by this point the result *is* exactly representable
# adjust the exponent to get as close as possible to the ideal
# exponent, if necessary
# either argument is a NaN => result is NaN
# 0**0 = NaN (!), x**0 = 1 for nonzero x (including +/-Infinity)
# result has sign 1 iff self._sign is 1 and other is an odd integer
# -ve**noninteger = NaN
# (-0)**noninteger = 0**noninteger
# negate self, without doing any unwanted rounding
# 0**(+ve or Inf)= 0; 0**(-ve or -Inf) = Infinity
# Inf**(+ve or Inf) = Inf; Inf**(-ve or -Inf) = 0
# 1**other = 1, but the choice of exponent and the flags
# depend on the exponent of self, and on whether other is a
# positive integer, a negative integer, or neither
# exp = max(self._exp*max(int(other), 0),
# 1-context.prec) but evaluating int(other) directly
# is dangerous until we know other is small (other
# could be 1e999999999)
# compute adjusted exponent of self
# self ** infinity is infinity if self > 1, 0 if self < 1
# self ** -infinity is infinity if self < 1, 0 if self > 1
# from here on, the result always goes through the call
# to _fix at the end of this function.
# crude test to catch cases of extreme overflow/underflow.  If
# log10(self)*other >= 10**bound and bound >= len(str(Emax))
# then 10**bound >= 10**len(str(Emax)) >= Emax+1 and hence
# self**other >= 10**(Emax+1), so overflow occurs.  The test
# for underflow is similar.
# self > 1 and other +ve, or self < 1 and other -ve
# possibility of overflow
# self > 1 and other -ve, or self < 1 and other +ve
# possibility of underflow to 0
# try for an exact result with precision +1
# usual case: inexact result, x**y computed directly as exp(y*log(x))
# compute correctly rounded result:  start with precision +3,
# then increase precision until result is unambiguously roundable
# unlike exp, ln and log10, the power function respects the
# rounding mode; no need to switch to ROUND_HALF_EVEN here
# There's a difficulty here when 'other' is not an integer and
# the result is exact.  In this case, the specification
# requires that the Inexact flag be raised (in spite of
# exactness), but since the result is exact _fix won't do this
# for us.  (Correspondingly, the Underflow signal should also
# be raised for subnormal results.)  We can't directly raise
# these signals either before or after calling _fix, since
# that would violate the precedence for signals.  So we wrap
# the ._fix call in a temporary context, and reraise
# afterwards.
# pad with zeros up to length context.prec+1 if necessary; this
# ensures that the Rounded signal will be raised.
# create a copy of the current context, with cleared flags/traps
# round in the new context
# raise Inexact, and if necessary, Underflow
# propagate signals to the original context; _fix could
# have raised any of Overflow, Underflow, Subnormal,
# Inexact, Rounded, Clamped.  Overflow needs the correct
# arguments.  Note that the order of the exceptions is
# important here.
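The scratch-context idiom described above is also available to user code via localcontext() (a simplified illustration, not the internal _fix wrapper itself):

```python
from decimal import Decimal, Inexact, Rounded, localcontext

# Compute in a copy of the current context with cleared flags, then inspect
# which signals fired before deciding what to propagate.
with localcontext() as ctx:
    ctx.clear_flags()
    # unary + applies the context rounding; 30 significant digits round to
    # the default precision of 28, so Inexact and Rounded both fire
    ans = +Decimal("1.23456789012345678901234567890")
    fired = [sig for sig, is_set in ctx.flags.items() if is_set]
assert Inexact in fired and Rounded in fired
```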
# if both are inf, it is OK
# exp._exp should be between Etiny and Emax
# raise appropriate flags
# call to fix takes care of any necessary folddown, and
# signals Clamped if necessary
# pad answer with zeros if necessary
# too many digits; round and lose data.  If self.adjusted() <
# exp-1, replace self by 10**(exp-1) before rounding
# it can happen that the rescale alters the adjusted exponent;
# for example when rounding 99.97 to 3 significant figures.
# When this happens we end up with an extra 0 at the end of
# the number; a second rescale fixes this.
# the method name changed, but we also provide the old one for compatibility
# exponent = self._exp // 2.  sqrt(-0) = -0
# At this point self represents a positive number.  Let p be
# the desired precision and express self in the form c*100**e
# with c a positive real number and e an integer, c and e
# being chosen so that 100**(p-1) <= c < 100**p.  Then the
# (exact) square root of self is sqrt(c)*10**e, and 10**(p-1)
# <= sqrt(c) < 10**p, so the closest representable Decimal at
# precision p is n*10**e where n = round_half_even(sqrt(c)),
# the closest integer to sqrt(c) with the even integer chosen
# in the case of a tie.
# To ensure correct rounding in all cases, we use the
# following trick: we compute the square root to an extra
# place (precision p+1 instead of precision p), rounding down.
# Then, if the result is inexact and its last digit is 0 or 5,
# we increase the last digit to 1 or 6 respectively; if it's
# exact we leave the last digit alone.  Now the final round to
# p places (or fewer in the case of underflow) will round
# correctly and raise the appropriate flags.
# use an extra digit of precision
# write argument in the form c*100**e where e = self._exp//2
# is the 'ideal' exponent, to be used if the square root is
# exactly representable.  l is the number of 'digits' of c in
# base 100, so that 100**(l-1) <= c < 100**l.
# rescale so that c has exactly prec base 100 'digits'
# find n = floor(sqrt(c)) using Newton's method
# result is exact; rescale to use ideal exponent e
# assert n % 10**shift == 0
# result is not exact; fix last digit as described above
# round, and fit to current context
# If one operand is a quiet NaN and the other is number, then the
# number is always returned
# If both operands are finite and equal in numerical value
# then an ordering is applied:
# If the signs differ then max returns the operand with the
# positive sign and min returns the operand with the negative sign
# If the signs are the same then the exponent is used to select
# the result.  This is exactly the ordering used in compare_total.
# If NaN or Infinity, self._exp is string
# if one is negative and the other is positive, it's easy
# let's handle both NaN types
# compare payloads as though they're integers
# exp(NaN) = NaN
# exp(-Infinity) = 0
# exp(0) = 1
# exp(Infinity) = Infinity
# the result is now guaranteed to be inexact (the true
# mathematical result is transcendental). There's no need to
# raise Rounded and Inexact here---they'll always be raised as
# a result of the call to _fix.
# we only need to do any computation for quite a small range
# of adjusted exponents---for example, -29 <= adj <= 10 for
# the default context.  For smaller exponent the result is
# indistinguishable from 1 at the given precision, while for
# larger exponent the result either overflows or underflows.
# overflow
# underflow to 0
# p+1 digits; final round will raise correct flags
# general case
# compute correctly rounded result: increase precision by
# 3 digits at a time until we get an unambiguously
# roundable result
# at this stage, ans should round correctly with *any*
# rounding mode, not just with ROUND_HALF_EVEN
# for 0.1 <= x <= 10 we use the inequalities 1-1/x <= ln(x) <= x-1
# argument >= 10; we use 23/10 = 2.3 as a lower bound for ln(10)
# argument <= 0.1
# 1 < self < 10
# adj == -1, 0.1 <= self < 1
# ln(NaN) = NaN
# ln(0.0) == -Infinity
# ln(Infinity) = Infinity
# ln(1.0) == 0.0
# ln(negative) raises InvalidOperation
# result is irrational, so necessarily inexact
# correctly rounded result: repeatedly increase precision by 3
# until we get an unambiguously roundable result
# at least p+3 places
# assert len(str(abs(coeff)))-p >= 1
# For x >= 10 or x < 0.1 we only need a bound on the integer
# part of log10(self), and this comes directly from the
# exponent of x.  For 0.1 <= x <= 10 we use the inequalities
# 1-1/x <= log(x) <= x-1. If x > 1 we have |log10(x)| >
# (1-1/x)/2.31 > 0.  If x < 1 then |log10(x)| > (1-x)/2.31 > 0
# self >= 10
# self < 0.1
# log10(NaN) = NaN
# log10(0.0) == -Infinity
# log10(Infinity) = Infinity
# log10(negative or -Infinity) raises InvalidOperation
# log10(10**n) = n
# answer may need rounding
# correctly rounded result: repeatedly increase precision
# until result is unambiguously roundable
# logb(NaN) = NaN
# logb(+/-Inf) = +Inf
# logb(0) = -Inf, DivisionByZero
# otherwise, simply return the adjusted exponent of self, as a
# Decimal.  Note that no attempt is made to fit the result
# into the current context.
# fill to context.prec
# make the operation, and clean starting zeroes
# comparison == 1
# decide which flags to raise using value of ans
# if precision == 1 then we don't raise Clamped for a
# result 0E-Etiny.
# just a normal, regular, boring number, :)
# get values, pad if necessary
# let's rotate!
# let's shift!
# Support for pickling, copy, and deepcopy
# I'm immutable; therefore I am my own clone
# My components are also immutable
# PEP 3101 support.  The _localeconv keyword argument should be
# considered private: it's provided for ease of testing only.
# Note: PEP 3101 says that if the type is not present then
# there should be at least one digit after the decimal point.
# We take the liberty of ignoring this requirement for
# Decimal---it's presumably there to make sure that
# format(float, '') behaves similarly to str(float).
# special values don't care about the type or precision
# a type of None defaults to 'g' or 'G', depending on context
# if type is '%', adjust exponent of self accordingly
# round if necessary, taking rounding mode from the context
# special case: zeros with a positive exponent can't be
# represented in fixed point; rescale them to 0e0.
# figure out placement of the decimal point
# find digits before and after decimal point, and get exponent
# done with the decimal-specific stuff;  hand over the rest
# of the formatting to the _format_number function
# Register Decimal as a kind of Number (an abstract base class).
# However, do not register it as Real (because Decimals are not
# interoperable with floats).
##### Context class #######################################################
# Set defaults; for everything except flags and _ignored_flags,
# inherit from DefaultContext.
# raise TypeError even for strings to have consistency
# among various implementations.
# Don't touch the flag
# The errors define how to handle themselves.
# Errors should only be risked on copies of the context
# self._ignored_flags = []
# Do not mutate -- this way, copies of a context leave the original
# alone.
# We inherit object.__hash__, so we must deny this explicitly
# An exact conversion
# Apply the context rounding
# Methods
# sign: 0 or 1
# int:  int
# exp:  None, int, or string
# assert isinstance(value, tuple)
# Let exp = min(tmp.exp - 1, tmp.adjusted() - precision - 1).
# Then adding 10**exp to tmp has the same effect (after rounding)
# as adding any positive quantity smaller than 10**exp; similarly
# for subtraction.  So if other is smaller than 10**exp we replace
# it with 10**exp.  This avoids tmp.exp - other.exp getting too large.
##### Integer arithmetic functions used by ln, log10, exp and __pow__ #####
# val_n = largest power of 10 dividing n.
# The basic algorithm is the following: let log1p be the function
# log1p(x) = log(1+x).  Then log(x/M) = log1p((x-M)/M).  We use
# the reduction
# repeatedly until the argument to log1p is small (< 2**-L in
# absolute value).  For small y we can use the Taylor series
# expansion
# truncating at T such that y**T is small enough.  The whole
# computation is carried out in a form of fixed-point arithmetic,
# with a real number z being represented by an integer
# approximation to z*M.  To avoid loss of precision, the y below
# is actually an integer approximation to 2**R*y*M, where R is the
# number of reductions performed so far.
# argument reduction; R = number of reductions performed
# Taylor series with T terms
# increase precision by 2; compensate for this by dividing
# final result by 100
# write c*10**e as d*10**f with either:
# Thus for c*10**e close to 1, f = 0
# error < 5 + 22 = 27
# error < 1
# exact
# error < 2.31
# error < 0.5
# Increase precision by 2. The precision increase is compensated
# for at the end with a division by 100.
# rewrite c*10**e as d*10**f with either f >= 0 and 1 <= d <= 10,
# or f <= 0 and 0.1 <= d <= 1.  Then we can compute 10**p * log(c*10**e)
# as 10**p * log(d) + 10**p*f * log(10).
# compute approximation to 10**p*log(d), with error < 27
# error of <= 0.5 in c
# _ilog magnifies existing error in c by a factor of at most 10
# p <= 0: just approximate the whole thing by 0; error < 2.31
# compute approximation to f*10**p*log(10), with error < 11.
# error in f * _log10_digits(p+extra) < |f| * 1 = |f|
# after division, error < |f|/10**extra + 0.5 < 10 + 0.5 < 11
# error in sum < 11+27 = 38; error after division < 0.38 + 0.5 < 1
# digits are stored as a string, for quick conversion to
# integer in the case that we've already computed enough
# digits; the stored digits should always be correct
# (truncated, not rounded to nearest).
# compute p+3, p+6, p+9, ... digits; continue until at
# least one of the extra digits is nonzero
# compute p+extra digits, correct to within 1ulp
# keep all reliable digits so far; remove trailing zeros
# and next nonzero digit
# Algorithm: to compute exp(z) for a real number z, first divide z
# by a suitable power R of 2 so that |z/2**R| < 2**-L.  Then
# compute expm1(z/2**R) = exp(z/2**R) - 1 using the usual Taylor
# series
# Now use the identity
# R times to compute the sequence expm1(z/2**R),
# expm1(z/2**(R-1)), ... , exp(z/2), exp(z).
# Find R such that x/2**R/M <= 2**-L
# Taylor series.  (2**L)**T > M
# Expansion
# we'll call iexp with M = 10**(p+2), giving p+3 digits of precision
# compute log(10) with extra precision = adjusted exponent of c*10**e
# compute quotient c*10**e/(log(10)) = c*10**(e+q)/(log(10)*10**q),
# rounding down
# reduce remainder back to original precision
# error in result of _iexp < 120;  error after division < 0.62
# Find b such that 10**(b-1) <= |y| <= 10**b
# log(x) = lxc*10**(-p-b-1), to p+b+1 places after the decimal point
# compute product y*log(x) = yc*lxc*10**(-p-b-1+ye) = pc*10**(-p-1)
# we prefer a result that isn't exactly 1; this makes it
# easier to compute a correctly rounded result in __pow__
# if x**y > 1:
##### Helper Functions ####################################################
# Comparison with a Rational instance (also includes integers):
# self op n/d <=> self*d op n (for n and d integers, d positive).
# A NaN or infinity can be left unchanged without affecting the
# comparison result.
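The cross-multiplication rule can be checked with plain integers, using Fraction as the Rational stand-in (a sketch; the NaN/infinity handling noted above is omitted):

```python
from fractions import Fraction

# For integers n, d with d > 0:  x op n/d  <=>  x*d op n.
# This compares exactly, without constructing an inexact quotient.
x = 7
n, d = 22, 3
assert (x < Fraction(n, d)) == (x * d < n)   # 7 < 22/3  <=>  21 < 22
assert (x > Fraction(n, d)) == (x * d > n)
assert (x == Fraction(n, d)) == (x * d == n)
```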
# Comparisons with float and complex types.  == and != comparisons
# with complex numbers should succeed, returning either True or False
# as appropriate.  Other comparisons return NotImplemented.
##### Setup Specific Contexts ############################################
# The default context prototype used by Context()
# Is mutable, so that new contexts can have different default values
# Pre-made alternate contexts offered by the specification
# Don't change these; the user should be able to select these
# contexts and be able to reproduce results from other implementations
# of the spec.
##### crud for parsing strings #############################################
# Regular expression used for parsing numeric strings.  Additional
# comments:
# 1. Uncomment the two '\s*' lines to allow leading and/or trailing
# whitespace.  But note that the specification disallows whitespace in
# a numeric string.
# 2. For finite numbers (not infinities and NaNs) the body of the
# number between the optional sign and the optional exponent must have
# at least one decimal digit, possibly after the decimal point.  The
# lookahead expression '(?=\d|\.\d)' checks this.
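A stripped-down sketch of just that lookahead (the real pattern also handles signs, exponents, infinities and NaNs): it admits bodies like "5.", ".5" and "123" while rejecting "." and the empty string, which contain no digit at all.

```python
import re

# (?=\d|\.\d) requires at least one decimal digit, possibly after the point.
body = re.compile(r'(?=\d|\.\d)\d*(\.\d*)?$')
assert body.match('5.')
assert body.match('.5')
assert body.match('123')
assert not body.match('.')
assert not body.match('')
```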
##### PEP3101 support functions ##############################################
# The functions in this section have little to do with the Decimal
# class, and could potentially be reused or adapted for other pure
# Python numeric classes that want to implement __format__
# A format specifier for Decimal looks like:
# The locale module is only needed for the 'n' format specifier.  The
# rest of the PEP 3101 code functions quite happily without it, so we
# don't care too much if locale isn't present.
# get the dictionary
# zeropad; defaults for fill and alignment.  If zero padding
# is requested, the fill and align fields should be absent.
# PEP 3101 originally specified that the default alignment should
# be left;  it was later agreed that right-aligned makes more sense
# for numeric types.  See http://bugs.python.org/issue6857.
# default sign handling: '-' for negative, '' for positive
# minimumwidth defaults to 0; precision remains None if not given
# if format type is 'g' or 'G' then a precision of 0 makes little
# sense; convert it to 1.  Same if format type is unspecified.
# determine thousands separator, grouping, and decimal separator, and
# add appropriate entries to format_dict
# apart from separators, 'n' behaves just like 'g'
# how much extra space do we have to play with?
# The result from localeconv()['grouping'], and the input to this
# function, should be a list of integers in one of the
# following three forms:
# max(..., 1) forces at least 1 digit to the left of a separator
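Such a grouping spec can be applied by a small right-to-left splitter. This is a hypothetical sketch (`group_digits` is not the module's helper), assuming `locale.CHAR_MAX == 127` as on common platforms: a trailing 0 means "repeat the last group size", a trailing CHAR_MAX means "no further grouping".

```python
def group_digits(digits, grouping, sep=','):
    """Split a digit string right-to-left per a localeconv()-style spec."""
    if not grouping:
        return digits
    groups = []
    sizes = iter(grouping)
    g = next(sizes)
    while digits:
        if g == 127:               # locale.CHAR_MAX: stop grouping here
            groups.append(digits)
            break
        groups.append(digits[-g:])
        digits = digits[:-g]
        nxt = next(sizes, 0)
        if nxt != 0:               # 0 (or exhaustion): repeat the last size
            g = nxt
    return sep.join(reversed(groups))

assert group_digits('1234567', [3, 0]) == '1,234,567'
assert group_digits('1234567', [3, 127]) == '1234,567'
```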
##### Useful Constants (internal use only) ################################
# Reusable defaults
# _SignedInfinity[sign] is infinity w/ that sign
# Constants related to the hash implementation;  hash(x) is based
# on the reduction of x modulo _PyHASH_MODULUS
# hash values to use for positive and negative infinities, and nans
# _PyHASH_10INV is the inverse of 10 modulo the prime _PyHASH_MODULUS
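The modular inverse can be computed portably with three-argument pow(); sys.hash_info exposes the modulus (2**61 - 1 on 64-bit CPython):

```python
import sys

P = sys.hash_info.modulus
inv10 = pow(10, -1, P)        # modular inverse of 10, Python 3.8+
assert (10 * inv10) % P == 1
```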
# date.max.toordinal()
# Utility functions, adapted from Python's Demo/classes/Dates.py, which
# also assumes the current Gregorian calendar indefinitely extended in
# both directions.  Difference:  Dates.py calls January 1 of year 0 day
# number 1.  The code here calls January 1 of year 1 day number 1.  This is
# to match the definition of the "proleptic Gregorian" calendar in Dershowitz
# and Reingold's "Calendrical Calculations", where it's the base calendar
# for all computations.  See the book for algorithms for converting between
# proleptic Gregorian ordinals and many other calendar systems.
# -1 is a placeholder for indexing purposes.
# number of days in 400 years
# A 4-year cycle has an extra leap day over what we'd get from pasting
# together 4 single years.
# Similarly, a 400-year cycle has an extra leap day over what we'd get from
# pasting together 4 100-year cycles.
# OTOH, a 100-year cycle has one fewer leap day than we'd get from
# pasting together 25 4-year cycles.
# n is a 1-based index, starting at 1-Jan-1.  The pattern of leap years
# repeats exactly every 400 years.  The basic strategy is to find the
# closest 400-year boundary at or before n, then work with the offset
# from that boundary to n.  Life is much clearer if we subtract 1 from
# n first -- then the values of n at 400-year boundaries are exactly
# those divisible by _DI400Y:
# ..., -399, 1, 401, ...
# Now n is the (non-negative) offset, in days, from January 1 of year, to
# the desired date.  Now compute how many 100-year cycles precede n.
# Note that it's possible for n100 to equal 4!  In that case 4 full
# 100-year cycles precede the desired day, which implies the desired
# day is December 31 at the end of a 400-year cycle.
# Now compute how many 4-year cycles precede it.
# And now how many single years.  Again n1 can be 4, and again meaning
# that the desired day is December 31 at the end of the 4-year cycle.
# Now the year is correct, and n is the offset from January 1.  We find
# the month via an estimate that's either exact or one too large.
# estimate is too large
# Now the year and month are correct, and n is the offset from the
# start of that month:  we're done!
# Month and day names.  For localized versions, see the calendar module.
# Skip trailing microseconds when us==0.
# Correctly substitute for %z and %Z escapes in strftime formats.
# Don't call utcoffset() or tzname() unless actually needed.
# the string to use for %f
# the string to use for %z
# the string to use for %:z
# the string to use for %Z
# Scan format for %z, %:z and %Z escapes, replacing as needed.
# strftime is going to have a go at this: escape %
# Helpers for parsing the result of isoformat()
# See the comment in _datetimemodule.c:_find_isoformat_datetime_separator
# This is as far as we need to resolve the ambiguity for
# the moment - if we have YYYY-Www-##, the separator is
# either a hyphen at 8 or a number at 10.
# We'll assume it's a hyphen at 8 because it's way more
# likely that someone will use a hyphen as a separator than
# a number, but at this point it's really best effort
# because this is an extension of the spec anyway.
# TODO(pganssle): Document this
# YYYY-Www (8)
# YYYY-MM-DD (10)
# YYYYWww (7) or YYYYWwwd (8)
# If the index of the last number is even, it's YYYYWwwd
# YYYYMMDD (8)
# It is assumed that this is an ASCII-only string of length 7, 8 or 10,
# see the comment on Modules/_datetimemodule.c:_find_isoformat_datetime_separator
# YYYY-?Www-?D?
# Parses things of the form HH[:?MM[:?SS[{.,}fff[fff]]]]
# Format supported is HH[:MM[:SS[.fff[fff]]]][+HH:MM[:SS[.ffffff]]]
# This is equivalent to re.search('[+-Z]', tstr), but faster
# Valid time zone strings are:
# HH                  len: 2
# HHMM                len: 4
# HH:MM               len: 5
# HHMMSS              len: 6
# HHMMSS.f+           len: 7+
# HH:MM:SS            len: 8
# HH:MM:SS.f+         len: 10+
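The longer of these offset forms are accepted by time.fromisoformat() on all supported versions (the compact HH and HHMM forms require newer Pythons, 3.11+):

```python
from datetime import time, timedelta

t = time.fromisoformat('12:30:45+05:30')
assert t.utcoffset() == timedelta(hours=5, minutes=30)
t2 = time.fromisoformat('12:30:45.123456-08:00')
assert t2.utcoffset() == timedelta(hours=-8)
```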
# tuple[int, int, int] -> tuple[int, int, int] version of date.fromisocalendar
# Year is bounded this way because 9999-12-31 is (9999, 52, 5)
# ISO years have 53 weeks in them on years starting with a
# Thursday and leap years starting on a Wednesday
# Now compute the offset from (Y, 1, 1) in days:
# Calculate the ordinal day for monday, week 1
# Just raise TypeError if the arg isn't None or a string.
# name is the offset-producing method, "utcoffset" or "dst".
# offset is what it returned.
# If offset isn't None or timedelta, raises TypeError.
# If offset is None, returns None.
# Else offset is checked for being in range.
# If it is, its integer value is returned.  Else ValueError is raised.
# Based on the reference implementation for divmod_near
# in Objects/longobject.c.
# round up if either r / b > 0.5, or r / b == 0.5 and q is odd.
# The expression r / b > 0.5 is equivalent to 2 * r > b if b is
# positive, 2 * r < b if b negative.
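A pure-Python sketch of that rule (the name is borrowed from the C reference implementation; this is not the module's code):

```python
def divmod_near(a, b):
    """Divide a by b, rounding the quotient to the nearest integer,
    ties to even; returns (q, r) with a == q*b + r and |r| <= |b|/2."""
    q, r = divmod(a, b)
    # r/b > 0.5  <=>  2*r > b (b > 0) or 2*r < b (b < 0).
    greater_than_half = 2 * r > b if b > 0 else 2 * r < b
    exactly_half = 2 * r == b
    if greater_than_half or (exactly_half and q % 2 == 1):
        q += 1
        r -= b
    return q, r

assert divmod_near(7, 2) == (4, -1)    # 3.5 ties to even: 4
assert divmod_near(5, 2) == (2, 1)     # 2.5 ties to even: 2
assert divmod_near(7, 3) == (2, 1)
```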
# The representation of (days, seconds, microseconds) was chosen
# arbitrarily; the exact rationale originally specified in the docstring
# was "Because I felt like it."
# Doing this efficiently and accurately in C is going to be difficult
# and error-prone, due to ubiquitous overflow possibilities and to the fact
# that a C double doesn't have enough bits of precision to represent
# microseconds over 10K years faithfully.  The code here tries to make
# explicit where go-fast assumptions can be relied on, in order to
# guide the C implementation; it's way more convoluted than speed-
# ignoring auto-overflow-to-long idiomatic Python could be.
# XXX Check that all inputs are ints or floats.
# Final values, all integer.
# s and us fit in 32-bit signed ints; d isn't bounded.
# Normalize everything to days, seconds, microseconds.
# Get rid of all fractions, and normalize s and us.
# Take a deep breath <wink>.
# can't overflow
# days isn't referenced again before redefinition
# daysecondsfrac isn't referenced again
# seconds isn't referenced again before redefinition
# exact value not critical
# secondsfrac isn't referenced again
# Just a little bit of carrying possible for microseconds and seconds.
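The normalized representation can be observed directly: seconds and microseconds are always non-negative, and days carries the sign:

```python
from datetime import timedelta

t = timedelta(hours=-1)
assert (t.days, t.seconds, t.microseconds) == (-1, 82800, 0)
# Fractions are folded down into the lower-order fields.
t = timedelta(seconds=1.5, milliseconds=500)
assert (t.days, t.seconds, t.microseconds) == (0, 2, 0)
```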
# Read-only field accessors
# for CPython compatibility, we cannot use
# our __class__ here, but need a real timedelta
# Comparisons of timedelta objects with other.
# Pickle support.
# Pickle support
# More informative error message.
# Additional constructors
# Conversions to string
# XXX These shouldn't depend on time.localtime(), because that
# clips the usable dates to [1970 .. 2038).  At least ctime() is
# easily done without using strftime() -- that's better too because
# strftime("%c", ...) is locale specific.
# Standard conversions, __eq__, __le__, __lt__, __ge__, __gt__,
# __hash__ (and helpers)
# Comparisons of date objects with other.
# Computations
# Day-of-the-week and week-of-the-year, according to ISO
# 1-Jan-0001 is a Monday
# Internally, week and day have origin 0
# so functions w/ args named "date" can get at the class
# See the long comment block at the end of this file for an
# explanation of this algorithm.
# This code is intended to pickle the object without making the
# class public. See https://bugs.python.org/msg352381
# Standard conversions, __hash__ (and helpers)
# Comparisons of time objects with other.
# arbitrary non-zero value
# zero or None
# Conversion to string
# The spec actually requires that time-only ISO 8601 strings start with
# T, but the extended format allows this to be omitted as long as there
# is no ambiguity with date strings.
# The year must be >= 1000 else Python's strftime implementation
# can raise a bogus exception.
# Timezone functions
# so functions w/ args named "time" can get at the class
# clamp out leap seconds if the platform has them
# As of version 2015f, the max fold in the IANA database is
# 23 hours at 1969-09-30 13:00:00 in Kwajalein.
# Let's probe 24 hours in the past to detect a transition:
# On Windows localtime_s throws an OSError for negative values,
# thus we can't perform fold detection for values of time less
# than the max time fold. See comments in _datetimemodule's
# version of this method for more details.
# Split this at the separator
# Our goal is to solve t = local(u) for u.
# We found one solution, but it may not be the one we need.
# Look for an earlier solution (if `fold` is 0), or a
# later one (if `fold` is 1).
# We have found both offsets a and b, but neither t - a nor t - b is
# a solution.  This means t is in the gap.
# Detect gap
# This happens in a gap or a fold
# Extract TZ data
# Convert self to UTC, and attach the new time zone object.
# Convert from UTC to tz's local time.
# Ways to produce a string.
# These are never zero
# Comparisons of datetime objects with other.
# Assume that allow_mixed means that we are called from __eq__
# XXX What follows could be done more efficiently...
# this will take offsets into account
# Helper to calculate the day number of the Monday starting week 1
# See weekday() above
# Sentinel value to disallow None
# bpo-37642: These attributes are rounded to the nearest minute for backwards
# compatibility, even though the constructor will accept a wider range of
# values. This may change in the future.
# Some time zone algebra.  For a datetime x, let
# Now some derived rules, where k is a duration (timedelta).
# 1. x.o = x.s + x.d
# 2. If x and y have the same tzinfo member, x.s = y.s.
# 3. The naive UTC time corresponding to x is x.n - x.o.
# 4. (x+k).s = x.s
# 5. (x+k).n = x.n + k
# Now we can explain tz.fromutc(x).  Let's assume it's an interesting case
# (meaning that the various tzinfo methods exist, and don't blow up or return
# None when called).
# The function wants to return a datetime y with timezone tz, equivalent to x.
# x is already in UTC.
# By #3, we want
# The algorithm starts by attaching tz to x.n, and calling that y.  So
# x.n = y.n at the start.  Then it wants to add a duration k to y, so that [1]
# becomes true; in effect, we want to solve [2] for k:
# By #1, this is the same as
# By #5, (y+k).n = y.n + k, which equals x.n + k because x.n=y.n at the start.
# Substituting that into [3],
# On the RHS, (y+k).d can't be computed directly, but y.s can be, and we
# approximate k by ignoring the (y+k).d term at first.  Note that k can't be
# very large, since all offset-returning methods return a duration of magnitude
# less than 24 hours.  For that reason, if y is firmly in std time, (y+k).d must
# be 0, so ignoring it has no consequence then.
# In any case, the new value is
# It's helpful to step back and look at [4] from a higher level:  it's simply
# mapping from UTC to tz's standard time.
# At this point, if
# we have an equivalent time, and are almost done.  The insecurity here is
# at the start of daylight time.  Picture US Eastern for concreteness.  The wall
# time jumps from 1:59 to 3:00, and wall hours of the form 2:MM don't make good
# sense then.  The docs ask that an Eastern tzinfo class consider such a time to
# be EDT (because it's "after 2"), which is a redundant spelling of 1:MM EST
# on the day DST starts.  We want to return the 1:MM EST spelling because that's
# the only spelling that makes sense on the local wall clock.
# In fact, if [5] holds at this point, we do have the standard-time spelling,
# but that takes a bit of proof.  We first prove a stronger result.  What's the
# difference between the LHS and RHS of [5]?  Let
# Now
# Plugging that back into [6] gives
# So diff = z.d.
# If [5] is true now, diff = 0, so z.d = 0 too, and we have the standard-time
# spelling we wanted in the endcase described above.  We're done.  Contrarily,
# if z.d = 0, then we have a UTC equivalent, and are also done.
# If [5] is not true now, diff = z.d != 0, and z.d is the offset we need to
# add to z (in effect, z is in tz's standard time, and we need to shift the
# local clock into tz's daylight time).
# Let
# and we can again ask whether
# If so, we're done.  If not, the tzinfo class is insane, according to the
# assumptions we've made.  This also requires a bit of proof.  As before, let's
# compute the difference between the LHS and RHS of [8] (and skipping some of
# the justifications for the kinds of substitutions we've done several times
# already):
# So z' is UTC-equivalent to x iff z'.d = z.d at this point.  If they are equal,
# we've found the UTC-equivalent so are done.  In fact, we stop with [7] and
# return z', not bothering to compute z'.d.
# How could z.d and z'.d differ?  z' = z + z.d [7], so merely moving z' by
# a dst() offset, and starting *from* a time already in DST (we know z.d != 0),
# would have to change the result dst() returns:  we start in DST, and moving
# a little further into it takes us out of DST.
# There isn't a sane case where this can happen.  The closest it gets is at
# the end of DST, where there's an hour in UTC with no spelling in a hybrid
# tzinfo class.  In US Eastern, that's 5:MM UTC = 0:MM EST = 1:MM EDT.  During
# that hour, on an Eastern clock 1:MM is taken as being in standard time (6:MM
# UTC) because the docs insist on that, but 0:MM is taken as being in daylight
# time (4:MM UTC).  There is no local time mapping to 5:MM UTC.  The local
# clock jumps from 1:59 back to 1:00 again, and repeats the 1:MM hour in
# standard time.  Since that's what the local clock *does*, we want to map both
# UTC hours 5:MM and 6:MM to 1:MM Eastern.  The result is ambiguous
# in local time, but so it goes -- it's the way the local clock works.
# When x = 5:MM UTC is the input to this algorithm, x.o=0, y.o=-5 and y.d=0,
# so z=0:MM.  z.d=60 (minutes) then, so [5] doesn't hold and we keep going.
# z' = z + z.d = 1:MM then, and z'.d=0, and z'.d - z.d = -60 != 0 so [8]
# (correctly) concludes that z' is not UTC-equivalent to x.
# Because we know z.d said z was in daylight time (else [5] would have held and
# we would have stopped then), and we know z.d != z'.d (else [8] would have held
# and we have stopped then), and there are only 2 possible values dst() can
# return in Eastern, it follows that z'.d must be 0 (which it is in the example,
# but the reasoning doesn't depend on the example -- it depends on there being
# two possible dst() outcomes, one zero and the other non-zero).  Therefore
# z' must be in standard time, and is the spelling we want in this case.
# Note again that z' is not UTC-equivalent as far as the hybrid tzinfo class is
# concerned (because it takes z' as being in standard time rather than the
# daylight time we intend here), but returning it gives the real-life "local
# clock repeats an hour" behavior when mapping the "unspellable" UTC hour into
# tz.
# When the input is 6:MM, z=1:MM and z.d=0, and we stop at once, again with
# the 1:MM standard time spelling we want.
# So how can this break?  One of the assumptions must be violated.  Two
# possibilities:
# 1) [2] effectively says that y.s is invariant across all y belonging to a
# given time zone.
# 2) There may be versions of "double daylight" time where the tail end of
# In any case, it's clear that the default fromutc() is strong enough to handle
# "almost all" time zones:  so long as the standard offset is invariant, it
# doesn't matter if daylight time transition points change from year to year, or
# if daylight time is skipped in some years; it doesn't matter how large or
# small dst() may get within its bounds; and it doesn't even matter if some
# perverse time zone returns a negative dst().  So a breaking case must be
# pretty bizarre, and a tzinfo subclass can override fromutc() if it is.
# Import types and functions implemented in C
# 3 digits (xx.x UNIT)
# 4 or 5 digits (xxxx UNIT)
# frame is a tuple: (filename: str, lineno: int)
# frames is a tuple of frame tuples: see Frame constructor for the
# format of a frame tuple; it is reversed, because _tracemalloc
# returns frames sorted from most recent to oldest, but the
# Python API expects oldest to most recent
# trace is a tuple: (domain: int, size: int, traceback: tuple).
# See Traceback constructor for the format of the traceback tuple.
# traces is a tuple of trace tuples: see Trace constructor
# traces is a tuple of trace tuples: see _Traces constructor for
# the exact format
# key_type == 'filename':
# cumulative statistics
# Originally contributed by Sjoerd Mullender.
# Significantly modified by Jeffrey Yasskin <jyasskin at gmail.com>.
# Constants related to the hash implementation; hash(x) is based
# on the reduction of x modulo the prime _PyHASH_MODULUS.
# Value to be used for rationals that reduce to infinity modulo
# _PyHASH_MODULUS.
# To make sure that the hash of a Fraction agrees with the hash
# of a numerically equal integer, float or Decimal instance, we
# follow the rules for numeric hashes outlined in the
# documentation.  (See library docs, 'Built-in Types').
# ValueError means there is no modular inverse.
# The general algorithm now specifies that the absolute value of
# the hash is
# where N is self._numerator and P is _PyHASH_MODULUS.  That's
# optimized here in two ways:  first, for a non-negative int i,
# hash(i) == i % P, but the int hash implementation doesn't need
# to divide, and is faster than doing % P explicitly.  So we do
# instead.  Second, N is unbounded, so its product with dinv may
# be arbitrarily expensive to compute.  The final answer is the
# same if we use the bounded |N| % P instead, which can again
# be done with an int hash() call.  If 0 <= i < P, hash(i) == i,
# so this nested hash() call wastes a bit of time making a
# redundant copy when |N| < P, but can save an arbitrarily large
# amount of computation for large |N|.
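The rule described above can be verified against the public API: the hash of a Fraction agrees with numerically equal values of other types, and (for positive values) equals N * dinv modulo the hash prime:

```python
import sys
from fractions import Fraction

P = sys.hash_info.modulus
f = Fraction(1, 2)
assert hash(f) == hash(0.5)            # agrees across numeric types
dinv = pow(f.denominator, -1, P)       # modular inverse of the denominator
assert hash(f) == (f.numerator * dinv) % P
```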
# Helpers for formatting
# The divmod quotient is correct for round-ties-towards-positive-infinity;
# in the case of a tie, we zero out the least significant bit of q.
# Special case for n == 0.
# Find integer m satisfying 10**(m - 1) <= abs(n)/d <= 10**m. (If abs(n)/d
# is a power of 10, either of the two possible values for m is fine.)
# Round to a multiple of 10**(m - figures). The significand we get
# satisfies 10**(figures - 1) <= significand <= 10**figures.
# Adjust in the case where significand == 10**figures, to ensure that
# 10**(figures - 1) <= significand < 10**figures.
# Pattern for matching non-float-style format specifications.
# Pattern for matching float-style format specifications;
# supports 'e', 'E', 'f', 'F', 'g', 'G' and '%' presentation types.
# Exact conversion
# Handle construction from strings.
# *very* normal case
# Algorithm notes: For any real number x, define a *best upper
# approximation* to x to be a rational number p/q such that:
# Define *best lower approximation* similarly.  Then it can be
# proved that a rational number is a best upper or lower
# approximation to x if, and only if, it is a convergent or
# semiconvergent of the (unique shortest) continued fraction
# associated to x.
# To find a best rational approximation with denominator <= M,
# we find the best upper and lower approximations with
# denominator <= M and take whichever of these is closer to x.
# In the event of a tie, the bound with smaller denominator is
# chosen.  If both denominators are equal (which can happen
# only when max_denominator == 1 and self is midway between
# two integers) the lower bound---i.e., the floor of self, is
# taken.
# Determine which of the candidates (p0+k*p1)/(q0+k*q1) and p1/q1 is
# closer to self. The distance between them is 1/(q1*(q0+k*q1)), while
# the distance from p1/q1 to self is d/(q1*self._denominator). So we
# need to compare 2*(q0+k*q1) with self._denominator/d.
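This continued-fraction machinery is what limit_denominator() exposes; two quick checks, including the floor tie-break described above:

```python
import math
from fractions import Fraction

# Best approximation to pi with denominator <= 10 is the convergent 22/7.
assert Fraction(math.pi).limit_denominator(10) == Fraction(22, 7)
# Midway between two integers with max_denominator == 1: floor is taken.
assert Fraction(7, 2).limit_denominator(1) == 3
```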
# Validate and parse the format specifier.
# Determine the body and sign representation.
# Pad with fill character if necessary and return.
# align == "="
# Round to get the digits we need, figure out where to place the point,
# and decide whether to use scientific notation. 'point_pos' is the
# relative to the _end_ of the digit string: that is, it's the number
# of digits that should follow the point.
# presentation_type in "eEgG"
# Get the suffix - the part following the digits, if any.
# String of output digits, padded sufficiently with zeros on the left
# so that we'll have at least one digit before the decimal point.
# Before padding, the output has the form f"{sign}{leading}{trailing}",
# where `leading` includes thousands separators if necessary and
# `trailing` includes the decimal separator where appropriate.
# Do zero padding if required.
# When adding thousands separators, they'll be added to the
# zero-padded portion too, so we need to compensate.
# Insert thousands separators if required.
# We now have a sign and a body. Pad with fill character if necessary
# Refuse the temptation to guess if both alignment _and_
# zero padding are specified.
# Includes ints.
# Rational arithmetic algorithms: Knuth, TAOCP, Volume 2, 4.5.1.
# Assume input fractions a and b are normalized.
# 1) Consider addition/subtraction.
# Let g = gcd(da, db). Then
# Now, if g > 1, we're working with smaller integers.
# Note that t, (da//g) and (db//g) are pairwise coprime.
# Indeed, (da//g) and (db//g) share no common factors (they were
# removed) and da is coprime with na (since input fractions are
# normalized), hence (da//g) and na are coprime.  By symmetry,
# (db//g) and nb are coprime too.  Then,
# The above allows us to optimize reduction of the result to lowest
# terms.  Indeed,
# is a normalized fraction.  This is useful because the unnormalized
# denominator d could be much larger than g.
# We should special-case g == 1 (and g2 == 1), since 60.8% of
# randomly-chosen integers are coprime:
# https://en.wikipedia.org/wiki/Coprime_integers#Probability_of_coprimality
# Note that g2 == 1 always for fractions obtained from floats: here
# g is a power of 2 and the unnormalized numerator t is an odd integer.
# 2) Consider multiplication
# Let g1 = gcd(na, db) and g2 = gcd(nb, da), then
# Note that after the divisions we're multiplying smaller integers.
# Also, the resulting fraction is normalized, because each of
# two factors in the numerator is coprime to each of the two factors
# in the denominator.
# Indeed, pick (na//g1).  It's coprime with (da//g2), because input
# fractions are normalized.  It's also coprime with (db//g1), because
# common factors are removed by g1 == gcd(na, db).
# As for addition/subtraction, we should special-case g1 == 1
# and g2 == 1 for the same reason.  That also happens when multiplying
# rationals obtained from floats.
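The optimized addition described above can be sketched as a standalone function (illustrative only; names and structure are an assumption, not the module's method):

```python
import math
from fractions import Fraction

def add(na, da, nb, db):
    """na/da + nb/db with g = gcd(da, db) factored out first (Knuth,
    TAOCP vol. 2, 4.5.1).  Inputs are assumed normalized."""
    g = math.gcd(da, db)
    if g == 1:                       # ~60.8% of random pairs are coprime
        return na * db + da * nb, da * db
    s = da // g
    t = na * (db // g) + nb * s
    g2 = math.gcd(t, g)              # only t can share factors with g
    if g2 == 1:
        return t, s * db
    return t // g2, s * (db // g2)

n, d = add(3, 4, 5, 6)               # 3/4 + 5/6 = 19/12
assert Fraction(n, d) == Fraction(3, 4) + Fraction(5, 6)
assert math.gcd(n, d) == 1           # result already in lowest terms
```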
# Same as _mul(), but with b inverted.
# A fractional power will generally produce an
# irrational number.
# If a is an int, keep it that way if possible.
# The negations cleverly convince floordiv to return the ceiling.
# Deal with the half case:
# See _operator_fallbacks.forward to check that the results of
# these operations will always be Fraction and therefore have
# round().
# comparisons with an infinity or nan should behave in
# the same way for any finite a, so treat a as zero.
# Since a doesn't know how to compare with b, let's give b
# a chance to compare itself with a.
# convert other to a Rational instance where reasonable.
# bpo-39274: Use bool() because (a._numerator != 0) can return an
# object which is not a bool.
# support for pickling, copy, and deepcopy
# system configuration generated and used by the sysconfig module
# portions copyright 2001, Autonomous Zones Industries, Inc., all rights...
# err...  reserved and offered to the public under the terms of the
# Python 2.2 license.
# Author: Zooko O'Whielacronx
# http://zooko.com/
# mailto:zooko@zooko.com
# Copyright 2000, Mojam Media, Inc., all rights reserved.
# Author: Skip Montanaro
# Copyright 1999, Bioreason, Inc., all rights reserved.
# Author: Andrew Dalke
# Copyright 1995-1997, Automatrix, Inc., all rights reserved.
# Copyright 1991-1995, Stichting Mathematisch Centrum, all rights reserved.
# Permission to use, copy, modify, and distribute this Python software and
# its associated documentation for any purpose without fee is hereby
# granted, provided that the above copyright notice appears in all copies,
# and that both that copyright notice and this permission notice appear in
# supporting documentation, and that the name of neither Automatrix,
# Bioreason or Mojam Media be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior permission.
# haven't seen this one before, so see if the module name is
# on the ignore list.
# Identical names, so ignore
# check if the module is a proper submodule of something on
# the ignore list
# Need to take some care since ignoring
# "cmp" mustn't mean ignoring "cmpcache" but ignoring
# "Spam" must also mean ignoring "Spam.Eggs".
# Now check that filename isn't in one of the directories
# must be a built-in, so we must ignore
# Ignore a file when it contains one of the ignorable paths
# The '+ os.sep' is to ensure that d is a parent directory,
# as compared to cases like:
# or
# Tried the different ways, so we don't ignore this module
# If the file 'path' is part of a package, then the filename isn't
# enough to uniquely identify it.  Try to do the right thing by
# looking in sys.path for the longest matching prefix.  We'll
# assume that the rest is the package name.
# the drive letter is never part of the module name
# map (filename, lineno) to count
# Try to merge existing counts file.
# turn the counts data ("(filename, lineno) = count") into something
# accessible on a per-file basis
# accumulate summary info, if needed
# If desired, get a list of the line numbers which represent
# executable content (returned as a dict for better lookup speed)
# try and store counts and module info into self.outfile
# ``lnotab`` is a dict of executable lines, or a line number "table"
# do the blank/comment match to try to mark more lines
# (help the reader find stuff that hasn't been covered)
# Highlight never-executed lines, unless the line contains
# #pragma: NO COVER
# get all of the lineno information from the code of this scope level
# and check the constants for references to other code objects
# find another code object, so recurse into it
# If the first token is a string, then it's the module docstring.
# Add this special case so that the test in the loop passes.
# keys are (filename, linenumber)
# for memoizing os.path.basename
# Ahem -- do nothing?  Okay.
## use of gc.get_referrers() was suggested by Michael Hudson
# all functions which refer to this code object
# require len(func) == 1 to avoid ambiguity caused by calls to
# new.function(): "In the face of ambiguity, refuse the
# temptation to guess."
# ditto for new.classobj()
# cache the result - assumption is that new.* is
# not called later to disturb this relationship
# _caller_cache could be flushed if functions in
# the new module get called.
# XXX Should do a better job of identifying methods
# XXX _modname() doesn't work right for packages, so
# the ignore support won't work right for packages
# record the file name and line number of every trace
# try to emulate __main__ namespace as much as possible
# Changes and improvements suggested by Steve Majewski.
# Modified by Jack to work on the mac.
# Modified by Siebren to support docstrings and PASV.
# Modified by Phil Schwartz to add storbinary and storlines callbacks.
# Modified by Giampaolo Rodola' to add TLS support.
# Magic number from <socket.h>
# Process data out of band
# The standard FTP server control port
# The sizehint parameter passed to readline() calls
# Exception raised when an error or invalid response is received
# unexpected [123]xx reply
# 4xx errors
# 5xx errors
# response does not begin with [1-5]
# All exceptions (hopefully) that may be raised here and that aren't
# (always) programming errors on our side
# Line terminators (we always output CRLF, but accept any of CRLF, CR, LF)
# The class itself
# Disables https://bugs.python.org/issue43285 security if set to True.
# Context management protocol: try to quit() if active
# Internal: "sanitize" a string for printing
# Internal: send one line to the server, appending CRLF
# Internal: send one command to the server (through putline())
# Raise EOFError if the connection is closed
# Internal: get a response from the server, which may possibly
# consist of multiple lines.  Return a single string with no
# trailing CRLF.  If the response consists of multiple lines,
# these are separated by '\n' characters in the string
# Raise various errors if the response indicates an error
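The multiline-reply rule (RFC 959) can be sketched as a small standalone parser: a reply starting "NNN-" continues until a line starting "NNN " with the same code, and lines are joined with '\n' with no trailing CRLF. This is a hypothetical helper, not ftplib's actual method:

```python
def get_multiline(lines):
    """Join an FTP reply spread over several CRLF-terminated lines."""
    it = iter(lines)
    first = next(it).rstrip('\r\n')
    reply = [first]
    if first[3:4] == '-':                 # 'NNN-' marks a multiline reply
        code = first[:3]
        for line in it:
            line = line.rstrip('\r\n')
            reply.append(line)
            if line.startswith(code + ' '):
                break                     # 'NNN ' (same code) ends it
    return '\n'.join(reply)

resp = get_multiline(['213-Status follows:\r\n',
                      ' some status\r\n',
                      '213 End of status\r\n'])
assert resp == '213-Status follows:\n some status\n213 End of status'
```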
# Get proper port
# Get proper host
# Some servers apparently send a 200 reply to
# a LIST or STOR command, before the 150 reply
# (and way before the 226 reply). This seems to
# be in violation of the protocol (which only allows
# 1xx or error messages for LIST), so we just discard
# this response.
# See above.
# this is conditional in case we received a 125
# If there is no anonymous ftp password specified
# then we'll just use anonymous@
# We don't send any other thing because:
# - We want to remain anonymous
# - We want to stop SPAM
# - We don't want to let ftp sites discriminate by the user,
#   host or country.
# shutdown ssl layer
# does nothing, but could return error
# The SIZE command is defined in RFC-3659
# fix around non-compliant implementations such as IIS shipped
# with Windows server 2003
# PROT defines whether or not the data channel is to be protected.
# Though RFC-2228 defines four possible protection levels,
# RFC-4217 only recommends two, Clear and Private.
# Clear (PROT C) means that no security is to be used on the
# data-channel, Private (PROT P) means that the data-channel
# should be protected by TLS.
# PBSZ command MUST still be issued, but must have a parameter of
# '0' to indicate that no buffering is taking place and the data
# connection should not be encapsulated.
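The PBSZ/PROT sequence described above is driven by `ftplib.FTP_TLS`. A minimal sketch (the host is hypothetical; a real FTPS server is needed to actually run the session):

```python
from ftplib import FTP_TLS

def secure_listing(host: str) -> None:
    # Hypothetical host: sketch of the RFC 4217 sequence ftplib performs.
    ftps = FTP_TLS(host)
    ftps.login()            # control channel upgraded via AUTH TLS
    ftps.prot_p()           # sends PBSZ 0, then PROT P: TLS on the data channel
    ftps.retrlines("LIST")
    ftps.prot_c()           # PROT C: back to a cleartext data channel
    ftps.quit()
```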
# --- Overridden FTP methods
# overridden as we can't pass MSG_OOB flag to sendall()
# should contain '(|||port|)'
# Not compliant with RFC 959, but UNIX ftpd does this
# RFC 959: the user must "listen" [...] BEFORE sending the
# transfer request.
# So: STOR before RETR, because here the target is a "user".
# RFC 959
# get name of alternate ~/.netrc file:
# no account for host
# These lists are documented as part of the dis module's API
# for backward compatibility
# On WASI, getuid() is indicated as a stub but it may also be missing.
# Look for a machine, default, or macdef top-level keyword
# a macro definition finished with consecutive new-line
# characters. The first \n is encountered by the
# readline() method and this is the second \n.
# We're looking at start of an entry for a named machine or default.
# Naming convention: Variables named "wr" are weak reference objects;
# they are called this instead of "ref" to avoid name collisions with
# the module-global ref() function imported from _weakref.
# Import after _weakref to avoid circular import.
# The self-weakref trick is needed to avoid creating a reference
# cycle.
# We inherit the constructor without worrying about the input
# dictionary; since it uses our .update() method, we get the right
# checks (if the other dictionary is a WeakValueDictionary,
# objects are unwrapped on the way out, and we always wrap on the
# way in).
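A short sketch of the wrap-on-the-way-in, unwrap-on-the-way-out behaviour (the immediate removal on `del` assumes CPython's reference counting):

```python
import weakref

class Node:
    pass

cache = weakref.WeakValueDictionary()
node = Node()
cache["root"] = node           # stored as a weak reference on the way in
assert cache["root"] is node   # unwrapped back to the object on the way out
del node                       # last strong reference dropped...
assert "root" not in cache     # ...so the entry vanishes (CPython refcounting)
```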
# Atomic removal is necessary since this function
# can be called asynchronously by the GC
# We shouldn't encounter any KeyError, because this method should
# always be called *before* mutating the dict.
# This should only happen
# A list of dead weakrefs (keys to be removed)
# NOTE: We don't need to call this method before mutating the dict,
# because a dead weakref never compares equal to a live weakref,
# even if they happened to refer to equal objects.
# However, it means keys may already have been removed.
# self._pending_removals may still contain keys which were
# explicitly removed, we have to scrub them (see issue #21173).
# Finalizer objects don't have any state of their own.  They are
# just used as keys to lookup _Info objects in the registry.  This
# ensures that they cannot be part of a ref-cycle.
# We may register the exit function more than once because
# of a thread race, but that is harmless
# Return live finalizers marked for exit, oldest first
# At shutdown invoke finalizers for which atexit is true.
# This is called once all other non-daemonic threads have been
# joined.
# gc is disabled, so (assuming no daemonic
# threads) the following is the only line in
# this function which might trigger creation
# of a new finalizer
# prevent any more finalizers from executing during shutdown
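The finalizer behaviour described above can be seen with `weakref.finalize`; a finalizer runs at most once, whether triggered by collection or called directly (the immediate firing on `del` assumes CPython's reference counting):

```python
import weakref

events = []

class Resource:
    pass

res = Resource()
fin = weakref.finalize(res, events.append, "closed")
assert fin.alive
del res                         # CPython: refcount hits zero, finalizer fires
assert events == ["closed"]
assert not fin.alive
fin()                           # a dead finalizer is a no-op
assert events == ["closed"]
```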
# Strings representing various path-related bits and pieces.
# Normalize the case of a pathname.  Trivial in Posix, string.lower on Mac.
# On MS-DOS this may also turn slashes into backslashes; however, other
# normalizations (such as optimizing '../' away) are not allowed
# (another function should be defined to do that).
# Return whether a path is absolute.
# Trivial in Posix, harder on the Mac or MS-DOS.
# Join pathnames.
# Ignore the previous parts if a part is absolute.
# Insert a '/' unless the first part is empty or already ends in '/'.
# Split a path in head (everything up to the last '/') and tail (the
# rest).  If the path ends in '/', tail will be empty.  If there is no
# '/' in the path, head will be empty.
# Trailing '/'es are stripped from head unless it is the root.
# Split a pathname into a drive specification and the rest of the
# path.  Useful on DOS/Windows/NT; on Unix, the drive is always empty.
# Relative path, e.g.: 'foo'
# Absolute path, e.g.: '/foo', '///foo', '////foo', etc.
# Precisely two leading slashes, e.g.: '//foo'. Implementation defined per POSIX, see
# https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_13
# Return the tail (basename) part of a path, same as split(path)[1].
# Return the head (dirname) part of a path, same as split(path)[0].
# (Does this work for all UNIXes?  Is it even guaranteed to work by Posix?)
# It doesn't exist -- so not a mount point. :-)
# A symlink can never be a mount point
# path/.. on a different device as path or the same i-node as path
# pwd module unavailable, return path unchanged
# bpo-10496: if the current user identifier doesn't exist in the
# password database, return the path unchanged
# bpo-10496: if the user name from the path doesn't exist in the
# password database, return the path unchanged
# if no user home, return the path unchanged on VxWorks
# This expands the forms $variable and ${variable} only.
# Non-existent variables are left unchanged.
# Normalize a path, e.g. A//B, A/./B and A/foo/../B all become A/B.
# It should be understood that this may change the meaning of the path
# if it contains symbolic links!
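The symlink caveat in a nutshell: `normpath()` is purely lexical, so it collapses `..` without consulting the filesystem.

```python
import posixpath

# Purely lexical: doubled slashes, '.' and '..' components are collapsed.
assert posixpath.normpath("A//B/./C/../D") == "A/B/D"
# But if C is a symlink to somewhere else, the real "A/B/C/../D" need
# not name "A/B/D" at all - that is how normpath can change the meaning.
```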
# Return a canonical path (i.e. the absolute location of a file on the
# filesystem).
# The stack of unresolved path parts. When popped, a special value of None
# indicates that a symlink target has been resolved, and that the original
# symlink path can be retrieved by popping again. The [::-1] slice is a
# very fast way of spelling list(reversed(...)).
# Number of unprocessed parts in 'rest'. This can differ from len(rest)
# later, because 'rest' might contain markers for unresolved symlinks.
# The resolved path, which is absolute throughout this function.
# Note: getcwd() returns a normalized and symlink-free path.
# Mapping from symlink paths to *fully resolved* symlink targets. If a
# symlink is encountered but not yet resolved, the value is None. This is
# used both to detect symlink loops and to speed up repeated traversals of
# the same links.
# resolved symlink target
# current dir
# parent dir
# Already seen this path
# use cached value
# The symlink is not resolved, so we must have a symlink loop.
# Raise OSError(errno.ELOOP)
# Resolve the symbolic link
# Symlink target is absolute; reset resolved path.
# Mark this symlink as seen but not fully resolved.
# Push the symlink path onto the stack, and signal its specialness
# by also pushing None. When these entries are popped, we'll
# record the fully-resolved symlink target in the 'seen' mapping.
# Push the unresolved symlink target parts onto the stack.
# An error occurred and was ignored.
# Return the longest common sub-path of the sequence of paths given as input.
# The paths are not normalized before comparing them (this is the
# responsibility of the caller). Any trailing separator is stripped from the
# returned path.
# Similar to functools.wraps(), but only assign __doc__.
# __module__ should be preserved,
# __name__ and __qualname__ are already fine,
# __annotations__ is not set.
# The maximum length of a log message in bytes, including the level marker and
# tag, is defined as LOGGER_ENTRY_MAX_PAYLOAD at
# https://cs.android.com/android/platform/superproject/+/android-14.0.0_r1:system/logging/liblog/include/log/log.h;l=71.
# Messages longer than this will be truncated by logcat. This limit has already
# been reduced at least once in the history of Android (from 4076 to 4068 between
# API level 23 and 26), so leave some headroom.
# UTF-8 uses a maximum of 4 bytes per character, so limiting text writes to this
# size ensures that we can always avoid exceeding MAX_BYTES_PER_WRITE.
# However, if the actual number of bytes per character is smaller than that,
# then we may still join multiple consecutive text writes into binary
# writes containing a larger number of characters.
# When embedded in an app on current versions of Android, there's no easy way to
# monitor the C-level stdout and stderr. The testbed comes with a .c file to
# redirect them to the system log using a pipe, but that wouldn't be convenient
# or appropriate for all apps. So we redirect at the Python level instead.
# Not embedded in an app.
# The default is surrogateescape for stdout and backslashreplace for
# stderr, but in the context of an Android log, readability is more
# important than reversibility.
# In case `s` is a str subclass that writes itself to stdout or stderr
# when we call its methods, convert it to an actual str.
# We want to emit one log message per line wherever possible, so split
# the string into lines first. Note that "".splitlines() == [], so
# nothing will be logged for an empty string.
# The size and behavior of TextIOWrapper's buffer is not part of its public
# API, so we handle buffering ourselves to avoid truncation.
# Since this is a line-based logging system, line buffering cannot be turned
# off, i.e. a newline always causes a flush.
# Writing an empty string to the stream should have no effect.
# This is needed by the test suite --timeout option, which uses faulthandler.
# When a large volume of data is written to logcat at once, e.g. when a test
# module fails in --verbose3 mode, there's a risk of overflowing logcat's own
# buffer and losing messages. We avoid this by imposing a rate limit using the
# token bucket algorithm, based on a conservative estimate of how fast `adb
# logcat` can consume data.
# The logcat buffer size of a device can be determined by running `logcat -g`.
# We set the token bucket size to half of the buffer size of our current minimum
# API level, because other things on the system will be producing messages as
# well.
# https://cs.android.com/android/platform/superproject/+/android-14.0.0_r1:system/logging/liblog/include/log/log_read.h;l=39
# Encode null bytes using "modified UTF-8" to avoid them truncating the
# message.
# If the bucket level is still below zero, the clock must have gone
# backwards, so reset it to zero and continue.
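The rate limiting described above is the classic token bucket; a minimal sketch (the rate and capacity here are illustrative, not the module's actual constants):

```python
import time

class TokenBucket:
    """Allow writes at up to `rate` tokens/sec, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate
        self.capacity = capacity
        self.level = capacity           # start with a full bucket
        self.last = time.monotonic()

    def try_consume(self, tokens: float) -> bool:
        now = time.monotonic()
        # Refill for the elapsed time, clamped to the bucket capacity.
        self.level = min(self.capacity, self.level + (now - self.last) * self.rate)
        self.last = now
        if self.level < 0:
            self.level = 0              # clock went backwards: reset and continue
        if tokens <= self.level:
            self.level -= tokens
            return True                 # enough budget: write now
        return False                    # caller should sleep and retry
```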
# We check for __abstractmethods__ here because cls might by a C
# implementation or a python implementation (especially during
# testing), and we want to handle both cases.
# Check the existing abstract methods of the parents, keep only the ones
# that are not implemented.
# Also add any other newly added abstract methods.
# New I/O library conforming to PEP 3116.
# Pretend this exception was created here.
# for seek()
# Declaring ABCs in C is tricky so we do it here.
# Method descriptions and default implementations are inherited from the C
# version however.
# This module is in the public domain.  No warranties.
# Create constants for the compiler flags in Include/code.h
# We try to get them from dis to avoid duplication
# See Include/object.h
# module
# this includes types.Function, types.BuiltinFunctionType,
# types.BuiltinMethodType, functools.partial, functools.singledispatch,
# "class funclike" from Lib/test/test_inspect... on and on it goes.
# "Inject" type parameters into the local namespace
# (unless they are shadowed by assignments *in* the local namespace),
# as a way of emulating annotation scopes when calling `eval()`
# ----------------------------------------------------------- type-checking
# mutual exclusion
# Lie for children.  The addition of partial.__get__
# doesn't currently change the partial object's behaviour,
# not counting a warning about future changes.
# CPython and equivalent
# Other implementations
# A marker for markcoroutinefunction and iscoroutinefunction.
# It looks like ABCMeta.__new__ has finished running;
# TPFLAGS_IS_ABSTRACT should have been accurate.
# It looks like ABCMeta.__new__ has not finished running yet; we're
# probably in __init_subclass__. We'll look for abstractmethods manually.
# add any DynamicClassAttributes to the list of names if object is a class;
# this may result in duplicate entries if, for example, a virtual
# attribute with the same name as a DynamicClassAttribute exists
# First try to get the value via getattr.  Some descriptors don't
# like calling their __get__ (see bug #1785), so fall back to
# looking in the __dict__.
# handle the duplicate key
# could be a (currently) missing slot member, or a buggy
# __dir__; discard and move on
# for attributes stored in the metaclass
# add any DynamicClassAttributes to the list of names;
# this may result in duplicate entries if, for example, a virtual
# attribute with the same name as a DynamicClassAttribute exists.
# Get the object associated with the name, and where it was defined.
# Normal objects will be looked up with both getattr and directly in
# its class' dict (in case getattr fails [bug #1785], and also to look
# for a docstring).
# For DynamicClassAttributes on the second pass we only look in the
# class's dict.
# Getting an obj from the __dict__ sometimes reveals more than
# using getattr.  Static and class methods are dramatic examples.
# if the resulting object does not live somewhere in the
# mro, drop it and search the mro manually
# first look in the classes
# then check the metaclasses
# unable to locate the attribute anywhere, most likely due to
# buggy custom __dir__; discard and move on
# Classify the object or its descriptor.
# ----------------------------------------------------------- class helpers
# -------------------------------------------------------- function helpers
# remember the original func for error reporting
# Memoise by id to tolerate non-hashable objects, but store objects to
# ensure they aren't destroyed, which would allow their IDs to be reused.
# -------------------------------------------------- source code extraction
# Find minimum indentation of any non-blank lines after first line.
# Remove indentation.
# Remove any trailing or leading blank lines.
# Check for paths that look like an actual module file
# try longest suffixes first, in case they overlap
# Apple mobile framework markers are another type of non-source file
# return a filename found in the linecache even if it doesn't exist on disk
# only return a non-existent filename if the module has a PEP 302 loader
# Try the filename to modulename cache
# Try the cache again with the absolute file name
# Update the filename to module name cache and check yet again
# Copy sys.modules in order to cope with changes while iterating
# Have already mapped this module, so skip it
# Always map to the name the module knows itself by
# Check the main module
# Check builtins
# Invalidate cache if needed.
# Allow filenames in form of "<something>" to pass through.
# `doctest` monkeypatches `linecache` module to enable
# inspection, so let `linecache.getlines` to be called.
# Look for a comment block at the top of the file.
# Look for a preceding block of comments at the same indentation.
# skip any decorators
# look for the first "def", "class" or "lambda"
# skip to the end of the line
# stop skipping when a NEWLINE is seen
# lambdas always end at the first NEWLINE
# hitting a NEWLINE when in a decorator without args
# ends the decorator
# the end of matching indent/dedent pairs end a block
# (note that this only works for "def"/"class" blocks,
# Include comments if indented at least as much as the block
# any other token on the same indentation level ends the previous
# block as well, except for the pseudo-tokens COMMENT and NL.
# for module or frame that corresponds to module, return all source lines
# --------------------------------------------------- class tree extraction
# ------------------------------------------------ argument list extraction
# Re: `skip_bound_arg=False`
# There is a notable difference in behaviour between getfullargspec
# and Signature: the former always returns 'self' parameter for bound
# methods, whereas the Signature always shows the actual calling
# signature of the passed object.
# To simulate this behaviour, we "unbind" bound methods, to trick
# inspect.signature to always return their first parameter ("self",
# usually)
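The difference in behaviour is easy to demonstrate:

```python
import inspect

class Greeter:
    def greet(self, name):
        return f"hello {name}"

bound = Greeter().greet
# getfullargspec reports the underlying function, 'self' included ...
assert inspect.getfullargspec(bound).args == ["self", "name"]
# ... while Signature shows the bound method's actual calling convention.
assert list(inspect.signature(bound).parameters) == ["name"]
```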
# Re: `follow_wrapper_chains=False`
# getfullargspec() historically ignored __wrapped__ attributes,
# so we ensure that remains the case in 3.3+
# Most of the times 'signature' will raise ValueError.
# But, it can also raise AttributeError, and, maybe something
# else. So to be fully backwards compatible, we catch all
# possible exceptions here, and reraise a TypeError.
# compatibility with 'func.__kwdefaults__'
# compatibility with 'func.__defaults__'
# implicit 'self' (or 'cls' for classmethods) argument
# Nonlocal references are named in co_freevars and resolved
# by looking them up in __closure__ by positional index
# Global and builtin references are named in co_names and resolved
# by looking them up in __globals__ or __builtins__
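`inspect.getclosurevars()` performs exactly this resolution, splitting a function's names into nonlocal, global, and builtin bindings:

```python
import inspect

LIMIT = 10  # module-level, so a global reference from inner()

def outer():
    step = 1                       # becomes a nonlocal (cell) for inner()
    def inner():
        return min(LIMIT, step)    # min: builtin, LIMIT: global, step: nonlocal
    return inner

cv = inspect.getclosurevars(outer())
assert cv.nonlocals == {"step": 1}
assert cv.globals == {"LIMIT": 10}
assert "min" in cv.builtins
```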
# -------------------------------------------------- stack frame extraction
# Since Python 3.10, the nth entry in code.co_positions() corresponds to the (2*n)th instruction
# FrameType.f_lineno is now a descriptor that grovels co_lnotab
# ------------------------------------------------ static version of getattr
# Normally we'd have to check whether the result of weakref_entry()
# is None here, in case the object the weakref is pointing to has died.
# In this specific case, however, we know that the only caller of this
# function is `_shadowed_dict()`, and that therefore this weakref is
# guaranteed to point to an object that is still alive.
# gh-118013: the inner function here is decorated with lru_cache for
# performance reasons, *but* make sure not to pass strong references
# to the items in the mro. Doing so can lead to unexpected memory
# consumption in cases where classes are dynamically created and
# destroyed, and the dynamically created classes happen to be the only
# objects that hold strong references to other objects that take up a
# significant amount of memory.
# for types we check the metaclass too
# ------------------------------------------------ generator introspection
# ------------------------------------------------ coroutine introspection
# ----------------------------------- asynchronous generator introspection
###############################################################################
### Function Signature Object (PEP 362)
# Once '__signature__' will be added to 'C'-level
# callables, this check won't be necessary
# If positional-only parameter is bound by partial,
# it effectively disappears from the signature
# This means that this parameter, and all parameters
# after it should be keyword-only (and var-positional
# should be removed). Here's why. Consider the following
# function:
#
#     def foo(a, b, c):
#         pass
#
# "partial(foo, a='spam')" will have the following
# signature: "(*, a='spam', b, c)". Because attempting
# to call that partial with "(10, 20)" arguments will
# raise a TypeError, saying that "a" argument received
# multiple values.
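The scenario described above is observable directly:

```python
import functools
import inspect

def foo(a, b, c):
    return (a, b, c)

p = functools.partial(foo, a="spam")
# 'a' was bound by keyword, so it and every parameter after it become
# keyword-only in the partial's signature.
assert str(inspect.signature(p)) == "(*, a='spam', b, c)"
try:
    p(10, 20)                     # would hand 'a' a second, positional value
    raise AssertionError("expected TypeError")
except TypeError:
    pass
assert p(b=1, c=2) == ("spam", 1, 2)
```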
# Set the new default value
# was passed as a positional argument
# Drop first parameter:
# '(p1, p2[, ...])' -> '(p2[, ...])'
# Unless we add a new parameter type we never
# get here
# It's a var-positional parameter.
# Do nothing. '(*args[, ...])' -> '(*args[, ...])'
# Can't test 'isinstance(type)' here, as it would
# also be True for regular python classes.
# Can't use the `in` operator here, as it would
# invoke the custom __eq__ method.
# All function-like objects are obviously callables,
# and not classes.
# Important to use _void ...
# ... and not None here
# token stream always starts with ENCODING token, skip it
# Support constant folding of a couple simple binary operations
# commonly used to define default values in text signatures
# non-keyword-only parameters
# *args
# **kwargs
# Possibly strip the bound argument:
# for builtins, self parameter is always positional-only!
# If it's not a pure Python function, and not a duck type
# of pure function:
# Parameter information.
# Non-keyword-only parameters w/o defaults.
# ... w/ defaults.
# Keyword-only parameters.
# If 'func' is a pure Python function, don't validate the
# parameters list (for correct order and defaults); it should be OK.
# In this case we skip the first parameter of the underlying
# function (usually `self` or `cls`).
# Was this function wrapped by a decorator?
# Unwrap until we find an explicit signature or a MethodType (which will be
# handled explicitly below).
# If the unwrapped object is a *method*, we might want to
# skip its first parameter (self).
# See test_signature_wrapped_bound_method for details.
# since __text_signature__ is not writable on classes, __signature__
# may contain text (or be a callable that returns text);
# if so, convert it
# Unbound partialmethod (see functools.partialmethod)
# This means, that we need to calculate the signature
# as if it's a regular partial object, but taking into
# account that the first positional argument
# (usually `self`, or `cls`) will not be passed
# automatically (as for boundmethods)
# First argument of the wrapped callable is `*args`, as in
# `partialmethod(lambda *args)`.
# If it's a pure Python function, or an object that is duck type
# of a Python function (Cython functions, for instance), then:
# obj is a class or a metaclass
# First, let's see if it has an overloaded __call__ defined
# in its metaclass
# NOTE: The user-defined method can be a function with a thin wrapper
# around object.__new__ (e.g., generated by `@warnings.deprecated`)
# Go through the MRO and see if any class has user-defined
# pure Python __new__ or __init__ method
# Now we check if the 'obj' class has an own '__new__' method
# or an own '__init__' method
# At this point we know that `obj` is a class with no user-defined
# '__init__', '__new__', or class-level '__call__'
# Since '__text_signature__' is implemented as a
# descriptor that extracts text signature from the
# class docstring, if 'obj' is derived from a builtin
# class, its own '__text_signature__' may be 'None'.
# Therefore, we go through the MRO (except the last
# class in there, which is 'object') to find the first
# class with non-empty text signature.
# If 'base' class has a __text_signature__ attribute:
# return a signature based on it
# No '__text_signature__' was found for the 'obj' class.
# Last option is to check if its '__init__' is
# object.__init__ or type.__init__.
# We have a class (not metaclass), but no user-defined
# __init__ or __new__ for it
# Return a signature of 'object' builtin.
# An object with __call__
# These are implicit arguments generated by comprehensions. In
# order to provide a friendlier interface to users, we recast
# their name as "implicitN" and treat them as positional-only.
# See issue 19611.
# It's possible for C functions to have a positional-only parameter
# where the name is a keyword, so for compatibility we'll allow it.
# Add annotation and default value
# We're done here. Other arguments
# will be mapped in 'BoundArguments.kwargs'
# plain argument
# plain keyword argument
# This BoundArguments was likely produced by
# Signature.bind_partial().
# No default for this parameter, but the
# previous parameter had a default
# There is a default for this parameter.
# Let's iterate through the positional arguments and corresponding
# parameters
# No more positional arguments
# No more parameters. That's it. Just need to check that
# we have no `kwargs` after this while loop
# That's OK, just empty *args.  Let's start parsing
# Raise a TypeError once we are sure there is no
# **kwargs param later.
# That's fine too - we have a default value for this
# parameter.  So, let's start parsing `kwargs`, starting
# with the current parameter
# No default, not VAR_KEYWORD, not VAR_POSITIONAL,
# not in `kwargs`
# We have a positional argument to process
# Looks like we have no parameter for this positional
# We have an '*args'-like argument, let's fill it with
# all positional arguments we have left and move on to
# the next phase
# Now, we iterate through the remaining parameters to process
# keyword arguments
# Memorize that we have a '**kwargs'-like parameter
# Named arguments don't refer to '*args'-like parameters.
# We only arrive here if the positional arguments ended
# before reaching the last parameter before *args.
# We have no value for this parameter.  It's fine though,
# if it has a default value, or it is an '*args'-like
# parameter, left alone by the processing of positional
# arguments.
# Process our '**kwargs'-like parameter
# It's not a positional-only parameter, and the flag
# is set to 'True' (there were pos-only params before.)
# OK, we have an '*args'-like parameter, so we won't need
# a '*' to separate keyword-only arguments
# We have a keyword-only parameter to render and we haven't
# rendered an '*args'-like parameter before, so add a '*'
# separator to the parameters list ("foo(arg1, *, arg2)" case)
# This condition should be only triggered once, so
# reset the flag
# There were only positional-only parameters, hence the
# flag was not reset to 'False'
# Redirect stdout and stderr to the Apple system log. This method is
# invoked by init_apple_streams() (initconfig.c) if config->use_system_logger
# is enabled.
# We want to emit one log message per line, so split
# the string before sending it to the superclass.
# Encode null bytes using "modified UTF-8" to avoid truncating the
# message. This should not affect the return value, as the caller
# may be expecting it to match the length of the input.
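The null-byte transformation mentioned above amounts to the overlong two-byte encoding used by "modified UTF-8" (a helper named here for illustration):

```python
def encode_null_safe(text: str) -> bytes:
    # "Modified UTF-8" writes NUL as the overlong two-byte form C0 80, so a
    # NUL inside the message cannot terminate the C string early. Note the
    # encoded length grows, which is why write() must still report the
    # length of the *input* text.
    return text.encode("utf-8").replace(b"\x00", b"\xc0\x80")

assert encode_null_safe("a\x00b") == b"a\xc0\x80b"
assert len(encode_null_safe("a\x00b")) == 4   # one byte longer than the input
```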
# Long option support added by Lars Wirzenius <liw@iki.fi>.
# Gerrit Holl <gerrit@nl.linux.org> moved the string-based exceptions
# to class-based exceptions.
# Peter Åstrand <astrand@lysator.liu.se> added gnu_getopt().
# TODO for gnu_getopt():
# - GNU getopt_long_only mechanism
# - allow the caller to specify ordering
# - RETURN_IN_ORDER option
# - GNU extension with '-' as first character of option string
# - optional arguments, specified by double colons
# - an option string with a W followed by semicolon should
#   treat -W foo as --foo
# Bootstrapping Python: gettext's dependencies not built yet
# Allow options after non-option arguments?
# No exact match, so better be unique.
# XXX since possibilities contains all valid continuations, might be
# nice to work them into the error msg
# XML support
# XML 'header'
# Regex to find any control chars, except for \t \n and \r
# copied from base64.encodebytes(), with added maxlinelength argument
# Contents should conform to a subset of ISO 8601
# (in particular, YYYY '-' MM '-' DD 'T' HH ':' MM ':' SS 'Z'.  Smaller units
# may be omitted with a loss of precision.)
# convert DOS line endings
# convert Mac line endings
# escape '&'
# escape '<'
# escape '>'
# Reject plist files with entity declarations to avoid XML vulnerabilities in expat.
# Regular plist files don't contain those declarations, and Apple's plutil tool does not
# accept them either.
# this is the root object
# element handlers
# plist has fixed encoding of utf-8
# XXX: is this test needed?
# Also check for alternative XML encodings, this is slightly
# overkill because the Apple tools (and plistlib) will not
# generate files with these encodings.
# expat does not support utf-32
#(codecs.BOM_UTF32_BE, "utf-32-be"),
#(codecs.BOM_UTF32_LE, "utf-32-le"),
# Binary Plist
# The basic file format:
# HEADER
# object...
# refid->offset...
# TRAILER
# The referenced source code also mentions URL (0x0c, 0x0d) and
# UUID (0x0e), but neither can be generated using the Cocoa libraries.
# int
# real
# date
# timestamp 0 of binary plists corresponds to 1/1/2001
# (year of Mac OS X 10.0), instead of 1/1/1970.
# data
# ascii string
# unicode string
# UID
# used by Key-Archiver plist files
# array
# tokenH == 0xB0 is documented as 'ordset', but is not actually
# implemented in the Apple reference code.
# tokenH == 0xC0 is documented as 'set', but sets cannot be used in
# plists.
# dict
# Flattened object list:
# Mappings from object->objectid
# First dict has (type(object), object) as the key,
# second dict is used when object is not hashable and
# has id(object) as the key.
# Create list of all objects in the plist
# Size of object references in serialized containers
# depends on the number of objects in the plist.
# Write file header
# Write object list
# Write refnum->object offset table
# Write trailer
# First check if the object is in the object table, not used for
# containers to ensure that two subcontainers with the same contents
# will be serialized as distinct values.
# Add to objectreference map
# And finally recurse into containers
# Generic bits
# This module is used to map the old Python 2 names to the new names used in
# Python 3 for the pickle module.  This is needed to make pickle streams
# generated with Python 2 loadable by Python 3.
# This is a copy of lib2to3.fixes.fix_imports.MAPPING.  We cannot import
# lib2to3 and use the mapping defined there, because lib2to3 uses pickle.
# Thus, this could cause the module to be imported recursively.
# This contains rename rules that are easy to handle.  We ignore the more
# complex stuff (e.g. mapping the names in the urllib and types modules).
# These rules should be run before import names are fixed.
# StandardError is gone in Python 3, so we map it to Exception
# Same, but for 3.x to 2.x
# Non-mutual mappings.
# For compatibility with broken pickles saved in old Python 3 versions
# Try importing the _locale module.
# If this fails, fall back on a basic 'C' locale emulation.
# Yuck:  LC_MESSAGES is non-standard:  can't tell whether it exists before
# trying the import.  So __all__ is also fiddled at the end of the file.
# Locale emulation
# 'C' locale default values
# These may or may not exist in _locale, so be sure to set them.
# With this dict, you can override some items of localeconv's return value.
# This is useful for testing purposes.
### Number formatting APIs
# Author: Martin von Loewis
# improved by Georg Brandl
# Iterate over grouping intervals
# if grouping is -1, we are done
# 0: re-use last group ad infinitum
# perform the grouping from right to left
# only non-digit characters remain (sign, spaces)
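The grouping rules above (-1, i.e. CHAR_MAX, means "stop"; 0 means "repeat the last size") can be sketched with a standalone helper (a simplified illustration, not the module's internal function):

```python
def group_digits(digits: str, grouping: list[int], sep: str) -> str:
    """Apply locale-style grouping right to left.

    `grouping` entries are group sizes; 0 repeats the last size
    indefinitely, and -1 (locale.CHAR_MAX) stops grouping entirely.
    """
    if not grouping:
        return digits
    parts = []
    i = 0
    size = grouping[0]
    while digits:
        if i < len(grouping):
            if grouping[i] == -1:       # CHAR_MAX: no further grouping
                parts.append(digits)
                digits = ""
                break
            if grouping[i] != 0:        # 0: re-use last group size
                size = grouping[i]
            i += 1
        parts.append(digits[-size:])    # peel one group off the right
        digits = digits[:-size]
    return sep.join(reversed(parts))

assert group_digits("1234567", [3, 0], ",") == "1,234,567"
assert group_digits("1234567", [3, -1], ",") == "1234,567"
```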
# Strip a given amount of excess padding from the given string
# Transform formatted as locale number according to the locale settings
# floats and decimal ints need special action!
# check for illegal values
# '<' and '>' are markers if the sign must be inserted between symbol and value
# the default if nothing specified;
# this should be the most fitting sign position
# First, get rid of the grouping
# next, replace the decimal point with a dot
# do grouping
# standard formatting
### Locale name aliasing engine
# Author: Marc-Andre Lemburg, mal@lemburg.com
# Various tweaks by Fredrik Lundh <fredrik@pythonware.com>
# store away the low-level version of setlocale (it's
# overridden below)
# Convert the encoding to a C lib compatible encoding string
#print('norm encoding: %r' % norm_encoding)
#print('aliased encoding: %r' % norm_encoding)
#print('found encoding %r' % encoding)
# Normalize the locale name and extract the encoding and modifier
# ':' is sometimes used as encoding delimiter.
# First lookup: fullname (possibly with encoding and modifier)
#print('first lookup failed')
# Second try: fullname without modifier (possibly with encoding)
#print('lookup without modifier succeeded')
#print('second lookup failed')
# Third try: langname (without encoding, possibly with modifier)
#print('lookup without encoding succeeded')
# Fourth try: langname (without encoding and modifier)
#print('lookup without modifier and encoding succeeded')
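The lookup cascade is observable through `locale.normalize()` (assertions below only check prefixes, since the canonical encoding chosen by the alias table can vary between Python versions):

```python
import locale

# Full-name lookup: the alias table supplies a canonical encoding.
assert locale.normalize("en_US").startswith("en_US")
# A modifier such as @euro survives the cascade (Latin-9 assumed, as
# the comments above note).
assert locale.normalize("de_DE@euro").startswith("de_DE")
```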
# Deal with locale modifiers
# Assume Latin-9 for @euro locales. This is bogus,
# since some systems may use other encodings for these
# locales. Also, we ignore other modifiers.
# On macOS "LC_CTYPE=UTF-8" is a valid locale setting
# for getting UTF-8 handling for text.
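The four-step fallback lookup described above can be sketched against a hypothetical alias dictionary (lowercase keys, as locale.normalize() uses internally); this is an illustration of the chain, not the stdlib routine itself:

```python
def lookup_locale(fullname, alias):
    """Try: fullname (with encoding and modifier), fullname without
    modifier, langname with modifier, then bare langname."""
    name, _, modifier = fullname.partition("@")
    lang = name.split(".", 1)[0]
    candidates = [fullname, name]
    if modifier:
        candidates.append(lang + "@" + modifier)
    candidates.append(lang)
    for key in candidates:
        if key in alias:
            return alias[key]
    return fullname        # unknown locale: return unchanged
```

For example, with only `{"en_us": ...}` in the table, `"en_us.iso88591@euro"` still resolves via the fourth step.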
# check if it's supported by the _locale module
# make sure the code/encoding values are valid
# map windows language identifier to language name
# ...add other platform-specific processing here, if
# necessary...
# fall back on POSIX behaviour
# convert to string
# When _locale.getencoding() is missing, locale.getencoding() uses the
# Python filesystem encoding.
# On Unix, if CODESET is available, use that.
### Database
# The following data was extracted from the locale.alias file which
# comes with X11 and then hand edited removing the explicit encoding
# definitions and adding some more aliases. The file is usually
# available as /usr/lib/X11/locale/locale.alias.
# The local_encoding_alias table maps lowercase encoding alias names
# to C locale encoding names (case-sensitive). Note that normalize()
# first looks up the encoding in the encodings.aliases dictionary and
# then applies this mapping to find the correct C lib name for the
# encoding.
# Mappings for non-standard encoding names used in locale names
# Mappings from Python codec names to C lib encoding names
# XXX This list is still incomplete. If you know more
# mappings, please file a bug report. Thanks.
# The locale_alias table maps lowercase alias names to C locale names
# (case-sensitive). Encodings are always separated from the locale
# name using a dot ('.'); they should only be given in case the
# language name is needed to interpret the given encoding alias
# correctly (CJK codes often have this need).
# Note that the normalize() function which uses this table
# removes '_' and '-' characters from the encoding part of the
# locale name before doing the lookup. This saves a lot of
# space in the table.
# MAL 2004-12-10:
# Updated alias mapping to most recent locale.alias file
# from X.org distribution using makelocalealias.py.
# These are the differences compared to the old mapping (Python 2.4
# and older):
# MAL 2008-05-30:
# These are the differences compared to the old mapping (Python 2.5
# AP 2010-04-12:
# These are the differences compared to the old mapping (Python 2.6.5
# SS 2013-12-20:
# These are the differences compared to the old mapping (Python 3.3.3
# SS 2014-10-01:
# Updated alias mapping with glibc 2.19 supported locales.
# SS 2018-05-05:
# Updated alias mapping with glibc 2.27 supported locales.
# These are the differences compared to the old mapping (Python 3.6.5
# SS 2025-02-04:
# Updated alias mapping with glibc 2.41 supported locales and the latest
# X lib alias mapping.
# These are the differences compared to the old mapping (Python 3.13.1
# SS 2025-06-10:
# Remove 'c.utf8' -> 'en_US.UTF-8' because 'en_US.UTF-8' does not exist
# on all platforms.
# This maps Windows language identifiers to locale strings.
# This list has been updated from
# http://msdn.microsoft.com/library/default.asp?url=/library/en-us/intl/nls_238z.asp
# to include every locale up to Windows Vista.
# NOTE: this mapping is incomplete.  If your language is missing, please
# submit a bug report as detailed in the Python devguide at:
# Make sure you include the missing language identifier and the suggested
# locale code.
# Afrikaans
# Albanian
# Alsatian - France
# Amharic - Ethiopia
# Arabic - Saudi Arabia
# Arabic - Iraq
# Arabic - Egypt
# Arabic - Libya
# Arabic - Algeria
# Arabic - Morocco
# Arabic - Tunisia
# Arabic - Oman
# Arabic - Yemen
# Arabic - Syria
# Arabic - Jordan
# Arabic - Lebanon
# Arabic - Kuwait
# Arabic - United Arab Emirates
# Arabic - Bahrain
# Arabic - Qatar
# Armenian
# Assamese - India
# Azeri - Latin
# Azeri - Cyrillic
# Bashkir
# Basque
# Belarusian
# Bengali
# Bosnian - Cyrillic
# Bosnian - Latin
# Breton - France
# Bulgarian
# Catalan
# Chinese - Simplified
# Chinese - Taiwan
# Chinese - PRC
# Chinese - Hong Kong S.A.R.
# Chinese - Singapore
# Chinese - Macao S.A.R.
# Chinese - Traditional
# Corsican - France
# Croatian
# Croatian - Bosnia
# Czech
# Danish
# Dari - Afghanistan
# Divehi - Maldives
# Dutch - The Netherlands
# Dutch - Belgium
# English - United States
# English - United Kingdom
# English - Australia
# English - Canada
# English - New Zealand
# English - Ireland
# English - South Africa
# English - Jamaica
# English - Caribbean
# English - Belize
# English - Trinidad
# English - Zimbabwe
# English - Philippines
# English - India
# English - Malaysia
# English - Singapore
# Estonian
# Faroese
# Filipino
# Finnish
# French - France
# French - Belgium
# French - Canada
# French - Switzerland
# French - Luxembourg
# French - Monaco
# Frisian - Netherlands
# Galician
# Georgian
# German - Germany
# German - Switzerland
# German - Austria
# German - Luxembourg
# German - Liechtenstein
# Greek
# Greenlandic - Greenland
# Gujarati
# Hausa - Latin
# Hebrew
# Hindi
# Hungarian
# Icelandic
# Indonesian
# Inuktitut - Syllabics
# Inuktitut - Latin
# Irish - Ireland
# Italian - Italy
# Italian - Switzerland
# Japanese
# Kannada - India
# Kazakh
# Khmer - Cambodia
# K'iche - Guatemala
# Kinyarwanda - Rwanda
# Konkani
# Korean
# Kyrgyz
# Lao - Lao PDR
# Latvian
# Lithuanian
# Lower Sorbian - Germany
# Luxembourgish
# FYROM Macedonian
# Malay - Malaysia
# Malay - Brunei Darussalam
# Malayalam - India
# Maltese
# Maori
# Mapudungun
# Marathi
# Mohawk - Canada
# Mongolian - Cyrillic
# Mongolian - PRC
# Nepali
# Norwegian - Bokmal
# Norwegian - Nynorsk
# Occitan - France
# Oriya - India
# Pashto - Afghanistan
# Persian
# Polish
# Portuguese - Brazil
# Portuguese - Portugal
# Punjabi
# Quechua (Bolivia)
# Quechua (Ecuador)
# Quechua (Peru)
# Romanian - Romania
# Romansh
# Russian
# Sami Finland
# Sami Norway
# Sami Sweden
# Sami Northern Norway
# Sami Northern Sweden
# Sami Northern Finland
# Sami Skolt
# Sami Southern Norway
# Sami Southern Sweden
# Sanskrit
# Serbian - Cyrillic
# Serbian - Bosnia Cyrillic
# Serbian - Latin
# Serbian - Bosnia Latin
# Sinhala - Sri Lanka
# Northern Sotho
# Setswana - Southern Africa
# Slovak
# Slovenian
# Spanish - Spain
# Spanish - Mexico
# Spanish - Spain (Modern)
# Spanish - Guatemala
# Spanish - Costa Rica
# Spanish - Panama
# Spanish - Dominican Republic
# Spanish - Venezuela
# Spanish - Colombia
# Spanish - Peru
# Spanish - Argentina
# Spanish - Ecuador
# Spanish - Chile
# Spanish - Uruguay
# Spanish - Paraguay
# Spanish - Bolivia
# Spanish - El Salvador
# Spanish - Honduras
# Spanish - Nicaragua
# Spanish - Puerto Rico
# Spanish - United States
# Swahili
# Swedish - Sweden
# Swedish - Finland
# Syriac
# Tajik - Cyrillic
# Tamazight - Latin
# Tamil
# Tatar
# Telugu
# Thai
# Tibetan - Bhutan
# Tibetan - PRC
# Turkish
# Turkmen - Cyrillic
# Uighur - Arabic
# Ukrainian
# Upper Sorbian - Germany
# Urdu
# Urdu - India
# Uzbek - Latin
# Uzbek - Cyrillic
# Vietnamese
# Welsh
# Wolof - Senegal
# Xhosa - South Africa
# Yakut - Cyrillic
# Yi - PRC
# Yoruba - Nigeria
# Zulu
# Iterators in Python aren't a matter of type but of protocol.  A large
# and changing number of builtin types implement *some* flavor of
# iterator.  Don't check the type!  Use hasattr to check for both
# "__iter__" and "__next__" attributes instead.
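The protocol check the note above recommends is a one-liner; a minimal sketch (the helper name is hypothetical):

```python
def looks_like_iterator(obj):
    """Per the note above: don't test the type, test for both
    __iter__ and __next__ attributes."""
    return hasattr(obj, "__iter__") and hasattr(obj, "__next__")
```

List iterators and generators pass; a plain list (iterable but not an iterator) does not.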
# Same as FunctionType
# Same as BuiltinFunctionType
# Not for export
# Provide a PEP 3115 compliant mechanism for class creation
# Don't alter the provided mapping
# when meta is a type, we first determine the most-derived metaclass
# instead of invoking the initial candidate directly
# next two lines make DynamicClassAttribute act the same as property
# support for abstract methods
# TODO: Implement this in C.
# Check if 'func' is a coroutine function.
# (0x180 == CO_COROUTINE | CO_ITERABLE_COROUTINE)
# Check if 'func' is a generator function.
# (0x20 == CO_GENERATOR)
# 0x100 == CO_ITERABLE_COROUTINE
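The flag checks described above can be observed directly on `__code__.co_flags`; a small sketch using the same constants (0x20, 0x80, 0x100):

```python
CO_GENERATOR = 0x20
CO_COROUTINE = 0x80
CO_ITERABLE_COROUTINE = 0x100

def gen():
    yield

async def coro():
    pass

# A generator function carries CO_GENERATOR; an async def function
# carries CO_COROUTINE, which is covered by the 0x180 mask.
assert gen.__code__.co_flags & CO_GENERATOR
assert coro.__code__.co_flags & (CO_COROUTINE | CO_ITERABLE_COROUTINE)
assert not gen.__code__.co_flags & (CO_COROUTINE | CO_ITERABLE_COROUTINE)
```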
# The following code is primarily to support functions that
# return generator-like objects (for instance generators
# compiled with Cython).
# Delay functools and _collections_abc import for speeding up types import.
# 'coro' is a native coroutine object or an iterable coroutine
# 'coro' is either a pure Python generator iterator, or it
# implements collections.abc.Generator (and does not implement
# collections.abc.Coroutine).
# 'coro' is either an instance of collections.abc.Coroutine or
# some other object -- pass it through.
# Issue 19330: ensure context manager instances have good docstrings
# Unfortunately, this still doesn't provide good help output when
# inspecting the created context manager instances, since pydoc
# currently bypasses the instance docstring and shows the docstring
# for the class instead.
# See http://bugs.python.org/issue19404 for more details.
# _GCMB instances are one-shot context managers, so the
# CM must be recreated each time a decorated function is
# called
# do not keep args and kwds alive unnecessarily
# they are only needed for recreation, which is not possible anymore
# Need to force instantiation so we can reliably
# tell if we get the same exception back
# Suppress StopIteration *unless* it's the same exception that
# was passed to throw().  This prevents a StopIteration
# raised inside the "with" statement from being suppressed.
# Don't re-raise the passed in exception. (issue27122)
# Avoid suppressing if a StopIteration exception
# was passed to throw() and later wrapped into a RuntimeError
# (see PEP 479 for sync generators; async generators also
# have this behavior). But do this only if the exception wrapped
# by the RuntimeError is actually Stop(Async)Iteration (see
# issue29692).
# only re-raise if it's *not* the exception that was
# passed to throw(), because __exit__() must not raise
# an exception unless __exit__() itself failed.  But throw()
# has to raise the exception to signal propagation, so this
# fixes the impedance mismatch between the throw() protocol
# and the __exit__() protocol.
# Avoid suppressing if a Stop(Async)Iteration exception
# was passed to athrow() and later wrapped into a RuntimeError
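The throw()/__exit__() interaction described above is observable from user code: an exception raised in the with body is thrown into the generator at the yield point, and re-raising it there lets __exit__() signal propagation. A small demonstration:

```python
from contextlib import contextmanager

log = []

@contextmanager
def managed():
    try:
        yield "resource"
    except KeyError:
        # The with-body exception arrives here via gen.throw()
        log.append("saw KeyError")
        raise    # re-raise so __exit__() does not suppress it

try:
    with managed():
        raise KeyError("boom")
except KeyError:
    log.append("propagated")
```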
# We use a list of old targets to make this CM re-entrant
# Unlike isinstance and issubclass, CPython exception handling
# currently only looks at the concrete type hierarchy (ignoring
# the instance and subclass checking hooks). While Guido considers
# that a bug rather than a feature, it's a fairly hard one to fix
# due to various internal implementation details. suppress provides
# the simpler issubclass based semantics, rather than trying to
# exactly reproduce the limitations of the CPython interpreter.
# See http://bugs.python.org/issue12029 for more details
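The issubclass-based semantics mentioned above mean suppressing a base class also swallows its subclasses; a quick demonstration (the path is deliberately bogus):

```python
from contextlib import suppress

# FileNotFoundError is an OSError subclass, so suppressing the base
# class swallows it too.
with suppress(OSError):
    open("/this/path/should/not/exist-for-demo")

reached = True   # reaching this line means the error was suppressed
```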
# We use an unbound method rather than a bound method to follow
# the standard lookup behaviour for special methods.
# Not a context manager, so assume it's a callable.
# Allow use as a decorator.
# We look up the special methods on the type to match the with
# statement.
# We changed the signature, so using @wraps is not appropriate, but
# setting __wrapped__ may still help with introspection.
# Allow use as a decorator
# Inspired by discussions on http://bugs.python.org/issue13585
# We manipulate the exception state so it behaves as though
# we were actually nesting multiple with statements
# Context may not be correct, so find the end of the chain
# Context is already set correctly (see issue 20317)
# Change the end of the chain to point to the exception
# we expect it to reference
# Callbacks are invoked in LIFO order to match the behaviour of
# nested context managers
# simulate the stack of exceptions by setting the context
# bare "raise exc" replaces our carefully
# set-up context
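The LIFO unwind order described above is easy to verify with ExitStack callbacks:

```python
from contextlib import ExitStack

order = []
with ExitStack() as stack:
    stack.callback(order.append, "first registered")
    stack.callback(order.append, "second registered")
# Callbacks run in LIFO order, matching nested with statements:
# the last-registered callback fires first.
```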
# Inspired by discussions on https://bugs.python.org/issue29302
# Not an async context manager, so assume it's a coroutine function
# Derived from uuid.UUID("00000001-0000-0010-8000-00aa00389b71").bytes_le
# whether to align to word (2-byte) boundaries
# subtract header
# maybe fix alignment
# else, assume it is an open file object already
# User visible methods.
# Internal methods.
# Read the entire UUID from the chunk
# pragma: nocover
# It is relatively expensive to construct new timedelta objects, and in most
# cases we're looking at the same deltas, like integer numbers of hours, etc.
# To improve speed and memory use, we'll keep a dictionary with references
# to the ones we've already used so far.
# Loading every time zone in the 2020a version of the time zone database
# requires 447 timedeltas, which requires approximately the amount of space
# that ZoneInfo("America/New_York") with 236 transitions takes up, so we will
# set the cache size to 512 so that in the common case we always get cache
# hits, but specifically crafted ZoneInfo objects don't leak arbitrary amounts
# of memory.
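The bounded interning described above is essentially an `lru_cache` around the timedelta constructor; a minimal sketch of the pattern (the helper name is illustrative):

```python
from datetime import timedelta
from functools import lru_cache

@lru_cache(maxsize=512)
def load_timedelta(seconds):
    # Identical inputs return the identical cached object, while the
    # maxsize bound keeps crafted inputs from leaking memory.
    return timedelta(seconds=seconds)
```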
# Update the "strong" cache
# Disable pickling for objects created from files
# Detect fold
# idx is the transition that occurs after this timestamp, so we
# subtract off 1 to get the current ttinfo
# Retrieve all the data as it exists in the zoneinfo file
# Infer the DST offsets (needed for .dst()) from the data
# Convert all the transition times (UTC) into "seconds since 1970-01-01 local time"
# Construct `_ttinfo` objects for each transition in the file
# Find the first non-DST transition
# Set the "fallback" time zone
# Determine if this is a "fixed offset" zone, meaning that the output
# of the utcoffset, dst and tzname functions does not depend on the
# specific datetime passed.
# We make three simplifying assumptions here:
# 1. If _tz_after is not a _ttinfo, it has transitions that might
#    actually occur.
# 2. If _ttinfo_list contains more than one _ttinfo object, the objects
#    represent different offsets.
# 3. _ttinfo_list contains no unused _ttinfos (in which case an
#    otherwise fixed-offset zone with extra _ttinfos defined may appear
#    to *not* be a fixed offset zone).
# Violations to these assumptions would be fairly exotic, and exotic
# zones should almost certainly not be used with datetime.time (the
# only thing that would be affected by this).
# Now we must transform our ttis and abbrs into `_ttinfo` objects,
# but there is an issue: .dst() must return a timedelta with the
# difference between utcoffset() and the "standard" offset, but
# the "base offset" and "DST offset" are not encoded in the file;
# we can infer what they are from the isdst flag, but it is not
# sufficient to just look at the last standard offset, because
# occasionally countries will shift both DST offset and base offset.
# Provisionally assign all to 0.
# We're only going to look at daylight saving time
# Skip any offsets that have already been assigned
# If the following transition is also DST and we couldn't
# find the DST offset by this point, we're going to have to
# skip it and hope this transition gets assigned later
# If we didn't find a valid value for a given index, we'll end up
# with dstoff = 0 for something where `isdst=1`. This is obviously
# wrong - one hour will be a much better guess than 0
# Start with the timestamps and modify in-place
# These are assertions because the constructor should only be called
# by functions that would fail before passing start or end
# With fold = 0, the period (denominated in local time) with the
# smaller offset starts at the end of the gap and ends at the end of
# the fold; with fold = 1, it runs from the start of the gap to the
# beginning of the fold.
# So in order to determine the DST boundaries we need to know both
# the fold and whether DST is positive or negative (rare), and it
# turns out that this boils down to fold XOR is_positive.
# For positive DST, the ambiguous period is one dst_diff after the end
# of DST; for negative DST, the ambiguous period is one dst_diff before
# the start of DST.
# convert bool to int
# TODO: These are not actually epoch dates as they are expressed in local time
# We know year and month, we need to convert w, d into day of month
# Week 1 is the first week in which day `d` (where 0 = Sunday) appears.
# Week 5 represents the last occurrence of day `d`, so we need to know
# the range of the month.
# This equation seems magical, so I'll break it down:
# 1. calendar says 0 = Monday, POSIX says 0 = Sunday
# 2. Get first day - desired day mod 7: -1 % 7 = 6, so we don't need
#    to adjust negative numbers.
# 3. Add 1 because month days are a 1-based index.
# Now use a 0-based index version of `w` to calculate the w-th
# occurrence of `d`
# month_day will only be > days_in_month if w was 5, and `w` means
# "last occurrence of `d`", so now we just check if we over-shot the
# end of the month and if so knock off 1 week.
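The "magical" equation above can be written out as a standalone sketch (hypothetical helper, following the same calendar/POSIX weekday conventions):

```python
import calendar

def nth_weekday_of_month(year, m, w, d):
    """Day of month for the w-th occurrence of weekday d (POSIX:
    0 = Sunday) in month m; w == 5 means the last occurrence."""
    first_weekday, days_in_month = calendar.monthrange(year, m)
    # calendar uses 0 = Monday, POSIX uses 0 = Sunday, hence the +1;
    # a negative difference mod 7 is already in range (-1 % 7 == 6)
    month_day = (d - (first_weekday + 1)) % 7 + 1   # 1-based day index
    month_day += (w - 1) * 7                        # w-th occurrence
    if month_day > days_in_month:                   # w == 5 over-shot
        month_day -= 7
    return month_day
```

For example, US DST in 2020 began on the second Sunday of March, i.e. March 8; the EU's "last Sunday of March" in 2021 was March 28.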
# The tz string has the format:
# std[offset[dst[offset],start[/time],end[/time]]]
# std and dst must be 3 or more characters long and must not contain
# a leading colon, embedded digits, commas, or plus or minus signs;
# The spaces between "std" and "offset" are only for display and are
# not actually present in the string.
# The format of the offset is ``[+|-]hh[:mm[:ss]]``
# This is a static ttinfo, don't return _TZStr
# Anything passed to this function should already have hit an equivalent
# regular expression to find the section to parse.
# Yes, +5 maps to an offset of -5h
# We need the `_reset_tzpath` helper function because it produces a warning
# and is used as both a module-level call and a public API.
# This is how we equalize the stacklevel for both calls.
# If anything has been filtered out, we will warn about it
# We only care about the kinds of path normalizations that would change the
# length of the key - e.g. a/../b -> a/b, or a/b/ -> a/b. On Windows,
# normpath will also change from a/b to a\b, but that would still preserve
# the length.
# Start with loading from the tzdata package if it exists: this has a
# pre-assembled list of zones that only requires opening one file.
# right/ and posix/ are special directories and shouldn't be
# included in the output of available zones
# posixrules is a special symlink-only time zone; where it exists, it
# should not be included in the output
# gh-85702: Prevent PermissionError on Windows
# There are four types of exception that can be raised that all amount
# to "we cannot find this key":
# ImportError: If package_name doesn't exist (e.g. if tzdata is not
#    installed)
# FileNotFoundError: If resource_name doesn't exist in the package
# UnicodeEncodeError: If package_name or resource_name are not UTF-8
# IsADirectoryError: If package_name is given without a resource_name.
# Version 2+ has 64-bit integer transition times
# Version 2+ also starts with a Version 1 header and data, which
# we need to skip now
# Transition times and types
# Local time type records
# Time zone designations
# Leap second records
# Standard/wall indicators
# UT/local indicators
# Now we need to read the second header, which is not the same
# as the first
# The data portion starts with timecnt transitions and indices
# Read the ttinfo struct, (utoff, isdst, abbrind)
# Now read the abbreviations. They are null-terminated strings, indexed
# not by position in the array but by position in the unsplit
# abbreviation string. I suppose this makes more sense in C, which uses
# null to terminate the strings, but it's inconvenient here...
# Gets a string starting at idx and running until the next \x00
# We cannot pre-populate abbr_vals by splitting on \x00 because there
# are some zones that use subsets of longer abbreviations, like so:
#
#  LMT\x00AHST\x00HDT\x00
#
# Where the idx to abbr mapping should be:
# {0: "LMT", 4: "AHST", 5: "HST", 9: "HDT"}
# The remainder of the file consists of leap seconds (currently unused) and
# the standard/wall and ut/local indicators, which are metadata we don't need.
# In version 2 files, we need to skip the unnecessary data to get at the TZ string:
# Each leap second record has size (time_size + 4)
# Should be \n
# The header starts with a 4-byte "magic" value
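The header layout described above (4-byte magic, version byte, 15 reserved bytes, then six big-endian 32-bit counts, per RFC 8536) can be sketched with `struct`; this is an illustrative parser, not the stdlib's:

```python
import struct

def parse_tzif_header(data):
    """Parse the 44-byte TZif header and return a few of the counts."""
    if data[:4] != b"TZif":
        raise ValueError("not a TZif file")
    version = data[4:5]                       # b"\x00", b"2" or b"3"
    counts = struct.unpack(">6l", data[20:44])
    isutcnt, isstdcnt, leapcnt, timecnt, typecnt, charcnt = counts
    return version, timecnt, typecnt, charcnt
```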
# Slots are defined in the order that the bytes are arranged
#!/usr/bin/python3.13
# -*- python -*-
# Keep this script in sync with python-config.sh.in
# add the prefix/lib/pythonX.Y/config dir, but only if there is no
# shared library in prefix/lib/.
# Please leave this code so that it runs under older versions of
# Python 3 (no f-strings).  That will allow benchmarking for
# cross-version comparisons.  To run the benchmark on Python 2,
# comment-out the nonlocal reads and writes.
# Generate lines from fileiter.  If whilematch is true, continue reading
# while the regexp object pat matches line.  If whilematch is false, lines
# are read so long as pat doesn't match them.  In any case, the first line
# that doesn't match pat (when whilematch is true), or that does match pat
# (when whilematch is false), is lost, and fileiter will resume at the line
# following it.
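The generator described above can be sketched in a few lines (hypothetical `getlines`, matching the while/until semantics and the "first non-conforming line is lost" behavior):

```python
import re

def getlines(fileiter, pat, whilematch):
    """Yield lines while `pat` matches (whilematch=True) or until it
    matches (whilematch=False); the first non-conforming line is
    consumed and lost, and fileiter resumes at the line after it."""
    for line in fileiter:
        if bool(pat.match(line)) == whilematch:
            yield line
        else:
            break
```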
# guts is type name here
# NOTE: Bytecode introspection modules (opcode, dis, etc.) should only
# be imported when loading a single dataset. When comparing datasets,
# these modules could get the mapping wrong, leading to subtle errors.
# TODO: Check for parity
# Hack to handle older data files where some uops
# are missing an underscore prefix in their name
# Join using the first column as a key
# Join using the first column as a key, and indicate the change in the
# second column of each input table as a new column
# Join using the first column as a key, indicating the change in the second
# column of each input table as a new column, and omit all other columns
# Join using the first column as a key, and indicate the change as a new
# column, but don't sort by the amount of change.
# To preserve ordering, use A's keys as is and then add any in B that
# aren't in A
# Don't include any zero entries at the end
# Determine threshold for switching from longobject.c divmod to
# _pylong.int_divmod().
# Data generation
# Shuffle it a bit...
# Do 3 random exchanges.
# Replace the last 10 with random floats.
# Replace 1% of the elements at random.
# Arrange for lots of duplicates.
# Force the elements to be distinct objects, else timings can be
# artificially low.
# All equal.  Again, force the elements to be distinct objects.
# This one looks like [3, 2, 1, 0, 0, 1, 2, 3].  It was a bad case
# for an older implementation of quicksort, which used the median
# of the first, last and middle elements as the pivot.
# Force to float, so that the timings are comparable.  This is
# significantly faster if we leave them as ints.
# =========
# Benchmark
# Benching this method!
# This needs the third-party `pyperf` library:
# Written by Martin v. Löwis <loewis@informatik.hu-berlin.de>
# the keys are sorted in the .mo file
# For each string, we need size and file offset.  Each string is NUL
# terminated; the NUL does not count into the size.
# The header is 7 32-bit unsigned integers.  We don't use hash tables, so
# the keys start right after the index tables.
# translated string.
# and the values start after the keys
# The string table first has the list of keys, then the list of values.
# Each entry has first the size of the string, then the file offset.
# Magic
# Version
# # of entries
# start of key index
# start of value index
# size and offset of hash table
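The 7-word header described above can be packed with `struct`; here is a sketch for a hypothetical empty catalog (no hash table, so the key index starts right after the header and the value index right after the key index):

```python
import struct

keys = []   # an empty catalog for illustration
header = struct.pack("Iiiiiii",
                     0x950412de,             # magic
                     0,                      # version
                     len(keys),              # number of entries
                     7 * 4,                  # start of key index
                     7 * 4 + len(keys) * 8,  # start of value index
                     0, 0)                   # size and offset of hash table
```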
# Compute .mo name from .po name and arguments
# Start off assuming Latin-1, so everything decodes without failure,
# until we know the exact encoding
# Parse the catalog
# If we get a comment line after a msgstr, this is a new entry
# Record a fuzzy mark
# Skip comments
# Now we are in a msgid or msgctxt section, output previous section
# Filter out POT-Creation-Date
# See issue #131852
# See whether there is an encoding declaration
# This is a message with plural forms
# separator of singular and plural
# Now we are in a msgstr section
# Separator of the various plural forms
# Skip empty lines
# Add last entry
# Compute output
# parse options
# do it
### Codec APIs
### encodings module API
### Decoding Table
### Encoding table
# iso2022_jp.py: Python Unicode Codec for ISO2022_JP
# Written by Hye-Shik Chang <perky@FreeBSD.org>
### Decoding Map
### Encoding Map
# shift_jis_2004.py: Python Unicode Codec for SHIFT_JIS_2004
# Cache lookup
# Import the module:
# First try to find an alias for the normalized encoding
# name and lookup the module using the aliased name, then try to
# lookup the module using the standard import scheme, i.e. first
# try in the encodings package, then at top-level.
# Import is absolute to prevent the possibly malicious import of a
# module with side-effects that is not in the 'encodings' package.
# ImportError may occur because 'encodings.(modname)' does not exist,
# or because it imports a name that does not exist (see mbcs and oem)
# Not a codec module
# Cache misses
# Now ask the module for the registry entry
# Cache the codec registry entry
# Register its aliases (without overwriting previously registered
# aliases)
# Return the registry entry
# Register the search_function in the Python codec registry
# bpo-671666, bpo-46668: If Python does not implement a codec for current
# Windows ANSI code page, use the "mbcs" codec instead:
# WideCharToMultiByte() and MultiByteToWideChar() functions with CP_ACP.
# Python does not support custom code pages.
# Imports may fail while we are shutting down
# not enough data to decide if this really is a BOM
# => try again on the next call
# state[1] must be 0 here, as it isn't passed along to the caller
# state[1] will be ignored by BufferedIncrementalDecoder.setstate()
# not enough data to decide if this is a BOM
# (else) no BOM present
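The two-byte BOM sniffing described above can be sketched as a pure function (hypothetical helper using the `codecs` BOM constants); with fewer than two bytes the decision is deferred to the next call:

```python
import codecs

def sniff_utf16_bom(prefix):
    """Return (codec name or None, number of BOM bytes consumed)."""
    if len(prefix) < 2:
        return None, 0                # not enough data; try again later
    if prefix[:2] == codecs.BOM_UTF16_LE:
        return "utf-16-le", 2
    if prefix[:2] == codecs.BOM_UTF16_BE:
        return "utf-16-be", 2
    return None, 0                    # no BOM present
```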
# cp949.py: Python Unicode Codec for CP949
# This module implements the RFCs 3490 (IDNA) and 3491 (Nameprep)
# IDNA section 3.1
# IDNA section 5
# This assumes query strings, so AllowUnassigned is true
# type: (str) -> str
# Map
# Map to nothing
# Normalize
# Prohibit
# Check bidi
# There is a RandAL char in the string. Must perform further
# tests:
# 1) The characters in section 5.8 MUST be prohibited.
# This is table C.8, which was already checked
# 2) If a string contains any RandALCat character, the string
# MUST NOT contain any LCat character.
# 3) If a string contains any RandALCat character, a
# RandALCat character MUST be the first character of the
# string, and a RandALCat character MUST be the last
# character of the string.
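The three bidi requirements above can be sketched with `unicodedata.bidirectional` (an illustrative check, not the stdlib's nameprep implementation):

```python
import unicodedata

def check_bidi(label):
    """If any RandALCat (bidi class R or AL) character is present,
    no LCat character may appear, and both the first and last
    characters must be RandALCat."""
    cats = [unicodedata.bidirectional(c) for c in label]
    if any(c in ("R", "AL") for c in cats):
        if "L" in cats:
            raise UnicodeError("RandALCat and LCat characters mixed")
        if cats[0] not in ("R", "AL") or cats[-1] not in ("R", "AL"):
            raise UnicodeError("RandALCat must be first and last")
    return True
```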
# type: (str) -> bytes
# Step 1: try ASCII
# Skip to step 3: UseSTD3ASCIIRules is false, so
# Skip to step 8.
# Step 2: nameprep
# Step 3: UseSTD3ASCIIRules is false
# Step 4: try ASCII
# Step 5: Check ACE prefix
# Step 6: Encode with PUNYCODE
# Step 7: Prepend ACE prefix
# Step 8: Check size
# do not check for empty as we prepend ace_prefix.
# Protection from https://github.com/python/cpython/issues/98433.
# https://datatracker.ietf.org/doc/html/rfc5894#section-6
# doesn't specify a label size limit prior to NAMEPREP. But having
# one makes practical sense.
# This leaves ample room for nameprep() to remove Nothing characters
# per https://www.rfc-editor.org/rfc/rfc3454#section-3.1 while still
# preventing us from wasting time decoding a big thing that'll just
# hit the actual <= 63 length limit in Step 6.
# Step 1: Check for ASCII
# Step 2: Perform nameprep
# It doesn't say this, but apparently, it should be ASCII now
# Step 3: Check for ACE prefix
# Step 4: Remove ACE prefix
# Step 5: Decode using PUNYCODE
# Step 6: Apply ToASCII
# Step 7: Compare the result of step 6 with the one of step 3
# label2 will already be in lower case.
# Step 8: return the result of step 5
# IDNA is quite clear that implementations must be strict
# ASCII name: fast path
# Join with U+002E
# IDNA allows decoding to operate on Unicode strings, too.
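The ToASCII/ToUnicode pipeline above is exposed as the `idna` codec: ASCII labels take the fast path, non-ASCII labels are nameprepped, punycoded, and given the "xn--" ACE prefix.

```python
# ASCII fast path: nothing to transform
assert "python.org".encode("idna") == b"python.org"

# Non-ASCII label: nameprep + punycode + ACE prefix
assert "m\u00fcnchen".encode("idna") == b"xn--mnchen-3ya"
assert b"xn--mnchen-3ya".decode("idna") == "m\u00fcnchen"
```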
# XXX obviously wrong, see #3232
# Fast path
# Keep potentially unfinished label until the next call
# Must be ASCII string
# Remove newline chars from filename
# Encode
# Find start of encoded data
# Decode
# Workaround for broken uuencoders by /Fredrik Lundh
#sys.stderr.write("Warning: %s\n" % str(v))
# cp950.py: Python Unicode Codec for CP950
# iso2022_jp_2004.py: Python Unicode Codec for ISO2022_JP_2004
# Note: Binding these as C functions will result in the class not
# converting them to methods. This is intended.
# big5.py: Python Unicode Codec for BIG5
# shift_jis.py: Python Unicode Codec for SHIFT_JIS
# hz.py: Python Unicode Codec for HZ
# euc_jis_2004.py: Python Unicode Codec for EUC_JIS_2004
# iso2022_jp_ext.py: Python Unicode Codec for ISO2022_JP_EXT
# euc_jisx0213.py: Python Unicode Codec for EUC_JISX0213
# iso2022_kr.py: Python Unicode Codec for ISO2022_KR
# gbk.py: Python Unicode Codec for GBK
# http://ru.wikipedia.org/wiki/КОИ-8
# http://www.opensource.apple.com/source/libiconv/libiconv-4/libiconv/tests/KOI8-T.TXT
# shift_jisx0213.py: Python Unicode Codec for SHIFT_JISX0213
# cp932.py: Python Unicode Codec for CP932
#!/usr/bin/env python
### Map
### Filter API
# Please keep this list sorted alphabetically by value!
# ascii codec
# some email headers use this non-standard name
# base64_codec codec
# big5 codec
# big5hkscs codec
# bz2_codec codec
# cp037 codec
# cp1026 codec
# cp1125 codec
# cp1140 codec
# cp1250 codec
# cp1251 codec
# cp1252 codec
# cp1253 codec
# cp1254 codec
# cp1255 codec
# cp1256 codec
# cp1257 codec
# cp1258 codec
# cp273 codec
# cp424 codec
# cp437 codec
# cp500 codec
# cp775 codec
# cp850 codec
# cp852 codec
# cp855 codec
# cp857 codec
# cp858 codec
# cp860 codec
# cp861 codec
# cp862 codec
# cp863 codec
# cp864 codec
# cp865 codec
# cp866 codec
# cp869 codec
# cp932 codec
# cp949 codec
# cp950 codec
# euc_jis_2004 codec
# euc_jisx0213 codec
# euc_jp codec
# euc_kr codec
# gb18030 codec
# gb2312 codec
# gbk codec
# hex_codec codec
# hp_roman8 codec
# hz codec
# iso2022_jp codec
# iso2022_jp_1 codec
# iso2022_jp_2 codec
# iso2022_jp_2004 codec
# iso2022_jp_3 codec
# iso2022_jp_ext codec
# iso2022_kr codec
# iso8859_10 codec
# iso8859_11 codec
# iso8859_13 codec
# iso8859_14 codec
# iso8859_15 codec
# iso8859_16 codec
# iso8859_2 codec
# iso8859_3 codec
# iso8859_4 codec
# iso8859_5 codec
# iso8859_6 codec
# iso8859_7 codec
# iso8859_8 codec
# iso8859_9 codec
# johab codec
# koi8_r codec
# kz1048 codec
# latin_1 codec
# Note that the latin_1 codec is implemented internally in C and a
# lot faster than the charmap codec iso8859_1 which uses the same
# encoding. This is why we discourage the use of the iso8859_1
# codec and alias it to latin_1 instead.
# mac_cyrillic codec
# mac_greek codec
# mac_iceland codec
# mac_latin2 codec
# mac_roman codec
# mac_turkish codec
# mbcs codec
# ptcp154 codec
# quopri_codec codec
# rot_13 codec
# shift_jis codec
# shift_jis_2004 codec
# shift_jisx0213 codec
# tis_620 codec
# utf_16 codec
# utf_16_be codec
# utf_16_le codec
# utf_32 codec
# utf_32_be codec
# utf_32_le codec
# utf_7 codec
# utf_8 codec
# uu_codec codec
# zlib_codec codec
# temporary mac CJK aliases, will be replaced by proper codecs in 3.1
# big5hkscs.py: Python Unicode Codec for BIG5HKSCS
# iso2022_jp_3.py: Python Unicode Codec for ISO2022_JP_3
# state info we return to the caller:
# 0: stream is in natural order for this platform
# 2: endianness hasn't been determined yet
# (we're never writing in unnatural order)
# additional state info from the base class must be None here,
# as it isn't passed along to the caller
# additional state info we pass to the caller:
# 1: stream is in unnatural order
# iso2022_jp_2.py: Python Unicode Codec for ISO2022_JP_2
# euc_kr.py: Python Unicode Codec for EUC_KR
# gb2312.py: Python Unicode Codec for GB2312
# johab.py: Python Unicode Codec for JOHAB
# euc_jp.py: Python Unicode Codec for EUC_JP
# encodings module API
# this codec needs the optional zlib module !
# gb18030.py: Python Unicode Codec for GB18030
# this codec needs the optional bz2 module !
##################### Encoding #####################################
# Punycode parameters: tmin = 1, tmax = 26, base = 36
# ((base - tmin) * tmax) // 2 == 455
# base - tmin
# Punycode parameters: initial bias = 72, damp = 700, skew = 38
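The algorithm these parameters drive is exposed as the `punycode` codec; a quick round-trip shows the shape of the output (basic code points copied through, the delta-encoded tail after the final hyphen):

```python
encoded = "b\u00fccher".encode("punycode")
assert encoded == b"bcher-kva"
assert encoded.decode("punycode") == "b\u00fccher"
```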
##################### Decoding #####################################
# A-Z
# digit = char - 22 (i.e. char - 0x30 + 26): '0'-'9' map to 26-35
# This function raises UnicodeDecodeError with position in the extended.
# Caller should add the offset.
# There was an error in decoding. We can't continue because
# synchronization is lost.
# Import them explicitly to cause an ImportError
# on non-Windows systems
# for IncrementalDecoder, IncrementalEncoder, ...
# iso2022_jp_1.py: Python Unicode Codec for ISO2022_JP_1
# pysqlite2/__init__.py: the pysqlite2 package.
# Copyright (C) 2005 Gerhard Häring <gh@ghaering.de>
# This file is part of pysqlite.
# pysqlite2/dbapi2.py: the DB-API 2.0 interface
# Copyright (C) 2004-2005 Gerhard Häring <gh@ghaering.de>
# Clean up namespace
# Prepare REPL banner and prompts.
# SQL statement provided on the command-line; execute it directly.
# No SQL provided; start the REPL.
# Mimic the sqlite3 console shell's .dump command
# Author: Paul Kippes <kippesp@gmail.com>
# Every identifier in sql is quoted based on a comment in sqlite
# documentation "SQLite adds new keywords from time to time when it
# takes on new features. So to prevent your code from being broken by
# future enhancements, you should normally quote any identifier that
# is an English language word, even if you do not have to."
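The quoting advice above can be followed with a small helper (`quote_identifier` is a hypothetical name, not something the module exports):

```python
import sqlite3

def quote_identifier(name):
    # Standard SQL identifier quoting: wrap in double quotes and
    # double any embedded double-quote characters.
    return '"' + name.replace('"', '""') + '"'

con = sqlite3.connect(":memory:")
# "order" is an SQL keyword; quoting makes it a legal table name.
con.execute(f"CREATE TABLE {quote_identifier('order')} (x)")
con.execute('INSERT INTO "order" (x) VALUES (1)')
assert con.execute('SELECT x FROM "order"').fetchone() == (1,)
con.close()
```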
# Make sure we get predictable results.
# Disable foreign key constraints, if there is any foreign key violation.
# Return database objects which match the filter pattern.
# sqlite_master table contains the SQL CREATE statements for the database.
# Build the insert statement for each row of the current table
# Now when the type is 'index', 'trigger', or 'view'
# gh-79009: Yield statements concerning the sqlite_sequence table at the
# end of the transaction.
# Note: the class is final, it is not intended for inheritance.
# fail fast with short traceback
# `signal.signal` may throw if `threading.main_thread` does
# not support signals (e.g. embedded interpreter with signals
# not registered - see gh-91880)
# CancelledError
# Call set_event_loop only once to avoid calling
# attach_loop multiple times on child watchers
# wakeup loop if it is blocked by select() with long timeout
# UDP sockets may not have a peer name
# None or bytearray.
# Set when close() called.
# only wake up the waiter when connection_made() has been called
# XXX If there is a pending overlapped read on the other
# end then it may fail with ERROR_NETNAME_DELETED if we
# just close our end.  First calling shutdown() seems to
# cure it, but maybe using DisconnectEx() would be better.
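The shutdown-before-close workaround mentioned above can be sketched with an ordinary socket pair (illustrative; the real code operates on overlapped pipe handles):

```python
import socket

a, b = socket.socketpair()
# Shut down our write side first so the peer reads a clean EOF
# instead of hitting a reset-style error when we close.
a.shutdown(socket.SHUT_WR)
data = b.recv(1024)
assert data == b""   # EOF, not an exception
a.close()
b.close()
```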
# bpo-33694: Don't cancel self._read_fut because cancelling an
# overlapped WSASend() silently loses data with the current proactor
# implementation.
# If CancelIoEx() fails with ERROR_NOT_FOUND, it means that WSASend()
# completed (even if HasOverlappedIoCompleted() returns 0), but
# Overlapped.cancel() currently silently ignores the ERROR_NOT_FOUND
# error. Once the overlapped is ignored, the IOCP loop will ignore the
# completion I/O event and so will not read the result of the overlapped
# WSARecv().
# Call the protocol method after calling _loop_reading(),
# since the protocol can decide to pause reading again.
# Don't call any protocol method while reading is paused.
# The protocol will be called on resume_reading().
# deliver data later in "finally" clause
# we got end-of-file so no need to reschedule a new read
# It's a new slice, so make it immutable so that upstream protocols don't have problems
# the future will be replaced by next proactor.recv call
# since close() has been called we ignore any read data
# bpo-33694: buffer_updated() has currently no fast path because of
# a data loss issue caused by overlapped WSASend() cancellation.
# reschedule a new read
# Observable states:
# 1. IDLE: _write_fut and _buffer both None
# 2. WRITING: _write_fut set; _buffer None
# 3. BACKED UP: _write_fut set; _buffer a bytearray
# We always copy the data, so the caller can't modify it
# while we're still waiting for the I/O to happen.
# IDLE -> WRITING
# Pass a copy, except if it's already immutable.
# WRITING -> BACKED UP
# Make a mutable copy which we can extend.
# BACKED UP
# Append to buffer (also copies).
# XXX most likely self._force_close() has been called, and
# it has set self._write_fut to None.
# Now that we've reduced the buffer size, tell the
# protocol to resume writing if it was paused.  Note that
# we do this last since the callback is called immediately
# and it may add more data to the buffer (even causing the
# protocol to be paused again).
# the transport has been closed
# We don't need to call _protocol.connection_made() since our base
# constructor does it for us.
# The base constructor sets _buffer = None, so we set it here
# Ensure that what we buffer is immutable.
# No current write operations are active, kick one off
# else: A write operation is already kicked off
# We are in a _loop_writing() done callback, get the result
# The connection has been closed
# convenient alias
# socket file descriptor => Future
# wakeup fd can only be installed to a file descriptor from the main thread
# We want connection_lost() to be called when other end closes
# Call these methods before closing the event loop (before calling
# BaseEventLoop.close), because they can schedule callbacks with
# call_soon(), which is forbidden when the event loop is closed.
# Close the event loop
# A self-socket, really. :-)
# may raise
# When we scheduled this Future, we assigned it to
# _self_reading_future. If it's not there now, something has
# tried to cancel the loop while this callback was still in the
# queue (see windows_events.ProactorEventLoop.run_forever). In
# that case stop here instead of continuing to schedule a new
# iteration.
# _close_self_pipe() has been called, stop waiting for data
# This may be called from a different thread, possibly after
# _close_self_pipe() has been called or even while it is
# running.  Guard for self._csock being None or closed.  When
# a socket is closed, send() raises OSError (with errno set to
# EBADF, but let's not rely on the exact error code).
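A minimal sketch of the self-pipe wakeup pattern with the OSError guard described above (`wakeup` and the socket names are hypothetical; the loop's real attributes are `_ssock`/`_csock`):

```python
import select
import socket

rsock, csock = socket.socketpair()
rsock.setblocking(False)

def wakeup(sock):
    # Another thread pokes the loop; swallow OSError because the
    # socket may already have been closed by _close_self_pipe().
    if sock is not None:
        try:
            sock.send(b"\0")
        except OSError:
            pass

wakeup(csock)
# A select() blocked on rsock now returns immediately.
ready, _, _ = select.select([rsock], [], [], 1.0)
assert ready == [rsock]
rsock.recv(4096)   # drain the wakeup byte(s)
rsock.close()
csock.close()
```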
# Events are processed in the IocpProactor._poll() method
# flake8: noqa
# This relies on each of the submodules having an __all__ variable.
# Contains code from https://github.com/MagicStack/uvloop/tree/v0.16.0
# SPDX-License-Identifier: PSF-2.0 AND (MIT OR Apache-2.0)
# SPDX-FileCopyrightText: Copyright (c) 2015-2021 MagicStack Inc.  http://magic.io
# After the connection is lost, log warnings after this many write()s.
# Seconds to wait before retrying accept().
# Number of stack entries to capture in debug mode.
# The larger the number, the slower the operation in debug mode
# (see extract_stack() in format_helpers.py).
# Number of seconds to wait for SSL handshake to complete
# The default timeout matches that of Nginx.
# Number of seconds to wait for SSL shutdown to complete
# The default timeout mimics lingering_time
# Used in sendfile fallback code.  We use fallback for platforms
# that don't support sendfile, or for TLS connections.
# KiB
# Default timeout for joining the threads in the threadpool
# The enum should be here to break circular dependencies between
# base_events and sslproto
# This tracks the state of app protocol (https://git.io/fj59P):
# * cm: connection_made()
# * dr: data_received()
# * er: eof_received()
# * cl: connection_lost()
# Client side may pass ssl=True to use a default
# context; in that case the sslcontext passed is None.
# The default is secure for client connections.
# Python 3.4+: use up-to-date strong settings.
# Required for sendfile fallback pause_writing/resume_writing logic
# for test only
# Buffer size passed to read()
# SSL-specific extra info. More info is set when the handshake
# completes.
# App data write buffering
# transport, ex: SelectorSocketTransport
# SSL and state machine
# Set when connection_lost called
# Flow Control
# Make fast hasattr check first
# Just mark the app transport as closed so that its __dealloc__
# doesn't complain.
# Handshake flow
# start handshake timeout count down
# Add extra info that becomes available after handshake.
# Shutdown flow
# Outgoing flow
# Incoming flow
# close_notify
# Flow control for writes from APP socket
# Flow control for reads to APP socket
# Flow control for reads from SSL socket
# Flow control for writes to SSL socket
# Constants/globals
# Replacement for os.pipe() using handles instead of fds
# Wrapper for a pipe handle
# Replacement for subprocess.Popen using overlapped pipe handles
# Fallback to send
# Test if the selector is monitoring 'event' events
# for the file descriptor 'fd'.
# This method is only called once for each event loop tick where the
# listening socket has triggered an EVENT_READ. There may be multiple
# connections waiting for an .accept() so it is called in a loop.
# See https://bugs.python.org/issue27906 for more details.
# Early exit because the socket accept buffer is empty.
# There's nowhere to send the error, so just log it.
# Some platforms (e.g. Linux) keep reporting the FD as
# ready, so we remove the read handler temporarily.
# We'll try again in a while.
# The event loop will catch, log and ignore it.
# gh-109534: When an exception is raised by the SSLProtocol object the
# exception set in this future can keep the protocol object alive and
# cause a reference cycle.
# It's now up to the protocol to handle the connection.
# This code matches selectors._fileobj_to_fd function.
# Remove both writer and connector.
# _sock_recv() can add itself as an I/O callback if the operation can't
# be done immediately. Don't use it directly, call sock_recv().
# try again next time
# _sock_recv_into() can add itself as an I/O callback if the operation
# can't be done immediately. Don't use it directly, call
# sock_recv_into().
# _sock_recvfrom() can add itself as an I/O callback if the operation
# can't be done immediately. Don't use it directly, call
# sock_recvfrom().
# all data sent
# use a trick with a list in closure to store a mutable state
# Future cancellation can be scheduled on previous loop iteration
# Needed to break cycles when an exception occurs.
# Issue #23618: When the C function connect() fails with EINTR, the
# connection runs in background. We have to wait until the socket
# becomes writable to be notified when the connection succeeds or
# fails.
# Jump to any except clause below.
# socket is still registered, the callback will be retried later
# Buffer size passed to recv().
# Attribute used in the destructor: it must be set even if the constructor
# is not called (see _SelectorSslTransport which may start by raising an
# exception)
# Set when call to connection_lost scheduled.
# Set when pause_reading() called
# test if the transport was closed
# Should be called from exception handler only.
# Disable the Nagle algorithm -- small writes will be
# sent without waiting for the TCP ACK.  This generally
# decreases the latency (in some cases significantly.)
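Disabling Nagle as described above is a single socket option; a minimal sketch:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle: small writes go out immediately instead of being
# coalesced while waiting for the TCP ACK.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
assert nodelay != 0
```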
# only start reading when connection_made() has been called
# We're keeping the connection open so the
# protocol can write more, but we still can't
# receive more, so remove the reader callback.
# Optimization: try to send now.
# Not all was written; register write handler.
# Add it to the buffer.
# May append to buffer.
# Not all data was written
# If the entire buffer couldn't be written, register a write handler
# Attempt to send it right away first.
# Try again later.
# Helper to generate new task names
# This uses itertools.count() instead of a "+= 1" operation because the latter
# is not thread safe. See bpo-11866 for a longer explanation.
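A sketch of why `itertools.count()` is preferred over `+= 1` here (the worker code is hypothetical; the real counter lives in asyncio's task module). `count().__next__` is a single C-level call, so concurrent callers never observe a duplicate value:

```python
import itertools
import threading

_counter = itertools.count(1).__next__   # atomic increment-and-fetch

results = []

def worker():
    for _ in range(1000):
        results.append(_counter())

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# No duplicates, unlike an unlocked "n += 1" read-modify-write.
assert len(set(results)) == 4000
```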
# capturing the set of eager tasks first, so if an eager task "graduates"
# to a regular task in another thread, we don't risk missing it.
# Looping over the WeakSet isn't safe as it can be updated from another
# thread, therefore we cast it to list prior to filtering. The list cast
# itself requires iteration, so we repeat it several times ignoring
# RuntimeErrors (which are not very likely to occur).
# See issues 34970 and 36607 for details.
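The cast-to-list retry described above, as a standalone sketch (`snapshot` is a hypothetical helper name):

```python
import weakref

def snapshot(ws):
    # list() iterates the WeakSet, which can raise RuntimeError if
    # another thread mutates it mid-iteration; retry a few times.
    for _ in range(1000):
        try:
            return list(ws)
        except RuntimeError:
            continue
    return []

class Obj:
    pass

o = Obj()
ws = weakref.WeakSet([o])
assert snapshot(ws) == [o]
```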
# Inherit Python Task implementation
# from a Python Future implementation.
# An important invariant maintained while a Task not done:
# _fut_waiter is either None or a Future.  The Future
# can be either done() or not done().
# The task can be in any of 3 states:
# - 1: _fut_waiter is not None and not _fut_waiter.done():
# - 2: (_fut_waiter is None or _fut_waiter.done()) and __step() is scheduled:
# - 3:  _fut_waiter is None and __step() is *not* scheduled:
# * In state 1, one of the callbacks of __fut_waiter must be __wakeup().
# * The transition from 1 to 2 happens when _fut_waiter becomes done(),
# * It transitions from 2 to 3 when __step() is executed, and it clears
# If False, don't log a message if the task is destroyed while its
# status is still pending
# raise after Future.__init__(), attrs are required for __del__
# prevent logging for pending task in __del__
# These two lines are controversial.  See discussion starting at
# https://github.com/python/cpython/pull/31394#issuecomment-1053545331
# Also remember that this is duplicated in _asynciomodule.c.
# if self._num_cancels_requested > 1:
# Leave self._fut_waiter; it may be a Task that
# catches and ignores the cancellation so we may have
# to cancel it again later.
# It must be the case that self.__step is already scheduled.
# We use the `send` method directly, because coroutines
# don't have `__iter__` and `__next__` methods.
# Task is cancelled right before coro stops.
# Save the original exception so we can chain it later.
# I.e., Future.cancel(self).
# Yielded Future must come from Future.__iter__().
# Bare yield relinquishes control for one event loop iteration.
# Yielding a generator is just wrong.
# Yielding something else is an error.
# This may also be a cancellation.
# Don't pass the value of `future.result()` explicitly,
# as `Future.__iter__` and `Future.__await__` don't need it.
# If we call `_step(value, None)` instead of `_step()`,
# Python eval loop would use `.send(value)` method call,
# instead of `__next__()`, which is slower for futures
# that return non-generator iterators from their `__iter__`.
# _CTask is needed for tests.
# Use legacy API if context is not needed
# wait() and as_completed() similar to those in PEP 3148.
# The special case for timeout <= 0 is for the following case:
# async def test_waitfor():
# asyncio.run(test_waitfor())
# We cannot wait on *fut* directly to make
# sure _cancel_and_wait itself is reliably cancellable.
# Sentinel for _wait_for_one().
# Can't do todo.remove(f) in the loop.
# _handle_timeout() was here first.
# Wait for the next future to be done and return it unless resolve is
# set, in which case return either the result of the future or raise
# Dummy value from _handle_timeout().
# If any child tasks were actually cancelled, we should
# propagate the cancellation request regardless of
# *return_exceptions* argument.  See issue 32684.
# Mark exception retrieved.
# Check if 'fut' is cancelled first, as
# 'fut.exception()' will *raise* a CancelledError
# instead of returning it.
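The cancelled-first check can be seen on a plain future (a minimal sketch):

```python
import asyncio

async def main():
    fut = asyncio.get_running_loop().create_future()
    fut.cancel()
    # A cancelled future *raises* CancelledError from .exception()
    # instead of returning it, so test .cancelled() first.
    if fut.cancelled():
        return "cancelled"
    return fut.exception()

result = asyncio.run(main())
assert result == "cancelled"
```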
# All futures are done; create a list of results
# and set it to the 'outer' future.
# Check if 'fut' is cancelled first, as 'fut.exception()'
# will *raise* a CancelledError instead of returning it.
# Also, since we're adding the exception return value
# to 'results' instead of raising it, don't bother
# setting __context__.  This also lets us preserve
# calling '_make_cancelled_error()' at most once.
# If gather is being cancelled we must propagate the
# cancellation regardless of *return_exceptions* argument.
# See issue 32684.
# bpo-46672
# 'arg' was not a Future, therefore, 'fut' is a new
# Future created specifically for 'arg'.  Since the caller
# can't control it, disable the "destroy pending task"
# warning.
# There's a duplicate Future object in coros_or_futures.
# Run done callbacks after GatheringFuture created so any post-processing
# can be performed at this point
# optimization: in the special case that *all* futures finished eagerly,
# this will effectively complete the gather eagerly, with the last
# callback setting the result (or exception) on outer before returning it
# Shortcut.
# Mark inner's result as retrieved.
# Collectively these two sets hold references to the complete set of active
# tasks. Eagerly executed tasks use a faster regular set as an optimization
# but may graduate to a WeakSet if the task blocks on IO.
# Dictionary containing tasks that are currently active in
# all running event loops.  {EventLoop: Task}
# The child exited, but we don't understand its status.
# This shouldn't happen, but if it does, let's just
# return that status; perhaps that helps debug it.
# ignore null bytes written by _write_to_self()
# set_wakeup_fd() raises ValueError if this is not the
# main thread.  By calling it early we ensure that an
# event loop running in another thread cannot add a signal
# handler.
# Register a dummy signal handler to ask Python to write the signal
# number in the wakeup file descriptor. _process_self_data() will
# read signal numbers from this file descriptor to handle signals.
# Set SA_RESTART to limit EINTR occurrences.
# Assume it's some race condition.
# Remove it properly.
# Check early.
# Raising exception before process creation
# prevents subprocess execution if the watcher
# is not ready to handle it.
# Check for abstract socket. `str` and `bytes` paths are supported.
# Directory may have permissions only to create socket.
# Let's improve the error message by adding
# with what exact address it occurs.
# Skip one loop iteration so that all 'loop.add_reader'
# go through.
# Remove the callback early.  It should be rare that the
# selector says the fd is ready but the call still returns
# EAGAIN, and I am willing to take a hit in that case in
# order to simplify the common case.
# On 32-bit architectures truncate to 1GiB to avoid OverflowError
# If we have an ENOTCONN and this isn't a first call to
# sendfile(), i.e. the connection was closed in the middle
# of the operation, normalize the error to ConnectionError
# to make it consistent across all Posix systems.
# Is this a unix socket that needs cleanup?
# max bytes we read in one event loop iteration
# should be called by exception handler only
# Set when close() or write_eof() called.
# On AIX, the reader trick (to be notified when the read end of the
# socket is closed) only works for sockets. On other platforms it
# works for pipes and sockets. (Exception: OS X 10.4?  Issue #19294.)
# Pipe was closed by peer.
# Remove the writer here, _fatal_error() doesn't do it
# because _buffer is empty.
# write_eof() is all we need to close the write pipe
# Use a socket pair for stdin on AIX, since it does not
# support selecting read events on the write end of a
# socket (which we use in order to detect closing of the
# other end).
# The child process is already reaped
# (may happen if waitpid() is called elsewhere).
# asyncio never calls remove_child_handler() !!!
# The method is a no-op but is implemented because
# abstract base classes require it.
# Prevent a race condition in case a child terminated
# during the switch.
# self._loop should always be available here
# as '_sig_chld' is added as a signal handler
# in 'attach_loop'
# Prevent a race condition in case the child is already terminated.
# The child process is still alive.
# May happen if .remove_child_handler() is called
# after os.waitpid() returns.
# The child is running.
# The child is dead already. We can fire the callback.
# Because of signal coalescing, we must keep calling waitpid() as
# long as we're able to reap a child.
# No more child processes exist.
# A child process is still alive.
# unknown child
# It may not be registered yet.
# Implementation note:
# The class keeps compatibility with AbstractChildWatcher ABC
# To achieve this it has empty attach_loop() method
# and doesn't accept explicit loop argument
# for add_child_handler()/remove_child_handler()
# but retrieves the current loop by get_running_loop()
# Don't save the loop but initialize itself if called first time
# The reason to do it here is that attach_loop() is called from
# unix policy only for the main thread.
# Main thread is required for subscription on SIGCHLD signal
# blocked by security policy like SECCOMP
# pragma: no branch
# We have no use for the "as ..." clause in the with
# statement for locks.
# Implement fair scheduling, where thread always waits
# its turn. Jumping the queue if all are cancelled is an optimization.
# Currently the only exception designed to be able to occur here.
# Ensure the lock invariant: If lock is not claimed (or about
# to be claimed by us) and there is a Task in waiters,
# ensure that the Task at the head will run.
# assert self._locked is False
# .done() means that the waiter is already set to wake up.
# Export the lock's locked(), acquire() and release() methods.
# Must re-acquire lock even if wait is cancelled.
# We only catch CancelledError here, since we don't want any
# other (fatal) errors with the future to cause us to spin.
# Re-raise most recent exception instance.
# Break reference cycles.
# Any error raised out of here _may_ have occurred after this Task
# believed it had been successfully notified.
# Make sure to notify another Task instead.  This may result
# in a "spurious wakeup", which is allowed as part of the
# Condition Variable protocol.
# Due to state, or FIFO rules (must allow others to run first).
# Maintain FIFO, wait for others to start even if _value > 0.
# Our Future was successfully set to True via _wake_up_next(),
# but we are not about to successfully acquire(). Therefore we
# must undo the bookkeeping already done and attempt to wake
# up someone else.
# New waiters may have arrived but had to wait due to FIFO.
# Wake up as many as are allowed.
# There was no-one to wake up.
# `fut` is now `done()` and not `cancelled()`.
# notify all tasks when state changes
# count tasks in Barrier
# wait until the barrier has reached the number of parties
# when draining starts, release and return the index of the waiting task
# Block while the barrier drains or resets.
# Wake up any tasks waiting for barrier to drain.
# Block until the barrier is ready for us,
# or raise an exception if it is broken.
# unless a CancelledError occurs
# see if the barrier is in a broken state
# Release the tasks waiting in the barrier.
# Enter draining state.
# Next waiting tasks will be blocked until the end of draining.
# Wait in the barrier until we are released. Raise an exception
# wait for end of filling
# If we are the last tasks to exit the barrier, signal any tasks
# reset the barrier, waking up tasks
# Create the child process: set the _proc attribute
# See gh-114177
# skip closing the pipe if loop is already closed
# this can happen e.g. when loop is closed immediately after
# process is killed
# has the child process finished?
# the child process has finished, but the
# transport hasn't been notified yet?
# the process may have already exited or may be running setuid
# Don't clear the _proc reference yet: _post_init() may still run
# asyncio uses a child watcher: copy the status into the Popen
# object. On Python 3.6, it is required to avoid a ResourceWarning.
# wake up futures waiting for wait()
# Initial delay in seconds for connect_pipe() before retrying to connect
# Maximum delay in seconds for connect_pipe() before retrying to connect
# Keep a reference to the Overlapped object to keep it alive until the
# wait is unregistered
# Should we call UnregisterWaitEx() if the wait completes
# or is cancelled?
# non-blocking wait: use a timeout of 0 millisecond
# The wait was unregistered: it's not safe to destroy the Overlapped
# object
# ERROR_IO_PENDING means that the unregister is pending
# If the wait was cancelled, the wait may never be signalled, so
# it's required to unregister it. Otherwise, IocpProactor.close() will
# wait forever for an event which will never come.
# If the IocpProactor already received the event, it's safe to call
# _unregister() because we kept a reference to the Overlapped object
# which is used as a unique key.
# ERROR_IO_PENDING is not an error, the wait was unregistered
# initialize the pipe attribute before calling _server_pipe_handle()
# because this function can raise an exception and the destructor calls
# the close() method
# Create new instance and return previous one.  This ensures
# that (until the server is closed) there is always at least
# one pipe handle for address.  Therefore if a client attempt
# to connect it will not fail with FileNotFoundError.
# Return a wrapper for a new pipe handle.
# Close all instances which have not been connected to by a client.
# self_reading_future always uses IOCP, so even though it's
# been cancelled, we need to make sure that the IOCP message
# is received so that the kernel is not holding on to the
# memory, possibly causing memory corruption later. Only
# unregister it if IO is complete in all respects. Otherwise
# we need another _poll() later to complete the IO.
# A client connected before the server was closed:
# drop the client (close the pipe) and exit
# WSARecvFrom will report ERROR_PORT_UNREACHABLE when the same
# socket is used to send to an address that is not listening.
# Use SO_UPDATE_ACCEPT_CONTEXT so getsockname() etc work.
# Coroutine closing the accept socket if the future is cancelled
# WSAConnect will complete immediately for UDP sockets so we don't
# need to register any IOCP operation
# The socket needs to be locally bound before we call ConnectEx().
# Probably already locally bound; check using getsockname().
# Use SO_UPDATE_CONNECT_CONTEXT so getsockname() etc work.
# ConnectNamedPipe() failed with ERROR_PIPE_CONNECTED which means
# that the pipe is connected. There is no need to wait for the
# completion of the connection.
# Unfortunately there is no way to do an overlapped connect to
# a pipe.  Call CreateFile() in a loop until it doesn't fail with
# ERROR_PIPE_BUSY.
# ConnectPipe() failed with ERROR_PIPE_BUSY: retry later
# add_done_callback() cannot be used because the wait may only complete
# in IocpProactor.close(), while the event loop is not running.
# RegisterWaitForSingleObject() has a resolution of 1 millisecond,
# round away from zero to wait *at least* timeout seconds.
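The round-away-from-zero conversion to milliseconds can be sketched as (hypothetical helper name):

```python
import math

def timeout_to_ms(timeout):
    # Round away from zero so a small positive timeout never
    # becomes a shorter (or zero-length) wait at ms resolution.
    return math.ceil(timeout * 1e3)

assert timeout_to_ms(0.0001) == 1   # never truncated to 0
assert timeout_to_ms(0.5) == 500
```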
# We only create ov so we can use ov.address as a key for the cache.
# Note that this second wait means that we should only use
# this with handles types where a successful wait has no
# effect.  So events or processes are all right, but locks
# or semaphores are not.  Also note if the handle is
# signalled and then quickly reset, then we may return
# False even though we have not timed out.
# To get notifications of finished ops on this object sent to the
# completion port, we must register the handle.
# XXX We could also use SetFileCompletionNotificationModes()
# to avoid sending notifications to completion port of ops
# that succeed immediately.
# Return a future which will be set with the result of the
# operation when it completes.  The future's value is actually
# the value returned by callback().
# The operation has completed, so no need to postpone the
# work.  We cannot take this short cut if we need the
# NumberOfBytes, CompletionKey values returned by
# PostQueuedCompletionStatus().
# Even if GetOverlappedResult() was called, we have to wait for the
# notification of the completion in GetQueuedCompletionStatus().
# Register the overlapped operation to keep a reference to the
# OVERLAPPED object, otherwise the memory is freed and Windows may
# read uninitialized memory.
# Register the overlapped operation for later.  Note that
# we only store obj to prevent it from being garbage
# collected too early.
# GetQueuedCompletionStatus() has a resolution of 1 millisecond,
# key is either zero, or it is used to return a pipe
# handle which should be closed to avoid a leak.
# Don't call the callback if _register() already read the result or
# if the overlapped has been cancelled
# Remove unregistered futures
# obj is a socket or pipe handle.  It will be closed in
# BaseProactorEventLoop._stop_serving() which will make any
# pending operations fail quickly.
# already closed
# Cancel remaining registered operations.
# Nothing to do with cancelled futures
# _WaitCancelFuture must not be cancelled
# Wait until all cancelled overlapped complete: don't exit with running
# overlapped to prevent a crash. Display progress every second if the
# loop is still running.
# handle a few events, or timeout
# See: https://docs.python.org/3/library/asyncio-dev.html#asyncio-debug-mode.
# A marker for iscoroutinefunction.
# Prioritize native coroutine check to speed-up
# asyncio.iscoroutine.
# Just in case we don't want to cache more than 100
# positive types.  That shouldn't ever happen, unless
# someone stressing the system on purpose.
# Coroutines compiled with Cython sometimes don't have
# proper __qualname__ or __name__.  While that is a bug
# in Cython, asyncio shouldn't crash with an AttributeError
# in its __repr__ functions.
# Stop masking Cython bugs, expose them in a friendly way.
# Built-in types might not have __qualname__ or __name__.
# If Cython's coroutine has a fake code object without proper
# co_filename -- expose that.
# use reprlib to limit the length of the output
# Limit the amount of work to a reasonable amount, as extract_stack()
# can be called for each coroutine and future in debug mode.
# 64 KiB
# UNIX Domain Sockets are supported on this platform
# Wake up the writer(s) if currently paused.
# This is a stream created by the `create_server()` function.
# Keep a strong reference to the reader until a connection
# is established.
# Prevent a warning in SSLProtocol.eof_received:
# "returning true from eof_received()
# has no effect when using ssl"
# Prevent reports about unhandled exceptions.
# Better than self._closed._log_traceback = False hack
# failed constructor
# drain() expects that the reader has an exception() method
# Wait for protocol.connection_lost() call
# Raise connection closing error if any,
# ConnectionResetError otherwise
# Yield to the event loop so connection_lost() may be
# called.  Without this, _drain_helper() would return
# immediately, and code that calls
# in a loop would never call connection_lost(), so it
# would not see an error when the socket is closed.
# The line length limit is a security feature;
# it also doubles as half the buffer limit.
# Whether we're done.
# A future used by _wait_for_data()
# The transport can't be paused.
# We'll just have to buffer all data.
# Forget the transport so we don't keep trying.
# StreamReader uses a future to link the protocol feed_data() method
# to a read coroutine. Running two read coroutines at the same time
# would have an unexpected behaviour. It would not be possible to know
# which coroutine would get the next data.
# Waiting for data while paused would deadlock, so prevent it.
# This is essential for readexactly(n) for case when n > self._limit.
# Make sure the shortest match wins
# Consume the whole buffer except the last bytes, whose length is
# one less than max_seplen. Let's check corner cases with
# separator[-1]='SEPARATOR':
# * we have received an almost complete separator (without its last byte)
# * the last byte of the buffer is the first byte of a separator, i.e.
# `offset` is the number of bytes from the beginning of the buffer
# where there is no occurrence of any `separator`.
# Loop until we find a `separator` in the buffer, exceed the buffer size,
# or an EOF has happened.
# Check if we now have enough data in the buffer for shortest
# separator to fit.
# `separator` is in the buffer. `match_start` and
# `match_end` will be used later to retrieve the
# data.
# see upper comment for explanation.
# Complete message (with full separator) may be present in buffer
# even when EOF flag is set. This may happen when the last chunk
# adds data which makes separator be found. That's why we check for
# EOF *after* inspecting the buffer.
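A simplified single-call version of the offset-based scan described above (a hypothetical helper; asyncio's `readuntil` additionally handles tuples of separators):

```python
def find_separator(buffer, separator, offset=0):
    # Scan with an offset so bytes already known to be separator-free
    # are not re-searched when the next chunk arrives.
    seplen = len(separator)
    if len(buffer) - offset >= seplen:
        isep = buffer.find(separator, offset)
        if isep != -1:
            return isep, offset
        # Keep the trailing seplen-1 bytes: they may be the start of
        # a separator that the next chunk completes.
        offset = len(buffer) - seplen + 1
    return -1, offset

buf = bytearray(b"some textSEPARATO")
isep, offset = find_separator(buf, b"SEPARATOR")
assert isep == -1 and offset == 9   # need more data; resume at 9
buf += b"R"
isep, offset = find_separator(buf, b"SEPARATOR", offset)
assert isep == 9                    # separator found without rescanning
```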
# _wait_for_data() will resume reading if stream was paused.
# This used to just loop creating a new waiter hoping to
# collect everything in self._buffer, but that would
# deadlock if the subprocess sends more than self.limit
# bytes.  So just call self.read(self._limit) until EOF.
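The read-in-chunks-until-EOF strategy above, sketched against an in-memory stream (hypothetical helper; the real code awaits `self.read(self._limit)`):

```python
import io

def read_all(stream, limit=4):
    # Read limit-sized chunks until EOF instead of waiting for a
    # single buffer to hold everything, which avoids the deadlock
    # when the producer sends more than the flow-control limit.
    blocks = []
    while True:
        block = stream.read(limit)
        if not block:
            break
        blocks.append(block)
    return b"".join(blocks)

assert read_all(io.BytesIO(b"0123456789")) == b"0123456789"
```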
# This will work right even if buffer is less than n bytes
# replace status
# case 1: 'async def' coroutines
# case 2: legacy coroutines
# case 3: async generators
# case 4: unknown objects
# heavy-duty debugging
# Class variables serving as defaults for instance variables.
# A saved CancelledError for later chaining as an exception context.
# This field is used for a dual purpose:
# - Its presence is a marker to declare that a class implements
#   the Future protocol.
# - It is set by __iter__() below so that Task._step() can tell
#   the difference between `await Future()` (correct) and bare
#   `Future()` (incorrect).
# set_exception() was not called, or result() or exception()
# has consumed the exception
# Don't implement running(); see http://bugs.python.org/issue18699
# New method not in PEP 3148.
# So-called internal methods (note: no set_running_or_notify_cancel()).
# This tells Task to wait for completion.
# May raise too.
# make compatible with 'yield from'.
# Needed for testing purposes.
# Tries to call Future.get_loop() if it's available.
# Otherwise falls back to using the old '_loop' property.
# _CFuture is needed for tests.
# States for Future.
# (Future) -> str
# use reprlib to limit the length of the output, especially
# for very long strings
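The length-limiting mentioned above is what the stdlib `reprlib` module provides; a minimal sketch:

```python
import reprlib

def short_repr(value):
    # reprlib.repr truncates long strings and containers with '...'
    return reprlib.repr(value)
```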
# TODO: when we have aiter() and anext(), allow async iterables in coro_fns.
# in eager tasks this waits for the calling task to append this task
# to running_tasks, in regular tasks this wait is a no-op that does
# not yield a future. See gh-124309.
# Wait for the previous task to finish, or for delay seconds
# Use asyncio.wait_for() instead of asyncio.wait() here, so
# that if we get cancelled at this point, Event.wait() is also
# cancelled, otherwise there will be a "Task destroyed but it is
# pending" later.
# Get the next coroutine to run
# Start task that will run the next coroutine
# next_task has been appended to running_tasks so next_task is ok to
# start.
# Prepare a place to put this coroutine's exception if it doesn't win
# Kickstart the next coroutine
# Store winner's results
# Cancel all other tasks. We take care to not cancel the current
# task as well. If we do so, then since there is no `await` after
# this point and a CancelledError is usually thrown at an `await`, we will
# encounter a curious corner case where the current task will end
# up as done() == True, cancelled() == False, exception() ==
# asyncio.CancelledError. This behavior is specified in
# https://bugs.python.org/issue30048
# first_task has been appended to running_tasks so first_task is ok to start.
# Make sure no tasks are left running if we leave this function
# If run_one_coro raises an unhandled exception, it's probably a
# programming error, and I want to see it.
# Since there are no new cancel requests, we're
# handling this.
# drop the reference early
# type: ignore[import-not-found]
# expected via the `exit` and `quit` commands
# unexpected issue
# NoQA
# Fix the completer function to use the interactive console locals
# Since calling `wait_closed()` is not mandatory,
# we shouldn't log the traceback if this is not awaited.
# communicate() ignores BrokenPipeError and ConnectionResetError.
# write() and drain() can raise these exceptions.
# Futures.
# These three are overridable in subclasses.
# End of the overridable methods.
# Wake up the next waiter (if any) that isn't cancelled.
# Just in case putter is not done yet.
# Remove cancelled putters from self._putters.
# The putter could be removed from self._putters by a
# previous get_nowait call or a shutdown call.
# We were woken up by get_nowait(), but can't take
# the call.  Wake up the next in line.
# Just in case getter is not done yet.
# Remove cancelled getters from self._getters.
# The getter could be removed from self._getters by a
# previous put_nowait call, or a shutdown call.
# We were woken up by put_nowait(), but can't take
# the call.  Wake up the next in line.
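The wake-up discipline above (skip waiters that were cancelled while queued, so a wake-up is never lost) can be sketched with plain futures; this is a hypothetical stand-alone helper, not Queue's actual code:

```python
def wakeup_next(waiters):
    # Pop waiters until we find one that is still pending; cancelled
    # waiters are skipped and the wake-up passes to the next in line.
    while waiters:
        waiter = waiters.popleft()
        if not waiter.done():
            waiter.set_result(None)
            break
```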
# make local alias for the standard exception
# Name the logger after the package.
# Minimum number of _scheduled timer handles before cleanup of
# cancelled handles is performed.
# Minimum fraction of _scheduled timer handles that are cancelled
# before cleanup of cancelled handles is performed.
# Maximum timeout passed to select to avoid OS limitations
# format the task
# Try to skip getaddrinfo if "host" is already an IP. Users might have
# handled name resolution in their own code and pass in resolved IPs.
# If port's a service name like "http", don't skip getaddrinfo.
# Linux's inet_pton doesn't accept an IPv6 zone index after host,
# like '::1%lo0'.
# The host has already been resolved.
# "host" is not an IP address.
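The "is this already an IP?" check described above can be sketched with `socket.inet_pton` (which, on Linux, also rejects a zone index such as '::1%lo0', as noted); a simplified stand-in for asyncio's internal check:

```python
import socket

def is_ip_literal(host, family=socket.AF_UNSPEC):
    # Try each candidate family; inet_pton raises OSError for
    # anything that is not a valid literal for that family.
    families = ([socket.AF_INET, socket.AF_INET6]
                if family == socket.AF_UNSPEC else [family])
    for fam in families:
        try:
            socket.inet_pton(fam, host)
            return True
        except OSError:
            pass
    return False
```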
# Group addresses by family
# Issue #22429: run_forever() already finished, no need to
# stop it.
# Never happens if the peer disconnects after sending the whole content.
# Thus disconnection is always an exception from the user's perspective.
# Cancel the future.
# Basically it has no effect because protocol is switched back,
# no code should wait for it anymore.
# Weak references so we don't break Transport's ability to
# detect abandoned transports
# Waiters are unblocked by self._wakeup(), which is called
# from two places: self.close() and self._detach(), but only
# when both conditions have become true. To signal that this
# has happened, self._wakeup() sets self._waiters to None.
# Identifier of the thread running the event loop, or None if the
# event loop is not running
# The preserved state of async generator hooks.
# In debug mode, if the execution of a callback or a step of a task
# exceed this duration in seconds, the slow callback/task is logged.
# A weak set of all asynchronous generators that are
# being iterated by the loop.
# Set to True when `loop.shutdown_asyncgens` is called.
# Set to True when `loop.shutdown_default_executor` is called.
# gh-128552: prevent a refcycle of
# task.exception().__traceback__->BaseEventLoop.create_task->task
# If Python version is <3.6 or we don't have any asynchronous
# generators alive.
# Restore any pre-existing async generator hooks.
# An exception is raised if the future didn't complete, so there
# is no need to log the "destroy pending task" message
# The coroutine raised a BaseException. Consume the exception
# to not log a warning, the caller doesn't have access to the
# local task.
# Only check when the default executor is being used
# NB: sendfile syscall is not supported for SSL sockets and
# non-mmap files even if sendfile is supported by OS
# skip local addresses of different family
# all bind attempts failed
# An error when closing a newly created socket is
# not important, but it can overwrite more important
# non-OSError error. So ignore it.
# Use host as default for server_hostname.  It is an error
# if host is empty or not set, e.g. when an
# already-connected socket was passed or when only a port
# is given.  To avoid this error, you can pass
# server_hostname='' -- this will bypass the hostname
# check.  (This also means that if host is a numeric
# IP/IPv6 address, we will attempt to verify that exact
# address; this will probably fail, but it is possible to
# create a certificate for a specific IP address, so we
# don't judge it here.)
# If using happy eyeballs, default to interleave addresses by family
# not using happy eyeballs
# using happy eyeballs
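The per-family interleaving mentioned above ("happy eyeballs" address ordering) can be sketched as follows; a simplified stand-in for asyncio's internal helper:

```python
import itertools

def interleave_by_family(addr_infos):
    # Group by address family, then take one address from each
    # family in turn so IPv6 and IPv4 attempts alternate.
    by_family = {}
    for info in addr_infos:
        by_family.setdefault(info[0], []).append(info)
    out = []
    for round_ in itertools.zip_longest(*by_family.values()):
        out.extend(info for info in round_ if info is not None)
    return out
```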
# can't use functools.partial as it keeps a reference
# to exceptions
# can't use sock, _, _ as it keeps a reference to exceptions
# If they all have the same str(), raise one.
# Raise a combined exception so the user can see all
# the various error messages.
# No exceptions were collected, raise a timeout error
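The error-combining rule above can be sketched as follows; a simplified stand-in whose message format is illustrative:

```python
def combine_errors(exceptions):
    # If every failure stringifies the same, return one of them;
    # otherwise return a single OSError naming all of them.
    model = str(exceptions[0])
    if all(str(exc) == model for exc in exceptions):
        return exceptions[0]
    return OSError('Multiple exceptions: {}'.format(
        ', '.join(str(exc) for exc in exceptions)))
```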
# We allow AF_INET, AF_INET6, AF_UNIX as long as they
# are SOCK_STREAM.
# We support passing AF_UNIX sockets even though we have
# a dedicated API for that: create_unix_connection.
# Disallowing AF_UNIX in this method breaks backwards
# compatibility.
# Get the socket from the transport because SSL transport closes
# the old socket and creates a new SSL socket
# Pause early so that "ssl_protocol.data_received()" doesn't
# have a chance to get called before "ssl_protocol.connection_made()".
# show the problematic kwargs in exception msg
# join address by (family, protocol)
# Using order preserving dict
# each addr has to have info for each (family, proto) pair
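The order-preserving grouping described above can be sketched with a plain dict (insertion-ordered since Python 3.7); the 5-tuples mirror `getaddrinfo()` results:

```python
def group_addrs(addr_infos):
    # Bucket socket addresses by their (family, protocol) pair,
    # preserving first-seen order of the pairs.
    grouped = {}
    for family, type_, proto, _canonname, sockaddr in addr_infos:
        grouped.setdefault((family, proto), []).append(sockaddr)
    return grouped
```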
# "host" is already a resolved IP.
# Assume it's a bad family/type/protocol combination.
# Disable IPv4/IPv6 dual stack support (enabled by
# default on Linux) which makes a single socket
# listen on both address families.
# Assume the family is not enabled (bpo-30945)
# don't log parameters: they may contain sensitive information
# (password) and may be too long
# Second protection layer for unexpected errors
# in the default implementation, as well as for subclassed
# event loops with overloaded "default_exception_handler".
# Even though Futures don't have a context,
# Task is a subclass of Future,
# and sometimes the 'future' key holds a Task.
# Handles also have a context.
# Exception in the user set custom exception handler.
# Let's try default handler.
# Guard 'default_exception_handler' in case it is
# overloaded.
# Remove delayed calls that were cancelled if their number
# is too high
# Remove delayed calls that were cancelled from head of queue.
# Compute the desired timeout.
# Handle 'later' callbacks that are ready.
# This is the only place where callbacks are actually *called*.
# All other places just add them to ready.
# Note: We run all currently scheduled callbacks, but not any
# callbacks scheduled by callbacks run this time around --
# they will be run the next time (after another I/O poll).
# Use an idiom that is thread-safe without using locks.
# Adapted with permission from the EdgeDB project;
# license: PSFL.
# Exceptions are heavy objects that can have object
# cycles (bad for GC); let's not keep a reference to
# a bunch of them. It would be nicer to use a try/finally
# in __aexit__ directly but that introduced some diff noise
# Our parent task is being cancelled:
# or there's an exception in "async with":
# We use while-loop here because "self._on_completed_fut"
# can be cancelled multiple times if our parent task
# is being cancelled repeatedly (or even once, when
# our own cancellation is already in progress)
# "wrapper" is being cancelled while "foo" is
# still running.
# If this flag is set we *must* call uncancel().
# If there are no pending cancellations left,
# don't propagate CancelledError.
# Propagate CancelledError if there is one, except if there
# are other errors -- those have priority.
# If the parent task is being cancelled from the outside
# of the taskgroup, un-cancel and re-cancel the parent task,
# which will keep the cancel count stable.
# Always schedule the done callback even if the task is
# already done (e.g. if the coro was able to complete eagerly),
# otherwise if the task completes with an exception then it will cancel
# the current task too early. gh-128550, gh-128588
# task.exception().__traceback__->TaskGroup.create_task->task
# Since Python 3.8 Tasks propagate all exceptions correctly,
# except for KeyboardInterrupt and SystemExit which are
# still considered special.
# Not sure if this case is possible, but we want to handle
# it anyway.
# If parent task *is not* being cancelled, it means that we want
# to manually cancel it to abort whatever is being run right now
# in the TaskGroup.  But we want to mark parent task as
# "not cancelled" later in __aexit__.  Example situation that
# we need to handle:
# asyncio doesn't currently provide a high-level transport API
# to shutdown the connection.
# Keep a representation in debug mode to keep callback and
# parameters. For example, to log the warning
# "Executing <Handle...> took 2.5 seconds"
# Running and stopping the event loop.
# Methods scheduling callbacks.  All these return Handles.
# Method scheduling a coroutine object: create a task.
# Methods for interacting with threads.
# Network I/O methods returning Futures.
# Pipes and subprocesses.
# The reason to accept a file-like object instead of just a file
# descriptor is that we need to own the pipe and close it when the
# transport finishes. Passing f.fileno() and then closing the fd in the
# pipe transport while also closing f (or vice versa) can produce
# complicated errors.
# Ready-based callback registration methods.
# The add_*() methods return None.
# The remove_*() methods return True if something was removed,
# False if there was nothing to delete.
# Completion based I/O methods returning Futures.
# Signal handling.
# Task factory.
# Error handlers.
# Debug flag management.
# Child processes handling (Unix only).
# Move up the call stack so that the warning is attached
# to the line outside asyncio itself.
# Event loop policy.  The policy itself is always global, even if the
# policy's rules say that there is an event loop per thread (or other
# notion of context).  The default policy is installed by the first
# call to get_event_loop_policy().
# Lock for protecting the on-the-fly creation of the event loop policy.
# A TLS for the running event loop, used by _get_running_loop.
# NOTE: this function is implemented in C (see _asynciomodule.c)
# Alias pure-Python implementations for testing purposes.
# get_event_loop() is one of the most frequently called
# functions in asyncio.  Pure Python implementation is
# about 4 times slower than C-accelerated.
# Alias C implementations for testing purposes.
# Reset the loop and wakeupfd in the forked child process.
# Normally we would use contextlib.contextmanager.  However, this module
# is imported by runpy, which means we want to avoid any unnecessary
# dependencies.  Thus we use a class.
# Only the first thread to get the lock should trigger the load
# and reset the module's class. The rest can now getattr().
# Reentrant calls from the same thread must be allowed to proceed without
# triggering the load again.
# exec_module() and self-referential imports are the primary ways this can
# happen, but in any case we must return something to avoid deadlock.
# All module metadata must be gathered from __spec__ in order to avoid
# using mutated values.
# Get the original name to make sure no object substitution occurred
# Figure out exactly what attributes were mutated between the creation
# of the module and now.
# Code that set an attribute may have kept a reference to the
# assigned object, making identity more important than equality.
# If exec_module() was used directly there is no guarantee the module
# object was put into sys.modules.
# Update after loading since that's what would happen in an eager
# loading situation.
# Finally, stop triggering this method, if the module did not
# already update its own __class__.
# To trigger the load and raise an exception if the attribute
# doesn't exist.
# Threading is only needed for lazy loading, and importlib.util can
# be pulled in at interpreter startup, so defer until needed.
# Don't need to worry about deep-copying as trying to set an attribute
# on an object would have triggered the load,
# e.g. ``module.__spec__.loader = None`` would trigger a load from
# trying to access module.__spec__.
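The lazy-loading machinery sketched in the comments above is exposed publicly as `importlib.util.LazyLoader`; a minimal usage example applying the documented recipe to a stdlib module:

```python
import importlib.util
import sys

def lazy_import(name):
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    # exec_module() only arms the lazy machinery; the real exec is
    # deferred until the first attribute access triggers the load.
    loader.exec_module(module)
    return module
```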
# Bootstrap help #####################################################
# Until bootstrapping is complete, DO NOT import any modules that attempt
# to import importlib._bootstrap (directly or indirectly). Since this
# partially initialised package would be present in sys.modules, those
# modules would get an uninitialised copy of the source version, instead
# of a fully initialised version (either the frozen one or the one
# initialised below if the frozen one is not available).
# Just the builtin component, NOT the full Python module
# importlib._bootstrap is the built-in import, ensure we don't create
# a second copy of the module.
# __file__ is not guaranteed to be defined, e.g. if this code gets
# frozen by a tool like cx_Freeze.
# To simplify imports in test code
# Fully bootstrapped at this point, import whatever you like, circular
# dependencies and startup overhead minimisation permitting :)
# Public API #########################################################
# The module may have replaced itself in sys.modules!
# IMPORTANT: Whenever making changes to this module, be sure to run a top-level
# `make regen-importlib` followed by `make` in order to get the frozen version
# of the module updated. Not doing so will cause the Makefile to fail
# for everyone who doesn't have a ./python around to freeze the module
# in the early stages of compilation.
# See importlib._setup() for what is injected into the global namespace.
# When editing this code be aware that code executed at import time CANNOT
# reference any injected objects! This includes not only global code but also
# anything specified at the class level.
# Module injected manually by _set_bootstrap_module()
# Import builtin modules
# Assumption made in _path_join()
# Bootstrap-related code ######################################################
# Drive relative paths have to be resolved by the OS, so we reset the
# tail but do not add a path_sep prefix.
# Avoid losing the root's trailing separator when joining with nothing
# id() is used to generate a pseudo-random filename.
# We first write data to a temporary file, and then use os.replace() to
# perform an atomic rename.
# Raise an OSError so the 'except' below cleans up the partially
# written file.
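The temp-file-plus-`os.replace()` pattern described above can be sketched as follows; close to, but not exactly, importlib's private helper:

```python
import os

def write_atomic(path, data, mode=0o666):
    # id(path) doubles as a pseudo-random suffix for the temporary name.
    path_tmp = '{}.{}'.format(path, id(path))
    fd = os.open(path_tmp, os.O_EXCL | os.O_CREAT | os.O_WRONLY, mode & 0o666)
    try:
        with os.fdopen(fd, 'wb') as file:
            file.write(data)
        # os.replace() is atomic, so readers never see a partial file.
        os.replace(path_tmp, path)
    except OSError:
        try:
            os.unlink(path_tmp)
        except OSError:
            pass
        raise
```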
# Finder/loader utility code ###############################################
# Magic word to reject .pyc files generated by other Python versions.
# It should change for each incompatible change to the bytecode.
# The value of CR and LF is incorporated so if you ever read or write
# a .pyc file in text mode the magic number will be wrong; also, the
# Apple MPW compiler swaps their values, botching string constants.
# There were a variety of old schemes for setting the magic number.
# The current working scheme is to increment the previous value by
# 10.
# Starting with the adoption of PEP 3147 in Python 3.2, every bump in magic
# number also includes a new "magic tag", i.e. a human readable string used
# to represent the magic number in __pycache__ directories.  When you change
# the magic number, you must also set a new unique magic tag.  Generally this
# can be named after the Python major version of the magic number bump, but
# it can really be anything, as long as it's different than anything else
# that's come before.  The tags are included in the following table, starting
# with Python 3.2a0.
# MAGIC must change whenever the bytecode emitted by the compiler may no
# longer be understood by older implementations of the eval loop (usually
# due to the addition of new opcodes).
# Starting with Python 3.11, Python 3.n starts with magic number 2900+50n.
# Whenever MAGIC_NUMBER is changed, the ranges in the magic_values array
# in PC/launcher.c must also be updated.
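The interpreter's current magic number is exposed publicly as `importlib.util.MAGIC_NUMBER`; a minimal header check:

```python
import importlib.util

def pyc_magic_matches(header: bytes) -> bool:
    # The first four bytes of a .pyc file are the magic number
    # (two magic bytes plus b'\r\n', as described above).
    return header[:4] == importlib.util.MAGIC_NUMBER
```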
# For import.c
# Deprecated.
# We need an absolute path to the py file to avoid the possibility of
# collisions within sys.pycache_prefix, if someone has two different
# `foo/bar.py` on their system and they import both of them using the
# same sys.pycache_prefix. Let's say sys.pycache_prefix is
# `C:\Bytecode`; the idea here is that if we get `Foo\Bar`, we first
# make it absolute (`C:\Somewhere\Foo\Bar`), then make it root-relative
# (`Somewhere\Foo\Bar`), so we end up placing the bytecode file in an
# unambiguous `C:\Bytecode\Somewhere\Foo\Bar\`.
# Strip initial drive from a Windows path. We know we have an absolute
# path here, so the second part of the check rules out a POSIX path that
# happens to contain a colon at the second character.
# Strip initial path separator from `head` to complete the conversion
# back to a root-relative path before joining.
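The absolute-then-root-relative mangling described above can be sketched for POSIX paths; a simplified stand-in for `cache_from_source`'s `sys.pycache_prefix` handling:

```python
import os

def prefixed_cache_path(source_path, prefix):
    head, tail = os.path.splitdrive(os.path.abspath(source_path))
    # Strip the leading separator so joining keeps everything under
    # the prefix, unambiguously mirroring the source tree.
    tail = tail.lstrip(os.sep)
    return os.path.join(prefix, tail)
```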
# We always ensure write access so we can update cached files
# later even when the source files are read-only on Windows (#6074)
# FIXME: @_check_name is used to define class methods before the
# _bootstrap module is set by _set_bootstrap_module().
# Only the first two flags are defined.
# To avoid bootstrap issues.
# Module specifications #######################################################
# The caller may simply want a partially populated location-
# oriented spec.  So we set the location to a bogus value and
# fill in as much as we can.
# ExecutionLoader
# If the location is on the filesystem, but doesn't actually exist,
# we could return None here, indicating that the location is not
# valid.  However, we don't have a good way of testing since an
# indirect location (e.g. a zip file or URL) will look like a
# non-existent file relative to the filesystem.
# Pick a loader if one wasn't provided.
# Set submodule_search_paths appropriately.
# Check the loader.
# 2022-10-06(warsaw): For now, this helper is only used in _warnings.c and
# that use case only has the module globals.  This function could be
# extended to accept either that or a module object.  However, in the
# latter case, it would be better to raise certain exceptions when looking
# at a module, which should have either a __loader__ or __spec__.loader.
# For backward compatibility, it is possible that we'll get an empty
# dictionary for the module globals, and that cannot raise an exception.
# If working with a module:
# raise AttributeError('Module globals is missing a __spec__')
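The globals-based lookup described above (prefer `__spec__.loader`, fall back to `__loader__`, tolerate an empty dict) can be sketched as a hypothetical helper:

```python
def loader_from_globals(module_globals):
    spec = module_globals.get('__spec__')
    if spec is not None and getattr(spec, 'loader', None) is not None:
        return spec.loader
    # Backward compatibility: an empty globals dict must not raise.
    return module_globals.get('__loader__')
```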
# Loaders #####################################################################
# Warning implemented in _load_module_shim().
# For backwards compatibility, we delegate to set_data()
# The only reason for this method is for the name check.
# Issue #14857: Avoid the zero-argument form of super so the implementation
# of that form can be updated without breaking the frozen module.
# Adapt between the two APIs
# Figure out what directories are missing.
# Create needed directories.
# Probably another Python process already created the dir.
# Could be a permission error, read-only filesystem: just forget
# about writing the data.
# Same as above: just don't write the bytecode.
# Call _classify_pyc to do basic validation of the pyc but ignore the
# result. There's no source to check against.
# When invalidate_caches() is called, this epoch is incremented
# https://bugs.python.org/issue45703
# This is a top-level module. sys.path contains the parent path.
# Not a top-level module. The parent module's __path__ contains the
# parent path.
# If the parent's path has changed, recalculate _path
# Make a copy
# Note that no changes are made if a loader is returned, but we
# Save the copy
# This class is actually exposed publicly in a namespace package's __loader__
# attribute, so it should be available through a non-private name.
# https://github.com/python/cpython/issues/92054
# The import system never calls this method.
# We use this exclusively in module_from_spec() for backward-compatibility.
# Finders #####################################################################
# Drop entry if finder name is a relative path. The current
# working directory may have changed.
# Also invalidate the caches of _NamespacePaths
# Don't cache the failure as the cwd can easily change to
# a valid directory later on.
# If this ends up being a namespace package, namespace_path is
# the list of paths that will become its __path__.
# This is possibly part of a namespace package.
# We found at least one namespace path.  Return a spec which
# can create the namespace package.
# Base (directory) path
# tail_module keeps the original casing, for __file__ and friends
# Check if the module is the name of a directory (and thus a package).
# If a namespace package, return the path if we don't
# find a module in the next section.
# Check whether a file with a proper suffix exists.
# Directory has either been removed, turned into a file, or made
# unreadable.
# We store two cached versions, to handle runtime changes of the
# PYTHONCASEOK environment variable.
# Windows users can import modules with case-insensitive file
# suffixes (for legacy reasons). Make the suffix lowercase here
# so it's done once instead of for every import. This is safe as
# the specified suffixes to check against are always specified in a
# case-sensitive manner.
# If the ModuleSpec has been created by the FileFinder, it will have
# been created with an origin pointing to the .fwork file. We need to
# redirect this to the location in the Frameworks folder, using the
# content of the .fwork file.
# If the loader is created based on the spec for a loaded module, the
# path will be pointing at the Framework location. If this occurs,
# get the original .fwork location to use as the module's __file__.
# Ensure that the __file__ points at the .fwork location
# Import setup ###############################################################
# This function is used by PyImport_ExecCodeModuleObject().
# Not important enough to report.
# Modules injected manually by _setup()
# Import done by _install_external_importers()
# Module-level locking ########################################################
# For a list that can have a weakref to it.
# Copied from weakref.py with some simplifications and modifications unique to
# bootstrapping importlib. Many methods were simply deleted for simplicity, so if they
# are needed in the future they may work if simply copied back in.
# Inlined to avoid issues with inheriting from _weakref.ref before _weakref is
# set by _setup(). Since there's only one instance of this class, this is
# not expensive.
# A dict mapping module names to weakrefs of _ModuleLock instances.
# Dictionary protected by the global import lock.
# A dict mapping thread IDs to weakref'ed lists of _ModuleLock instances.
# This maps a thread to the module locks it is blocking on acquiring.  The
# values are lists because a single thread could perform a re-entrant import
# and be "in the process" of blocking on locks for more than one module.  A
# thread can be "in the process" because a thread cannot actually block on
# acquiring more than one lock but it can have set up bookkeeping that reflects
# that it intends to block on acquiring more than one lock.
# The dictionary uses a WeakValueDictionary to avoid keeping unnecessary
# lists around, regardless of GC runs. This way there's no memory leak if
# the list is no longer needed (GH-106176).
# Interactions with _blocking_on are *not* protected by the global
# import lock here because each thread only touches the state that it
# owns (state keyed on its thread id).  The global import lock is
# re-entrant (i.e., a single thread may take it more than once) so it
# wouldn't help us be correct in the face of re-entrancy either.
# If we have already reached the target_id, we're done - signal that it
# is reachable.
# Otherwise, try to reach the target_id from each of the given candidate_ids.
# There are no edges out from this node, skip it.
# bpo 38091: the chain of tid's we encounter here eventually leads
# to a fixed point or a cycle, but does not reach target_id.
# This means we would not actually deadlock.  This can happen if
# other threads are at the beginning of acquire() below.
# Follow the edges out from this thread.
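The reachability walk described above can be sketched stand-alone, with `edges` mapping a thread id to the ids it is blocking on (names hypothetical):

```python
def can_reach(target_id, candidate_ids, edges, seen=None):
    if seen is None:
        seen = set()
    for tid in candidate_ids:
        if tid == target_id:
            return True          # target reachable: acquiring would deadlock
        if tid in seen:
            continue             # fixed point or a cycle missing target_id
        seen.add(tid)
        if can_reach(target_id, edges.get(tid, ()), edges, seen):
            return True
    return False                 # no edges lead to target_id
```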
# Create an RLock for protecting the import process for the
# corresponding module.  Since it is an RLock, a single thread will be
# able to take it more than once.  This is necessary to support
# re-entrancy in the import system that arises from (at least) signal
# handlers and the garbage collector.  Consider the case of:
# If a different thread than the running one holds the lock then the
# thread will have to block on taking the lock, which is what we want
# for thread safety.
# The name of the module for which this is a lock.
# The owner of the lock: None if this lock is not owned by any thread,
# otherwise the thread identifier of the owning thread.
# Represents the number of times the owning thread has acquired this lock
# via a list of True.  This supports RLock-like ("re-entrant lock")
# behavior, necessary in case a single thread is following a circular
# import dependency and needs to take the lock for a single module
# more than once.
# Counts are represented as a list of True because list.append(True)
# and list.pop() are both atomic and thread-safe in CPython and it's hard
# to find another primitive with the same properties.
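The list-of-True representation described above relies on `list.append()` and `list.pop()` being atomic in CPython; sketched as a tiny counter:

```python
class AtomicCount:
    # append(True)/pop() need no extra lock under CPython's GIL.
    def __init__(self):
        self._count = []
    def incr(self):
        self._count.append(True)
    def decr(self):
        self._count.pop()
    def value(self):
        return len(self._count)
```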
# This is a count of the number of threads that are blocking on
# self.wakeup.acquire() awaiting to get their turn holding this module
# lock.  When the module lock is released, if this is greater than
# zero, it is decremented and `self.wakeup` is released one time.  The
# intent is that this will let one other thread make more progress on
# acquiring this module lock.  This repeats until all the threads have
# gotten a turn.
# This is incremented in self.acquire() when a thread notices it is
# going to have to wait for another thread to finish.
# See the comment above count for explanation of the representation.
# To avoid deadlocks for concurrent or re-entrant circular imports,
# look at _blocking_on to see if any threads are blocking
# on getting the import lock for any module for which the import lock
# is held by this thread.
# Try to find this thread.
# Start from the thread that holds the import lock for this module.
# Use the global "blocking on" state.
# Protect interaction with state on self with a per-module
# lock.  This makes it safe for more than one thread to try to
# acquire the lock for a single module at the same time.
# If the lock for this module is unowned then we can
# take the lock immediately and succeed.  If the lock
# for this module is owned by the running thread then
# we can also allow the acquire to succeed.  This
# supports circular imports (thread T imports module A
# which imports module B which imports module A).
# At this point we know the lock is held (because count !=
# 0) by another thread (because owner != tid).  We'll have
# to get in line to take the module lock.
# But first, check to see if this thread would create a
# deadlock by acquiring this module lock.  If it would
# then just stop with an error.
# It's not clear who is expected to handle this error.
# There is one handler in _lock_unlock_module but many
# times this method is called when entering the context
# manager _ModuleLockManager instead - so _DeadlockError
# will just propagate up to application code.
# This seems to be more than just a hypothetical -
# https://stackoverflow.com/questions/59509154
# https://github.com/encode/django-rest-framework/issues/7078
# Check to see if we're going to be able to acquire the
# lock.  If we are going to have to wait then increment
# the waiters so `self.release` will know to unblock us
# later on.  We do this part non-blockingly so we don't
# get stuck here before we increment waiters.  We have
# this extra acquire call (in addition to the one below,
# outside the self.lock context manager) to make sure
# self.wakeup is held when the next acquire is called (so
# we block).  This is probably needlessly complex and we
# should just take self.wakeup in the return codepath above.
# Now take the lock in a blocking fashion.  This won't
# complete until the thread holding this lock
# (self.owner) calls self.release.
# Taking the lock has served its purpose (making us wait), so we can
# give it up now.  We'll take it w/o blocking again on the
# next iteration around this 'while' loop.
# The following two functions are for consumption by Python/import.c.
# bpo-31070: Check if another thread created a new lock
# after the previous lock was destroyed
# but before the weakref callback was called.
# Concurrent circular import, we'll accept a partially initialized
# module object.
# Frame stripping magic ###############################################
# Typically used by loader classes as a method replacement.
# Fall through to a catch-all which always succeeds.
# file-location attributes
# aka, undefined
# the default
# This function is meant for use in _setup().
# loader will stay None.
# The passed-in module may not support attribute assignment,
# in which case we simply don't set the attributes.
# __name__
# __loader__
# A backward compatibility hack.
# While the docs say that module.__file__ is not set for
# built-in modules, and the code below will avoid setting it if
# spec.has_location is false, this is incorrect for namespace
# packages.  Namespace packages have no location, but their
# __spec__.origin is None, and thus their module.__file__
# should also be None for consistency.  While a bit of a hack,
# this is the best place to ensure this consistency.
# See https://docs.python.org/3/library/importlib.html#importlib.abc.Loader.load_module
# and bpo-32305
# __package__
# __spec__
# __path__
# XXX We should extend __path__ if it's already a list.
# __file__/__cached__
# Typically loaders will not implement create_module().
# If create_module() returns `None` then it means default
# module creation should be used.
# Used by importlib.reload() and _load_module_shim().
# Namespace package.
# Update the order of insertion into sys.modules for module
# clean-up at shutdown.
# It is assumed that all callers have been warned about using load_module()
# appropriately before calling this function.
# The module must be in sys.modules at this point!
# Move it to the end of sys.modules.
# Since module.__path__ may not line up with
# spec.submodule_search_paths, we can't necessarily rely
# on spec.parent here.
# A helper for direct use by the import system.
# Not a namespace package.
# This must be done before putting the module in sys.modules
# (otherwise an optimization shortcut in import.c becomes
# wrong).
# A namespace package so do nothing.
# Move the module to the end of sys.modules.
# We don't ensure that the import-related module attributes get
# set in the sys.modules replacement case.  Such modules are on
# their own.
# A method used during testing of _load_unlocked() and by
# _load_module_shim().
# The module is missing FrozenImporter-specific values.
# Fix up the spec attrs.
# Fix up the module attrs (the bare minimum).
# These checks ensure that _fix_up_module() is only called
# in the right places.
# Check the loader state.
# The only frozen modules with "origname" set are stdlib modules.
# Check the file attrs.
# We get the marshaled data in exec_module() (the loader
# part of the importer), instead of here (the finder part).
# The loader is the usual place to get the data that will
# be loaded into the module.  (For example, see _LoaderBasics
# in _bootstrap_external.py.)  Most importantly, this importer
# is simpler if we wait to get the data.
# However, getting as much data in the finder as possible
# to later load the module is okay, and sometimes important.
# (That's why ModuleSpec.loader_state exists.)  This is
# especially true if it avoids throwing away expensive data
# the loader would otherwise duplicate later and can be done
# efficiently.  In this case it isn't worth it.
# Warning about deprecation implemented in _load_module_shim().
# Import itself ###############################################################
# PyImport_Cleanup() is running or has been called.
# We check sys.modules here for the reload case.  While a passed-in
# target will usually indicate a reload there is no guarantee, whereas
# sys.modules provides one.
# The parent import may have already imported this module.
# We use the found spec since that is the one that
# we would have used if the parent module hadn't
# beaten us to the punch.
# Crazy side-effects!
# Temporarily add child we are currently importing to parent's
# _uninitialized_submodules for circular import tracking.
# Set the module as an attribute on its parent.
# Optimization: we avoid unneeded module locking if the module
# already exists in sys.modules and is fully initialized.
# Optimization: only call _bootstrap._lock_unlock_module() if
# module.__spec__._initializing is True.
# NOTE: because of this, initializing must be set *before*
# putting the new module in sys.modules.
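A minimal sketch of the fully-initialized check described above. `module_ready` is a hypothetical helper name for illustration; the real logic lives inline in importlib's import machinery:

```python
import sys

def module_ready(name):
    """True if `name` is in sys.modules and fully initialized."""
    module = sys.modules.get(name)
    if module is None:
        return False
    spec = getattr(module, "__spec__", None)
    # While a module is still being imported, spec._initializing is True;
    # only in that case is taking the per-module import lock worthwhile.
    return spec is None or not getattr(spec, "_initializing", False)
```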
# The hell that is fromlist ...
# If a package was imported, try to import stuff from fromlist.
# Backwards-compatibility dictates we ignore failed
# imports triggered by fromlist for modules that don't
# exist.
# Return up to the first dot in 'name'. This is complicated by the fact
# that 'name' may be relative.
# Figure out where to slice the module's name up to the first dot
# in 'name'.
# Slice end needs to be positive to alleviate the need to special-case
# when ``'.' not in name``.
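The positive-slice-end trick can be sketched like this (`parent_up_to_first_dot` is a hypothetical helper name; in the real code the cut-off computed from the possibly-relative name is applied to the module's absolute `__name__`):

```python
def parent_up_to_first_dot(absolute_name, relative_name):
    # Length of everything from the first dot onward in the
    # possibly-relative name; 0 when there is no dot, so the slice
    # end stays positive and no special case is needed.
    cut_off = len(relative_name) - len(relative_name.partition('.')[0])
    return absolute_name[:len(absolute_name) - cut_off]
```

For example, for `__import__('sub.mod', level=1)` inside package `pkg`, the relative name is `'sub.mod'` while the absolute name is `'pkg.sub.mod'`, and the slice yields `'pkg.sub'`.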
# Set up the spec for existing builtin/frozen modules.
# Directly load built-in modules needed during bootstrap.
# Instantiation requires _weakref to have been set.
# By default, defer to default semantics for the new module.
# We don't define exec_module() here since that would break
# hasattr checks we do to support backward compatibility.
# We don't define find_spec() here since that would break
# hasattr checks we do to support backward compatibility.
# One of the paths did not resolve (a directory does not exist).
# Just return something that will not exist.
# We can't use
# an issubclass() check here because apparently abc's __subclasscheck__()
# hook wants to create a weak reference to the object, but
# zipimport.zipimporter does not support weak references, resulting in a
# TypeError.  That seems terrible.
# type: ignore[union-attr]
# also exclude 'wrapper' due to singledispatch in the call stack
# deferred for performance (python/cpython#109829)
# gh-93353: Keep a reference to call os.remove() in late Python
# finalization.
# Not using tempfile.NamedTemporaryFile as it leads to deeper 'try'
# blocks due to the need to close the temporary file to work on Windows
# properly.
# from more_itertools 9.0
# type: ignore[misc]
# For compatibility with versions where *encoding* was a positional
# argument, it needs to be given explicitly when there are multiple
# *path_names*.
# This limitation can be removed in Python 3.15.
# This deliberately raises FileNotFoundError instead of
# NotImplementedError so that if this method is accidentally called,
# it'll still do the right thing.
# overload per python/importlib_metadata#435
# type: ignore[override]
# Required until Python 3.14
# This last clause is here to support old egg-info files.  Its
# effect is to just end up using the PathDistribution's self._path
# (which points to the egg-info file) attribute unchanged.
# Delay csv import, since Distribution.files is not as widely used
# as other parts of importlib.metadata
# Prepend the .egg-info/ subdir to the lines in this file.
# But this subdir is only available from PathDistribution's
# self._path.
# '@' is uniquely indicative of a url_req.
# rpartition is faster than splitext and suitable for this purpose.
# python/typeshed#10328
# from jaraco.collections 3.3
# Do not remove prior to 2024-01-01 or Python 3.14
# suppress spurious error from mypy
# from jaraco.text 3.5
# cache lower since it's likely to be called frequently.
# unique_everseen('AAAABBBCCDAABBB') --> A B C D
# unique_everseen('ABBCcAD', str.lower) --> A B C D
# copied from more_itertools 8.8
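For reference, the `unique_everseen` recipe that the doctest lines above describe, as published in the itertools recipes (copied here for illustration):

```python
from itertools import filterfalse

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    seen = set()
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen.add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen.add(k)
                yield element
```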
# from jaraco.functools 3.3
# it's the first call, replace the method with a cached, bound method
# Support cache clear even before cache has been created.
# From jaraco.functools 3.3
# provide pyexpat submodules as xml.parsers.expat submodules
# must do ampersand first
# must do ampersand last
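The ordering constraint stated above ("ampersand first" when escaping, "ampersand last" when unescaping) matters because `&` appears inside the other entity references. A minimal sketch modeled on `xml.sax.saxutils` (without the extra-entities dict the real functions accept):

```python
def escape(data):
    data = data.replace("&", "&amp;")   # ampersand first, or the '&' in
    data = data.replace("<", "&lt;")    # '&lt;'/'&gt;' would be re-escaped
    data = data.replace(">", "&gt;")
    return data

def unescape(data):
    data = data.replace("&lt;", "<")
    data = data.replace("&gt;", ">")
    data = data.replace("&amp;", "&")   # ampersand last, or '&amp;lt;'
    return data                         # would wrongly turn into '<'
```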
# use a text writer as is
# use a codecs stream writer as is
# wrap a binary writer with TextIOWrapper
# Keep the original file open when the TextIOWrapper is
# destroyed
# This is to handle passed objects that aren't in the
# IOBase hierarchy, but just have a write method
# TextIOWrapper uses this method to determine
# whether a BOM (for UTF-16, etc.) should be added
# contains uri -> prefix dicts
# Per http://www.w3.org/XML/1998/namespace, the 'xml' prefix is
# bound by definition to http://www.w3.org/XML/1998/namespace.  It
# does not need to be declared and will not usually be found in
# self._current_context.
# The name is in a non-empty namespace
# If it is not the default namespace, prepend the prefix
# Return the unqualified name
# ContentHandler methods
# ErrorHandler methods
# DTDHandler methods
# EntityResolver methods
# XMLReader methods
# XMLFilter methods
# --- Utility functions
# this is the parser list used by the make_parser function if no
# alternatives are given as parameters to the function
# tell modulefinder that importing sax potentially imports expatreader
# The parser module was found, but importing it
# failed unexpectedly; pass this exception through
# The parser module detected that it won't work properly,
# so try the next one
# --- Internal utility methods used by make_parser
# If we're using a sufficiently recent version of Python, we can use
# weak references to avoid cycles between the parser and content
# handler, otherwise we'll just have to pretend.
# --- ExpatLocator
# --- ExpatParser
# bpo-30264: Close the source on error to not leak resources:
# xml.sax.parse() doesn't give access to the underlying parser
# to the caller
# Redefined setContentHandler to allow changing handlers during parsing
# IncrementalParser methods
# The isFinal parameter is internal to the expat reader.
# If it is set to true, expat will check validity of the entire
# document. When feeding chunks, they are not normally final -
# except when invoked from close.
# FIXME: when to invoke error()?
# If we are completing an external entity, do nothing here
# break cycle created by expat handlers pointing to our methods
# Keep ErrorColumnNumber and ErrorLineNumber after closing.
# This pyexpat does not support SkippedEntity
# Locator methods
# event handlers
# no namespace
# default namespace
# this is not used (call directly to ContentHandler)
# FIXME: save error info here?
# The SAX spec requires skipped PEs to be reported with a '%'
# ---
# ===== XMLREADER =====
# ===== LOCATOR =====
# ===== INPUTSOURCE =====
# ===== ATTRIBUTESIMPL =====
# ===== ATTRIBUTESNSIMPL =====
# ===== SAXEXCEPTION =====
# ===== SAXPARSEEXCEPTION =====
# We need to cache this stuff at construction time.
# If this exception is raised, the objects through which we must
# traverse to get this information may be deleted by the time
# it gets caught.
# ===== SAXNOTRECOGNIZEDEXCEPTION =====
# ===== SAXNOTSUPPORTEDEXCEPTION =====
#============================================================================
# HANDLER INTERFACES
# ===== ERRORHANDLER =====
# ===== CONTENTHANDLER =====
# ===== DTDHandler =====
# ===== ENTITYRESOLVER =====
# CORE FEATURES
# true: Perform Namespace processing (default).
# false: Optionally do not perform Namespace processing
#        (implies namespace-prefixes).
# access: (parsing) read-only; (not parsing) read/write
# true: Report the original prefixed names and attributes used for Namespace
#       declarations.
# false: Do not report attributes used for Namespace declarations, and
#        optionally do not report original prefixed names (default).
# true: All element names, prefixes, attribute names, Namespace URIs, and
#       local names are interned using the built-in intern function.
# false: Names are not necessarily interned, although they may be (default).
# true: Report all validation errors (implies external-general-entities and
#       external-parameter-entities).
# false: Do not report validation errors.
# true: Include all external general (text) entities.
# false: Do not include external general entities.
# true: Include all external parameter entities, including the external
#       DTD subset.
# false: Do not include any external parameter entities, even the external
#        DTD subset.
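For illustration, toggling one of the core features above on a freshly created parser (while not parsing, since the namespaces feature is read-only during a parse):

```python
import xml.sax
from xml.sax.handler import feature_namespaces

parser = xml.sax.make_parser()
# Enable namespace processing before any input is fed to the parser.
parser.setFeature(feature_namespaces, True)
assert parser.getFeature(feature_namespaces)
```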
# CORE PROPERTIES
# data type: xml.sax.sax2lib.LexicalHandler
# description: An optional extension handler for lexical events like comments.
# access: read/write
# data type: xml.sax.sax2lib.DeclHandler
# description: An optional extension handler for DTD-related events other
#              than notations and unparsed entities.
# data type: org.w3c.dom.Node
# description: When parsing, the current DOM node being visited if this is
#              a DOM iterator; when not parsing, the root DOM node for
#              iteration.
# data type: String
# description: The literal string of characters that was the source for
#              the current event.
# access: read-only
# description: The name of the encoding to assume for input data.
# access: write: set the encoding, e.g. established by a higher-level
# initial value: UTF-8
# data type: Dictionary
# description: The dictionary used to intern common strings in the document
# access: write: Request that the parser uses a specific dictionary, to
# $Id: __init__.py 3375 2008-02-13 08:05:08Z fredrik $
# elementtree package
# --------------------------------------------------------------------
# The ElementTree toolkit is
# Copyright (c) 1999-2008 by Fredrik Lundh
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS.  IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# See https://www.python.org/psf/license for licensing details.
# ElementTree
# $Id: ElementPath.py 3375 2008-02-13 08:05:08Z fredrik $
# limited xpath support for element trees
# history:
# 2003-05-23 fl   created
# 2003-05-28 fl   added support for // etc
# 2003-08-27 fl   fixed parsing of periods in element names
# 2007-09-10 fl   new selection engine
# 2007-09-12 fl   fixed parent selector
# 2007-09-13 fl   added iterfind; changed findall to return a list
# 2007-11-30 fl   added namespaces support
# 2009-10-30 fl   added child element value filter
# Copyright (c) 2003-2009 by Fredrik Lundh.  All rights reserved.
# fredrik@pythonware.com
# http://www.pythonware.com
# Copyright (c) 1999-2009 by Fredrik Lundh
# Implementation module for XPath support.  There's usually no reason
# to import this module directly; the <b>ElementTree</b> does this for
# you, if needed.
# Same as '*', but no comments or processing instructions.
# It can be a surprise that '*' includes those, but there is no
# justification for '{*}*' doing the same.
# Any tag that is not in a namespace.
# The tag in any (or no) namespace.
# '}name'
# Any tag in the given namespace.
# '{}tag' == 'tag'
# FIXME: raise error if .. is applied at toplevel?
# FIXME: replace with real parser!!! refs:
# http://javascript.crockford.com/tdop/tdop.html
# ignore whitespace
# use signature to determine predicate type
# [@attribute] predicate
# [@attribute='value'] or [@attribute!='value']
# [tag]
# [.='value'] or [tag='value'] or [.!='value'] or [tag!='value']
# [index] or [last()] or [last()-index]
# [index]
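The predicate forms listed above, illustrated with `xml.etree.ElementTree`'s limited XPath support (note that `[index]` is 1-based):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring(
    "<doc><item id='a'>one</item><item id='b'>two</item><item>three</item></doc>"
)
assert len(root.findall("item[@id]")) == 2               # [@attribute]
assert root.findall("item[@id='b']")[0].text == "two"    # [@attribute='value']
assert root.findall("item[2]")[0].text == "two"          # [index], 1-based
assert root.findall("item[.='three']")[0].get("id") is None  # [.='value']
```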
# FIXME: what if the selector is "*" ?
# Generate all matching objects.
# compile selector pattern
# implicit all (FIXME: keep this?)
# execute selector pattern
# Find first matching object.
# Find all matching objects.
# Find text for first matching object.
# $Id: ElementInclude.py 3375 2008-02-13 08:05:08Z fredrik $
# limited xinclude support for element trees
# 2003-08-15 fl   created
# 2003-11-14 fl   fixed default loader
# Copyright (c) 2003-2004 by Fredrik Lundh.  All rights reserved.
# Limited XInclude support for the ElementTree package.
# For security reasons, the inclusion depth is limited to this read-only value by default.
# Fatal include error.
# Default loader.  This loader reads an included resource from disk.
# @param href Resource reference.
# @param parse Parse mode.  Either "xml" or "text".
# @param encoding Optional text encoding (UTF-8 by default for "text").
# @return The expanded resource.  If the parse mode is "xml", this
# is an Element instance; if "text", a string.  If the loader fails, it can
# return None or raise an OSError exception.
# @throws OSError If the loader fails to load the resource.
# Expand XInclude directives.
# @param elem Root Element or any ElementTree of a tree to be expanded
# @param loader Optional resource loader.  If omitted, it defaults to
#     default_loader().
# @param base_url The base URL of the original file, to resolve relative
#     include file references.
# @param max_depth The maximum number of recursive inclusions.
# @throws LimitedRecursiveIncludeError If the {@link max_depth} was exceeded.
# @throws FatalIncludeError If the function fails to include a given
#     resource.
# @throws OSError If the function fails to load a given resource.
# @throws ValueError If negative {@link max_depth} is passed.
# @returns None. Modifies tree pointed by {@link elem}
# look for xinclude elements
# process xinclude directive
# FIXME: this makes little sense with recursive includes
#---------------------------------------------------------------------
# Copyright (c) 1999-2008 by Fredrik Lundh.  All rights reserved.
# public symbols
# emulate old behaviour, for now
# Need to refer to the actual Python implementation, not the
# shadowing C implementation.
# assert iselement(element)
# first node
# If no parser was specified, create a default XMLParser
# The default XMLParser, when it comes from an accelerator,
# can define an internal _parse_whole API for efficiency.
# It can be used to parse the whole source without feeding
# it with chunks.
# assert self._root is not None
# lxml.etree compatibility.  use output method instead
# serialization support
# returns text write method and release all resources after using
# file_or_filename is a file name
# file_or_filename is a file-like object
# encoding determines if it is a text or binary writer
# Keep the original file open when the BufferedWriter is
# identify namespaces used in this tree
# maps qnames to *encoded* prefix:local names
# maps URIs to prefixes
# calculate serialized qname representation
# default element
# FIXME: can this be handled in XML 1.0?
# populate qname and namespaces table
# sort on prefix
# FIXME: handle boolean attributes
# this optional method is imported at the end of the module
# "well-known" namespace prefixes
# xml schema
# dublin core
# For tests and troubleshooting
# escape character data
# it's worth avoiding do-nothing calls for strings that are
# shorter than 500 characters, or so.  assume that's, by far,
# the most common case in most applications.
# escape attribute value
# Although section 2.11 of the XML specification states that CR or
# CR LF should be replaced with just LF, that applies only to the
# line endings which organize the file into lines.  Within attributes,
# we are replacing these with entity numbers, so they do not count.
# http://www.w3.org/TR/REC-xml/#sec-line-ends
# The current solution, contained in following six lines, was
# discussed in issue 17582 and 39011.
# debugging
# Reduce the memory consumption by reusing indentation strings.
# Start a new indentation level for the first child.
# Dedent after the last child by overwriting the previous indentation.
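The indentation behavior described above can be observed with `ET.indent()` (available since Python 3.9), which rewrites the tree's `text` and `tail` fields in place:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<a><b><c/></b></a>")
ET.indent(root, space="  ")
out = ET.tostring(root, encoding="unicode")
# Each nesting level adds one copy of `space` to the shared
# indentation strings stored in .text/.tail.
```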
# parsing
# Use the internal, undocumented _parser argument for now; When the
# parser argument of iterparse is removed, this can be killed.
# load event buffer
# TODO: Emit a ResourceWarning if it was not explicitly closed.
# (When the close() method will be supported in all maintained Python versions.)
# The _parser argument is for internal use only and must not be relied
# upon in user code. It will be removed in a future release.
# See https://bugs.python.org/issue17741 for more details.
# wire up the parser for event reporting
# iterparse needs this to set its root attribute properly :(
# Parse XML document from string constant.  Alias for XML().
# data collector
# element stack
# last element
# root element
# true if we're after an end tag
# also see ElementTree and TreeBuilder
# underscored names are provided for compatibility only
# name memo cache
# main callbacks
# miscellaneous callbacks
# Configure pyexpat: buffering, new-style attribute handling.
# unknown
# Internal API for XMLPullParser
# events_to_report: a list of events to report during parsing (same as
# the *events* argument of XMLPullParser's constructor).
# events_queue: a list of actual parsing events that will be populated
# by the underlying parser.
# TreeBuilder does not implement .start_ns()
# TreeBuilder does not implement .end_ns()
# expand qname, and convert name string to ascii, if possible
# Handler for expat's StartElementHandler. Since ordered_attributes
# is set, the attributes are reported as a list of alternating
# attribute name,value.
# deal with undefined entities
# XML_ERROR_UNDEFINED_ENTITY
# inside a doctype declaration
# parse doctype contents
# end of data
# get rid of circular references
# C14N 2.0
# Stack with globally and newly declared namespaces as (uri, prefix) pairs.
# Stack with user declared namespace prefixes as (uri, prefix) pairs.
# almost no element declares new namespaces
# Not declared yet => add new declaration.
# No default namespace declared => no prefix needed.
# As soon as a default namespace is defined,
# anything that has no namespace (and thus, no prefix) goes there.
# we may have to resolve qnames in text content
# Need to parse text first to see if it requires a prefix declaration.
# Resolve prefixes in attribute and tag text.
# Assign prefixes in lexicographical order of used URIs.
# Write namespace declarations in prefix order ...
# almost always empty
# ... followed by attributes in URI+name order
# No prefix for attributes in default ('') namespace.
# Honour xml:space attributes.
# Write the tag.
# Write the resolved qname text content.
# shorter than 500 characters, or so.  assume that's, by far,
# Import the C accelerators
# Element is going to be shadowed by the C implementation. We need to keep
# the Python version of it accessible for some "creative" uses by external code
# (see tests)
# Element, SubElement, ParseError, TreeBuilder, XMLParser, _set_factories
# Deprecated alias for xml.etree.ElementTree
# DOM implementations may use this as a base class for their own
# Node implementations.  If they don't, the constants defined here
# should still be used as the canonical definitions as they match
# the values given in the W3C recommendation.  Client code can
# safely refer to these values in all tests of Node.nodeType
# values.
#ExceptionCode
# Based on DOM Level 3 (WD 9 April 2002)
# This is a list of well-known implementations.  Well-known names
# should be published by posting to xml-sig@python.org, and are
# subsequently recorded in this file.
# DOM implementations not officially registered should register
# themselves with their
# User did not specify a name, try implementations in arbitrary
# order, returning the one that has the required features
# typically ImportError, or AttributeError
# This module should only be imported using "import *".
# The following names are defined:
# For backward compatibility
# use class' pop instead
# Retrieve xml namespace declaration attributes.
# When using namespaces, the reader may or may not
# provide us with the original name. If not, create
# *a* valid tagName from the current context.
# When the tagname is not prefixed, it just appears as
# localname
# Can't do that in startDocument, since we need the tagname
# XXX: obtain DocumentType
# Put everything we have seen so far into the document
# This content handler relies on namespace support
# use IncrementalParser interface, so we get the desired
# pull effect
# Note that the DOMBuilder class in LoadSave constrains which of these
# values can be set using the DOM Level 3 LoadSave feature.
# This dictionary maps from (feature,value) to a list of
# (option,value) pairs that should be set on the Options object.
# If a (feature,value) setting is not in this dictionary, it is
# not supported by the DOMBuilder.
# determine the encoding if the transport provided it
# determine the base URI if we can
# XXX should we check the scheme here as well?
# import email.message
# assert isinstance(info, email.message.Message)
# There's really no need for this class; concrete implementations
# should just implement the endElement() and startElement()
# methods as appropriate.  Using this makes it easy to only
# implement one of them.
# What does it mean to "clear" a document?  Does the
# documentElement disappear?
# This is used by the ID-cache invalidation checks; the list isn't
# actually complete, since the nodes being checked will never be the
# DOCUMENT_NODE or DOCUMENT_FRAGMENT_NODE.  (The node being checked is
# the node being added or removed, not the node being modified.)
# this is non-null only for elements and attributes
# non-null only for NS elements and attributes
# Can pass encoding only to document, to put it into XML header
### The DOM does not clearly specify what to return in this case
# empty text node; discard
# collapse text node
# Overridden in Element and Attr where localName can be Non-Null
# Node interfaces from Level 3 (WD 9 April 2002)
# The "user data" functions use a dictionary that is only present
# if some user data has been set, so be careful not to assume it
# ignore handlers passed for None
# minidom-specific API:
# A Node is its own context manager, to ensure that an unlink() call occurs.
# This is similar to how a file object works.
# fast path with less checks; usable by DOM builders if careful
# return True iff node is part of a document tree
# See the comments in ElementTree.py for behavior and
# implementation details.
# Add the single child node that represents the value of the attr
# nodeValue and value are set elsewhere
# This implementation does not call the base implementation
# since most of that is not needed, and the expense of the
# method call is not warranted.  We duplicate the removal of
# children, but that's all we needed from the base class.
# same as set
# Attribute dictionaries are lazily created
# attributes are double-indexed:
# in the future: consider lazy generation
# of attribute objects; this is too tricky
# for now because of headaches with
# namespaces.
# also sets nodeValue
# It might have already been part of this node, in which case
# it doesn't represent a change, and should not be returned.
# Restore this since the node is still useful and otherwise
# unlinked
# indent = current indentation
# addindent = indentation to add to higher levels
# newl = newline string
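The three parameters documented above are the ones minidom's pretty-printing passes along (`toprettyxml()` forwards `indent` as the per-level addindent and `newl` as the line separator to `writexml()`):

```python
from xml.dom import minidom

doc = minidom.parseString("<a><b/></a>")
# indent="  " is the string added per nesting level; newl is the default "\n".
pretty = doc.documentElement.toprettyxml(indent="  ")
assert pretty == "<a>\n  <b/>\n</a>\n"
```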
# DOM Level 3 attributes, based on the 22 Oct 2002 draft
# This creates a circular reference, but Element.unlink()
# breaks the cycle since the references to the attribute
# dictionaries are tossed.
# For childless nodes, normalize() has nothing to do.
# nodeValue is an alias for data
# nodeName is an alias for target
# DOM Level 3 (WD 9 April 2002)
# XXX This needs to be seriously changed if minidom ever
# supports EntityReference nodes.
# seq should be a list or tuple
# it's ok
# The spec is unclear what to raise here; SyntaxErr
# would be the other obvious candidate. Since Xerces raises
# InvalidCharacterErr, and since SyntaxErr is not listed
# for createDocument, that seems to be the better choice.
# XXX: need to check for illegal characters here and in
# createElement.
# DOM Level III clears this up when talking about the return value
# of this function.  If namespaceURI, qName and DocType are
# Null, the document is returned without a document element.
# Otherwise, if doctype or namespaceURI are not None,
# we go back to the above problem.
# internal
# Document attributes from Level 3 (WD 9 April 2002)
# mapping of (namespaceURI, localName) -> ElementInfo
# This needs to be done before the next test since this
# may *be* the document element, in which case it should
# end up re-ordered to the end.
# A couple of implementation-specific helpers to create node types
# not supported by the W3C DOM specs:
# we never searched before, or the cache has been cleared
# Previous search was completed and cache is still valid;
# no matching node.
# add child elements to stack for continued searching
# check this node
# We have to process all ID attributes before
# returning in order to get all the attributes set to
# be IDs using Element.setIdAttribute*().
# attribute node
# It's not clear from a semantic perspective whether we should
# call the user data handlers for the NODE_RENAMED event since
# we're re-using the existing node.  The draft spec has been
# interpreted as meaning "no, don't call the handler unless a
# new node is created."
# Note the cloning of Document and DocumentType nodes is
# implementation specific.  minidom handles those cases
# directly in the cloneNode() methods.
# Check for _call_user_data_handler() since this could conceivably
# be used with other DOM implementations (one of the FourThought
# DOMs, perhaps?).
# This is the Python mapping for interface NodeFilter from
# DOM2-Traversal-Range. It contains only constants.
# Warning!
# This module is tightly bound to the implementation details of the
# minidom DOM and can't be used with other DOM implementations.  This
# is due, in part, to a lack of appropriate methods in the DOM (there is
# no way to create Entity and Notation nodes via the DOM Level 2
# interface), and for performance.  The latter is the cause of some fairly
# cryptic code.
# Performance hacks:
# Expat typename -> TypeInfo
# not sure this is meaningful
# This *really* doesn't do anything in this case, so
# override it with something fast & minimal.
# This creates circular references!
# we don't care about parameter entities for the DOM
# internal entity
# node *should* be readonly, but we'll cheat
# To be general, we'd have to call isSameNode(), but this
# is sufficient for minidom:
# ignore this node & all descendents
# ignore this node, but make its children become
# children of the parent node
# If this ever changes, Namespaces.end_element_handler() needs to
# be changed to match.
# We have element type information and should remove ignorable
# whitespace; check for text nodes which contain only
# whitespace.
# Remove ignorable whitespace from the tree.
# This is still a little ugly, thanks to the pyexpat API. ;-(
# Don't include FILTER_INTERRUPT, since that's checked separately
# where allowed.
# move all child nodes to the parent, and remove this node
# node is handled by the caller
# restore the old handlers
# We're popping back out of the node we're skipping, so we
# shouldn't need to do anything but reset the handlers.
# framework document used by the fragment builder.
# Takes a string for the doctype, subset string, and namespace attrs string.
# get ns decls from node's ancestors
## this entref is the one that we made to put the subtree
# in; all of our given input is parsed in here.
# put the real document back, parse into the fragment to return
# list of (prefix, uri) ns declarations.  Namespace attrs are
# constructed from this and added to the element's attrs.
# This only adds some asserts to the original
# end_element_handler(), so we only define this when -O is not
# used.  If changing one, be sure to check the other to see if
# it needs to be changed as well.
# XXX This needs to be re-written to walk the ancestors of the
# context to build up the namespace information from
# declarations, elements, and attributes found in context.
# Otherwise we have to store a bunch more data on the DOM
# (though that *might* be more reliable -- not clear).
# add every new NS decl from context to L and attrs string
### OrderedDict
# An inherited dict maps keys to values.
# The inherited dict provides __getitem__, __len__, __contains__, and get.
# The remaining methods are order-aware.
# Big-O running times for all methods are the same as regular dictionaries.
# The internal self.__map dict maps keys to links in a doubly linked list.
# The circular doubly linked list starts and ends with a sentinel element.
# The sentinel element never gets deleted (this simplifies the algorithm).
# The sentinel is in self.__hardroot with a weakref proxy in self.__root.
# The prev links are weakref proxies (to prevent circular references).
# Individual links are kept alive by the hard reference in self.__map.
# Those hard references disappear when a key is deleted from an OrderedDict.
# Setting a new item creates a new link at the end of the linked list,
# and the inherited dictionary is updated with the new key/value pair.
# Deleting an existing item uses self.__map to find the link which gets
# removed by updating the links in the predecessor and successor nodes.
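The bookkeeping described above can be sketched as follows (a simplified model, not the real collections.OrderedDict; error handling and most methods omitted):

```python
import weakref

class _Link:
    __slots__ = ('prev', 'next', 'key', '__weakref__')

class MiniOrderedDict(dict):
    """Simplified sketch of the sentinel + weak-prev linked list."""
    def __init__(self):
        self.__hardroot = _Link()                  # sentinel, never deleted
        self.__root = root = weakref.proxy(self.__hardroot)
        root.prev = root.next = root
        self.__map = {}                            # key -> hard ref to link
        dict.__init__(self)

    def __setitem__(self, key, value):
        if key not in self:
            self.__map[key] = link = _Link()
            root = self.__root
            last = root.prev                       # always a weak proxy
            link.prev, link.next, link.key = last, root, key
            last.next = link
            root.prev = weakref.proxy(link)        # weak prev breaks cycles
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        link = self.__map.pop(key)                 # drop the hard reference
        link.prev.next = link.next                 # unlink from predecessor
        link.next.prev = link.prev                 # ... and successor

    def __iter__(self):
        curr = self.__root.next
        while curr is not self.__root:
            yield curr.key
            curr = curr.next
```

Insertion appends at the tail of the circular list; deletion splices the link out and lets its hard reference in `__map` disappear.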
# Traverse the linked list in order.
# Traverse the linked list in reverse order.
# number of links including root
# instance dictionary
# internal dict and inherited dict
# link objects
# proxy objects
# The same as in __delitem__().
# Leave the pure Python version in place.
### namedtuple
# Validate the field names.  At the user's option, either generate an error
# message or automatically replace the field name with a valid name.
# Variables used in the methods and docstrings
# Create all the named tuple methods to be added to the class namespace
# Modify function metadata to help with introspection and debugging
# Build up the class namespace dictionary
# and use type() to build the result class
# where the named tuple is created.  Bypass this step in environments where
# sys._getframe is not defined (Jython for example) or sys._getframe is not
# defined for arguments greater than 0 (IronPython), or where the user has
# specified a particular module.
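The frame lookup can be sketched like this (`make_thing` is a hypothetical stand-in for the factory; the real logic lives inside collections.namedtuple):

```python
import sys

def make_thing(module=None):
    """Pick a __module__ the way namedtuple does.

    If the caller did not pass one explicitly, peek at the caller's
    frame; fall back gracefully where sys._getframe is unavailable
    (Jython) or restricted for depth > 0 (IronPython).
    """
    if module is None:
        try:
            module = sys._getframe(1).f_globals.get('__name__', '__main__')
        except (AttributeError, ValueError):
            module = None
    return module
```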
### Load C helper function if available
# Needed so that self[missing_item] does not raise KeyError
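That behavior is easy to check with the stdlib Counter:

```python
from collections import Counter

c = Counter('ab')
assert c['z'] == 0        # missing elements report a zero count
assert 'z' not in c       # and the lookup does not insert the key
```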
# Emulate Bag.sortedByCount from Smalltalk
# Emulate Bag.do from Smalltalk and Multiset.begin from C++.
# Override dict methods where necessary
# There is no equivalent method for counters because the semantics
# would be ambiguous in cases such as Counter.fromkeys('aaabbc', v=2).
# Initializing counters to zero values isn't necessary because zero
# is already the default value for counter lookups.  Initializing
# to one is easily accomplished with Counter(set(iterable)).  For
# more exotic cases, create a dictionary first using a dictionary
# comprehension or dict.fromkeys().
# The regular dict.update() operation makes no sense here because the
# replace behavior results in some of the original untouched counts
# being mixed in with all of the other counts for a mishmash that
# doesn't have a straightforward interpretation in most counting
# contexts.  Instead, we implement straight addition.  Both the inputs
# and outputs are allowed to contain zero and negative counts.
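A quick illustration of the additive update semantics described above:

```python
from collections import Counter

c = Counter('aab')            # Counter({'a': 2, 'b': 1})
c.update('abc')               # counts are added, not replaced
assert c == Counter(a=3, b=2, c=1)

c.update({'a': -5})           # negative counts are allowed on input
assert c['a'] == -2
```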
# fast path when counter is empty
# dict() preserves the ordering returned by most_common()
# handle case where values are not orderable
# Multiset-style mathematical operations discussed in:
# Outputs guaranteed to only include positive counts.
# To strip negative and zero counts, add-in an empty counter:
# Results are ordered according to when an element is first
# encountered in the left operand and then by the order
# encountered in the right operand.
# When the multiplicities are all zero or one, multiset operations
# are guaranteed to be equivalent to the corresponding operations
# for regular sets.
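The multiset operations and the positive-counts guarantee can be seen directly:

```python
from collections import Counter

a = Counter(a=3, b=1)
b = Counter(a=1, b=2)

assert a + b == Counter(a=4, b=3)     # add counts
assert a - b == Counter(a=2)          # subtract, keeping only positives
assert a & b == Counter(a=1, b=1)     # intersection: min(a[x], b[x])
assert a | b == Counter(a=3, b=2)     # union: max(a[x], b[x])

# Adding an empty counter strips zero and negative counts:
assert Counter(a=2, b=-1, c=0) + Counter() == Counter(a=2)
```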
### always at least one map
# can't use 'key in mapping' with defaultdict
# support subclasses that define __missing__
# reuses stored hash values if possible
# like Django's Context.push()
# like Django's Context.pop()
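A minimal sketch of the push/pop analogy using collections.ChainMap:

```python
from collections import ChainMap

base = {'user': 'guest', 'color': 'red'}
ctx = ChainMap(base)

child = ctx.new_child({'user': 'admin'})   # like Context.push()
assert child['user'] == 'admin'            # the child map shadows the parent
assert child['color'] == 'red'             # misses fall through to parents

popped = child.parents                     # like Context.pop()
assert popped['user'] == 'guest'
```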
### UserDict
# Start by filling out the abstract methods
# Modify __contains__ and get() to work like dict
# does when __missing__ is present.
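For reference, this is how dict itself treats __missing__ (the behavior UserDict mirrors):

```python
class Zeroes(dict):
    def __missing__(self, key):
        return 0

d = Zeroes()
assert d['anything'] == 0          # __getitem__ consults __missing__
assert 'anything' not in d         # __contains__ does not
assert d.get('anything') is None   # neither does get()
```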
# Now, add the methods in dicts but not in MutableMapping
# Create a copy and avoid triggering descriptors
### UserList
# XXX should this accept an arbitrary sequence?
### UserString
# the following methods are defined in alphabetical order:
# wrapper for any additional drawing routines
# that need to know about each other
# (sqrt(5)-1)/2 -- golden ratio
#title("Penrose-tiling with kites and darts.")
# create compound shape
# create dancers
# dance
# keep window open until user closes it
# colormixer
################################
# Mini Lindenmayer tool
###############################
# Example 1: Snake kolam
# Example 2: Anklets of Krishna
#create ne-1 additional turtles
# let those ne turtles make a step
# in parallel:
# uses a list of turtles and a list of branch lists,
# one for each turtle!
# Here, 3 tree generators:
# File: tdemo_chaos.py
# Author: Gregor Lingl
# Date: 2009-06-24
# A demonstration of chaos
# Now zoom in:
# vanish if hideturtle() is not available ;-)
# Terminator can occur here
# or here
# turtledemo user pressed STOP
# example derived from
# Turtle Geometry: The Computer as a Medium for Exploring Mathematics
# by Harold Abelson and Andrea diSessa
# p. 96-98
# rotate and draw first subcurve with opposite parity to big curve
# rotate and draw second subcurve with same parity as big curve
# third subcurve
# fourth subcurve
# a final turn is needed to make the turtle
# end up facing outward from the large square
# Visual Modeling with Logo: A Structural Approach to Seeing
# by James Clayson
# Koch curve, after Helge von Koch who introduced this geometric figure in 1904
# p. 146
# if dir = 1 turn outward
# if dir = -1 turn inward
# frame
## create compound yellow/blue turtleshape for planets
## setup gravitational system
# (help_label,  help_doc)
# Make sure we are the currently activated OS X application
# so that our menu bar appears.
# Leave Mac button colors alone - #44254.
# t._Screen is a singleton class instantiated or retrieved
# by calling Screen.  Since tdemo canvas needs a different
# configuration, we manually set class attributes before
# calling Screen and manually call superclass init after.
# For wheel up, event.delta = 120 on Windows, -1 on darwin.
# X-11 sends Control-Button-4 event instead.
# TJR: leave this one.
# square-->rectangle
# writer turtle
# make tower of 6 discs
# prepare spartan user interface ;-)
# align blocks by the bottom edge
# range is non-inclusive of ending value
# move pivot to correct position
# WARNING WARNING WARNING WARNING
# This file is automatically written by generator.py. Any changes
# made here will be lost.
# To change the manually written methods edit libvirt-override.py
# To change the automatically written methods edit generator.py
# Manually written part of python bindings for libvirt
# The root of all libvirt errors.
# Never call virConnGetLastError().
# virGetLastError() is now thread local
# type: Optional[Tuple[int, int, str, int, str, Optional[str], Optional[str], int, int]]
# register the libvirt global error handler
# TODO: The C code requires a List and there is not *Mutable*Tuple for a better description such as
# auth: Tuple[List[int], Callable[[List[MutableTuple[int, str, str, str, Any]], _T], int], _T]
# Return library version.
# Invoke an EventHandle callback
# noqa E704
# noqa F811
# libvirt 0.9.2 and earlier required custom event loops to know
# that opaque=(cb, original_opaque) and pass the values individually
# to this wrapper. This should handle the back compat case, and make
# future invocations match the virEventHandleCallback prototype
# Invoke an EventTimeout callback
# future invocations match the virEventTimeoutCallback prototype
# a caller for the ff callbacks for custom event loop implementations
# Automatically written part of python bindings for libvirt
# Functions from module libvirt-host
# Functions from module libvirt-event
# Functions from module virterror
# virDomain functions from module libvirt-domain
# virDomain functions from module python
# virDomain functions from module libvirt-domain-checkpoint
# virDomain functions from module libvirt-domain-snapshot
# virDomain methods from virDomain.py (hand coded)
# virNetwork functions from module python
# virNetwork functions from module libvirt-network
# virNetwork methods from virNetwork.py (hand coded)
# virNetworkPort functions from module python
# virNetworkPort functions from module libvirt-network
# virInterface functions from module libvirt-interface
# virStoragePool functions from module python
# virStoragePool functions from module libvirt-storage
# virStoragePool methods from virStoragePool.py (hand coded)
# The size (in bytes) of buffer used in sendAll(),
# recvAll(), sparseSendAll() and sparseRecvAll()
# methods. This corresponds to the size of payload
# of a stream packet.
# virStorageVol functions from module libvirt-storage
# virStorageVol functions from module python
# virConnect functions from module python
# virConnect functions from module libvirt-interface
# virConnect functions from module libvirt-host
# virConnect functions from module libvirt-domain
# virConnect functions from module libvirt-storage
# virConnect functions from module libvirt
# virConnect functions from module libvirt-network
# virConnect functions from module libvirt-stream
# virConnect functions from module libvirt-nodedev
# virConnect functions from module libvirt-nwfilter
# virConnect functions from module libvirt-secret
# virConnect functions from module virterror
# virConnect methods from virConnect.py (hand coded)
# type: Dict[_DomainCB, _T]
# type: Dict[int, _T]
# virNodeDevice functions from module libvirt-nodedev
# virNodeDevice functions from module python
# virSecret functions from module python
# virSecret functions from module libvirt-secret
# virSecret functions from module libvirt
# virNWFilter functions from module python
# virNWFilter functions from module libvirt-nwfilter
# virNWFilterBinding functions from module libvirt-nwfilter
# virStream functions from module libvirt-stream
# virStream methods from virStream.py (hand coded)
# virDomainCheckpoint functions from module libvirt-domain-checkpoint
# virDomainCheckpoint methods from virDomainCheckpoint.py (hand coded)
# virDomainSnapshot functions from module libvirt-domain-snapshot
# virDomainSnapshot functions from module python
# virDomainSnapshot methods from virDomainSnapshot.py (hand coded)
# virBlkioParameterType
# virCPUCompareResult
# virConnectBaselineCPUFlags
# virConnectCloseReason
# virConnectCompareCPUFlags
# virConnectCredentialType
# virConnectDomainEventAgentLifecycleReason
# virConnectDomainEventAgentLifecycleState
# virConnectDomainEventBlockJobStatus
# virConnectDomainEventDiskChangeReason
# virConnectFlags
# virConnectGetAllDomainStatsFlags
# virConnectGetDomainCapabilitiesFlags
# virConnectListAllDomainsFlags
# virConnectListAllInterfacesFlags
# virConnectListAllNetworksFlags
# virConnectListAllNodeDeviceFlags
# virConnectListAllSecretsFlags
# virConnectListAllStoragePoolsFlags
# virDomainAbortJobFlagsValues
# virDomainAgentResponseTimeoutValues
# virDomainAuthorizedSSHKeysSetFlags
# virDomainBackupBeginFlags
# virDomainBlockCommitFlags
# virDomainBlockCopyFlags
# virDomainBlockJobAbortFlags
# virDomainBlockJobInfoFlags
# virDomainBlockJobSetSpeedFlags
# virDomainBlockJobType
# virDomainBlockPullFlags
# virDomainBlockRebaseFlags
# virDomainBlockResizeFlags
# virDomainBlockedReason
# virDomainChannelFlags
# virDomainCheckpointCreateFlags
# virDomainCheckpointDeleteFlags
# virDomainCheckpointListFlags
# virDomainCheckpointXMLFlags
# virDomainConsoleFlags
# virDomainControlErrorReason
# virDomainControlState
# virDomainCoreDumpFlags
# virDomainCoreDumpFormat
# virDomainCrashedReason
# virDomainCreateFlags
# virDomainDefineFlags
# virDomainDestroyFlagsValues
# virDomainDeviceModifyFlags
# virDomainDirtyRateCalcFlags
# virDomainDirtyRateStatus
# virDomainDiskErrorCode
# virDomainEventCrashedDetailType
# virDomainEventDefinedDetailType
# virDomainEventGraphicsAddressType
# virDomainEventGraphicsPhase
# virDomainEventID
# virDomainEventIOErrorAction
# virDomainEventPMSuspendedDetailType
# virDomainEventResumedDetailType
# virDomainEventShutdownDetailType
# virDomainEventStartedDetailType
# virDomainEventStoppedDetailType
# virDomainEventSuspendedDetailType
# virDomainEventTrayChangeReason
# virDomainEventType
# virDomainEventUndefinedDetailType
# virDomainEventWatchdogAction
# virDomainFDAssociateFlags
# virDomainGetHostnameFlags
# virDomainGetJobStatsFlags
# virDomainGraphicsReloadType
# virDomainGuestInfoTypes
# virDomainInterfaceAddressesSource
# virDomainJobOperation
# virDomainJobType
# virDomainLifecycle
# virDomainLifecycleAction
# virDomainMemoryFailureActionType
# virDomainMemoryFailureFlags
# virDomainMemoryFailureRecipientType
# virDomainMemoryFlags
# virDomainMemoryModFlags
# virDomainMemoryStatTags
# virDomainMessageType
# virDomainMetadataType
# virDomainMigrateFlags
# virDomainMigrateMaxSpeedFlags
# virDomainModificationImpact
# virDomainNostateReason
# virDomainNumatuneMemMode
# virDomainOpenGraphicsFlags
# virDomainPMSuspendedDiskReason
# virDomainPMSuspendedReason
# virDomainPausedReason
# virDomainProcessSignal
# virDomainRebootFlagValues
# virDomainRunningReason
# virDomainSaveImageXMLFlags
# virDomainSaveRestoreFlags
# virDomainSetTimeFlags
# virDomainSetUserPasswordFlags
# virDomainShutdownFlagValues
# virDomainShutdownReason
# virDomainShutoffReason
# virDomainSnapshotCreateFlags
# virDomainSnapshotDeleteFlags
# virDomainSnapshotListFlags
# virDomainSnapshotRevertFlags
# virDomainSnapshotXMLFlags
# virDomainState
# virDomainStatsTypes
# virDomainUndefineFlagsValues
# virDomainVcpuFlags
# virDomainXMLFlags
# virErrorDomain
# virErrorLevel
# virErrorNumber
# virEventHandleType
# virIPAddrType
# virInterfaceDefineFlags
# virInterfaceXMLFlags
# virKeycodeSet
# virMemoryParameterType
# virNWFilterBindingCreateFlags
# virNWFilterDefineFlags
# virNetworkCreateFlags
# virNetworkDefineFlags
# virNetworkEventID
# virNetworkEventLifecycleType
# virNetworkMetadataType
# virNetworkPortCreateFlags
# virNetworkUpdateCommand
# virNetworkUpdateFlags
# virNetworkUpdateSection
# virNetworkXMLFlags
# virNodeAllocPagesFlags
# virNodeDeviceCreateXMLFlags
# virNodeDeviceDefineXMLFlags
# virNodeDeviceEventID
# virNodeDeviceEventLifecycleType
# virNodeDeviceUpdateFlags
# virNodeDeviceXMLFlags
# virNodeGetCPUStatsAllCPUs
# virNodeGetMemoryStatsAllCells
# virNodeSuspendTarget
# virSchedParameterType
# virSecretDefineFlags
# virSecretEventID
# virSecretEventLifecycleType
# virSecretUsageType
# virStoragePoolBuildFlags
# virStoragePoolCreateFlags
# virStoragePoolDefineFlags
# virStoragePoolDeleteFlags
# virStoragePoolEventID
# virStoragePoolEventLifecycleType
# virStoragePoolState
# virStorageVolCreateFlags
# virStorageVolDeleteFlags
# virStorageVolDownloadFlags
# virStorageVolInfoFlags
# virStorageVolResizeFlags
# virStorageVolType
# virStorageVolUploadFlags
# virStorageVolWipeAlgorithm
# virStorageXMLFlags
# virStreamEventType
# virStreamFlags
# virStreamRecvFlagsValues
# virTypedParameterFlags
# virTypedParameterType
# virVcpuHostCpuState
# virVcpuState
# typed parameter names
# The root of all libxml2 errors.
# Type of the wrapper class for the C objects wrappers
# id() is sometimes negative ...
# Errors raised by the wrappers when some tree handling failed.
# Example of a class to handle SAX events
#print msg
# This class is the ancestor of all the Node classes. It provides
# the basic functionality shared by all nodes (and handles
# the exception gracefully), like name, navigation in the tree,
# doc reference, content access and serializing to a string or URI
# why is this duplicate naming needed?
# These are attributes common to nearly all types of nodes,
# defined as python2 properties
#
# Serialization routines, the optional arguments have the following
# meaning:
# Canonicalization routines:
# Selecting nodes using XPath, a bit slow because the context
# is allocated/freed every time but convenient.
# Remove namespaces
# support for python2 iterators
# implements the depth-first iterator for libxml2 DOM tree
# implements the breadth-first iterator for libxml2 DOM tree
# converters to present a nicer view of the XPath returns
# TODO try to cast to the most appropriate node class
# register an XPath function
# For the xmlTextReader parser configuration
# For the error callback severities
# register the libxml2 error handler
# normal behaviour when libxslt is not imported
# when libxslt is already imported, one must
# use libxslt's error handler instead
# assert f is _xmlTextReaderErrorFunc
# The cleanup now goes through a wrapper in libxml.c
# The interface to xmlRegisterInputCallbacks.
# Since this API does not allow passing a data object along with
# match/open callbacks, it is necessary to maintain a list of all
# Python callbacks.
# First pop python-level callbacks, when no more available - start
# popping built-in ones.
# Deprecated
# WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
# Everything before this line comes from libxml.py
# Everything after this line is automatically generated
# Functions from module HTMLparser
# Functions from module HTMLtree
# Functions from module SAX2
# Functions from module catalog
# Functions from module chvalid
# Functions from module debugXML
# Functions from module dict
# Functions from module encoding
# Functions from module entities
# Functions from module parser
# Functions from module parserInternals
# Functions from module python
# Functions from module relaxng
# Functions from module tree
# Functions from module uri
# Functions from module valid
# Functions from module xmlIO
# Functions from module xmlerror
# Functions from module xmlreader
# Functions from module xmlregexp
# Functions from module xmlsave
# Functions from module xmlschemas
# Functions from module xmlschemastypes
# Functions from module xmlstring
# accessors for xmlNode
# xmlNode functions from module HTMLparser
# xmlNode functions from module debugXML
# xmlNode functions from module tree
# xmlNode functions from module valid
# xmlNode functions from module xinclude
# xmlNode functions from module xmlschemas
# xmlNode functions from module xpath
# xmlNode functions from module xpathInternals
# xmlNode functions from module xpointer
# xmlDoc functions from module HTMLparser
# xmlDoc functions from module HTMLtree
# xmlDoc functions from module debugXML
# xmlDoc functions from module entities
# xmlDoc functions from module parser
# xmlDoc functions from module relaxng
# xmlDoc functions from module tree
# xmlDoc functions from module valid
# xmlDoc functions from module xinclude
# xmlDoc functions from module xmlreader
# xmlDoc functions from module xmlschemas
# xmlDoc functions from module xpath
# xmlDoc functions from module xpointer
# accessors for Error
# Error functions from module xmlerror
# accessors for parserCtxt
# parserCtxt functions from module HTMLparser
# parserCtxt functions from module parser
# parserCtxt functions from module parserInternals
# xmlAttr functions from module debugXML
# xmlAttr functions from module tree
# xmlAttr functions from module valid
# catalog functions from module catalog
# xmlDtd functions from module debugXML
# xmlDtd functions from module tree
# xmlDtd functions from module valid
# xmlEntity functions from module entities
# xmlNs functions from module tree
# xmlNs functions from module xpathInternals
# outputBuffer functions from module HTMLtree
# outputBuffer functions from module tree
# outputBuffer functions from module xmlIO
# inputBuffer functions from module xmlIO
# inputBuffer functions from module xmlreader
# xmlReg functions from module xmlregexp
# relaxNgSchema functions from module relaxng
# relaxNgSchema functions from module xmlreader
# relaxNgParserCtxt functions from module relaxng
# relaxNgValidCtxt functions from module relaxng
# relaxNgValidCtxt functions from module xmlreader
# Schema functions from module xmlreader
# Schema functions from module xmlschemas
# SchemaParserCtxt functions from module xmlschemas
# SchemaValidCtxt functions from module xmlreader
# SchemaValidCtxt functions from module xmlschemas
# xmlTextReader functions from module xmlreader
# xmlTextReaderLocator functions from module xmlreader
# accessors for URI
# URI functions from module uri
# ValidCtxt functions from module valid
# accessors for xpathContext
# xpathContext functions from module python
# xpathContext functions from module xpath
# xpathContext functions from module xpathInternals
# xpathContext functions from module xpointer
# accessors for xpathParserContext
# xpathParserContext functions from module xpathInternals
# htmlParserOption
# htmlStatus
# xmlAttributeDefault
# xmlAttributeType
# xmlBufferAllocationScheme
# xmlC14NMode
# xmlCharEncError
# xmlCharEncFlags
# xmlCharEncoding
# xmlDocProperties
# xmlElementContentOccur
# xmlElementContentType
# xmlElementType
# xmlElementTypeVal
# xmlEntityType
# xmlErrorDomain
# xmlErrorLevel
# xmlFeature
# xmlModuleOption
# xmlParserErrors
# xmlParserInputFlags
# xmlParserOption
# xmlParserProperties
# xmlParserSeverities
# xmlParserStatus
# xmlPatternFlags
# xmlReaderTypes
# xmlRelaxNGParserFlag
# xmlRelaxNGValidErr
# xmlResourceType
# xmlSaveOption
# xmlSchemaContentType
# xmlSchemaTypeType
# xmlSchemaValType
# xmlSchemaValidError
# xmlSchemaValidOption
# xmlSchemaWhitespaceValueType
# xmlSchematronValidOptions
# xmlTextReaderMode
# xmlXPathError
# xmlXPathObjectType
# snack.py: maps C extension module _snack to proper python types in module
# snack.
# The first section is a very literal mapping.
# The second section contains convenience classes that amalgamate
# the literal classes and make them more object-oriented.
# Form uses hotkeys
# we do the reference count for the helpArg in python! gross
# assume colorset is an integer for the custom color set
# combo widgets
# If the first element is not explicitly set to
# be non-default, make it the default
# you normally want to pack a ButtonBar with growx = 1
# This file was automatically generated by SWIG (https://www.swig.org).
# Version 4.3.0
# Do not make changes to this file unless you know what you are doing - modify
# the SWIG interface file instead.
# Import the low-level C/C++ module
# Register transfer_control in _ftdi1:
# Register context in _ftdi1:
# Register device_list in _ftdi1:
# Register size_and_time in _ftdi1:
# Register FTDIProgressInfo in _ftdi1:
# Register version_info in _ftdi1:
# Register eeprom in _ftdi1:
# Text wrapper for ldb bindings
# Copyright (C) 2015 Petr Viktorin <pviktori@redhat.com>
# Published under the GNU LGPLv3 or later
# Text wrapper for tdb bindings
## Add wrappers for functions and getters that don't deal with text
# To change the manually written methods edit libvirt-qemu-override.py
# Manually written part of python bindings for libvirt-qemu
# Automatically written part of python bindings for libvirt-qemu
# Functions from module libvirt-qemu
# virConnectDomainQemuMonitorEventRegisterFlags
# virDomainQemuAgentCommandTimeoutValues
# virDomainQemuMonitorCommandFlags
# To change the manually written methods edit libvirt-lxc-override.py
# Automatically written part of python bindings for libvirt-lxc
# Functions from module libvirt-lxc
# -*- coding: utf-8 -*-
# SPDX-License-Identifier: GPL-2.0
# https://docs.docker.com/compose/compose-file/#service-configuration-reference
# https://docs.docker.com/samples/
# https://docs.docker.com/compose/gettingstarted/
# https://docs.docker.com/compose/django/
# https://docs.docker.com/compose/wordpress/
# TODO: podman pod logs --color -n -f pod_testlogs
# If you see an error here, use Python 3.7 or greater
# import fnmatch
# fnmatch.fnmatchcase(env, "*_HOST")
# helper functions
# identity filter
# "podman stop" takes only int
# Error: invalid argument "3.0" for "-t, --time" flag: strconv.ParseUint: parsing "3.0":
# invalid syntax
# Anonymous: just specify a path and let the engine create the volume
# - /var/lib/mysql
# dest must start with / like /foo:/var/lib/mysql
# otherwise it's option like /var/lib/mysql:rw
# Specify an absolute path mapping
# - /opt/data:/var/lib/mysql
# Path on the host, relative to the Compose file
# - ./cache:/tmp/cache
# User-relative path
# - ~/configs:/etc/configs/:ro
# Named volume
# - datavolume:/var/lib/mysql
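One way to classify the short volume syntax (a sketch; `classify_volume` is a hypothetical helper, not podman-compose's actual code):

```python
def classify_volume(spec):
    """Classify a compose short-syntax volume string.

    Mirrors the cases listed above: a spec has a source only when the
    second ':'-separated part starts with '/' (the container dest);
    otherwise that part is an option like 'rw' and the spec is anonymous.
    """
    parts = spec.split(':')
    has_source = len(parts) > 1 and parts[1].startswith('/')
    if not has_source:
        return 'anonymous'              # e.g. "/var/lib/mysql" or "/var/lib/mysql:rw"
    src = parts[0]
    if src.startswith('/'):
        return 'absolute'               # e.g. "/opt/data:/var/lib/mysql"
    if src.startswith(('.', '~')):
        return 'relative'               # e.g. "./cache:/tmp/cache", "~/configs:/etc/configs/:ro"
    return 'named'                      # e.g. "datavolume:/var/lib/mysql"
```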
# TODO: ignore
# NOTE: if a named volume is used but not defined it
# gives ERROR: Named volume "abc" is used in service "xyz"
# unless it's anonymous-volume
# if already applied, nothing to do
# handle anonymous or implied volume
# missing source
# docker and docker-compose support subset of bash variable substitution
# https://docs.docker.com/compose/compose-file/#variable-substitution
# https://docs.docker.com/compose/env-file/
# https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
# $VARIABLE
# ${VARIABLE}
# ${VARIABLE:-default} default if not set or empty
# ${VARIABLE-default} default if not set
# ${VARIABLE:?err} raise error if not set or empty
# ${VARIABLE?err} raise error if not set
# $$ means $
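A minimal sketch of these substitution rules (hypothetical helper, not the actual podman-compose implementation):

```python
import re

_VAR = re.compile(r'''
    \$(?:
        (?P<named>[A-Za-z_][A-Za-z0-9_]*)        |  # $VARIABLE
        \{(?P<braced>[A-Za-z_][A-Za-z0-9_]*)
          (?:(?P<sep>:?[-?])(?P<arg>[^}]*))?\}   |  # ${VAR}, ${VAR:-d}, ${VAR:?e}, ...
        (?P<escaped>\$)                             # $$ means a literal $
    )
''', re.VERBOSE)

def substitute(text, env):
    """Apply the bash-style substitution forms listed above."""
    def repl(m):
        if m.group('escaped'):
            return '$'
        name = m.group('named') or m.group('braced')
        value = env.get(name)
        sep, arg = m.group('sep'), m.group('arg')
        # a ':' in the separator means empty counts as unset
        missing = value is None or (sep is not None
                                    and sep.startswith(':') and value == '')
        if sep and sep.endswith('-'):
            return arg if missing else value
        if sep and sep.endswith('?'):
            if missing:
                raise ValueError(arg or f'{name} is not set')
            return value
        return value or ''
    return _VAR.sub(repl, text)
```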
# Load service's environment variables
# we need to add `svc_envs` to the `subs_dict` so that it can evaluate a
# service environment that references another service's environment.
# type: ignore[assignment]
# type: ignore[arg-type]
# if int or string return as is
# type: ignore[return-value]
# TODO: might move to using "volume list"
# podman volume list --format '{{.Name}}\t{{.MountPoint}}' \
# TODO: we might need to add mount_dict[mount_type]["propagation"] = "z"
# ulimit can be a single value, i.e. ulimit: host
# or a dictionary or list:
# --volume, -v[=[[SOURCE-VOLUME|HOST-DIR:]CONTAINER-DIR[:OPTIONS]]]
# [rw|ro]
# [z|Z]
# [[r]shared|[r]slave|[r]private]|[r]unbindable
# [[r]bind]
# [noexec|exec]
# [nodev|dev]
# [nosuid|suid]
# [O]
# [U]
# TODO: --tmpfs /tmp:rw,size=787448k,mode=1777
# assemble path for source file first, because we need it for all cases
# pass file secrets to "podman build" with param --secret
# pass file secrets to "podman run" as volumes
# v3.5 and up added external flag, earlier the spec
# only required a name to be specified.
# docker-compose does not support external secrets outside of swarm mode.
# However accessing these via podman is trivial
# since these commands are directly translated to
# podman-create commands, albeit we can only support a 1:1 mapping
# at the moment
# The target option is only valid for type=env,
# which in an ideal world would work
# for type=mount as well.
# having a custom name for the external secret
# has the same problem as well
# https://docs.docker.com/compose/gpu-support/
# https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html
# v2: https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources
# cpus, cpu_shares, mem_limit, mem_reservation
# v3: https://docs.docker.com/compose/compose-file/compose-file-v3/#resources
# spec: https://github.com/compose-spec/compose-spec/blob/master/deploy.md#resources
# deploy.resources.{limits,reservations}.{cpus, memory}
# cpus_res_v3 = try_float(reservations.get('cpus', None), None)
# add args
# Handle pids limit from both container level and deploy section
# Ensure consistency between pids_limit and deploy.resources.limits.pids
# NOTE: `mode: host|ingress` is ignored
# TODO: add more options here, like dns, ipv6, etc.
# Note: podman-specific network mode
# type: ignore[operator]
# The bridge mode in podman is using the `podman` network.
# It seems weird, but we should keep this behavior to avoid
# breaking changes.
# networks can be specified as a dict with config per network or as a plain list without
# config.  Support both cases by converting the plain list to a dict with empty config.
# if a mac_address was specified on the container level, we need to check that it is not
# specified on the network level as well
# Note: mac_address is supported by compose spec now, and x-podman.mac_address
# is only for backward compatibility
# https://github.com/compose-spec/compose-spec/blob/main/05-services.md#mac_address
# if a mac_address was specified on the container level, apply it to the first network
# This works for Python > 3.6, because dict insert ordering is preserved, so we are
# sure that the first network we encounter here is also the first one specified by
# the user
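The ordering assumption is easy to verify:

```python
# dict preserves insertion order (guaranteed since Python 3.7), so the
# first key seen here is the first network the user wrote.
networks = {'backend': {}, 'frontend': {}}
assert next(iter(networks)) == 'backend'
```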
# Container level service aliases
# network level service aliases
# TODO: double check -e , --add-host, -v, --read-only
# new environment variable is set
# environment variable already exists in environment so pass its value
# currently podman shipped by fedora does not package this
# WIP: healthchecks are still work in progress
# If it's a string, it's equivalent to specifying CMD-SHELL
# podman does not add a shell to handle commands with whitespace
# If it's a list, first item is either NONE, CMD or CMD-SHELL.
# interval, timeout and start_period are specified as durations.
# convert other parameters to string
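The normalization described above can be sketched as (`normalize_healthcheck_test` is a hypothetical helper, not the project's actual function):

```python
def normalize_healthcheck_test(test):
    """Normalize a compose 'healthcheck.test' value to list form.

    Per the rules above: a plain string is equivalent to
    ["CMD-SHELL", <string>]; a list must start with NONE, CMD or CMD-SHELL.
    """
    if isinstance(test, str):
        return ['CMD-SHELL', test]
    if not test or test[0] not in ('NONE', 'CMD', 'CMD-SHELL'):
        raise ValueError('first item must be NONE, CMD or CMD-SHELL')
    return list(test)
```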
# handle podman extension
# command, ..etc.
# Check if the value exists in the enum
# Check if this is a value coming from a reference
# pylint: disable-next=raise-missing-from
# Compute hash based on the frozenset of items to ensure order does not matter
# Compare equality based on dictionary content
# avoid A depending on A
# NOTE: avoid creating loops, A->B->A
# parse dependencies for each service
# TODO: manage properly the dependencies coming from base services when extended
# the compose file has been normalized. depends_on, if present, can only be a dictionary
# the normalization adds a "service_started" condition by default
# parse link to get service name and remove alias
# expand the dependencies on each service
###################
# Override and reset tags
# type: ignore[no-redef]
# item is a tuple representing service's lower level key and value
# value can actually be a list, then all the elements from the list have to be
# collected
# type: ignore[index]
# podman and compose classes
# Iff part is last and non-empty, we leave an ongoing line to be completed later
# Make sure the last line ends with EOL
# pylint: disable=dangerous-default-value
# Intentionally mutable default argument to hold references to tasks
# pylint: disable=consider-using-with
# This is hacky to make the tasks not get garbage collected
# https://github.com/python/cpython/issues/91887
# deps should become a dictionary of dependencies
# the dependency service_started is set by default
# unless requested otherwise.
# clean duplicate mount targets
# this also works if podman hasn't been installed now
# just to make sure podman is running
# Priorities:
# - Command line --in-pod
# - docker-compose.yml x-podman.in_pod
# - Default value of true
# - Command line --pod-args
# - docker-compose.yml x-podman.pod_args
# - Default value
# If Docker Compose compatibility is enabled, set compatibility settings
# that are not explicitly set already.
# cmd = args.command
# make absolute
# no_ansi = args.no_ansi
# no_cleanup = args.no_cleanup
# dry_run = args.dry_run
# host_env = None
# env-file is relative to the CWD
# Load .env from the Compose file's directory to preserve
# behavior prior to 1.1.0 and to match with Docker Compose (v2).
# see: https://docs.docker.com/compose/reference/envvars/
# see: https://docs.docker.com/compose/env-file/
# Iterate over files primitively to allow appending to files in-loop
# log(filename, json.dumps(content, indent = 2))
# See also https://docs.docker.com/compose/how-tos/project-name/#set-a-project-name
# **project_name** is initialized to the argument of the `-p` command line flag.
# Stricter than actually needed, for simplicity:
# podman requires [a-zA-Z0-9][a-zA-Z0-9_.-]*
# If `include` is used, append included files to files
# As the compose obj is updated and tested on every loop iteration, not
# deleting `include` from it would cause it to be tested again and again,
# the original `include` values to be appended to `files`, and the included
# files to be processed forever. The solution is to remove the 'include' key
# from the compose obj; `include` entries present in included files are
# still processed correctly
# debug mode
# ver = compose.get('version')
# include services with no profile defined or the selected profiles
# NOTE: maybe add "extends.service" to _deps at this stage
# If there is no network_mode and networks in service,
# docker-compose will create default network named '<project_name>_default'
# and add the service to the default network.
# So we always set `default_net = 'default'` for compatibility
# volumes: [...]
# other top-levels:
# networks: {driver: ...}
# configs: {...}
# Check `--scale` args from CLI command
# Check `scale` value from compose yaml file
# Check `deploy: replicas:` value from compose yaml file
# Note: All conditions are necessary to handle case
# log(service_name,service_desc)
# log("deps:", [(c["name"], c["_deps"]) for c in given_containers])
# log("sorted:", [c["name"] for c in given_containers])
# pylint: disable=protected-access
# decorators to add commands and parse options
# pylint: disable=invalid-name,too-few-public-methods
# type: ignore[attr-defined]
# Trim extra indentation at start of multiline docstrings.
# type: ignore[list-item]
# actual commands
# pylint: disable=unused-argument
# URL contains a ":" character, a hint of a valid URL
# tweak path URL to get username from url parser
# when using a custom id for the ssh property, the path to a local SSH key is provided after "="
# Error if both `dockerfile_inline` and `dockerfile` are set
# if the given context was not recognized as a git URL, try joining paths to get a local file
# normalize the dockerfile path, as the user could have provided it in unpredictable formats
# custom dockerfile name was also not found in the file system
# we need 'getattr' as compose_down_parse does not configure 'no_deps'
# podman wait will always return rc -1.
# wait for the dependencies to be fulfilled
# start the container
# `podman build` does not cache, so don't always build
# if needed, tear down existing containers
# default is to tear down everything if any container is stale
# return first error code from create calls, if any
# return first error code from start calls, if any
# TODO: handle already existing
# TODO: if error creating do not enter loop
# TODO: colors if sys.stdout.isatty()
# Add colored service prefix to output by piping output through sed
# Task.cancelling() is new in python 3.11
# Generally a single returned item when using asyncio.FIRST_COMPLETED, but that's not
# guaranteed. If multiple tasks finish at the exact same time the choice of which
# finished "first" is arbitrary
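The behaviour described above can be observed with a small sketch (illustrative, not the original code): `done` may contain one task or both.

```python
import asyncio

async def main():
    tasks = {asyncio.create_task(asyncio.sleep(0)),
             asyncio.create_task(asyncio.sleep(0))}
    # FIRST_COMPLETED returns as soon as at least one task is done,
    # but several may have finished in the same loop iteration
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return len(done)

n = asyncio.run(main())
print(n)  # 1 or 2, depending on scheduling
```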
# If 2 containers exit at the exact same time, cancelling the other ones
# causes the status to be overwritten. Sleeping for 1 seems to fix this and make it match
# docker-compose
# Matches docker-compose behaviour, where the exit code of the task that triggered
# the cancellation is always propagated when aborting on failure
# defaults
# run podman
# adjust one-off container options
# TODO: handle volumes
# can't restart and --rm
# TODO: handle dependencies, handle creations
# the default is to print all logs, which in podman corresponds to 0 and
# does not need to be passed
# to ensure that the user did not execute the command by mistake
# Determine the maximum length of each column
# Print each row
# Format each cell using the maximum column width
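The column-width logic the comments above describe amounts to something like this sketch (row data is illustrative):

```python
rows = [["NAME", "STATUS"], ["db", "running"], ["web-frontend", "exited"]]

# maximum length of each column across all rows
widths = [max(len(row[col]) for row in rows) for col in range(len(rows[0]))]

# format each cell padded to its column's maximum width
lines = ["  ".join(cell.ljust(w) for cell, w in zip(row, widths)) for row in rows]
print("\n".join(lines))
```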
# command arguments parsing
# `--scale` argument needs to store as single value and not append,
# as multiple scale values could be confusing.
# NBD client library in userspace
# WARNING: THIS FILE IS GENERATED FROM
# generator/generator
# ANY CHANGES YOU MAKE TO THIS FILE WILL BE LOST.
# Copyright Red Hat
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
# Re-export Error exception as nbd.Error, adding some methods.
# Ensure that buf is already buffer-like
# The NBD shell.
# Allow intermixing of various options for replay in command-line order:
# each option registered with this Action subclass will append a tuple
# to a single list of snippets
# For back-compat, provide --connect as an undocumented synonym to --uri
# These hidden options are used by bash tab completion.
# It's an error if -n is passed with certain other options.
# Handle the informational options which exit.
# If verbose, set LIBNBD_DEBUG=1
# Create the handle.
# Run all snippets
# https://stackoverflow.com/a/11754346
# If there are no explicit -c or --command parameters, go interactive.
# Error codes, corresponding to FDT_ERR_... in libfdt.h
# QUIET_ALL can be passed as the 'quiet' parameter to avoid exceptions
# altogether. All functions passed this value will return an error instead
# of raising an exception.
# Pass this as the 'quiet' parameter to return -ENOTFOUND on NOTFOUND errors,
# instead of raising an exception.
# Compatibility for SWIG v4.2 and earlier. SWIG 4.2 would drop the first
# item from the list if it was None, returning only the second item.
# Expand size by this much when out of space
# Register fdt_header in _libfdt:
# Register fdt_reserve_entry in _libfdt:
# Register fdt_node_header in _libfdt:
# Register fdt_property in _libfdt:
# a special sentinel object
# send tracebacks to browser if true
# log tracebacks to files if not None
# number of source code lines per frame
# place to send the output
# just in case something goes wrong
# Copyright 2024 Google LLC
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Invert test to catch NaN
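The "inverted test" idiom the comment refers to exploits the fact that any comparison with NaN is False; a hedged sketch (the function name is illustrative):

```python
def in_range(x, lo=0.0, hi=1.0):
    # writing `not (lo <= x <= hi)` instead of `x < lo or x > hi`
    # routes NaN into the failure branch, since NaN comparisons are False
    if not (lo <= x <= hi):
        return False
    return True

print(in_range(0.5), in_range(float("nan")))  # → True False
```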
#! /usr/local/bin/python
# NOTE: the above "/usr/local/bin/python" is NOT a mistake.  It is
# intentionally NOT "/usr/bin/env python".  On many systems
# (e.g. Solaris), /usr/local/bin is not in $PATH as passed to CGI
# scripts, and /usr/local/bin is the default directory where Python is
# installed, so /usr/bin/env would be unable to find python.  Granted,
# binary installations by Linux vendors often install Python in
# /usr/bin.  So let those vendors patch cgi.py to match their choice
# of installation.
# History
# -------
# Michael McLay started this module.  Steve Majewski changed the
# interface to SvFormContentDict and FormContentDict.  The multipart
# parsing was inspired by code submitted by Andreas Paepcke.  Guido van
# Rossum rewrote, reformatted and documented the module and is currently
# responsible for its maintenance.
# =======
# Logging support
# Filename to log to, if not empty
# File object to log to, if not None
# The current logging function
# Parsing functions
# =================
# Maximum input we will accept when REQUEST_METHOD is POST
# 0 ==> unlimited input
# field keys and values (except for files) are returned as strings
# an encoding is required to decode the bytes read from self.fp
# fp.read() must return bytes
# For testing stand-alone
# Unknown content-type
# XXX Shouldn't, really
# RFC 2046, Section 5.1 : The "multipart" boundary delimiters are always
# represented as 7bit US-ASCII.
# Classes for field storage
# =========================
# Dummy attributes
# self.file = StringIO(value)
# Set default content-type for POST to what's traditional
# self.fp.read() must return bytes
# Process content-disposition header
# Process content-type header
# Honor any existing content-type header.  But if there is no
# content-type header, use some sensible defaults.  Assume
# outerboundary is "" at the outer level, but something non-false
# inside a multi-part.  The default for an inner part is text/plain,
# but for an outer part it should be urlencoded.  This should catch
# bogus clients which erroneously forget to include a content-type
# header.
# See below for what we do if there does exist a content-type header,
# but it happens to be something we don't understand.
# Ensure that we consume the file until we've hit our inner boundary
# Propagate max_num_fields into the sub class appropriately
# parser takes strings, not bytes
# Some clients add Content-Length for part headers, ignore them
# I/O buffering size for copy to file
# store data as bytes for files
# as strings for other fields
# keep bytes
# decode to string
# We may interrupt \r\n sequences if they span the 2**16
# byte boundary
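One way to avoid splitting a \r\n pair at a read-chunk boundary is to hold back a trailing '\r' and prepend it to the next chunk; this sketch is illustrative, not the cgi module's actual implementation.

```python
def split_crlf_safe(data, size):
    """Split `data` into chunks of roughly `size` bytes without ever
    separating a b"\\r\\n" pair across two chunks."""
    out, carry, i = [], b"", 0
    while i < len(data):
        chunk = carry + data[i:i + size]
        i += size
        carry = b""
        # hold back a trailing '\r' unless we're at the very end
        if chunk.endswith(b"\r") and i < len(data):
            carry, chunk = chunk[-1:], chunk[:-1]
        out.append(chunk)
    if carry:
        out.append(carry)
    return out

print(split_crlf_safe(b"abc\r\ndef", 4))  # → [b'abc', b'\r\ndef']
```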
# Test/debug code
# Replace with other classes to test those
# Utilities
# Invoke mainline
# Call test() when this file is run as a script (not imported as a module)
# Copyright (c) 2010-2024 Benjamin Peterson
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Useful for very coarse version differentiation.
# Jython always uses 32 bits.
# It's possible to have sizeof(long) != sizeof(Py_ssize_t).
# 32-bit
# 64-bit
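The pointer-width probe works because the result of len() must fit in Py_ssize_t: returning 1 << 31 from __len__ overflows only on 32-bit builds. A sketch of the same trick:

```python
import sys

class X:
    def __len__(self):
        return 1 << 31  # fits in Py_ssize_t only on 64-bit builds

try:
    len(X())
except OverflowError:
    MAXSIZE = (1 << 31) - 1   # 32-bit
else:
    MAXSIZE = (1 << 63) - 1   # 64-bit

print(MAXSIZE == sys.maxsize)  # → True on CPython either way
```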
# Invokes __set__.
# This is a bit ugly, but it avoids running this again by
# removing this descriptor.
# Subclasses should override this
# in case of a reload
# eventually raises ImportError
# same as get_code
# mark as package
# Add windows specific modules.
# Workaround for standalone backslash
# If the file has an encoding, encode unicode with it.
# This does exactly what the :func:`py3:functools.update_wrapper`
# function does on Python versions after 3.2. It sets the ``__wrapped__``
# attribute on ``wrapper`` object and it doesn't raise an error if any of
# the attributes mentioned in ``assigned`` and ``updated`` are missing on
# ``wrapped`` object.
# This requires a bit of explanation: the basic idea is to make a dummy
# metaclass for one level of class instantiation that replaces itself with
# the actual metaclass.
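The dummy-metaclass trick reads roughly like six's with_metaclass() (a simplified sketch, omitting the __prepare__ handling of the real implementation):

```python
def with_metaclass(meta, *bases):
    # a throwaway metaclass that intercepts exactly one class creation
    # and rebuilds the class with the real metaclass
    class metaclass(type):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)
    return type.__new__(metaclass, "temporary_class", (), {})

class Meta(type):
    pass

class C(with_metaclass(Meta, object)):
    pass

print(type(C))  # → <class '__main__.Meta'>
```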
# This version introduced PEP 560 that requires a bit
# of extra care (we mimic what is done by __build_class__).
# Optimization: Fast return for the common case.
# Complete the moves implementation.
# This code is at the end of this module to speed up module loading.
# Turn this module into a package.
# required for PEP 302 and PEP 451
# see PEP 366 @ReservedAssignment
# PEP 451 @UndefinedVariable
# Remove other six meta path importers, since they cause problems. This can
# happen if six is removed from sys.modules and then reloaded. (Setuptools does
# this for some reason.)
# Here's some real nastiness: Another "instance" of the six module might
# be floating around. Therefore, we can't use isinstance() to check for
# the six meta path importer, since the other six instance will have
# inserted an importer with different class.
# Finally, add the importer to the meta path import hook.
# libvirtaio -- asyncio adapter for libvirt
# Copyright (C) 2017  Wojtek Porczyk <woju@invisiblethingslab.com>
# version 2.1 of the License, or (at your option) any later version.
# License along with this library; if not, see
# <http://www.gnu.org/licenses/>.
# noqa F401
# pylint: disable=too-few-public-methods
# file descriptors
# type: Dict
# It seems like loop.add_{reader,writer} can be run multiple times
# and will still register the callback only once. Likewise,
# remove_{reader,writer} may be run even if the reader/writer
# is not registered (and will just return False).
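The idempotent behaviour described above can be checked with a socketpair; this sketch assumes a selector-based event loop (it won't work on Windows' proactor loop):

```python
import asyncio
import socket

async def main():
    loop = asyncio.get_running_loop()
    r, w = socket.socketpair()
    r.setblocking(False)
    hits = []
    # registering twice keeps a single callback (the second replaces the first)
    loop.add_reader(r, lambda: hits.append(r.recv(1)))
    loop.add_reader(r, lambda: hits.append(r.recv(1)))
    w.send(b"x")
    await asyncio.sleep(0.05)
    # remove_reader() reports whether a reader was actually registered
    assert loop.remove_reader(r) is True
    assert loop.remove_reader(r) is False
    r.close()
    w.close()
    return len(hits)

n = asyncio.run(main())
print(n)  # → 1
```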
# For the edge case of empty callbacks, any() returns False.
# timeouts
# scheduling timeout for next loop iteration
# pylint: disable=no-member
# main implementation
# type: Dict[int, Callback]
# Transient asyncio.Event instance dynamically created
# and destroyed by drain()
# NOTE invariant: _finished.is_set() iff _pending == 0
# pylint: disable=bad-whitespace
# type: Optional[virEventAsyncIOImpl]
# Breakpoint: https://github.com/python/cpython/pull/119891
# Pure aliases, have always been in typing
# Breakpoint: https://github.com/python/cpython/pull/116129
# Added with bpo-45166 to 3.10.1+ and some 3.9 versions
# The functions below are modified copies of typing internal helpers.
# They are needed by _ProtocolMeta and they provide support for PEP 646.
# Breakpoint: https://github.com/python/cpython/pull/27342
# Some unconstrained type variables.  These are used by the container types.
# (These are not for export.)
# Breakpoint: https://github.com/python/cpython/pull/31841
# Vendored from cpython typing._SpecialFrom
# Having a separate class means that instances will not be rejected by
# typing._type_check.
# Note that inheriting from this class means that the object will be
# rejected by typing._type_check, so do not use it if the special form
# is arguably valid as a type by itself.
# Breakpoint: https://github.com/python/cpython/pull/30530
# @final exists in 3.8+, but we backport it for all versions
# before 3.11 to keep support for the __final__ attribute.
# See https://bugs.python.org/issue46342
# 3.15
# A Literal bug was fixed in 3.11.0, 3.10.1 and 3.9.8
# Breakpoint: https://github.com/python/cpython/pull/29334
# similar logic to typing._deduplicate on Python 3.9+
# 3.11+
# This is not a real generic class.  Don't use outside annotations.
# A few are simply re-exported for completeness.
# Breakpoint: https://github.com/python/cpython/pull/118681
# `__match_args__` attribute was removed from protocol members in 3.13,
# we want to backport this change to older Python versions.
# Breakpoint: https://github.com/python/cpython/pull/110683
# Inheriting from typing._ProtocolMeta isn't actually desirable,
# but is necessary to allow typing.Protocol and typing_extensions.Protocol
# to mix without getting TypeErrors about "metaclass conflict"
# NOTE: DO NOT call super() in any methods in this class
# That would call the methods on typing._ProtocolMeta on Python <=3.11
# and those are slow
# Hack so that typing.Generic.__class_getitem__
# treats typing_extensions.Protocol
# as equivalent to typing.Protocol
# This has to be defined, or the abc-module cache
# complains about classes with this metaclass being unhashable,
# if we define only __eq__!
# Breakpoint: https://github.com/python/cpython/pull/113401
# typing.Protocol classes on <=3.11 break if we execute this block,
# because typing.Protocol classes on <=3.11 don't have a
# `__protocol_attrs__` attribute, and this block relies on the
# `__protocol_attrs__` attribute. Meanwhile, typing.Protocol classes on 3.12.2+
# break if we *don't* execute this block, because *they* assume that all
# protocol classes have a `__non_callable_proto_members__` attribute
# (which this block sets)
# The "runtime" alias exists for backwards compatibility.
# Our version of runtime-checkable protocols is faster on Python <=3.11
# Breakpoint: https://github.com/python/cpython/pull/112717
# noqa: E501
# TypeError is consistent with the behavior of NoneType
# Update this to something like >=3.13.0b1 if and when
# PEP 728 is implemented in CPython
# The standard library TypedDict in Python 3.9.0/1 does not honour the "total"
# keyword with old-style TypedDict().  See https://bugs.python.org/issue42059
# The standard library TypedDict below Python 3.11 does not store runtime
# information about optional and required keys when using Required or NotRequired.
# Generic TypedDicts are also impossible using typing.TypedDict on Python <3.11.
# Aaaand on 3.12 we add __orig_bases__ to TypedDict
# to enable better runtime introspection.
# On 3.13 we deprecate some odd ways of creating TypedDicts.
# Also on 3.13, PEP 705 adds the ReadOnly[] qualifier.
# PEP 728 (still pending) makes more changes.
# 3.10.0 and later
# typing.py generally doesn't let you inherit from plain Generic, unless
# the name of the class happens to be "Protocol"
# 3.14.0a7 and earlier
# This was specified in an earlier version of PEP 728. Support
# is retained for backwards compatibility, but only for Python
# 3.13 and lower.
# Support a field called "closed"
# Or "extra_items"
# Breakpoint: https://github.com/python/cpython/pull/104891
# Setting correct module is necessary to make typed dict classes
# pickleable.
# This runs when creating inline TypedDicts:
# 3.13+
# <=3.13
# replaces _strip_annotations()
# Breakpoint: https://github.com/python/cpython/pull/30304
# Assume if last argument is not None they are user defined
# < 3.11
# reverts injected Union[..., None] cases from typing.get_type_hints
# when a None default value is used.
# see https://github.com/python/typing_extensions/issues/310
# avoid accessing __annotations___
# Not a Union[..., None] or replacement conditions not fulfilled
# value=NoneType should have caused a skip above but check for safety
# Forward reference
# Compare whether the values differ. Note that even if equal, the value
# might be cached by typing._tp_cache, contrary to original_evaluated
# 3.10: ForwardRefs of UnionType might be turned into _UnionGenericAlias
# Python 3.9 has get_origin() and get_args() but those implementations don't support
# ParamSpecArgs and ParamSpecKwargs, so only Python 3.10's versions will do.
# Breakpoint: https://github.com/python/cpython/pull/25298
# 3.9
# 3.10+
# for pickling:
# Classes using this metaclass must provide a _backported_typevarlike ClassVar
# Add default and infer_variance parameters from PEP 696 and 695
# PEP 695 implemented (3.12+), can pass infer_variance to typing.TypeVar
# Python 3.10+ has PEP 612
# Add default parameter - PEP 696
# PEP 695 implemented, can pass infer_variance to typing.TypeVar
# Inherits from list as a workaround for Callable checks in Python < 3.9.2.
# Trick Generic __parameters__.
# Hack to get typing._type_check to pass.
# 3.9.0-1
# Trick Generic into looking into this for __parameters__.
# Hack to get typing._type_check to pass in Generic.
# 3.9 used by __getitem__ below
# 3.9; accessed during GenericAlias.__getitem__ when substituting
# 3.9 & typing.ParamSpec
# Special case for Z[[int, str, bool]] == Z[int, str, bool]
# This class inherits from list; do not convert
# determine new args
# 3.10
# needed for checks in collections.abc.Callable to accept this class
# 3.9.2
# <=3.10
# Hack: Arguments must be types, replace it with one.
# Remove dummy again
# backport needs __args__ adjustment only
# 3.11+; Concatenate does not accept ellipsis in 3.10
# Breakpoint: https://github.com/python/cpython/pull/30969
# <=3.12
# 3.14+?
# TypeForm(X) is equivalent to X but indicates to the type checker
# that the object is a TypeForm.
# PEP 692 changed the repr of Unpack[]
# Breakpoint: https://github.com/python/cpython/pull/104048
# <=3.11
# needed for compatibility with Generic[Unpack[Ts]]
# dataclass_transform exists in 3.11 but lacks the frozen_default parameter
# Breakpoint: https://github.com/python/cpython/pull/99958
# 3.12+
# Python 3.13.3+ contains a fix for the wrapped __new__
# Breakpoint: https://github.com/python/cpython/pull/132160
# Breakpoint: https://github.com/python/cpython/pull/99247
# Breakpoint: https://github.com/python/cpython/pull/23702
# We have to do some monkey patching to deal with the dual nature of
# Unpack/TypeVarTuple:
# - We want Unpack to be a kind of TypeVar so it gets accepted in
# - We want it to *not* be treated as a TypeVar for the purposes of
# If substituting a single ParamSpec with multiple arguments
# we do not check the count
# Generic modifies parameters variable, but here we cannot do this
# deal with TypeVarLike defaults
# required TypeVarLikes cannot appear after a defaulted one.
# since we validate TypeVarLike default in _collect_type_vars
# or _collect_parameters we can safely check parameters[alen]
# Breakpoint: https://github.com/python/cpython/pull/27515
# Python 3.11+
# - Catch AttributeError: not all Python implementations have sys._getframe()
# - Catch ValueError: maybe we're called from an unexpected module
# err on the side of leniency
# If we somehow get invoked from outside typing.py,
# also err on the side of leniency
# Cannot use "in" because origin may be an object with a buggy __eq__ that
# throws an error.
# Python 3.11+ _collect_type_vars was renamed to _collect_parameters
# A required TypeVarLike cannot appear after a TypeVarLike with a default
# if it was a direct call to `Generic[]` or `Protocol[]`
# Also, a TypeVarLike with a default cannot appear after a TypeVarTuple
# Collect nested type_vars
# tuple wrapped by _prepare_paramspec_params(cls, params)
# A required TypeVarLike cannot appear after a TypeVarLike with default
# Backport typing.NamedTuple as it exists in Python 3.13.
# In 3.11, the ability to define generic `NamedTuple`s was supported.
# This was explicitly disallowed in 3.9-3.10, and only half-worked in <=3.8.
# On 3.12, we added __orig_bases__ to call-based NamedTuples
# On 3.13, we deprecated kwargs-based NamedTuples
# Breakpoint: https://github.com/python/cpython/pull/105609
# TODO: Use inspect.VALUE here, and make the annotations lazily evaluated
# BaseException.add_note() existed on py311,
# but the __set_name__ machinery didn't start
# using add_note() until py312.
# Making sure exceptions are raised in the same way
# as in "normal" classes seems most important here.
# Breakpoint: https://github.com/python/cpython/pull/95915
# noqa: B024
# As a courtesy, register the most common stdlib buffer classes.
# Backport of types.get_original_bases, available on 3.12+ in CPython
# NewType is a class on Python 3.10+, making it pickleable
# The error message for subclassing instances of NewType was improved on 3.11+
# Breakpoint: https://github.com/python/cpython/pull/30268
# Breakpoint: https://github.com/python/cpython/pull/21515
# PEP 604 methods
# It doesn't make sense to have these methods on Python <3.10
# Breakpoint: https://github.com/python/cpython/pull/124795
# Breakpoint: https://github.com/python/cpython/pull/103764
# 3.12-3.13
# Copied and pasted from https://github.com/python/cpython/blob/986a4e1b6fcae7fe7a1d0a26aea446107dd58dd2/Objects/genericaliasobject.c#L568-L582,
# so that we emulate the behaviour of `types.GenericAlias`
# on the latest versions of CPython
# Unpack Backport passes isinstance(type_param, TypeVar)
# Setting this attribute closes the TypeAliasType from further modification
# Match the Python 3.12 error messages exactly
# Allow [], [int], [int, str], [int, ...], [int, T]
# Note in <= 3.9 _ConcatenateGenericAlias inherits from list
# Using 3.9 here will create problems with Concatenate
# alias.__parameters__ is not complete if Concatenate is present
# as it is converted to a list from which no parameters are extracted.
# The presence of this method convinces typing._type_check
# that TypeAliasTypes are types.
# For forward compatibility with 3.12, reject Unions
# that are not accepted by the built-in Union.
# Available since Python 3.14.0a3
# PR: https://github.com/python/cpython/pull/124415
# Available since Python 3.14.0a1
# PR: https://github.com/python/cpython/pull/119891
# Implements annotationlib.ForwardRef.evaluate
# If we pass None to eval() below, the globals of this module are used.
# Type parameters exist in their own scope, which is logically
# between the locals and the globals. We simulate this by adding
# them to the globals.
# Evaluate the forward reference
# Recursively evaluate the type
# Make use of type_params
# let's not overwrite something present
# The presence of this method convinces typing._type_check
# that Sentinels are types.
# Aliases for items that are in typing in all supported versions.
# We use hasattr() checks so this library will continue to import on
# future versions of Python that may remove these names.
# This is private, but it was defined by typing_extensions for a long time
# and some users rely on it.
# These are defined unconditionally because they are used in
# typing-extensions itself.
# Both libxml2mod and libxsltmod have a dependency on libxml2.so
# and they should share the same module, try to convince the python
# loader to work in that mode if feasible
# is there a better method?
# Everything below this point is automatically generated
# Functions from module extensions
# Functions from module extra
# Functions from module xslt
# Functions from module xsltInternals
# Functions from module xsltlocale
# Functions from module xsltutils
# xpathParserContext functions from module extra
# xpathParserContext functions from module functions
# xpathContext functions from module functions
# accessors for transformCtxt
# transformCtxt functions from module attributes
# transformCtxt functions from module documents
# transformCtxt functions from module extensions
# transformCtxt functions from module extra
# transformCtxt functions from module imports
# transformCtxt functions from module namespaces
# transformCtxt functions from module python
# transformCtxt functions from module templates
# transformCtxt functions from module variables
# transformCtxt functions from module xsltInternals
# transformCtxt functions from module xsltutils
# accessors for stylesheet
# stylesheet functions from module attributes
# stylesheet functions from module documents
# stylesheet functions from module extensions
# stylesheet functions from module imports
# stylesheet functions from module keys
# stylesheet functions from module namespaces
# stylesheet functions from module pattern
# stylesheet functions from module preproc
# stylesheet functions from module python
# stylesheet functions from module variables
# stylesheet functions from module xsltInternals
# stylesheet functions from module xsltutils
# xsltDebugStatusCodes
# xsltErrorSeverityType
# xsltStyleType
# xsltLoadType
# xsltOutputType
# xsltSecurityOption
# xsltTransformState
# xsltDebugTraceCodes
# SPDX-FileCopyrightText: 2015 Eric Larson
# SPDX-License-Identifier: Apache-2.0
# Could do syntax based normalization of the URI before
# computing the digest. See Section 6.2.2 of Std 66.
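A hedged sketch of the idea: lightly normalize the URI (lowercase scheme and host, drop a default port) before hashing it into a cache key. CacheControl's real keys differ; this only illustrates the Std 66 (Section 6.2.2) normalization the comment mentions.

```python
import hashlib
from urllib.parse import urlsplit, urlunsplit

def cache_key(uri):
    parts = urlsplit(uri)
    netloc = parts.netloc.lower()
    # drop the default port for http
    if parts.scheme.lower() == "http" and netloc.endswith(":80"):
        netloc = netloc[:-3]
    normalized = urlunsplit(
        (parts.scheme.lower(), netloc, parts.path or "/", parts.query, "")
    )
    return hashlib.sha256(normalized.encode()).hexdigest()

print(cache_key("HTTP://Example.com:80/a") == cache_key("http://example.com/a"))  # → True
```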
# https://tools.ietf.org/html/rfc7234#section-5.2
# We do not support caching of partial content: so if the request contains a
# Range header then we don't want to load anything from the cache.
# Bail out if the request insists on fresh data
# Check whether we can load the response from the cache:
# If we have a cached permanent redirect, return it immediately. We
# don't need to test our response for other headers b/c it is
# intrinsically "cacheable" as it is Permanent.
# See:
# Client can try to refresh the value by repeating the request
# with cache busting headers as usual (ie no-cache).
# Without date or etag, the cached response can never be used
# and should be deleted.
# TODO: There is an assumption that the result will be a
# determine freshness
# Check the max-age pragma in the cache control header
# If there isn't a max-age, check for an expires header
# Determine if we are setting freshness limit in the
# request. Note, this overrides what was in the response.
# adjust our current age by our min fresh
# Return entry if it is fresh enough
# we're not fresh. If we don't have an Etag, clear it out
# return the original handler
# We pass in the body separately; just put a placeholder empty
# string in the metadata.
# body is None can happen when, for example, we're only updating
# headers, as is the case in update_cached_response().
# The weakref can be None only in case the user used streamed request
# and did not consume or close it, and holds no reference to requests.Response.
# In such case, we don't want to cache the response.
# From httplib2: Don't cache 206's since we aren't going to
# If we've been given a body and our response has a valid Content-Length,
# we can check whether the body we've been given matches the expected
# size; if it doesn't, we'll just skip trying to cache it.
# Delete it from the cache if we happen to have it stored there
# https://tools.ietf.org/html/rfc7234#section-4.1:
# A Vary header field-value of "*" always fails to match.
# Storing such a response leads to a deserialization warning
# during cache lookup and is not allowed to ever be served,
# so storing it can be avoided.
# If we've been given an etag, then keep the response
# Add to the cache any permanent redirects. We do this before checking
# the Date headers.
# Add to the cache if the response headers demand it. If there
# is no date header then we can't do anything about expiring
# the cache.
# cache when there is a max-age > 0
# If the request can expire, it means we should cache it
# in the meantime.
# we didn't have a cached response
# Let's update our headers with the headers from the new request:
# http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26#section-4.1
# The server isn't supposed to send headers that would make
# the cached body invalid. But... just in case, we'll be sure
# to strip out ones we know might be problematic due to
# typical assumptions.
# we want a 200 b/c we have content via the cache
# update our cache
# The vagaries of garbage collection mean that self.__fp is
# not always set.  Using __getattribute__ and the private
# name[0] allows looking up the attribute value and raising an
# AttributeError when it doesn't exist. This stops things from
# infinitely recursing calls to getattr in the case where
# self.__fp hasn't been set.
# [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers
# We just don't cache it then.
# TODO: Add some logging here...
# Empty file:
# Return the data without actually loading it into memory,
# relying on Python's buffer API and mmap(). mmap() just gives
# a view directly into the filesystem's memory cache, so it
# doesn't result in duplicate memory use.
# We assign this to None here, because otherwise we can get into
# really tricky problems where the CPython interpreter deadlocks
# because the callback is holding a reference to something which
# has a __del__ method. Setting this to None breaks the cycle
# and allows the garbage collector to do its thing normally.
# Closing the temporary file releases memory and frees disk space.
# Important when caching big files.
# We may be dealing with b'', a sign that things are over:
# it's passed e.g. after we've already closed self.__buf.
# urllib executes this read to toss the CRLF at the end
# of the chunk.
# When a body isn't passed in, we'll read the response. We
# also update the response with a new file handler to be
# sure it acts as though it was never read.
# Empty bytestring if body is stored separately
# Construct our vary headers
# Short circuit if we've been given an empty set of data
# Previous versions of this library supported other serialization
# formats, but these have all been removed.
# Special case the '*' Vary value as it means we cannot actually
# determine if the cached response is suitable for this request.
# This case is also handled in the controller code when creating
# a cache entry, but is left here for backwards compatibility.
# Ensure that the Vary headers for the cached response match our
# request
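The matching rule sketched below is illustrative of what the comment describes, not CacheControl's exact code: every header named in the cached response's Vary list must have the same value in the new request, and "*" never matches.

```python
def vary_matches(cached_vary, cached_request_headers, new_request_headers):
    if cached_vary.strip() == "*":
        return False  # a Vary of "*" always fails to match (RFC 7234, 4.1)
    for name in (h.strip().lower() for h in cached_vary.split(",") if h.strip()):
        if cached_request_headers.get(name) != new_request_headers.get(name):
            return False
    return True

print(vary_matches("Accept-Encoding",
                   {"accept-encoding": "gzip"},
                   {"accept-encoding": "gzip"}))  # → True
print(vary_matches("*", {}, {}))                 # → False
```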
# This can happen if cachecontrol serialized to v1 format (pickle)
# using Python 2. A Python 2 str(byte string) will be unpickled as
# a Python 3 str (unicode string), which will cause the above to
# fail with:
# Discard any `strict` parameter serialized by older version of cachecontrol.
# Make a request to get a response
# Turn on logging
# try setting the cache
# Now try to get it
# type: ignore[index,misc]
# check for etags and add headers if appropriate
# Check for any heuristics that might update headers
# before trying to cache.
# apply any expiration heuristics
# We must have sent an ETag request. This could mean
# that we've been expired already or that we simply
# have an etag. In either case, we want to try and
# update the cache if that is the case.
# We are done with the server response, read a
# possible response body (compliant servers will
# not return one, but we cannot be 100% sure) and
# release the connection back to the pool.
# We always cache the 301 responses
# Wrap the response file with a wrapper that will cache the
# response when the stream has been consumed.
# type: ignore[method-assign]
# See if we should invalidate the cache.
# Give the request a from_cache attr to let people use it
# type: ignore[no-untyped-call]
# NOTE: This method should not change as some may depend on it.
# Make sure the directory exists
# Write our actual file
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
# from winbase.h
# If the position is out of range, do nothing.
# Adjust for Windows' SetConsoleCursorPosition:
# Adjust for viewport's scroll position
# Resume normal processing
# Note that this is hard-coded for ANSI (vs wide) bytes.
# double-underscore everything to prevent clashes with names of
# attributes on the wrapped stream object.
# special method lookup bypasses __getattr__/__getattribute__, see
# https://stackoverflow.com/questions/12632894/why-doesnt-getattr-work-with-exit
# thus, contextlib magic methods are not proxied via __getattr__
# AttributeError in the case that the stream doesn't support being closed
# ValueError for the case that the stream has already been detached when atexit runs
# Control Sequence Introducer
# Operating System Command
# The wrapped stream (normally sys.stdout or sys.stderr)
# should we reset colors to defaults after every .write()
# create the proxy wrapping our output stream
# We test if the WinAPI works, because even if we are on Windows
# we may be using a terminal that doesn't support the WinAPI
# (e.g. Cygwin Terminal). In this case it's up to the terminal
# to support the ANSI codes.
# should we strip ANSI sequences from our output?
# should we convert ANSI sequences into win32 calls?
# dict of ansi codes to win32 functions and parameters
# are we wrapping stderr?
# defaults:
# cursor position - absolute
# cursor position - relative
# A - up, B - down, C - forward, D - back
# 0 - change title and icon (we will only change title)
# 1 - change icon (we don't support this)
# 2 - change title
# from wincon.h
# dim text, dim background
# bright text, dim background
# dim text, bright background
# In order to emulate LIGHT_EX in windows, we borrow the BRIGHT style.
# So that LIGHT_EX colors and BRIGHT style do not clobber each other,
# we track them separately, since LIGHT_EX is overwritten by Fore/Back
# and BRIGHT is overwritten by Style codes.
# Emulate LIGHT_EX with BRIGHT Style
# Emulate LIGHT_EX with BRIGHT_BACKGROUND Style
# Because Windows coordinates are 0-based,
# and win32.SetConsoleCursorPosition expects 1-based.
# I'm not currently tracking the position, so there is no default.
# position = self.get_position()
# 0 should clear from the cursor to the end of the screen.
# 1 should clear from the cursor to the beginning of the screen.
# 2 should clear the entire screen, and move cursor to (1,1)
# get the number of character cells in the current buffer
# get number of character cells before current cursor position
# invalid mode
# fill the entire screen with blanks
# now set the buffer's attributes accordingly
# put the cursor where needed
# 0 should clear from the cursor to the end of the line.
# 1 should clear from the cursor to the beginning of the line.
# 2 should clear the entire line.
# Can get TypeError in testsuite where 'fd' is a Mock()
# the subclasses declare class attributes which are numbers.
# Upon instantiation we define instance attributes, which are the same
# as the class attributes but wrapped with the ANSI escape sequence
# These are fairly well supported, but not part of the standard.
# no-op if it wasn't registered
# python 2: no atexit.unregister. Oh well, we did our best.
# Issue #74: objects might become None at exit
# Someone already ran init() and it did stuff, so we won't second-guess them
# On newer versions of Windows, AnsiToWin32.__init__ will implicitly enable the
# native ANSI support in the console as a side-effect. We only need to actually
# replace sys.stdout/stderr if we're in the old-style conversion mode.
# Use this for initial setup as well, to reduce code duplication
# python 2
# missing arguments
# wrong OSC command
# should work
# wrong set command
# see issue #247
# Pretend to be on Windows
# Pretend that our mock stream has native ANSI support
# Our fake console says it has native vt support, so AnsiToWin32 should
# enable that support and do nothing else.
# Now let's pretend we're on an old Windows console, that doesn't have
# native ANSI support.
# We can't use assertNotWrapped here because replace_by(None)
# changes stdout/stderr already.
# just_fix_windows_console should be a no-op
# Emulate stdout=not a tty, stderr=tty
# to check that we handle both cases correctly
# Regular single-call test
# second call without resetting is always a no-op
# If init() runs first, just_fix_windows_console should be a no-op
# sanity check: stdout should be a file or StringIO object.
# It will only be AnsiToWin32 if init() has previously wrapped it
# Check the light, extended versions.
# Class-level variable, for convenience to use as a singleton.
# noqa
# https://github.com/postgres/postgres/blob/master/src/bin/psql/describe.c#L5471-L5638
# This is a simple \d[+] command. No table name to follow.
# This is a \d <tablename> command. A royal pain in the ass.
# Execute the sql, get the results and call describe_one_table_details on each table.
# Create a namedtuple called tableinfo and match what's in describe.c
# If it's a seq, fetch its value and store it for later.
# Do stuff here.
# Get column info
# index, or partitioned index
# Set the column names.
# /* Check if table is a view or materialized view */
# Prepare the cells of the table to print.
# Column
# Type
# Sequence
# Index column
# /* FDW options for foreign table column, only for 9.2 or later */
# Make Footers
# /* Footer information about an index */
# /* we assume here that index and table are in same schema */
# add_tablespace_footer(&cont, tableinfo.relkind,
# tableinfo.tablespace, true);
# /* Footer information about a sequence */
# /* Get the column that owns this sequence */
# /*
# * If we get no rows back, don't show anything (obviously). We should
# * never get more than one row back, but if we do, just ignore it and
# * don't print anything.
# */
# /* Footer information about a table */
# /* untranslated indextname */
# /* If exclusion constraint, print the constraintdef */
# /* Label as primary key or unique (but not both) */
# /* Everything after "USING" is echoed verbatim */
# /* Need these for deferrable PK/UNIQUE indexes */
# /* Add these for all cases */
# printTableAddFooter(&cont, buf.data);
# /* Print tablespace of the index on the same line */
# add_tablespace_footer(&cont, 'i',
# atooid(PQgetvalue(result, i, 10)),
# false);
# /* print table (and column) check constraints */
# /* untranslated constraint name and def */
# /* print foreign-key constraints (there are none if no triggers) */
# /* untranslated constraint name and def */
# /* print incoming foreign-key references (none if no triggers) */
# /* print rules */
# /* Everything after "CREATE RULE" is echoed verbatim */
# /* print partition info */
# /* print partition key */
# /* print list of partitions */
# /* Footer information about a view */
# * Print triggers next, if any (but only user-defined triggers).  This
# * could apply to either a table or a view.
# * split the output into 4 different categories. Enabled triggers,
# * disabled triggers and the two special ALWAYS and REPLICA
# * configurations.
# * Check if this trigger falls into the current category
# /* Print the category heading once */
# /* Everything after "TRIGGER" is echoed verbatim */
# * Finish printing the footer information about a table.
# /* print foreign server name */
# /* Footer information about foreign table */
# /* Print server name */
# /* Print per-table FDW options, if any */
# /* print inherited tables */
# /* print child tables */
# /* print the number of child tables, if any */
# /* display the list of child tables */
# /* Table type */
# /* OIDs, if verbose and not a materialized view */
# /* Tablespace info */
# add_tablespace_footer(&cont, tableinfo.relkind, tableinfo.tablespace,
# true);
# /* reloptions, if verbose */
# Found schema/name separator, move current pattern to schema
# Dollar is always quoted, whether inside quotes or not.
# Default static commands that don't rely on PGSpecial state are registered
# via the special_command decorator and stored in default_commands
# Account for 3 characters between each column
# Add 2 columns for a bit of buffer
# It is possible to have `\e filename` or `SELECT * FROM \e`. So we check
# for both conditions.
# The reason we can't simply do .strip('\e') is that it strips characters,
# not a substring. So it would strip the "e" at the end of the sql as well!
# Ex: "select * from style\e" -> "select * from styl".
# Populate the editor buffer with the partial sql (if available) and a
# placeholder comment.
# Don't return None for the caller to deal with.
# Empty string is ok.
# Replace the specified file destination with STDIN or STDOUT
# In case of arguments aggregation we replace all positional arguments until the
# first one not present in the query. Then we aggregate all the remaining ones and
# replace the placeholder with them.
# remove consumed arguments ( - 1 to include current value)
# we consumed all arguments
# If either name or query is missing then print the usage and complain.
# SPDX-License-Identifier: MIT
# noqa: F403
# /!\ This version compatibility check section must be Python 2 compatible. /!\
# Copied from pyproject.toml
# From here on, we can use Python 3 features, but the syntax must remain
# Python 2 compatible.
# noqa: E402
# Remove '' and current working directory from the first entry
# of sys.path, if present to avoid using current directory
# in pip commands check, freeze, install, list and show,
# when invoked as python -m pip <command>
# If we are running from a wheel, add the wheel to sys.path
# This allows the usage python pip-*.whl/pip install pip-*.whl
# __file__ is pip-*.whl/pip/__main__.py
# first dirname call strips off '/__main__.py', second strips off '/pip'
# Resulting path is the name of the wheel itself
# Add that to sys.path so we can import pip
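The double-dirname trick above can be sketched as follows (the wheel path is hypothetical):

```python
import os
import sys

# Hypothetical __file__ when pip is run from inside a wheel:
file_path = "/tmp/pip-24.0-py3-none-any.whl/pip/__main__.py"

# First dirname strips "/__main__.py", second strips "/pip",
# leaving the path of the wheel itself.
wheel_path = os.path.dirname(os.path.dirname(file_path))
assert wheel_path == "/tmp/pip-24.0-py3-none-any.whl"

# Prepending the wheel to sys.path makes `import pip` resolve into it.
sys.path.insert(0, wheel_path)
```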
# init_logging() must be called before any call to logging.getLogger()
# which happens at import of most modules.
# This would happen if someone is using pip from inside a zip file. In that
# case, we can use that directly.
# virtualenv < 20 overwrites site.py without getsitepackages
# fallback on get_purelib/get_platlib.
# this is known to miss things, but shouldn't in the cases
# where getsitepackages() has been removed (inside a virtualenv)
# As the build environment is ephemeral, it's wasteful to
# pre-compile everything, especially as not every Python
# module will be used/compiled in most cases.
# The prefix specified two lines above, thus
# target from config file or env var should be ignored
# Customize site to:
# - ensure .pth files are honored
# - prevent access to system site packages
# We're explicitly evaluating with an empty extra value, since build
# environments are not provided any mechanism to select specific extras.
# FIXME: Consider direct URL?
# Shorthand
# The kinds of configurations there are.
# User Specific
# System Wide
# [Virtual] Environment Specific
# from PIP_CONFIG_FILE
# from Environment Variables
# NOTE: Maybe use the optionx attribute to normalize keynames.
# only prefer long opts
# Because we keep track of where we got the data from
# disassembling triggers a more useful error message than simply
# "No such key" in the case that the key isn't in the form command.option
# Modify the parser and the configuration
# The option was not removed.
# The section may be empty after the option was removed.
# Ensure directory exists.
# Ensure the directory's permissions (it needs to be writable)
# Private routines
# NOTE: Dictionaries are not populated if not loaded. So, conditionals
# If there's specific variant set in `load_only`, load only
# that variant, not the others.
# Keeping track of the parsers used
# If there is no such file, don't bother reading it but create the
# parser anyway, to hold the data.
# Doing this is useful when modifying and saving files, where we don't
# need to construct a parser.
# See https://github.com/pypa/pip/issues/4963
# See https://github.com/pypa/pip/issues/4893
# XXX: This is patched in the tests.
# SMELL: Move the conditions out of this function
# per-user config is not loaded when env_config_file exists
# The legacy config file is overridden by the new config file
# virtualenv config
# Determine which parser to modify
# This should not happen if everything works correctly.
# Use the highest priority parser.
# Try to load the existing state
# Explicitly suppressing exceptions, since we don't want to
# error out if the cache file is invalid.
# Determine if we need to refresh the state
# If we do not have a path to cache in, don't bother saving.
# Check to make sure that we own the directory
# Now that we've ensured the directory is owned by this user, we'll go
# ahead and make sure that all our directories are created.
# Include the key so it's easy to tell which pip wrote the
# file.
# Since we have a prefix-specific state file, we can just
# overwrite whatever is there, no need to check.
# Best effort.
# Lets use PackageFinder to see what the latest pip version is
# Pass allow_yanked=False so we don't suggest upgrading to a
# yanked version.
# Explicitly set to False
# Only suggest upgrade if pip is installed by pip.
# We want to generate an url to use as our cache key, we don't want to
# just reuse the URL because it might have other items in the fragment
# and we don't care about those.
# Include interpreter name, major and minor version in cache key
# to cope with ill-behaved sdists that build a different wheel
# depending on the python version their setup.py is being run on,
# and don't encode the difference in compatibility tags.
# https://github.com/pypa/pip/issues/7296
# Encode our key url with sha224; we use it because it has similar
# security properties to sha256, but a shorter total output (and is
# thus less secure). That difference doesn't matter for our use case
# here.
# We want to nest the directories some to prevent having a ton of top
# level directories where we might run out of sub directories on some
# FS.
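The hashing-and-nesting scheme described above can be sketched like this (the exact split mirrors pip's historical cache layout, but treat the details as illustrative):

```python
import hashlib

def cache_path_parts(key_url):
    # Hash the canonical key URL; sha224 gives a slightly shorter
    # digest than sha256, which is fine for a cache key.
    hashed = hashlib.sha224(key_url.encode()).hexdigest()
    # Nest the first few bytes as subdirectories so no single
    # directory accumulates an enormous number of entries.
    return [hashed[:2], hashed[2:4], hashed[4:6], hashed[6:]]

parts = cache_path_parts("https://example.com/simple/demo/")
assert len(parts) == 4
```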
# Store wheels within the root cache_dir
# Built for a different python/arch/etc
# TODO: use DirectUrl.equivalent when
# https://github.com/pypa/pip/pull/10564 is merged.
# Scaffolding
# Ensure a proper reference is provided.
# Present the main message, with relevant context indented.
# Actual Errors
# Use `dist` in the error message because its stringification
# includes more information, like the version and location.
# Dodge circular import.
# In the case of URL-based requirements, display the original URL
# seen in the requirements file rather than the package name,
# so the output can be directly copied into the requirements file.
# In case someone feeds something downright stupid
# to InstallRequirement's constructor.
# For now, all the decent hashes have 6-char names, so we can get
# away with hard-coding space literals.
# LC_MESSAGES is in POSIX, but not the C standard. The most common
# platform that does not implement this category is Windows, where
# using other categories for console message localization is equally
# unreliable, so we fall back to the locale-less vendor message. This
# can always be re-evaluated when a vendor proposes a new alternative.
# Download retrying is not enabled.
# we only build PEP 660 editable requirements
# never cache editable requirements
# VCS checkout. Do not cache
# unless it points to an immutable commit hash.
# Otherwise, do not cache.
# Install build deps into temporary directory (PEP 518)
# Ignore return, we can't do anything else useful.
# Build the wheels.
# Record the download origin in the cache
# download_info is guaranteed to be set because when we build an
# InstallRequirement it has been through the preparer before, but
# let's be cautious.
# Update the link for this.
# notify success/failure
# Return a list of requirements that failed to build
# The following cases must use PEP 517
# We check for use_pep517 being non-None and falsy because that means
# the user explicitly requested --no-use-pep517.  The value 0 as
# opposed to False can occur when the value is provided via an
# environment variable or config file option (due to the quirk of
# strtobool() returning an integer in pip's configuration code).
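The "non-None and falsy" check described above can be sketched as follows (the helper name is hypothetical):

```python
def explicitly_disabled(use_pep517):
    # use_pep517 can be True, False, None (undecided), or 0 when the
    # value arrived via an environment variable or config file,
    # because strtobool() returns an integer. "Explicitly disabled"
    # therefore means "set to something falsy", not "is False".
    return use_pep517 is not None and not use_pep517

assert explicitly_disabled(False)
assert explicitly_disabled(0)       # env var / config file case
assert not explicitly_disabled(None)
assert not explicitly_disabled(True)
```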
# If we haven't worked out whether to use PEP 517 yet,
# and the user hasn't explicitly stated a preference,
# we do so if the project has a pyproject.toml file
# or if we cannot import setuptools or wheels.
# We fallback to PEP 517 when without setuptools or without the wheel package,
# so setuptools can be installed as a default build backend.
# For more info see:
# https://discuss.python.org/t/pip-without-setuptools-could-the-experience-be-improved/11810/9
# https://github.com/pypa/pip/issues/8559
# At this point, we know whether we're going to use PEP 517.
# If we're using the legacy code path, there is nothing further
# for us to do here.
# Either the user has a pyproject.toml with no build-system
# section, or the user has no pyproject.toml, but has opted in
# explicitly via --use-pep517.
# In the absence of any explicit backend specification, we
# assume the setuptools backend that most closely emulates the
# traditional direct setup.py execution, and require wheel and
# a version of setuptools that supports that backend.
# If we're using PEP 517, we have build system information (either
# from pyproject.toml, or defaulted by the code above).
# Note that at this point, we do not know if the user has actually
# specified a backend, though.
# Ensure that the build-system section in pyproject.toml conforms
# to PEP 518.
# Specifying the build-system table but not the requires key is invalid
# Error out if requires is not a list of strings
# Each requirement must be valid as per PEP 508
# If the user didn't specify a backend, we assume they want to use
# the setuptools backend. But we can't be sure they have included
# a version of setuptools which supplies the backend. So we
# make a note to check that this requirement is present once
# we have set up the environment.
# This is quite a lot of work to check for a very specific case. But
# the problem is, that case is potentially quite common - projects that
# adopted PEP 518 early for the ability to specify requirements to
# execute setup.py, but never considered needing to mention the build
# tools themselves. The original PEP 518 code had a similar check (but
# implemented in a different way).
# Import ssl from compat so the initial import occurs in only one place.
# Ignore warning raised when using --trusted-host.
# protocol, hostname, port
# Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)
# ssh is always secure.
# These are environment variables present when running under various
# CI systems.  For each variable, some CI systems that use the variable
# are indicated.  The collection was chosen so that for each of a number
# of popular systems, at least one of the environment variables is used.
# This list is used to provide some indication of and lower bound for
# CI traffic to PyPI.  Thus, it is okay if the list is not comprehensive.
# For more background, see: https://github.com/pypa/pip/issues/5499
# Azure Pipelines
# Jenkins
# AppVeyor, CircleCI, Codeship, Gitlab CI, Shippable, Travis CI
# Explicit environment variable.
# We don't use the method of checking for a tty (e.g. using isatty())
# because some CI systems mimic a tty (e.g. Travis CI).  Thus that
# method doesn't provide definitive information in either direction.
# Complete Guess
# If for any reason `rustc --version` fails, silently ignore it
# The format of `rustc --version` is:
# `b'rustc 1.52.1 (9bc8c42bb 2021-05-09)\n'`
# We extract just the middle (1.52.1) part
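Extracting the middle field from that output is a simple split on whitespace:

```python
# `rustc --version` output looks like:
#   b'rustc 1.52.1 (9bc8c42bb 2021-05-09)\n'
# so the version is the second whitespace-separated field.
output = b"rustc 1.52.1 (9bc8c42bb 2021-05-09)\n"
version = output.decode().split()[1]
assert version == "1.52.1"
```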
# Use None rather than False so as not to give the impression that
# pip knows it is not being run under CI.  Rather, it is a null or
# inconclusive result.  Also, we include some value rather than no
# value to make it easier to know that the check has been run.
# format the exception raised as a io.BytesIO object,
# to return a better error message:
# Proxy manager replaces the pool manager, so inject our SSL
# context here too. https://github.com/pypa/pip/issues/13288
# Namespace the attribute with "pip_" just in case to prevent
# possible conflicts with the base class.
# Attach our User Agent to the request
# Attach our Authentication handler to the session
# Create our urllib3.Retry instance which will allow us to customize
# how we handle retries.
# Set the total number of retries that a particular request can
# have.
# A 503 error from PyPI typically means that the Fastly -> Origin
# connection got interrupted in some way. A 503 error in general
# is typically considered a transient error so we'll go ahead and
# retry it.
# A 500 may indicate transient error in Amazon S3
# A 502 may be a transient error from a CDN like CloudFlare or CloudFront
# A 520 or 527 - may indicate transient error in CloudFlare
# Add a small amount of back off between failed requests in
# order to prevent hammering the service.
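The back-off described above is exponential in the number of previous failures; a generic sketch (the exact formula urllib3's Retry uses varies across versions, so this is illustrative):

```python
def backoff_delay(backoff_factor, previous_retries):
    # Delay grows by a factor of two with each consecutive failure,
    # scaled by a small base factor so early retries are quick.
    return backoff_factor * (2 ** previous_retries)

# With a factor of 0.25, delays grow 0.25s, 0.5s, 1.0s, ...
assert [backoff_delay(0.25, n) for n in range(3)] == [0.25, 0.5, 1.0]
```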
# Our Insecure HTTPAdapter disables HTTPS validation. It does not
# support caching so we'll use it for all http:// URLs.
# If caching is disabled, we will also use it for
# https:// hosts that we've marked as ignoring
# TLS errors for (trusted-hosts).
# We want to _only_ cache responses on securely fetched origins or when
# the host is specified as trusted. We do this because
# we can't validate the response of an insecurely/untrusted fetched
# origin, and we don't want someone to be able to poison the cache and
# require manual eviction from the cache to fix it.
# Enable file:// urls
# Mount wildcard ports for the same host.
# Determine if this url used a secure transport mechanism
# The protocol to use to see if the protocol matches.
# Don't count the repository type as part of the protocol: in
# cases such as "git+ssh", only use "ssh". (I.e., Only verify against
# the last scheme.)
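Taking only the last scheme component amounts to a split on "+" (the helper name is illustrative):

```python
def transport_protocol(url_scheme):
    # "git+ssh" -> "ssh"; a plain scheme is returned unchanged.
    return url_scheme.rsplit("+", 1)[-1]

assert transport_protocol("git+ssh") == "ssh"
assert transport_protocol("https") == "https"
```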
# Determine if our origin is a secure origin by looking through our
# hardcoded list of secure origins, as well as any additional ones
# configured on this PackageFinder instance.
# We don't have both a valid address or a valid network, so
# we'll check this origin against hostnames.
# We have a valid address and network, so see if the address
# is contained within the network.
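Network containment is a simple membership test with the stdlib `ipaddress` module (addresses here are illustrative):

```python
import ipaddress

addr = ipaddress.ip_address("192.168.1.17")
net = ipaddress.ip_network("192.168.1.0/24")

# An address is trusted if it falls inside the configured network.
assert addr in net
assert ipaddress.ip_address("10.0.0.1") not in net
```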
# Check to see if the port matches.
# If we've gotten here, then this origin matches the current
# secure origin and we should return True
# If we've gotten to this point, then the origin isn't secure and we
# will not accept it as a valid location to search. We will however
# log a warning that we are ignoring it.
# Allow setting a default timeout on a session
# Allow setting a default proxies on a session
# Dispatch the actual request
# We need to sanitize the filename to prevent directory traversal
# in case the filename contains ".." path parts.
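One way to neutralize ".." parts is to keep only the final path component (a sketch; the helper name is illustrative):

```python
import os

def sanitize_content_filename(filename):
    # Normalize Windows-style separators, then keep only the last
    # path component so "../" parts cannot escape the target directory.
    return os.path.basename(filename.replace("\\", "/"))

assert sanitize_content_filename("../../etc/passwd") == "passwd"
assert sanitize_content_filename("report.pdf") == "report.pdf"
```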
# fallback
# Have a look at the Content-Disposition header for a better guess
# If the download size is not known, then give up downloading the file.
# Fallback: if the server responded with 200 (i.e., the file has
# since been modified or range requests are unsupported) or any
# other unexpected status, restart the download from the beginning.
# No more resume attempts. Raise an error if the download is still incomplete.
# If we successfully completed the download via resume, manually cache it
# as a complete response to enable future caching
# Check if the adapter is the CacheControlAdapter (i.e. caching is enabled)
# Check SafeFileCache is being used
# Save metadata and then stream the file contents to cache.
# To better understand the download resumption logic, see the mdn web docs:
# https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Range_requests
# If possible, use a conditional range request to avoid corrupted
# downloads caused by the remote file changing in-between.
# Support keyring's get_credential interface which supports getting
# credentials without a username. This is only available for
# keyring>=15.2.0.
# This is the default implementation of keyring.get_credential
# https://github.com/jaraco/keyring/blob/97689324abcf01bd1793d49063e7ca01e03d7d07/keyring/backend.py#L134-L139
# keyring has previously failed and been disabled
# In the event of an unexpected exception
# we should warn the user
# all code within this function is stolen from shutil.which implementation
# bpo-35755: Don't use os.defpath if the PATH environment variable is
# set to an empty string
# When the user is prompted to enter credentials and keyring is
# available, we will offer to save them. If the user accepts,
# this value is set to the credentials they entered. After the
# request authenticates, the caller should call
# ``save_credentials`` to save these.
# The free function get_keyring_provider has been decorated with
# functools.cache. If an exception occurs in get_keyring_auth that
# cache will be cleared and keyring disabled, take that into account
# if you want to remove this indirection.
# We won't use keyring when --no-input is passed unless
# a specific provider is requested because it might require
# user interaction
# Do nothing if no url was provided
# Log the full exception (with stacktrace) at debug, so it'll only
# show up when running in verbose mode.
# Always log a shortened version of the exception.
# Split the credentials and netloc from the url.
# Start with the credentials embedded in the url
# Find a matching index url for this request
# Split the credentials from the url.
# If an index URL was found, try its embedded credentials
# Get creds from netrc if we still don't have them
# If we don't have a password and keyring is available, use it.
# The index url is more specific than the netloc, so try it first
# fmt: off
# fmt: on
# Try to get credentials from original url
# If credentials not found, use any stored credentials for this netloc.
# Do this if either the username or the password is missing.
# This accounts for the situation in which the user has specified
# the username in the index url, but the password comes from keyring.
# It is possible that the cached credentials are for a different username,
# in which case the cache should be ignored.
# Convert the username and password if they're None, so that
# this netloc will show up as "cached" in the conditional above.
# Further, HTTPBasicAuth doesn't accept None, so it makes sense to
# cache the value that is going to be used.
# Store any acquired credentials.
# Credentials were found
# Credentials were not found
# Get credentials for this request
# Set the url of the request to the url without any credentials
# Send the basic auth with this request
# Attach a hook to handle 401 responses
# Factored out to allow for easy patching in tests
# We only care about 401 responses; anything else we want to just
# pass through the actual response.
# Query the keyring for credentials:
# We are not able to prompt the user so simply return the response
# Prompt the user for a new username and password
# Store the new username and password to use for future requests
# Prompt to save the password to keyring
# Consume content and release the original connection to allow our
# new request to reuse the same one.
# The result of the assignment isn't used, it's just needed to consume
# the content.
# Add our new username and password to the request
# On successful request, save the credentials that were used to
# keyring. (Note that if the user responded "no" above, this member
# is not set and nothing will be saved.)
# Send our new request
# The following comments and HTTP headers were originally added by
# Donald Stufft in git commit 22c562429a61bb77172039e480873fb239dd8c03.
# We use Accept-Encoding: identity here because requests defaults to
# accepting compressed responses. This breaks in a variety of ways
# depending on how the server is configured.
# - Some servers will notice that the file isn't a compressible file
# - Some servers will notice that the file is already compressed and
# - Some servers won't notice anything at all and will take a file
# By setting this to request only the identity encoding we're hoping
# to eliminate the third case.  Hopefully there does not exist a server
# which when given a file will notice it is already compressed and that
# you're not asking for a compressed file and will then decompress it
# before sending because if that's the case I don't think it'll ever be
# possible to make this work.
# We attempt to decode utf-8 first because some servers
# choose to localize their reason strings. If the string
# isn't utf-8, we fall back to iso-8859-1 for all other
# encodings.
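The decode-with-fallback described above can be sketched as (the helper name is illustrative):

```python
def decode_reason(reason):
    # Some servers localize the HTTP reason phrase; try UTF-8 first
    # and fall back to ISO-8859-1, which can decode any byte sequence.
    try:
        return reason.decode("utf-8")
    except UnicodeDecodeError:
        return reason.decode("iso-8859-1")

assert decode_reason(b"Not Found") == "Not Found"
assert decode_reason(b"\xe9") == chr(0xE9)  # invalid UTF-8, latin-1 fallback
```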
# Special case for urllib3.
# We use decode_content=False here because we don't
# want urllib3 to mess with the raw bytes we get
# from the server. If we decompress inside of
# urllib3 then we cannot verify the checksum
# because the checksum will be of the compressed
# file. This breakage will only occur if the
# server adds a Content-Encoding header, which
# depends on how the server was configured:
# - Some servers will notice that the file isn't a
# - Some servers will notice that the file is
# - Some servers won't notice anything at all and
# By setting this not to decode automatically we
# hope to eliminate problems with the second case.
# Standard file-like object.
# From cachecontrol.caches.file_cache.FileCache._fn, brought into our
# class for backwards-compatibility and to avoid using a non-public
# method.
# The cache entry is only valid if both metadata and body exist.
# Inherit the read/write permissions of the cache directory
# to enable multi-user cache use-cases.
# select read/write permissions of cache directory
# set owner read/write permissions
# Change permissions only if there is no risk of following a symlink.
# For read-only ZIP files, ZipFile only needs methods read,
# seek, seekable and tell, not the whole IO protocol.
# After context manager exit, wheel.name
# is an invalid file by intention.
# For read-only ZIP files, ZipFile only needs
# methods read, seek, seekable and tell.
# TODO: Get range requests to be correctly cached
# This dictionary does a bunch of heavy lifting for help output:
# - Enables avoiding additional (costly) imports for presenting `--help`.
# - The ordering matters for help display.
# Even though the module path starts with the same "pip._internal.commands"
# prefix, the full path makes testing easier (specifically when modifying
# `commands_dict` in test setup / teardown).
# Lazy import the heavy index modules as most list invocations won't need 'em.
# Pass allow_yanked=False to ignore yanked versions.
# get_not_required must be called first in order to find and
# filter out all dependencies correctly. Otherwise a package
# can't be identified as a requirement because some parent packages
# could be filtered out before.
# Create a set to remove duplicate packages, and cast it to a list
# to keep the return type consistent with get_outdated and
# get_uptodate
# Remove prereleases
# insert the header first: we need to know the size of column names
# Create and add a separator.
# if we're working on the 'outdated' list, separate out the
# latest_version and type
# build wheels
# copy from cache to target directory
# if this is the highest version, replace summary and score
# wrap and indent summary to fit terminal
# Only when installing is it permitted to use PEP 660.
# In other circumstances (pip wheel, pip download) we generate
# regular (i.e. non editable) metadata and wheels.
# editable doesn't really make sense for `pip download`, but the bowels
# of the RequirementSet code require that property.
# Determine action
# Determine which configuration files are to be loaded
# Load a new configuration
# Error handling happens here, not in the action-handlers.
# Default to user, unless there's a site file.
# Iterate over config files and print if they exist, and the
# key-value pairs present in them if they do
# This shouldn't happen, unless we see a username like that.
# If that happens, we'd appreciate a pull request fixing this.
# We successfully ran a modifying command. Need to save the
# configuration.
# This logic is from PEP 753 (Well-known Project URLs in Metadata).
# Avoid duplicates in requirements (e.g. due to environment markers).
# It's common that there is a "homepage" Project-URL, but Home-page
# remains unset (especially as PEP 621 doesn't surface the field).
# 'pip help' with no args is handled by pip.__init__.parseopt()
# the command we need help for
# Eagerly import self_outdated_check to avoid crashes. Otherwise,
# this module would be imported *after* pip was replaced, resulting
# in crashes if the new self_outdated_check module was incompatible
# with the rest of pip that's already imported, or allowing a
# wheel to execute arbitrary code on install by replacing
# self_outdated_check.
# Check whether the environment we're installing into is externally
# managed, as specified in PEP 668. Specifying --root, --target, or
# --prefix disables the check, since there's no reliable way to locate
# the EXTERNALLY-MANAGED file for those cases. An exception is also
# made specifically for "--dry-run --report" for convenience.
# Create a target directory for using with the target option
# If we're not replacing an already installed pip,
# we're not modifying it.
# Check for conflicts in the package set we're installing.
# Don't warn about script install locations if
# --target or --prefix has been specified
# Display a summary of installed packages, with extra care to
# display a package name as it was requested by the user.
# Checking both purelib and platlib directories for installed
# packages to be moved to target directory
# NOTE: There is some duplication here, with commands/check.py
# In some cases (config from tox), use_user_site can be set to an integer
# rather than a bool, which 'use_user_site is False' wouldn't catch.
# If we are here, user installs have not been explicitly requested/avoided
# user install incompatible with --prefix/--target
# If user installs are not enabled, choose a non-user install
# If we have permission for a non-user install, do that,
# otherwise do a user install.
# Mention the error if we are not going to show a traceback
# Split the error indication from a helper message (if any)
# Suggest useful actions to the user:
# Suggest to check "pip config debug" in case of invalid proxy
# On Windows, errors like EINVAL or ENOENT may occur
# if a file or folder name exceeds 255 characters,
# or if the full path exceeds 260 characters and long path support isn't enabled.
# This condition checks for such cases and adds a hint to the error output.
# Only fetch http files if no specific pattern given
# Add the pattern to the log message
# The wheel filename format, as specified in PEP 427, is:
# Additionally, non-alphanumeric values in the distribution are
# normalized to underscores (_), meaning hyphens can never occur
# before `-{version}`.
# Given that information:
# - If the pattern we're given contains a hyphen (-), the user is
# - If the pattern we're given doesn't contain a hyphen (-), the
# PEP 427: https://www.python.org/dev/peps/pep-0427/
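The hyphen reasoning above can be sketched as follows (hypothetical helper names, not pip's actual functions; the normalization regex is a simplified reading of the wheel spec):

```python
import re

def normalize_dist_name(name: str) -> str:
    # Runs of "-", "_" and "." in the distribution name are replaced
    # with a single underscore in wheel filenames.
    return re.sub(r"[-_.]+", "_", name)

def pattern_includes_version(pattern: str) -> bool:
    # Because the name part can never contain a hyphen after
    # normalization, any hyphen in a pattern separates name and version.
    return "-" in pattern

print(normalize_dist_name("my-cool.package"))    # my_cool_package
print(pattern_includes_version("requests"))      # False
print(pattern_includes_version("requests-2.31.0"))  # True
```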
# Purge non version specifying lines.
# Also, remove any space prefix or suffixes (including comments).
# Transform into "module" -> version dict.
# Module name can be uppercase in vendor.txt for some reason...
# PATCH: setuptools is actually only pkg_resources.
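The vendor.txt transformation described above (purge non-version lines, strip comments, lowercase module names) can be sketched as (hypothetical helper, not pip's actual code):

```python
def parse_vendor_txt(text: str) -> dict:
    """Parse vendor.txt-style lines into a lowercase module -> version
    mapping, skipping comments and lines that don't pin a version."""
    versions = {}
    for line in text.splitlines():
        # Strip trailing comments and surrounding whitespace.
        line = line.split("#", 1)[0].strip()
        if "==" not in line:
            continue  # purge lines that don't specify a version
        module, version = line.split("==", 1)
        # Module names can be uppercase in vendor.txt; normalize them.
        versions[module.strip().lower()] = version.strip()
    return versions

print(parse_vendor_txt("CacheControl==0.14.0\n# comment\nrequests==2.31.0  # note\n"))
```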
# We allow 'truststore' to fail to import due
# to being unavailable on Python 3.9 and earlier.
# Try to find version in debundled module info.
# Display the target options that were explicitly provided.
# TODO tags? scheme?
# direct_url. Note that we don't have download_info (as in the installation
# report) since it is not recorded in installed metadata.
# Emulate direct_url for legacy editable installs.
# installer
# requested
# Store the given py_version_info for when we call get_supported().
# This is used to cache the return value of get_(un)sorted_tags.
# Pass versions=None if no py_version_info was given since
# versions=None uses special default logic.
# To make mypy happy specify type hints that can come from either
# parse_wheel_filename or the legacy_wheel_file_re match.
# Check if the wheel filename is in the legacy format
# Generate the file tags from the legacy wheel filename
# Parse the build tag from the legacy wheel filename
# TODO: This needs Python 3.10's improved slots support for dataclasses
# to be converted into a dataclass.
# Don't include an allow_yanked default value to make sure each call
# site considers whether yanked releases are allowed. This also causes
# that decision to be made explicit in the calling code, which helps
# people when reading the code.
# Build find_links. If an argument starts with ~, it may be
# a local file relative to a home directory. So try normalizing
# it and if it exists, use the normalized version.
# This is deliberately conservative - it might be fine just to
# blindly normalize anything starting with a ~...
# If we don't have TLS enabled, then WARN if anyplace we're looking
# relies on TLS.
# Parse the URL
# URL is generally invalid if scheme and netloc are missing
# there are issues with Python and URL parsing, so this test
# is a bit crude. See bpo-20271, bpo-23505. Python doesn't
# always parse invalid URLs correctly - it should raise
# exceptions for malformed URLs
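The crude validity test mentioned above can be sketched with `urllib.parse.urlsplit`, which does not raise on most malformed input, so validity is approximated by requiring both a scheme and a netloc (illustrative helper name, not pip's actual code):

```python
from urllib.parse import urlsplit

def looks_like_url(name: str) -> bool:
    """Rough check that a string is a usable URL."""
    parts = urlsplit(name)
    # urlsplit happily accepts garbage (see bpo-20271, bpo-23505), so
    # require both parts to be present instead of catching exceptions.
    return bool(parts.scheme) and bool(parts.netloc)

print(looks_like_url("https://pypi.org/simple/"))  # True
print(looks_like_url("not a url"))                 # False
```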
# For maximum compatibility with easy_install, ensure the path
# ends in a trailing slash.  Although this isn't in the spec
# (and PyPI can handle it without the slash) some other index
# implementations might break if they relied on easy_install's
# behavior.
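The trailing-slash normalization above amounts to a one-liner (hypothetical helper name):

```python
def ensure_trailing_slash(url: str) -> str:
    # easy_install always used a trailing slash on index URLs; match it
    # for compatibility with index implementations that rely on it.
    return url if url.endswith("/") else url + "/"

print(ensure_trailing_slash("https://pypi.org/simple"))   # adds the slash
print(ensure_trailing_slash("https://pypi.org/simple/"))  # already normalized
```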
# PEP 610 json for the download URL. download_info.archive_info.hashes may
# be absent when the requirement was installed from the wheel cache
# and the cache entry was populated by an older pip version that did not
# record origin.json.
# is_direct is true if the requirement was a direct URL reference (which
# includes editable requirements), and false if the requirement was
# downloaded from a PEP 503 index or --find-links.
# is_yanked is true if the requirement was yanked from the index, but
# was still selected by pip to conform to PEP 592.
# requested is true if the requirement was specified by the user (aka
# top level requirement), and false if it was installed as a dependency of a
# requirement. https://peps.python.org/pep-0376/#requested
# PEP 566 json encoding for metadata
# https://www.python.org/dev/peps/pep-0566/#json-compatible-metadata
# For top level requirements, the list of requested extras, if any.
# https://peps.python.org/pep-0508/#environment-markers
# TODO: currently, the resolver uses the default environment to evaluate
# environment markers, so that is what we report here. In the future, it
# should also take into account options such as --python-version or
# --platform, perhaps under the form of an environment_override field?
# https://github.com/pypa/pip/issues/11198
# set hashes before hash, since the hash setter will further populate hashes
# Auto-populate the hashes key to upgrade to the new format automatically.
# We don't back-populate the legacy hash key from hashes.
# Without a :none:, we want to discard everything, as :all: covers it
# (not supported) path: Optional[str]
# (not supported) size: Optional[int]
# (not supported) upload_time: Optional[datetime]
# (not supported) marker: Optional[str]
# (not supported) requires_python: Optional[str]
# (not supported) dependencies
# (not supported) index: Optional[str]
# (not supported) attestation_identities: Optional[List[Dict[str, Any]]]
# (not supported) tool: Optional[Dict[str, Any]]
# should never happen
# (not supported) environments: Optional[List[str]]
# (not supported) extras: List[str] = []
# (not supported) dependency_groups: List[str] = []
# Order matters, earlier hashes have a precedence over later hashes for what
# we will pick to use.
# NB: we do not validate that the second group (.*) is a valid hex
# digest. Instead, we simply keep that string in this class, and then check it
# against Hashes when hash-checking is needed. This is easier to debug than
# proactively discarding an invalid hex digest, as we handle incorrect hashes
# and malformed hashes in the same place.
# Remove any unsupported hash types from the mapping. If this leaves no
# supported hashes, return None
# We unquote prior to quoting to make sure nothing is double quoted.
# Also, on Windows the path part might contain a drive letter which
# should not be quoted. On Linux where drive letters do not
# exist, the colon should be quoted. We rely on urllib.request
# to do the right thing here.
# Remove any URL authority section, leaving only the URL path.
# percent-encoded:                   /
# Split on the reserved characters prior to cleaning so that
# revision strings in VCS URLs are properly preserved.
# Normalize %xx escapes (e.g. %2f -> %2F)
# Split the URL into parts according to the general structure
# `scheme://netloc/path?query#fragment`.
# If the netloc is empty, then the URL refers to a local filesystem path.
# Temporarily replace scheme with file to ensure the URL generated by
# urlunsplit() contains an empty netloc (file://) as per RFC 1738.
# Restore original scheme.
# The comes_from, requires_python, and metadata_file_data arguments are
# only used by classmethods of this class, and are not used in client
# code directly.
# url can be a UNC windows share
# Store the url as a private attribute to prevent accidentally
# trying to set a new value.
# The .path property is hot, so calculate its value ahead of time.
# PEP 714: Indexes must use the name core-metadata, but
# clients should support the old name as a fallback for compatibility.
# The metadata info value may be a boolean, or a dict of hashes.
# The file exists, and hashes have been supplied
# The file exists, but there are no hashes
# False or not present: the file does not exist
# The Link.yanked_reason expects an empty string instead of a boolean.
# The Link.yanked_reason expects None instead of False.
# PEP 714: Indexes must use the name data-core-metadata, but
# The metadata info value may be the string "true", or a string of
# the form "hashname=hashval"
# The file does not exist
# Error - data is wrong. Treat as no hashes supplied.
# Make sure we don't leak auth information if the netloc
# includes a username and password.
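A minimal sketch of that redaction (pip ships its own redaction helpers; this illustrative version keeps the username and masks only the password):

```python
from urllib.parse import urlsplit, urlunsplit

def redact_netloc_auth(url: str) -> str:
    # Never echo an embedded password back in logs or messages.
    parts = urlsplit(url)
    if "@" not in parts.netloc:
        return url
    userinfo, host = parts.netloc.rsplit("@", 1)
    user = userinfo.split(":", 1)[0]
    return urlunsplit(parts._replace(netloc=f"{user}:****@{host}"))

print(redact_netloc_auth("https://alice:s3cret@example.com/simple/"))
```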
# Per PEP 508.
# An egg fragment looks like a PEP 508 project name, along with
# an optional extras specifier. Anything else is invalid.
# According to RFC 8089, an empty host in file: means localhost.
# If there are multiple subdirectory values, use the first one.
# This matches the behavior of Link.subdirectory_fragment.
# If there are multiple hash values under the same algorithm, use the
# first one. This matches the behavior of Link.hash_value.
# This is part of a temporary hack used to block installs of PyPI
# packages which depend on external urls only necessary until PyPI can
# block such packages themselves
# Load pyproject.toml, to determine whether PEP 517 is to be used
# Set up the build isolation, if this requirement should be isolated
# Setup an isolated environment and install the build backend static
# requirements in it.
# Check that if the requirement is editable, it either supports PEP 660 or
# has a setup.py or a setup.cfg. This cannot be done earlier because we need
# to setup the build backend to verify it supports build_editable, nor can
# it be done later, because we want to avoid installing build requirements
# needlessly. Doing it here also works around setuptools generating
# UNKNOWN.egg-info when running get_requires_for_build_wheel on a directory
# without setup.py nor setup.cfg.
# Install the dynamic build requirements.
# Check if the current environment provides build dependencies
# Isolate in a BuildEnvironment and install the build-time
# requirements.
# Install any extra build dependencies that the backend requests.
# This must be done in a second pass, as the pyproject.toml
# dependencies must be installed before we can call the backend.
# Editable requirements will always be source distributions. They use the
# legacy logic until we create a modern standard for them.
# If it's a wheel, it's a WheelDistribution
# Otherwise, a SourceDistribution
# Import distutils lazily to avoid deprecation warnings,
# but import it soon enough that it is in memory and available during
# a pip reinstall.
# Be noisy about incompatibilities if this platform "should" be using
# sysconfig, but is explicitly opting out and using distutils instead.
# User-site not available.
# distutils incorrectly put PyPy packages under ``site-packages/python``
# in the ``posix_home`` scheme, but PyPy devs said they expect the
# directory name to be ``pypy`` instead. So we treat this as a bug fix
# and not warn about it. See bpo-43307 and python/cpython#24628.
# sysconfig's ``osx_framework_user`` does not include ``pythonX.Y`` in
# the ``include`` value, but distutils's ``headers`` does. We'll let
# CPython decide whether this is a bug or feature. See bpo-43948.
# On Red Hat and derived Linux distributions, distutils is patched to
# use "lib64" instead of "lib" for platlib.
# On Python 3.9+, sysconfig's posix_user scheme sets platlib against
# sys.platlibdir, but distutils's unix_user incorrectly continues
# using the same $usersite for both platlib and purelib. This creates a
# mismatch when sys.platlibdir is not "lib".
# Slackware incorrectly patches posix_user to use lib64 instead of lib,
# but not usersite to match the location.
# Both Debian and Red Hat patch Python to place the system site under
# /usr/local instead of /usr. Debian also places lib in dist-packages
# instead of site-packages, but the /usr/local check should cover it.
# MSYS2 MINGW's sysconfig patch does not include the "site-packages"
# part of the path. This is incorrect and will be fixed in MSYS.
# CPython's POSIX install script invokes pip (via ensurepip) against the
# interpreter located in the source tree, not the install site. This
# triggers special logic in sysconfig that's not present in distutils.
# https://github.com/python/cpython/blob/8c21941ddaf/Lib/sysconfig.py#L178-L194
# Check if this path mismatch is caused by distutils config files. Those
# files will no longer work once we switch to sysconfig, so this raises a
# deprecation message for them.
# Post warnings about this mismatch so user can report them back.
# Application Directories
# FIXME doesn't account for venv linked to global site-packages
# FIXME: keep src in cwd for now (it is not a temporary folder)
# In case the current working directory has been renamed or deleted
# under macOS + virtualenv sys.prefix is not properly resolved
# it is something like /path/to/python/bin/..
# Use getusersitepackages if this is present, as it ensures that the
# value is initialised properly.
# Notes on _infer_* functions.
# Unfortunately ``get_default_scheme()`` didn't exist before 3.10, so there's no
# way to ask things like "what is the '_prefix' scheme on this platform". These
# functions try to answer that with some heuristics while accounting for ad-hoc
# platforms not covered by CPython's default sysconfig implementation. If the
# ad-hoc implementation does not fully implement sysconfig, we'll fall back to
# a POSIX scheme.
# On Windows, prefix is just called "nt".
# User scheme unavailable.
# Update these keys if the user sets a custom home.
# Special case: When installing into a custom prefix, use posix_prefix
# instead of osx_framework_library. See _should_use_osx_framework_prefix()
# docstring for details.
# Logic here is very arbitrary, we're doing it for compatibility, don't ask.
# 1. Pip historically uses a special header path in virtual environments.
# 2. If the distribution name is not known, distutils uses 'UNKNOWN'. We
# Forcing to use /usr/local/bin for standard macOS framework installs.
# The following comment should be removed at some point in the future.
# mypy: strict-optional=False
# If pip's going to use distutils, it should not be using the copy that setuptools
# might have injected into the environment. This is done by removing the injected
# shim, if it's injected.
# See https://github.com/pypa/pip/issues/8761 for the original discussion and
# rationale for why this is done within pip.
# NOTE: setting user or home has the side-effect of creating the home dir
# or user base for installations during finalize_options().
# Ideally, we'd prefer a scheme class that has no side-effects.
# install_lib specified in setup.cfg should install *everything*
# into there (i.e. it takes precedence over both purelib and
# platlib).  Note, i.install_lib is *always* set after
# finalize_options(); we only want to override here if the user
# has explicitly requested it, hence going back to the config
# XXX: In old virtualenv versions, sys.prefix can contain '..' components,
# so we need to call normpath to eliminate them.
# buildout uses 'bin' on Windows too?
# Forcing to use /usr/local/bin for standard macOS framework installs
# Also log to ~/Library/Logs/ for use with the Console.app log viewer
# the options that don't get turned into an InstallRequirement
# should only be emitted once, even if the same option is in multiple
# requirements files, so we need to keep track of what has been emitted
# so that we don't emit it again if it's seen again
# keep track of which files a requirement is in so that we can
# give an accurate warning if a requirement appears multiple times.
# either it's not installed, or it is installed
# but has been processed already
# Warn about requirements that were included multiple times (in a
# single requirements file or in different requirements files).
# legacy version
# if PEP 610 metadata is present, use it
# name==version requirement
# Don't crash on unreadable or broken metadata.
# Info about dependencies of package_name
# Check if it's missing
# Check if there's a conflict
# Start from the current state
# Install packages
# Only warn about directly-dependent packages; create a whitelist of them
# Keep track of packages that were installed
# Modify it as installing requirement_set would (assuming no errors)
# Try to guess the file's MIME type. If the system MIME tables
# can't be loaded, give up.
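The stdlib `mimetypes` module covers this best-effort guess; a minimal sketch with an assumed generic fallback type:

```python
import mimetypes

def guess_content_type(filename: str) -> str:
    """Best-effort MIME guess; fall back to a generic binary type when
    the MIME tables can't classify the file."""
    mimetype, _encoding = mimetypes.guess_type(filename)
    return mimetype or "application/octet-stream"

print(guess_content_type("archive.zip"))        # application/zip
print(guess_content_type("no_extension_file"))  # application/octet-stream
```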
# If a download dir is specified, is the file already downloaded there?
# let's download to a tmp dir
# If a download dir is specified, is the file already there and valid?
# If --require-hashes is off, `hashes` is either empty, the
# link's embedded hash, or MissingHashes; it is required to
# match. If --require-hashes is on, we are satisfied by any
# hash in `hashes` matching: a URL-based or an option-based
# one; no internet-sourced hash will be in `hashes`.
# non-editable vcs urls
# file urls
# http urls
# unpack the archive to the build dir location. even when only downloading
# archives, they have to be unpacked to parse dependencies, except wheels
# If already downloaded, does its hash match?
# noqa: PLR0913 (too many parameters)
# Where still-packed archives should be written to. If None, they are
# not saved, and are deleted immediately after unpacking.
# Is build isolation allowed?
# Should check build dependencies?
# Should hash-checking be required?
# Should install in user site-packages?
# Should wheels be downloaded lazily?
# How verbose should underlying tooling be?
# Are we using the legacy resolver?
# Memoized downloaded files, as mapping of url: path.
# Previous "header" printed for a link-based InstallRequirement
# If we used req.req, inject requirement source if available (this
# would already be included if we used req directly)
# Since source_dir is only set for editable requirements.
# We don't need to unpack wheels, so no need for a source directory.
# build local directories in-tree
# We always delete unpacked sdists after pip runs.
# By the time this is called, the requirement's link should have
# been checked so we can tell what kind of requirements req is
# and raise some more informative errors than otherwise.
# (For example, we can raise VcsHashUnsupported for a VCS URL
# rather than HashMissing.)
# We could check these first 2 conditions inside unpack_url
# and save repetition of conditions, but then we would
# report less-useful error messages for unhashable
# requirements, complaining that there's no hash provided.
# Unpinned packages are asking for trouble when a new version
# is uploaded.  This isn't a security check, but it saves users
# a surprising hash mismatch in the future.
# file:/// URLs aren't pinnable, so don't complain about them
# not being pinned.
# If known-good hashes are missing for this requirement,
# shim it with a facade object that will provoke hash
# computation and then raise a HashMissing exception
# showing the user what the hash should be.
# Try PEP 658 metadata first, then fall back to lazy wheel if unavailable.
# (1) Get the link to the metadata file, if provided by the backend.
# (2) Download the contents of the METADATA file, separate from the dist itself.
# (3) Generate a dist just from those file contents.
# (4) Ensure the Name: field from the METADATA file matches the name from the
# --use-feature=fast-deps must be provided.
# Download to a temporary directory. These will be copied over as
# needed for downstream 'download', 'wheel', and 'install' commands.
# Map each link to the requirement that owns it. This allows us to set
# `req.local_file_path` on the appropriate requirement after passing
# all the links at once into BatchDownloader.
# Record the downloaded file path so wheel reqs can extract a Distribution
# in .get_dist().
# Record that the file is downloaded so we don't do it again in
# _prepare_linked_requirement().
# If this is an sdist, we need to unpack it after downloading, but the
# .source_dir won't be set up until we are in _prepare_linked_requirement().
# Add the downloaded archive to the install requirement to unpack after
# preparing the source dir.
# This step is necessary to ensure all lazy wheels are processed
# successfully by the 'download', 'wheel', and 'install' commands.
# Check if the relevant file is already available
# in the download directory
# When a locally built wheel has been found in cache, we don't warn
# about re-downloading when the already downloaded wheel hash does
# not match. This is because the hash must be checked against the
# original link, not the cached link. In that case the already
# downloaded file will be removed and re-fetched from cache (which
# implies a hash check against the cache entry's origin.json).
# The file is already available, so mark it as downloaded
# The file is not available, attempt to fetch only metadata
# None of the optimizations worked, fully prepare the requirement
# Determine if any of these requirements were already downloaded.
# Prepare requirements we found were already downloaded for some
# reason. The other downloads will be completed separately.
# TODO: separate this part out from RequirementPreparer when the v1
# resolver can be removed!
# We need to verify hashes, and we have found the requirement in the cache
# of locally built wheels.
# At this point we know the requirement was built from a hashable source
# artifact, and we verified that the cache entry's hash of the original
# artifact matches one of the hashes we expect. We don't verify hashes
# against the cached wheel, because the wheel is not the original.
# If download_info is set, we got it from the wheel cache.
# Editables don't go through this function (see
# prepare_editable_requirement).
# Make sure we have a hash in download_info. If we got it as part of the
# URL, it will have been verified and we can rely on it. Otherwise we
# compute it from the downloaded file.
# FIXME: https://github.com/pypa/pip/issues/11943
# We populate info.hash for backward compatibility.
# This will automatically populate info.hashes.
# For use in later processing,
# preserve the file path on the requirement.
# Make a .zip of the source_dir we already created.
# No distribution was downloaded for this requirement.
# Return the .egg-info directory.
# Sort for determinism.
# Note that BuildBackendHookCaller implements a fallback for
# prepare_metadata_for_build_wheel, so we don't have to
# consider the possibility that this hook doesn't exist.
# prepare_metadata_for_build_wheel/editable, so we don't have to
# Save values from the target and change them.
# Restore original values in the target.
# for mypy
# Get the file to write information about this requirement.
# Try reading from the file. If it exists and can be read from, a build
# is already in progress, so a LookupError is raised.
# If we're here, req should really not be building already.
# Start tracking this requirement.
# Delete the created file and the corresponding entry.
# XXX RECORD hashes will need to be updated
# Group scripts by the path they were installed in
# We don't want to warn for directories that are on PATH.
# If an executable sits with sys.executable, we don't warn for it.
# Format a message
# Add a note if any directory starts with ~
# Returns the formatted multiline message
# Normally, there should only be one row per path, in which case the
# second and third elements don't come into play when sorting.
# However, in cases in the wild where a path might happen to occur twice,
# we don't want the sort operation to trigger an error (but still want
# determinism).  Since the third element can be an int or string, we
# coerce each element to a string to avoid a TypeError in this case.
# For additional background, see--
# https://github.com/pypa/pip/issues/5868
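The str-coercion sort key described above can be sketched as (hypothetical row shape assumed):

```python
def path_sort_key(row):
    # Coerce every element to str so a mixed int/str third element
    # can't raise TypeError when the same path occurs twice.
    return tuple(str(elem) for elem in row)

rows = [("pip", "console_scripts", 1), ("pip", "console_scripts", "x")]
# Plain sorted(rows) would raise TypeError here (int vs str comparison);
# the coerced key keeps the sort deterministic instead.
print(sorted(rows, key=path_sort_key))
```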
# On Windows, do not handle relative paths if they belong to different
# logical disks
# Don't mutate caller's version
# Special case pip and setuptools to generate versioned wrappers
# The issue is that some projects (specifically, pip and setuptools) use
# code in setup.py to create "versioned" entry points - pip2.7 on Python
# 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into
# the wheel metadata at build time, and so if the wheel is installed with
# a *different* version of Python the entry points will be wrong. The
# correct fix for this is to enhance the metadata to be able to describe
# such versioned entry points.
# Currently, projects using versioned entry points will either have
# incorrect versioned entry points, or they will not be able to distribute
# "universal" wheels (i.e., they will need a wheel per Python version).
# Because setuptools and pip are bundled with _ensurepip and virtualenv,
# we need to use universal wheels. As a workaround, we
# override the versioned entry points in the wheel and generate the
# correct ones.
# Adding to the level of hack in this section of code: in order to support
# ensurepip, this code will look for an ``ENSUREPIP_OPTIONS`` environment
# variable, which controls which versioned scripts get installed.
# ENSUREPIP_OPTIONS=altinstall
# ENSUREPIP_OPTIONS=install
# DEFAULT
# Delete any other versioned pip entry points
# Delete any other versioned easy_install entry points
# Generate the console entry points specified in the wheel
# When we open the output file below, any existing file is truncated
# before we start writing the new contents. This is fine in most
# cases, but can cause a segfault if pip has loaded a shared
# object (e.g. from pyopenssl through its vendored urllib3)
# Since the shared object is mmap'd an attempt to call a
# symbol in it will then cause a segfault. Unlinking the file
# allows writing of new contents while allowing the process to
# continue to use the old copy.
# optimization: the file is created by open(),
# skip the decompression when there are 0 bytes to decompress.
# Override distlib's default script template with one that
# doesn't import `re` module, allowing scripts to load faster.
# noqa: C901, PLR0915 function is too long
# Record details of the files moved
# Get the defined entry points
# EP, EP.exe and EP-script.py are scripts generated for
# entry point EP by setuptools
# Ignore setuptools-generated scripts
# directory creation is lazy and after file filtering
# to ensure we don't install empty dirs; empty dirs can't be
# uninstalled.
# We de-duplicate installation paths, since there can be overlap (e.g.
# file in .data maps to same location as file in wheel root).
# Sorting installation paths makes it easier to reproduce and debug
# issues related to permissions on existing files.
# Compile all of the pyc files for the installed files
# Ensure old scripts are overwritten.
# See https://github.com/pypa/pip/issues/1800
# Ensure we don't generate any variants for scripts because this is almost
# never what somebody wants.
# See https://bitbucket.org/pypa/distlib/issue/35/
# This is required because otherwise distlib creates scripts that are not
# executable.
# See https://bitbucket.org/pypa/distlib/issue/32/
# Generate the console and GUI entry points specified in the wheel
# Record pip as the installer
# Record the PEP 610 direct URL reference
# Record the REQUESTED file
# Record details of all files installed
# Explicitly cast to typing.IO[str] as a workaround for the mypy error:
# "writer" has incompatible type "BinaryIO"; expected "_Writer"
# We don't want to blindly return cached data for
# /simple/, because authors generally expect that
# twine upload && pip install will function, but if
# they've done a pip install in the last ~10 minutes
# it won't. Thus by setting this to zero we will not
# blindly use any cached data; however, the benefit of
# using max-age=0 instead of no-cache is that we will
# still support conditional requests, so we will still
# minimize traffic sent in cases where the page hasn't
# changed at all; we will just always incur the round
# trip for the conditional GET now instead of only
# once per 10 minutes.
# For more information, please see pypa/pip#5670.
# The check for archives above only works if the URL ends with
# something that looks like an archive. However, that is not a
# requirement of a URL. Unless we issue a HEAD request on every
# URL, we cannot know ahead of time for sure if something is a
# Simple API response or not. However, we can check after we've
# downloaded it.
# Check for VCS schemes that do not support lookup as web pages.
# Tack index.html onto file:// URLs that point to directories
# add trailing slash if not present so urljoin doesn't trim
# final segment
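The trailing-slash caveat is a property of `urllib.parse.urljoin`: without the slash, the final path segment is treated as a file and replaced when resolving a relative link.

```python
from urllib.parse import urljoin

base = "https://example.com/simple/requests"
# No trailing slash: the final "requests" segment is replaced.
print(urljoin(base, "requests-2.31.0.tar.gz"))
# https://example.com/simple/requests-2.31.0.tar.gz

# With the trailing slash, the final segment is preserved.
print(urljoin(base + "/", "requests-2.31.0.tar.gz"))
# https://example.com/simple/requests/requests-2.31.0.tar.gz
```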
# TODO: In the future, it would be nice if pip supported PEP 691
# Make sure find_links is a list before passing to create().
# The OrderedDict calls deduplicate sources by URL.
# File must have a valid wheel or sdist name,
# otherwise not worth considering as a package
# Get existing instance of _FlatDirectoryToUrls if it exists
# Is a local path.
# A file: URL.
# Include the wheel's tags in the reason string to
# simplify troubleshooting compatibility issues.
# This should be up by the self.ok_binary check, but see issue 2700.
# Make sure we're not returning back the given value.
# Collect the non-matches for logging purposes.
# Since the index of the tag in the _supported_tags list is used
# as a priority, precompute a map from tag to index/priority to be
# used in wheel.find_most_preferred_tag.
# Using None infers from the specifier instead.
# We turn the version object into a str here because otherwise
# when we're debundled but setuptools isn't, Python will see
# packaging.version.Version and
# pkg_resources._vendor.packaging.version.Version as different
# types. This way we'll use a str as a common data interchange
# format. If we stop using the pkg_resources provided specifier
# and start using our own, we can drop the cast to str().
# can raise InvalidWheelFilename
# sdist
# -1 for yanked.
# These are boring links that have already been logged somehow.
# Cache of the result of finding candidates
# session.verify is either a boolean (use default bundle/no SSL
# verification) or a string path to a custom CA bundle to use. We only
# care about the latter.
# Put the link at the end so the reason is more visible and because
# the link string is usually very long.
# we need to have a URL
# it's not a local file
# This is an intentional priority ordering
# This repeated parse_version and str() conversion is needed to
# handle different vendoring sources from pip and pkg_resources.
# If we stop using the pkg_resources provided specifier and start
# using our own, we can drop the cast to str().
# We have an existing version, and it's the best version
# Project name and version must be separated by one single dash. Find all
# occurrences of dashes; if the string in front of it matches the canonical
# name, this is the one separating the name and version parts.
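The dash-scanning idea above can be sketched as follows; both function names are made up, and `canonicalize` is a simplified stand-in for `packaging.utils.canonicalize_name`:

```python
import re


def canonicalize(name: str) -> str:
    # Simplified stand-in for packaging.utils.canonicalize_name.
    return re.sub(r"[-_.]+", "-", name).lower()


def split_name_version(fragment: str, canonical_name: str):
    # Try every dash; the one whose left side canonicalizes to the
    # project name is the separator between name and version.
    for m in re.finditer("-", fragment):
        candidate = fragment[: m.start()]
        if canonicalize(candidate) == canonical_name:
            return candidate, fragment[m.start() + 1 :]
    raise ValueError(f"{fragment!r} does not match {canonical_name!r}")
```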
# This file intentionally does not import submodules
# help position must be aligned with __init__.parseopts.description
# leave full control over description to us
# some doc strings have initial newlines, some don't
# some doc strings have final newlines and spaces, some don't
# dedent, then reindent
# leave full control over epilog to us
# If it's not a list, we should abort and just return the help text
# Configuration gives keys in an unordered manner. Order them.
# Pool the options into different groups
# noqa: PERF102
# ignore empty values
# Yield each group in their override order
# Accumulate complex default state.
# Then set the options with those values
# '--' because configuration supports only long names
# Ignore options not present in this parser. E.g. non-globals put
# in [global] by users that want them to apply to all applicable
# commands.
# From take_action
# Load the configuration, or error out in case of an error
# ours
# Return None rather than an empty list
# there's no type annotation on requests.Session, so it's
# automatically ContextManager[Any] and self._session becomes Any,
# then https://github.com/python/mypy/issues/7696 kicks in
# Handle custom ca-bundles from the user
# Handle SSL client certificate
# Handle timeouts
# Handle configured proxies
# Determine if we can prompt the user for authentication or not
# Make sure the index_group options are present.
# Otherwise, check if we're using the latest version of pip available.
# Don't complete if user hasn't sourced bash_completion file.
# Don't complete if autocompletion environment variables
# are not present
# subcommand
# subcommand options
# special case: 'help' subcommand has no options
# special case: list locally installed dists for show and uninstall
# if there are no dists installed, fall back to option completion
# filter out previously specified options from available options
# filter options by current input
# get completion type given cwords and available subcommand options
# get completion files and directories if ``completion_type`` is
# ``<file>``, ``<dir>`` or ``<path>``
# append '=' to options which require args
# Complete sub-commands (unless one is already given).
# show main parser options only when necessary
# get completion type given cwords and all available options
# Don't complete paths if they can't be accessed
# list all files that start with ``filename``
# complete regular files when there is not ``<dir>`` after option
# complete directories when there is ``<file>``, ``<path>`` or
# ``<dir>`` after option
# Installations or downloads using dist restrictions must not combine
# source distributions and dist-specific wheels, as they are not
# guaranteed to be locally compatible.
###########
# options #
# Don't ask for input
# Option for when the path already exists
# This was made a separate function for unit-testing purposes.
# The empty string is the same as not providing a value.
# Then we are in the case of "3" or "37".
# The value argument will be None if --no-cache-dir is passed via the
# command-line, since the option doesn't accept arguments.  However,
# the value can be non-None if the option is triggered e.g. by an
# environment variable, like PIP_NO_CACHE_DIR=true.
# Then parse the string value to get argument error-checking.
# Originally, setting PIP_NO_CACHE_DIR to a value that strtobool()
# converted to 0 (like "false" or "no") caused cache_dir to be disabled
# rather than enabled (logic would say the latter).  Thus, we disable
# the cache directory not just on values that parse to True, but (for
# backwards compatibility reasons) also on values that parse to False.
# In other words, always set it to False if the option is provided in
# some (valid) form.
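A sketch of that backwards-compatibility rule; `_strtobool` is a minimal stand-in for the removed `distutils.util.strtobool`, and the handler name is made up:

```python
from typing import Optional


def _strtobool(val: str) -> int:
    # Minimal stand-in for the removed distutils.util.strtobool.
    val = val.lower()
    if val in ("y", "yes", "t", "true", "on", "1"):
        return 1
    if val in ("n", "no", "f", "false", "off", "0"):
        return 0
    raise ValueError(f"invalid truth value {val!r}")


def handle_no_cache_dir(value: Optional[str]) -> bool:
    # value is None when the flag comes from the command line; a string
    # comes from e.g. the PIP_NO_CACHE_DIR environment variable.
    if value is not None:
        _strtobool(value)  # error-check only; the result is ignored
    # Any valid form of the option disables the cache, even "false",
    # for backwards compatibility.
    return False
```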
# check for 'pyproject.toml' filenames using pathlib
# Since --no-use-pep517 doesn't accept arguments, the value argument
# will be None if --no-use-pep517 is passed via the command-line.
# However, the value can be non-None if the option is triggered e.g.
# by an environment variable, for example "PIP_NO_USE_PEP517=true".
# If user doesn't wish to use pep517, we check if setuptools is installed
# and raise error if it is not.
# Otherwise, --no-use-pep517 was passed via the command-line.
# Hash values eventually end up in InstallRequirement.hashes due to
# __dict__ copying in process_line().
# No-op, a hold-over from the Python 2->3 transition.
# Features that are now always on. A warning is printed if they are used.
# always on since 24.2
# always on since 23.1
##########
# groups #
# add the general options
# so the help formatter knows
# create command listing for description
# If the named file exists, use it.
# If it's a directory, assume it's a virtual environment and
# look for the environment's Python executable.
# bin/python for Unix, Scripts/python.exe for Windows
# Try both in case of odd cases like cygwin.
# Could not find the interpreter specified
# Note: parser calls disable_interspersed_args(), so the result of this
# call is to split the initial args into the general options before the
# subcommand and everything else.
# For example:
# --python
# Re-invoke pip using the specified Python interpreter
# Set a flag so the child doesn't re-invoke itself, causing
# an infinite loop.
# --version
# pip || pip help -> print_help()
# the subcommand name
# all the args without the subcommand
# Do not import and use main() directly! Using it directly is actively
# discouraged by pip's maintainers. The name, location and behavior of
# this function is subject to change, so calling it directly is not
# portable across different pip versions.
# In addition, running pip in-process is unsupported and unsafe. This is
# elaborated in detail at
# https://pip.pypa.io/en/stable/user_guide/#using-pip-from-your-program.
# That document also provides suggestions that should work for nearly
# all users that are considering importing and using main() directly.
# However, we know that certain users will still want to invoke pip
# in-process. If you understand and accept the implications of using pip
# in an unsupported manner, the best approach is to use runpy to avoid
# depending on the exact location of this entry point.
# The following example shows how to use runpy to invoke pip in that
# case:
# Note that this will exit the process after running, unlike a direct
# call to main. As it is not safe to do any processing after calling
# main, this should not be an issue in practice.
# Suppress the pkg_resources deprecation warning
# Note - we use a module of .*pkg_resources to cover
# the normal case (pip._vendor.pkg_resources) and the
# devendored case (a bare pkg_resources)
# Configure our deprecation warnings to be sent through loggers
# Needed for locale.getpreferredencoding(False) to work
# in pip._internal.utils.encoding.auto_decode
# setlocale can apparently crash if the locale is uninitialized
# Commands should add options to this option group
# Add the general options
# Make sure we do the pip version check if the index_group options
# are present.
# Bypass our logger and write any remaining messages to
# stderr because stdout no longer works.
# factored out for testability
# We must initialize this before the tempdir manager, otherwise the
# configuration would not be accessible by the time we clean up the
# tempdir manager.
# Intentionally set as early as possible so globally-managed temporary
# directories are available to the rest of the code.
# Set verbosity so that it can be used elsewhere.
# Make sure that the --python argument isn't specified after the
# subcommand. We can tell, because if --python was specified,
# we should only reach this point if we're running in the created
# subprocess, which has the _PIP_RUNNING_IN_SUBPROCESS environment
# variable set.
# TODO: Try to get these passing down from the command?
# If a venv is required check if it can really be found
# Empirically, 8 updates/second looks nice
# Erase what we wrote before by backspacing to the beginning, writing
# spaces to overwrite the old text, and then backspacing again
# Now we have a blank slate to add our status
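The erase-then-write sequence can be sketched against any text stream (the function name is made up):

```python
import io


def rewrite_line(stream, old_len: int, new_text: str) -> None:
    # Backspace to the beginning, blank the old text with spaces,
    # backspace again, then write the new status.
    stream.write("\b" * old_len)
    stream.write(" " * old_len)
    stream.write("\b" * old_len)
    stream.write(new_text)


buf = io.StringIO()
buf.write("old")
rewrite_line(buf, 3, "new")
```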
# Used for dumb terminals, non-interactive installs (no tty), etc.
# We still print updates occasionally (once every 60 seconds by default) to
# act as a keep-alive for systems like Travis-CI that take lack-of-output as
# an indication that a task has frozen.
# Interactive spinner goes directly to sys.stdout rather than being routed
# through the logging system, but it acts like it has level INFO,
# i.e. it's only displayed if we're at level INFO or better.
# Non-interactive spinner goes through the logging system, so it is always
# in sync with logging configuration.
# Don't show spinner if --quiet is given.
# The Windows terminal does not support the hide/show cursor ANSI codes,
# even via colorama. So don't even try.
# We don't want to clutter the output with control characters if we're
# writing to a file, or if the user is running with --quiet.
# See https://github.com/pypa/pip/issues/3418
# This kind of conflict can occur when the user passes an explicit
# build directory with a pre-existing folder. In that case we do
# not want to accidentally remove it.
# The long import name and duplicated invocation are needed to convince
# Mypy to typecheck correctly. Otherwise it would complain about the
# "Resolver" class being redefined.
# NOTE: options.require_hashes may be set if --require-hashes is True
# If any requirement has hash options, enable hash checking.
# Display where finder is looking for packages
# Hiding the progress bar at initialization forces a refresh cycle to occur
# until the bar appears, avoiding very short flashes.
# no-op, when passed an iterator
# if install did not succeed, rollback previous uninstall
# Matches environment variable-style values in '${MY_VARIABLE_1}' with the
# variable name consisting of only uppercase letters, digits or the '_'
# (underscore). This follows the POSIX standard defined in IEEE Std 1003.1,
# 2013 Edition.
# options to be passed to requirements
# the 'dest' string values
# order of BOMS is important: codecs.BOM_UTF16_LE is a prefix of codecs.BOM_UTF32_LE
# so data.startswith(BOM_UTF16_LE) would be true for UTF32_LE data
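The prefix relationship is easy to verify, and it dictates that longer BOMs be checked first; a sketch of such an ordered check (the function name is made up):

```python
import codecs

# BOM_UTF16_LE (b"\xff\xfe") is a prefix of BOM_UTF32_LE
# (b"\xff\xfe\x00\x00"), so the 4-byte BOMs must come first.
BOMS = [
    (codecs.BOM_UTF8, "utf-8-sig"),
    (codecs.BOM_UTF32_LE, "utf-32-le"),
    (codecs.BOM_UTF32_BE, "utf-32-be"),
    (codecs.BOM_UTF16_LE, "utf-16-le"),
    (codecs.BOM_UTF16_BE, "utf-16-be"),
]


def detect_bom_encoding(data: bytes):
    for bom, encoding in BOMS:
        if data.startswith(bom):
            return encoding
    return None
```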
# TODO: replace this with slots=True when dropping Python 3.9 support.
# We don't support multiple -e on one line
# preserve for the nested code path
# get the options that apply to requirements
# percolate options upward
# set finder options
# FIXME: it would be nice to keep track of the source
# of the find_links: support a find-links local path
# relative to a requirements file.
# We need to update the auth urls in session
# parse a nested requirements file
# original file is over http
# do a url join so relative paths work
# original file and nested file are paths
# do a join so relative paths work
# and then abspath so that we can identify recursive references
# Keep track of where each file was first included
# add offending line
# Build new parser for each line since it accumulates appendable
# options.
# By default optparse sys.exits on parsing errors. We want to wrap
# that in our own exception.
# NOTE: mypy disallows assigning to a method
# this ensures comments are always matched later
# last line contains \
# TODO: handle space after '\'.
# Pip has special support for file:// URLs (LocalFSAdapter).
# Delay importing heavy network modules until absolutely necessary.
# Assume this is a bare path.
# see https://peps.python.org/pep-0508/#complete-grammar
# ireq.req is a valid requirement so the regex should always match
# If a file path is specified with extras, strip off the extras.
# Treating it as code that has already been checked out
# Create a steppable iterator, so we can handle \-continuations.
# Skip blank lines/comments.
# Drop comments -- a hash without a space may be in a URL.
# If there is a line continuation, drop it, and append the next line.
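A sketch of those two preprocessing steps (the names are made up; pip's real implementation differs in details): comments are dropped only when the `#` follows whitespace, so a URL fragment like `#sha256=...` survives, and backslash continuations are merged with the following line.

```python
import re

# A '#' preceded by whitespace (or at line start) begins a comment.
_COMMENT_RE = re.compile(r"(^|\s+)#.*$")


def strip_comment(line: str) -> str:
    return _COMMENT_RE.sub("", line)


def join_continuations(lines):
    # Merge each line ending in a backslash with the line after it.
    buffer = []
    for line in lines:
        if line.endswith("\\"):
            buffer.append(line[:-1])
        else:
            buffer.append(line)
            yield "".join(buffer)
            buffer = []
    if buffer:  # trailing continuation with no following line
        yield "".join(buffer)
```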
# Try to parse and check if it is a requirements file.
# ---- The actual constructors follow ----
# TODO: The is_installable_dir test here might not be necessary
# If the path contains '@' and the part before it does not look
# like a path, try to treat it as a PEP 440 URL req instead.
# it's a local file, dir, or url
# Handle relative file URLs
# wheel file
# set the req to the egg fragment.  when it's not there, this
# will become an 'unnamed' requirement
# a requirement specifier
# Explicitly disallow pypi packages that depend on external urls
# source_dir is the local directory where the linked requirement is
# located, or unpacked. In case unpacking is needed, creating and
# populating source_dir is done by the RequirementPreparer. Note this
# is not necessarily the directory where pyproject.toml or setup.py is
# located - that one is obtained via unpacked_source_directory.
# original_link is the direct URL that was provided by the user for the
# requirement, either directly or via a constraints file.
# PEP 508 URL requirement
# When this InstallRequirement is a wheel obtained from the cache of locally
# built wheels, this is the source link corresponding to the cache entry, which
# was used to download and build the cached wheel.
# Information about the location of the artifact that was downloaded. This
# property is guaranteed to be set in resolver results.
# Path to any downloaded or already-existing package.
# This holds the Distribution object if this requirement is already installed.
# Whether the installation process should try to uninstall an existing
# distribution before installing this requirement.
# Temporary build location
# Set to True after successful installation
# Supplied options
# Set to True after successful preparation of this requirement
# User-supplied requirements are explicitly requested for installation
# by the user via CLI arguments or requirements files, as opposed to,
# e.g. dependencies, extras or constraints.
# For PEP 517, the directory where we request the project metadata
# gets stored. We need this to pass to build_wheel, so the backend
# can ensure that the wheel matches the metadata (see the PEP for
# details).
# The static build requirements (from pyproject.toml)
# Build requirements that we will check are available
# The PEP 517 backend we should use to build the project
# Are we using PEP 517 for this requirement?
# After pyproject.toml has been loaded, the only valid values are True
# and False. Before loading, None is valid (meaning "use the default").
# Setting an explicit value before loading pyproject.toml is supported,
# but after loading this flag should be treated as read only.
# If config settings are provided, enforce PEP 517.
# This requirement needs more preparation before it can be built
# This requirement needs to be unpacked before it can be installed.
# Things that are valid for all kinds of requirements?
# Provide an extra to safely evaluate the markers
# without matching any extra
# Some systems have /tmp as a symlink which confuses custom
# builds (such as numpy). Thus, we ensure that the real path
# is returned.
# This is the only remaining place where we manually determine the path
# for the temporary directory. It is only needed for editables where
# it is the value of the --src option.
# When parallel builds are enabled, add a UUID to the build directory
# name so multiple builds do not interfere with each other.
# FIXME: Is there a better place to create the build_dir? (hg and bzr
# need this)
# `None` indicates that we respect the globally-configured deletion
# settings, which is what we actually want when auto-deleting.
# Construct a Requirement object from the generated metadata
# Everything is fine.
# If we're here, there's a mismatch. Log a warning about it.
# when installing editables, nothing pre-existing should ever
# satisfy
# Things valid for wheels
# When True, it means that this InstallRequirement is a local wheel file in the
# cache of locally built wheels.
# Things valid for sdists
# Act on the newly generated metadata, based on the name and version.
# For both source distributions and editables
# If a checkout exists, it's unwise to keep going.
# version inconsistencies are logged later, but do not fail
# the installation.
# For editable installations
# Static paths don't get updated
# Editable requirements are validated in Requirement constructors.
# So here, if it's neither a path nor a valid VCS URL, it's a bug.
# Top-level Actions
# 0o755
# Check for unsupported forms
# No plan yet for when the new resolver becomes default
# This directory has already been handled.
# If all the files we found are in our remaining set of files to
# remove, then remove them from the latter set and add a wildcard
# for the directory.
# Determine folders and files
# This walks the tree using os.walk to not miss extra folders
# that might get added.
# We are skipping this file. Add it to the set.
# Mapping from source file root to [Adjacent]TempDirectory
# for files under that directory.
# (old path, new path) tuples for each move that may need
# to be undone.
# Did not find any suitable root
# If we're moving a directory, we need to
# remove the destination first or else it will be
# moved to inside the existing directory.
# We just created new_path ourselves, so it will
# be removable.
# Create local cache of normalize_path results. Creating an UninstallPathSet
# can result in hundreds/thousands of redundant calls to normalize_path with
# the same args, which hurts performance.
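One way to get such a per-argument cache with no manual bookkeeping is `functools.lru_cache`; a sketch:

```python
import functools
import os.path


@functools.lru_cache(maxsize=None)
def normalize_path(path: str) -> str:
    # Resolve symlinks and normalize case once per distinct argument;
    # repeated calls with the same path are served from the cache.
    return os.path.normcase(os.path.realpath(path))


normalize_path("a/b")
normalize_path("a/b")  # cache hit, no second realpath call
```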
# aka is_local, but caching normalized sys.prefix
# we normalize the head to resolve parent directory symlinks, but not
# the tail, since we only want to uninstall symlinks, not their targets
# __pycache__ files can show up after 'installed-files.txt' is created,
# due to imports
# In verbose mode, display all the files that are going to be
# deleted.
# Distribution is installed with metadata in a "flat" .egg-info
# directory. This means it is not a modern .dist-info installation, an
# egg, or legacy editable.
# If dist is editable and the location points to a ``.egg-info``,
# we are in fact in the legacy editable case.
# The order of the uninstall cases does matter: with 2 installs of the
# same package, pip needs to uninstall the currently detected version
# FIXME: need a test for this elif block
# occurs with --single-version-externally-managed/--record outside
# of pip
# package installed by easy_install
# We cannot match on dist.egg_name because it can vary slightly,
# e.g. setuptools-0.6c11-py2.6.egg vs setuptools-0.6rc11-py2.6.egg
# XXX We use normalized_dist_location because dist_location may contain
# a trailing / if the distribution is a zipped egg
# (which is not a directory).
# PEP 660 modern editable is handled in the ``.dist-info`` case
# above, so this only covers the setuptools-style editable.
# find distutils scripts= scripts
# find console_scripts and gui_scripts
# On Windows, os.path.normcase converts the entry to use
# backslashes.  This is correct for entries that describe absolute
# paths outside of site-packages, but all the others use forward
# slashes.
# os.path.splitdrive is used instead of os.path.isabs because isabs
# treats non-absolute paths with drive letter markings like c:foo\bar
# as absolute paths. It also does not recognize UNC paths if they don't
# have more than "\\server\share". Valid examples: "\\server\share\" or
# "\\server\share\folder".
# If the file doesn't exist, log a warning and return
# Windows uses '\r\n' with py3k, but '\n' with py2.x
# handle missing trailing newline
# On Python >=3.14 we only support importlib.metadata.
# On Python <3.14, if the environment variable is set, we obey what it says.
# On Python <3.11, we always use pkg_resources, unless the environment
# variable was set.
# On Python 3.11, 3.12 and 3.13, we check if the global constant is set.
# All pip versions supporting Python<=3.11 will support pkg_resources,
# and pkg_resources is the default for these, so let's not bother users.
# The Python distributor has set the global constant, so we don't
# warn, since it is not a user decision.
# The user has decided to use pkg_resources, so we warn.
# TODO: Move definition here.
# TODO: this property is relatively costly to compute, memoize it ?
# Search for an .egg-link file by walking sys.path, as it was
# done before by dist_is_editable().
# TODO: get project location from second line of egg_link file
# XXX if the distribution is a zipped egg, location has a trailing /
# so we resort to pathlib.Path to check the suffix in a reliable way.
# Fail silently if the installer file cannot be read.
# The metadata should NEVER be missing the Name: key, but if it somehow
# does, fall back to the known canonical name.
# Convert to str to satisfy the type checker; this can be a Header object.
# This extra Path-str cast normalizes entries.
# info is not relative to root.
# info *is* root.
# Section-less entries don't have markers.
# Comment; ignored.
# A section header.
# Make sure the distribution actually comes from a valid Python
# packaging distribution. Pip's AdjacentTempDirectory leaves folders
# e.g. ``~atplotlib.dist-info`` if cleanup was interrupted. The
# valid project name pattern is taken from PEP 508.
# Augment the default error with the origin of the file.
# This is populated lazily, to avoid loading metadata for all possible
# distributions eagerly.
# Build a PathMetadata object, from path to metadata. :wink:
# Determine the correct Distribution object type.
# A distutils-installed distribution is provided by FileMetadata. This
# provider has a "path" attribute not present anywhere else. Not the
# best introspection logic, but pip has been doing this for a long time.
# Search the distribution by looking through the working set.
# If distribution could not be found, call working_set.require to
# update the working set, and try to find the distribution again.
# This might happen for e.g. when you install a package twice, once
# using setup.py develop and again using setup.py install. Now when
# running pip uninstall twice, the package gets removed from the
# working set in the first uninstall, so we have to populate the
# working set again so that pip knows about it and the package gets
# picked up and is successfully uninstalled the second time too.
# We didn't pass in any version specifiers, so this can never
# raise pkg_resources.VersionConflict.
# Extracted from https://github.com/pfmoore/pkg_metadata
# Name, Multiple-Use
# See if UTF-8 works
# If not, latin1 at least won't fail
# Accept both comma-separated and space-separated
# forms, for better compatibility with old data.
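A sketch of accepting both separator forms (the function name is made up):

```python
def split_keywords(value: str):
    # Accept both "a, b, c" and "a b c", for compatibility with old data.
    if "," in value:
        return [part.strip() for part in value.split(",") if part.strip()]
    return value.split()
```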
# Only allow iterating through the metadata directory.
# This method doesn't make sense for our in-memory wheel, but the API
# requires us to define it.
# Generate temp dir to contain the metadata file, and write the file contents.
# Construct dist pointing to the newly created directory.
# A distutils installation is always "flat" (not in e.g. egg form), so
# if this distribution's info location is NOT a pathlib.Path (but e.g.
# zipfile.Path), it can never contain any distutils scripts.
# importlib.metadata's EntryPoint structure satisfies BaseEntryPoint.
# From Python 3.10+, importlib.metadata declares PackageMetadata as the
# return type. This protocol is unfortunately a disaster now and misses
# a ton of fields that we need, including get() and get_payload(). We
# rely on the implementation that the object is actually a Message now,
# until upstream can improve the protocol. (python/cpython#94952)
# strip() because email.message.Message.get_all() may return a leading \n
# in case a long header was wrapped.
# Skip looking inside a wheel. Since a package inside a wheel is not
# always valid (due to .data directories etc.), its .dist-info entry
# should not be considered an installed distribution.
# To know exactly where we find a distribution, we have to feed in the
# paths one by one, instead of dumping the list to importlib.metadata.
# Expose a limited set of classes and functions so callers outside of
# the vcs package don't need to import deeper than `pip._internal.vcs`.
# (The test directory may still need to import from a vcs sub-package.)
# Import all vcs modules to register each VCS in the VcsSupport object.
# find the repo root
# Older versions of pip used to create standalone branches.
# Convert the standalone branch to a checkout by calling "bzr bind".
# Hotfix the URL scheme: after removing bzr+ from bzr+ssh://, re-add it
# Prefix.
# Major.
# Dot, minor.
# Optional dot, patch.
# Suffix, including any pre- and post-release segments we don't care about.
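The five comments above read like part-by-part annotations of a version-string regex; a sketch assembled from them, where the literal `"git version "` prefix is an assumption:

```python
import re

VERSION_RE = re.compile(
    r"^git version "  # Prefix (the literal prefix is an assumption here).
    r"(\d+)"          # Major.
    r"\.(\d+)"        # Dot, minor.
    r"(?:\.(\d+))?"   # Optional dot, patch.
    r".*$"            # Suffix, including pre- and post-release segments.
)

match = VERSION_RE.match("git version 2.39.2.windows.1")
```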
# SCP (Secure copy protocol) shorthand. e.g. 'git@example.com:foo/bar.git'
# Prevent the user's environment variables from interfering with pip:
# https://github.com/pypa/pip/issues/1130
# the current commit is different from rev,
# which means rev was something other than a commit hash
# return False in the rare case rev is both a commit hash
# and a tag or a branch; we don't want to cache in that case
# because that branch/tag could point to something else in the future
# git-symbolic-ref exits with empty stdout if "HEAD" is a detached
# HEAD rather than a symbolic ref.  In addition, the -q causes the
# command to exit with status code 1 instead of 128 in this case
# and to suppress the message to stderr.
# Pass rev to pre-filter the list.
# NOTE: We do not use splitlines here since that would split on other
# Unicode line separators besides '\n'.
# Include the offending line to simplify troubleshooting if
# this error ever occurs.
# Always fetch remote refs.
# Git fetch would fail with abbreviated commits.
# Don't fetch if we have the commit locally.
# The arg_rev property's implementation for Git ensures that the
# rev return value is always non-None.
# Do not show a warning for the common case of something that has
# the form of a Git commit hash.
# fetch the requested revision
# Change the revision to the SHA of the ref we fetched
# Then avoid an unnecessary subprocess call.
# Git added support for partial clone in 2.17
# https://git-scm.com/docs/partial-clone
# Speeds up cloning by functioning without a complete copy of the repository
# Then a specific revision was requested.
# Only do a checkout if the current commit id doesn't match
# the requested revision.
# Then a specific branch was requested, and that branch
# is not yet checked out.
#: repo may contain submodules
# First fetch changes from the default remote
# fetch tags in addition to everything else
# Then reset to wanted revision (maybe even origin/master)
#: update submodules
# We need to pass 1 for extra_ok_returncodes since the command
# exits with return code 1 if there are no matching lines.
# This is already valid. Pass it through as-is.
# A local bare remote (git clone --mirror).
# Needs a file:// prefix.
# Add an ssh:// prefix and replace the ':' with a '/'.
# Otherwise, bail out.
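The four cases above can be sketched as one normalization function (the names are made up, and `is_local_path` is injected so the sketch stays self-contained and testable):

```python
import re


def normalize_remote_url(url: str, is_local_path) -> str:
    if "://" in url:
        return url  # already a valid URL; pass through as-is
    if is_local_path(url):
        # A local bare remote (git clone --mirror) needs a file:// prefix.
        return "file://" + url
    if re.match(r"^\S+@\S+:", url):
        # SCP shorthand: add an ssh:// prefix and turn the first ':' into '/'.
        return "ssh://" + url.replace(":", "/", 1)
    # Otherwise, bail out.
    raise ValueError(f"cannot determine remote URL format: {url!r}")
```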
# Works around an apparent Git bug
# (see https://article.gmane.org/gmane.comp.version-control.git/146500)
# Note: taken from setuptools.command.egg_info
# no sense walking uncontrolled subdirs
# FIXME: should we warn?
# save the root url
# not part of the same svn tree, skip it
# The --username and --password options can't be used for
# svn+ssh URLs, so keep the auth information in the URL.
# Hotfix the URL scheme: after removing svn+ from svn+ssh://, re-add it
# In cases where the source is in a subdirectory, we walk up from the
# location until we find a valid project root.
# We've traversed up to the root of the filesystem without
# finding a Python project.
# subversion >= 1.7 does not have the 'entries' file
# get rid of the '8'
# get repository URL
# subversion >= 1.7
# Note that using get_remote_call_options is not necessary here
# because `svn info` is being run against a local directory.
# We don't need to worry about making sure interactive mode
# is being used to prompt for passwords, because passwords
# are only potentially needed for remote server requests.
# This member is used to cache the fetched version of the current
# ``svn`` client.
# Special value definitions:
# Example versions:
# Use cached version, if available.
# If parsing the version failed previously (empty tuple),
# do not attempt to parse it again.
# --non-interactive switch is available since Subversion 0.14.4.
# Subversion < 1.8 runs in interactive mode by default.
# By default, Subversion >= 1.8 runs in non-interactive mode if
# stdin is not a TTY. Since that is how pip invokes SVN, in
# call_subprocess(), pip must pass --force-interactive to ensure
# the user can be prompted for a password, if required.
# e.g. RHEL/CentOS 7, which is supported until 2024, ships with
# SVN 1.7, pip should continue to support SVN 1.7. Therefore, pip
# can't safely add the option if the SVN version is < 1.8 (or unknown).
# find project root.
# Register more schemes with urlparse for various version control
# systems
# Choose the VCS in the inner-most directory. Since all repository
# roots found here would be either `location` or one of its
# parents, the longest path should have the most path components,
# i.e. the backend representing the inner-most repository.
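A sketch of that longest-path selection (the function name and the mapping shape are assumptions):

```python
import pathlib


def innermost_backend(roots):
    # roots: mapping of repo-root path -> backend name. All roots are
    # `location` or one of its parents, so the path with the most
    # components is the inner-most repository.
    deepest = max(roots, key=lambda p: len(pathlib.PurePosixPath(p).parts))
    return roots[deepest]
```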
# List of supported schemes for this Version Control
# Iterable of environment variable names to pass to call_subprocess().
# Remove the vcs prefix.
# https://github.com/python/mypy/issues/1174
# Do nothing if the response is "i".
# errno.ENOENT = no such file or directory
# In other words, the VCS executable isn't available
# errno.EACCES = Permission denied
# This error occurs, for instance, when the command is installed
# only for another user, so the current user doesn't have
# permission to call the other user's command.
# Windows can raise spurious ENOTEMPTY errors. See #6426.
# Retry every half second for up to 3 seconds
# See https://docs.python.org/3.12/whatsnew/3.12.html#shutil.
# noqa: PLE0704 - Bare exception used to reraise existing exception
# it's equivalent to os.path.exists
# convert to read/write
# use the original function to repeat the operation
# Implementation borrowed from os.renames().
# compileall.compile_dir() needs stdout.encoding to print to stdout
# type ignore is because TextIOBase.encoding is writeable
# Simulates an enum
# Only wrap host with square brackets when it is IPv6
# It must be a bare IPv6 address, so wrap it with brackets.
# Split from the right because that's how urllib.parse.urlsplit()
# behaves if more than one @ is present (which can be checked using
# the password attribute of urlsplit()'s return value).
# Split from the left because that's how urllib.parse.urlsplit()
# behaves if more than one : is present (which again can be checked
# using the password attribute of the return value)
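The splitting rules above (rsplit on `@`, split on `:`) can be sketched like this; the function name is modeled on pip's helper, but the body here is a sketch:

```python
def split_auth_from_netloc(netloc):
    """Split 'user:pass@host' into ((user, pass), host)."""
    if "@" not in netloc:
        return (None, None), netloc
    # rsplit: everything before the *last* '@' is the userinfo, matching
    # urllib.parse.urlsplit's treatment of multiple '@' characters.
    auth, host = netloc.rsplit("@", 1)
    if ":" in auth:
        # Split from the left: the *first* ':' separates user and password.
        user, password = auth.split(":", 1)
    else:
        user, password = auth, None
    return (user, password), host

print(split_auth_from_netloc("user:p@ss@example.com"))
# (('user', 'p@ss'), 'example.com')
```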
# stripped url
# username/pass params are passed to subversion through flags
# and are not recognized in the url.
# This is useful for testing.
# The string being used for redaction doesn't also have to match,
# just the raw, original string.
# See https://github.com/pypa/pip/issues/1299 for more discussion
# On Windows, there are no "system managed" Python packages. Installing as
# Administrator via pip is the correct way of updating system environments.
# We choose sys.platform over utils.compat.WINDOWS here to enable Mypy platform
# checks: https://mypy.readthedocs.io/en/stable/common_issues.html
# Only for Python 3.3+
# if mode and regular file and any execute permissions for
# user/group/world?
# A directory
# Don't use read() to avoid allocating an arbitrarily large
# chunk of memory for the file's content
# PEP 706 added `tarfile.data_filter`, and made some other changes to
# Python's tarfile module (see below). The features were backported to
# security releases.
# Strip the leading directory from all files in the archive,
# including hardlink targets (which are relative to the
# unpack location).
# The tarfile filter in specific Python versions
# raises LinkOutsideDestinationError on valid input
# (https://github.com/python/cpython/issues/107845)
# Ignore the error there, but do use the
# more lax `tar_filter`
# Filter error messages mention the member name.
# No need to add it here.
# See PEP 706 note above.
# The PEP changed this from `int` to `Optional[int]`,
# where None means "use the default". Mypy doesn't
# know this yet.
# type: ignore [assignment]
# Some corrupt tar files seem to produce this
# (specifically bad symlinks)
# Update the timestamp (useful for cython compiled files)
# does the member have any execute permissions for user/group/world?
# FIXME: handle?
# FIXME: magic signatures?
# custom log level for `--verbose` output
# between DEBUG and INFO
# Kinds of temporary directories. Only needed for ones that are
# globally-managed.
# If we were given an explicit directory, resolve delete option
# now.
# Otherwise, we wait until cleanup and see what
# tempdir_registry says.
# The only time we specify path is for editables, where it
# is the value of the --src option.
# We realpath here because some systems have their default tmpdir
# symlinked to another directory.  This tends to confuse build
# scripts, so we canonicalize the path by traversing potential
# symlinks here.
# remove trailing new line
# first try with @retry; retrying to handle ephemeral errors
# last pass ignore/log all errors
# The characters that may be used to name the temp directory
# We always prepend a ~ and then rotate through these until
# a usable name is found.
# pkg_resources raises a different error for .dist-info folder
# with leading '-' and invalid metadata
# If we make it this far, we will have to make a longer name
# Continue if the name exists already
# Final fallback on the default behavior.
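The name-rotation scheme described above can be sketched like so, modeled loosely on pip's `AdjacentTempDirectory` (details here are illustrative):

```python
import itertools

# Characters rotated through when generating a fallback directory name.
LEADING_CHARS = "-~.=%0123456789"

def candidate_names(original):
    """Yield '~'-prefixed variants of `original`, substituting
    progressively more of its leading characters until a usable
    (non-existing) name can be found by the caller."""
    for i in range(1, len(original)):
        for chars in itertools.combinations_with_replacement(LEADING_CHARS, i - 1):
            new_name = "~" + "".join(chars) + original[i:]
            if new_name != original:
                yield new_name

# First few candidates for "package":
print(list(itertools.islice(candidate_names("package"), 3)))
```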
# Zip file path separators must be /
# BadZipFile for general corruption, KeyError for missing entry,
# and RuntimeError for password-protected files
# FeedParser (used by Parser) does not raise any exceptions. The returned
# message may have .defects populated, but for backwards-compatibility we
# currently ignore them.
# Shim to wrap setup.py invocation with setuptools
# Note that __file__ is handled via two {!r} *and* %r, to ensure that paths on
# Windows are correctly handled (it should be "C:\\Users" not "C:\Users").
# NOTE: Eventually, we'd want to also -S to the flags here, when we're
# isolating. Currently, it breaks Python in virtualenvs, because it
# relies on site.py to find parts of the standard library outside the
# virtualenv.
# if invalid, this is a pip bug
# For VCS links, we need to find out and add commit_id.
# If the requested VCS link corresponds to a cached
# wheel, it means the requested revision was an
# immutable commit hash, otherwise it would not have
# been cached. In that case we don't have a source_dir
# with the VCS checkout.
# If the wheel was not in cache, it means we have
# had to checkout from VCS to build and we have a source_dir
# which we can inspect to find out the commit id.
# If we don't have a way to check the effective uid of this process, then
# we'll just assume that we own the directory.
# Check if path is writable by current user.
# Special handling for the root user, in order to properly handle
# cases where users use sudo without the -H flag.
# assume we don't own the path
# test_writable_dir and _test_writable_dir_win are copied from Flit,
# with the author's agreement to also place them under pip's license.
# If the directory doesn't exist, find the closest parent that does.
# Should never get here, but infinite loops are bad
# os.access doesn't work on Windows: http://bugs.python.org/issue2528
# and we can't use tempfile: http://bugs.python.org/issue22107
# This could be because there's a directory with the same name.
# But it's highly unlikely there's a directory called that,
# so we'll assume it's because the parent dir is not writable.
# This could as well be because the parent dir is not readable,
# due to non-privileged user access.
# This should never be reached
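Finding "the closest parent that does exist" can be sketched as a simple upward walk (function name hypothetical); the writability test is then run on the returned directory:

```python
import os

def nearest_existing_parent(path):
    """Walk upward from `path` to the closest directory that exists."""
    path = os.path.abspath(path)
    while not os.path.isdir(path):
        parent = os.path.dirname(path)
        if parent == path:
            # Reached the filesystem root; stop to avoid an infinite loop.
            break
        path = parent
    return path
```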
# If it's a symlink, return 0.
# NOTE: tests patch this name.
# Warnings <-> Logging Integration
# We use a specially named logger which will handle all of the
# deprecation messages for pip.
# Enable our Deprecation Warnings
# Determine whether or not the feature is already gone in this version.
# Raise as an error if this behaviour is deprecated.
# On Windows, a broken pipe can show up as EINVAL rather than EPIPE:
# https://bugs.python.org/issue19612
# https://bugs.python.org/issue30418
# For thread-safety
# Then the message already has a prefix.  We don't want it to
# look like "WARNING: DEPRECATION: ...."
# Reraise the original exception, rich 13.8.0+ exits by default
# instead, preventing our handler from firing.
# Our custom override on Rich's logger, to make things work as we need them to.
# If we are given a diagnostic error to present, present it with indentation.
# If a broken pipe occurred while calling write() or flush() on the
# stdout stream in logging's Handler.emit(), then raise our special
# exception so we can handle it in main() instead of logging the
# broken pipe error and continuing.
# The base Filter class allows only records from a logger (or its
# children).
# Determine the level to be logging at.
# The "root" logger should match the "console" level *unless* we also need
# to log to a user log file.
# Disable any logging besides WARNING unless we have DEBUG level logging
# enabled for vendored libraries.
# Shorthands for clarity
# A handler responsible for logging to the console messages
# from the "subprocessor" logger.
# Only use up to the first two numbers.
# Since we have always only checked that the platform starts
# with "macosx", for backwards-compatibility we extract the
# actual prefix provided by the user in case they provided
# something like "macosxcustom_". It may be good to remove
# this as undocumented or deprecate it in the future.
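The prefix-preserving parse described above can be sketched with a single regex; this is a sketch modeled on pip's macOS tag handling, not its exact code:

```python
import re

# Matches e.g. 'macosx_10_9_x86_64' -> ('macosx', '10', '9', 'x86_64');
# a custom prefix like 'macosxcustom_' is captured and kept as-is.
_osx_arch_pat = re.compile(r"(.+)_(\d+)_(\d+)_(.+)")

def split_macosx_platform(platform):
    """Extract (name, major, minor, arch) from a macOS platform tag."""
    match = _osx_arch_pat.match(platform)
    if match is None:
        return None  # arch pattern didn't match
    name, major, minor, arch = match.groups()
    return name, int(major), int(minor), arch

print(split_macosx_platform("macosx_10_9_x86_64"))
# ('macosx', 10, 9, 'x86_64')
```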
# arch pattern didn't match (?!)
# with "ios", for backwards-compatibility we extract the
# something like "ioscustom_". It may be good to remove
# manylinux1/manylinux2010 wheels run on most manylinux2014 systems
# with the exception of wheels depending on ncurses. PEP 599 states
# manylinux1/manylinux2010 wheels should be considered
# manylinux2014 wheels:
# https://www.python.org/dev/peps/pep-0599/#backwards-compatibility-with-manylinux2010-wheels
# manylinux1 wheels run on most manylinux2010 systems with the
# exception of wheels depending on ncurses. PEP 571 states
# manylinux1 wheels should be considered manylinux2010 wheels:
# https://www.python.org/dev/peps/pep-0571/#backwards-compatibility-with-manylinux1-wheels
# The performance counter is monotonic on all platforms we care
# about and has much better resolution than time.monotonic().
# The package provides no information
# Parsing requirement strings is expensive, and is also expected to happen
# with a low diversity of different arguments (at least relative the number
# constructed). This method adds a cache to requirement object creation to
# minimize repeated parsing of the same string to construct equivalent
# Requirement objects.
# The recommended hash algo of the moment. Change this whenever the state of
# the art changes; it won't hurt backward compatibility.
# Names of hashlib algorithms allowed by the --hash option and ``pip hash``
# Currently, those are the ones at least as collision-resistant as sha256.
# Make sure values are always sorted (to ease equality checks)
# If either of the Hashes object is entirely empty (i.e. no hash
# specified at all), all hashes from the other object are allowed.
# Otherwise only hashes that are present in both objects are allowed.
# Pass our favorite hash in to generate a "gotten hash". With the
# empty list, it will never match, so an error will always raise.
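The empty-vs-intersection rule for combining hash sets can be sketched as follows (a sketch of `Hashes.__and__`, using plain dicts):

```python
def intersect_hashes(a, b):
    """Combine two {algorithm: [hexdigests]} mappings: if either is
    empty, everything from the other is allowed; otherwise only
    digests present in both survive."""
    if not a:
        return b
    if not b:
        return a
    new = {}
    for alg, digests in a.items():
        if alg in b:
            # Values are kept sorted to ease equality checks.
            shared = sorted(d for d in digests if d in b[alg])
            if shared:
                new[alg] = shared
    return new

print(intersect_hashes({"sha256": ["aa", "bb"]}, {"sha256": ["bb", "cc"]}))
# {'sha256': ['bb']}
```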
# pypa/virtualenv case
# Although PEP 405 does not specify, the built-in venv module always
# writes with UTF-8. (pypa/pip#8717)
# avoids trailing newlines
# We're not in a "sane" venv, so assume there is no system
# site-packages access (since that's PEP 405's default state).
# PEP 405 compliance needs to be checked first since virtualenv >=20 would
# return True for both checks, but is only able to use the PEP 405 config.
# According to RFC 8089, same as empty authority.
# If we have a UNC path, prepend UNC share notation.
# On Windows, urlsplit parses the path as something like "/C:/Users/foo".
# This creates issues for path-related functions like io.open(), so we try
# to detect and strip the leading slash.
# Not UNC.
# Leading slash to strip.
# Drive letter.
# Colon + end of string, or colon + absolute path.
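The detect-and-strip logic for the Windows drive-letter case can be sketched with a regex (a sketch; pip's actual pattern may differ):

```python
import re

def strip_leading_slash(path):
    """urlsplit on Windows yields paths like '/C:/Users/foo'; detect a
    leading slash followed by a drive letter and strip it."""
    # Leading slash, drive letter, then colon + end of string or
    # colon + absolute path.
    match = re.match(r"^/([a-zA-Z]:)($|/.*)", path)
    if match:
        return match.group(1) + match.group(2)
    return path

print(strip_leading_slash("/C:/Users/foo"))  # C:/Users/foo
print(strip_leading_slash("/usr/lib"))       # /usr/lib
```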
# os.confstr is quite a bit faster than ctypes.DLL. It's also less likely
# to be broken or missing. This strategy is used in the standard library
# platform module:
# https://github.com/python/cpython/blob/fcf1d003bf4f0100c9d0921ff3d70e1127ca1b71/Lib/platform.py#L175-L183
# os.confstr("CS_GNU_LIBC_VERSION") returns a string like "glibc 2.17":
# os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)...
# ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen
# manpage says, "If filename is NULL, then the returned handle is for the
# main program". This way we can let the linker do the work to figure out
# which libc our process is actually using.
# We must also handle the special case where the executable is not a
# dynamically linked executable. This can occur when using musl libc,
# for example. In this situation, dlopen() will error, leading to an
# OSError. Interestingly, at least in the case of musl, there is no
# errno set on the OSError. The single string argument used to construct
# OSError comes from libc itself and is therefore not portable to
# hard code here. In any case, failure to call dlopen() means we
# can't proceed, so we bail on our attempt.
# Symbol doesn't exist -> therefore, we are not linked to
# glibc.
# Call gnu_get_libc_version, which returns a string like "2.5"
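The two strategies described here (`os.confstr` first, then `dlopen(NULL)` via ctypes) combine into a sketch like the following, modeled on pip's glibc detection:

```python
import ctypes
import os

def glibc_version_string():
    """Return the glibc version (e.g. '2.17'), or None on non-glibc
    systems."""
    try:
        # On glibc systems this returns a string like "glibc 2.17".
        version = os.confstr("CS_GNU_LIBC_VERSION")
        if version:
            return version.split()[-1]
    except (AttributeError, OSError, ValueError):
        # os.confstr() or CS_GNU_LIBC_VERSION not available.
        pass
    try:
        # dlopen(NULL): a handle for the main program, i.e. whatever
        # libc this process is actually linked against.
        process_namespace = ctypes.CDLL(None)
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except (OSError, AttributeError):
        # Not dynamically linked, or the symbol is absent: not glibc.
        return None
    gnu_get_libc_version.restype = ctypes.c_char_p
    version_str = gnu_get_libc_version()
    if not isinstance(version_str, str):
        version_str = version_str.decode("ascii")
    return version_str
```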
# py2 / py3 compatibility:
# platform.libc_ver regularly returns completely nonsensical glibc
# versions. E.g. on my computer, platform says:
# But the truth is:
# This is unfortunate, because it means that the linehaul data on libc
# versions that was generated by pip 8.1.2 and earlier is useless and
# misleading. Solution: instead of using platform, use our code that actually
# works.
# Check for list instead of CommandArgs since CommandArgs is
# only known during type-checking.
# Otherwise, arg is str or HiddenText.
# For HiddenText arguments, display the redacted form by calling str().
# Also, we don't apply str() to arguments that aren't HiddenText since
# this can trigger a UnicodeDecodeError in Python 2 if the argument
# has type unicode and includes a non-ascii character.  (The type
# checker doesn't ensure the annotations are correct in all cases.)
# Most places in pip use show_stdout=False. What this means is--
# - We connect the child's output (combined stderr and stdout) to a
#   single pipe, which we read.
# - We log this output to stderr at DEBUG level as it is received.
# - If DEBUG logging isn't enabled (e.g. if --verbose logging wasn't
#   requested), then we show a spinner so the user can still see the
#   subprocess is in progress.
# - If the subprocess exits with an error, we log the output to stderr
#   at ERROR level if it hasn't already been displayed to the console.
# If show_stdout=True, then the above is still done, but with DEBUG
# replaced by INFO.
# Then log the subprocess output at INFO level.
# Then log the subprocess output using VERBOSE.  This also ensures
# it will be logged to the log file (aka user_log), if enabled.
# Whether the subprocess will be visible in the console.
# Only use the spinner if we're not showing the subprocess output
# and we have a spinner.
# Convert HiddenText objects to the underlying str.
# In this mode, stdout and stderr are in the same pipe.
# Show the line immediately.
# Update the spinner.
# In this mode, stdout and stderr are in different pipes.
# We must use communicate() which is the only safe way to read both.
# log line by line to preserve pip log indenting
# Use ~/Application Support/pip, if the directory exists.
# Use a Linux-like ~/.config/pip, by default.
# for the discussion regarding site_config_dir locations
# see <https://github.com/pypa/pip/issues/1733>
# Unix-y system. Look in /etc as well.
# noqa: F401  # ignore unused
# AIX and Jython
# WARNING: time of check vulnerability, but best we can do w/o NOFOLLOW
# older versions of Jython don't have `os.fstat`
# raise OSError for parity with os.O_NOFOLLOW above
# The importlib.resources.open_text function was deprecated in 3.11 with suggested
# replacement we use below.
# packages in the stdlib that may have installation metadata, but should not be
# considered 'installed'.  this theoretically could be determined based on
# dist.location (py27:`sysconfig.get_paths()['stdlib']`,
# py26:sysconfig.get_config_vars('LIBDEST')), but fear platform variation may
# make this ineffective, so hard-coding
# windows detection, covers cpython and ironpython
# Try to use pip[X[.Y]] names, if those executables for this environment are
# the first on PATH with that name.
# Use the `-m` invocation, if there's no "nice" invocation.
# Try to use the basename, if it's the first executable.
# Virtual environments often symlink to their parent Python binaries, but we don't
# want to treat the Python binaries as equivalent when the environment's Python is
# not on PATH (not activated). Thus, we don't follow symlinks.
# Use the full executable name, because we couldn't find something simpler.
# No need to canonicalize - the candidate did this
# Convert comma-separated specifiers into "A, B, ..., F and G"
# This makes the specifier a bit more "human readable", without
# risking a change in meaning. (Hopefully! Not all edge cases have
# been checked)
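The "A, B, ..., F and G" conversion can be sketched as (function name hypothetical):

```python
def humanize_specifiers(specifiers):
    """Turn 'A,B,C' into 'A, B and C' without changing meaning."""
    parts = [s.strip() for s in specifiers.split(",") if s.strip()]
    if len(parts) <= 1:
        return specifiers.strip()
    return ", ".join(parts[:-1]) + " and " + parts[-1]

print(humanize_specifiers(">=1.0,!=1.5,<2.0"))  # >=1.0, !=1.5 and <2.0
```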
# We can safely always allow prereleases here since PackageFinder
# already implements the prerelease logic, and would have filtered out
# prerelease candidates if the user does not expect them.
# for faster __eq__
# Reject if there are any mismatched URL constraints on this package.
# process candidates with extras last to ensure their base equivalent is
# already in the req_set if appropriate.
# Python's sort is stable so using a binary key function keeps relative order
# within both subsets.
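The binary-key trick works because Python's sort is stable; a minimal illustration (package names invented):

```python
# Stable sort with a boolean key: False (no extras) sorts before True,
# and the original relative order within each subset is preserved.
reqs = ["pkg-a", "pkg-b[extra]", "pkg-c", "pkg-d[extra]"]
ordered = sorted(reqs, key=lambda r: "[" in r)
print(ordered)  # ['pkg-a', 'pkg-c', 'pkg-b[extra]', 'pkg-d[extra]']
```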
# extend existing req's extras
# Check if there is already an installation under the same name,
# and set a flag for later stages to uninstall it, if needed.
# There is no existing installation -- nothing to uninstall.
# The --force-reinstall flag is set -- reinstall.
# The installation is different in version -- reinstall.
# The incoming distribution is editable, or different in
# editable-ness to installation -- reinstall.
# The incoming distribution is under file://
# is a local wheel -- do nothing.
# is a local sdist or path -- reinstall
# The reason can contain non-ASCII characters, Unicode
# is required for Python 2.
# Nothing is left to install, so we do not need an order.
# We hit a cycle, so we'll break it here.
# The walk is exponential and for pathologically connected graphs (which
# are the ones most likely to contain cycles in the first place) it can
# take until the heat-death of the universe. To counter this we limit
# the number of attempts to visit (i.e. traverse through) any given
# node. We choose a value here which gives decent enough coverage for
# fairly well behaved graphs, and still limits the walk complexity to be
# linear in nature.
# Time to visit the children!
# Simplify the graph, pruning leaves that have no dependencies. This is
# needed for large graphs (say over 200 packages) because the `visit`
# function is slower for large/densely connected graphs, taking minutes.
# See https://github.com/pypa/pip/issues/10557
# We repeat the pruning step until we have no more leaves to remove.
# This means we have at least one child
# No child.
# We are done simplifying.
# Calculate the weight for the leaves.
# Remove the leaves from the graph, making it simpler.
# Visit the remaining graph, this will only have nodes to handle if the
# graph had a cycle in it, which the pruning step above could not handle.
# `None` is guaranteed to be the root node by resolvelib.
# Sanity check: all requirement keys should be in the weights,
# and no other keys should be in the weights.
# Now give back all the weights, choosing the largest ones from what we
# accumulated.
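The leaf-pruning loop described above can be illustrated with a simplified sketch (this is not pip's exact weighting, which is depth-based; it only shows the repeated strip-the-leaves idea):

```python
def assign_weights(graph):
    """Repeatedly remove nodes with no outgoing edges (leaves),
    recording the round in which each node was removed."""
    graph = {node: set(deps) for node, deps in graph.items()}
    weights = {}
    round_no = 0
    while graph:
        leaves = {node for node, deps in graph.items() if not deps}
        if not leaves:
            break  # only cycles remain; pruning cannot help further
        for leaf in leaves:
            weights[leaf] = round_no
            del graph[leaf]
        for deps in graph.values():
            deps -= leaves
        round_no += 1
    return weights

print(assign_weights({"a": {"b"}, "b": {"c"}, "c": set()}))
# {'c': 0, 'b': 1, 'a': 2}
```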
# Avoid conflicting with the PyPI package "Python".
# check dependencies are valid
# TODO performance: this means we iterate the dependencies at least twice,
# we may want to cache parsed Requires-Dist
# Provide HashError the underlying ireq that caused it. This
# provides context for the resulting error message to show the
# offending line to the user.
# The output has been presented already, so don't duplicate it.
# Emit the Requires-Python requirement first to fail fast on
# unsupported candidates and avoid pointless downloads/preparation.
# Version may not be present for PEP 508 direct URLs
# Legacy cache entry that does not have origin.json.
# download_info may miss the archive_info.hashes field.
# This is just logging some messages, so we can do it eagerly.
# The returned dist would be exactly the same as self.dist because we
# set satisfied_by in _make_install_req_from_dist.
# TODO: Supply reason based on force_reinstall and upgrade_strategy.
# Add a dependency on the exact base
# (See note 2b in the class docstring)
# The user may have specified extras that the candidate doesn't
# support. We ignore any unsupported extras here.
# We don't return anything here, because we always
# depend on the base candidate, and we'll get the
# install requirement from that.
# We don't need to implement __eq__() and __ne__() since there is always
# only one RequiresPythonCandidate in a resolution, i.e. the host Python.
# The built-in object.__eq__() and object.__ne__() do exactly what we want.
# Inspired by Factory.get_installation_error
# TODO: Check already installed candidate, and use it if the link and
# editable flag match.
# We already tried this candidate before, and it does not build.
# Don't bother trying again.
# The InstallRequirement implementation requires us to give it a
# "template". Here we just choose the first requirement to represent
# all of them.
# Hopefully the Project model can correct this mismatch in the future.
# If --force-reinstall is set, we want the version from the index
# instead, so we "pretend" there is nothing installed.
# Don't use the installed distribution if its version
# does not fit the current dependency graph.
# The candidate is a known incompatibility. Don't use it.
# PEP 592: Yanked releases are ignored unless the specifier
# explicitly pins a version (via '==' or '===') that can be
# solely satisfied by a yanked release.
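The PEP 592 pin check can be sketched as follows (a sketch; `specifiers` here is a list of `(operator, version)` pairs, a simplification of the real SpecifierSet):

```python
def is_pinned(specifiers):
    """True when there is at least one clause and every clause pins an
    exact version via '==' or '===', so a yanked release may be used."""
    ops = [op for op, _version in specifiers]
    return bool(ops) and all(op in ("==", "===") for op in ops)

print(is_pinned([("==", "1.4.2")]))            # True
print(is_pinned([(">=", "1.0"), ("<", "2")]))  # False
```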
# PackageFinder returns earlier versions first, so we reverse.
# Not explicit.
# We've stripped extras from the identifier, and should always
# get a BaseCandidate here, unless there's a bug elsewhere.
# Collect basic lookup information from the requirements.
# If the current identifier contains extras, add requires and explicit
# candidates from entries from extra-less identifier.
# Add explicit candidates from constraints. We only do this if there are
# known ireqs, which represent requirements not already explicit. If
# there are no ireqs, we're constraining already-explicit requirements,
# which is handled later when we return the explicit candidates.
# If we're constrained to install a wheel incompatible with the
# target architecture, no candidates will ever be valid.
# Since we cache all the candidates, incompatibility identification
# can be made quicker by comparing only the id() values.
# If none of the requirements want an explicit candidate, we can ask
# the finder for candidates.
# Always make the link candidate for the base requirement to make it
# available to `find_candidates` for explicit candidate lookup for any
# set of extras.
# The extras are required separately via a second requirement.
# There's no way we can satisfy a URL requirement if the underlying
# candidate fails to build. An unnamed URL must be user-supplied, so
# we fail eagerly. If the URL is named, an unsatisfiable requirement
# can make the resolver do the right thing, either backtrack (and
# maybe find some other requirement that's buildable) or raise a
# ResolutionImpossible eventually.
# require the base from the link
# require the extras on top of the base candidate
# Ensure we only accept valid constraints
# Put requirements with extras at the end of the root requires. This does not
# affect resolvelib's picking preference but it does affect its initial criteria
# population: by putting extras at the end we enable the candidate finder to
# present resolvelib with a smaller set of candidates, already
# taking into account any non-transient constraints on the associated base. This
# means resolvelib will have fewer candidates to visit and reject.
# Python's list sort is stable, meaning relative order is kept for objects with
# the same key.
# Don't bother creating a dependency for an empty Requires-Python.
# TODO: Are there more cases this needs to return True? Editable?
# Not installed, no uninstallation required.
# We're installing into global site. The current installation must
# be uninstalled, no matter whether it's in global or user site, because
# the user site installation has precedence over global.
# We're installing into user site. Remove the user site installation.
# We're installing into user site, but the installed incompatible
# package is in global site. We can't uninstall that, and would let
# the new user installation "shadow" it. But shadowing won't work
# in virtual environments, so we error out.
# Saying "version X is yanked" isn't entirely accurate.
# https://github.com/pypa/pip/issues/11745#issuecomment-1402805842
# If one of the things we can't solve is "we need Python X.Y",
# that is what we report.
# The comprehension above makes sure all Requirement instances are
# RequiresPythonRequirement, so let's cast for convenience.
# Otherwise, we have a set of causes which can't all be satisfied
# at once.
# The simplest case is when we have *one* cause that can't be
# satisfied. We just report that case.
# OK, we now have a list of requirements that can't all be
# satisfied at once.
# A couple of formatting helpers
# This is a root requirement, so we can report it directly
# Mark version as found to avoid trying other candidates with the same
# version, since they most likely have invalid metadata as well.
# If the installed candidate is better, yield it first.
# If the installed candidate is older than all other candidates.
# Implemented to satisfy the ABC check. This is not needed by the
# resolver, and should not be used by the provider either (for
# performance reasons).
# Notes on the relationship between the provider, the factory, and the
# candidate and requirement classes.
# The provider is a direct implementation of the resolvelib class. Its role
# is to deliver the API that resolvelib expects.
# Rather than work with completely abstract "requirement" and "candidate"
# concepts as resolvelib does, pip has concrete classes implementing these two
# ideas. The API of Requirement and Candidate objects are defined in the base
# classes, but essentially map fairly directly to the equivalent provider
# methods. In particular, `find_matches` and `is_satisfied_by` are
# requirement methods, and `get_dependencies` is a candidate method.
# The factory is the interface to pip's internal mechanisms. It is stateless,
# and is created by the resolver and held as a property of the provider. It is
# responsible for creating Requirement and Candidate objects, and provides
# services to those objects (access to pip's finder and preparer).
# HACK: Theoretically we should check whether this identifier is a valid
# "NAME[EXTRAS]" format, and parse out the name part with packaging or
# some regular expression. But since pip's resolver only spits out three
# kinds of identifiers: normalized PEP 503 names, normalized names plus
# extras, and Requires-Python, we can cheat a bit here.
# Requires-Python has only one candidate and the check is basically
# free, so we always do it first to avoid needless work if it fails.
# This skips calling get_preference() for all other identifiers.
# Check if this identifier is a backtrack cause
# There is no information for this identifier, so there's no known
# candidates.
# Go through the information and for each requirement,
# check if it's explicit (e.g., a direct link) and get the
# InstallRequirement (the second element) from get_candidate_lookup()
# iter_dependencies() can perform nontrivial work so delay until needed.
# This idiosyncratically converts the SpecifierSet to str and let
# check_requires_python then parse it again into SpecifierSet. But this
# is the legacy resolver so I'm just not going to bother refactoring.
# Actually prepare the files, and collect any exceptions. Most hash
# exceptions cannot be checked ahead of time, because
# _populate_link() needs to be called before we can make decisions
# based on link type.
# If the markers do not match, ignore this requirement.
# If the wheel is not supported, raise an error.
# Should check this after filtering out based on environment markers to
# allow specifying different wheels based on the environment/OS, in a
# single requirements file.
# This next bit is really a sanity check.
# Unnamed requirements are scanned again and the requirement won't be
# added as a dependency until after scanning.
# When no existing requirement exists, add the requirement as a
# dependency and it will be scanned again after.
# We'd want to rescan this requirement later
# Assume there's no need to scan, and that we've already
# encountered this for scanning.
# If we're now installing a constraint, mark the existing
# object for real installation.
# If we're now installing a user supplied requirement,
# mark the existing object as such.
# Return the existing requirement for addition to the parent and
# scanning again.
# Don't uninstall the conflict if doing a user install and the
# conflict is not a user install.
# Check for the possibility of an upgrade.  For link-based
# requirements we have to pull the tree down and inspect to assess
# the version #, so it's handled way down.
# Then the best version is installed.
# No distribution found, so we squash the error.  It will
# be raised later when we re-try later to do the install.
# Why don't we just raise here?
# Log a warning per PEP 592 if necessary before returning.
# Mark this as a unicode string to prevent
# "UnicodeEncodeError: 'ascii' codec can't encode character"
# in Python 2 when the reason contains non-ascii characters.
# satisfied_by is only evaluated by calling _check_skip_installed,
# so it must be None here.
# We eagerly populate the link, since that's our "legacy" behavior.
# NOTE
# The following portion is for determining if a certain package is
# going to be re-installed/upgraded or not and reporting to the user.
# This should probably get cleaned up in a future refactor.
# req.req is only avail after unpack for URL
# pkgs repeat check_if_exists to uninstall-on-upgrade
# (#14)
# Tell user what we are doing for this requirement:
# obtain (editable), skipping, processing (local url), collecting
# (remote url or package name)
# Parse and return dependencies
# This will raise UnsupportedPythonVersion if the given Python
# version isn't compatible with the distribution's Requires-Python.
# This idiosyncratically converts the Requirement to str and let
# make_install_req then parse it again into Requirement. But this is
# the legacy resolver so I'm just not going to bother refactoring.
# We add req_to_install before its dependencies, so that we
# can refer to it when adding dependencies.
# 'unnamed' requirements will get added here
# 'unnamed' requirements can only come from being directly
# provided by the user.
# The current implementation, which we may change at any point,
# installs the user specified things in the order given, except when
# dependencies must come earlier to achieve topological order.
# Downstream redistributors which have debundled our dependencies should also
# patch this value to be true. This will trigger the additional patching
# to cause things like "six" to be available as pip.
# By default, look in this directory for a bunch of .whl files which we will
# add to the beginning of sys.path before attempting to import anything. This
# is done to support downstream re-distributors like Debian and Fedora who
# wish to create their own Wheels for our dependencies to aid in debundling.
# Define a small helper function to alias our vendored modules to the real ones
# if the vendored ones do not exist. The idea for this was taken from
# https://github.com/kennethreitz/requests/pull/2567.
# We can just silently allow import failures to pass here. If we
# got to this point it means that ``import pip._vendor.whatever``
# failed and so did ``import whatever``. Since we're importing this
# upfront in an attempt to alias imports, not erroring here will
# just mean we get a regular import error whenever pip *actually*
# tries to import one of these modules to use it, which actually
# gives us a better error message than we would have otherwise
# gotten.
# If we're operating in a debundled setup, then we want to go ahead and trigger
# the aliasing of our vendored libraries as well as looking for wheels to add
# to our sys.path. This will cause all of this code to be a no-op typically
# however downstream redistributors can enable it in a consistent way across
# all platforms.
# Actually look inside of WHEEL_DIR to find .whl files and add them to the
# front of our sys.path.
# Actually alias all of our vendored dependencies.
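The aliasing helper can be sketched like this, modeled on pip's `_vendor/__init__.py`; the default prefix and the example namespace below are illustrative:

```python
import importlib
import sys

def vendored(modulename, vendor_prefix="pip._vendor"):
    """If the vendored copy can't be imported, alias the real
    top-level module into the vendored namespace."""
    vendored_name = f"{vendor_prefix}.{modulename}"
    try:
        importlib.import_module(vendored_name)
    except ImportError:
        try:
            sys.modules[vendored_name] = importlib.import_module(modulename)
        except ImportError:
            # Both imports failed; silently pass so the user gets a
            # regular ImportError later, at the point of actual use.
            pass

# Example: alias the stdlib's json under a hypothetical vendor namespace.
vendored("json", vendor_prefix="myapp_demo._vendor")
```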
# Bidi rules should only be applied if string contains RTL characters
# String likely comes from a newer version of Unicode
# Bidi rule 1
# Bidi rule 2
# Bidi rule 3
# Bidi rule 4
# Bidi rule 5
# Bidi rule 6
# This file is automatically generated by tools/idna-data
# we could be immediately ahead of a tuple (start, end)
# with start < int_ <= end
# or we could be immediately behind a tuple (int_, end)
# vim: set fileencoding=utf-8 :
# type: Tuple[Union[Tuple[int, str], Tuple[int, str, str]], ...]
# Copyright 2015-2021 Nir Cohen
# Python 3.7
#: Translation table for normalizing the "ID" attribute defined in os-release
#: files, for use by the :func:`distro.id` method.
#:
#: * Key: Value as defined in the os-release file, translated to lower case,
#:   with blanks translated to underscores.
#: * Value: Normalized value.
# Oracle Linux
# Newer versions of OpenSuSE report as opensuse-leap
#: Translation table for normalizing the "Distributor ID" attribute returned by
#: the lsb_release command, for use by the :func:`distro.id` method.
#: * Key: Value as returned by the lsb_release command, translated to lower
#:   case, with blanks translated to underscores.
# Oracle Enterprise Linux 4
# Oracle Linux 5
# RHEL 6, 7 Workstation
# RHEL 6, 7 Server
# RHEL 6 ComputeNode
#: Translation table for normalizing the distro ID derived from the file name
#: of distro release files, for use by the :func:`distro.id` method.
#: * Key: Value as derived from the file name of a distro release file,
#:   translated to lower case, with blanks translated to underscores.
# RHEL 6.x, 7.x
# Pattern for content of distro release file (reversed)
# Pattern for base file name of distro release file
# Base file names to be looked up if _UNIXCONFDIR is not readable.
# Base file names to be ignored when searching for distro release file
# Python < 3.8
# NOTE: The idea is to respect order **and** have it set
# updated later
# On AIX platforms, prefer oslevel command output.
# On Debian-like, add debian_version file content to candidates list.
# This algorithm uses the last version in priority order that has
# the best precision. If the versions are not in conflict, that
# does not matter; otherwise, using the last one instead of the
# first one might be considered a surprise.
# Handle os_release specially since distros might purposefully set
# this to empty string to have no codename
# At this point, all shell-like parsing has been done (i.e.
# comments processed, quotes and backslash escape sequences
# processed, multi-line values assembled, trailing newlines
# stripped, etc.), so the tokens are now either:
# * variable assignments: var=value
# * commands or their arguments (not allowed in os-release)
# Ignore any tokens that are not variable assignments
# extract release codename (if any) from version attribute
# os-release added a version_codename field.  Use that in
# preference to anything else. Note that some distros purposefully
# do not have code names.  They should be setting
# version_codename=""
# Same as above but a non-standard field name used on older Ubuntus
# Command not found or lsb_release returned error
# Ignore lines without colon.
# This is to prevent the Linux kernel version from
# appearing as the 'best' version on otherwise
# identifiable distributions.
# If it was specified, we use it and parse what we can, even if
# its file name or content does not match the expected pattern.
# The file name pattern for user-specified distro release files
# is somewhat more tolerant (compared to when searching for the
# file), because we want to use what was specified as best as
# possible.
# We sort for repeatability in cases where there are multiple
# distro specific files; e.g. CentOS, Oracle, Enterprise all
# containing `redhat-release` on top of their own.
# This may occur when /etc is not readable but we can't be
# sure about the *-release files. Check common entries of
# /etc for information. If they turn out to not be there the
# error is handled in `_parse_distro_release_file()`.
# The name is always present if the pattern matches.
# the loop didn't "break": no candidate.
# CloudLinux < 7: manually enrich info with proper id.
# Only parse the first line. For instance, on SLES there
# are multiple lines. We don't want them...
# Ignore not being able to read a specific, seemingly version
# related file.
# See https://github.com/python-distro/distro/issues/162
# regexp ensures non-None
# Copyright (C) 2012-2023 The Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.
# Requirement parsing code as per PEP 508
# either identifier, or literal string
# either a string chunk, or oq, or q to terminate
# skip past closing quote
# it's a URI
# Some packages have a trailing comma which would break things
# See issue #148
# As a special diversion from PEP 508, allow a version number
# a.b.c in parentheses as a synonym for ~= a.b.c (because this
# is allowed in earlier PEPs)
# normalizes and returns a lstripped-/-separated path
# remove the entry if it was here
# virtualenv venvs
# PEP 405 venvs
# The __PYVENV_LAUNCHER__ dance is apparently no longer needed, as
# changes to the stub launcher mean that sys.executable always points
# to the stub on OS X
# Avoid normcasing: see issue #143
# result = os.path.normcase(sys.executable)
# needs to be a text stream
# Try to load as JSON, falling back on legacy format
# entry.dist = self
# TODO check k, v for valid values
# for attr in ('__name__', '__module__', '__doc__'):
# obj.__dict__[self.func.__name__] = value = self.func(obj)
# Set the executable bits (owner, group, and world) on
# all the files specified.
# raise error
# dirs should all be empty now, except perhaps for
# __pycache__ subdirs
# reverse so that subdirs appear before their parents
# should fail if non-empty
# Assume posix, or old Windows
# we use 'isdir' instead of 'exists', because we want to
# fail if there's a file with that name
# Allow spaces in name because of legacy dists like "Twisted Core"
# Extended metadata functionality
# urlopen might fail if it runs into redirections,
# because of Python issue #13696. Fixed in locators
# using a custom redirect handler.
# data = reader.read().decode('utf-8')
# result = json.loads(data)
# Simple sequencing
# nodes with no preds/succs
# Remove empties
# if a step was already seen,
# move it to the end (so it will appear earlier
# when reversed on return) ... but not for the
# final step, as that would be confusing for
# users
# http://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
# set the depth index for this node to the smallest unused index
# Consider successors
# Successor has not yet been visited
# the successor is in the stack and hence in the current
# strongly connected component (SCC)
# If `node` is a root node, pop the stack and generate an SCC
# storing the result
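The Tarjan walk-through above maps onto a compact recursive sketch (the recursive form is fine for illustration; production code may need an iterative version to avoid recursion limits):

```python
def strongly_connected_components(graph):
    # graph: dict mapping node -> iterable of successor nodes.
    index_counter = [0]
    index, lowlink = {}, {}
    stack, on_stack = [], set()
    result = []

    def strongconnect(node):
        # Set the depth index for this node to the smallest unused index.
        index[node] = lowlink[node] = index_counter[0]
        index_counter[0] += 1
        stack.append(node)
        on_stack.add(node)
        # Consider successors.
        for succ in graph.get(node, ()):
            if succ not in index:
                # Successor has not yet been visited.
                strongconnect(succ)
                lowlink[node] = min(lowlink[node], lowlink[succ])
            elif succ in on_stack:
                # Successor is on the stack, hence in the current SCC.
                lowlink[node] = min(lowlink[node], index[succ])
        # If `node` is a root node, pop the stack and generate an SCC.
        if lowlink[node] == index[node]:
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == node:
                    break
            result.append(component)

    for node in graph:
        if node not in index:
            strongconnect(node)
    return result
```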
# Unarchiving functionality for zip, tar, tgz, tbz, whl
# See Python issue 17153. If the dest path contains Unicode,
# tarfile extraction fails on Python 2.x if a member path name
# contains non-ASCII characters - it leads to an implicit
# bytes -> unicode conversion using ASCII to decode.
# Limit extraction of dangerous items, if this Python
# allows it easily. If not, just trust the input.
# See: https://docs.python.org/3/library/tarfile.html#extraction-filters
# This is only called if the current Python has tarfile filters
# Simple progress bar
# elif duration < 1:
# import pdb; pdb.set_trace()
# Glob functionality
# we support both
# HTTPSConnection which verifies certificates/matches domains
# set this to the path to the certs file (.pem)
# only used if ca_certs is not None
# noinspection PyPropertyAccess
# To prevent against mixing HTTP traffic with HTTPS (examples: A Man-In-The-
# Middle proxy using HTTP listens on port 443, or an index mistakenly serves
# HTML containing a http://xyz link when it should be https://xyz),
# you can use the following handler class, which does not allow HTTP traffic.
# It works by inheriting from HTTPHandler - so build_opener won't add a
# handler for HTTP itself.
# XML-RPC with timeouts
# The above classes only come into play if a timeout
# is specified
# scheme = splittype(uri)  # deprecated as of Python 3.8
# CSV functionality. This is provided because on 2.x, the csv module can't
# handle Unicode. However, we need to deal with Unicode in e.g. RECORD files.
# Python 3 determines encoding from locale. Force 'utf-8'
# file encoding to match other forced utf-8 encoding
# The strs are used because we need native
# str in the csv API (2.x won't take
# Unicode)
# Check for valid identifiers
# https://www.python.org/dev/peps/pep-0503/#normalized-names
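The PEP 503 normalization referenced above is a one-liner: runs of `-`, `_`, and `.` collapse to a single `-` and the result is lowercased (this mirrors `packaging.utils.canonicalize_name`):

```python
import re


def canonicalize_name(name):
    # PEP 503: treat '-', '_', '.' runs as one '-', compare case-insensitively.
    return re.sub(r"[-_.]+", "-", name).lower()
```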
# def _get_pypirc_command():
# """
# Get the distutils command for interacting with PyPI configurations.
# :return: the command.
# from distutils.core import Distribution
# from distutils.config import PyPIRCCommand
# d = Distribution()
# return PyPIRCCommand(d)
# let's get the list of servers
# nothing set, let's try to get the default pypi
# optional params
# work around people having "repository" for the "pypi"
# section of their config set to the HTTP (rather than
# HTTPS) URL
# old format
# get_platform()/get_host_platform() copied from Python 3.10.a0 source, with some minor
# tweaks
# Set for cross builds explicitly
# XXX what about the architecture? NT is Intel or Alpha,
# Mac OS is M68k or PPC, etc.
# Try to distinguish various flavours of Unix
# Convert the OS name to lowercase, remove '/' characters, and translate
# spaces (for "Power Macintosh")
# At least on Linux/Intel, 'machine' is the processor --
# i386, etc.
# XXX what about Alpha, SPARC, etc?
# SunOS 5 == Solaris 2
# We can't use 'platform.architecture()[0]' because of a
# bootstrap problem. We use a dict to get an error
# if something suspicious happens.
# fall through to standard osname-release-machine representation
# Copyright (C) 2012-2024 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# Copyright (C) 2013-2017 Vinay Sajip.
# created when needed
# Use native string to avoid issues on 2.x: see Python #20140.
# Cache invalidation is a hard problem :-)
# write the bytes of the resource to the cache location
# Backwards compatibility
# Issue #50: need to preserve type of path on Python 2.x
# like os.path._get_sep
# should only happen on 2.x
# PyPy doesn't have a _files attr on zipimporter, and you can't set one
# only immediate children
# In Python 3.6, _frozen_importlib -> _frozen_importlib_external
# See issue #146
# calls any path hooks, gets importer into cache
# Copyright (C) 2013-2023 Vinay Sajip.
# check if Python is called on the first line with this expression
# Pre-fetch the contents of all executable wrapper stubs.
# This is to address https://github.com/pypa/pip/issues/12666.
# When updating pip, we rename the old pip in place before installing the
# new version. If we try to fetch a wrapper *after* that rename, the finder
# machinery will be confused as the package is no longer available at the
# location where it was imported from. So we load everything into memory in
# advance.
# Issue 31: don't hardcode an absolute package name, but
# determine it relative to the current package
# make sure we quote only the executable in case of env
# for example /usr/bin/env "/dir with spaces/bin/jython"
# instead of "/usr/bin/env /dir with spaces/bin/jython"
# otherwise whole
# Keep the old name around (for now), as there is at least one project using it!
# for shebangs
# It only makes sense to set mode bits on POSIX.
# Workaround for Jython is not needed on Linux systems.
# Use wrapper exe for Jython on Windows
# In a cross-compiling environment, the shebang will likely be a
# script; this *must* be invoked with the "safe" version of the
# shebang, or else using os.exec() to run the entry script will
# fail, raising "OSError 8 [Errno 8] Exec format error".
# Add 3 for '#!' prefix and newline suffix.
# assume this will be taken care of
# for Python builds from source on Windows, no Python executables with
# a version suffix are created, so we use python.exe
# Normalise case for Windows - COMMENTED OUT
# executable = os.path.normcase(executable)
# N.B. The normalising operation above has been commented out: See
# issue #124. Although paths in Windows are generally case-insensitive,
# they aren't always. For example, a path containing a ẞ (which is a
# LATIN CAPITAL LETTER SHARP S - U+1E9E) is normcased to ß (which is a
# LATIN SMALL LETTER SHARP S - U+00DF). The two are not considered by
# Windows as equivalent in path names.
# If the user didn't specify an executable, it may be necessary to
# cater for executable paths with spaces (not uncommon on Windows)
# Issue #51: don't use fsencode, since we later try to
# check that the shebang is decodable using utf-8.
# in case of IronPython, play safe and enable frames support
# The Python parser reads a script as UTF-8 until it encounters
# a #coding:xxx cookie. Since the shebang has to be the first
# line of the file, the cookie cannot come before it, so the
# shebang has to be decodable from UTF-8.
# If the script is encoded to a custom encoding (use a
# #coding:xxx cookie), the shebang has to be decodable from
# the script encoding too.
# Failed writing an executable - it might be in use.
# Not allowed to fail here
# nor here
# still in use - ignore error
# Always open the file, but ignore failures in dry-run mode --
# that way, we'll get accurate feedback if we can read the
# script.
# Executable launcher support.
# Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/
# Public API follows
# Leaving this around for now, in case it needs resurrecting in some way
# _userprog = None
# def splituser(host):
# """splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'."""
# global _userprog
# if _userprog is None:
# import re
# _userprog = re.compile('^(.*)@(.*)$')
# match = _userprog.match(host)
# if match: return match.group(1, 2)
# return None, host
# Issue #17980: avoid denials of service by refusing more
# than one wildcard per fragment.  A survey of established
# policy among SSL implementations showed it to be a
# reasonable choice.
# RFC 6125, section 6.4.3, subitem 1.
# The client SHOULD NOT attempt to match a presented identifier in which
# the wildcard character comprises a label other than the left-most label.
# When '*' is a fragment by itself, it matches a non-empty dotless
# fragment.
# RFC 6125, section 6.4.3, subitem 3.
# The client SHOULD NOT attempt to match a presented identifier
# where the wildcard character is embedded within an A-label or
# U-label of an internationalized domain name.
# Otherwise, '*' matches any dotless string, e.g. www*
# add the remaining fragments, ignore any wildcards
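The RFC 6125 rules above translate into a regex-building helper; this sketch mirrors the shape of the old `ssl.match_hostname` internals (the name `dnsname_match` is from that private helper, not a public API):

```python
import re


def dnsname_match(dn, hostname, max_wildcards=1):
    # Match a certificate DNS name against a hostname per RFC 6125 6.4.3.
    if not dn:
        return False
    parts = dn.split('.')
    leftmost, remainder = parts[0], parts[1:]
    wildcards = leftmost.count('*')
    if wildcards > max_wildcards:
        # Refuse more than one wildcard per fragment (DoS avoidance).
        raise ValueError("too many wildcards in certificate DNS name: " + dn)
    if not wildcards:
        return dn.lower() == hostname.lower()
    pats = []
    if leftmost == '*':
        # '*' by itself matches a single non-empty dotless fragment.
        pats.append('[^.]+')
    elif leftmost.startswith('xn--') or hostname.startswith('xn--'):
        # No wildcard inside an A-label of an IDN; match literally.
        pats.append(re.escape(leftmost))
    else:
        # Otherwise '*' matches any dotless string, e.g. www*
        pats.append(re.escape(leftmost).replace(r'\*', '[^.]*'))
    # Add the remaining fragments, ignoring any wildcards.
    for frag in remainder:
        pats.append(re.escape(frag))
    pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE)
    return pat.match(hostname) is not None
```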
# The subject is only checked when there is no dNSName entry
# in subjectAltName
# XXX according to RFC 2818, the most specific Common Name
# must be used.
# Implementation from Python 3.3
# If we're given a path with a directory part, look it up directly rather
# than referring to PATH directories. This includes checking relative to the
# current directory, e.g. ./script
# The current directory takes precedence on Windows.
# See if the given file matches any of the expected path extensions.
# This will allow us to short circuit when given "python.exe".
# If it does match, only test that one, otherwise we have to try
# others.
# ZipFile is a context manager in 2.7, but not in 2.6
# return None, so if an exception occurred, it will propagate
# Issue #99: on some systems (e.g. containerised),
# sys.getfilesystemencoding() returns None, and we need a real value,
# so fall back to utf-8. From the CPython 2.7 docs relating to Unix and
# sys.getfilesystemencoding(): the return value is "the user’s preference
# according to the result of nl_langinfo(CODESET), or None if the
# nl_langinfo(CODESET) failed."
# For converting & <-> &amp; etc.
# Python >= 3.4
# {{{ http://code.activestate.com/recipes/576693/ (r9)
# Backport of OrderedDict() class that runs on Python 2.4, 2.5, 2.6, 2.7 and pypy.
# Passes Python2.7's test suite and incorporates all the latest updates.
# Big-O running times for all methods are the same as for regular dictionaries.
# The internal self.__map dictionary maps keys to links in a doubly linked list.
# Each link is stored as a list of length three:  [PREV, NEXT, KEY].
# sentinel node
# Setting a new item creates a new link which goes at the end of the linked
# list, and the inherited dictionary is updated with the new key/value pair.
# Deleting an existing item uses self.__map to find the link which is
# then removed by updating the links in the predecessor and successor nodes.
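A minimal sketch of the linked-list technique those comments describe (this is not the real backport; only `__setitem__`, `__delitem__`, and `__iter__` are shown, with each link a `[PREV, NEXT, KEY]` list and a self-referential sentinel root):

```python
class LinkedOrderedDict(dict):
    def __init__(self):
        dict.__init__(self)
        root = []
        root[:] = [root, root, None]   # sentinel node points at itself
        self.__root = root
        self.__map = {}                # key -> link

    def __setitem__(self, key, value):
        if key not in self:
            # A new link goes at the end of the linked list.
            root = self.__root
            last = root[0]
            last[1] = root[0] = self.__map[key] = [last, root, key]
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        # Unlink by updating the predecessor and successor nodes.
        prev_link, next_link, _ = self.__map.pop(key)
        prev_link[1] = next_link
        next_link[0] = prev_link

    def __iter__(self):
        # Walk the links in insertion order.
        curr = self.__root[1]
        while curr is not self.__root:
            yield curr[2]
            curr = curr[1]
```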
# -- the following methods do not depend on the internal structure --
# Make progressively weaker assumptions about "other"
# let subclasses override update without breaking __init__
# -- the following methods are only used in Python 2.7 --
# The ConvertingXXX classes are wrappers around standard Python containers,
# and they serve to convert any suitable values in the container. The
# conversion converts base dicts, lists and tuples to their wrapped
# equivalents, whereas strings which match a conversion format are converted
# appropriately.
# Each wrapper should have a configurator attribute holding the actual
# configurator to use for conversion.
# If the converted value is different, save for next time
# We might want to use a different one, e.g. importlib
# try as number first (most likely)
# rest should be empty
# type: ignore[no-redef, unused-ignore]
# type: ignore[assignment, unused-ignore]
# a map of group names to parsed data
# a map of group names to their ancestors, used for cycle detection
# a cache of completed resolutions to Requirement lists
# short circuit -- never do the work twice
# packaging.requirements.Requirement parsing ensures that this is a
# valid PEP 508 Dependency Specifier
# raises InvalidRequirement on failure
# unreachable
# Set default logging handler to avoid "No handler found" warnings.
# === NOTE TO REPACKAGERS AND VENDORS ===
# Please delete this block, this logic is only
# for urllib3 being distributed via PyPI.
# See: https://github.com/urllib3/urllib3/issues/2680
# type: ignore # noqa: F401
# This method needs to be in this __init__.py to get the __name__ correct
# even if urllib3 is vendored within another package.
# ... Clean up.
# All warning filters *must* be appended unless you're really certain that they
# shouldn't be: otherwise, it's very hard for users to use most Python
# mechanisms to silence them.
# SecurityWarning's always go off by default.
# SubjectAltNameWarning's should go off once per host
# InsecurePlatformWarning's don't vary between requests, so we keep it default.
# SNIMissingWarnings should go off only once.
# Platform-specific: No threads available
# Re-insert the item, moving it to the end of the eviction line.
# Possibly evict the existing value of 'key'
# If we didn't evict an existing value, we might have to evict the
# least recently used item from the beginning of the container.
# Copy pointers to all values, then wipe the mapping
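The eviction scheme above (urllib3's `RecentlyUsedContainer`) can be sketched with an `OrderedDict`; the class name and reduced API here are illustrative only:

```python
from collections import OrderedDict
from threading import RLock


class RecentlyUsed(object):
    def __init__(self, maxsize=10, dispose_func=None):
        self._maxsize = maxsize
        self._container = OrderedDict()
        self._lock = RLock()
        self._dispose = dispose_func

    def __getitem__(self, key):
        with self._lock:
            # Re-insert the item, moving it to the end of the eviction line.
            item = self._container.pop(key)
            self._container[key] = item
            return item

    def __setitem__(self, key, value):
        evicted = None
        with self._lock:
            # Possibly evict the existing value of 'key'.
            evicted = self._container.get(key)
            self._container[key] = value
            if len(self._container) > self._maxsize:
                # Otherwise evict the least recently used item from the
                # beginning of the container.
                _, evicted = self._container.popitem(last=False)
        # Dispose outside the lock to avoid holding it during callbacks.
        if evicted is not None and self._dispose is not None:
            self._dispose(evicted)
```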
# Python 2
# Only provide the originally cased names
# Using the MutableMapping function directly fails due to the private marker.
# Using ordinary dict.pop would expose the internal structures.
# So let's reinvent the wheel.
# Keep the common case aka no item present as fast as possible
# Backwards compatibility for httplib
# Backwards compatibility for http.cookiejar
# Don't need to convert tuples
# python2.7 does not expose a proper API for exporting multiheaders
# efficiently. This function re-reads raw lines from the message
# object and extracts the multiheaders properly.
# We received a header line that starts with OWS as described
# in RFC-7230 S3.2.4. This indicates a multiline header, but
# there exists no previous header to which we can attach it.
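The RFC 7230 S3.2.4 folding rule above can be sketched like this (the function name and list-of-tuples representation are assumptions; the real code works on raw httplib message lines):

```python
def fold_header_lines(raw_lines):
    # A line starting with optional whitespace (OWS) continues the
    # previous header; anything else starts a new "Name: value" header.
    headers = []
    for line in raw_lines:
        if line and line[0] in " \t":
            if not headers:
                # Continuation with no previous header to attach it to.
                raise ValueError("header continuation without a header")
            key, value = headers[-1]
            headers[-1] = (key, value + " " + line.strip())
        else:
            key, _, value = line.partition(":")
            headers.append((key, value.strip()))
    return headers
```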
# Ignore data after the first error
# Allow trailing garbage acceptable in other gzip clients
# Supports both 'brotlipy' and 'Brotli' packages
# since they share an import name. The top branches
# are for 'brotlipy' and bottom branches for 'Brotli'
# Are we using the chunked-style of transfer encoding?
# Don't incur the penalty of creating a list and then discarding it
# Determine length of response
# If requested, preload the body.
# For backwards-compat with urllib3 0.4 and earlier.
# This Response will fail with an IncompleteRead if it can't be
# received as chunked. This method falls back to attempt reading
# the response before raising an exception.
# RFC 7230 section 3.3.2 specifies multiple content lengths can
# be sent in a single Content-Length header
# (e.g. Content-Length: 42, 42). This line ensures the values
# are all valid ints and that as long as the `set` length is 1,
# all values are the same. Otherwise, the header is invalid.
# Convert status to int for comparison
# In some cases, httplib returns a status of "_UNKNOWN"
# Check for responses that shouldn't include a body
# Note: content-encoding value should be case-insensitive, per RFC 7230
# Section 3.2
# FIXME: Ideally we'd like to include the url in the ReadTimeoutError but
# there is yet no clean way to get at it from this context.
# FIXME: Is there a better way to differentiate between SSLErrors?
# SSL errors related to framing/MAC get wrapped and reraised here
# This includes IncompleteRead.
# If no exception is thrown, we should avoid cleaning up
# unnecessarily.
# If we didn't terminate cleanly, we need to throw away our
# connection.
# The response may not be closed but we're not going to use it
# anymore so close it now to ensure that the connection is
# released back to the pool.
# Closing the response may not actually be sufficient to close
# everything, so if we have a hold of the connection close that
# too.
# If we hold the original response but it's closed now, we should
# return the connection back to the pool.
# Besides `max_chunk_amt` being a maximum chunk size, it
# affects memory overhead of reading a response by this
# method in CPython.
# `c_int_max` equal to 2 GiB - 1 byte is the actual maximum
# chunk size that does not lead to an overflow error, but
# 256 MiB is a compromise.
# to reduce peak memory usage by `max_chunk_amt`.
# StringIO doesn't like amt=None
# Platform-specific: Buggy versions of Python.
# Close the connection when no data is returned
# This is redundant to what httplib/http.client _should_
# already do.  However, versions of python released before
# December 15, 2012 (http://bugs.python.org/issue16298) do
# not properly close the connection in all cases. There is
# no harm in redundantly calling close.
# This is an edge case that httplib failed to cover due
# to concerns of backward compatibility. We're
# addressing it here to make sure IncompleteRead is
# raised during streaming, so all calls with incorrect
# Content-Length are caught.
# Python 2.7
# HTTPResponse objects in Python 3 don't have a .strict attribute
# Backwards-compatibility methods for http.client.HTTPResponse
# Overrides from io.IOBase
# This method is required for `io` module compatibility.
# First, we'll figure out length of a chunk and then
# we'll try to read it from socket.
# Invalid chunked protocol response, abort.
# Toss the CRLF at the end of the chunk.
# amt > self.chunk_left
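The chunk-reading steps above (parse the hex size line, read the chunk, toss the trailing CRLF) can be sketched as a generator over a file-like object; the real code tracks `chunk_left` across partial reads, which this sketch omits:

```python
import io


def read_chunked(fp):
    # fp is positioned at the start of a chunked transfer-encoded body.
    while True:
        line = fp.readline()
        # The chunk header is a hex size, optionally followed by ';ext'.
        chunk_left = int(line.split(b";", 1)[0], 16)
        if chunk_left == 0:
            # A zero-size chunk terminates the body.
            break
        yield fp.read(chunk_left)
        fp.read(2)  # Toss the CRLF at the end of the chunk.
```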
# FIXME: Rewrite this method and make it a class with a better structured logic.
# Don't bother reading the body of a HEAD request.
# If a response is already read and closed
# then return immediately.
# On CPython and PyPy, we should never need to flush the
# decoder. However, on Jython we *might* need to, so
# lets defensively do it anyway.
# Platform-specific: Jython.
# Chunk content ends with \r\n: discard it.
# Some sites may not end with '\r\n'.
# We read everything; close the "file".
# This file is protected via CODEOWNERS
# Abstract
# Python 2:
# encode_rfc2231 accepts an encoded string and returns an ascii-encoded
# string in Python 2 but accepts and returns unicode strings in Python 3
# Replace "\" with "\\".
# All control characters from 0x00 to 0x1F *except* 0x1B.
# For backwards-compatibility.
# All known keyword arguments that could be provided to the pool manager, its
# pools, or the underlying connections. This is used to construct a pool key.
# str
# int or float or Timeout
# int or Retry
# bool
# instance of ssl.SSLContext or urllib3.util.ssl_.SSLContext
# parsed proxy url
# list of (level (int), optname (int), value (int or str)) tuples
# bool or string
#: The namedtuple class used to construct keys for the connection pool.
#: All custom key schemes should include the fields in this key at a minimum.
# Since we mutate the dictionary, make a copy first
# These are both dictionaries and need to be transformed into frozensets
# The socket_options key may be a list and needs to be transformed into a
# tuple.
# Map the kwargs to the names in the namedtuple - this is necessary since
# namedtuples can't have fields starting with '_'.
# Default to ``None`` for keys missing from the context
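The key-building steps above can be sketched with a reduced namedtuple (the real `PoolKey` has many more fields; the names below are illustrative):

```python
from collections import namedtuple

# Hypothetical reduced key; fields can't start with '_' in a namedtuple,
# so they are prefixed with 'key_' instead.
PoolKey = namedtuple("PoolKey", ["key_scheme", "key_host", "key_port",
                                 "key_headers", "key_socket_options"])


def make_pool_key(key_class, request_context):
    # Since we mutate the dictionary, make a copy first.
    context = request_context.copy()
    context["scheme"] = context["scheme"].lower()
    context["host"] = context["host"].lower()
    # Dictionaries must become hashable frozensets of items ...
    if context.get("headers") is not None:
        context["headers"] = frozenset(context["headers"].items())
    # ... and the socket_options list must become a tuple.
    if context.get("socket_options") is not None:
        context["socket_options"] = tuple(context["socket_options"])
    # Map the kwargs to the 'key_'-prefixed namedtuple field names.
    context = {"key_" + k: v for k, v in context.items()}
    # Default to None for keys missing from the context.
    for field in key_class._fields:
        context.setdefault(field, None)
    return key_class(**context)
```

Because every field is hashable, equal contexts hash to the same pool key and reuse one connection pool.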
#: A dictionary that maps a scheme to a callable that creates a pool key.
#: This can be used to alter the way pool keys are constructed, if desired.
#: Each PoolManager makes a copy of this dictionary so they can be configured
#: globally here, or individually on the instance.
# Locally set the pool classes and keys so other PoolManagers can
# override them.
# Return False to re-raise any potential exceptions
# Although the context has everything necessary to create the pool,
# this function has historically only used the scheme, host, and port
# in the positional args. When an API change is acceptable these can
# be removed.
# If the scheme, host, or port doesn't match existing open
# connections, open a new ConnectionPool.
# Make a fresh ConnectionPool of the desired type
# Support relative URLs for redirecting.
# Change the method according to RFC 9110, Section 15.4.4.
# And lose the body not to transfer anything sensitive.
# Strip headers marked as unsafe to forward to the redirected location.
# Check remove_headers_on_redirect to avoid a potential network call within
# conn.is_same_host() which may use socket.gethostbyname() in the future.
# For connections using HTTP CONNECT, httplib sets the necessary
# headers on the CONNECT to the proxy. If we're not using CONNECT,
# we'll definitely need to set 'Host' at the very least.
# Base Exceptions
# For pickling purposes.
#: Renamed to ProtocolError but aliased for backwards compatibility.
# Leaf Exceptions
# This timeout error does not have a URL attached and needs to inherit from the
# base HTTPError
# TODO(t-8ch): Stop inheriting from AssertionError in v2.0.
# 'localhost' is here because our URL parser parses
# localhost:8080 -> scheme=localhost, remove if we fix this.
# Platform-specific: Python 3
# Platform-specific: Python 2
# Pool objects
# This is taken from http://hg.python.org/cpython/file/7aaba721ebc0/Lib/socket.py#l252
# Fill the queue up so that doing get() on it will block properly
# These are mostly for testing and debugging purposes.
# Enable Nagle's algorithm for proxies, to avoid packet fragmentation.
# We cannot know if the user has added default socket options, so we cannot replace the
# list.
# Do not pass 'self' as callback to 'finalize':
# that would keep a reference to self alive forever (a leak).
# Passing just a reference to the pool allows the garbage collector
# to free self if nobody else has a reference to it.
# Close all the HTTPConnections in the pool before the
# HTTPConnectionPool object is garbage collected.
# self.pool is None
# Oh well, we'll create a new connection then
# If this is a persistent connection, check if it got disconnected
# This is a proxied connection that has been mutated by
# http.client._tunnel() and cannot be reused (since it would
# attempt to bypass the proxy)
# Everything is dandy, done.
# self.pool is None.
# This should never happen if self.block == True
# Connection never got put back into the pool, close it.
# Nothing to do for HTTP connections.
# User passed us an int/float. This is for backwards compatibility,
# can be removed later
# See the above comment about EAGAIN in Python 3. In Python 2 we have
# to specifically catch it and throw the timeout error
# Catch possible read timeouts thrown as SSL errors. If not the
# case, rethrow the original. We need to do this because of:
# http://bugs.python.org/issue10272
# Python < 2.7.4
# Trigger any extra validation we need to do.
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
# conn.request() calls http.client.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
# We are swallowing BrokenPipeError (errno.EPIPE) since the server is
# legitimately able to close the connection after sending a valid response.
# With this behaviour, the received response is still readable.
# Python 3
# Python 2 and macOS/Linux
# EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE/ECONNRESET are needed on macOS
# https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/
# Reset the timeout for the recv() on the socket
# App Engine doesn't have a sock attr
# In Python 3 socket.py will catch EAGAIN and return None when you
# try and read into the file pointer created by http.client, which
# instead raises a BadStatusLine exception. Instead of catching
# the exception and assuming all BadStatusLine exceptions are read
# timeouts, check for a zero timeout before making the request.
# None or a value
# Receive the response from the server
# Python 2.7, use buffering of HTTP responses
# Remove the TypeError from the exception chain in
# Python 3 (including for exceptions like SystemExit).
# Otherwise it looks like a bug in the code.
# AppEngine doesn't have a version attr.
# Disable access to the pool
# Close all the HTTPConnections in the pool.
# TODO: Add optional support for socket.gethostbyname checking.
# Use explicit default port for comparison when none is given
# Check host
# Ensure that the URL we're connecting to is properly encoded
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
# See issue #651 [1] for details.
# [1] <https://github.com/urllib3/urllib3/issues/651>
# Merge the proxy headers. Only done when not using HTTP CONNECT. We
# have to copy the headers dict so we can safely change it without those
# changes being reflected in anyone else's copy.
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
# Request a connection from the queue.
# Make the request on the httplib connection object.
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
# Pass method to Response for length checking
# Import httplib's response into our own wrapper object
# Everything went great!
# Didn't get a connection from the pool, no need to clean up
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
# We're trying to detect the message 'WRONG_VERSION_NUMBER' but
# SSLErrors are kinda all over the place when it comes to the message,
# so we try to cover our bases here!
# Try to detect a common user error with proxies which is to
# set an HTTP proxy to be HTTPS when it should be 'http://'
# (i.e. {'http': 'http://proxy', 'https': 'https://proxy'})
# Instead we add a nice error message and point to a URL.
# Keep track of the error for the retry warning.
# We hit some kind of exception, handled or otherwise. We need
# to throw the connection away unless explicitly told not to.
# Close the connection, set the variable to None, and make sure
# we put the None back in the pool to avoid leaking it.
# Put the connection back to be reused. If the connection is
# expired then it will be None, which will get replaced with a
# fresh connection during _get_conn.
# Try again
# Handle redirect?
# Check if we should retry the HTTP response.
# Force connect early to allow us to validate the connection.
# AppEngine might not have `.sock`
# httplib doesn't like it when we include brackets in IPv6 addresses
# Specifically, if we include brackets but also pass the port then
# httplib crazily doubles up the square brackets on the Host header.
# Instead, we need to make sure we never pass ``None`` as the port.
# However, for backward compatibility reasons we can't actually
# *assert* that.  See http://bugs.python.org/issue28539
# Done.
# Compiled with SSL?
# Platform-specific: No SSL.
# Python 3: not a no-op, we're adding this to the namespace so it can be imported.
# Python 3:
# Not a no-op, we're adding this to the namespace so it can be imported.
# noqa (historical, removed in v2)
# When it comes time to update this value as a part of regular maintenance
# (ie test_recent_date is failing) update it to ~6 months before the current date.
#: Disable Nagle's algorithm by default.
#: ``[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]``
#: Whether this connection verifies the host's certificate.
#: Whether this proxy connection (if used) verifies the proxy host's
#: certificate.
# Pre-set source_address.
#: The socket options provided by the user. If no options are
#: provided, we use the default options.
# Proxy options provided by the user.
# Google App Engine's httplib does not define _tunnel_host
# TODO: Fix tunnel so it doesn't depend on self.sock state.
# Mark this connection as not reusable
# Empty docstring because the indentation of CPython's implementation
# is broken but we don't want this method in our documentation.
# Update the inner socket's timeout value to send the request.
# This only triggers if the connection is re-used.
# Avoid modifying the headers passed into .request()
# After the if clause, to always have a closed body
# Required property for Google AppEngine 1.9.0 which otherwise causes
# HTTPS requests to go out as HTTP. (See Issue #356)
# If cert_reqs is not provided we'll assume CERT_REQUIRED unless we also
# have an SSLContext object in which case we'll use its verify_mode.
# Add certificate verification
# Calls self._set_hostport(), so self.host is
# self._tunnel_host below.
# Override the host with the one we're requesting data from.
# Wrap socket using verification with the root certs in
# trusted_root_certs
# Try to load OS default certs if none are given.
# Works well on Windows (requires Python3.4+)
# If we're using all defaults and the connection
# is TLSv1 or TLSv1.1 we throw a DeprecationWarning
# for the host.
# Defensive:
# While urllib3 attempts to always turn off hostname matching from
# the TLS library, this cannot always be done. So we check whether
# the TLS Library still thinks it's matching hostnames.
# If the user provided a proxy context, we assume CA and client
# certificates have already been set
# If no cert was provided, use only the default options for server
# certificate validation
# Our upstream implementation of ssl.match_hostname()
# only applies this normalization to IP addresses so it doesn't
# match DNS SANs so we do the same thing!
# Add cert to exception and reraise so client code can inspect
# the cert when catching the exception, if they want to
# noqa: F811
# Check for redirect response
# Production GAE handles deflate encoding automatically, but does
# not remove the encoding header.
# We have a full response's content,
# so let's make sure we don't report ourselves as chunked data.
# In order for decoding to work, we must present the content as
# a file-like object.
# Defer to URLFetch's default.
# Alias methods from _appengine_environ to maintain public API interface.
# Performs the NTLM handshake that secures the connection. The socket
# must be kept open while requests are performed.
# Send negotiation message
# Remove the reference to the socket, so that it can not be closed by
# the response object (we want to keep the socket open)
# Server should respond with a challenge message
# Send authentication message
# UnsupportedExtension is gone in cryptography >= 2.1.0
# SNI always works.
# Map from urllib3 to PyOpenSSL compatible parameter-values.
# OpenSSL will only write 16K at a time
# Method added in `cryptography==1.1`; not available in older versions
# pyOpenSSL 0.14 and above use cryptography for OpenSSL bindings. The _x509
# attribute is only present on those versions.
# Don't send IPv6 addresses through the IDNA encoder.
# Pass the cert to cryptography, which has much better APIs for this.
# We want to find the SAN extension. Ask Cryptography to locate it (it's
# faster than looping in Python)
# No such extension, return the empty list.
# A problem has been found with the quality of the certificate. Assume
# no SAN field is present.
# We want to return dNSName and iPAddress fields. We need to cast the IPs
# back to strings because the match_hostname function wants them as
# strings.
# Sadly the DNS names need to be idna encoded and then, on Python 3, UTF-8
# decoded. This is pretty frustrating, but that's what the standard library
# does with certificates, and so we need to attempt to do the same.
# We also want to skip over names which cannot be idna encoded.
# Copy-pasted from Python 3.5 source code
# TLS 1.3 post-handshake authentication
# FIXME rethrow compatible exceptions should we ever use this
# SNI always works
# This dictionary is used by the read callback to obtain a handle to the
# calling wrapped socket. This is a pretty silly approach, but for now it'll
# do. I feel like I should be able to smuggle a handle to the wrapped socket
# directly in the SSLConnectionRef, but for now this approach will work I
# guess.
# We need to lock around this structure for inserts, but we don't do it for
# reads/writes in the callbacks. The reasoning here goes as follows:
#    1. It is not possible to call into the callbacks before the dictionary
#       is populated, so once in the callback the id must be in the dictionary.
#    2. The callbacks don't mutate the dictionary, they only read from it,
#       and so cannot conflict with any of the insertions.
# This is good: if we had to lock in the callbacks we'd drastically slow down
# the performance of this code.
# Limit writes to 16kB. This is OpenSSL's limit, but we'll cargo-cult it over
# for no better reason than we need *a* limit, and this one is right there.
# This is our equivalent of util.ssl_.DEFAULT_CIPHERS, but expanded out to
# individual cipher suites. We need to do this because this is how
# SecureTransport wants them.
# Basically this is simple: for PROTOCOL_SSLv23 we turn it into a low of
# TLSv1 and a high of TLSv1.2. For everything else, we pin to that version.
# TLSv1 to 1.2 are supported on macOS 10.8+
# This has some needless copying here, but I'm not sure there's
# much value in optimising this data path.
# We need to keep these two objects references alive: if they get GC'd while
# in use then SecureTransport could attempt to call a function that is in freed
# memory. That would be...uh...bad. Yeah, that's the word. Bad.
# We save off the previously-configured timeout and then set it to
# zero. This is done because we use select and friends to handle the
# timeouts, but if we leave the timeout set on the lower socket then
# Python will "kindly" call select on that socket again for us. Avoid
# that by forcing the timeout to zero.
# We explicitly don't catch around this yield because in the unlikely
# event that an exception was hit in the block we don't want to swallow it.
# If we disabled cert validation, just say: cool.
# Do not trust on error
# SecureTransport does not send an alert nor shuts down the connection.
# close the connection immediately
# l_onoff = 1, activate linger
# l_linger = 0, linger for 0 seconds
# We want data in memory, so load it up.
# Get a CFArray that contains the certs we want.
# Ok, now the hard part. We want to get the SecTrustRef that ST has
# created for this connection, shove our CAs into it, tell ST to
# ignore everything else it knows, and then ask if it can build a
# chain. This is a buuuunch of code.
# First, we do the initial bits of connection setup. We need to create
# a context, set its I/O funcs, and set the connection reference.
# Here we need to compute the handle to use. We do this by taking the
# id of self modulo 2**31 - 1. If this is already in the dictionary, we
# just keep incrementing by one until we find a free space.
# If we have a server hostname, we should set that too.
# Setup the ciphers.
# Setup the ALPN protocols.
# Set the minimum and maximum TLS versions.
# If there's a trust DB, we need to use it. We do that by telling
# SecureTransport to break on server auth. We also do that if we don't
# want to validate the certs at all: we just won't actually do any
# authing in that case.
# If there's a client cert, we need to use it.
# Read short on EOF.
# There are some result codes that we want to treat as "not always
# errors". Specifically, those are errSSLWouldBlock,
# errSSLClosedGraceful, and errSSLClosedNoNotify.
# If we didn't process any bytes, then this was just a time out.
# However, we can get errSSLWouldBlock in situations when we *did*
# read some data, and in those cases we should just read "short" and return.
# Timed out, no data read.
# The remote peer has closed this connection. We should do so as
# well. Note that we don't actually return here because in
# principle this could actually be fired along with return data.
# It's unlikely though.
# Ok, we read and probably succeeded. We should return whatever data
# was actually read.
# Timed out
# We sent, and probably succeeded. Tell them how much we sent.
# TODO: should I do clean shutdown here? Do I have to?
# Urgh, annoying.
# Here's how we do this:
# 1. Call SSLCopyPeerTrust to get hold of the trust object for this
# 2. Call SecTrustGetCertificateAtIndex for index 0 to get the leaf.
# 3. To get the CN, call SecCertificateCopyCommonName and process that
# 4. To get the SAN, we need to do something a bit more complex:
# This is gross. Really gross. It's going to be a few hundred LoC extra
# just to repeat something that SecureTransport can *already do*. So my
# operating assumption at this time is that what we want to do is
# instead to just flag to urllib3 that it shouldn't do its own hostname
# validation when using SecureTransport.
# Grab the trust store.
# Probably we haven't done the handshake yet. No biggie.
# Also a case that might happen if we haven't handshaked.
# Handshook? Handshaken?
# Ok, now we want the DER bytes.
# We disable buffering with SecureTransport because it conflicts with
# the buffering that ST does internally (see issue #1153 for more).
# TODO: Well, crap.
# So this is the bit of the code that is the most likely to cause us
# trouble. Essentially we need to enumerate all of the SSL options that
# users might want to use and try to see if we can sensibly translate
# them, or whether we should just ignore them.
# TODO: Update in line with above.
# So, this has to do something a bit weird. Specifically, what it does
# is nothing.
# This means that, if we had previously had load_verify_locations
# called, this does not undo that. We need to do that because it turns
# out that the rest of the urllib3 code will attempt to load the
# default verify paths if it hasn't been told about any paths, even if
# the context itself was configured sometime earlier. We resolve that by just
# ignoring it.
# For now, we just require the default cipher string.
# OK, we only really support cadata and cafile.
# Raise if cafile does not exist.
# So, what do we do here? Firstly, we assert some properties. This is a
# stripped down shim, so there is some functionality we don't support.
# See PEP 543 for the real deal.
# Ok, we're good to go. Now we want to create the wrapped socket object
# and store it in the appropriate place.
# Now we can handshake
# This is fragile as hell, but it seems to be the only way to raise
# useful errors here.
# Defensive: PySocks should catch all these.
# We don't need to duplicate the Verified/Unverified distinction from
# urllib3/connection.py here because the HTTPSConnection will already have been
# correctly set to either the Verified or Unverified form by that module. This
# means the SOCKSHTTPSConnection will automatically be the correct type.
# This regular expression is used to grab PEM data out of a PEM bundle.
# We need to get the dictionary keys and values out in the same order.
# Normalize the PEM bundle's line endings.
# We need to free the array before the exception bubbles further.
# We only want to do that if an error occurs: otherwise, the caller
# should free.
# Unfortunately, SecKeychainCreate requires a path to a keychain. This
# means we cannot use mkstemp to use a generic temporary file. Instead,
# we're going to create a temporary directory and a filename to use there.
# This filename will be 8 random bytes expanded into base64. We also need
# some random bytes to password-protect the keychain we're creating, so we
# ask for 40 random bytes.
# Must be valid UTF-8
# We now want to create the keychain itself.
# Having created the keychain, we want to pass it off to the caller.
# cert data
# Filename, leaving it out for now
# What the type of the file is, we don't care
# what's in the file, we don't care
# import flags
# key params, can include passphrase in the future
# The keychain to insert into
# Results
# A CFArray is not very useful to us as an intermediary
# representation, so we are going to extract the objects we want
# and then free the array. We don't need to keep hold of keys: the
# keychain already has them!
# Ok, the strategy.
# This relies on knowing that macOS will not give you a SecIdentityRef
# unless you have imported a key into a keychain. This is a somewhat
# artificial limitation of macOS (for example, it doesn't necessarily
# affect iOS), but there is nothing inside Security.framework that lets you
# get a SecIdentityRef without having a key in a keychain.
# So the policy here is we take all the files and iterate them in order.
# Each one will use SecItemImport to have one or more objects loaded from
# it. We will also point at a keychain that macOS can use to work with the
# private key.
# Once we have all the objects, we'll check what we actually have. If we
# already have a SecIdentityRef in hand, fab: we'll use that. Otherwise,
# we'll take the first certificate (which we assume to be our leaf) and
# ask the keychain to give us a SecIdentityRef with that cert's associated
# key.
# We'll then return a CFArray containing the trust chain: one
# SecIdentityRef and then zero-or-more SecCertificateRef objects. The
# responsibility for freeing this CFArray will be with the caller. This
# CFArray must remain alive for the entire connection, so in practice it
# will be stored with a single SSLSocket, along with the reference to the
# keychain.
# Filter out bad paths.
# Ok, we have everything. The question is: do we have an identity? If
# not, we want to grab one from the first cert we have.
# We now want to release the original certificate, as we no longer
# need it.
# We now need to build a new CFArray that holds the trust chain.
# ArrayAppendValue does a CFRetain on the item. That's fine,
# because the finally block will release our other refs to them.
# Big Sur is technically 11 but we use 10.16 due to the Big Sur
# beta being labeled as 10.16.
# Caught and reraised as 'ImportError'
# Supported only in 10.12+
# CoreFoundation time!
# SecureTransport does not support TLS 1.3 even if there's a constant for it
# This gap is present on purpose: this was kSecTrustResultConfirm, which
# is deprecated.
# Cipher suites. We only pick the ones our default cipher string allows.
# Source: https://developer.apple.com/documentation/security/1550981-ssl_cipher_suite_values
# For backwards compatibility, provide imports that used to be here.
# Perform initial handshake.
# eof, return 0.
# WANT_READ, and WANT_WRITE are expected, others are not.
# Check `isclosed()` first, in case Python3 doesn't set `closed`.
# GH Issue #928
# Check via the official file-like-object way.
# Check if the object is a container for another file-like object that
# gets released on exhaustion (e.g. HTTPResponse).
# This will fail silently if we pass in the wrong kind of parameter.
# To make debugging easier add an explicit check.
# get_payload is actually email.message.Message.get_payload;
# we're only interested in the result if it's not a multipart message
# httplib is assuming a response body is available
# when parsing headers even when httplib only sends
# header data to parse_headers(). This results in
# defects on multipart responses in particular.
# See: https://github.com/urllib3/urllib3/issues/800
# So we ignore the following defects:
# - StartBoundaryNotFoundDefect:
# - MultipartInvariantViolationDefect:
# FIXME: Can we do this somehow without accessing private httplib _method?
# Platform-specific: Appengine
# Queue is imported for side effects on MS Windows. See issue #229.
# Pass as a value within ``headers`` to skip
# emitting some HTTP headers that are added automatically.
# The only headers that are supported are ``Accept-Encoding``,
# ``Host``, and ``User-Agent``.
# This differentiates from None, allowing us to catch
# a failed `tell()` later when trying to rewind the body.
# If we're not using a proxy, no way to use a tunnel.
# HTTP destinations never require tunneling, we always forward.
# Support for forwarding with HTTPS proxies and HTTPS destinations.
# Otherwise always use a tunnel.
# We only want to normalize urls with an HTTP(S) scheme.
# urllib3 infers URLs without a scheme (None) to be http.
# Almost all of these patterns were derived from the
# 'rfc3986' module: https://github.com/python-hyper/rfc3986
# [               h16 ] "::" 4( h16 ":" ) ls32
# [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32
# [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32
# [ *3( h16 ":" ) h16 ] "::"    h16 ":"   ls32
# [ *4( h16 ":" ) h16 ] "::"              ls32
# [ *5( h16 ":" ) h16 ] "::"              h16
# [ *6( h16 ":" ) h16 ] "::"
# We use "is not None" because we want things to happen with empty strings (or 0 port)
# Normalize existing percent-encoded bytes.
# Try to see if the component we're encoding is already percent-encoded
# so we can skip all '%' characters but still encode all others.
# Will return a single character bytestring on both Python 2 & 3
# See http://tools.ietf.org/html/rfc3986#section-5.2.4 for pseudo-code
# Turn the path into a list of segments
# Initialize the variable to use to store output
# '.' is the current directory, so ignore it, it is superfluous
# Anything other than '..', should be appended to the output
# In this case segment == '..', if we can, we should pop the last
# element
# If the path starts with '/' and the output is empty or the first string
# is non-empty
# If the path starts with '/.' or '/..' ensure we add one more empty
# string to add a trailing '/'
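The dot-segment steps those comments walk through can be sketched as a simplified take on RFC 3986 section 5.2.4 (not the library's exact code):

```python
def remove_dot_segments(path):
    # Walk the '/'-separated segments, dropping '.' and letting '..'
    # pop the previously emitted segment.
    segments = path.split("/")
    output = []
    for segment in segments:
        if segment == ".":
            continue  # '.' is the current directory: superfluous
        elif segment != "..":
            output.append(segment)
        elif output:
            output.pop()  # '..' removes the last segment when possible
    # Keep a leading '/' if the input had one.
    if path.startswith("/") and (not output or output[0]):
        output.insert(0, "")
    # Inputs ending in '/.' or '/..' keep a trailing '/'.
    if path.endswith(("/.", "/..")):
        output.append("")
    return "/".join(output)
```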
# IPv6 hosts of the form 'a::b%zone' are encoded in a URL as
# such per RFC 6874: 'a::b%25zone'. Unquote the ZoneID
# separator as necessary to return a valid RFC 4007 scoped IP.
# Empty
# For the sake of backwards compatibility we put empty
# string values for path if there are any defined values
# beyond the path in the URL.
# TODO: Remove this when we break backwards compatibility.
# Ensure that each part of the URL is a `str` for
# backwards compatibility.
# Note: This file is under the PSF license as the code comes from the python
# stdlib.   http://docs.python.org/3/license.html
# ipaddress has been backported to 2.6+ in pypi.  If it is installed on the
# system, use it to handle IPAddress ServerAltnames (this was added in
# python-3.5) otherwise only do DNS matching.  This allows
# util.ssl_match_hostname to continue to be used in Python 2.7.
# Ported from python3-syntax:
# leftmost, *remainder = dn.split(r'.')
# ignored flake8 # F821 to support python 2.7 function
# noqa: F821
# OpenSSL may add a trailing newline to a subjectAltName's IP address
# Divergence from upstream: ipaddress can't handle byte str
# ValueError: Not an IP address (common case)
# UnicodeError: Divergence from upstream: Have to deal with ipaddress not taking
# byte strings.  Addresses should be all ascii, so we consider it not
# an ipaddress in this case
# Divergence from upstream: Make ipaddress library optional
# Defensive
# Data structure for representing the metadata of requests that result in a retry.
# TODO: In v2 we can remove this sentinel and metaclass with deprecated options.
#: Default methods to be used for ``allowed_methods``
#: Default status codes to be used for ``status_forcelist``
#: Default headers to be used for ``remove_headers_on_redirect``
#: Maximum backoff time.
# TODO: Deprecated, remove in v2.0
# TODO: If already given in **kw we use what's given to us
# If not given we need to figure out what to pass. We decide
# based on whether our class has the 'method_whitelist' property
# and if so we pass the deprecated 'method_whitelist' otherwise
# we use 'allowed_methods'. Remove in v2.0
# We want to consider only the last consecutive errors sequence (Ignore redirects).
# Whitespace: https://tools.ietf.org/html/rfc7230#section-3.2.4
# Assume UTC if no timezone was specified
# On Python2.7, parsedate_tz returns None for a timezone offset
# instead of 0 if no timezone is given, where mktime_tz treats
# a None timezone offset as local time.
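The timezone workaround described above can be sketched like this; `parse_http_date` is an illustrative name, not the library's API:

```python
from email.utils import mktime_tz, parsedate_tz

def parse_http_date(value):
    # parsedate_tz yields None as the timezone offset when the header
    # carries no zone; mktime_tz would then treat the time as local,
    # so substitute 0 to assume UTC instead.
    parsed = parsedate_tz(value)
    if parsed is None:
        return None
    if parsed[9] is None:
        parsed = parsed[:9] + (0,)
    return mktime_tz(parsed)
```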
# TODO: For now favor if the Retry implementation sets its own method_whitelist
# property outside of our constructor to avoid breaking custom implementations.
# Disabled, indicate to re-raise the error.
# Connect retry?
# Read retry?
# Other retry?
# Redirect retry?
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the allowed_methods
# TODO: Remove this deprecated alias in v2.0
# For backwards compatibility (equivalent to pre-v1.9):
# The default socket timeout, used by httplib to indicate that no timeout was specified by the user
# A sentinel value to indicate that no timeout was specified by the user in
# urllib3
# Use time.monotonic if available.
#: A sentinel object representing the default timeout value
# __str__ provided for backwards compatibility
# We can't use copy.deepcopy because that will also create a new object
# for _GLOBAL_DEFAULT_TIMEOUT, which socket.py uses as a sentinel to
# detect the user default.
# In case the connect timeout has not yet been established.
# Maps the length of a digest to a possible hash function producing this digest
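A minimal sketch of that mapping, used here for a hypothetical fingerprint check (`fingerprint_matches` is an illustrative name):

```python
import hashlib

# Hex-digest length -> hash constructor, as the comment above describes.
HASHFUNC_MAP = {32: hashlib.md5, 40: hashlib.sha1, 64: hashlib.sha256}

def fingerprint_matches(cert_der, fingerprint):
    # Normalize 'AB:CD:...' style fingerprints, pick the hash function
    # by digest length, and compare against the cert's actual digest.
    fingerprint = fingerprint.replace(":", "").lower()
    hashfunc = HASHFUNC_MAP[len(fingerprint)]
    return hashfunc(cert_der).hexdigest() == fingerprint
```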
# Test for SSL features
# Has SNI?
# Platform-specific: Python 3.6
# OP_NO_TICKET was added in Python 3.6
# A secure default.
# Sources for more information on TLS ciphers:
# - https://wiki.mozilla.org/Security/Server_Side_TLS
# - https://www.ssllabs.com/projects/best-practices/index.html
# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
# The general intent is:
# - prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE),
# - prefer ECDHE over DHE for better performance,
# - prefer any AES-GCM and ChaCha20 over any AES-CBC for better performance
#   and security,
# - prefer AES-GCM over ChaCha20 because hardware-accelerated AES is common,
# - disable NULL authentication, MD5 MACs, DSS, and other insecure ciphers
#   for security reasons.
# - NOTE: TLS 1.3 cipher suites are managed through a different interface
# Modern SSL?
# Use default values from a real SSLContext
# We need encode() here for py32; works on py2 and py33.
# PROTOCOL_TLS is deprecated in Python 3.10
# Setting the default here, as we may have no ssl module on import
# SSLv2 is easily broken and is considered harmful and dangerous
# SSLv3 has several problems and is now dangerous
# Disable compression to prevent CRIME attacks for OpenSSL 1.0+
# (issue #309)
# TLSv1.2 only. Unless set explicitly, do not request tickets.
# This may save some bandwidth on wire, and although the ticket is encrypted,
# there is a risk associated with it being on wire,
# if the server is not rotating its ticketing keys properly.
# Enable post-handshake authentication for TLS 1.3, see GH #1634. PHA is
# necessary for conditional client cert authentication with TLS 1.3.
# The attribute is None for OpenSSL <= 1.1.0 or does not exist in older
# versions of Python.  We only enable on Python 3.7.4+ or if certificate
# verification is enabled to work around Python issue #37428
# See: https://bugs.python.org/issue37428
# Platform-specific: Python 3.2
# We do our own verification, including fingerprints and alternative
# hostnames. So disable it here
# The order of the below lines setting verify_mode and check_hostname
# matter due to safe-guards SSLContext has to prevent an SSLContext with
# check_hostname=True, verify_mode=NONE/OPTIONAL. This is made even more
# complex because we don't know whether PROTOCOL_TLS_CLIENT will be used
# or not so we don't know the initial state of the freshly created SSLContext.
# Enable logging of TLS session keys via defacto standard environment variable
# 'SSLKEYLOGFILE', if the feature is available (Python 3.8+). Skip empty values.
# Note: This branch of code and all the variables in it are no longer
# used by urllib3 itself. We should consider deprecating and removing
# this code.
# try to load OS default certs; works well on Windows (requires Python 3.4+)
# Attempt to detect if we get the goofy behavior of the
# keyfile being encrypted and OpenSSL asking for the
# passphrase via the terminal and instead error out.
# Defensive: in CI, we always have set_alpn_protocols
# If we detect server_hostname is an IP address then the SNI
# extension should not be used according to RFC3546 Section 3.1
# SecureTransport uses server_hostname in certificate verification.
# Do not warn the user if server_hostname is an invalid SNI hostname.
# IDN A-label bytes are ASCII compatible.
# Look for Proc-Type: 4,ENCRYPTED
# Import error, ssl is not available.
# How should we wait on sockets?
# There are two types of APIs you can use for waiting on sockets: the fancy
# modern stateful APIs like epoll/kqueue, and the older stateless APIs like
# select/poll. The stateful APIs are more efficient when you have a lot of
# sockets to keep track of, because you can set them up once and then use them
# lots of times. But we only ever want to wait on a single socket at a time
# and don't want to keep track of state, so the stateless APIs are actually
# more efficient. So we want to use select() or poll().
# Now, how do we choose between select() and poll()? On traditional Unixes,
# select() has a strange calling convention that makes it slow, or fail
# altogether, for high-numbered file descriptors. The point of poll() is to fix
# that, so on Unixes, we prefer poll().
# On Windows, there is no poll() (or at least Python doesn't provide a wrapper
# for it), but that's OK, because on Windows, select() doesn't have this
# strange calling convention; plain select() works fine.
# So: on Windows we use select(), and everywhere else we use poll(). We also
# fall back to select() in case poll() is somehow broken or missing.
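The selection logic laid out above can be sketched as follows (a simplified sketch; `choose_wait_impl` and `_have_working_poll` are illustrative names):

```python
import select
import sys

def _have_working_poll():
    # Probe poll() once: some platforms or monkeypatched environments
    # expose select.poll but it fails as soon as it is used.
    try:
        select.poll().poll(0)
    except (AttributeError, OSError):
        return False
    return True

def choose_wait_impl():
    # Prefer poll() everywhere but Windows; fall back to select() if
    # poll() is missing or broken.
    if sys.platform != "win32" and _have_working_poll():
        return "poll"
    return "select"
```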
# Modern Python, that retries syscalls by default
# Old and broken Pythons.
# OSError for 3 <= pyver < 3.5, select.error for pyver <= 2.7
# 'e.args[0]' incantation works for both OSError and select.error
# When doing a non-blocking connect, most systems signal success by
# marking the socket writable. Windows, though, signals success by marking
# it as "exceptional". We paper over the difference by checking the write
# sockets for both conditions. (The stdlib selectors module does the same
# thing.)
# For some reason, poll() takes timeout in milliseconds
# Apparently some systems have a select.poll that fails as soon as you try
# to use it, either due to strange configuration or broken monkeypatching
# from libraries like eventlet/greenlet.
# We delay choosing which implementation to use until the first time we're
# called. We could do it at import time, but then we might make the wrong
# decision if someone goes wild with monkeypatching select.poll after
# we're imported.
# Platform-specific: Appengine.
# Platform-specific
# Platform-specific: AppEngine
# Connection already closed (such as by httplib).
# Returns True if readable, which here means it's been dropped
# This function is copied from socket.py in the Python 2.7 standard
# library test suite. Added to its signature is only `socket_options`.
# One additional modification is that we avoid binding to IPv6 servers
# discovered in DNS if the system doesn't have IPv6 functionality.
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
# If provided, set socket level options before connecting.
# App Engine doesn't support IPV6 sockets and actually has a quota on the
# number of sockets that can be used, so just early out here instead of
# creating a socket needlessly.
# See https://github.com/urllib3/urllib3/issues/1446
# has_ipv6 returns true if cPython was compiled with IPv6 support.
# It does not tell us if the system has IPv6 support enabled. To
# determine that we must bind to an IPv6 address.
# https://github.com/urllib3/urllib3/pull/611
# https://bugs.python.org/issue658327
# Copyright (c) 2010-2020 Benjamin Peterson
# DO NOT EDIT THIS LINE MANUALLY. LET bump2version UTILITY DO IT
# backspace
# linefeed
# form feed
# carriage return
# quote
# backslash
# cache rendered inline tables (mapping from object id to rendered inline table)
# => [(key, value, inside_aot)]
# Lazy import to improve module import time
# check cache first
# Identifier.
# Requirement.
# Candidate.
# <key> -> Set[<key>]
# Sentinel as root dependencies' parent.
# Optimistic backjumping variables
# Check the newly-pinned candidate actually works. This should
# always pass under normal circumstances, but in the case of a
# faulty provider, we will raise an error to notify the implementer
# to fix find_matches() and/or is_satisfied_by().
# Put newly-pinned candidate at the end. This is essential because
# backtracking looks at this mapping to get the last pin.
# All candidates tried, nothing works. This criterion is a dead
# end, signal for backtracking.
# Create a new state from the last known-to-work one, and apply
# the previously gathered incompatibility information.
# Remove the state that triggered backtracking.
# Optimistically backtrack to a state that caused the incompatibility
# Retrieve the last candidate pin and known incompatibilities.
# For safe backjumping only backjump if the current dependency
# is not the same as the incompatible dependency
# On the first time a non-safe backjump is done the state
# is saved so we can restore it later if the resolution fails
# If the current dependencies and the incompatible dependencies
# are overlapping then we have likely found a cause of the
# incompatibility
# Fallback: We should not backtrack to the point where
# broken_state.mapping is empty, so stop backtracking for
# a chance for the resolution to recover
# Also mark the newly known incompatibility.
# It works! Let's work on this new state.
# State does not work after applying known incompatibilities.
# Try the still previous state.
# No way to backtrack anymore.
# Initialize the root state.
# The root state is saved as a sentinel so the first ever pin can have
# something to backtrack to if it fails. The root state is basically
# pinning the virtual "root" package in the graph.
# Variables for optimistic backjumping
# Handle if optimistic backjumping has been running for too long
# All criteria are accounted for. Nothing more to pin, we are done!
# keep track of satisfied names to calculate diff after pinning
# If there are no unsatisfied names use unsatisfied names
# If there is only 1 unsatisfied name skip calling self._get_preference
# Choose the most preferred unpinned criterion to try.
# Backjump if pinning fails. The backjump process puts us in
# an unpinned state, so we can work on it in the next round.
# Dead ends everywhere. Give up.
# discard as information sources any invalidated names
# (unsatisfied names that were previously satisfied)
# Pinning was successful. Push a new state to do another pin.
# causes is a list of RequirementInformation objects
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
# TODO: Can we test whether something is contained within a requirement?
# TODO: Can we normalize the name and extra name?
# --------------------------------------------------------------------------------------
# Recursive descent parser for dependency specifier
# The input might end after whitespace.
# Recursive descent parser for marker expression
# Store whether or not this Specifier should accept prereleases
# https://github.com/python/mypy/pull/13475#pullrequestreview-1079784515
# If there is an explicit prereleases set for this, then we'll just
# blindly use that.
# Look at all of our specifiers and determine if they are inclusive
# operators, and if they are if they are including an explicit
# prerelease.
# The == specifier can include a trailing .*, if it does we
# want to remove before parsing.
# Parse the version, and if it is a pre-release, then this
# specifier allows pre-releases.
# Compatible releases have an equivalent combination of >= and ==. That
# is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to
# implement this in terms of the other specifiers instead of
# implementing it ourselves. The only thing we need to do is construct
# the other specifiers.
# We want everything but the last item in the version, but we want to
# ignore suffix segments.
# Add the prefix notation to the end of our string
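A minimal sketch of the `~=` expansion described above (ignoring epochs; `expand_compatible` is a hypothetical helper, not packaging's implementation):

```python
def expand_compatible(spec: str) -> str:
    # "~=2.2" -> ">=2.2,==2.*": everything but the last release item,
    # ignoring suffix segments such as ".post3"
    assert spec.startswith("~=")
    version = spec[2:]
    release = []
    for part in version.split("."):
        if not part.isdigit():
            break  # stop at the first suffix segment
        release.append(part)
    prefix = ".".join(release[:-1]) + ".*"
    return f">={version},=={prefix}"
```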
# We need special logic to handle prefix matching
# In the case of prefix matching we want to ignore local segment.
# Get the normalized version string ignoring the trailing .*
# Split the spec out by bangs and dots, and pretend that there is
# an implicit dot in between a release segment and a pre-release segment.
# Split the prospective version out by bangs and dots, and pretend
# that there is an implicit dot in between a release segment and
# a pre-release segment.
# 0-pad the prospective version before shortening it to get the correct
# shortened version.
# Shorten the prospective version to be the same length as the spec
# so that we can determine if the specifier is a prefix of the
# prospective version or not.
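The zero-padded prefix comparison described above can be sketched like this (a simplification that ignores epochs, pre-release segments, and local versions; `prefix_match` is a hypothetical helper):

```python
def prefix_match(spec: str, prospective: str) -> bool:
    # spec like "2.0.*": zero-pad the prospective release, then
    # compare segment-by-segment as a prefix
    spec_parts = spec.rstrip("*").rstrip(".").split(".")
    pros_parts = prospective.split(".")
    pros_parts += ["0"] * (len(spec_parts) - len(pros_parts))
    return pros_parts[: len(spec_parts)] == spec_parts
```

The padding is what makes `2` satisfy `==2.0.*`: the prospective version is extended to `2.0` before the prefix comparison.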
# Convert our spec string into a Version
# If the specifier does not have a local segment, then we want to
# act as if the prospective version also does not have a local
# segment.
# NB: Local version identifiers are NOT permitted in the version
# specifier, so local version labels can be universally removed from
# the prospective version.
# Convert our spec to a Version instance, since we'll want to work with
# it as a version.
# Check to see if the prospective version is less than the spec
# version. If it's not we can short circuit and just return False now
# instead of doing extra unneeded work.
# This special case is here so that, unless the specifier itself
# includes a pre-release version, we do not accept pre-release
# versions for the version mentioned in the specifier (e.g. <3.1 should
# not match 3.1.dev0, but should match 3.0.dev0).
# If we've gotten to here, it means that prospective version is both
# less than the spec version *and* it's not a pre-release of the same
# version in the spec.
# Check to see if the prospective version is greater than the spec
# version. If it's not, we can short circuit and just return False now
# instead of doing extra unneeded work.
# This special case is here so that, unless the specifier itself
# includes a post-release version, we do not accept
# post-release versions for the version mentioned in the specifier
# (e.g. >3.1 should not match 3.0.post0, but should match 3.2.post0).
# Ensure that we do not allow a local version of the version mentioned
# in the specifier, which is technically greater than, to match.
# If we've gotten to here, it means that the prospective version is both
# greater than the spec version *and* it's not a post-release of the
# same version in the spec.
# Determine if prereleases are to be allowed or not.
# Normalize item to a Version, this allows us to have a shortcut for
# "2.0" in Specifier(">=2")
# Determine if we should be supporting prereleases in this specifier
# or not; if we do not support prereleases, then we can short-circuit
# the logic if this version is a prerelease.
# Actually do the comparison to determine if this item is contained
# within this Specifier or not.
# Attempt to iterate over all the values in the iterable and if any of
# them match, yield them.
# If our version is a prerelease, and we were not set to allow
# prereleases, then we'll store it for later in case nothing
# else matches this specifier.
# Either this is not a prerelease, or we should have been
# accepting prereleases from the beginning.
# Now that we've iterated over everything, determine if we've yielded
# any values, and if we have not and we have any prereleases stored up
# then we will go ahead and yield the prereleases.
# Get the release segment of our versions
# Get the rest of our versions
# Insert our padding
# Split on `,` to break each individual specifier into its own item, and
# strip each item to remove leading/trailing whitespace.
# Make each individual specifier a Specifier and save in a frozen set
# for later.
# Save the supplied specifiers in a frozen set.
# Store our prereleases value so we can use it later to determine if
# we accept prereleases or not.
# If we have been given an explicit prerelease modifier, then we'll
# pass that through here.
# If we don't have any specifiers, and we don't have a forced value,
# then we'll just return None since we don't know if this should have
# pre-releases or not.
# Otherwise we'll see if any of the given specifiers accept
# prereleases, if any of them do we'll return True, otherwise False.
# Ensure that our item is a Version instance.
# Determine if we're forcing a prerelease or not, if we're not forcing
# one for this particular filter call, then we'll use whatever the
# SpecifierSet thinks for whether or not we should support prereleases.
# We can determine if we're going to allow pre-releases by looking to
# see if any of the underlying items supports them. If none of them do
# and this item is a pre-release then we do not allow it and we can
# short circuit that here.
# Note: This means that 1.0.dev1 would not be contained in something
# like >=1.0.
# We simply dispatch to the underlying specs here to make sure that the
# given version is contained within all of them.
# Note: This use of all() here means that an empty set of specifiers
# will always return True; this is an explicit design decision.
# If we have any specifiers, then we want to wrap our iterable in the
# filter method for each one, this will act as a logical AND amongst
# each specifier.
# If we do not have any specifiers, then we need to have a rough filter
# which will filter out any pre-releases, unless there are no final
# releases.
# Store any item which is a pre-release for later unless we've
# already found a final version or we are accepting prereleases
# If we've found no items except for pre-releases, then we'll go
# ahead and use the pre-releases
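The hold-back-prereleases filtering described above can be sketched as a small generator (a simplification, assuming a caller-supplied `is_prerelease` predicate rather than real version parsing):

```python
def filter_versions(versions, is_prerelease, allow_prereleases=False):
    # Yield final releases as they come; keep prereleases aside, and
    # fall back to them only if no final release matched.
    found_final = False
    held_back = []
    for v in versions:
        if is_prerelease(v) and not allow_prereleases:
            if not found_final:
                held_back.append(v)
        else:
            found_final = True
            yield v
    if not found_final:
        yield from held_back
```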
# Sometimes we have a structure like [[...]] which is a single item list
# where the single item is itself its own list. In that case we want to skip
# the rest of this function so that we don't get extraneous () on the
# outside.
# PEP 685 – Comparison of extra names for optional distribution dependencies
# https://peps.python.org/pep-0685/
# > When comparing extra names, tools MUST normalize the names being
# > compared using the semantics outlined in PEP 503 for names
# other environment markers don't have such standards
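The PEP 503 normalization that PEP 685 mandates for extra names boils down to one regex (this is the rule as specified; `canonicalize_extra` is a hypothetical name):

```python
import re

def canonicalize_extra(name: str) -> str:
    # PEP 503 normalization: lowercase, with runs of "-", "_" and "."
    # collapsed to a single "-"
    return re.sub(r"[-_.]+", "-", name).lower()
```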
# Note: We create a Marker object without calling this constructor in
# packaging.requirements.Requirement. If any additional logic is
# added here, make sure to mirror/adapt Requirement.
# The attribute `_markers` can be described in terms of a recursive type:
# MarkerList = List[Union[Tuple[Node, ...], str, MarkerList]]
# For example, the following expression:
# python_version > "3.6" or (python_version == "3.6" and os_name == "unix")
# is parsed into:
# [
#     (<Variable('python_version')>, <Op('>')>, <Value('3.6')>),
#     'or',
#     [
#         (<Variable('python_version')>, <Op('==')>, <Value('3.6')>),
#         'and',
#         (<Variable('os_name')>, <Op('==')>, <Value('unix')>)
#     ]
# ]
# The API used to allow setting extra to None. We need to handle this
# case for backwards compatibility.
# Generic.
# The __hash__ of every single element in a Set[Tag] will be evaluated each time
# that a set calls its `.isdisjoint()` method, which may be called hundreds of
# times when scanning a page of links for packages with tags matching that
# Set[Tag]. Pre-computing the value here produces significant speedups for
# downstream consumers.
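The pre-computed-hash pattern described above can be sketched like this (a simplification, not packaging's full `Tag` implementation):

```python
class Tag:
    __slots__ = ("_interpreter", "_abi", "_platform", "_hash")

    def __init__(self, interpreter: str, abi: str, platform: str) -> None:
        self._interpreter = interpreter.lower()
        self._abi = abi.lower()
        self._platform = platform.lower()
        # Hash once here instead of on every set operation.
        self._hash = hash((self._interpreter, self._abi, self._platform))

    def __eq__(self, other) -> bool:
        return (
            isinstance(other, Tag)
            and self._hash == other._hash  # cheap inequality check first
            and self._interpreter == other._interpreter
            and self._abi == other._abi
            and self._platform == other._platform
        )

    def __hash__(self) -> int:
        return self._hash
```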
# Short-circuit ASAP for perf reasons.
# expect e.g., cp313
# To allow for version comparison.
# Windows doesn't set Py_DEBUG, so checking for support of debug-compiled
# extension modules is the best option.
# https://github.com/pypa/pip/issues/3383#issuecomment-173267692
# Debug builds can also load "normal" extension modules.
# We can also assume no UCS-4 or pymalloc requirement.
# 'abi3' and 'none' are explicitly handled later.
# The following are examples of `EXT_SUFFIX`.
# We want to keep the parts which are related to the ABI and remove the
# parts which are related to the platform:
# - linux:   '.cpython-310-x86_64-linux-gnu.so' => cp310
# - mac:     '.cpython-310-darwin.so'           => cp310
# - win:     '.cp310-win_amd64.pyd'             => cp310
# - win:     '.pyd'                             => cp37 (uses _cpython_abis())
# - pypy:    '.pypy38-pp73-x86_64-linux-gnu.so' => pypy38_pp73
# - graalpy: '.graalpy-38-native-x86_64-darwin.dylib'
# CPython3.7 and earlier uses ".pyd" on Windows.
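A rough sketch of mapping `EXT_SUFFIX` values like the examples above to an ABI tag (`abi_from_ext_suffix` is a hypothetical helper covering only the CPython and PyPy cases; the real code handles more):

```python
def abi_from_ext_suffix(ext_suffix: str):
    # ".cpython-310-x86_64-linux-gnu.so" -> "cp310"
    parts = ext_suffix.split(".")
    if len(parts) < 3:
        return None  # bare ".pyd" carries no SOABI information
    soabi = parts[1]
    if soabi.startswith("cpython-"):
        return "cp" + soabi.split("-")[1]      # linux / mac
    if soabi.startswith("cp"):
        return soabi.split("-")[0]             # windows
    if soabi.startswith("pypy"):
        return "_".join(soabi.split("-")[:2])  # pypy
    return None
```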
# non-windows
# windows
# pyston, ironpython, others?
# TODO: Need to care about 32-bit PPC for ppc64 through 10.2?
# When built against an older macOS SDK, Python will report macOS 10.16
# instead of the real version.
# Prior to Mac OS 11, each yearly release of Mac OS bumped the
# "minor" version number.  The major version was always 10.
# Starting with Mac OS 11, each yearly release bumps the major version
# number.   The minor versions are now the midyear updates.
# Mac OS 11 on x86_64 is compatible with binaries from previous releases.
# Arm64 support was introduced in 11.0, so no Arm binaries from previous
# releases exist.
# However, the "universal2" binary format can have a
# macOS version earlier than 11.0 when the x86_64 part of the binary supports
# that version of macOS.
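The two macOS numbering schemes described above can be sketched as one countdown (a simplification that ignores architecture, so the arm64 caveat is not modeled; `compatible_mac_versions` is a hypothetical helper):

```python
def compatible_mac_versions(major: int, minor: int):
    # Post-Big Sur: count majors down to 11, then continue through
    # the pre-11 scheme where only the 10.x minor was bumped.
    versions = []
    if major >= 11:
        versions += [(m, 0) for m in range(major, 10, -1)]
        major, minor = 10, 15
    versions += [(10, m) for m in range(minor, 3, -1)]
    return versions
```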
# if iOS is the current platform, ios_ver *must* be defined. However,
# it won't exist for CPython versions before 3.13, which causes a mypy
# error.
# type: ignore[attr-defined, unused-ignore]
# Consider any iOS major.minor version from the version requested, down to
# 12.0. 12.0 is the first iOS version that is known to have enough features
# to support CPython. Consider every possible minor release up to X.9. The
# highest the minor version has ever gone is 8 (14.8 and 15.8), but having
# some extra candidates that won't ever match doesn't really hurt, and it
# saves us from having to keep an explicit list of known iOS versions in the
# code. Return the results in descending order of version number.
# If the requested major version is less than 12, there won't be any matches.
# Consider the actual X.Y version that was requested.
# Consider every minor version from X.0 to the minor version prior to the
# version requested by the platform.
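The candidate generation described above can be sketched as follows (`ios_version_candidates` is a hypothetical name for the logic, not the library's function):

```python
def ios_version_candidates(major: int, minor: int):
    # Descending (major, minor) candidates down to 12.0; minors go up
    # to X.9 even though no real release has gone past .8.
    if major < 12:
        return []  # nothing earlier than 12.0 is supported
    candidates = [(major, m) for m in range(minor, -1, -1)]
    for mj in range(major - 1, 11, -1):
        candidates += [(mj, m) for m in range(9, -1, -1)]
    return candidates
```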
# Python 3.13 was the first version to return platform.system() == "Android",
# and also the first version to define platform.android_ver().
# 16 is the minimum API level known to have enough features to support CPython
# without major patching. Yield every API level from the maximum down to the
# minimum, inclusive.
# we should never be here, just yield the sysconfig one and return
# Please keep the duplicated `isinstance` check
# in the six comparisons hereunder
# unless you find a way to avoid adding overhead function calls.
# Deliberately not anchored to the start and end of the string, to make it
# easier for 3rd party code to reuse
# Validate the version and parse it into pieces
# Store the parsed out pieces of the version
# Generate a key which will be used for sorting
# Epoch
# Release segment
# Pre-release
# Post-release
# Development release
# Local version segment
# We consider there to be an implicit 0 in a pre-release if there is
# not a numeral associated with it.
# We normalize any letters to their lower case form
# We consider some words to be alternate spellings of other words and
# in those cases we want to normalize the spellings to our preferred
# spelling.
# We assume if we are given a number, but we are not given a letter
# then this is using the implicit post release syntax (e.g. 1.0-1)
# When we compare a release version, we want to compare it with all of the
# trailing zeros removed. So we'll reverse the list, drop all the now-leading
# zeros until we come to something non-zero, then take the rest,
# re-reverse it back into the correct order, make it a tuple, and use
# that for our sorting key.
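The reverse/drop/re-reverse trick reads naturally with `itertools.dropwhile` (a sketch of the technique described above):

```python
from itertools import dropwhile

def release_key(release: tuple) -> tuple:
    # Reverse, drop the now-leading zeros, then re-reverse: trailing
    # zeros never influence ordering, so (1, 0, 0) keys like (1,).
    return tuple(reversed(list(dropwhile(lambda x: x == 0, reversed(release)))))
```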
# We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
# We'll do this by abusing the pre segment, but we _only_ want to do this
# if there is not a pre or a post segment. If we have one of those then
# the normal sorting rules will handle this case correctly.
# Versions without a pre-release (except as noted above) should sort after
# those with one.
# Versions without a post segment should sort before those with one.
# Versions without a development segment should sort after those with one.
# Versions without a local segment should sort before those with one.
# Versions with a local segment need that segment parsed to implement
# the sorting rules in PEP440.
# - Alpha numeric segments sort before numeric segments
# - Alpha numeric segments sort lexicographically
# - Numeric segments sort numerically
# - Shorter versions sort before longer versions when the prefixes match exactly
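The local-segment rules above can be encoded as a sort key by tagging each segment with its type (a sketch; real packaging uses a slightly different sentinel scheme):

```python
def local_key(local: tuple) -> tuple:
    # Numeric segments sort after (greater than) alphanumeric ones;
    # numbers compare numerically, strings lexicographically. The
    # (tag, value) pairs keep ints and strs from being compared directly.
    return tuple(
        (1, seg) if isinstance(seg, int) else (0, seg) for seg in local
    )
```

Tuple comparison also gives the shorter-sorts-first rule for free when one key is a prefix of the other.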
# The RawMetadata class attempts to make as few assumptions about the underlying
# serialization formats as possible. The idea is that as long as a serialization
# format offers some very basic primitives in *some* way, then we can support
# serializing to and from that format.
# Metadata 1.0 - PEP 241
# Metadata 1.1 - PEP 314
# Metadata 1.2 - PEP 345
# Metadata 2.0
# PEP 426 attempted to completely revamp the metadata format
# but got stuck without ever being able to build consensus on
# it and ultimately ended up withdrawn.
# However, a number of tools had started emitting METADATA with
# `2.0` Metadata-Version, so for historical reasons, this version
# was skipped.
# Metadata 2.1 - PEP 566
# Metadata 2.2 - PEP 643
# Metadata 2.3 - PEP 685
# No new fields were added in PEP 685; just some edge cases were
# tightened up to provide better interoperability.
# Metadata 2.4 - PEP 639
# Our logic is slightly tricky here as we want to try and do
# *something* reasonable with malformed data.
# The main thing that we have to worry about is data that does
# not have a ',' at all to split the label from the value. There
# isn't a singular right answer here, and we will fail validation
# later on (if the caller is validating) so it doesn't *really*
# matter, but since the missing value has to be an empty str
# and our return value is dict[str, str], if we let the key
# be the missing part, multiple entries would share the '' key
# and overwrite each other in the accumulating dict.
# The other potential issue is that it's possible to have the
# same label multiple times in the metadata, with no solid "right"
# answer with what to do in that case. As such, we'll do the only
# thing we can, which is treat the field as unparseable and add it
# to our list of unparsed fields.
# Ensure 2 items
# TODO: The spec doesn't say anything about whether the keys should be
# considered case sensitive or not.
# The label already exists in our set of urls, so this field
# is unparseable, and we can just add the whole thing to our
# unparseable data and stop processing it.
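The lenient `Project-URL` parsing described above can be sketched like this (`parse_project_urls` is a hypothetical helper; the missing-comma and duplicate-label behaviors follow the reasoning above):

```python
def parse_project_urls(values: list):
    urls = {}
    for entry in values:
        # A missing ',' yields an empty value, keeping the label as key.
        label, _, url = entry.partition(",")
        label = label.strip()
        if label in urls:
            return None  # duplicate label: treat the field as unparseable
        urls[label] = url.strip()
    return urls
```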
# If our source is a str, then our caller has managed encodings for us,
# and we don't need to deal with it.
# If our source is a bytes, then we're managing the encoding and we need
# to deal with it.
# The various parse_FORMAT functions here are intended to be as lenient as
# possible in their parsing, while still returning a correctly typed
# RawMetadata.
# To aid in this, we also generally want to do as little touching of the
# data as possible, except where there are possibly some historic holdovers
# that make valid data awkward to work with.
# While this is a lower level, intermediate format than our ``Metadata``
# class, some light touch ups can make a massive difference in usability.
# Map METADATA fields to RawMetadata.
# We have to wrap parsed.keys() in a set, because in the case of multiple
# values for a key (a list), the key will appear multiple times in the
# list of keys, but we're avoiding that by using get_all().
# Header names in RFC are case insensitive, so we'll normalize to all
# lower case to make comparisons easier.
# We use get_all() here, even for fields that aren't multiple use,
# because otherwise someone could have e.g. two Name fields, and we
# would just silently ignore it rather than doing something about it.
# The way the email module works when parsing bytes is that it
# unconditionally decodes the bytes as ascii using the surrogateescape
# handler. When you pull that data back out (such as with get_all() ),
# it looks to see if the str has any surrogate escapes, and if it does
# it wraps it in a Header object instead of returning the string.
# As such, we'll look for those Header objects, and fix up the encoding.
# Flag if we have run into any issues processing the headers, thus
# signalling that the data belongs in 'unparsed'.
# It's unclear if this can return more types than just a Header or
# a str, so we'll just assert here to make sure.
# If it's a header object, we need to do our little dance to get
# the real data out of it. In cases where there is invalid data
# we're going to end up with mojibake, but there's no obvious, good
# way around that without reimplementing parts of the Header object
# ourselves.
# That should be fine since, if mojibake happens, this key is
# going into the unparsed dict anyways.
# The Header object stores its data as chunks, and each chunk
# can be independently encoded, so we'll need to check each
# of them.
# Enable mojibake.
# Turn our chunks back into a Header object, then let that
# Header object do the right thing to turn them into a
# string for us.
# This is already a string, so just add it.
# We've processed all of our values to get them into a list of str,
# but we may have mojibake data, in which case this is an unparsed
# field.
# This is a bit of a weird situation, we've encountered a key that
# we don't know what it means, so we don't know whether it's meant
# to be a list or not.
# Since we can't really tell one way or another, we'll just leave it
# as a list, even though it may be a single item list, because that's
# what makes the most sense for email headers.
# If this is one of our string fields, then we'll check to see if our
# value is a list of a single item. If it is then we'll assume that
# it was emitted as a single string, and unwrap the str from inside
# the list.
# If it's any other kind of data, then we haven't the faintest clue
# what we should parse it as, and we have to just add it to our list
# of unparsed stuff.
# If this is one of our list of string fields, then we can just assign
# the value, since email *only* has strings, and our get_all() call
# above ensures that this is a list.
# Special Case: Keywords
# The keywords field is implemented in the metadata spec as a str,
# but it conceptually is a list of strings, and is serialized using
# ", ".join(keywords), so we'll do some light data massaging to turn
# this into what it logically is.
# Special Case: Project-URL
# The project urls is implemented in the metadata spec as a list of
# specially-formatted strings that represent a key and a value, which
# is fundamentally a mapping, however the email format doesn't support
# mappings in a sane way, so it was crammed into a list of strings
# instead.
# We will do a little light data massaging to turn this into a map as
# it logically should be.
# Nothing that we've done has managed to parse this, so it'll just
# throw it in our unparseable data and move on.
# We need to support getting the Description from the message payload in
# addition to getting it from the headers. This does mean, though, there
# is the possibility of it being set both ways, in which case we put both
# in 'unparsed' since we don't know which is right.
# type: ignore[call-overload]
# Check to see if we've already got a description, if so then both
# it, and this body move to unparseable.
# We need to cast our `raw` to a metadata, because a TypedDict only supports
# literal key names, while we're computing our key names on purpose; but the
# way this function is implemented, our `TypedDict` can only have valid key
# names.
# Keep the two values in sync.
# With Python 3.8, the caching can be replaced with functools.cached_property().
# No need to check the cache as attribute lookup will resolve into the
# instance's __dict__ before __get__ is called.
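The caching scheme described above relies on non-data-descriptor lookup order; a minimal sketch (hypothetical `cached_field`, not the library's class):

```python
class cached_field:
    # A non-data descriptor (no __set__): once a value is stored in the
    # instance __dict__, attribute lookup resolves there before __get__
    # is ever called again, so the compute runs only on first access.
    def __init__(self, func):
        self.func = func

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        value = self.func(instance)
        instance.__dict__[self.name] = value  # cache; bypasses __get__ next time
        return value
```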
# To make the _process_* methods easier, we'll check if the value is None
# and if this field is NOT a required attribute, and if both of those
# things are true, we'll skip the the converter. This will mean that the
# converters never have to deal with the None union.
# Implicitly makes Metadata-Version required.
# Validate the name as a side-effect.
# Defaults to `text/plain` if parsing failed.
# Check if content-type is valid or defaulted to `text/plain` and thus was
# not parseable.
# Use an acceptable default.
# Mutations occur due to caching enriched values.
# Make sure to check both the fields that are present and the required
# fields (so their absence can be reported).
# Remove fields that have already been checked.
# Can't use getattr() as that triggers descriptor protocol which
# will fail due to no value for the instance argument.
# `name` is not normalized/typed to NormalizedName so as to provide access to
# the original/raw name.
# TODO 2.1: can be in body
# Because `Requires-External` allows for non-PEP 440 version specifiers, we
# don't do any processing on the values.
# PEP 685 lets us raise an error if an extra doesn't pass `Name` validation
# regardless of metadata version.
# Format for program header (bitness).
# Data structure encoding (endianness).
# e_fmt: Format for program header.
# p_fmt: Format for section header.
# p_idx: Indexes to find p_type, p_offset, and p_filesz.
# 32-bit LSB.
# 32-bit MSB.
# 64-bit LSB.
# 64-bit MSB.
# Architecture type.
# Offset of program header.
# Processor-specific flags.
# Size of section.
# Number of sections.
# Not PT_INTERP.
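Probing the ELF identification bytes listed above can be sketched in a few lines (`read_elf_ident` is a hypothetical helper, not packaging's `_elffile`):

```python
def read_elf_ident(path: str):
    # Parse just e_ident: magic, EI_CLASS (bitness), EI_DATA (endianness).
    with open(path, "rb") as f:
        ident = f.read(6)
    if len(ident) < 6 or ident[:4] != b"\x7fELF":
        return None
    ei_class, ei_data = ident[4], ident[5]
    return {
        "64bit": ei_class == 2,         # 1 = 32-bit, 2 = 64-bit
        "little_endian": ei_data == 1,  # 1 = LSB, 2 = MSB
    }
```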
# `os.PathLike` not a generic type until Python 3.9, so sticking with `str`
# as the type for `path` until then.
# hard-float ABI can be detected from the ELF header of the running
# process
# https://static.docs.arm.com/ihi0044/g/aaelf32.pdf
# If glibc ever changes its major version, we need to know what the last
# minor version was, so we can build the complete list of all versions.
# For now, guess what the highest minor version might be, assume it will
# be 50 for testing. Once this actually happens, update the dictionary
# with the actual value.
# This strategy is used in the standard library platform module.
# https://github.com/python/cpython/blob/fcf1d003bf4f0100c/Lib/platform.py#L175-L183
# Should be a string like "glibc 2.17".
# If the version string is unavailable, there is nothing we
# can proceed with, so we bail on our attempt.
# From PEP 513, PEP 600
# Check for presence of _manylinux module.
# CentOS 7 w/ glibc 2.17 (PEP 599)
# CentOS 6 w/ glibc 2.12 (PEP 571)
# CentOS 5 w/ glibc 2.5 (PEP 513)
# Oldest glibc to be supported regardless of architecture is (2, 17).
# On x86/i686 also oldest glibc to be supported is (2, 5).
# We can assume compatibility across glibc major versions.
# https://sourceware.org/bugzilla/show_bug.cgi?id=24636
# Build a list of maximum glibc versions so that we can
# output the canonical list of all glibc from current_glibc
# down to too_old_glibc2, including all intermediary versions.
# For other glibc major versions oldest supported is (x, 0).
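The countdown from the current glibc to the oldest supported version can be sketched as follows (a simplification using the guessed max-minor of 50 mentioned above; `glibc_versions` is a hypothetical helper):

```python
def glibc_versions(current: tuple, minimum: tuple = (2, 17)):
    # Count down from the current glibc, crossing major-version
    # boundaries via a max-minor table. 50 is a guessed placeholder
    # for glibc 2's final minor release, per the note above.
    last_minor = {2: 50}
    versions = []
    major, minor = current
    while (major, minor) >= minimum:
        versions.append((major, minor))
        minor -= 1
        if minor < 0:
            major -= 1
            minor = last_minor.get(major, 50)
    return versions
```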
# Handle the legacy manylinux1, manylinux2010, manylinux2014 tags.
# Python not dynamically linked against musl.
# Core metadata spec for `Name`
# PEP 427: The build number must start with a digit.
# This is taken from PEP 503.
# Legacy versions cannot be normalized
# See PEP 427 for the rules on escaping the project name.
# We are requiring a PEP 440 version, which cannot contain dashes,
# so we split on the last dash.
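Both rules above fit in a couple of lines — PEP 427's escaping regex and the last-dash split (hypothetical helper names):

```python
import re

def escape_project_name(name: str) -> str:
    # PEP 427 escaping: runs of characters other than [A-Za-z0-9._]
    # become a single underscore.
    return re.sub(r"[^\w\d.]+", "_", name, flags=re.UNICODE)

def split_name_version(stem: str):
    # PEP 440 versions cannot contain dashes, so split on the last one.
    name, _, version = stem.rpartition("-")
    return name, version
```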
#######################################################################################
# Adapted from:
# MIT License
# Copyright (c) 2017-present Ofek Lev <oss@ofek.dev>
# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to the following
# conditions:
# The above copyright notice and this permission notice shall be included in all copies
# or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
# CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
# OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# With additional allowance of arbitrary `LicenseRef-` identifiers, not just
# `LicenseRef-Public-Domain` and `LicenseRef-Proprietary`.
# Pad any parentheses so tokenization can be achieved by merely splitting on
# whitespace.
# Normalize to lower case so we can look up licenses/exceptions
# and so boolean operators are Python-compatible.
# Rather than implementing boolean logic, we create an expression that Python can
# parse. Everything that is not involved with the grammar itself is treated as
# `False` and the expression should evaluate as such.
# Take a final pass to check for unknown licenses/exceptions.
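The let-Python-parse-it trick described above can be sketched like this (a simplification with no `WITH`/`+` handling; `is_plausible_expression` is a hypothetical helper):

```python
def is_plausible_expression(expr: str, known: set) -> bool:
    # Pad parentheses, lowercase, map every license id to `False`,
    # and let Python's parser check the boolean grammar.
    tokens = expr.replace("(", " ( ").replace(")", " ) ").split()
    normalized = [t.lower() for t in tokens]
    python_tokens = [
        t if t in {"or", "and", "(", ")"} else "False" for t in normalized
    ]
    try:
        eval(" ".join(python_tokens))  # grammar check only; always False
    except SyntaxError:
        return False
    # Final pass: every non-operator token must be a known identifier.
    return all(
        t in known for t in normalized if t not in {"or", "and", "(", ")"}
    )
```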
# Deprecated alias, previously a separate exception
# Preserving arg order for the sake of API backward compatibility.
# We have to use commonprefix for Python 2.7 compatibility. So we
# normalise case to avoid problems because commonprefix is a character
# based comparison :-(
# Run the hook in a subprocess
# Python 3.8 compatibility
# This file is run as a script, and `import wrappers` is not zip-safe, so we
# include write_json() and read_json() from wrappers.py.
# Ensure in-tree backend directories have the highest priority when importing.
# Rely on importlib to find nested modules based on parent's path
# Ignore other items in _path or sys.path and use backend_path instead:
# According to the spec, the backend MUST be loaded from backend-path.
# Therefore, we can halt the import machinery and raise a clean error.
# Delayed import: Python 3.7 does not contain importlib.metadata
# fallback to build_wheel outside the try block to avoid exception chaining
# which can be confusing to users and is not relevant
# Touch marker file
# Exactly one .whl file
# Remove the parent directory from sys.path to avoid polluting the backend
# import namespace with this directory.
# Pad odd elements with spaces
# noqa: B007
# OSError is raised if obj has no source file, e.g. when defined in REPL.
# If obj is a module, there may be classes (which are callable) to display
# N.B. we cannot use `if type(obj) is type` here because it doesn't work with
# some types of classes, such as the ones that use abc.ABCMeta.
# Global console used by alternative print
# Can happen if the cwd has been deleted
# Special case for inspect(inspect)
# self.type == ColorType.DEFAULT:
# self.standard == ColorStandard.TRUECOLOR:
# Convert to 8-bit color from truecolor color
# If saturation is under 15% assume it is grayscale
# Convert to standard from truecolor or 8-bit
# self.system == ColorSystem.EIGHT_BIT
# type: ignore[has-type]
# Can be replaced with `from typing import Self` in Python 3.11+
# type: ignore[no-untyped-def, override]
# type: ignore[return-value, type-var]
# Only refresh twice a second to prevent jitter
# Based on https://github.com/tqdm/tqdm/blob/master/tqdm/std.py
# attempt to recover the total from the task
# update total of task or create new task
# normalize the mode (always rb, rt)
# patch buffering to provide the same behaviour as the builtin `open`
# attempt to get the total with `os.stat`
# open the file in binary mode,
# wrap the reader in a `TextIOWrapper` if text mode
# pragma: no coverage
# store information about showtraceback call
# keep reference of default traceback
# do not display trace on syntax error
# determine correct tb_offset
# remove ipython internal frames from trace with tb_offset
# clear data upon usage
# replace _showtraceback instead of showtraceback to allow ipython features such as debugging to work
# this is also what the ipython docs recommends to modify when subclassing InteractiveShell
# add wrapper to capture tb_data
# if within ipython, use customized traceback
# type: ignore[name-defined]
# otherwise use default system hook
# __traceback__ can be None, e.g. for exceptions raised by the
# 'multiprocessing' module
# No cover, code is reached but coverage doesn't recognize it.
# No extension, look at first line to see if it is a hashbang
# Note, this is an educated guess and not a guarantee
# If it fails, the only downside is that the code is highlighted strangely
# code may be an empty string if the file doesn't exist, OR
# if the traceback filename is generated dynamically
# Stylize a line at a time
# So that indentation isn't underlined (which looks bad)
# Being defensive here
# If last_instruction reports a line out-of-bounds, we don't want to crash
# This is a test of Asian-language support. In the face of ambiguity, refuse the temptation to guess.
# `fileno` is documented as potentially raising an OSError
# Alas, from the issues, there are so many poorly implemented file-like objects,
# that `fileno()` can raise just about anything.
# last resort, reduce columns evenly
# Fixed width column
# Flexible column, we need to measure contents
# If the column divider is whitespace also style it with the row background
# Fast path with all 1 cell characters
# There are left-aligned characters for 1/8 to 7/8, but
# the right-aligned characters exist only for 1/8 and 4/8.
# When start and end fall into the same cell, we ideally should render
# a symbol that's "center-aligned", but there is no good symbol in Unicode.
# In this case, we fall back to right-aligned block symbol for simplicity.
#  pragma: no cover
# Exact fit
# Pad on the right
# Pad left and right
# Padding on left
# This is somewhat inefficient and needs caching
# Handle the case where the Console has force_jupyter=True,
# but IPython is not installed.
# regex match object
# Callable invoked by re.sub
# Sub method of a compiled re
# This would be a bit of work to implement efficiently
# For now, it's not required
# Span not in text or not valid
# TODO: This is a little inefficient, it is only used by full justify
# Handles pythonw, where stdout/stderr are null, and we return NullFile
# instance from Console.file. In this case, we still want to make a log record
# even though we won't be writing anything to a file.
# FORMAT = "%(asctime)-15s - %(levelname)s - %(message)s"
# offsets to insert the breaks at
# Simplest case - the word fits within the remaining width for this line.
# Not enough space remaining for this word on the current line.
# The word doesn't fit on any line, so we can't simply
# place it on the next line...
# Fold the word across multiple lines.
# Folding isn't allowed, so crop the word.
# The word doesn't fit within the remaining space on the current
# line, but it *can* fit on to the next (empty) line.
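The four cases above — fits, fits on the next line, fold, crop — can be sketched for a single word (hypothetical `place_word` helper; `"\n"` marks a line break):

```python
def place_word(word: str, width: int, remaining: int, fold: bool = True):
    if len(word) <= remaining:
        return [word]                    # fits on the current line
    if len(word) <= width:
        return ["\n", word]              # fits on the next (empty) line
    if not fold:
        return [word[:remaining], "\n"]  # folding not allowed: crop
    # Fold across lines: fill the remaining space, then width-sized chunks.
    first, rest = word[:remaining], word[remaining:]
    return [first] + [rest[i : i + width] for i in range(0, len(rest), width)]
```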
# Bell
# Backspace
# Vertical tab
# Form feed
# Carriage return
# console.print(Control((ControlType.SET_WINDOW_TITLE, "Hello, world!")))
# same
# For commonmark backwards compatibility
# Literal backslashes
# Escape of tag
# Handle open brace escapes, where the brace is not part of a tag.
# Closing tag
# explicit close
# implicit close
# Opening tag
# prevent circular import
# Fallback if we can't load the Windows DLL
# Prevent potential infinite loop
# Detect objects which claim to have all the attributes
# return the original choice, not the lower case version
# Captures the start and end of JSON strings, handling escaped quotes
# Additional work to handle highlighting JSON keys
# Dates
# Calendar month (e.g. 2008-08). The hyphen is required
# Calendar date w/o hyphens (e.g. 20080830)
# Ordinal date (e.g. 2008-243). The hyphen is optional
# Weeks
# Week of the year (e.g., 2008-W35). The hyphen is optional
# Week date (e.g., 2008-W35-6). The hyphens are optional
# Times
# Hours and minutes (e.g., 17:21). The colon is optional
# Hours, minutes, and seconds w/o colons (e.g., 172159)
# Time zone designator (e.g., Z, +07 or +07:00). The colons and the minutes are optional
# Hours, minutes, and seconds with time zone designator (e.g., 17:21:59+07:00).
# All the colons are optional. The minutes in the time zone designator are also optional
# Date and Time
# Calendar date with hours, minutes, and seconds (e.g., 2008-08-30 17:21:59 or 20080830 172159).
# A space is required between the date and the time. The hyphens and colons are optional.
# This regex matches dates and times that specify some hyphens or colons but omit others.
# This does not follow ISO 8601
# XML Schema dates and times
# Date, with optional time zone (e.g., 2008-08-30 or 2008-08-30+07:00).
# Hyphens are required. This is the XML Schema 'date' type
# Time, with optional fractional seconds and time zone (e.g., 01:45:36 or 01:45:36.123+07:00).
# There is no limit on the number of digits for the fractional seconds. This is the XML Schema 'time' type
# Date and time, with optional fractional seconds and time zone (e.g., 2008-08-30T01:45:36 or 2008-08-30T01:45:36.123Z).
# This is the XML Schema 'dateTime' type
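As a sketch of the "optional hyphens" style of pattern described above (simplified to the calendar-date case; as the notes say, this style also permits mixed forms that ISO 8601 itself would reject):

```python
import re

# Calendar date with each hyphen individually optional. This also matches
# mixed forms like "2008-0830", which does not follow ISO 8601.
CALENDAR_DATE = re.compile(r"\d{4}-?\d{2}-?\d{2}")

assert CALENDAR_DATE.fullmatch("2008-08-30")
assert CALENDAR_DATE.fullmatch("20080830")
assert CALENDAR_DATE.fullmatch("2008-0830")  # mixed hyphens also match
```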
# If refresh fails, we want to stop the redirection of sys.stderr,
# so the error stacktrace is properly displayed in the terminal.
# (or, if the code that calls Rich captures the exception and wants to display something,
# let this be displayed in the terminal).
# allow it to fully render at the end even if it overflows
# The first Live instance will render everything in the Live stack
# if it is finished, allow files or dumb terminals to see the final result
# The lock needs acquiring, as the user can modify the live_render renderable at any time (unlike in Progress).
# if it is finished, render the final output for files or dumb terminals
# Ranges of unicode ordinals that produce a 1-cell wide character
# This is non-exhaustive, but covers most common Western characters
# Latin (excluding non-printable)
# Greek / Cyrillic
# Box drawing, box elements, geometric shapes
# Braille
# A set of characters that are a single cell wide
# When called with a string this will return True if all
# characters are single-cell, otherwise False
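A minimal sketch of this approach (hypothetical names; the real ranges are far more extensive than the Latin subset used here):

```python
# Ranges of codepoints assumed to be one cell wide (Latin subset only).
_SINGLE_CELL_RANGES = [(0x20, 0x7E), (0xA0, 0xFF)]

# A set of characters that are a single cell wide.
_SINGLE_CELLS = frozenset(
    chr(cp) for start, end in _SINGLE_CELL_RANGES for cp in range(start, end + 1)
)


def is_single_cell_widths(text: str) -> bool:
    """True if every character in `text` is a single cell wide."""
    return _SINGLE_CELLS.issuperset(text)
```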
# Binary search until we find the right size
# Digging in to a lot of internals here
# Catching all exceptions in case something is missing on a non-CPython implementation
# OSError handles case where object is defined in __main__ scope, e.g. REPL - no filename available.
# TypeError trapped defensively, in case an object without a filename slips through.
# needed here to prevent circular import:
# always skip rich generated jupyter renderables or None values
# certain renderables should start on a new line
# strip trailing newline, not usually part of a text repr
# I'm not sure if this should be prevented at a lower level
# replace plain text formatter with rich formatter
# Being very defensive - if we cannot get the attr then it's not a namedtuple
# Recursion detected
# Can happen, albeit rarely
# If we've reached the max depth, we still show the class name, but not its contents
# Indices are ANSI color numbers, values are the corresponding Windows Console API color numbers
# black                      The Windows colours are defined in wincon.h as follows:
# red                         define FOREGROUND_BLUE            0x0001 -- 0000 0001
# green                       define FOREGROUND_GREEN           0x0002 -- 0000 0010
# yellow                      define FOREGROUND_RED             0x0004 -- 0000 0100
# blue                        define FOREGROUND_INTENSITY       0x0008 -- 0000 1000
# magenta                     define BACKGROUND_BLUE            0x0010 -- 0001 0000
# cyan                        define BACKGROUND_GREEN           0x0020 -- 0010 0000
# white                       define BACKGROUND_RED             0x0040 -- 0100 0000
# bright black (grey)         define BACKGROUND_INTENSITY       0x0080 -- 1000 0000
# bright red
# bright green
# bright yellow
# bright blue
# bright magenta
# bright cyan
# bright white
# Default to ANSI 7: White
# Default to ANSI 0: Black
# Check colour output
# Check cursor movement
# Check erasing of lines
# The following styles are based on https://github.com/pygments/pygments/blob/master/pygments/formatters/terminal.py
# A few modifications were made
# Styles form a hierarchy
# We need to go from most to least specific
# e.g. ("foo", "bar", "baz") to ("foo", "bar")  to ("foo",)
# More complicated path to only stylize a portion of the code
# This speeds up further operations as there are fewer spans to process
# required to make MyPy happy - we know lexer is not None at this point
# Skip over tokens until line start
# Generate spans until line end
# Simple case of just rendering text
# Let's add outer boundaries at each side of the list:
# N.B. using "\n" here is much faster than using metacharacters such as "^" or "\Z":
# `line_number` is out of range
# If `column_index` is out of range: let's silently clamp it:
# Translate into semicolon-separated codes
# Ignore invalid codes, because we want to be lenient
# reset
# styles
#  Foreground
# Background
# Taken from https://en.wikipedia.org/wiki/ANSI_escape_code (Windows 10 column)
# # The standard ansi colors (including bright variants)
# The 256 color palette
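A sketch of the semicolon-joined SGR form mentioned above (hypothetical helper):

```python
def sgr(*codes: int) -> str:
    """Build an SGR escape sequence from semicolon-separated codes."""
    return "\x1b[" + ";".join(str(code) for code in codes) + "m"
```

For example, `sgr(0)` resets all attributes and `sgr(1, 31)` selects bold with a red foreground.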
# Print once to warm cache
# A type that may be rendered by Console.
# The result of calling a __rich_console__ method.
# Jupyter notebook or qtconsole
# Terminal running IPython
# Other type (?)
# Copy of os.environ allows us to replace it for testing
# If dev has explicitly set this value, return it
# Fudge for Idle
# Return False for Idle which claims to be a tty but can't handle ansi codes
# return False for Jupyter, which may have FORCE_COLOR set
# 0 indicates device is not tty compatible
# 1 indicates device is tty compatible
# https://force-color.org/
# Any other value defaults to auto detect
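The convention sketched by the comments above ("0" means not tty compatible, "1" means tty compatible, anything else falls back to auto-detection) could look like this (hypothetical helper):

```python
from __future__ import annotations


def force_color(environ: dict) -> bool | None:
    """Interpret FORCE_COLOR: True/False to override, None to auto-detect."""
    value = environ.get("FORCE_COLOR")
    if value == "0":
        return False  # device is not tty compatible
    if value == "1":
        return True  # device is tty compatible
    return None  # unset or any other value: auto detect
```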
# in some situations (at the end of a pytest run, for example) isatty() can raise
# ValueError: I/O operation on closed file
# return False because we aren't in a terminal anymore
# Probably not a terminal
# get_terminal_size can report 0, 0 if run from pseudo-terminal
# No space to render anything. This prevents potential recursion errors.
# Ignore the frame of this local helper
# Use the faster currentframe where implemented
# Fallback to the slower stack
# Either a non-std stream on legacy Windows, or modern Windows.
# https://bugs.python.org/issue37871
# https://github.com/python/cpython/issues/82052
# We need to avoid writing more than 32Kb in a single write, due to the above bug
# Worst-case scenario, every character is 4 bytes of utf-8
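A sketch of the chunking workaround (hypothetical helper; 32KB divided by the 4-byte worst case gives 8192 characters per write):

```python
MAX_WRITE = 32 * 1024 // 4  # characters; worst case 4 UTF-8 bytes each


def chunked_write(write, text: str) -> None:
    """Write `text` in pieces no longer than MAX_WRITE characters."""
    for start in range(0, len(text), MAX_WRITE):
        write(text[start : start + MAX_WRITE])
```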
# Auto generated by make_terminal_widths.py
# Number of characters before 'pulse' animation repeats
# Style instances and style definitions are often interchangeable
# maps bits on to SGR parameter
# Size of edge or None for yet to be determined
# While any edges haven't been calculated
# Get flexible edges and the indices to map these back onto the sizes list
# Remaining space in total
# No room for flexible edges
# Calculate number of characters in a ratio portion
# If any edges will be less than their minimum, replace size with the minimum
# New fixed size will invalidate calculations, so we need to repeat the process
# Distribute flexible space and compensate for rounding error
# Since edge sizes can only be integers we need to add the remainder
# to the following line
# Sizes now contains integers only
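The remainder-carrying distribution can be sketched with integer arithmetic (hypothetical helper):

```python
def distribute(total: int, ratios: list[int]) -> list[int]:
    """Split `total` into integer parts proportional to `ratios`,
    carrying the rounding remainder into the following portion."""
    sizes = []
    remainder = 0
    ratio_sum = sum(ratios)
    for ratio in ratios:
        size, remainder = divmod(total * ratio + remainder, ratio_sum)
        sizes.append(size)
    return sizes
```

Carrying the remainder forward guarantees the integer parts always sum to `total`.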
# top
# head
# head_row
# mid
# row
# foot_row
# foot
# bottom
# Map boxes that don't render with raster fonts onto equivalents that do
# Map headed boxes to their headerless equivalents
# console.save_svg("box.svg")
# From example D28 of:
# http://www.unicode.org/book/ch03.pdf
# These should be preformatted reprs of, say, tuples.
# Force use of single quotes
# Don't close underlying buffer on destruction.
# Heuristic to catch a common mistake.
# pylint: disable=redefined-builtin
# pylint: disable=not-callable
# New interface in Python 3.10 and newer versions of the
# importlib_metadata backport.
# Older interface, deprecated in Python 3.10 and recent
# importlib_metadata, but we need it in Python 3.8 and 3.9.
#: Full name of the lexer, in human-readable form
#: A list of short, unique identifiers that can be used to look
#: up the lexer from a list, e.g., using `get_lexer_by_name()`.
#: A list of `fnmatch` patterns that match filenames which contain
#: content for this lexer. The patterns in this list should be unique among
#: all lexers.
#: A list of `fnmatch` patterns that match filenames which may or may not
#: contain content for this lexer. This list is used by the
#: :func:`.guess_lexer_for_filename()` function, to determine which lexers
#: are then included in guessing the correct one. That means that
#: e.g. every lexer for HTML and a template language should include
#: ``\*.html`` in this list.
#: A list of MIME types for content that can be lexed with this lexer.
#: Priority, should multiple lexers match and no content is provided
#: URL of the language specification/definition. Used in the Pygments
#: documentation. Set to an empty string to disable.
#: Version of Pygments in which the lexer was added.
#: Example file name. Relative to the ``tests/examplefiles`` directory.
#: This is used by the documentation generator to show an example.
# pip vendoring note: this code is not reachable by pip,
# removed import of chardet to make it clear.
# check for BOM first
# no BOM found, so use chardet
# Guess using first 1KB
# text now *is* a unicode string
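The BOM-first step can be sketched like this (hypothetical helper; note UTF-32-LE must be tested before UTF-16-LE because their BOMs share a prefix):

```python
from __future__ import annotations

import codecs

_BOMS = [
    (codecs.BOM_UTF32_LE, "utf-32-le"),  # before UTF-16-LE (shared prefix)
    (codecs.BOM_UTF32_BE, "utf-32-be"),
    (codecs.BOM_UTF8, "utf-8"),
    (codecs.BOM_UTF16_LE, "utf-16-le"),
    (codecs.BOM_UTF16_BE, "utf-16-be"),
]


def decode_with_bom(data: bytes) -> str | None:
    """Decode `data` if it starts with a known BOM, else None (fall back to chardet)."""
    for bom, encoding in _BOMS:
        if data.startswith(bom):
            return data[len(bom):].decode(encoding)
    return None
```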
# ------------------------------------------------------------------------------
# RegexLexer and ExtendedRegexLexer
# pylint: disable=invalid-name
# tuple.__init__ doesn't do anything
# if keyword arguments are given the callback
# function has to create a new lexer instance
# XXX: cache that somehow
# an existing state
# combine a new state from existing ones
# push more than one state
# it's a state reference
# should already be processed, but may not be in the case of:
# 1. the state has no counterpart in any parent
# 2. the state includes more than one 'inherit'
# N.b. because this is assigned by reference, sufficiently
# deep hierarchies are processed incrementally (e.g. for
# A(B), B(C), C(RegexLexer), B will be premodified so X(B)
# will not see any inherits in B).
# Replace the "inherit" value with the items
# N.b. this is the index in items (that is, the superclass
# copy), so offset required when storing below.
# don't process yet
#: Flags for compiling the regular expressions.
#: Defaults to MULTILINE.
#: At all times there is a stack of states. Initially, the stack contains
#: a single state 'root'. The top of the stack is called "the current state".
#: Dict of ``{'state': [(regex, tokentype, new_state), ...], ...}``
#: ``new_state`` can be omitted to signify no state transition.
#: If ``new_state`` is a string, it is pushed on the stack. This ensures
#: the new current state is ``new_state``.
#: If ``new_state`` is a tuple of strings, all of those strings are pushed
#: on the stack and the current state will be the last element of the list.
#: ``new_state`` can also be ``combined('state1', 'state2', ...)``
#: to signify a new, anonymous state combined from the rules of two
#: or more existing ones.
#: Furthermore, it can be '#pop' to signify going back one step in
#: the state stack, or '#push' to push the current state on the stack
#: again. Note that if you push while in a combined state, the combined
#: state itself is pushed, and not only the state in which the rule is
#: defined.
#: The tuple can also be replaced with ``include('state')``, in which
#: case the rules from the state named by the string are included in the
#: current one.
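The `new_state` semantics above can be sketched as (simplified; the `combined(...)` anonymous-state case is omitted):

```python
def apply_transition(stack: list[str], new_state) -> None:
    """Apply a rule's `new_state` action to the state stack."""
    if new_state is None:
        return  # no state transition
    if new_state == "#pop":
        if len(stack) > 1:  # keep at least one state on the stack
            stack.pop()
    elif new_state == "#push":
        stack.append(stack[-1])  # push the current state again
    elif isinstance(new_state, str):
        stack.append(new_state)
    else:
        stack.extend(new_state)  # tuple: push all, last becomes current
```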
# state transition
# pop, but keep at least one state on the stack
# (random code leading to unexpected pops should
# not allow exceptions)
# We are here only if all state tokens have been considered
# and there was not a match on any of them.
# at EOL, reset state to "root"
# end=0 not supported ;-)
# altered the state stack?
# CAUTION: callback must set ctx.pos!
# see RegexLexer for why this check is made
# no insertions
# iterate over the token stream where we want to insert
# the tokens from the insertion list.
# first iteration. store the position of first item
# not strictly necessary
# leftover tokens
# no normal tokens, set realpos to zero
# defaults to time per call
# this needs to be a stack, since using(this) will produce nested calls
# no need to call super.__init__
# These instances are supposed to be singletons
# Special token types
# Text that doesn't belong to this lexer (e.g. HTML in PHP)
# Common token types for source code
# Generic types for non-source code
# String and some others are not direct children of Token.
# alias them:
# Map standard token types to short names, used in CSS class naming.
# If you add a new item, please be sure to run this file to perform
# a consistency check for duplicate values.
# print strings, repr(open_paren)
# print '-> nothing left'
# print '-> only 1 string'
# print '-> first string empty'
# multiple one-char strings? make a charset
# do we have more than one oneletter string?
# print '-> 1-character + rest'
# print '-> only 1-character'
# we have a prefix for all strings
# print '-> prefix:', prefix
# is there a suffix?
# print '-> suffix:', suffix[::-1]
# recurse on common 1-string prefixes
# print '-> last resort'
#: Full name for the formatter, in human-readable form.
#: A list of short, unique identifiers that can be used to lookup
#: the formatter from a list, e.g. using :func:`.get_formatter_by_name()`.
#: A list of fnmatch patterns that match filenames for which this
#: formatter can produce output. The patterns in this list should be unique
#: among all formatters.
#: If True, this formatter outputs Unicode strings when no encoding
#: option is given.
# can happen for e.g. pygmentize -O encoding=guess
# wrap the outfile in a StreamWriter
# Allow writing Formatter[str] or Formatter[bytes]. That's equivalent to
# Formatter. This helps when using third-party type stubs from typeshed.
# noqa: E741
# Default mapping of ansixxx to RGB colors.
# dark
# normal
# mapping of deprecated #ansixxx colors to new color names
#: overall background color (``None`` means transparent)
#: highlight background color
#: line number font color
#: line number background color
#: special line number font color
#: special line number background color
#: Style definitions for individual token types.
#: user-friendly style name (used when selecting the style, so this
#: should be all-lowercase, no spaces, hyphens)
# Attribute for lexers defined within Pygments. If set
# to True, the style is not shown in the style gallery
# on the website. This is intended for language-specific
# styles.
# Generated from unidata 11.0.0
# Hack to avoid combining this combining with the preceding high
# surrogate, 0xdbff, when doing a repr.
# Escape regex metachars.
# XID_START and XID_CONTINUE are special categories used for matching
# identifiers in Python 3.
#: A dictionary of built-in styles, mapping style names to
#: ``'submodule::classname'`` strings.
#: This list is deprecated. Use `pygments.styles.STYLES` instead
#: Internal reverse mapping to make `get_style_by_name` more efficient
# perhaps it got dropped into our styles package
# Automatically generated by scripts/gen_mapfiles.py.
# DO NOT EDIT BY HAND; run `tox -e mapfiles` instead.
# classes by name
# NB: this returns formatter classes, not info like get_all_lexers().
# This empty dict will contain the namespace for the exec'd file
# Retrieve the class `formattername` from that namespace
# And finally instantiate it with the options
# issubclass() will raise TypeError if first argument is not a class
# simpler processing
# How many characters left to gobble.
# Remove ``left`` tokens from first line, ``n`` from all others.
# Type stubs
# Sage
# SCons
# Skylark/Starlark (used by Bazel, Buck, and Pants)
# Twisted Application infrastructure
# the old style '%s' % (...) string formatting (still valid in Py3)
# the new style '{}'.format(...) string formatting
# field name
# conversion
# backslashes, quotes and formatting signs must be parsed one at a time
# unhandled string formatting sign
# newlines are an error (use "nl" state)
# Assuming that a '}' is the closing brace after format specifier.
# Sadly, this means that we won't detect syntax errors. But it's
# more important to parse correct syntax correctly, than to
# highlight invalid syntax.
# raw f-strings
# non-raw f-strings
# raw bytes and strings
# non-raw strings
# non-raw bytes
# without format specifier
# debug (https://bugs.python.org/issue36817)
# with format specifier
# we'll catch the remaining '}' in the outer scope
# allow new lines
# Based on https://docs.python.org/3/reference/expressions.html
# `match`, `case` and `_` soft keywords
# at beginning of line + possible indentation
# a possible keyword
# not followed by...
# characters and keywords that mean this isn't
# pattern matching (but None/True/False is ok)
# optional `_` keyword
# new builtin exceptions from PEP 3151
# others new in Python 3
# new matrix multiplication operator
# all else: go back
# if None occurs here, it's "raise x from None", since None can
# never be a module name
# included here for raw strings
# now taken over by PythonLexer (3.x)
# the old style '%s' % (...) string formatting
# sadly, in "raise x from y" y will be highlighted as namespace too
# anything else here also means "raise x from y" and is therefore
# not an error
# This happens, e.g., when tracebacks are embedded in documentation;
# trailing whitespaces are often stripped in such contexts.
# SyntaxError starts with this
# As soon as we see a traceback, consume everything until the next
# >>> prompt.
# We have two auxiliary lexers. Use DelegatingLexer twice with
# different tokens.  TODO: DelegatingLexer should support this
# directly, by accepting a tuple of auxiliary lexers and a tuple of
# distinguishing tokens. Then we wouldn't need this intermediary
# class.
# for doctests...
# Either `PEP 657 <https://www.python.org/dev/peps/pep-0657/>`
# error locations in Python 3.11+, or single-caret markers
# for syntax errors before that.
# Cover both (most recent call last) and (innermost last)
# The optional ^C allows us to catch keyboard interrupt signals.
# SyntaxError starts with this.
# For syntax errors.
# (should actually start a block with only cdefs)
# ``cdef foo from "header"``, or ``for foo from 0 < i < 10``
# quotes, percents and backslashes must be parsed one at a time
# included here again for raw strings
# override the mimetypes to not inherit them from python
# lookup builtin lexers
# continue with lexers from setuptools entrypoints
# Retrieve the class `lexername` from that namespace
# decode it, since all analyse_text functions expect unicode
# explicit patterns get a bonus
# The class _always_ defines analyse_text because it's included in
# the Lexer class.  The default implementation returns None which
# gets turned into 0.0.  Run scripts/detect_missing_analyse_text.py
# to find lexers which need it overridden.
# print "Possible lexers, after sort:", matches
# sort by:
# - analyse score
# - is primary filename pattern?
# - priority
# - last resort: class name
# try to get a vim modeline first
# SPDX-FileCopyrightText: 2021 Taneli Hukkinen
# Type annotations
# Inline tables/arrays are implemented using recursion. Pathologically
# nested documents cause pure Python to raise RecursionError (which is OK),
# but mypyc binary wheels will crash unrecoverably (not OK). According to
# mypyc docs this will be fixed in the future:
# https://mypyc.readthedocs.io/en/latest/differences_from_python.html#stack-overflows
# Before mypyc's fix is in, recursion needs to be limited by this library.
# Choosing `sys.getrecursionlimit()` as maximum inline table/array nesting
# level, as it allows more nesting than pure Python, but still seems a far
# lower number than where mypyc binaries crash.
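A sketch of that library-level limit (hypothetical shape; the real parser threads an explicit nesting level through its recursive value parsers):

```python
import sys

MAX_INLINE_NESTING = sys.getrecursionlimit()


def parse_inline(value, nest_level: int = 0):
    """Toy recursive walk guarded the same way: fail cleanly before a
    mypyc-compiled binary could crash on deep recursion."""
    if nest_level > MAX_INLINE_NESTING:
        raise ValueError("TOML inline arrays/tables nested too deeply")
    if isinstance(value, list):
        return [parse_inline(item, nest_level + 1) for item in value]
    return value
```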
# Neither of these sets include quotation mark or backslash. They are
# currently handled as separate cases in the parser functions.
# tab
# noqa: C901
# The spec allows converting "\r\n" to "\n", even in string
# literals. Let's do so to simplify parsing.
# Parse one statement at a time
# (typically means one line in TOML source)
# 1. Skip line leading whitespace
# 2. Parse rules. Expect one of the following:
# Skip trailing whitespace when applicable.
# 3. Skip comment
# 4. Expect end of line or end of file
# Marks an immutable namespace (inline array or inline table).
# Marks a nest that has been explicitly created and can no longer
# be opened using the "[table]" syntax.
# noqa: A003
# document root has no flags
# The parsed content of the TOML document
# Skip "["
# Skip "[["
# Free the namespace now that it points to another empty list item...
# ...but this key precisely is still prohibited from table declaration
# Check that dotted key syntax does not redefine an existing table
# Containers in the relative path can't be opened with the table syntax or
# dotted key/value syntax in following table sections.
# Mark inline table and array namespaces recursively immutable
# Skip whitespace until next non-whitespace character or end of
# the doc. Error if non-whitespace is found before newline.
# Skip starting apostrophe
# Skip ending apostrophe
# Add at most two extra apostrophes/quotes if the end sequence
# is 4 or 5 chars long instead of just 3.
# Pure Python should have raised RecursionError already.
# This ensures mypyc binaries eventually do the same.
# IMPORTANT: order conditions based on speed of checking and likelihood
# Basic strings
# Literal strings
# Booleans
# Arrays
# Inline tables
# Dates and times
# Integers and "normal" floats.
# The regex will greedily match any type starting with a decimal
# char, so needs to be located after handling of dates and times.
# Special floats
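The ordering above can be sketched as a classifier (hypothetical; the real parser calls dedicated sub-parsers, and the date/time regexes are applied before the greedy number regex):

```python
def classify(src: str, pos: int) -> str:
    """Guess the kind of TOML value starting at `pos`, cheapest checks first."""
    char = src[pos]
    if char == '"':
        return "basic string"
    if char == "'":
        return "literal string"
    if src.startswith("true", pos) or src.startswith("false", pos):
        return "bool"
    if char == "[":
        return "array"
    if char == "{":
        return "inline table"
    # dates/times must be tried before the greedy number regex
    return "date/time, number, or special float"
```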
# The default `float` callable never returns illegal types. Optimize it.
# E.g.
# - 00:32:00.999999
# - 00:32:00
# local date-time
# No need to limit cache size. This is only ever called on input
# that matched RE_DATETIME, so there is an implicit bound of
# 24 (hours) * 60 (minutes) * 2 (offset direction) = 2880.
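That bound is what justifies an unbounded cache; roughly (hypothetical helper):

```python
from datetime import timedelta, timezone
from functools import lru_cache


@lru_cache(maxsize=None)  # input space is bounded at ~2880 combinations
def cached_tz(sign: int, hours: int, minutes: int) -> timezone:
    """Return a (cached) fixed-offset timezone for a parsed offset."""
    return timezone(sign * timedelta(hours=hours, minutes=minutes))
```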
# Detect Python runtimes which don't implement SSLObject.get_unverified_chain() API
# This API only became public in Python 3.13 but was available in CPython and PyPy since 3.10.
# type: ignore[name-defined] # noqa: F821
# Hold on to the original class so we can create it consistently
# even if we inject our own SSLContext into the ssl module.
# CPython is known to be good, but non-CPython implementations
# may implement SSLContext differently so to be safe we don't
# subclass the SSLContext.
# This is returned by truststore.SSLContext.__class__()
# This value is the superclass of truststore.SSLContext.
# candidates based on https://github.com/tiran/certifi-system-store by Christian Heimes
# Alpine, Arch, Fedora 34+, OpenWRT, RHEL 9+, BSD
# Fedora <= 34, RHEL <= 9, CentOS <= 9
# Debian, Ubuntu (requires ca-certificates)
# SUSE
# First, check whether the default locations from OpenSSL
# seem like they will give us a usable set of CA certs.
# ssl.get_default_verify_paths already takes care of:
# - getting cafile from either the SSL_CERT_FILE env var
# - getting capath from either the SSL_CERT_DIR env var
# In addition we'll check whether capath appears to contain certs.
# cafile from OpenSSL doesn't exist
# and capath from OpenSSL doesn't contain certs.
# Let's search other common locations instead.
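The fallback search can be sketched as follows (candidate paths are illustrative, taken from the distribution list above):

```python
from __future__ import annotations

import os

_CA_FILE_CANDIDATES = [
    "/etc/ssl/cert.pem",                   # Alpine, Arch, Fedora 34+, BSD
    "/etc/pki/tls/cert.pem",               # Fedora <= 34, RHEL <= 9
    "/etc/ssl/certs/ca-certificates.crt",  # Debian, Ubuntu
    "/etc/ssl/ca-bundle.pem",              # SUSE
]


def find_ca_file(candidates=tuple(_CA_FILE_CANDIDATES)) -> str | None:
    """Return the first existing candidate CA bundle path, else None."""
    for path in candidates:
        if os.path.isfile(path):
            return path
    return None
```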
# This is a no-op because we've enabled SSLContext's built-in
# verification via verify_mode=CERT_REQUIRED, and don't need to repeat it.
# Flags to set for SSLContext.verify_mode=CERT_NONE
# Note, actually raises OSError after calling GetLastError and FormatMessage
# If the peer didn't send any certificates then
# we can't do verification. Raise an error.
# Add intermediate certs to an in-memory cert store
# Cert context for leaf cert
# Chain params to match certs for serverAuth extended usage
# First attempt to verify using the default Windows system trust roots
# (default chain engine).
# If that fails but custom CA certs have been added
# to the SSLContext using load_verify_locations,
# try verifying using a custom chain engine
# that trusts the custom CA certs.
# Raise the original error, not the new error.
# type: ignore[valid-type]
# Get cert chain
# chain engine
# leaf cert context
# current system time
# additional in-memory cert store
# chain-building parameters
# reserved
# the resulting chain context
# Verify cert chain
# Check status
# Try getting a human readable message for an error code.
# See if we received a message for the error,
# otherwise we use a generic error with the
# error code and hope that it's search-able.
# Add custom CA certs to an in-memory cert store
# Create a custom cert chain engine which exclusively trusts
# certs from our hRootCertStore
# Get and verify a cert chain using the custom chain engine
# From typeshed/stdlib/ssl.pyi
# urllib3 holds on to its own reference of ssl.SSLContext
# so we need to replace that reference too.
# requests starting with 2.32.0 added a preloaded SSL context to improve concurrent performance;
# this unfortunately leads to a RecursionError, which can be avoided by patching the preloaded SSL context with
# the truststore patched instance
# also see https://github.com/psf/requests/pull/6667
# Dirty hack to get around isinstance() checks
# for ssl.SSLContext instances in aiohttp/trustme
# when using non-CPython implementations.
# This object exists because wrap_bio() doesn't
# immediately do the handshake so we need to do
# certificate verifications after SSLObject.do_handshake()
# Use a context manager here because the
# inner SSLContext holds on to our state
# but also does the actual handshake.
# Python 3.13+ makes get_unverified_chain() a public API that only returns DER
# encoded certificates. We detect whether we need to call public_bytes() for 3.10->3.12
# Pre-3.13 returned None instead of an empty list from get_unverified_chain()
# SecTrustEvaluateWithError is macOS 10.14+
# Returns a CFString which we need to transform
# into a UTF-8 Python string.
# First step is convert the CFString into a C string pointer.
# We try the fast no-copy way first.
# Quoting the Apple dev docs:
# "A pointer to a C string or NULL if the internal
# storage of theString does not allow this to be
# returned efficiently."
# So we need to get our hands dirty.
# If no message can be found for this status we come
# up with a generic one that forwards the status code.
# type: ignore[no-any-return]
# Only set a hostname on the policy if we're verifying the hostname
# on the leaf certificate.
# Add explicit policy requiring positive revocation checks
# Now that we have certificates loaded and a SecPolicy
# we can finally create a SecTrust object!
# The certs are now being held by SecTrust so we can
# release our handles for the array.
# If there are additional trust anchors to load we need to transform
# the list of DER-encoded certificates into a CFArray.
# We always want system certificates.
# macOS 10.13 and earlier don't support SecTrustEvaluateWithError()
# so we use SecTrustEvaluate() which means we need to construct error
# messages ourselves.
# Apple doesn't document these values in their own API docs.
# See: https://github.com/xybp888/iOS-SDKs/blob/master/iPhoneOS13.0.sdk/System/Library/Frameworks/Security.framework/Headers/SecTrust.h#L84
# Note that we're not able to ignore only hostname errors
# for macOS 10.13 and earlier, so check_hostname=False will
# still return an error.
# 1: "Trust evaluation succeeded",
# 4: "Trust result is unspecified",
# sec_trust_eval_result is a bool (0 or 1)
# where 1 means that the certs are trusted.
# If the error is a known failure that we're
# explicitly okay with from SSLContext configuration
# we can set is_trusted accordingly.
# If we're still not trusted then we start to
# construct and raise the SSLCertVerificationError.
# Can this ever return 'None' if there's a CFError?
# TODO: Not sure if we need the SecTrustResultType for anything?
# We only care whether or not it's a success or failure for now.
# TODO: Add Generic type annotations to initialized collections.
# For now we'd simply use implicit Any/Unknown which would add redundant annotations
# mypy: disable-error-code="var-annotated"
# noqa: UP036 # Check for unsupported versions
# capture these to bypass sandboxing
# no write support, probably under GAE
# Patch: Remove deprecation warning from vendored pkg_resources.
# Setting PYTHONWARNINGS=error to verify builds produce no warnings
# causes immediate exceptions.
# See https://github.com/pypa/pip/issues/12243
# Type aliases
# Can be any attribute in the module
# TODO / Incomplete: A readable file-like object
# Any object works, but let's indicate we expect something like a module (optionally has __loader__ or __file__)
# Any: Should be _ModuleLike but we end up with issues where _ModuleLike doesn't have _ZipLoaderModule's __loader__
# Use _typeshed.importlib.LoaderProtocol once available https://github.com/python/typeshed/pull/11890
# not macOS
# Basic resource access and distribution/entry point discovery
# Environmental control
# Primary implementation classes
# Warnings
# Parsing functions and string utilities
# filesystem utilities
# Distribution "precedence" constants
# "Provider" interfaces, implementations, and registration/lookup APIs
# Deprecated/backward compatibility only
# fallback for MacPorts
# if someone is running a non-Mac darwin system, this will fall
# through to the default implementation
# XXX backward compat
# easy case
# macOS special cases
# is this a Mac package?
# this is backwards compatibility for packages built before
# setuptools 0.6. All packages built after this point will
# use the new macOS designation.
# egg isn't macOS or legacy darwin
# are they the same major version and machine type?
# is the required OS major update >= the provided one?
# XXX Linux and other platforms' special cases should go here
# Bad type narrowing, dist has to be a Requirement here, so get_provider has to return Distribution
# The main program does not list any requirements
# ensure the requirements are met
# try it without defaults already on sys.path
# by starting with an empty path
# add any missing entries from sys.path
# then copy back to sys.path
# XXX add more info
# workaround a cache issue
# ignore hidden distros
# set up the stack
# set of processed requirements
# key -> dist
# Mapping of requirement to set of distributions that required it;
# useful for reporting info about conflicts.
# process dependencies breadth-first
# Ignore cyclic or redundant dependencies
# push the new requirements onto the stack
# Register the new requirements needed by req
# return list of distros to activate
# Find the best distribution and add it to the map
# Use an empty environment and workingset to avoid
# any further conflicts with the conflicting
# distribution
# Oops, the "best" so far conflicts with a dependency
# scan project names in alphabetic order
# put all our entries in shadow_set
# save error info
# try the next older version of project
# give up on this project, keep going
# success, no need to try any more versions of this project
# try to download/install
# XXX backward compatibility
# On Windows, permissions are generally restrictive by default
# Make the resource executable
# XXX
# normalize the version
# Include the path in the error message to simplify
# troubleshooting, and without changing the exception type.
# Aggressively disallow Windows absolute paths
# for compatibility, warn; in future
# raise ValueError(msg)
# Already checked get_data exists
# Assume that metadata may be nested inside a "basket"
# of multiple eggs and use module_path instead of .archive.
# A special case: we don't want all Providers inheriting from NullProvider to have a potentially None module_path
# `path` could be `StrPath | IO[bytes]` but that violates the LSP for `MemoizedZipManifests.load`
# type: ignore[override] # ZipManifests.load is a classmethod
# ZipProvider's loader should always be a zipimporter or equivalent
# Convert a virtual filename (full path to file) into a zipfile subpath
# usable with the zipimport directory cache for our target archive
# Convert a zipfile subpath into an egg-relative path part list.
# pseudo-fs path
# no need to lock for extraction, since we use temp names
# ymdhms+wday, yday, dst
# 1980 offset already done
# FIXME: 'ZipProvider._extract_resource' is too complex (12)
# return the extracted directory name
# the file became current since it was checked above,
# Windows, del old file and retry
# report a user-friendly error
# check that the contents match
# wheels are not supported with this finder
# they don't have PKG-INFO metadata, and won't ever contain eggs
# don't yield nested distros
# scan for .egg and .egg-info in directory
# Ignore the directory if it does not exist, is not a directory, or
# permission was denied
# empty metadata dir; skip
# use find_spec (PEP 451) and fall-back to find_module (PEP 302)
# capture warnings due to #1111
# Track what packages are namespaces, so when new path items are added,
# they can be updated
# Ensure all the parent's path items are reflected in the child,
# if they apply
# Only return the path if it's not already there
# https://github.com/python/mypy/issues/16261
# https://github.com/python/typeshed/issues/6347
# We could pass `env` and `installer` directly,
# but keeping `*args` and `**kwargs` for backwards compatibility
# Get the requirements for this entry point with all its extras and
# then resolve them. We have to pass `extras` along when resolving so
# that the working set knows what extras we want. Otherwise, for
# dist-info distributions, the working set will assume that the
# requirements for that extra are purely optional and skip over them.
# We could set `precedence` explicitly, but keeping this as `**kw` for full backwards and subclassing compatibility
# It's not a Distribution, so they are not equal
# These properties have to be lazy so that we don't have to load any
# metadata until/unless it's actually needed.  (i.e., some distributions
# may not know their name or version without loading PKG-INFO)
# PEP 678
# We need to access _get_metadata_path() on the provider object
# directly rather than through this class's __getattr__()
# since _get_metadata_path() is marked private.
# Handle exceptions e.g. in case the distribution's metadata
# provider doesn't support _get_metadata_path().
# FIXME: 'Distribution.insert_on' is too complex (13)
# don't modify path (even removing duplicates) if
# found and not replace
# if it's an .egg, give it precedence over its directory
# UNLESS it's already been added to sys.path and replace=False
# p is the spot where we found or inserted loc; now remove duplicates
# ha!
# ignore the inevitable setuptools self-conflicts  :(
# TODO: remove this except clause when python/cpython#103632 is fixed.
# Unsafely unpacking. But keeping **kw for backwards and subclassing compatibility
# type:ignore[arg-type]
# Including any condition expressions
# find the first stack frame that is *not* code in
# the pkg_resources module, to use for the warning
# packaging.requirements.Requirement uses a set for its extras. We use a variable-length tuple
# Allow prereleases always in order to match the previous behavior of
# this method. In the future this should be smarter and follow PEP 440
# more accurately.
# _find_adapter would previously return None, and immediately be called.
# So we're raising a TypeError to keep backward compatibility if anyone depended on that behaviour.
# wrap up last segment
# temporarily bypass sandboxing
# and then put it back
# Silence the PEP440Warning by default, so that end users don't get hit by it
# randomly just because they use pkg_resources. We want to append the rule
# because we want earlier uses of filterwarnings to take precedence over this
# one.
# Ported from ``setuptools`` to avoid introducing an import inter-dependency:
# TODO: Add a deadline?
# from jaraco.functools 1.3
# Activate all distributions already on sys.path with replace=False and
# ensure that all distributions added to the working set in the future
# (e.g. by calling ``require()``) will get activated as well,
# with higher priority (replace=True).
# match order
# All of these are set by the @_call_aside methods above
# Won't exist at runtime
# This is slightly terrible, but we want to delay extracting the file
# in cases where we're inside of a zipimport situation until someone
# actually calls where(), but we don't want to re-extract the file
# on every call of where(), so we'll do it once then store it in a
# global variable.
# This is slightly janky, the importlib.resources API wants you to
# manage the cleanup of this file, so it doesn't actually return a
# path, it returns a context manager that will give you the path
# when you enter it and will do any cleanup when you leave it. In
# the common case of not needing a temporary file, it will just
# return the file system location and the __exit__() is a no-op.
# We also have to hold onto the actual context manager, because
# it will do the cleanup whenever it gets garbage collected, so
# we will also store that at the global level as well.
# This is slightly terrible, but we want to delay extracting the
# file in cases where we're inside of a zipimport situation until
# someone actually calls where(), but we don't want to re-extract
# the file on every call of where(), so we'll do it once then store
# it in a global variable.
# This is slightly janky, the importlib.resources API wants you
# to manage the cleanup of this file, so it doesn't actually
# return a path, it returns a context manager that will give
# you the path when you enter it and will do any cleanup when
# you leave it. In the common case of not needing a temporary
# file, it will just return the file system location and the
# __exit__() is a no-op.
# ruff: noqa: F401
# alias for compatibility with simplejson/marshal/pickle.
#: array of bytes fed.
#: The position we are currently reading from.
# When Unpacker is used as an iterable, between the calls to next(),
# the buffer is not "consumed" completely, for efficiency's sake.
# Instead, it is done sloppily.  To make sure we raise BufferFull at
# the correct moments, we have to keep track of how sloppy we were.
# Furthermore, when the buffer is incomplete (that is: in the case
# we raise an OutOfData) we need to rollback the buffer to the correct
# state, which _buf_checkpoint records.
# Strip buffer before checkpoint before reading file.
# Use extend here: INPLACE_ADD += doesn't reliably typecast memoryview in jython
# (int) -> bytearray
# Fast path: buffer has n bytes already
# Read from file
# rollback
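The fast-path/refill/rollback pattern described above can be sketched roughly like this (illustrative names, not msgpack's actual internals):

```python
import io


class BufferedReader:
    """Minimal sketch of read-with-rollback over a fed buffer."""

    def __init__(self, file_like=None):
        self._buffer = bytearray()
        self._buff_i = 0  # current read position within the buffer
        self.file_like = file_like

    def read(self, n: int) -> bytes:
        start = self._buff_i
        # Fast path: buffer has n bytes already
        if len(self._buffer) - start >= n:
            self._buff_i = start + n
            return bytes(self._buffer[start:start + n])
        # Read from file
        if self.file_like is not None:
            wanted = n - (len(self._buffer) - start)
            self._buffer.extend(self.file_like.read(wanted))
        if len(self._buffer) - start < n:
            self._buff_i = start  # rollback to the checkpoint
            raise EOFError("not enough data")
        self._buff_i = start + n
        return bytes(self._buffer[start:start + n])
```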
# TODO should we eliminate the recursion?
# TODO check whether we need to call `list_hook`
# TODO is the interaction between `list_hook` and `use_list` ok?
# TODO check whether we need to call hooks
# timestamp
# force reset
# seconds is non-negative and fits in 34 bits
# nanoseconds is zero and seconds < 2**32, so timestamp 32
# timestamp 64
# timestamp 96
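The three timestamp layouts above follow the msgpack extension-type spec; a minimal encoder for the payload bytes (encode_timestamp is an illustrative name, not msgpack's API) might look like:

```python
import struct


def encode_timestamp(seconds: int, nanoseconds: int) -> bytes:
    """Pack a msgpack timestamp payload in the smallest of the three formats."""
    if seconds >> 34 == 0:
        # seconds is non-negative and fits in 34 bits
        data64 = (nanoseconds << 34) | seconds
        if data64 & 0xFFFFFFFF00000000 == 0:
            # nanoseconds is zero and seconds < 2**32, so timestamp 32
            return struct.pack("!L", data64)
        # timestamp 64: 30-bit nanoseconds + 34-bit seconds
        return struct.pack("!Q", data64)
    # timestamp 96: 32-bit unsigned nanoseconds + 64-bit signed seconds
    return struct.pack("!Iq", nanoseconds, seconds)
```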
# Deprecated.  Use ValueError instead
# Deprecated.  Use Exception instead to catch all exception during packing.
#   __
#  /__)  _  _     _   _ _/   _
# / (   (- (/ (/ (- _)  /  _)
#          /
# Verify urllib3 isn't installed from git.
# Sometimes, urllib3 only reports its version as 16.1.
# Check urllib3 for compatibility.
# urllib3 >= 1.21.1
# Check charset_normalizer for compatibility.
# chardet_version >= 3.0.2, < 6.0.0
# charset_normalizer >= 2.0.0 < 4.0.0
# pip does not need or use character detection
# cryptography < 1.3.4
# Check imported dependencies for compatibility.
# Attempt to enable urllib3's fallback for SNI support
# if the standard library doesn't support SNI or the
# 'ssl' library isn't available.
# Note: This logic prevents upgrading cryptography on Windows, if imported
# Check cryptography version
# urllib3's DependencyWarnings should be silenced.
# FileModeWarnings go off per the default.
# TODO: response is the only one
# formerly defined here, reexposed here for backward compatibility
# Preferred clock, based on which one is more accurate on a given system.
# Bypass if not a dictionary (e.g. verify)
# Remove keys that are set to None. Extract keys first to avoid altering
# the dictionary during iteration.
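The None-key removal above is the heart of requests' setting merge; a sketch of that merge (simplified from `requests.sessions.merge_setting`):

```python
from collections import OrderedDict


def merge_setting(request_setting, session_setting):
    """Merge session- and request-level settings; the request wins.

    Keys explicitly set to None in the request remove the session value.
    """
    if session_setting is None:
        return request_setting
    if request_setting is None:
        return session_setting
    # Bypass if not a dictionary (e.g. verify)
    if not (isinstance(session_setting, dict) and isinstance(request_setting, dict)):
        return request_setting
    merged = OrderedDict(session_setting)
    merged.update(request_setting)
    # Remove keys that are set to None. Extract keys first to avoid
    # altering the dictionary during iteration.
    for key in [k for k, v in merged.items() if v is None]:
        del merged[key]
    return merged
```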
# Due to the nature of how requests processes redirects this method will
# be called at least once upon the original response and at least twice
# on each subsequent redirect response (if any).
# If a custom mixin is used to handle this logic, it may be advantageous
# to cache the redirect location onto the response object as a private
# attribute.
# Currently the underlying http module on py3 decodes headers
# in latin1, but empirical evidence suggests that latin1 is very
# rarely used with non-ASCII characters in HTTP headers.
# A UTF-8-encoded header is more likely than a latin1 one,
# which causes incorrect handling of UTF-8 encoded location headers.
# To solve this, we re-encode the location in latin1.
# Special case: allow http -> https redirect when using the standard
# ports. This isn't specified by RFC 7235, but is kept to avoid
# breaking backwards compatibility with older versions of requests
# that allowed any redirects on the same host.
# Handle default port usage corresponding to scheme.
# Standard case: root URI must match
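The http-to-https special case above is a sketchable predicate (simplified from requests' redirect handling; `should_strip_auth` here is an illustrative reduction, not the full implementation):

```python
from urllib.parse import urlparse


def should_strip_auth(old_url: str, new_url: str) -> bool:
    """Decide whether the Authorization header should be dropped on redirect."""
    old, new = urlparse(old_url), urlparse(new_url)
    # Different host: always strip.
    if old.hostname != new.hostname:
        return True
    # Special case: allow http -> https upgrades on the standard ports.
    if (old.scheme == "http" and old.port in (80, None)
            and new.scheme == "https" and new.port in (443, None)):
        return False
    # Otherwise strip whenever the scheme or port changed.
    return old.port != new.port or old.scheme != new.scheme
```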
# keep track of history
# Update history and keep track of redirects.
# resp.history must ignore the original request in this loop
# Consume socket so it can be released
# Release the connection back into the pool.
# Handle redirection without scheme (see: RFC 1808 Section 4)
# Normalize url case and attach previous fragment if needed (RFC 7231 7.1.2)
# Facilitate relative 'location' headers, as allowed by RFC 7231.
# (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource')
# Compliant with RFC3986, we percent encode the url.
# https://github.com/psf/requests/issues/1084
# https://github.com/psf/requests/issues/3490
# Extract any cookies sent on the response to the cookiejar
# in the new request. Because we've mutated our copied prepared
# request, use the old one that we haven't yet touched.
# Rebuild auth and proxy information.
# A failed tell() sets `_body_position` to `object()`. This non-None
# value ensures `rewindable` will be True, allowing us to raise an
# UnrewindableBodyError, instead of hanging the connection.
# Attempt to rewind consumed file-like object.
# Override the original request.
# extract redirect url, if any, for the next loop
# If we get redirected to a new host, we should strip out any
# authentication headers.
# .netrc might have more auth for us on our new host.
# urllib3 handles proxy authorization for us in the standard adapter.
# Avoid appending this to TLS tunneled requests where it may be leaked.
# https://tools.ietf.org/html/rfc7231#section-6.4.4
# Do what the browsers do, despite standards...
# First, turn 302s into GETs.
# Second, if a POST is responded to with a 301, turn it into a GET.
# This bizarre behaviour is explained in Issue 1704.
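The browser-mimicking method rewrites above boil down to a few rules (a sketch of requests' `rebuild_method` logic, expressed as a free function):

```python
def rebuild_method(method: str, status_code: int) -> str:
    """Rewrite the HTTP method when following a redirect, as browsers do."""
    # A 303 means the resource should be fetched with GET (except for HEAD).
    if status_code == 303 and method != "HEAD":
        return "GET"
    # First, turn 302s into GETs (except for HEAD).
    if status_code == 302 and method != "HEAD":
        return "GET"
    # Second, if a POST is responded to with a 301, turn it into a GET.
    if status_code == 301 and method == "POST":
        return "GET"
    return method
```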
#: A case-insensitive dictionary of headers to be sent on each
#: :class:`Request <Request>` sent from this
#: :class:`Session <Session>`.
#: Default Authentication tuple or object to attach to
#: :class:`Request <Request>`.
#: Dictionary mapping protocol or protocol and host to the URL of the proxy
#: (e.g. {'http': 'foo.bar:3128', 'http://host.name': 'foo.bar:4012'}) to
#: be used on each :class:`Request <Request>`.
#: Event-handling hooks.
#: Dictionary of querystring data to attach to each
#: :class:`Request <Request>`. The dictionary values may be lists for
#: representing multivalued query parameters.
#: Stream response content default.
#: SSL Verification default.
#: Defaults to `True`, requiring requests to verify the TLS certificate at the
#: remote end.
#: If verify is set to `False`, requests will accept any TLS certificate
#: presented by the server, and will ignore hostname mismatches and/or
#: expired certificates, which will make your application vulnerable to
#: man-in-the-middle (MitM) attacks.
#: Only set this to `False` for testing.
#: SSL client certificate default: if a string, path to an SSL client
#: cert file (.pem); if a tuple, a ('cert', 'key') pair.
#: Maximum number of redirects allowed. If the request exceeds this
#: limit, a :class:`TooManyRedirects` exception is raised.
#: This defaults to requests.models.DEFAULT_REDIRECT_LIMIT, which is
#: 30.
#: Trust environment settings for proxy configuration, default
#: authentication and similar.
#: A CookieJar containing all currently outstanding cookies set on this
#: session. By default it is a
#: :class:`RequestsCookieJar <requests.cookies.RequestsCookieJar>`, but
#: may be any other ``cookielib.CookieJar`` compatible object.
# Default connection adapters.
# Bootstrap CookieJar.
# Merge with session cookies
# Set environment's basic authentication if not explicitly set.
# Create the Request.
# Send the request.
# Set defaults that the hooks can utilize to ensure they always have
# the correct parameters to reproduce the previous request.
# It's possible that users might accidentally send a Request object.
# Guard against that specific failure case.
# Set up variables needed for resolve_redirects and dispatching of hooks
# Get the appropriate adapter to use
# Start time (approximately) of the request
# Send the request
# Total elapsed time of the request (approximately)
# Response manipulation hooks
# Persist cookies
# If the hooks create history then we want those cookies too
# Resolve redirects if allowed.
# Redirect resolving generator.
# Shuffle things around if there's history.
# Insert the first (original) request at the start
# Get the last request made
# If redirects aren't being followed, store the response on the Request for Response.next().
# Gather clues from the surrounding environment.
# Set environment's proxies.
# Look for requests environment configuration
# and be compatible with cURL.
# Merge all the kwargs.
# Nothing matches :-/
# By using the 'with' statement we are sure the session is closed, thus we
# avoid leaving sockets open which can trigger a ResourceWarning in some
# cases, and look like a memory leak in others.
# Informational.
# Redirection.
# "resume" and "resume_incomplete" to be removed in 3.0
# Client Error.
# Server Error.
# This code exists for backwards compatibility reasons.
# I don't like it either. Just look the other way. :)
# This traversal is apparently necessary such that the identities are
# preserved (requests.packages.urllib3.* is urllib3.*)
# .-. .-. .-. . . .-. .-. .-. .-.
# |(  |-  |.| | | |-  `-.  |  `-.
# ' ' `-' `-`.`-' `-' `-'  '  `-'
# Only return the response's URL if the user hadn't set the Host
# header
# If they did set it, retrieve it and reconstruct the expected domain
# Reconstruct the URL as we expect it
# the _original_response field is the wrapped httplib.HTTPResponse object,
# pull out the HTTPMessage with the headers and put it in the mock:
# support client code that unsets cookies by assignment of a None value:
# there is only one domain in jar
# if there are multiple cookies that meet passed in criteria
# we will eventually return this as long as no cookie conflict
# remove the unpickleable RLock object
# We're dealing with an instance of RequestsCookieJar
# We're dealing with a generic CookieJar instance
# Bypass default SSLContext creation when Python
# interpreter isn't built with the ssl module.
# Determine if we have and should use our default SSLContext
# to optimize performance on standard requests.
# According to our docs, we allow users to specify just the client
# cert path
# Can't handle by adding 'proxy_manager' to self.__attrs__ because
# self.poolmanager uses a lambda function, which isn't pickleable.
# save these values for pickling
# Only load the CA certificates if 'verify' is a string indicating the CA bundle to use.
# Otherwise, if verify is a boolean, we don't load anything since
# the connection will be using a context with the default certificates already loaded,
# and this avoids a call to the slow load_verify_locations()
# `verify` must be a str with a path then
# Fallback to None if there's no status_code, for whatever reason.
# Make headers case-insensitive.
# Set encoding.
# Add new cookies from the server.
# Give the Response some context.
# Only scheme should be lower case
# Don't confuse urllib3
# TODO: Remove this in 3.0.0: see #2811
# This branch is for urllib3 v1.22 and later.
# This branch is for urllib3 versions earlier than v1.22
# "I want us to put a big-ol' comment on top of it that
# says that this behaviour is dumb but we need to preserve
# it because people are relying on it."
# These are here solely to maintain backwards compatibility
# for things like ints. This will be removed in 3.0.0.
# -- End Removal --
# Keep state in per-thread local storage
# Ensure state is initialized just once per-thread
# lambdas assume digest modules are imported at the top level
# noqa:E731
# XXX not implemented yet
#: path is request-uri defined in RFC 2616 which should not be empty
# XXX handle auth-int.
# XXX should the partial digests be encoded too?
# If response is not 4xx, do not auth
# See https://github.com/psf/requests/issues/3772
# Rewind the file position indicator of the body to where
# it was to resend the request.
# Consume content and release the original connection
# to allow our new request to reuse the same one.
# Initialize per-thread state, if needed
# If we have a saved nonce, skip the 401
# In the case of HTTPDigestAuth being reused and the body of
# the previous request was a file-like object, pos has the
# file position of the previous body. Ensure it's set to
# None.
# to_native_string is unused here, but imported here for backwards compatibility
# Ensure that ', ' is used to preserve previous delimiter behavior.
# provide a proxy_bypass version on Windows without DNS lookups
# ProxyEnable could be REG_SZ or REG_DWORD, normalizing it
# ProxyOverride is almost always a string
# make a check value list from the registry entry: replace the
# '<local>' string by the localhost entry and the corresponding
# canonical entry.
# filter out empty strings to avoid re.match returning true in the following code.
# now check if we match one of the registry values.
# mask dots
# change glob sequence
# change glob char
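The three substitutions above translate a registry-style glob into a regular expression; as a small sketch (mirroring the order used by CPython's Windows proxy-bypass code):

```python
import re


def glob_to_regex(test: str) -> str:
    """Translate a ProxyOverride-style glob into a regex pattern."""
    test = test.replace(".", r"\.")   # mask dots
    test = test.replace("*", r".*")   # change glob sequence
    test = test.replace("?", r".")    # change glob char
    return test
```

Order matters: dots must be escaped before `*` becomes `.*`, or the inserted dots would themselves be escaped.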
# urllib3 2.x+ treats all strings as utf-8 instead
# of latin-1 (iso-8859-1) like http.client.
# AttributeError is a surprising exception, seeing as how we've just checked
# that `hasattr(o, 'fileno')`.  It happens for objects obtained via
# `Tarfile.extractfile()`, per issue 5229.
# Having used fstat to determine the file length, we need to
# confirm that this file was opened up in binary mode.
# This can happen in some weird situations, such as when the file
# is actually a special file descriptor like stdin. In this
# instance, we don't know what the length is, so set it to zero and
# let requests chunk it instead.
# StringIO and BytesIO have seek but no usable fileno
# seek to end of file
# seek back to current position to support
# partially read file-like objects
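The seek-to-end trick above, in isolation (a sketch of how requests' `super_len` measures seekable objects without a usable `fileno`):

```python
import io


def stream_length(f) -> int:
    """Remaining length of a seekable stream, measured without consuming it."""
    current = f.tell()
    f.seek(0, io.SEEK_END)  # seek to end of file
    end = f.tell()
    f.seek(current)         # seek back to support partially read file-like objects
    return end - current
```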
# Abort early if there isn't one.
# Return with login / password
# If there was a parsing error or a permissions issue reading the file,
# we'll just skip netrc auth unless explicitly asked to raise errors.
# App Engine hackiness.
# this is already a valid path, no need to do anything further
# find the first valid part of the provided path and treat that as a zip archive
# assume the rest of the path is the name of a member in the archive
# If we don't check for an empty prefix after the split (in other words, archive remains unchanged after the split),
# we _can_ end up in an infinite loop on a rare corner case affecting a small number of users
# we have a valid zip archive and a valid member of that archive
# use read + write to avoid creating nested folders; we only want the file, and this avoids an mkdir race condition
# From mitsuhiko/werkzeug (used with permission).
# this is not the real unquoting, but fixing this so that the
# RFC is met will result in bugs with internet explorer and
# probably some other browsers as well.  IE for example is
# uploading files with "C:\foo\bar.txt" as filename
# if this is a filename and the starting characters look like
# a UNC path, then just return the value without quotes.  Using the
# replace sequence below on a UNC path has the effect of turning
# the leading double slash into a single slash and then
# _fix_ie_filename() doesn't work correctly.  See #458.
# Assume UTF-8 based on RFC 4627: https://www.ietf.org/rfc/rfc4627.txt since the charset was unset
# Try charset from content-type
# Fall back:
# The unreserved URI characters (RFC 3986)
# Unquote only the unreserved characters
# Then quote only illegal characters (do not quote reserved,
# unreserved, or '%')
# We couldn't unquote the given URI, so let's try quoting it, but
# there may be unquoted '%'s in the URI. We need to make sure they're
# properly quoted so they do not cause issues elsewhere.
# Prioritize lowercase environment variables over uppercase
# to keep a consistent behaviour with other http projects (curl, wget).
# First check whether no_proxy is defined. If it is, check that the URL
# we're getting isn't in the no_proxy list.
# URLs don't always have hostnames, e.g. file:/// urls.
# We need to check whether we match here. We need to see if we match
# the end of the hostname, both with and without the port.
# If a no_proxy entry was defined in plain IP notation instead of CIDR
# notation and matches the host's IP
# The URL does match something in no_proxy, so we don't want
# to apply the proxies on this URL.
# parsed.hostname can be `None` in cases such as a file URI.
# Null bytes; no need to recreate these on each call to guess_json_utf
# encoding to ASCII for Python 3
# JSON always starts with two ASCII characters, so detection is as
# easy as counting the nulls: their location and count
# determine the encoding. Also detect a BOM, if present.
# BOM included
# BOM included, MS style (discouraged)
# 1st and 3rd are null
# 2nd and 4th are null
# Did not detect 2 valid UTF-16 ascii-range characters
# Did not detect a valid UTF-32 ascii-range character
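The null-counting detection described above, written out (modeled on `requests.utils.guess_json_utf`):

```python
import codecs

_null = b"\x00"
_null2 = _null * 2
_null3 = _null * 3


def guess_json_utf(data: bytes):
    """Guess the UTF flavour of a JSON document from its first four bytes."""
    sample = data[:4]
    if sample in (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE):
        return "utf-32"      # BOM included
    if sample[:3] == codecs.BOM_UTF8:
        return "utf-8-sig"   # BOM included, MS style (discouraged)
    if sample[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE):
        return "utf-16"      # BOM included
    nullcount = sample.count(_null)
    if nullcount == 0:
        return "utf-8"
    if nullcount == 2:
        if sample[::2] == _null2:   # 1st and 3rd are null
            return "utf-16-be"
        if sample[1::2] == _null2:  # 2nd and 4th are null
            return "utf-16-le"
        # Did not detect 2 valid UTF-16 ascii-range characters
    if nullcount == 3:
        if sample[:3] == _null3:
            return "utf-32-be"
        if sample[1:] == _null3:
            return "utf-32-le"
        # Did not detect a valid UTF-32 ascii-range character
    return None
```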
# A defect in urlparse determines that there isn't a netloc present in some
# urls. We previously assumed parsing was overly cautious, and swapped the
# netloc and path. Due to a lack of tests on the original defect, this is
# maintained with parse_url for backwards compatibility.
# parse_url doesn't provide the netloc with auth
# so we'll add it ourselves.
# see func:`prepend_scheme_if_needed`
# Use the lowercased key for lookups, but store the actual
# key alongside the value.
# Compare insensitively
# Copy is required
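Storing the actual key alongside the value, keyed by its lowercase form, looks like this (a minimal sketch of the CaseInsensitiveDict idea, not requests' full implementation):

```python
from collections.abc import MutableMapping


class CaseInsensitiveDict(MutableMapping):
    """Case-insensitive lookups that still remember the original key casing."""

    def __init__(self):
        self._store = {}

    def __setitem__(self, key, value):
        # Use the lowercased key for lookups, but store the actual
        # key alongside the value.
        self._store[key.lower()] = (key, value)

    def __getitem__(self, key):
        return self._store[key.lower()][1]

    def __delitem__(self, key):
        del self._store[key.lower()]

    def __iter__(self):
        return (orig_key for orig_key, _ in self._store.values())

    def __len__(self):
        return len(self._store)
```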
# We allow fall-through here, so values default to None
# Detect which major version of urllib3 is being used.
# If we can't discern a version, prefer old functionality.
# -------------------
# Character Detection
# Pythons
# Syntax sugar.
#: Python 2.x?
#: Python 3.x?
# Note: We've patched out simplejson support in pip because it prevents
# Keep OrderedDict for backwards compatibility.
# --------------
# Legacy Imports
# Import encoding now, to avoid implicit import later.
# Implicit import within threads may cause LookupError when standard library is in a ZIP,
# such as in Embedded Python. See https://github.com/psf/requests/issues/3578.
#: The set of HTTP status codes that indicate an automatically
#: processable redirect.
# 301
# 302
# 303
# 307
# 308
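The five redirect codes enumerated above form a small frozen set (a sketch; `REDIRECT_STATI` is the name requests uses, built here from the stdlib's `http.HTTPStatus`):

```python
from http import HTTPStatus

#: Status codes that indicate an automatically processable redirect.
REDIRECT_STATI = frozenset({
    HTTPStatus.MOVED_PERMANENTLY,   # 301
    HTTPStatus.FOUND,               # 302
    HTTPStatus.SEE_OTHER,           # 303
    HTTPStatus.TEMPORARY_REDIRECT,  # 307
    HTTPStatus.PERMANENT_REDIRECT,  # 308
})
```

Because `HTTPStatus` members are `IntEnum`s, plain integers test cleanly for membership.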
# Don't call str() on bytestrings: in Py3 it all goes wrong.
# support for explicit filename
# Default empty dicts for dict params.
#: HTTP verb to send to the server.
#: HTTP URL to send the request to.
#: dictionary of HTTP headers.
# The `CookieJar` used to create the Cookie header will be stored here
# after prepare_cookies is called
#: request body to send to the server.
#: dictionary of callback hooks, for internal usage.
#: integer denoting starting position of a readable file-like body.
# Note that prepare_auth must be last to enable authentication schemes
# such as OAuth to work on a fully prepared request.
# This MUST go after prepare_auth. Authenticators could add a hook
#: Accept objects that have string representations.
#: We're unable to blindly call unicode/str functions
#: as this will include the bytestring indicator (b'')
#: on python 3.x.
#: https://github.com/psf/requests/pull/2238
# Remove leading whitespaces from url
# Don't do any URL preparation for non-HTTP schemes like `mailto`,
# `data` etc to work around exceptions from `url_parse`, which
# handles RFC 3986 only.
# Support for unicode domain names and paths.
# In general, we want to try IDNA encoding the hostname if the string contains
# non-ASCII characters. This allows users to automatically get the correct IDNA
# behaviour. For strings containing only ASCII characters, we need to also verify
# it doesn't start with a wildcard (*), before allowing the unencoded hostname.
# Carefully reconstruct the network location
# Bare domains aren't valid URLs.
# Raise exception on invalid header value.
# Check if file, fo, generator, iterator.
# If not, run through normal process.
# Nottin' on you.
# urllib3 requires a bytes-like body. Python 2's json.dumps
# provides this natively, but Python 3 gives a Unicode string.
# Record the current file position before reading.
# This will allow us to rewind a file in the event
# of a redirect.
# a failed `tell()` later when trying to rewind the body
# Multi-part file uploads.
# Add content-type if it wasn't explicitly provided.
# If length exists, set it. Otherwise, we fallback
# to Transfer-Encoding: chunked.
# Set Content-Length to 0 for methods that can have a body
# but don't provide one. (i.e. not GET or HEAD)
# If no Auth is explicitly provided, extract it from the URL first.
# special-case basic HTTP auth
# Allow auth to make its changes.
# Update self to reflect the auth changes.
# Recompute Content-Length
# hooks can be passed as None to the prepare method and to this
# method. To prevent iterating over None, simply use an empty list
# if hooks is False-y
#: Integer Code of responded HTTP Status, e.g. 404 or 200.
#: Case-insensitive Dictionary of Response Headers.
#: For example, ``headers['content-encoding']`` will return the
#: value of a ``'Content-Encoding'`` response header.
#: File-like object representation of response (for advanced usage).
#: Use of ``raw`` requires that ``stream=True`` be set on the request.
#: This requirement does not apply for use internally to Requests.
#: Final URL location of Response.
#: Encoding to decode with when accessing r.text.
#: A list of :class:`Response <Response>` objects from
#: the history of the Request. Any redirect responses will end
#: up here. The list is sorted from the oldest to the most recent request.
#: Textual reason of responded HTTP Status, e.g. "Not Found" or "OK".
#: A CookieJar of Cookies the server sent back.
#: The amount of time elapsed between sending the request
#: and the arrival of the response (as a timedelta).
#: This property specifically measures the time taken between sending
#: the first byte of the request and finishing parsing the headers. It
#: is therefore unaffected by consuming the response content or the
#: value of the ``stream`` keyword argument.
#: The :class:`PreparedRequest <PreparedRequest>` object to which this
#: is a response.
# Consume everything; accessing the content attribute makes
# sure the content has been fully read.
# pickled objects do not have .raw
# If no character detection library is available, we'll fall back
# to a standard Python utf-8 str.
# simulate reading small chunks of the content
# Read the contents.
# don't need to release the connection; that's been handled by urllib3
# since we exhausted the data.
# Fallback to auto-detected encoding.
# Decode unicode from given encoding.
# A LookupError is raised if the encoding was not found which could
# indicate a misspelling or similar mistake.
# A TypeError can be raised if encoding is None
# So we try blindly encoding.
# No encoding set. JSON RFC 4627 section 3 states we should expect
# UTF-8, -16 or -32. Detect which one to use; If the detection or
# decoding fails, fall back to `self.text` (using charset_normalizer to make
# a best guess).
# Wrong UTF codec detected; usually because it's not UTF-8
# but some other 8-bit codec.  This is an RFC violation,
# and the server didn't bother to tell us what codec *was*
# used.
# Catch JSON-related errors and raise as requests.JSONDecodeError
# This aliases json.JSONDecodeError and simplejson.JSONDecodeError
# encodings. (See PR #3538)
# noqa: PLC0415
# return to avoid redefinition of a result
# Work around mypy issue: https://github.com/python/mypy/issues/10962
#: Currently active platform
#: Backwards compatibility with appdirs
# noqa: FBT001, FBT002
# noqa: PLR0904
# noqa: PLR0913, PLR0917
#: The name of application.
#: A flag indicating whether to use opinionated values.
# noqa: PTH118
# If multipath is True, the first path is returned.
# file generated by setuptools-scm
# don't change, don't track in version control
# noqa: PTH111
# the type checker isn't happy with our "import android", so skip this when type checking; see
# https://stackoverflow.com/a/61394121
# First try to get a path to android app using python4android (if available)...
# noqa: BLE001
# ...and fall back to using plain pyjnius, if python4android isn't available or doesn't deliver any useful
# result...
# and if that fails, too, find an android folder looking at path on the sys.path
# warning: only works for apps installed under /data, not adopted storage etc.
# one last try: find an android folder looking at path on the sys.path taking adopted storage paths into
# account
# Get directories with pyjnius
# noqa: T201
# XDG default for $XDG_DATA_DIRS; only first, if multipath is False
# XDG default for $XDG_CONFIG_DIRS only first, if multipath is False
# noqa: S108
# Add fake section header, so ConfigParser doesn't complain
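The fake-section trick works like this (a sketch; `read_headerless_ini` is an illustrative helper, used e.g. for files like XDG's `user-dirs.dirs` that have no `[section]` line):

```python
import configparser


def read_headerless_ini(text: str) -> dict:
    """Parse key=value content that lacks a section header by prepending a fake one."""
    parser = configparser.ConfigParser()
    parser.read_string("[top]\n" + text)  # fake header keeps ConfigParser happy
    return dict(parser["top"])
```

Note that `ConfigParser` lowercases option names by default (`optionxform`); override it if casing matters.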
# Handle relative home paths
# only needed for mypy type checker to know that this code runs only on Windows
# There is no 'CSIDL_DOWNLOADS'.
# Use 'CSIDL_PROFILE' (40) and append the default folder 'Downloads' instead.
# https://learn.microsoft.com/en-us/windows/win32/shell/knownfolderid
# noqa: B009 # using getattr to avoid false positive with mypy type checker
# Downgrade to short path name if it has high-bit chars.
# noqa: PLR2004
# noqa: PLC0415, F401
# non-negative: '-' not in self.arg
# varargs
# noargs
# O  (i.e. a single arg)
# DEPRECATED: implementation for ffi.verify()
# add 'export_symbols' to the dictionary.  Note that we add the
# list before filling it.  When we fill it, it will thus also show
# up in kwds['export_symbols'].
# not needed in the generic engine
# first paste some standard set of lines that are mostly '#include'
# then paste the C source given by the user, verbatim.
# call generate_gen_xxx_decl(), for every xxx found from
# ffi._parser._declarations.  This generates all the functions.
# on Windows, distutils insists on putting init_cffi_xyz in
# 'export_symbols', so instead of fighting it, just give up and
# give it one
# import it with the CFFI backend
# needs to make a path that contains '/', on Posix
# call loading_gen_struct() to get the struct layout inferred by
# the C compiler
# build the FFILibrary class and instance, this is a module subclass
# because modules are expected to have usually-constant-attributes and
# in PyPy this means the JIT is able to treat attributes as constant,
# which we want.
# finally, call the loaded_gen_xxx() functions.  This will set
# up the 'library' object.
# ----------
# typedefs: generates no code so far
# function declarations
# cannot support vararg functions better than this: check for its
# exact type (including the fixed arguments), and build it as a
# constant function pointer (no _cffi_f_%s wrapper)
# named structs
# nothing to do with opaque structs
# accept all integers, but complain on float or double
# only accept exactly the type declared.
# cannot verify it, ignore
# xxx ignore fbitsize for now
# use the function()'s sizes and offsets to guide the
# layout of the struct
# force 'fixedlayout' to be considered
# check that the layout sizes and offsets match the real ones
# 'anonymous' declarations.  These are produced for anonymous structs
# or unions; the 'name' is obtained by a typedef.
# constants, likely declared with '#define'
# enums
# "$enum_$1" => "___D_enum____D_1"
# macros: for now only for integers
# an integer
# global variables
# int a[5] is "constant" in the
# sense that "a=..." is forbidden
# 'value' is a <cdata 'type *'> which we have to replace with
# a <cdata 'type[N]'> if the N is actually known
# remove ptr=<cdata 'int *'> from the library instance, and replace
# it by a property on the class, which reads/writes into ptr[0].
# The verifier module file names are based on the CRC32 of a string that
# contains the following version number.  It may be older than __version__
# if nothing is clearly incompatible.
# Python 3.x
# We use execfile() (here rewritten for Python 3) instead of
# __import__() to load the build script.  The problem with
# a normal import is that in some packages, the intermediate
# __init__.py files may already try to import the file that
# we are generating.
# Python 2.6 compatibility
# maybe it's a function instead of directly an ffi
# certain development versions of setuptools
# If we don't know the version number of setuptools, we
# try to set 'py_limited_api' anyway.  At worst, we get a warning.
# avoid setting Py_LIMITED_API if py_limited_api=False
# which _cffi_include.h does unless _CFFI_NO_LIMITED_API is defined
# We are a setuptools extension. Need this build_ext for py_limited_api.
# a setuptools-only, API-only hook: called with the "ext" and "ffi"
# arguments just before we turn the ffi into C code.  To use it,
# subclass the 'distutils.command.build_ext.build_ext' class and
# add a method 'def pre_run(self, ext, ffi)'.
# NB. multiple runs here will create multiple 'build_ext_make_mod'
# classes.  Even in this case the 'build_ext' command should be
# run once; but just in case, the logic above does nothing if
# called again.
# This is called from 'setup.py sdist' only.  Exclude
# the generated .py module in this case.
# distutils and setuptools have no notion I could find of a
# generated python module.  If we don't add module_name to
# dist.py_modules, then things mostly work but there are some
# combination of options (--root and --record) that will miss
# the module.  So we add it here, which gives a few apparently
# harmless warnings about not finding the file outside the
# build directory.
# Then we need to hack more in get_source_files(); see above.
# the following is only for "build_ext -i"
# from get_ext_fullpath() in distutils/command/build_ext.py
# type qualifiers
# It seems that __restrict is supported by gcc and msvc.
# If you hit some different compiler, add a #define in
# _cffi_include.h for it (and in its copies, documented there)
# some logic duplication with ffi.getctype()... :-(
# the following types are not primitive in the C sense
# Corresponds to a C type like 'int(int)', which is the C type of
# a function, but not a pointer-to-function.  The backend has no
# notion of such a type; it's used temporarily by parsing.
# __stdcall ignored for variadic funcs
# force the item BType
# nested anonymous struct/union
# force the struct or union to have a declaration that lists
# directly all fields returned by enumfields(), flattening
# nested anonymous structs/unions.
# not completing it: it's an opaque struct
# SF_PACKED
# fix the length to match the total size
# XXX!  The goal is to ensure that the warnings.warn()
# will not suppress the warning.  We want to get it
# several times if we reach this point several times.
# needs a signed type
# returns _typecache_cffi_backend if backend is the _cffi_backend
# module, or type(backend).__typecache if backend is an instance of
# CTypesBackend (or some FakeBackend class during tests)
# note that setdefault() on WeakValueDictionary is not atomic
# and contains a rare bug (http://bugs.python.org/issue19542);
# we have to use a lock and do it ourselves
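# The lock-around-the-check workaround described above, as a standalone
# sketch ('get_cached_btype' and 'CType' are illustrative names):

```python
import threading
import weakref

class CType:  # hypothetical stand-in for a backend type object
    def __init__(self, name):
        self.name = name

_cache = weakref.WeakValueDictionary()
_lock = threading.Lock()

def get_cached_btype(name):
    # WeakValueDictionary.setdefault() is not atomic (see
    # http://bugs.python.org/issue19542), so guard the
    # check-then-insert with an ordinary lock instead.
    with _lock:
        try:
            return _cache[name]
        except KeyError:
            btype = CType(name)
            _cache[name] = btype
            return btype
```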
##import sys
##l1 = allocate_lock
##class allocate_lock(object):
############# workaround for a distutils bug where some env vars can
# become longer and longer every time they are used
# XXX compact but horrible :-(
# already relative
# failed to make it relative
# this works on Python < 3.12
# this is a limited emulation for Python >= 3.12.
# Note that this is used only for tests or for the old ffi.verify().
# This is copied from the source code of Python 3.11.
# Backwards-compatibility
# Break out of outer loop when breaking out of inner loop.
# free-threaded doesn't yet support limited API
# prepare all FUNCTION bytecode sequences first
# placeholder
# prepare all OTHER bytecode sequences
# collect all structs and unions and enums
# emit all bytecode sequences now
# consistency check
# don't change any more
# When producing C, expand all anonymous struct/union fields.
# That's necessary to have C code checking the offsets of the
# individual fields contained in them.  When producing Python,
# don't do it and instead write it like it is, with the
# corresponding fields having an empty name.  Empty names are
# recognized at runtime when we import the generated Python code.
# collect the declarations for '_cffi_globals', '_cffi_typenames', etc.
# check for a possible internal inconsistency: _cffi_struct_unions
# should have been generated with exactly self._struct_unions
# same with enums
# first the '#include' (actually done by inlining the file's content)
# if we have ffi._embedding != None, we give it here as a macro
# and include an extra file
# the declaration of '_cffi_types'
# call generate_cpy_xxx_decl(), for every xxx found from
# the declaration of '_cffi_globals' and '_cffi_typenames'
# the declaration of '_cffi_includes'
# the declaration of '_cffi_type_context'
# set to mean that we use extern "Python"
# the init function
# Py2: unicode unexpected; Py3: bytes unexpected.
# the 'import' of the included ffis
# the '_types' keyword argument
# the keyword arguments from ALL_STEPS
# the '_includes' keyword argument
# the footer
# a KeyError here is a bug.  please report it! :-)
# don't check with is_float_type(): it may be a 'long
# double' here, and _cffi_to_c_double would lose precision
# a struct (not a struct pointer) as a function argument;
# or, a complex (the same code works)
# typedefs
# constant function pointer (no CPython wrapper)
# ------------------------------
# the 'd' version of the function, only for addressof(lib, 'func')
# the PyPy version: need to replace struct/union arguments with
# pointers, and if the result is a struct/union, insert a first
# arg that is a pointer to the result.  We also do that for
# complex args and return type.
# 'METH_NOARGS'
# 'METH_O'
# 'METH_VARARGS'
# named structs or unions
# also requires nested anon struct/unions in ABI mode, recursively
# only accept exactly the type declared, except that '[]'
# is interpreted as a '*' and so will match any array length.
# (It would also match '*', but that's harder to detect...)
# opaque
# field layout obtained silently from the C compiler
# cname is None for _add_missing_struct_unions() only
# unknown name, for _add_missing_struct_unions
# not very nice, but some struct declarations might be missing
# because they don't have any known C name.  Check that they are
# not partial (we can't complete or verify them!) and emit them
# anonymously.
# constants, declared with "static const ..."
# This code assumes that casts from "tp *" to "void *" is a
# no-op, i.e. a function that returns a "tp *" can be called
# as if it returned a "void *".  This should be generally true
# on any modern machine.  The only exception to that rule (on
# uncommon architectures, and as far as I can tell) might be
# if 'tp' were a function type, but that is not possible here.
# (If 'tp' is a function _pointer_ type, then casts from "fn_t
# **" to "void *" are again no-ops, as far as I can tell.)
# extern "Python"
# unicode
# -> bytes
# got bytes, check for valid utf-8
# python2
# python3
# type(line) is bytes, which enumerates like a list of integers
# emitting the opcodes for individual types
# compare to xml.etree.ElementTree._get_writer
# already up-to-date
# Aaargh.  Distutils is not tested at all for the purpose of compiling
# DLLs that are not extension modules.  Here are some hacks to work
# around that, in the _patch_for_*() functions...
# we must not remove the manifest when building for embedding!
# FUTURE: this module was removed in setuptools 74; this is likely dead code and should be removed.
# we must not make a '-bundle', but a '-dynamiclib' instead
# if 'target' is different from '*', we need to patch some internal
# method to just return this 'target' value, instead of having it
# built from module_name
# Python 3.1
# You need PyPy (>= 2.0 beta), or a CPython (>= 2.6) with
# _cffi_backend.so compiled.
# bad version!  Try to be as explicit as possible.
# (If you insist you can also try to pass the option
# 'backend=backend_ctypes.CTypesBackend()', but don't
# rely on it!  It's probably not going to work well.)
# _cffi_backend: attach these constants to the class
# ctypes backend: attach these constants to the instance
# unicode, on Python 2
# call me with the lock!
# string -> ctype object
#def buffer(self, cdata, size=-1):
# decorator mode
# direct mode
# If set_unicode(True) was called, insert the UNICODE and
# _UNICODE macro declarations
# Set the tmpdir here, and not in Verifier.__init__: it picks
# up the caller's directory, which we want to be the caller of
# ffi.verify(), as opposed to the caller of Verifier().
# Make a Verifier() and use it to load the library.
# Save the loaded library for keep-alive purposes, even
# if the caller doesn't keep it alive itself (it should).
# must include an argument like "-lpython2.7" for the compiler
# we need 'libpypy-c.lib'.  Current distributions of
# pypy (>= 4.1) contain it as 'libs/python27.lib'.
# we need 'libpypy-c.{so,dylib}', which should be by
# default located in 'sys.prefix/bin' for installed
# systems.
# On uninstalled pypy's, the libpypy-c is typically found in
# .../pypy/goal/.
# 2.6
# fallback, 'tmpdir' ignored
# Read _init_once_cache[tag], which is either (False, lock) if
# we're calling the function now in some thread, or (True, result).
# Don't call setdefault() in most cases, to avoid allocating and
# immediately freeing a lock; but still use setdefault() to avoid
# races.
# Common case: we got (True, result), so we return the result.
# Else, it's a lock.  Acquire it to serialize the following tests.
# Read again from _init_once_cache the current status.
# Call the function and store the result back.
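# A sketch of the double-checked pattern described above (illustrative
# names, not cffi's actual API):

```python
import threading

# tag -> (False, lock) while some thread is running func, (True, result) after
_init_once_cache = {}
_cache_lock = threading.Lock()

def init_once(func, tag):
    # Common case: we already have (True, result); return the result.
    res = _init_once_cache.get(tag)
    if res is None:
        with _cache_lock:
            # setdefault() only under the lock, to avoid allocating and
            # immediately freeing a lock in the common case, yet stay race-free
            res = _init_once_cache.setdefault(tag, (False, threading.Lock()))
    if res[0]:
        return res[1]
    # Else it's (False, lock): acquire it to serialize the following tests.
    with res[1]:
        res = _init_once_cache[tag]   # re-read the current status
        if res[0]:
            return res[1]
        result = func()               # call the function and store the result
        _init_once_cache[tag] = (True, result)
        return result
```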
# fix 'pysource' before it gets dumped into the C file:
# - remove empty lines at the beginning, so it starts at "line 1"
# - dedent, if all non-empty lines are indented
# - check for SyntaxErrors
# Windows: load_library(None) fails, but this works
# on Python 2 (backward compatibility hack only)
# added by another thread while waiting for the lock
# a hack to make at least ffi.typeof(builtin_function) work,
# if the builtin function was obtained by 'vengine_cpy'.
# import setuptools first; this is the most robust way to ensure its embedded distutils is available
# (the .pth shim should usually work, but this is even more robust)
# Python 3.12 has no built-in distutils to fall back on, so any import problem is fatal
# silently ignore on older Pythons (support fallback to stdlib distutils where available)
# bring in just the bits of distutils we need, whether they really came from setuptools or stdlib-embedded distutils
# FUTURE: msvc9compiler module was removed in setuptools 74; consider removing, as it's only used by an ancient patch in `recompiler`
# anything older, just let the underlying distutils import error fly
# Note that after a setuptools installation, there are both .py
# and .so files with the same basename.  The code here relies on
# imp.find_module() locating the .so in preference to the .py.
# The new module will have a _cffi_setup() function that receives
# objects from the ffi world, and that calls some setup code in
# the module.  This setup code is split in several independent
# functions, e.g. one per constant.  The functions are "chained"
# by ending in a tail call to each other.
# This is further split in two chained lists, depending on if we
# can do it at import-time or if we must wait for _cffi_setup() to
# provide us with the <ctype> objects.  This is needed because we
# need the values of the enum constants in order to build the
# <ctype 'enum'> that we may have to pass to _cffi_setup().
# The following two 'chained_list_constants' items contains
# the head of these two chained lists, as a string that gives the
# call to do, if any.
# first paste some standard set of lines that are mostly '#define'
# implement the function _cffi_setup_custom() as calling the
# head of the chained list.
# produce the method table, including the entries for the
# generated Python->C function wrappers, which are done
# by generate_cpy_function_method().
# standard init.
# XXX review all usages of 'self' here!
# import it as a new extension module
# call loading_cpy_struct() to get the struct layout inferred by
# the C code will need the <ctype> objects.  Collect them in
# order in a list.
# build the FFILibrary class and instance and call _cffi_setup().
# this will set up some fields like '_cffi_types', and only then
# it will invoke the chained list of functions that will really
# build (notably) the constant objects, as <cdata> if they are
# pointers, and store them as attributes on the 'library' object.
# finally, call the loaded_cpy_xxx() functions.  This will perform
# the final adjustments, like copying the Python->C wrapper
# functions from the module to the 'library' object, and setting
# up the FFILibrary class with properties for the global C variables.
# a struct (not a struct pointer) as a function argument
# don't call _do_collect_type(tp) in this common case,
# otherwise test_autofilled_struct_as_argument fails
# kill both the .so extension and the other .'s, as introduced
# by Python 3: 'basename.cpython-33m.so'
# and the _d added in Python 2 debug builds --- but try to be
# conservative and not kill a legitimate _d
# cannot import the package itself, give up
# (e.g. it might be called differently before installation)
# Write our source file to an in memory file.
# Determine if this matches the current file
# Actually write the file out if it doesn't match
# Set this flag
# compile this C source
# for tests
# only remove .c files
# bah, no C_EXTENSION available.  Occurs on pypy without cpyext
# fetch "bool" and all simple Windows types
# in case we got ImportError above
# cdecl is already a BaseType
# recursive
# extra types for Windows (most of them are in commontypes.c)
# Issue #392: packaging tools like cx_Freeze cannot find these
# because pycparser uses exec dynamic import.  This is an obscure
# workaround.  This function is never called.
# matches "* const "
# Workaround for a pycparser issue (fixed between pycparser 2.10 and
# 2.14): "char*const***" gives us a wrong syntax tree, the same as
# for "char***(*const)".  This means we can't tell the difference
# afterwards.  But "char(*const(***))" gives us the right syntax
# tree.  The issue only occurs if there are several stars in
# sequence with no parenthesis in between, just possibly qualifiers.
# Attempt to fix it by adding some parentheses in the source: each
# time we see "* const" or "* const *", we add an opening
# parenthesis before each star---the hard part is figuring out where
# to close them.
#print repr(''.join(parts)+csource), '=>',
# e.g. "* const "
#print repr(''.join(parts)+csource)
# input: `extern "Python" int foo(int);` or
# output:
# input: `extern "Python+C" int foo(int);`
#print
#print ''.join(parts)+csource
#print '=>'
# grouping variant
# non-grouping variant
# _r_line_directive matches whole lines, without the final \n, if they
# start with '#line' with some spacing allowed, or '#NUMBER'.  This
# function stores them away and replaces them with exactly the string
# '#line@N', where N is the index in the list 'line_directives'.
# First, remove the lines of the form '#line N "filename"' because
# the "filename" part could confuse the rest
# Remove comments.  NOTE: this only works because the cdef() section
# should not contain any string literals (except in line directives)!
# Remove the "#define FOO x" lines
# BIG HACK: replace WINAPI or __stdcall with "volatile const".
# It doesn't make sense for the return type of a function to be
# "volatile volatile const", so we abuse it to detect __stdcall...
# Hack number 2 is that "int(volatile *fptr)();" is not valid C
# syntax, so we place the "volatile" before the opening parenthesis.
# Replace `extern "Python"` with start/end markers
# Now there should not be any string literal left; warn if we get one
# Replace "[...]" with "[__dotdotdotarray__]"
# Replace "...}" with "__dotdotdotNUM__}".  This construction should
# occur only at the end of enums; at the end of structs we have "...;}"
# and at the end of vararg functions "...);".  Also replace "=...[,}]"
# with ",__dotdotdotNUM__[,}]": this occurs in the enums too, when
# giving an unknown value.
# Replace "int ..." or "unsigned long int..." with "__dotdotdotint__"
# Replace "float ..." or "double..." with "__dotdotdotfloat__"
# Replace all remaining "..." with the same name, "__dotdotdot__",
# which is declared with a typedef for the purpose of C parsing.
# Finally, put back the line directives
# Look in the source for what looks like usages of types from the
# list of common types.  A "usage" is approximated here as the
# appearance of the word, minus a "definition" of the type, which
# is the last word in a "typedef" statement.  Approximative only
# but should be fine for all the common types.
# word in COMMON_TYPES
# XXX: for more efficiency we would need to poke into the
# internals of CParser...  the following registers the
# typedefs, because their presence or absence influences the
# parsing itself (but what they are typedef'ed to plays no role)
# this forces pycparser to consider the following in the file
# called <cdef source string> from line 1
# see test_missing_newline_bug
# pycparser is not thread-safe...
# csource will be used to find buggy source text
# xxx look for "<cdef source string>:NUM:" at the start of str(e)
# and interpret that as a line number.  This will not work if
# the user gives explicit ``# NUM "FILE"`` directives.
# add the macros
# find the first "__dotdotdot__" and use that as a separator
# between the repeated typedefs and the real csource
# skip pragma, only in pycparser 2.15
# ignore identical double declarations
# "010" is not valid octal in py3
# hack: `extern "Python"` in the C source is replaced
# with "void __cffi_extern_python_start;" and
# "void __cffi_extern_python_stop;"
# first, dereference typedefs, if we have it already parsed, we're good
# array type
# a hack: in 'typedef int foo_t[...][...];', don't use '...' as
# the length but use directly the C expression that would be
# generated by recompiler.py.  This lets the typedef be used in
# many more places within recompiler.py
# pointer type
# assume a primitive type.  get it from .names, but reduce
# synonyms to a single chosen combination
# keep this unmodified
# ignore the 'signed' prefix below, and reorder the others
# implicitly
# but kill it if 'short' or 'long'
# 'struct foobar'
# 'union foobar'
# 'enum foobar'
# a function type
# nested anonymous structs or unions end up here
# the 'quals' on the result type are ignored.  HACK: we abuse them
# to detect __stdcall functions: we textually replace "__stdcall"
# with "volatile volatile const" above.
# else, probable syntax error anyway
# First, a level of caching on the exact 'type' node of the AST.
# This is obscure, but needed because pycparser "unrolls" declarations
# such as "typedef struct { } foo_t, *foo_p" and we end up with
# an AST that is not a tree, but a DAG, with the "type" node of the
# two branches foo_t and foo_p of the trees being the same node.
# It's a bit silly but detecting "DAG-ness" in the AST tree seems
# to be the only way to distinguish this case from two independent
# structs.  See test_struct_with_two_usages.
# Note that this must handle parsing "struct foo" any number of
# times and always return the same StructType object.  Additionally,
# one of these times (not necessarily the first), the fields of
# the struct can be specified with "struct foo { ...fields... }".
# If no name is given, then we have to create a new anonymous struct
# with no caching; in this case, the fields are either specified
# right now or never.
# get the type or create it if needed
# 'force_name' is used to guess a more readable name for
# anonymous structs, for the common case "typedef struct { } foo".
# enums: done here
# is there a 'type.decls'?  If yes, then this is the place in the
# C sources that declare the fields.  If no, then just return the
# existing type, possibly still incomplete.
# XXX pycparser is inconsistent: 'names' should be a list
# of strings, but is sometimes just one string.  Use
# str.join() as a way to cope with both.
# must be re-completed: it is not opaque any more
# for now, limited to expressions that are an immediate number
# or positive/negative number
# load previously defined int constant
# opaque enum
# fix for test_anonymous_enum_include
# note: not for 'long double' so far
# pkg-config, https://www.freedesktop.org/wiki/Software/pkg-config/ integration for cffi
# convert -Dfoo=bar to list of tuples [("foo", "bar")] expected by distutils
# drop "-D"
# "-Dfoo=bar" => ("foo", "bar")
# "-Dfoo" => ("foo", None)
# return kwargs for given libname
# merge all arguments together
# may be overridden
# not supported anyway by ctypes
# cast within range
# fix precision
# <CData <char>>
# extra null
# xxx obscure workaround
# create a callback to the Python callable init()
# .value: http://bugs.python.org/issue1574593
#print repr(res2)
# The only pointers callbacks can return are void*s:
# http://bugs.python.org/issue5710
# NULL
# XXX not implemented
# The Python Imaging Library.
# $Id$
# a simple Qt image interface.
# 2006-06-03 fl: created
# 2006-06-04 fl: inherit from QImage instead of wrapping it
# 2006-06-05 fl: removed toimage helper; move string support to ImageQt
# 2013-11-13 fl: add support for Qt5 (aurelien.ballier@cyclonit.com)
# Copyright (c) 2006 by Secret Labs AB
# Copyright (c) 2006 by Fredrik Lundh
# See the README file for information on usage and redistribution.
# If a version has already been imported, attempt it first
# use qRgb to pack the colors, and then turn the resulting long
# into a negative integer with the same bitpattern.
# preserve alpha channel with png
# otherwise ppm is more friendly with Image.open
# calculate bytes per line and the extra padding if needed
# already 32 bit aligned by luck
# handle filename, if given instead of image name
# FIXME - is this really the best way to do this?
# Populate the 4th channel with 255
# must keep a reference, or Qt will crash!
# All QImage constructors that take data operate on an existing
# buffer, so this buffer has to hang on for the life of the image.
# Fixes https://github.com/python-pillow/Pillow/issues/1370
# this check is also in src/_imagingcms.c:setup_module()
# PNG support code
# See "PNG (Portable Network Graphics) Specification, version 1.0;
# W3C Recommendation", 1996-10-01, Thomas Boutell (ed.).
# 1996-05-06 fl   Created (couldn't resist it)
# 1996-12-14 fl   Upgraded, added read and verify support (0.2)
# 1996-12-15 fl   Separate PNG stream parser
# 1996-12-29 fl   Added write support, added getchunks
# 1996-12-30 fl   Eliminated circular references in decoder (0.3)
# 1998-07-12 fl   Read/write 16-bit images as mode I (0.4)
# 2001-02-08 fl   Added transparency support (from Zircon) (0.5)
# 2001-04-16 fl   Don't close data source in "open" method (0.6)
# 2004-02-24 fl   Don't even pretend to support interlaced files (0.7)
# 2004-08-31 fl   Do basic sanity check on chunk identifiers (0.8)
# 2004-09-20 fl   Added PngInfo chunk container
# 2004-12-18 fl   Added DPI read support (based on code by Niki Spahiev)
# 2008-08-13 fl   Added tRNS support for RGB images
# 2009-03-06 fl   Support for preserving ICC profiles (by Florian Hoech)
# 2009-03-08 fl   Added zTXT support (from Lowell Alleman)
# 2009-03-29 fl   Read interlaced PNG files (from Conrado Porto Lopes Gouvua)
# Copyright (c) 1997-2009 by Secret Labs AB
# Copyright (c) 1996 by Fredrik Lundh
# supported bits/color combinations, and corresponding modes/rawmodes
# Grayscale
# Truecolour
# Indexed-colour
# Grayscale with alpha
# LA;16B->LA not yet available
# Truecolour with alpha
# APNG frame disposal modes
# APNG frame blend modes
# Support classes.  Suitable for PNG and related formats like MNG etc.
# Skip CRC checks for ancillary chunks if allowed to load truncated
# images
# bit 5 of the first byte is 1 [specs, section 5.4]
# Simple approach; just calculate checksum for all remaining
# blocks.  Must be called directly after open.
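# A sketch of a per-chunk CRC check using zlib (per the PNG spec, the
# stored CRC covers the chunk type plus data, not the length field):

```python
import struct
import zlib

def verify_chunk(chunk_type: bytes, data: bytes, crc_bytes: bytes) -> bool:
    # seed the CRC with the 4-byte chunk type, then feed the data
    expected = zlib.crc32(data, zlib.crc32(chunk_type)) & 0xFFFFFFFF
    return struct.unpack("!I", crc_bytes)[0] == expected
```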
# The tEXt chunk stores latin-1 text
# PNG image stream (IHDR/IEND)
# local copies of Image attributes
# ICC profile
# according to PNG spec, the iCCP chunk contains:
# Profile name  1-79 bytes (character string)
# Null separator        1 byte (null character)
# Compression method    1 byte (0)
# Compressed profile    n bytes (zlib with deflate compression)
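# Parsing that layout is straightforward with zlib ('parse_iccp' is an
# illustrative name, not Pillow's API):

```python
import zlib

def parse_iccp(data: bytes):
    # iCCP layout: profile name (1-79 bytes), null separator,
    # compression method byte (0 = zlib/deflate), compressed profile
    name, _, rest = data.partition(b"\0")
    comp_method, compressed = rest[0], rest[1:]
    if comp_method != 0:
        raise ValueError("unknown iCCP compression method %d" % comp_method)
    return name.decode("latin-1"), zlib.decompress(compressed)
```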
# FIXME
# image header
# image data
# palette
# transparency
# tRNS contains only one full-transparent entry,
# other entries are full opaque
# otherwise, we have a byte string with one alpha value
# for each palette entry
# gamma setting
# chromaticity, 8 unsigned ints, actual value is scaled by 100,000
# WP x,y, Red x,y, Green x,y Blue x,y
# srgb rendering intent, 1 byte
# 0 perceptual
# 1 relative colorimetric
# 2 saturation
# 3 absolute colorimetric
# pixels per unit
# meter
# text
# fallback for broken tEXt tags
# compressed text
# international text
# APNG chunks
# PNG reader
# Image plugin for PNG images.
# Parse headers up to the first IDAT or fDAT chunk
# get next chunk
# Copy relevant attributes from the PngStream.  An alternative
# would be to let the PngStream class modify these attributes
# directly, but that introduces circular references which are
# difficult to break if things go wrong in the decoder...
# (believe me, I've tried ;-)
# used by load_prepare()
# IDAT chunk contains default image and not first animation frame
# experimental
# iTxt, tEXt and zTXt chunks may appear at the end of the file
# So load the file to ensure that they are read
# for APNG, seek to the final frame before loading
# back up to beginning of IDAT block
# ensure previous frame was loaded
# advance to the next frame
# CRC
# there must be at least one fdAT chunk between fcTL chunks
# setup frame disposal (actual disposal done when needed in the next _seek())
# used by load_read()
# end of chunk, skip forward to next one
# sequence_num has already been read
# empty chunks are allowed
# read more data from this chunk
# start of the next frame, stop reading
# PNG writer
# supported PIL modes, and corresponding rawmode, bit depth and color type
# wrap output from the encoder in IDAT chunks
# wrap encoder output in fdAT chunks
# animation control
# 0: num_frames
# 4: num_plays
# default image IDAT (if it exists)
# frame control
# sequence_number
# width
# height
# x_offset
# y_offset
# delay_numerator
# delay_denominator
# dispose_op
# blend_op
# frame data
# first frame must be in IDAT chunks for backwards compatibility
# save an image to disk (called by the save method)
# attempt to minimize storage requirements for palette images
# number of bits specified by user
# check palette contents
# encoder options
# get the corresponding PNG mode
# write minimal PNG file
# 0: size
# 10: compression
# 11: filter category
# 12: interlace flag
# You must either have sRGB or iCCP.
# Disallow sRGB chunks when an iCCP-chunk has been emitted.
# Private chunk
# limit to actual palette size
# don't bother with transparency if it's an RGBA
# and it's in the info dict. It's probably just stale.
# PNG chunk converter
# Registry
# standard mode descriptors
# History:
# 2006-03-20 fl   Added
# Copyright (c) 2006 by Secret Labs AB.
# Copyright (c) 2006 by Fredrik Lundh.
# core modes
# Bits need to be extended to bytes
# UNDONE - unsigned |u1i1i1
# extra experimental modes
# I;16 == I;16L, and I;32 == I;32L
# see 7.9.2.2 Text String Type on page 86 and D.3 PDFDocEncoding Character Set
# on page 656
# object ID => (offset, generation)
# object ID => generation
# find a contiguous sequence of object IDs
# XXX escape more chars? handle binary garbage
# the page has been deleted
# make dict keys into strings for passing to write_page
# key should be a PdfName
# replace the page reference with the new one
# delete redundant Pages tree nodes from xref table
# cannot mmap an empty file
# save the original list of page references
# in case the user modifies, adds or deletes some pages
# and we need to rewrite the pages and their list
# TODO: support reuse of deleted objects
# No "\012" aka "\n" or "\015" aka "\r":
# make sure we found the LAST trailer
# XXX Decimal instead of float???
# filter out whitespace
# append a 0 if the length is not even - yes, at the end
# return None, offset  # fallback (only for debugging)
# VERSION was removed in Pillow 6.0.0.
# PILLOW_VERSION was removed in Pillow 9.0.0.
# Use __version__ instead.
# The Python Imaging Library
# screen grabber
# 2001-04-26 fl  created
# 2001-09-17 fl  use builtin driver, if present
# 2002-11-19 fl  added grabclipboard support
# Copyright (c) 2001-2002 by Secret Labs AB
# Copyright (c) 2001-2002 by Fredrik Lundh
# RGB, 32-bit line padding, origin lower left corner
# Cast to Optional[str] needed for Windows and macOS.
# CF_HDROP
# Session type check failed
# wl-paste, when the clipboard is empty
# Ubuntu/Debian wl-paste, when the clipboard is empty
# Ubuntu/Debian wl-paste, when an image isn't available
# wl-paste or Ubuntu/Debian xclip, when an image isn't available
# xclip, when an image isn't available
# xclip, when the clipboard isn't initialized
# load a GIMP brush file
# Copyright (c) Secret Labs AB 1997.
# Copyright (c) Fredrik Lundh 1996.
# Copyright (c) Eric Soroos 2016.
# See https://github.com/GNOME/gimp/blob/mainline/devel-docs/gbr.txt for
# format documentation.
# This code Interprets version 1 and 2 .gbr files.
# Version 1 files are obsolete, and should not be used for new images.
# Version 2 files are saved by GIMP v2.8 (at least)
# Version 3 files have a format specifier of 18 for 16bit floats in the color depth field.
# Image plugin for the GIMP brush format.
# Image might not be small
# Data is an uncompressed block of w * h * bytes/pixel
# registry
# IM Tools support for PIL
# 1996-05-27 fl   Created (read 8-bit images only)
# 2001-02-17 fl   Use 're' instead of 'regex' (Python 2.1) (0.2)
# Copyright (c) Secret Labs AB 1997-2001.
# Copyright (c) Fredrik Lundh 1996-2001.
# Image plugin for IM Tools images.
# Quick rejection: if there's not a LF among the first
# 100 bytes, this is (probably) not a text header.
# image data begins
# read key/value pair
# comment
# no extension registered (".im" is simply too common)
# SPIDER image file handling
# 2004-08-02    Created BB
# 2006-03-02    added save method
# 2006-03-13    added support for stack images
# Copyright (c) 2004 by Health Research Inc. (HRI) RENSSELAER, NY 12144.
# Copyright (c) 2004 by William Baxter.
# Copyright (c) 2004 by Secret Labs AB.
# Copyright (c) 2004 by Fredrik Lundh.
# Image plugin for the Spider image format. This format is used
# by the SPIDER software, in processing image data from electron
# microscopy and tomography.
# Spider home page:
# https://spider.wadsworth.org/spider_doc/spider/docs/spider.html
# Details about the Spider image format:
# https://spider.wadsworth.org/spider_doc/spider/docs/image_doc.html
# There is no magic number to identify Spider files, so just check a
# series of header locations to see if they have reasonable values.
# Returns no. of bytes in the header, if it is a valid Spider header,
# otherwise returns 0
# add 1 value so can use spider header index start=1
# header values 1,2,5,12,13,22,23 should be integers
# check iform
# check other header values
# no. records in file header
# total no. of bytes in header
# record length in bytes
# looks like a valid header
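The header checks described above can be sketched as a small validator. This is a minimal illustration modeled on the comments (integer fields, iform, and the labrec * lenbyt == labbyt consistency check); the function name and the exact iform list are assumptions, not the plugin's actual API:

```python
import struct

def looks_like_spider_header(raw: bytes) -> int:
    # returns no. of bytes in the header if it looks like a valid
    # Spider header, otherwise 0
    if len(raw) < 92:
        return 0
    # read 23 * 4 bytes as big-endian floats; prepend a dummy value so
    # indexing matches SPIDER's 1-based header indices
    h = (0.0,) + struct.unpack(">23f", raw[:92])
    # header values 1,2,5,12,13,22,23 should be integers
    for i in (1, 2, 5, 12, 13, 22, 23):
        if h[i] != int(h[i]):
            return 0
    # check iform (assumed list of known SPIDER image types)
    if int(h[5]) not in (1, 3, -11, -12, -21, -22):
        return 0
    labrec = int(h[13])  # no. records in file header
    labbyt = int(h[22])  # total no. of bytes in header
    lenbyt = int(h[23])  # record length in bytes
    if labbyt != labrec * lenbyt:
        return 0
    return labbyt  # looks like a valid header
```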
# read 23 * 4 bytes
# try big-endian first
# little-endian
# check header
# read 27 float values
# add 1 value : spider header index starts at 1
# size in pixels (width, height)
# stk=0, img=0: a regular 2D image
# stk>0, img=0: Opening the stack for the first time
# Point to the first image in the stack
# stk=0, img>0: an image within the stack
# So Image knows it's still a stack
# FIXME: hack
# 1st image index is zero (although SPIDER imgnumber starts at 1)
# returns a byte image after rescaling to 0..255
# returns an ImageTk.PhotoImage object, after rescaling to 0..255
# Image series
# given a list of filenames, return a list of images
# For saving images in Spider format
# There are labrec records in the header
# NB these are Fortran indices
# nslice (=1 for an image)
# number of rows per slice
# number of records in the image
# iform for 2D image
# number of pixels per line
# number of records in file header
# total number of bytes in header
# adjust for Fortran indexing
# pack binary data into a string
# write the SPIDER header
# 32-bit native floating point
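The header geometry noted above (records, record length, Fortran-style accounting) can be sketched as follows. This is a rough illustration under the assumption that a record is one image row of 32-bit floats and the header occupies enough whole records to hold 1024 bytes; the helper name is made up:

```python
def spider_header_sizes(nsam: int) -> tuple:
    # record length in bytes: one row of nsam 32-bit floats
    lenbyt = nsam * 4
    # number of records in the file header: enough to cover 1024 bytes
    labrec = 1024 // lenbyt
    if 1024 % lenbyt != 0:
        labrec += 1
    # total number of bytes in header
    labbyt = labrec * lenbyt
    return lenbyt, labrec, labbyt
```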
# get the filename extension and register it with Image
# perform some image operation
# GRIB stub adapter
# Copyright (c) 1996-2003 by Fredrik Lundh
# Image adapter
# make something up
# standard filters
# 1995-11-27 fl   Created
# 2002-06-08 fl   Added rank and mode filters
# 2003-09-15 fl   Fixed rank calculation in rank filter; added expand call
# Copyright (c) 1997-2003 by Secret Labs AB.
# Copyright (c) 1995-2002 by Fredrik Lundh.
# default scale is sum of kernel
# Hidden flag `_copy_table=False` could be used to avoid extra copying
# of the table if the table is specially made for the constructor.
# Convert to a flat list
# lossless
# Use the newer AnimDecoder API to parse the (possibly) animated file,
# and access muxed chunks like ICC/EXIF/XMP.
# Get info from decoder
# Attempt to read ICC / EXIF / XMP chunks from file
# Initialize seek state
# Set logical frame to requested position
# Get next frame
# Check if an error occurred
# Reset just to be safe
# Compute duration
# libwebp gives frame end, adjust to start of frame
# Nothing to do
# Rewind to beginning
# Advance to the requested frame
# We need to load the image data for this frame
# Set tile
# Make sure image mode is supported
# If total frame count is 1, then save using the legacy API, which
# will preserve non-alpha modes
# GifImagePlugin stores a global color table index in
# info["background"]. So it must be converted to an RGBA value
# Sensible keyframe defaults are from gif2webp.c script
# Validate background color
# Convert to packed uint
# Setup the WebP animation encoder
# Add each frame
# Get number of frames in this image
# Append the frame to the animation encoder
# Update timestamp and frame index
# Force encoder to flush frames
# Get the final output from the encoder
# sequence support classes
# 1997-02-20 fl     Created
# Copyright (c) 1997 by Secret Labs AB.
# Copyright (c) 1997 by Fredrik Lundh.
# GD file handling
# 1996-04-12 fl   Created
# Copyright (c) 1996 by Fredrik Lundh.
# Header
# transparency index
# Adobe PSD 2.5/3.0 file handling
# 1995-09-01 fl   Created
# 1997-01-03 fl   Read most PSD images
# 1997-01-18 fl   Fixed P and CMYK support
# 2001-10-21 fl   Added seek/tell support (for layers)
# Copyright (c) 1997-2001 by Secret Labs AB.
# Copyright (c) 1995-2001 by Fredrik Lundh
# (photoshop mode, bits) -> (pil mode, required channels)
# FIXME: multilayer
# duotone
# --------------------------------------------------------------------
# read PSD images
# Image plugin for Photoshop images.
# color mode data
# image resources
# load resources
# signature
# padding
# layer and mask information
# image descriptor
# keep the file open
# seek to given layer (1..max)
# return layer number (0=image, 1..max=layers)
# read layerinfo block
# bounding box
# image info
# size
# figure out the image mode
# skip over blend flags and extra information
# filler
# length of the extra data field
# Don't know the proper encoding;
# Latin-1 should be a good guess
# get tiles
# raw compression
# packbits compression
# base class for raster font file parsers
# 1997-06-05 fl   created
# 1997-08-19 fl   restrict image width
# Copyright (c) 1997-1998 by Secret Labs AB
# Copyright (c) 1997-1998 by Fredrik Lundh
# create bitmap large enough to hold all data
# paste glyphs into bitmap
# font data
# font metrics
# HACK!!!
# BMP file handler
# Windows (and OS/2) native bitmap storage format.
# 1996-04-30 fl   Added save
# 1997-08-27 fl   Fixed save of 1-bit images
# 1998-03-06 fl   Load P images as L where possible
# 1998-07-03 fl   Load P images as 1 where possible
# 1998-12-29 fl   Handle small palettes
# 2002-12-30 fl   Fixed load of 1-bit palette images
# 2003-04-21 fl   Fixed load of 1-bit monochrome images
# 2003-04-23 fl   Added limited support for BI_BITFIELDS compression
# Copyright (c) 1997-2003 by Secret Labs AB
# Copyright (c) 1995-2003 by Fredrik Lundh
# Read BMP file
# bits => mode, rawmode
# =============================================================================
# Image plugin for the Windows BMP format.
# ------------------------------------------------------------- Description
# -------------------------------------------------- BMP Compression values
# read bmp header size @offset 14 (this is part of the header size)
# -------------------- If requested, read header at a specific position
# read the rest of the bmp header, without its size
# ------------------------------- Windows Bitmap v2, IBM OS/2 Bitmap v1
# ----- This format has different offsets because of width/height types
# 12: BITMAPCOREHEADER/OS21XBITMAPHEADER
# --------------------------------------------- Windows Bitmap v3 to v5
# 108: BITMAPV4HEADER
# 124: BITMAPV5HEADER
# byte size of pixel data
# 40 byte headers only have the three components in the
# bitfields masks, ref:
# https://msdn.microsoft.com/en-us/library/windows/desktop/dd183376(v=vs.85).aspx
# See also
# https://github.com/python-pillow/Pillow/issues/1293
# There is a 4th component in the RGBQuad, in the alpha
# location, but it is listed as a reserved component,
# and it is not generally an alpha channel
# ------------------ Special case : header is reported 40, which
# ---------------------- is shorter than real size for bpp >= 16
# ------- If color count was not found in the header, compute from bits
# ---------------------- Check bit depth for unusual unsupported values
# ---------------- Process BMP with Bitfields compression (not palette)
# 32-bit .cur offset
# --------------- Once the header is processed, process the palette/LUT
# Paletted for 1, 4 and 8 bit images
# ---------------------------------------------------- 1-bit images
# ----------------- Check if grayscale and ignore palette if so
# ------- If all colors are gray, white or black, ditch palette
# ---------------------------- Finally set the tile data for the plugin
# read 14 bytes: magic number, filesize, reserved, header final offset
# choke if the file does not have the required magic bytes
# read the start position of the BMP image data (u32)
# load bitmap information (offset=raster info)
# encoded mode
# Too much data for row
# end of line
# end of bitmap
# delta
# absolute mode
# 2 pixels per byte
# align to 16-bit word boundary
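The RLE cases listed above (encoded mode, end of line, end of bitmap, delta, absolute mode, word alignment) follow the standard BMP RLE8 scheme. A minimal sketch of that scheme, ignoring delta offsets and row reassembly for brevity (the function name is illustrative):

```python
import io

def decode_bmp_rle8(data: bytes) -> bytes:
    # simplified BMP RLE8 decoder: returns the concatenated pixel bytes
    fp = io.BytesIO(data)
    out = bytearray()
    while True:
        pair = fp.read(2)
        if len(pair) < 2:
            break
        count, value = pair
        if count:
            # encoded mode: repeat `value` `count` times
            out += bytes([value]) * count
        elif value == 0:
            pass  # end of line (row padding left to the caller)
        elif value == 1:
            break  # end of bitmap
        elif value == 2:
            fp.read(2)  # delta: skip the (dx, dy) offsets in this sketch
        else:
            # absolute mode: `value` literal pixels follow,
            # aligned to a 16-bit word boundary
            out += fp.read(value)
            if value % 2:
                fp.read(1)
    return bytes(out)
```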
# Image plugin for the DIB format (BMP alias)
# Write BMP file
# 1 meter == 39.3701 inches
# or 64 for OS/2 version 2
# bitmap header
# file type (magic)
# image data offset
# bitmap info header
# info header size
# planes
# depth
# compression (0=uncompressed)
# size of bitmap
# resolution
# colors used
# colors important
# padding (for OS/2 format)
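The resolution fields above are stored in pixels per meter, hence the 1 meter == 39.3701 inches note. A one-line conversion sketch (helper name is illustrative):

```python
def dpi_to_ppm(dpi: float) -> int:
    # BMP headers store resolution in pixels per meter;
    # 1 meter == 39.3701 inches, rounded to the nearest integer
    return int(dpi * 39.3701 + 0.5)
```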
# Optional color management support, based on Kevin Cazabon's PyCMS
# library.
# Originally released under LGPL.  Graciously donated to PIL in
# March 2009, for distribution under the standard PIL license
# 2009-03-08 fl   Added to PIL.
# Copyright (C) 2002-2003 Kevin Cazabon
# Copyright (c) 2009 by Fredrik Lundh
# Copyright (c) 2013 by Eric Soroos
# See the README file for information on usage and redistribution.  See
# below for the original description.
# Allow error import for doc purposes, but error out when accessing
# anything in core.
# intent/direction values
# flags
# this should be 8BITS_DEVICELINK, but that is not a valid name in Python:
# Don't fix scum dot
# Don't create prelinearization tables on precalculated transforms
# (internal use):
# Guess device class (for transform2devicelink)
# Inhibit 1-pixel cache
# Don't transform anyway
# Use more memory to give better accuracy
# Use less memory to minimize resources
# Out of Gamut alarm
# Do softproofing
# Black preservation
# CRD special
# Gridpoints
# Experimental PIL-level API
# Profile.
# type: ignore[unreachable]
# Note: inputMode and outputMode are for pyCMS compatibility only
# wrong output mode
# type: ignore[unused-ignore, unreachable]
# pyCMS compatible layer
# add an extra newline to preserve pyCMS compatibility
# do it in Python, not C.
# Python, not C. The white point bits weren't working well,
# so skipping.
# info was description \r\n\r\n copyright \r\n\r\n K007 tag \r\n\r\n whitepoint
# FIXME: I get different results for the same data w. different
# compilers.  Bug in LittleCMS or in the binding?
# Master version for Pillow
# PDF (Acrobat) file handling
# 1996-07-16 fl   Created
# 1997-01-18 fl   Fixed header
# 2004-02-21 fl   Fixes for 1/L/CMYK images, etc.
# 2004-02-24 fl   Fixes for 1 and P images.
# Copyright (c) 1997-2004 by Secret Labs AB.  All rights reserved.
# Copyright (c) 1996-1997 by Fredrik Lundh.
# Image plugin for PDF images (output only).
# object ids:
# (Internal) Image save plugin for the PDF format.
# FIXME: Should replace ASCIIHexDecode with RunLengthDecode
# (packbits) or LZWDecode (tiff/lzw compression).  Note that
# PDF 1.2 also supports Flatedecode (zip compression).
# Get image characteristics
# grayscale
# params = f"<< /Predictor 15 /Columns {width-2} >>"
# indexed color
# color images
# use a single strip
# * 72.0 / x_resolution,
# * 72.0 / y_resolution,
# make sure image data is available
# pages
# catalog and list of pages
# page
# page contents
# trailer
# path interface
# 1996-11-04 fl   Created
# 2002-04-14 fl   Added documentation stub class
# macOS icns file decoder, based on icns.py by Bob Ippolito.
# 2004-10-09 fl   Turned into a PIL plugin; removed 2.3 dependencies.
# 2020-04-04      Allow saving on all operating systems.
# Copyright (c) 2004 by Bob Ippolito.
# Copyright (c) 2004 by Secret Labs.
# Copyright (c) 2014 by Alastair Houghton.
# Copyright (c) 2020 by Pan Jing.
# The 128x128 icon seems to have an extra header for some reason.
# uncompressed ("RGBRGBGB")
# decode image
# Alpha masks seem to be uncompressed
# j2k, jpc or j2c
# signature : (start, length)
# Image plugin for Mac OS icons.
# Check that a matching size exists,
# or that there is a scale that would create a size that matches
# Already loaded
# This is likely NOT the best way to do it, but whatever.
# If this is a PNG or JPEG 2000, it won't be loaded yet
# TOC
# Data
# Simple PostScript graphics interface
# 1996-04-20 fl   Created
# 1999-01-10 fl   Added gsave/grestore to image method
# 2005-05-04 fl   Fixed floating point issue in image (from Eric Etheridge)
# Copyright (c) 1997-2005 by Secret Labs AB.  All rights reserved.
# Simple PostScript graphics interface.
# FIXME: incomplete
# self.fp.write(ERROR_PS)  # debugging!
# reencode font
# rough
# default resolution depends on mode
# fax
# image size (on paper)
# max allowed size
# EpsImagePlugin._save prints the image at (0,0,xsize,ysize)
# PostScript driver
# EDROFF.PS -- PostScript driver for Edroff 2
# 94-01-25 fl: created (edroff 2.04)
# Copyright (c) Fredrik Lundh 1994.
# VDI.PS -- PostScript driver for VDI meta commands
# ERROR.PS -- Error handler
# 89-11-21 fl: created (pslist 1.10)
# Microsoft Image Composer support for PIL
# Notes:
# Copyright (c) Fredrik Lundh 1997.
# Image plugin for Microsoft's Image Composer file format.
# read the OLE directory and see if this is a likely
# to be a Microsoft Image Composer file
# find ACI subfiles with Image members (maybe not the
# best way to identify MIC files, but what the... ;-)
# if we didn't find any images, this is probably not
# an MIC file.
# map CSS3-style colour description strings to RGB
# 2002-10-24 fl   Added support for CSS-style color strings
# 2002-12-15 fl   Added RGBA support
# 2004-03-27 fl   Fixed remaining int() problems for Python 1.5.2
# 2004-07-19 fl   Fixed gray/grey spelling issues
# 2009-03-05 fl   Fixed rounding error in grayscale calculation
# Copyright (c) 2002-2004 by Secret Labs AB
# Copyright (c) 2002-2004 by Fredrik Lundh
# check for known string formats
# same as getrgb, but converts the result to the given mode
# ITU-R Recommendation 601-2 for nonlinear RGB
# scaled to 24 bits to match the convert's implementation.
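The luma transform described above can be sketched with the usual fixed-point coefficients scaled so the three weights sum to 65536 (19595/65536 ≈ 0.299, 38470/65536 ≈ 0.587, 7471/65536 ≈ 0.114); the function name is illustrative:

```python
def rgb_to_gray(r: int, g: int, b: int) -> int:
    # ITU-R Recommendation 601-2 luma transform, computed with
    # 16-bit fixed-point coefficients and rounded via the +0x8000 term
    return (r * 19595 + g * 38470 + b * 7471 + 0x8000) >> 16
```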
# X11 colour table from https://drafts.csswg.org/css-color-4/, with
# gray/grey spelling issues fixed.  This is a superset of HTML 4.0
# colour names used in CSS 1.
# SGI image file handling
# See "The SGI Image File Format (Draft version 0.97)", Paul Haeberli.
# <ftp://ftp.sgi.com/graphics/SGIIMAGESPEC>
# 2017-07-22 mb   Add RLE decompression
# 2016-10-16 mb   Add save method without compression
# 1995-09-10 fl   Created
# Copyright (c) 2016 by Mickael Bonfill.
# Copyright (c) 2008 by Karsten Hiddemann.
# Copyright (c) 1995 by Fredrik Lundh.
# Image plugin for SGI images.
# HEAD
# compression : verbatim or RLE
# bpc : 1 or 2 bytes (8bits or 16bits)
# dimension : 1, 2 or 3 (depending on xsize, ysize and zsize)
# xsize : width
# ysize : height
# zsize : channels count
# determine mode from bits/zsize
# orientation -1 : scanlines begin at the bottom-left corner
# decoder info
# Get the keyword arguments
# Byte-per-pixel precision, 1 = 8bits per pixel
# Flip the image, since the origin of an SGI file is the bottom-left corner
# Define the file as SGI File Format
# Run-Length Encoding Compression - Unsupported at this time
# X Dimension = width / Y Dimension = height
# Z Dimension: Number of channels
# Number of dimensions (x,y,z)
# Minimum Byte value
# Maximum Byte value (255 = 8bits per pixel)
# Image name (79 characters max, truncated below in write)
# Standard representation of pixel in the file
# dummy
# truncates to 79 chars
# force null byte after img_name
# End of file
# WMF stub codec
# 1996-12-14 fl   Created
# 2004-02-22 fl   Turned into a stub driver
# 2004-02-23 fl   Added EMF support
# Copyright (c) Secret Labs AB 1997-2004.  All rights reserved.
# WMF/EMF reference documentation:
# https://winprotocoldoc.blob.core.windows.net/productionwindowsarchives/MS-WMF/[MS-WMF].pdf
# http://wvware.sourceforge.net/caolan/index.html
# http://wvware.sourceforge.net/caolan/ora-wmf.html
# install default handler (windows only)
# rewind
# Read WMF file
# Image plugin for Windows metafiles.
# check placeable header
# placeable windows metafile
# get units per inch
# get bounding box
# normalize size to 72 dots per inch
# sanity check (standard metafile header)
# enhanced metafile
# get frame (in 0.01 millimeter units)
# calculate dots per inch from bbox and frame
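The normalization described above (placeable-header bounding box plus units per inch, scaled to 72 dots per inch) can be sketched as a small helper; the name and integer division are illustrative choices, not the plugin's exact code:

```python
def wmf_size_in_points(left, top, right, bottom, units_per_inch):
    # normalize a placeable-WMF bounding box to 72 dots per inch
    return (
        (right - left) * 72 // units_per_inch,
        (bottom - top) * 72 // units_per_inch,
    )
```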
# Registry stuff
# bitmap distribution font (bdf) file parser
# 1996-05-16 fl   created (as bdf2pil)
# 1997-08-25 fl   converted to FontFile driver
# 2001-05-25 fl   removed bogus __init__ call
# 2002-11-20 fl   robustification (from Kevin Cazabon, Dmitry Vasiliev)
# 2003-04-22 fl   more robustification (from Graham Dumpleton)
# Copyright (c) 1997-2003 by Fredrik Lundh.
# skip to STARTCHAR
# load symbol properties
# load bitmap
# The word BBX
# followed by the width in x (BBw), height in y (BBh),
# and x and y displacement (BBxoff0, BByoff0)
# of the lower left corner from the origin of the character.
# The word DWIDTH
# followed by the width in x and y of the character in device pixels.
# deal with zero-width characters
# XBM File handling
# 1995-09-08 fl   Created
# 1996-11-01 fl   Added save support
# 1997-07-07 fl   Made header parser more tolerant
# 1997-07-22 fl   Fixed yet another parser bug
# 2001-02-17 fl   Use 're' instead of 'regex' (Python 2.1) (0.4)
# 2001-05-13 fl   Added hotspot handling (based on code from Bernhard Herzog)
# 2004-02-24 fl   Allow some whitespace before first #define
# Copyright (c) 1997-2004 by Secret Labs AB
# Copyright (c) 1996-1997 by Fredrik Lundh
# XBM header
# Image plugin for X11 bitmaps.
# WAL file handling
# 2003-04-23 fl   created
# Copyright (c) 2003 by Fredrik Lundh.
# read header fields
# load pixel data
# strings are null-terminated
# default palette taken from piffo 0.93 by Hans Häggström
# Image plugin for Palm pixmap images (output only).
# so build a prototype image to be used for palette resampling
# OK, we now have in Palm8BitColormapImage
# a "P"-mode image with the right palette
# (Internal) Image save plugin for the Palm format.
# this is 8-bit grayscale, so we shift it to get the high-order bits,
# and invert it because
# Palm does grayscale from white (0) to black (1)
# here we assume that even though the inherent mode is 8-bit grayscale,
# only the lower bpp bits are significant.
# We invert them to match the Palm.
# we ignore the palette here
# monochrome -- write it inverted, as is the Palm standard
# write header
# reserved by Palm
# now write colormap if necessary
# now convert data to raw form
# _tkinter may be compiled directly into Python, in which case __file__ is
# not available. load_tkinter_funcs will check the binary first in any case.
# MSP file handling
# This is the format used by the Paint program in Windows 1 and 2.
# Copyright (c) Fredrik Lundh 1995-97.
# Copyright (c) Eric Soroos 2017.
# More info on this format: https://archive.org/details/gg243631
# Page 313:
# Figure 205. Windows Paint Version 1: "DanM" Format
# Figure 206. Windows Paint Version 2: "LinS" Format. Used in Windows V2.03
# See also: https://www.fileformat.info/format/mspaint/egff.htm
# read MSP files
# Image plugin for Windows MSP images.  This plugin supports both
# uncompressed (Windows 1.0) and RLE compressed (Windows 2.0) images.
# Header checksum
# The algo for the MSP decoder is from
# https://www.fileformat.info/format/mspaint/egff.htm
# cc-by-attribution -- the material on that page is taken from the
# Encyclopedia of Graphics File Formats and is licensed by
# O'Reilly under the Creative Commons Attribution license
# For RLE encoded files, the 32-byte header is followed by a scan
# line map, encoded as one 16-bit word of encoded byte length per
# line.
# NOTE: the encoded length of the line can be 0. This was not
# handled in the previous version of this encoder, and there's no
# mention of how to handle it in the documentation. From the few
# examples I've seen, I've assumed that it is a fill of the
# background color, in this case, white.
# Pseudocode of the decoder:
# Read a BYTE value as the RunType
# which are then interpreted as a bit packed mode '1' image
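The decoder pseudocode above (a RunType byte of 0 introduces an RLE run of count/value, any other value is a count of literal bytes) can be sketched per scan line; the function name is illustrative:

```python
import io

def decode_msp_rle_line(data: bytes) -> bytes:
    # MSP RLE scan-line decoder sketch, following the EGFF description
    fp = io.BytesIO(data)
    out = bytearray()
    while True:
        run = fp.read(1)
        if not run:
            break
        run_type = run[0]
        if run_type == 0:
            # RLE run: next byte is the count, the byte after is the value
            count, value = fp.read(2)
            out += bytes([value]) * count
        else:
            # literal run: RunType bytes follow unencoded
            out += fp.read(run_type)
    return bytes(out)
```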
# write MSP files (uncompressed only)
# create MSP header
# version 1
# FIXME: is this the right field?
# image body
# TGA file handling
# 95-09-01 fl   created (reads 24-bit files only)
# 97-01-04 fl   support more TGA versions, including compressed images
# 98-07-04 fl   fixed orientation and alpha layer bugs
# 98-09-11 fl   fixed orientation for runlength decoder
# Copyright (c) Secret Labs AB 1997-98.
# Read TGA file
# map imagetype/depth to rawmode
# Image plugin for Targa files.
# process header
# validate header fields
# image mode
# ???
# orientation
# read palette
# setup tile descriptor
# compressed
# cannot decode
# Write TGA file
# colormapfirst
# write targa version 2 footer
# A binary morphology add-on for the Python Imaging Library
# Copyright (c) 2014 Dov Grobgeld <dov.grobgeld@gmail.com>
# pos of current pixel
# rotations
# mirror
# negate
# Swap 0 and 1
# Parse and create symmetries of the patterns strings
# Get rid of spaces
# compile the patterns into regular expressions for speed
# Step through table and find patterns that match.
# Note that all the patterns are searched. The last one
# caught overrides
# Build the bit pattern
# Python Imaging Library
# stuff to read GIMP palette files
# 1997-08-23 fl     Created
# 2004-09-07 fl     Support GIMP 2.0 palette files.
# Copyright (c) Fredrik Lundh 1997-2004.
# skip fields and comment lines
# the Image class wrapper
# partial release history:
# 1995-09-09 fl   Created
# 1996-03-11 fl   PIL release 0.0 (proof of concept)
# 1996-04-30 fl   PIL release 0.1b1
# 1999-07-28 fl   PIL release 1.0 final
# 2000-06-07 fl   PIL release 1.1
# 2000-10-20 fl   PIL release 1.1.1
# 2001-05-07 fl   PIL release 1.1.2
# 2002-03-15 fl   PIL release 1.1.3
# 2003-05-10 fl   PIL release 1.1.4
# 2005-03-28 fl   PIL release 1.1.5
# 2006-12-02 fl   PIL release 1.1.6
# 2009-11-15 fl   PIL release 1.1.7
# Copyright (c) 1997-2009 by Secret Labs AB.  All rights reserved.
# Copyright (c) 1995-2009 by Fredrik Lundh.
# Limit to around a quarter gigabyte for a 24-bit (3 bpp) image
# If the _imaging C module is not present, Pillow will not load.
# Note that other modules should not refer to _imaging directly;
# import Image and use the Image.core variable instead.
# Also note that Image.core is not a publicly documented interface,
# and should be considered private and subject to change.
# Explanations for ways that we know we might have an import error
# The _imaging C module is present, but not compiled for
# the right version (windows only).  Print a warning, if
# Fail here anyway. Don't let people run with a mostly broken Pillow.
# see docs/porting.rst
# Constants
# transpose
# transforms (also defined in Imaging.h)
# resampling filters (also defined in Imaging.h)
# dithers
# Not yet implemented
# default
# palettes/quantizers
# Registries
# Modes
# raw modes that may be memory mapped.  NOTE: if you change this, you
# may have to modify the stride calculation in map.c too!
# Helpers
# Codec factories (used by tobytes/frombytes and ImageFile.load)
# tweak arguments
# get decoder
# get encoder
# Simple expression analyzer
# Implementation wrapper
# FIXME: take "new" parameters / other image?
# Context manager support
# Instead of simply setting to None, we're setting up a
# deferred error that will better explain that the core image
# object is gone.
# Same as __repr__ but without unpredictable id(self),
# to keep Jupyter notebook `text/plain` output stable.
# numpy array interface support
# Binary images need to be extended from bits to bytes
# See: https://github.com/python-pillow/Pillow/issues/350
# load image first
# may pass tuple instead of argument list
# unpack data
# see RawEncode.c
# default format
# realize palette
# determine default mode
# matrix conversion
# transparency handling
# Use transparent conversion to promote from transparent
# color to an alpha channel.
# Dragons. This can't be represented by a single color
# get the new transparency color.
# use existing conversions
# If all 256 colors are in use,
# then there is no need for transparency
# can't just retrieve the palette number, got to do it
# after quantization.
# This could possibly happen if we requantize to fewer colors.
# The transparency would be totally off in that case.
# trns was converted to RGB
# if we can't make a transparent color, don't leave the old
# transparency hanging around to mess us up.
# colorspace conversion
# normalize source image and try again
# crash fail if we leave a bytes transparency in an rgb/l mode.
# Caller specified an invalid mode.
# use palette from reference image
# could be abused
# XMP tags
# no palette
# abbreviated paste(im, mask) syntax
# upper left corner given; get size from image or mask
# FIXME: use self.size here?
# should use an adapter for this!
# over image, crop if it's not the whole image.
# target for the paste
# destination image. don't copy if we're using the whole image.
# if it isn't a list, it should be a function
# check if the function can be used with point_transform
# UNDONE wiredfool -- I think this prevents us from ever doing
# a gamma function point transform on > 8bit images.
# for other modes, convert the function to a table
# FIXME: _imaging returns a confusing error message for this case
# attempt to promote self to a matching alpha mode
# do things the hard way
# alpha layer
# constant alpha
# install new palette
# RGB or RGBA value for a P or PA image
# L-mode
# pick only the used colors from the palette
# replace the palette color id of all pixels with the new id
# Palette images are [0..255], mapped through a 1 or 3
# byte/color map.  We need to remap the whole image
# from palette 1 to palette 2. New_positions is
# an array of indexes into palette 1.  Palette 2 is
# palette 1 with any holes removed.
# We're going to leverage the convert mechanism to use the
# C code to remap the image from palette 1 to palette 2,
# by forcing the source image into 'L' mode and adding a
# mapping 'L' mode palette, then converting back to 'L'
# sans palette thus converting the image bytes, then
# assigning the optimized RGB palette.
# perf reference, 9500x4000 gif, w/~135 colors
# 14 sec prepatch, 1 sec postpatch with optimization forced.
# possibly set palette dirty, then
# m_im.putpalette(mapping_palette, 'L')  # converts to 'P'
# or just force it.
# UNDONE -- this is part of the general issue with palettes
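At its core, the "convert through 'L' mode" trick described above is a 256-entry lookup-table applied to every pixel byte. A sketch of the equivalent index remap in pure Python (illustrative helper, not the actual convert-based implementation):

```python
def remap_palette_indexes(pixels: bytes, new_positions: list) -> bytes:
    # new_positions[i] is the palette-2 index for palette-1 index i;
    # build a full 256-byte translation table and apply it per pixel
    table = bytes(new_positions) + bytes(256 - len(new_positions))
    return pixels.translate(table)
```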
# Fast paths regardless of filter, as long as we're not
# translating or changing the center.
# Calculate the affine matrix.  Note that this is the reverse
# transformation (from destination image to source) because we
# want to interpolate the (discrete) destination pixel from
# the local area around the (floating) source pixel.
# The matrix we actually want (note that it operates from the right):
# (1, 0, tx)   (1, 0, cx)   ( cos a, sin a, 0)   (1, 0, -cx)
# (0, 1, ty) * (0, 1, cy) * (-sin a, cos a, 0) * (0, 1, -cy)
# (0, 0,  1)   (0, 0,  1)   (     0,     0, 1)   (0, 0,   1)
# The reverse matrix is thus:
# (1, 0, cx)   ( cos -a, sin -a, 0)   (1, 0, -cx)   (1, 0, -tx)
# (0, 1, cy) * (-sin -a, cos -a, 0) * (0, 1, -cy) * (0, 1, -ty)
# (0, 0,  1)   (      0,      0, 1)   (0, 0,   1)   (0, 0,   1)
# In any case, the final translation may be updated at the end to
# compensate for the expand flag.
# calculate output size
# We multiply a translation matrix from the right.  Because of its
# special form, this is the same as taking the image of the
# translation vector as new translation vector.
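The reverse-matrix derivation above can be sketched in code. This is a minimal illustration of the same composition (rotation about a center plus a translation, with the translation folded in from the right); the function name and degree-based angle are assumptions:

```python
import math

def rotate_matrix(angle_deg, cx, cy, tx=0.0, ty=0.0):
    # reverse affine matrix (destination -> source) for a rotation
    # about (cx, cy) followed by a translation (tx, ty)
    angle = math.radians(angle_deg)
    cos_a, sin_a = math.cos(angle), math.sin(angle)

    def transform(x, y, matrix):
        a, b, c, d, e, f = matrix
        return a * x + b * y + c, d * x + e * y + f

    matrix = [cos_a, sin_a, 0.0, -sin_a, cos_a, 0.0]
    # multiplying a translation matrix from the right is the same as
    # taking the image of the translation vector as the new translation
    matrix[2], matrix[5] = transform(-cx - tx, -cy - ty, matrix)
    matrix[2] += cx
    matrix[5] += cy
    return matrix
```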
# only set the name for metadata purposes
# may mutate self!
# Open also for reading ("+"), because TIFF save_all
# writer needs to go back and edit the written data.
# overridden by file handlers
# FIXME: the different transform methods need further explanation
# instead of bloating the method docs, add a separate chapter.
# compatibility w. old-style transform objects
# list of quads
# convert extent to an affine transform
# quadrilateral warp.  data specifies the four corners
# given as NW, SW, SE, and NE.
# Abstract handlers.
# Factories
# don't initialize
# css3-style specifier
# RGB or RGBA value for a P image
# type: ignore[name-defined]  # noqa: F821, UP037
# (shape, typestr) => mode, rawmode, color modes
# first two members of shape are set to one
# shortcuts:
# Image processing.
# Plugin registry
# Simple display support.
# Effects
# Resources
# Helper function
# returns a dict with any single item tuples/lists as individual values
# an offset pointer to the location of the nested embedded IFD.
# It should be a long, but may be corrupted.
# Extract EXIF information.  This is highly experimental,
# and is likely to be replaced with something better in a future
# version.
# The EXIF record consists of a TIFF file embedded in a JPEG
# application marker (!).
# process dictionary
# get EXIF extension
# GPS
# CameraInfo
# Seconds since 2000
# Interop
# Load all keys into self._data
# image enhancement classes
# For a background, see "Image Processing By Interpolation and
# Extrapolation", Paul Haeberli and Douglas Voorhies.  Available
# at http://www.graficaobscura.com/interp/index.html
# 1996-03-23 fl  Created
# 2009-06-16 fl  Fixed mean calculation
# EXIF tags
# Copyright (c) 2003 by Secret Labs AB
# possibly incomplete
# Sun image file handling
# 1996-05-28 fl   Fixed 32-bit alignment
# 1998-12-29 fl   Import ImagePalette module
# 2001-12-18 fl   Fixed palette loading (from Jean-Claude Rimbault)
# Copyright (c) 1997-2001 by Secret Labs AB
# Copyright (c) 1995-1996 by Fredrik Lundh
# Image plugin for Sun raster files.
# The Sun Raster file header is 32 bytes in length
# and has the following format:
# data_length = i32(s, 16)   # unreliable, ignore.
# 0: None, 1: RGB, 2: Raw/arbitrary
# 16 bit boundaries on stride
# file type: Type is the version (or flavor) of the bitmap
# file. The following values are typically found in the Type
# field:
# 0000h Old
# 0001h Standard
# 0002h Byte-encoded
# 0003h RGB format
# 0004h TIFF format
# 0005h IFF format
# FFFFh Experimental
# Old and standard are the same, except for the length tag.
# byte-encoded is run-length-encoded
# RGB looks similar to standard, but RGB byte order
# TIFF and IFF mean that they were converted from T/IFF
# Experimental means that it's something else.
# (https://www.fileformat.info/format/sunraster/egff.htm)
# MPO file handling
# See "Multi-Picture Format" (CIPA DC-007-Translation 2009, Standard of the
# Camera & Imaging Products Association)
# The multi-picture object combines multiple JPEG images (with a modified EXIF
# data format) into a single file. While it can theoretically be used much like
# a GIF animation, it is commonly used to represent 3D photographs and is (as
# of this writing) the most commonly used format by 3D cameras.
# 2014-03-13 Feneric   Created
# APP2 marker
# Baseline MP Primary Image
# Undefined
# Image plugin for MPO images.
# prep the fp in order to pass the JPEG test
# Note that the following assertion will only be invalid if something
# gets broken within JpegImagePlugin.
# no longer needed
# get ready to read first frame
# for now we can only handle reading and individual frame extraction
# skip SOI marker
# ---------------------------------------------------------------------
# Note that since MPO shares a factory with JPEG, we do not need to do a
# separate registration for it here.
# Image.register_open(MpoImageFile.format,
# PCX file handling
# This format was originally used by ZSoft's popular PaintBrush
# program for the IBM PC.  It is also supported by many MS-DOS and
# Windows applications, including the Windows PaintBrush program in
# Windows 3.
# 1996-05-20 fl   Fixed RGB support
# 1997-01-03 fl   Fixed 2-bit and 4-bit support
# 1999-02-03 fl   Fixed 8-bit support (broken in 1.0b1)
# 1999-02-07 fl   Added write support
# 2002-06-09 fl   Made 2-bit and 4-bit support a bit more robust
# 2002-07-30 fl   Seek from current position, not beginning of file
# 2003-06-03 fl   Extract DPI settings (info["dpi"])
# Copyright (c) 1995-2003 by Fredrik Lundh.
# Image plugin for Paintbrush images.
# format
# FIXME: hey, this doesn't work with the incremental loader !!!
# check if the palette is linear grayscale
# Don't trust the passed in stride.
# Calculate the approximate position for ourselves.
# CVE-2020-35653
# While the specification states that this must be even,
# not all images follow this
# save PCX files
# mode: (version, bits, planes, raw mode)
# bytes per plane
# stride should be even
# Stride needs to be kept in sync with the PcxEncode.c version.
# Ideally it should be passed in the state, but the bytes value
# gets overwritten.
# under windows, we could determine the current screen size with
# "Image.core.display_mode()[1]", but I think that's overkill...
# PCX header
# colour palette
# 768 bytes
# grayscale palette
# BUFR stub adapter
# JPEG2000 file handling
# 2014-03-12 ajh  Created
# 2021-06-30 rogermb  Extract dpi information from the 'resc' header box
# Copyright (c) 2014 Coriolis Systems Limited
# Copyright (c) 2014 Alastair Houghton
# Outside box: ensure we don't read past the known file length
# Inside box contents: ensure read does not go past box boundaries
# No length known, just read
# Skip the rest of the box if it has not been read
# Read the length and type of the next box
# Find the JP2 header box
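The box-walking logic above follows the ISO base media layout that JP2 uses: a 4-byte big-endian length and a 4-byte type, where length 0 means "to end of file" and length 1 means an 8-byte extended length follows. A minimal sketch, assuming that layout:

```python
import io
import struct

def read_box_header(fp):
    """Read one JP2/ISO box header: (type, content_length, header_size).

    content_length is None when the box extends to end of file.
    Sketch only -- the plugin also clamps reads to box boundaries.
    """
    hdr = fp.read(8)
    if len(hdr) < 8:
        return None
    length, boxtype = struct.unpack(">I4s", hdr)
    hlen = 8
    if length == 1:
        # extended 64-bit length follows the type field
        (length,) = struct.unpack(">Q", fp.read(8))
        hlen = 16
    content = None if length == 0 else length - hlen
    return boxtype, content, hlen
```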
# 2-tuple of DPI info, or None
# Image plugin for JPEG2000 images.
# Start of tile or end of codestream
# Comment
# https://github.com/python-pillow/Pillow/issues/4343 found that the
# new Image 'reduce' method was shadowed by this plugin's 'reduce'
# property. This attempts to allow for both scenarios
# Update the reduce and layers settings
# ------------------------------------------------------------
# Save support
# a simple math add-on for the Python Imaging Library
# 1999-02-15 fl   Original PIL Plus release
# 2005-05-05 fl   Simplified and cleaned up for PIL 1.1.6
# 2005-09-12 fl   Fixed int() and float() for Python 2.4.1
# Copyright (c) 1999-2005 by Secret Labs AB
# Copyright (c) 2005 by Fredrik Lundh
# convert image to suitable mode
# argument was an image.
# argument was a constant
# unary operation
# binary operation
# convert both arguments to floating point
# crop both arguments to a common size
# unary operators
# an image is "true" if it contains at least one non-zero pixel
# binary operators
# bitwise
# logical
# conversions
# build execution namespace
# number of blocks in row
# Decode next 8-byte block.
# Decode this block into 4x4 pixels
# Accumulate the results onto our 4 row accumulators
# get next control op and generate a pixel
# Decode next 16-byte block.
# Do we want the higher bits?
# We get a value between 0 and 15
# alphacode_index >= 18 and alphacode_index <= 45
# mips
# subtype
# What IS this?
# Uncompressed or DirectX compression
# alpha encoding
# Basic McIdas support for PIL
# 1997-05-05 fl  Created (8-bit images only)
# 2009-03-08 fl  Added 16/32-bit support.
# Thanks to Richard Jones and Craig Swank for specs and samples.
# Image plugin for McIdas area images.
# parse area file directory
# get mode
# FIXME: add memory map support
# no default extension
# TIFF file handling
# TIFF is a flexible, if somewhat aged, image file format originally
# defined by Aldus.  Although TIFF supports a wide variety of pixel
# layouts and compression methods, the name doesn't really stand for
# "thousands of incompatible file formats," it just feels that way.
# To read TIFF data from a stream, the stream must be seekable.  For
# progressive decoding, make sure to use TIFF files where the tag
# directory is placed first in the file.
# 1996-05-04 fl   Handle JPEGTABLES tag
# 1996-05-18 fl   Fixed COLORMAP support
# 1997-01-05 fl   Fixed PREDICTOR support
# 1997-08-27 fl   Added support for rational tags (from Perry Stoll)
# 1998-01-10 fl   Fixed seek/tell (from Jan Blom)
# 1998-07-15 fl   Use private names for internal variables
# 1999-06-13 fl   Rewritten for PIL 1.0 (1.0)
# 2000-10-11 fl   Additional fixes for Python 2.0 (1.1)
# 2001-04-17 fl   Fixed rewind support (seek to frame 0) (1.2)
# 2001-05-12 fl   Added write support for more tags (from Greg Couch) (1.3)
# 2001-12-18 fl   Added workaround for broken Matrox library
# 2002-01-18 fl   Don't mess up if photometric tag is missing (D. Alan Stewart)
# 2003-05-19 fl   Check FILLORDER tag
# 2003-09-26 fl   Added RGBa support
# 2004-02-24 fl   Added DPI support; fixed rational write support
# 2005-02-07 fl   Added workaround for broken Corel Draw 10 files
# 2006-01-09 fl   Added support for float/double tags (from Russell Nelson)
# Copyright (c) 1997-2006 by Secret Labs AB.  All rights reserved.
# Copyright (c) 1995-1997 by Fredrik Lundh
# Set these to true to force use of libtiff for reading or writing.
# little-endian (Intel style)
# big-endian (Motorola style)
# Read TIFF files
# a few tag names, just to make the code below a bit more readable
# newsphoto properties
# photoshop properties
# pseudo-tag by libtiff
# https://github.com/imagej/ImageJA/blob/master/src/main/java/ij/io/TiffDecoder.java
# Compression => pil compression name
# obsolete
# 16-bit padding
# (ByteOrder, PhotoInterpretation, SampleFormat, FillOrder, BitsPerSample,
# missing ExtraSamples
# Corel Draw 10
# JPEG compressed images handled by LibTiff and auto-converted to RGBX
# Minimal Baseline TIFF requires YCbCr images to have 3 SamplesPerPixel
# Valid TIFF header with big-endian byte order
# Valid TIFF header with little-endian byte order
# Invalid TIFF header, assume big-endian
# Invalid TIFF header, assume little-endian
# BigTIFF with big-endian byte order
# BigTIFF with little-endian byte order
# Wrapper for TIFF IFDs.
# Python >= 3.11
# will remain empty if legacy_api is false
# main tag storage
# added 2008-06-05 by Florian Hoech
# unpack on the fly
# check type
# Three branches:
# Spec'd length == 1, Actual length 1, store as element
# Spec'd length == 1, Actual > 1, Warn and truncate. Formerly barfed.
# No Spec, Actual length 1, Formerly (<4.2) returned a 1 element tuple.
# Don't mess with the legacy api, since it's frozen.
# rationals
# We've got a builtin tag with 1 expected entry
# Spec'd length > 1 or undefined
# Unspec'd, and length > 1
# Basic type, except for the legacy API.
# remerge of https://github.com/python-pillow/Pillow/pull/1416
# ignore unsupported type
# FIXME What about tagdata?
# pass 1: convert tags to binary format
# always write tags in ascending order
# count is sum of lengths for string and arbitrary data
# figure out if data fits into the entry
# pad to word
# update strip offset data to point beyond auxiliary data
# pass 2: write entries to file
# -- overwrite here for multi-page --
# end of entries
# pass 3: write auxiliary data to file
# skip TIFF header on subsequent pages
# Legacy ImageFileDirectory support.
# defined in ImageFileDirectory_v2
# an indicator for multipage tiffs
# undone -- switch this pointer
# Image plugin for TIFF files.
# setup frame pointers
# Use repr to avoid str(bytes)
# and load the first frame
# This IFD has already been processed
# Declare this to be the end of the image
# fill the legacy tag/ifd entries
# allow closing if we're on the first frame, there's no next
# This is the ImageFile.load path only, libtiff specific below.
# load IFD data from fp before it is closed
# (self._compression, (extents tuple),
# To be nice on memory footprint, if there's a
# file descriptor, use that instead of reading
# into a string in python.
# flush the file descriptor, prevents error on pypy 2.4+
# should also eliminate the need for fp.tell
# in _seek
# io.BytesIO has a fileno, but raises an OSError if
# it doesn't use a file descriptor.
# We've got a stringio like thing passed in. Yay for all in memory.
# The decoder needs the entire file in one shot, so there's not
# a lot we can do here other than give it the entire file.
# unless we could do something like get the address of the
# underlying string for stringio.
# Rearranging for supporting byteio items, since they have a fileno
# that raises an OSError if there's no underlying fp. Easier to
# deal with here by reordering.
# we've got an actual file on disk, pass in the fp.
# Save and restore the file position, because libtiff will move it
# outside of the Python runtime, and that will confuse
# io.BufferedReader and possible others.
# NOTE: This must use os.lseek(), and not fp.tell()/fp.seek(),
# because the buffer read head already may not equal the actual
# file position, and fp.seek() may just adjust its internal
# pointer and not actually seek the OS file handle.
# 4 bytes, otherwise the trace might error out
# we have something else.
# UNDONE -- so much for that buffer size thing.
# might be shared
# extract relevant tags
# photometric is a required tag, but not everyone is reading
# the specification
# old style jpeg compression images most certainly are YCbCr
# SAMPLEFORMAT is properly per band, so an RGB image will
# be (1,1,1).  But, we don't support per band pixel types,
# and anything more than one band is a uint8. So, just
# take the first element. Revisit this if adding support
# for more exotic images.
# RGB, YCbCr, LAB
# CMYK
# DoS check: samples_per_pixel can be a Long, and we extend the tuple below
# If a file has more values in bps_tuple than expected,
# remove the excess.
# If a file has only one value in bps_tuple, when it should have more,
# presume it is the same number of bits for all of the samples.
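The two fix-ups above (trim excess values, repeat a lone value) can be sketched with a hypothetical helper:

```python
def normalize_bps(bps_tuple, samples_per_pixel):
    """Sketch of the BitsPerSample fix-ups described above."""
    if len(bps_tuple) > samples_per_pixel:
        # more values than samples: drop the excess
        bps_tuple = bps_tuple[:samples_per_pixel]
    elif len(bps_tuple) == 1 and samples_per_pixel > 1:
        # one value where several were expected: presume they all match
        bps_tuple = bps_tuple * samples_per_pixel
    return bps_tuple
```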
# mode: check photometric interpretation and bits per pixel
# dots per inch
# dots per centimeter. convert to dpi
# used to default to 1, but now 2
# For backward compatibility,
# we also preserve the old behavior
# No absolute unit of measurement
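The unit handling above reduces to a small conversion: ResolutionUnit 2 is already inches, 3 is centimeters (1 in = 2.54 cm), and 1 has no absolute meaning, so the raw values pass through. An illustrative sketch:

```python
def resolution_to_dpi(xres, yres, unit):
    """Convert a TIFF resolution pair to dots per inch (sketch)."""
    if unit == 3:
        # dots per centimeter -> dots per inch
        return (xres * 2.54, yres * 2.54)
    # unit 2 is inches already; unit 1 is unitless, pass through
    return (xres, yres)
```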
# build tile descriptors
# Decoder expects entire file as one tile.
# There's a buffer size limit in load (64k)
# so large g4 images will fail if we use that
# function.
# Setup the one tile for the whole image, then
# use the _load_libtiff function.
# libtiff handles the fillmode for us, so 1;IR should
# actually be 1;I. Including the R double reverses the
# bits, so stripes of the image are reversed.  See
# https://github.com/python-pillow/Pillow/issues/279
# Replace fillorder with fillorder=1
# this should always work, since all the
# fillorder==2 modes have a corresponding
# fillorder=1 mode
# YCbCr images with new jpeg compression with pixels in one plane
# unpacked straight into RGB values
# libtiff always returns the bytes in native order.
# we're expecting image byte order. So, if the rawmode
# contains I;16, we need to convert from native to image
# byte order.
# Offset in the tile tuple is 0, we go from 0,0 to
# w,h, and we only do this once -- eds
# striped image
# tiled image
# Every tile covers the image. Only use the last offset
# bytes per line
# each band on its own layer
# adjust stride width accordingly
# Fix up info.
# fixup palette descriptor
# Write TIFF files
# little endian is default except for image modes with
# explicit big endian byte-order
# mode => rawmode, byteorder, photometrics,
# compression value may be from BMP. Ignore it
# OJPEG is obsolete, so use new-style JPEG compression instead
# required for color libtiff images
# write any arbitrary tags passed in as an ImageFileDirectory
# might not be an IFD. Might not have populated type
# IFD offset that may not be correct in the saved image
# Determined by the image format and should not be copied from legacy_ifd.
# additions written by Greg Couch, gregc@cgl.ucsf.edu
# inspired by image-sig posting from Kevin Cazabon, kcazabon@home.com
# preserve tags from original TIFF image file
# preserve ICC profile (should also work when saving other formats
# which support profiles as TIFF) -- 2008-06-06 Florian Hoech
# data orientation
# aim for given strip size (64 KB by default) when using libtiff writer
# JPEG encoder expects multiple of 8 rows
# this is adjusted by IFD writer
# no compression by default:
# optional types for non core tags
# STRIPOFFSETS and STRIPBYTECOUNTS are added by the library
# based on the data in the strip.
# OSUBFILETYPE is deprecated.
# The other tags expect arrays with a certain length (fixed or depending on
# BITSPERSAMPLE, etc), passing arrays with a different length will result in
# segfaults. Block these tags until we add extra validation.
# SUBIFD may also cause a segfault.
# bits per sample is a single short in the tiff directory, not a list.
# Merge the ones that we have with (optional) more bits from
# the original file, e.g x,y resolution so that we can
# save(load('')) == original file.
# Libtiff can only process certain core items without adding
# them to the custom dictionary.
# Custom items are supported for int, float, unicode, string and byte
# values. Other types and tuples require a tagtype.
# libtiff always expects the bytes in native order.
# we're storing image byte order. So, if the rawmode
# contains I;16, we need to convert from image to native
# byte order.
# Pass tags as sorted list so that the tags are set in a fixed order.
# This is required by libtiff for some tags. For example, the JPEGQUALITY
# pseudo tag requires that the COMPRESS tag was already set.
# -- helper for multi-page save --
# just to access o32 and o16 (using correct byte order)
# byte
# ascii
# short
# long
# rational
# sbyte
# undefined
# sshort
# slong
# srational
# float
# double
# ifd
# complex
# long8
# StripOffsets
# FreeOffsets
# TileOffsets
# JPEGQTables
# JPEGDCTables
# JPEGACTables
# Reset everything.
# empty file - first page
# fix offsets
# Make it easy to finish a frame without committing to a new one.
# Call this to finish a frame.
# pad to 16 byte boundary
# skip the locally stored value that is not an offset
# offset is now too large - we must convert long to long8
# offset is now too large - we must convert short to long
# XXX TODO
# simple case - the offset is just one and therefore it is
# local (not referenced with another offset)
# Move back past the new offset, past 'count', and before 'field_type'
# rewrite the type
# Register
# drawing interface operations
# 1996-04-13 fl   Created (experimental)
# 1996-08-07 fl   Filled polygons, ellipses.
# 1996-08-13 fl   Added text support
# 1998-06-28 fl   Handle I and F images
# 1998-12-29 fl   Added arc; use arc primitive to draw ellipses
# 1999-01-10 fl   Added shape stuff (experimental)
# 1999-02-06 fl   Added bitmap support
# 1999-02-11 fl   Changed all primitives to take options
# 1999-02-20 fl   Fixed backwards compatibility
# 2000-10-12 fl   Copy on write, when necessary
# 2001-02-18 fl   Use default ink for bitmap/text also in fill mode
# 2002-12-10 fl   Added experimental support for RGBA-on-RGB drawing
# 2002-12-11 fl   Refactored low-level drawing API (work in progress)
# 2004-08-26 fl   Made Draw() a factory function, added getdraw() support
# 2004-09-04 fl   Added width support to line primitive
# 2004-09-10 fl   Added font mode handling
# 2006-06-19 fl   Added font bearing support (getmask2)
# Copyright (c) 1997-2006 by Secret Labs AB
# Copyright (c) 1996-2006 by Fredrik Lundh
# experimental access to the outline API
# FIXME: fix Fill2 to properly support matte for I+F images
# aliasing is okay for other modes
# FIXME: should add a font repository
# This is a straight line, so no joint is required
# Cover potential gaps between the line and the joint
# To avoid expanding the polygon outwards,
# use the fill as a mask
# The two left and two right corners are joined
# The two top and two bottom corners are joined
# If all corners are joined, that is a circle
# If the corners have no curve,
# or there are no corners,
# that is a rectangle
# Draw top and bottom halves
# Draw left and right halves
# Draw four separate corners
# type: ignore[union-attr,misc]
# image_text.font.getmask2(mode="RGBA")
# returns color in RGB bands and mask in A
# extract mask and set text alpha
# Draw stroked text
# Draw normal text
# Only draw normal text
# based on an implementation by Eric S. Raymond
# amended by yo1995 @20180806
# seed point already has fill color
# seed point outside image
# use a set to keep record of current and previous edge pixels
# to reduce memory consumption
# 4 adjacent method
# If already processed, or if a coordinate is negative, skip
# discard pixels processed
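The frontier-set approach above can be sketched on a plain pixel dict: only the current edge set is carried between iterations, not every visited pixel, which keeps memory bounded by the frontier size. A minimal sketch (not Pillow's actual implementation, which works on image access objects):

```python
def flood_fill(pixels, width, height, xy, value):
    """4-adjacent flood fill using a frontier set (sketch)."""
    background = pixels[xy]
    if background == value:
        return  # seed point already has fill color
    pixels[xy] = value
    edge = {xy}
    while edge:
        new_edge = set()
        for x, y in edge:
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                # skip out-of-bounds and already-processed pixels
                if 0 <= nx < width and 0 <= ny < height \
                        and pixels[(nx, ny)] == background:
                    pixels[(nx, ny)] = value
                    new_edge.add((nx, ny))
        edge = new_edge  # previous edge pixels are discarded
```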
# 1. Error Handling
# 1.1 Check `n_sides` has an appropriate value
# 1.2 Check `bounding_circle` has an appropriate value
# 1.3 Check `rotation` has an appropriate value
# 2. Define Helper Functions
# Start with the bottom left polygon vertex
# 3. Variable Declarations
# 4. Compute Vertices
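Step 4 amounts to placing n_sides points evenly around the bounding circle. The start-angle offset below (aiming at a bottom-left first vertex) is an assumption for illustration; Pillow's own helper also performs the validation of steps 1.1-1.3:

```python
import math

def regular_polygon_vertices(bounding_circle, n_sides, rotation=0):
    """Place n_sides vertices evenly on the bounding circle (sketch)."""
    (cx, cy), r = bounding_circle
    # hypothetical start angle: quarter turn plus half a side
    start = 90 + (180 / n_sides) + rotation
    vertices = []
    for i in range(n_sides):
        angle = math.radians(start + i * 360 / n_sides)
        vertices.append((cx + r * math.cos(angle), cy + r * math.sin(angle)))
    return vertices
```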
# stuff to read (and render) GIMP gradient files
# Enable auto-doc for data member
# expand to RGBA
# add to palette
# GIMP 1.2 gradient files don't contain a name, but GIMP 1.3 files do
# a class to read from a container file
# 1995-06-18 fl     Created
# 1995-09-07 fl     Added readline(), readlines()
# Copyright (c) 1995 by Fredrik Lundh
# Always false.
# clamp
# image palette object
# 1996-03-11 fl   Rewritten.
# 1997-01-03 fl   Up and running.
# 1997-08-23 fl   Added load hack
# 2001-04-16 fl   Fixed randint shadow bug in random()
# if set, palette contains raw data
# Declare tostring as an alias for tobytes
# Search for an unused index
# allocate new color slot
# Internal
# FIXME: supports GIMP gradients only
# data, rawmode
# align by align parameter
# align left by anchor
# QOI support for PIL
# colorspace
# QOI_OP_RGB
# QOI_OP_RGBA
# QOI_OP_INDEX
# QOI_OP_DIFF
# QOI_OP_LUMA
# QOI_OP_RUN
# transform wrappers
# 2002-04-08 fl   Created
# Copyright (c) 2002 by Secret Labs AB
# Copyright (c) 2002 by Fredrik Lundh
# can be overridden
# DCX file handling
# DCX is a container file format defined by Intel, commonly used
# for fax applications.  Each DCX file consists of a directory
# (a list of file offsets) followed by a set of (usually 1-bit)
# PCX files.
# 1996-03-20 fl   Properly derived from PcxImageFile.
# 1998-07-15 fl   Renamed offset attribute to avoid name clash
# 2002-07-30 fl   Fixed file handling
# Copyright (c) 1997-98 by Secret Labs AB.
# Copyright (c) 1995-96 by Fredrik Lundh.
# QUIZ: what's this value, then?
# Image plugin for the Intel DCX format.
# Component directory
# FLI/FLC file handling.
# decoder
# Image plugin for the FLI/FLC animation format.  Use the <b>seek</b>
# method to load individual frames.
# frames
# image characteristics
# animation speed
# look for palette
# prefix chunk; ignore it
# look for palette chunk
# set things up to decode first frame
# load palette
# ensure that the previous frame was loaded
# move to next frame
# Magic ("DDS ")
# DDS flags
# DDS caps
# Pixel Format
# dxgiformat.h
# Backward compatibility layer
# pixel format
# Texture contains uncompressed RGB data
# ignoring flags which pertain to volume textures and cubemaps
# Some masks will be padded with zeros, e.g. R 0b11 G 0b1100
# Calculate how many zeros each mask is padded with
# And the maximum value of each channel without the padding
# Remove the zero padding, and scale it to 8 bits
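The three steps above (count the padding zeros, find the channel maximum, rescale) can be sketched for one channel; for the 5-bit red channel of RGB565 this maps 31 back to 255:

```python
def channel_from_mask(pixel, mask):
    """Extract one channel through a DDS bitmask, scaled to 8 bits (sketch)."""
    if mask == 0:
        return 0
    # count how many zeros the mask is padded with (trailing zeros)
    shift = (mask & -mask).bit_length() - 1
    # maximum value of the channel without the padding
    max_value = mask >> shift
    # remove the zero padding, then scale to 8 bits
    value = (pixel & mask) >> shift
    return value * 255 // max_value
```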
# header size
# mipmaps
# pfsize, pfflags, fourcc, bitcount
# dwRGBABitMask
# dxgi_format, 2D resource, misc, array size, straight alpha
# Windows Icon support for PIL
# This plugin is a refactored version of Win32IconImagePlugin by Bryan Davis
# <casadebender@gmail.com>.
# https://code.google.com/archive/p/casadebender/wikis/Win32IconImagePlugin.wiki
# Icon format references:
# (2+2)
# Another image has been supplied for this size
# with a different bit depth
# TODO: invent a more convenient method for proportional scalings
# idCount(2)
# 0 means 256
# bWidth(1)
# bHeight(1)
# bColorCount(1)
# bReserved(1)
# wPlanes(2)
# wBitCount(2)
# dwBytesInRes(4)
# dwImageOffset(4)
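The ICONDIR / ICONDIRENTRY field list above can be read with one struct format per part. A sketch assuming the standard Windows layout (little-endian, with 0 in the width/height bytes meaning 256):

```python
import io
import struct

def read_ico_directory(fp):
    """Parse the ICONDIR header and its ICONDIRENTRY records (sketch)."""
    # idReserved(2), idType(2), idCount(2)
    reserved, restype, count = struct.unpack("<3H", fp.read(6))
    if (reserved, restype) != (0, 1):
        raise ValueError("not an ICO file")
    entries = []
    for _ in range(count):
        # bWidth, bHeight, bColorCount, bReserved, wPlanes, wBitCount,
        # dwBytesInRes, dwImageOffset
        w, h, colors, _rsvd, planes, bpp, size, offset = \
            struct.unpack("<4B2H2I", fp.read(16))
        entries.append({
            "size": (w or 256, h or 256),  # 0 means 256
            "bpp": bpp,
            "bytes": size,
            "offset": offset,
        })
    return entries
```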
# check magic
# Number of items in file
# Get headers for each item
# See Wikipedia
# No. of colors in image (0 if >=8bpp)
# See Wikipedia notes about color depth.
# We need this just to differ images with equal sizes
# ICO images are usually squares
# png frame
# XOR + AND mask bmp frame
# change tile dimension to only encompass XOR image
# figure out where AND mask image starts
# 32-bit color depth icon image allows semitransparent areas
# PIL's DIB format ignores transparency bits, recover them.
# The DIB is packed in BGRX byte order where X is the alpha
# channel.
# Back up to start of bmp data
# extract every 4th byte (e.g. 3,7,11,15,...)
# convert to an 8bpp grayscale image
# 8bpp
# (w, h)
# source chars
# raw decoder
# 8bpp inverted, unpadded, reversed
# get AND image from end of bitmap
# bitmap row data is aligned to word boundaries
# the total mask data is
# padded row size * height / bits per char
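The mask-size arithmetic above is small enough to sketch directly: 1-bpp rows padded to 32-bit word boundaries, then bits divided by 8 bits per byte:

```python
def and_mask_size(width, height):
    """Byte size of an ICO AND mask, per the formula above (sketch)."""
    # pad each 1-bpp row out to a 32-bit word boundary
    padded_row_bits = ((width + 31) // 32) * 32
    # padded row size * height / bits per byte
    return padded_row_bits * height // 8
```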
# convert raw data to image
# 1 bpp
# 1bpp inverted, padded, reversed
# now we have two images, im is XOR image and mask is AND image
# apply mask image as alpha channel
# Image plugin for Windows Icon files.
# if tile is PNG, it won't really be loaded yet
# Flag the ImageFile.Parser so that it
# just does all the decode at the end.
# global image statistics
# 1996-04-05 fl   Created
# 1997-05-21 fl   Added mask; added rms, var, stddev attributes
# 1997-08-05 fl   Added median
# 1998-07-05 hk   Fixed integer overflow error
# This class shows how to implement delayed evaluation of attributes.
# To get a certain value, simply access the corresponding attribute.
# The __getattr__ dispatcher takes care of the rest.
# Copyright (c) Fredrik Lundh 1996-97.
# compatibility
# GIF file handling
# 1996-12-14 fl   Added interlace support
# 1996-12-30 fl   Added animation support
# 1997-01-05 fl   Added write support, fixed local colour map bug
# 1997-02-23 fl   Make sure to load raster data in getdata()
# 1997-07-05 fl   Support external decoder (0.4)
# 1998-07-09 fl   Handle all modes when saving (0.5)
# 2001-04-16 fl   Added rewind support (seek to frame 0) (0.6)
# 2001-04-17 fl   Added palette optimization (0.7)
# 2002-06-06 fl   Added transparency support for save (0.8)
# 2004-02-24 fl   Disable interlacing for small images
# Copyright (c) 1995-2004 by Fredrik Lundh
#: .. versionadded:: 9.1.0
# Identify/read GIF files
# Image plugin for GIF images.  This plugin supports both GIF87 and
# GIF89 images.
# get global palette
# check if palette contains colour indices
# backup to last frame
# extensions
# graphic control extension
# disposal method - find the value of bits 4 - 6
# only set the dispose if it is not
# unspecified. I'm not sure if this is
# correct, but it seems to prevent the last
# frame from looking odd for some animations
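The packed byte of the graphic control extension can be sketched like this; the disposal method occupies three bits and the lowest bit flags a transparent color (illustrative only, field layout per the GIF89a spec):

```python
def gif_frame_control(flags):
    """Unpack the Graphic Control Extension packed byte (sketch)."""
    return {
        "disposal": (flags >> 2) & 0b111,   # disposal method bits
        "transparency": bool(flags & 1),    # transparent color flag
    }
```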
# comment extension
# Read this comment block
# If multiple comment blocks in frame, separate with \n
# application extension
# local image
# extent
# replace with background colour
# only dispose the extent in this frame
# by convention, attempt to use transparency first
# replace with previous contents
# Write GIF files
# a bytes palette
# local image header
# end of image data
# a copy is required here since seek can still mutate the image
# delta frame
# This frame is identical to the previous frame
# To appear correctly in viewers using a convention,
# only consider transparency, and not background color
# When the delta is zero, fill the image with transparency
# Convert to L without considering palette
# Since multiple frames will not be written, use the combined duration
# global header
# compress difference
# workaround for @PIL153
# extension intro
# length
# packed fields
# duration
# local color table flag
# offset
# bits
# Unused by default.
# To use, uncomment the register_save call at the end of the file.
# If you need real GIF compression and/or RGB quantization, you
# can use the external NETPBM/PBMPLUS utilities.  See comments
# below for information on how to enable this.
# Pipe ppmquant output into ppmtogif
# "ppmquant 256 %s | ppmtogif > %s" % (tempfile, filename)
# Allow ppmquant to receive SIGPIPE if ppmtogif exits
# Force optimization so that we can test performance against
# cases where it took lots of memory and time previously.
# Potentially expensive operation.
# The palette saves 3 bytes per color not used, but palette
# lengths are restricted to 3*(2**N) bytes. Max saving would
# be 768 -> 6 bytes if we went all the way down to 2 colors.
# * If we're over 128 colors, we can't save any space.
# * If there aren't any holes, it's not worth collapsing.
# * If we have a 'large' image, the palette is in the noise.
# create the new palette if not every color is used
# check which colors are used
# check that the palette would become smaller when saved
# check that the palette is not already the smallest possible size
# calculate the palette size for the header
# add the missing amount of bytes
# the palette has to be 2<<n in size
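The header-size arithmetic above can be sketched as a hypothetical helper: the stored field is n where the color table holds 2 << n colors, so the palette is padded up to the next power of two (minimum 2 colors, 3 bytes per RGB entry):

```python
import math

def color_table_size(n_colors):
    """Return (header size field, padding bytes needed) -- sketch."""
    n = max(2, n_colors)
    # smallest size such that 2 << size >= n
    size = math.ceil(math.log2(n)) - 1
    padded_colors = 2 << size
    padding_bytes = (padded_colors - n_colors) * 3  # 3 bytes per RGB entry
    return size, padding_bytes
```

A 5-color palette, for example, pads up to 8 colors (size field 2, 9 bytes of padding); the full 256-color case needs none.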
# WebPImagePlugin stores an RGBA value in info["background"]
# So it must be converted to the same format as GifImagePlugin's
# info["background"] - a global color table index
# then there is no need for the background color
# Ignore non-opaque WebP background
# Header Block
# https://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp
# version
# canvas width
# canvas height
# Logical Screen Descriptor
# size of global color table + global color table flag
# background + reserved/aspect
# Global Color Table
# number of loops
# Legacy GIF utilities
# make sure raster data is available
# Uncomment the following line if you wish to use NETPBM/PBMPLUS
# instead of the built-in "uncompressed" GIF encoder
# Image.register_save(GifImageFile.format, _save_netpbm)
# standard channel operations
# 1996-03-24 fl   Created
# 1996-08-13 fl   Added logical operations (for "1" images)
# 2000-10-12 fl   Added offset method (from Image.py)
# Copyright (c) 1997-2000 by Secret Labs AB
# Copyright (c) 1996-2000 by Fredrik Lundh
# TIFF tags
# This module provides clear-text names for various well-known
# TIFF tags.  The TIFF codec works just fine without it.
# Copyright (c) Secret Labs AB 1999.
# This module provides constants and clear-text names for various
# well-known TIFF tags.
# Using get will call hash(value), which can be expensive
# for some types (e.g. Fraction). Since self.enum is rarely
# used, it's usually better to test it first.
# Map tag numbers to tag info.
# The length here differs from the length in the tiff spec.  For
# numbers, the tiff spec is for the number of fields returned. We
# agree here.  For string-like types, the tiff spec uses the length of
# field in bytes.  In Pillow, we are using the number of expected
# fields, in general 1 for string-like types.
# TIFF/EP, Adobe DNG
# Adobe DNG
# obsolete JPEG tags
# Four private SGI tags
# FIXME add more tags here
# MPInfo
# UNDONE, check
# Can be more than one
# see Issue #2006
# ExifIFD
# GPSInfoIFD
# InteroperabilityIFD
# Legacy Tags structure
# these tags aren't included above, but were in the previous versions
# Additional Exif Info
# Populate legacy structure.
# Map type numbers to type names -- defined in ImageFileDirectory.
# These tags are handled by default in libtiff, without
# adding to the custom dictionary. From tif_dir.c, searching for
# case TIFFTAG in the _TIFFVSetField function:
# Line: item.
# 148: case TIFFTAG_SUBFILETYPE:
# 151: case TIFFTAG_IMAGEWIDTH:
# 154: case TIFFTAG_IMAGELENGTH:
# 157: case TIFFTAG_BITSPERSAMPLE:
# 181: case TIFFTAG_COMPRESSION:
# 202: case TIFFTAG_PHOTOMETRIC:
# 205: case TIFFTAG_THRESHHOLDING:
# 208: case TIFFTAG_FILLORDER:
# 214: case TIFFTAG_ORIENTATION:
# 221: case TIFFTAG_SAMPLESPERPIXEL:
# 228: case TIFFTAG_ROWSPERSTRIP:
# 238: case TIFFTAG_MINSAMPLEVALUE:
# 241: case TIFFTAG_MAXSAMPLEVALUE:
# 244: case TIFFTAG_SMINSAMPLEVALUE:
# 247: case TIFFTAG_SMAXSAMPLEVALUE:
# 250: case TIFFTAG_XRESOLUTION:
# 256: case TIFFTAG_YRESOLUTION:
# 262: case TIFFTAG_PLANARCONFIG:
# 268: case TIFFTAG_XPOSITION:
# 271: case TIFFTAG_YPOSITION:
# 274: case TIFFTAG_RESOLUTIONUNIT:
# 280: case TIFFTAG_PAGENUMBER:
# 284: case TIFFTAG_HALFTONEHINTS:
# 288: case TIFFTAG_COLORMAP:
# 294: case TIFFTAG_EXTRASAMPLES:
# 298: case TIFFTAG_MATTEING:
# 305: case TIFFTAG_TILEWIDTH:
# 316: case TIFFTAG_TILELENGTH:
# 327: case TIFFTAG_TILEDEPTH:
# 333: case TIFFTAG_DATATYPE:
# 344: case TIFFTAG_SAMPLEFORMAT:
# 361: case TIFFTAG_IMAGEDEPTH:
# 364: case TIFFTAG_SUBIFD:
# 376: case TIFFTAG_YCBCRPOSITIONING:
# 379: case TIFFTAG_YCBCRSUBSAMPLING:
# 383: case TIFFTAG_TRANSFERFUNCTION:
# 389: case TIFFTAG_REFERENCEBLACKWHITE:
# 393: case TIFFTAG_INKNAMES:
# Following pseudo-tags are also handled by default in libtiff:
# TIFFTAG_JPEGQUALITY 65537
# some of these are not in our TAGS_V2 dict and were included from tiff.h
# This list also exists in encode.c
# as above
# this has been in our tests forever, and works
# We don't have support for subfiletypes
# We don't have support for writing tiled images with libtiff
# Tiled images
# Ink Names either
# Note to advanced users: There may be combinations of these
# parameters and values that when added properly, will work and
# produce valid tiff images that may work in your application.
# It is safe to add and remove tags from this set from Pillow's point
# of view so long as you test against libtiff.
# MPEG file handling
# Copyright (c) Fredrik Lundh 1995.
# Bitstream parser
# Image plugin for MPEG streams.  This plugin can identify a stream,
# but it cannot read it.
# HDF5 stub adapter
# Copyright (c) 2000-2003 by Fredrik Lundh
# PIXAR raster support for PIL
# notes:
# helpers
# Image plugin for PIXAR raster images.
# assuming a 4-byte magic label
# read rest of header
# get channel/depth descriptions
# FIXME: to be continued...
# create tile descriptor (assuming "dumped")
# a Windows DIB display interface
# 1996-05-20 fl   Created
# 1996-09-20 fl   Fixed subregion exposure
# 1997-09-21 fl   Added draw primitive (for tzPrint)
# 2003-05-21 fl   Added experimental Window/ImageWindow classes
# 2003-09-05 fl   Added fromstring/tostring methods
# Copyright (c) Secret Labs AB 1997-2003.
# Copyright (c) Fredrik Lundh 1996-2003.
# IPTC/NAA file handling
# 1995-10-01 fl   Created
# 1998-03-09 fl   Cleaned up and added to PIL
# 2002-06-18 fl   Added getiptcinfo helper
# Copyright (c) Secret Labs AB 1997-2002.
# Image plugin for IPTC/NAA datastreams.  To read IPTC/NAA fields
# from TIFF and JPEG files, use the <b>getiptcinfo</b> function.
# get an IPTC field header
# syntax
# field size
# load descriptive fields
# mode
# compression
# tile
# Copy image data to temporary file
# To simplify access to the extracted file,
# prepend a PPM header
# return info dictionary right away
# extract the IPTC/NAA resource
# get raw data from the IPTC/NAA tag (PhotoShop tags the data
# as 4-byte integers, so we cannot use the get method...)
# no properties
# create an IptcImagePlugin object without initializing it
# parse the IPTC information chunk
# expected failure
# Only support single-format files.
# I don't know of any multi-format file.
# base class for image file handlers
# 1996-03-11 fl   Fixed load mechanism.
# 1996-04-15 fl   Added pcx/xbm decoders.
# 1996-04-30 fl   Added encoders.
# 1996-12-14 fl   Added load helpers
# 1997-01-11 fl   Use encode_to_file where possible
# 1997-08-27 fl   Flush output in _save
# 1998-03-05 fl   Use memory mapping for some modes
# 1999-02-04 fl   Use memory mapping also for "I;16" and "I;16B"
# 1999-05-31 fl   Added image parser
# 2000-10-12 fl   Set readonly flag on memory-mapped images
# 2002-03-20 fl   Use better messages for common decoder errors
# 2003-04-21 fl   Fall back on mmap/map_buffer if map is not available
# 2003-10-30 fl   Added StubImageFile class
# 2004-02-25 fl   Made incremental parser more robust
# sort on offset
# ImageFile base class
# until we know better
# filename
# stream
# end of data (ord)
# unsupported mode
# got header but not the first frame
# close the file only if we have opened it in this constructor
# raise exception if something's wrong.  must be called
# directly after open, and closes file when finished.
# look for read/seek overrides
# don't use mmap if there are custom read/seek functions
# try memory mapping
# use mmap, if possible
# After trashing self.im,
# we might need to reload the palette data.
# initialize to unknown error
# sort tiles in file order
# FIXME: This is a hack to handle TIFF's JpegTables tag.
# Remove consecutive duplicates that only differ by their offset
# truncated png/gif
# truncated jpeg
# Need to cleanup here to prevent leaks
# still raised if decoder fails to return anything
# create image memory if necessary
# create palette (optional)
# may be defined for contained formats
# def load_seek(self, pos: int) -> None:
# may be defined for blocked formats (e.g. PNG)
# def load_read(self, read_bytes: int) -> bytes:
# Only check upper limit on frames if additional seek operations
# are not required to do so
# become the other object (!)
# collect data
# parse what we have
# skip header
# end of stream
# decoding error
# end of image
# if we end up here with no decoder, this file cannot
# be incrementally parsed.  wait until we've gotten all
# available data
# attempt to open this file
# not enough data
# custom load code, or multiple tiles
# initialize decoder
# calculate decoder offset
# finish decoding
# get rid of what's left in the buffers
# incremental parsing not possible; reopen the file
# now that we have all data
# FIXME: make MAXBLOCK a configuration parameter
# It would be great if we could have the encoder specify what it needs
# But, it would need at least the image size in most cases. RawEncode is
# a tricky case.
# compress to Python file-compatible object
# slight speedup: compress to real file object
# following c code
# bad configuration
# a Tk display interface
# 96-04-08 fl   Created
# 96-09-06 fl   Added getimage method
# 96-11-01 fl   Rewritten, removed image attribute and crop method
# 97-05-09 fl   Use PyImagingPaste method instead of image type
# 97-05-12 fl   Minor tweaks to match the IFUNC95 interface
# 97-05-17 fl   Support the "pilbitmap" booster patch
# 97-06-05 fl   Added file= and data= argument to image constructors
# 98-03-09 fl   Added width and height methods to Image classes
# 98-07-02 fl   Use default mode for "P" images without palette attribute
# 98-07-02 fl   Explicitly destroy Tkinter image objects
# 99-07-24 fl   Support multiple Tk interpreters (from Greg Couch)
# 99-07-26 fl   Automatically hook into Tkinter (if possible)
# 99-08-15 fl   Hook uses _imagingtk instead of _imaging
# Copyright (c) 1997-1999 by Secret Labs AB
# Check for Tkinter interface hooks
# activate Tkinter hook
# may raise an error if it cannot attach to Tkinter
# PhotoImage
# Tk compatibility: file or data
# got an image instead of a mode
# palette mapped data
# ignore internal errors
# convert to blittable
# convert directly between buffers
# BitmapImage
# Windows Cursor support for PIL
# Image plugin for Windows Cursor files.
# pick the largest cursor in the file
# load as bitmap
# patch up the bitmap height
# stuff to read simple, teragon-style palette files
# Decoder options as module globals, until there is a way to pass parameters
# to Image.open (see https://github.com/python-pillow/Pillow/issues/569)
# coding brands
# We accept files with AVIF container brands; we can't yet know if
# the ftyp box has the correct compatible brands, but if it doesn't
# then the plugin will raise a SyntaxError which Pillow will catch
# before moving on to the next plugin that accepts the file.
# Also, because this file might not actually be an AVIF file, we
# don't raise an error if AVIF support isn't properly compiled.
# Setup the AVIF encoder
# Update frame duration
# Update frame index
# XPM File handling
# 1996-12-29 fl   Created
# 2001-02-17 fl   Use 're' instead of 'regex' (Python 2.1) (0.7)
# XPM header
# Image plugin for X11 pixel maps.
# skip forward to next string
# load palette description
# process colour key
# unknown colour
# missing colour key
# load all image data in one chunk
# "4:2:0"
# "4:4:4"
# PIL raster font management
# 1996-08-07 fl   created (experimental)
# 1997-08-25 fl   minor adjustments to handle fonts from pilfont 0.3
# 1999-02-06 fl   rewrote most font management stuff in C
# 1999-03-17 fl   take pth files into account in load_path (from Richard Jones)
# 2001-02-17 fl   added freetype support
# 2001-05-09 fl   added TransposedFont wrapper class
# 2002-03-04 fl   make sure we have a "L" or "1" font
# 2002-12-04 fl   skip non-directory entries in the system path
# 2003-04-29 fl   add embedded default font
# 2003-09-27 fl   added support for truetype charmap encodings
# Todo:
# Adapt to PILFONT2 format (16-bit fonts, compressed, single file)
# FIXME: add support for pilfont2 format (see FontFile.py)
# Font metrics format:
# To place a character, cut out srcbox and paste at dstbox,
# relative to the character position.  Then move the character
# position according to dx, dy.
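The placement rule above (cut out srcbox, paste at dstbox relative to the character position, then advance by dx, dy) can be sketched as follows; the metrics layout and the `layout` helper are simplified assumptions for illustration, not the module's actual API:

```python
# Hypothetical per-character metrics:
#   char -> (dx, dy, dst_box, src_box), boxes as (x0, y0, x1, y1)
def layout(text, metrics):
    """For each character, return (src_box, paste_box): the source box
    to cut out and the destination box to paste it at, advancing the
    character position by (dx, dy) after each glyph."""
    x = y = 0
    placements = []
    for ch in text:
        dx, dy, dst, src = metrics[ch]
        x0, y0, x1, y1 = dst
        # paste position is relative to the current character position
        placements.append((src, (x + x0, y + y0, x + x1, y + y1)))
        x += dx
        y += dy
    return placements
```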
# check image
# read PILfont header
# FIXME: should be a dictionary
# read PILfont metrics
# Wrapper for FreeType fonts.  Application code should use the
# <b>truetype</b> factory function to create font objects.
# FIXME: use service provider instead
# FreeType cannot load fonts with non-ASCII characters on Windows
# So load it into memory first
# When the same name is set twice in a row,
# there is an 'unknown freetype error'
# https://savannah.nongnu.org/bugs/?56186
# any 'transpose' argument, or None
# TransposedFont doesn't support getmask2, move top-left point to (0, 0)
# this has no effect on ImageFont and simulates anchor="lt" for FreeTypeFont
# check the windows font repository
# NOTE: must use uppercase WINDIR, to work around bugs in
# 1.5.2's os.environ.get()
# The freedesktop spec defines the following default directory for
# when XDG_DATA_HOME is unset or empty. This user-level directory
# takes precedence over system-level directories.
# Similarly, defaults are defined for the system-level directories
# courB08
# read files from within a tar file
# 95-06-18 fl   Created
# 96-05-28 fl   Open files in binary mode
# Copyright (c) Fredrik Lundh 1995-96.
# Open region
# PCD file handling
# Image plugin for PhotoCD images.  This plugin only reads the 768x512
# image from the file; higher resolutions are encoded in a proprietary
# Handle rotated PCDs
# Binary input/output support routines.
# Copyright (c) 2012 by Brian Crowell
# Input, le = little endian, be = big endian
# Output, le = little endian, be = big endian
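The little-endian/big-endian input and output routines can be sketched with the standard `struct` module; the helper names below follow the le/be convention described above but are illustrative:

```python
import struct

def i16le(data: bytes, offset: int = 0) -> int:
    """Read an unsigned 16-bit little-endian integer."""
    return struct.unpack_from("<H", data, offset)[0]

def i32be(data: bytes, offset: int = 0) -> int:
    """Read an unsigned 32-bit big-endian integer."""
    return struct.unpack_from(">I", data, offset)[0]

def o16le(value: int) -> bytes:
    """Write an unsigned 16-bit little-endian integer."""
    return struct.pack("<H", value)
```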
# THIS IS WORK IN PROGRESS
# FlashPix support for PIL
# 97-01-25 fl   Created (reads uncompressed RGB images only)
# we map from colour field tuples to (mode, rawmode) descriptors
# opacity
# monochrome
# photo YCC
# standard RGB (NIFRGB)
# Image plugin for the FlashPix images.
# to be a FlashPix file
# get the Image Contents Property Set
# size (highest resolution)
# mode.  instead of using a single field for this, flashpix
# requires you to specify the mode for each channel in each
# resolution subimage, and leaves it to the decoder to make
# sure that they all match.  for now, we'll cheat and assume
# that this is always the case.
# note: for now, we ignore the "uncalibrated" flag
# load JPEG tables, if any
# setup tile descriptors for a given subimage
# skip prefix
# header stream
# tilecount = i32(s, 12)
# channels = i32(s, 24)
# get tile descriptors
# FIXME: the fill decoder is not implemented
# The image is stored as usual (usually YCbCr).
# For "RGBA", data is stored as YCbCrA based on
# negative RGB. The following trick works around
# this problem:
# let the decoder decide
# The image is stored as defined by rawmode
# FIXME: jpeg tables are tile dependent; the prefix
# data must be placed in the tile descriptor itself!
# isn't really required
# EPS file handling
# 1995-09-01 fl   Created (0.1)
# 1996-05-18 fl   Don't choke on "atend" fields, Ghostscript interface (0.2)
# 1996-08-22 fl   Don't choke on floating point BoundingBox values
# 1996-08-23 fl   Handle files from Macintosh (0.3)
# 2003-09-07 fl   Check gs.close status (from Federico Di Gregorio) (0.5)
# 2014-05-07 e    Handling of EPS with binary preview and fixed resolution
# Unpack decoder tile
# Hack to support hi-res rendering
# resolution is dependent on bbox and size
# Ignore length and offset!
# Ghostscript can read it
# Copy whole file to read in Ghostscript
# fetch length of fp
# ensure start position
# go back
# "RGBA"
# "pnmraw" automatically chooses between
# PBM ("1"), PGM ("L"), and PPM ("RGB").
# Build Ghostscript command
# quiet mode
# set output geometry (pixels)
# set input DPI (dots per inch)
# exit after processing
# don't pause between pages
# safe mode
# output file
# adjust for image origin
# input file
# showpage (see https://bugs.ghostscript.com/show_bug.cgi?id=698272)
# push data through Ghostscript
# Image plugin for Encapsulated PostScript. This plugin supports only
# a few variants of this format.
# go to offset - start of "%!PS"
# When reading header comments, the first comment is used.
# When reading trailer comments, the last comment is used.
# Note: The DSC spec says that BoundingBox
# fields should be integers, but some drivers
# put floating point values there anyway.
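A tolerant BoundingBox parse along the lines described above might look like this (a sketch; the function name is hypothetical):

```python
def parse_bounding_box(line: str):
    """Parse '%%BoundingBox: x0 y0 x1 y1'.  The DSC spec says the
    fields are integers, but some drivers emit floats, so accept
    both and truncate to int.  '(atend)' defers to the trailer."""
    prefix = "%%BoundingBox:"
    if not line.startswith(prefix):
        return None
    fields = line[len(prefix):].split()
    if fields == ["(atend)"]:
        return None  # value supplied in the trailer instead
    return [int(float(v)) for v in fields]
```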
# if we didn't read a byte we must be at the end of the file
# if we read a line ending character, ignore it and parse what
# we have already read. if we haven't read any other characters,
# continue reading
# ASCII/hexadecimal lines in an EPS file must not exceed
# 255 characters, not including line ending characters
# only enforce this for lines starting with a "%",
# otherwise assume it's binary data
# reset bytes_read so we can keep reading
# data until the end of the line
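The 255-character rule above can be sketched as a small check; the helper name and the exact exception are assumptions for illustration:

```python
def check_dsc_line(line: bytes) -> None:
    """ASCII/hexadecimal lines in an EPS file must not exceed 255
    characters, not counting line-ending characters.  Only enforce
    this for lines starting with '%'; anything else is assumed to
    be binary data."""
    stripped = line.rstrip(b"\r\n")
    if stripped.startswith(b"%") and len(stripped) > 255:
        raise SyntaxError("not an EPS file")
```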
# Load EPS header
# if this line doesn't start with a "%",
# or does start with "%%EndComments",
# then we've reached the end of the header/comments
# handle non-DSC PostScript comments that some
# tools mistakenly put in the Comments section
# Check for an "ImageData" descriptor
# https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/#50577413_pgfId-1035096
# If we've already read an "ImageData" descriptor,
# don't read another one.
# Values:
# columns
# rows
# bit depth (1 or 8)
# mode (1: L, 2: LAB, 3: RGB, 4: CMYK)
# number of padding channels
# block size (number of bytes per row per channel)
# binary/ascii (1: binary, 2: ascii)
# data start identifier (the image data follows after a single line consisting only of this identifier)
# Parse the columns and rows after checking the bit depth and mode
# in case the bit depth and/or mode are invalid.
# Load EPS trailer
# A "BoundingBox" is always required,
# even if an "ImageData" descriptor size exists.
# An "ImageData" size takes precedence over the "BoundingBox".
# for HEAD without binary preview
# FIX for: Some EPS files not handled correctly / issue #302
# EPS can contain binary data
# or start directly with latin coding
# more info see:
# https://web.archive.org/web/20160528181353/http://partners.adobe.com/public/developer/en/ps/5002.EPSF_Spec.pdf
# Load EPS via Ghostscript
# we can't incrementally load, so force ImageFile.parser to
# use our custom load method by defining this method.
# determine PostScript image mode
# write EPS header
# fp.write("%%CreationDate: %s"...)
# <= bits
# FITS file handling
# Copyright (c) 1998-2003 by Fredrik Lundh
# This is now a data unit
# Seek to the end of the header unit
# Keep going to read past the headers
# PPM support for PIL
# standard
# PIL extensions (for test purposes only)
# Image plugin for PBM, PGM, and PPM images.
# read until whitespace or longest available magic number
# read until next whitespace or limit of 10 characters
# token ended
# skip whitespace at start
# ignores rest of the line; stops at CR, LF or EOF
# Token was not even 1 byte
# If maxval matches a bit depth, use the raw decoder directly
# lowest nonnegative index (or -1)
# Finish current comment
# Comment ends in this block
# Delete tail of comment
# Comment spans whole block
# So read the next block, looking for the end
# Search for any further comments
# No comment found
# Delete comment
# Comment continues to next block(s)
# read next block
# flush half_token
# stitch half_token to new block
# block might split token
# save half token for later
# prevent buildup of half_token
# finished!
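The half-token handling above (a block boundary may split a token, so the partial token is saved and stitched onto the next block) can be sketched as a standalone tokenizer; this is a simplified illustration, not the plugin's actual parser:

```python
def tokenize(blocks):
    """Yield whitespace-separated tokens from an iterable of byte
    blocks, carrying a partial token across block boundaries."""
    half_token = b""
    for block in blocks:
        block = half_token + block          # stitch half_token to new block
        parts = block.split()
        if block and not block[-1:].isspace():
            # block might split a token: save the half token for later
            half_token = parts.pop() if parts else b""
        else:
            half_token = b""
        yield from parts
    if half_token:
        yield half_token                    # flush half_token at the end
```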
# XV Thumbnail file handler by Charles E. "Gene" Cash
# (gcash@magicnet.net)
# see xvcolor.c and xvbrowse.c in the sources to John Bradley's XV,
# available from ftp://ftp.cis.upenn.edu/pub/xv/
# 98-08-15 cec  created (b/w only)
# 98-12-09 cec  added color palette
# 98-12-28 fl   added to PIL (with only a few very minor modifications)
# To do:
# FIXME: make save work (this requires quantization support)
# standard color palette for thumbnails (RGB332)
# Image plugin for XV thumbnail images.
# Skip to beginning of next line
# skip info comments
# i.e. when not a comment: '#'
# parse header line (already read)
# portable compiled font file parser
# 1997-08-19 fl   created
# 2003-09-13 fl   fixed loading of unicode fonts
# declarations
# "\x01fcp"
# create glyph structure
# font properties
# read property description
# pad
# "compressed" metrics
# "jumbo" metrics
# bitmap data
# byteorder = format & 4  # non-zero => MSB
# non-zero => MSB
# map character code to bitmap index
# character is not supported in selected encoding
# IFUNC IM file handling for PIL
# 1995-09-01 fl   Created.
# 1997-01-03 fl   Save palette images
# 1997-01-08 fl   Added sequence support
# 1997-01-23 fl   Added P and RGB save support
# 1997-05-31 fl   Read floating point images
# 1997-06-22 fl   Save floating point images
# 1997-08-27 fl   Read and save 1-bit images
# 1998-06-25 fl   Added support for RGB+LUT images
# 1998-07-02 fl   Added support for YCC images
# 1998-12-29 fl   Added I;16 support
# 2003-09-26 fl   Added LA/PA support
# Copyright (c) 1995-2001 by Fredrik Lundh.
# Standard tags
# ifunc93/p3cfunc formats
# old p3cfunc formats
# ifunc95 extensions
# Read IM directory
# Image plugin for the IFUNC IM file format.
# Quick rejection: if there's not an LF among the first
# Default values
# Some versions of IFUNC use \n\r instead of \r\n...
# FIXME: this may read whole file if not a text file
# Don't know if this is the correct encoding,
# but a decent guess (I guess)
# Convert value as appropriate
# Add to dictionary. Note that COMMENT tags are
# combined into a list of strings.
# Basic attributes
# Skip forward to start of image data
# convert lookup table to palette or lut attribute
# greyscale palette
# linear greyscale palette
# ifunc95 formats
# use bit decoder (if necessary)
# Old LabEye/3PC files.  Would be very surprised if anyone
# ever stumbled upon such a file ;-)
# LabEye/IFUNC files
# Save IM files
# mode: (im type, raw mode)
# Each line must be 100 characters or less,
# or: SyntaxError("not an IM file")
# 8 characters are used for "Name: " and "\r\n"
# Keep just the filename, ditch the potentially overlong path
# standard image operations
# 2001-10-20 fl   Created
# 2001-10-23 fl   Added autocontrast operator
# 2001-12-18 fl   Added Kevin's fit operator
# 2004-03-14 fl   Fixed potential division by zero in equalize
# 2005-05-05 fl   Fixed equalize for low number of values
# Copyright (c) 2001-2004 by Secret Labs AB
# Copyright (c) 2001-2004 by Fredrik Lundh
# FIXME: apply to lookup table, not image data
# actions
# get rid of outliers
# cut off pixels from both ends of the histogram
# get number of pixels
# remove cutoff% pixels from the low end
# remove cutoff% samples from the high end
# find lowest/highest samples after preprocessing
# don't bother
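The outlier-trimming steps above (cut cutoff% of the pixels off each end of the histogram, then find the lowest and highest surviving samples) can be sketched for a single 256-bin channel; the helper name is hypothetical:

```python
def find_extrema(histogram, cutoff=0):
    """Trim ``cutoff`` percent of pixels from each end of a 256-bin
    histogram, then return the lowest and highest surviving bins."""
    h = list(histogram)
    n = sum(h)
    cut = n * cutoff // 100
    for i in range(256):                 # remove cutoff% from the low end
        if cut > h[i]:
            cut -= h[i]
            h[i] = 0
        else:
            h[i] -= cut
            break
    cut = n * cutoff // 100
    for i in range(255, -1, -1):         # remove cutoff% from the high end
        if cut > h[i]:
            cut -= h[i]
            h[i] = 0
        else:
            h[i] -= cut
            break
    lo = next((i for i, v in enumerate(h) if v), None)
    hi = next((i for i in range(255, -1, -1) if h[i]), None)
    return lo, hi
```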
# Initial asserts
# Define colors from arguments
# Empty lists for the mapping
# Create the low-end values
# Create the mapping (2-color)
# Create the mapping (3-color)
# Create the high-end values
# Return converted image
# by Kevin Cazabon, Feb 17/2000
# kevin@cazabon.com
# https://www.cazabon.com
# calculate the area to use for resizing and cropping, subtracting
# the 'bleed' around the edges
# number of pixels to trim off on Top and Bottom, Left and Right
# calculate the aspect ratio of the live_size
# calculate the aspect ratio of the output image
# figure out if the sides or top/bottom will be cropped off
# live_size is already the needed ratio
# live_size is wider than what's needed, crop the sides
# live_size is taller than what's needed, crop the top and bottom
# make the crop
# resize the image and return it
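The aspect-ratio comparison described above decides whether the sides or the top/bottom get cropped; a minimal sketch of the centered-crop arithmetic (the function name is illustrative):

```python
def fit_crop_box(live_size, output_size):
    """Return the centered crop box (x0, y0, x1, y1) inside live_size
    that matches the aspect ratio of output_size."""
    live_w, live_h = live_size
    out_w, out_h = output_size
    live_ratio = live_w / live_h
    out_ratio = out_w / out_h
    if live_ratio == out_ratio:
        crop_w, crop_h = live_w, live_h              # already the needed ratio
    elif live_ratio > out_ratio:
        crop_w, crop_h = live_h * out_ratio, live_h  # wider: crop the sides
    else:
        crop_w, crop_h = live_w, live_w / out_ratio  # taller: crop top/bottom
    x = (live_w - crop_w) / 2
    y = (live_h - crop_h) / 2
    return (x, y, x + crop_w, y + crop_h)
```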
# im.show() drivers
# 2008-04-06 fl   Created
# Copyright (c) Secret Labs AB 2008.
# main api
# hook methods
# nosec
# on darwin open returns immediately resulting in the temp
# file removal while app is opening
# note: xv is pretty outdated.  most modern systems have
# imagemagick's display command instead.
# unixoids
# WCK-style drawing interface operations
# 2003-12-07 fl   created
# 2005-05-15 fl   updated; added to PIL as ImageDraw2
# 2005-05-15 fl   added text support
# 2005-05-20 fl   added arc/chord/pieslice support
# Copyright (c) 2003-2005 by Secret Labs AB
# Copyright (c) 2003-2005 by Fredrik Lundh
# FIXME: add support for bitmap fonts
# handle color arguments
# handle transformation
# render the item
# JPEG (JFIF) file handling
# See "Digital Compression and Coding of Continuous-Tone Still Images,
# Part 1, Requirements and Guidelines" (CCITT T.81 / ISO 10918-1)
# 1995-09-13 fl   Added full parser
# 1996-03-25 fl   Added hack to use the IJG command line utilities
# 1996-05-05 fl   Workaround Photoshop 2.5 CMYK polarity bug
# 1996-05-28 fl   Added draft support, JFIF version (0.1)
# 1996-12-30 fl   Added encoder options, added progression property (0.2)
# 1997-08-27 fl   Save mode 1 images as BW (0.3)
# 1998-07-12 fl   Added YCbCr to draft and save methods (0.4)
# 1998-10-19 fl   Don't hang on files using 16-bit DQT's (0.4.1)
# 2001-04-16 fl   Extract DPI settings from JFIF files (0.4.2)
# 2002-07-01 fl   Skip pad bytes before markers; identify Exif files (0.4.3)
# 2003-04-25 fl   Added experimental EXIF decoder (0.5)
# 2003-06-06 fl   Added experimental EXIF GPSinfo decoder
# 2003-09-13 fl   Extract COM markers
# 2009-09-06 fl   Added icc_profile support (from Florian Hoech)
# 2009-03-06 fl   Changed CMYK handling; always use Adobe polarity (0.6)
# 2009-03-08 fl   Added subsampling support (from Justin Huff).
# Copyright (c) 1995-1996 by Fredrik Lundh.
# Parser
# Application marker.  Store these in the APP dictionary.
# Also look for well-known application markers.
# extract JFIF information
# extract JFIF properties
# cm
# 1 dpcm = 2.54 dpi
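The density conversion above can be sketched directly from the JFIF unit field (0 = aspect ratio only, 1 = dots/inch, 2 = dots/cm); the helper name is illustrative:

```python
def jfif_dpi(unit, xdensity, ydensity):
    """Convert a JFIF density to DPI, or return None when the
    density carries no absolute unit (unit 0)."""
    if unit == 1:                       # already dots per inch
        return xdensity, ydensity
    if unit == 2:                       # dots per cm: 1 dpcm = 2.54 dpi
        return xdensity * 2.54, ydensity * 2.54
    return None                         # unit 0: aspect ratio only
```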
# extract EXIF information
# extract FlashPix information (incomplete)
# FIXME: value will change
# Since an ICC profile can be larger than the maximum size of
# a JPEG marker (64K), we need provisions to split it into
# multiple markers. The format defined by the ICC specifies
# one or more APP2 markers containing the following data:
# Decoders should use the marker sequence numbers to
# reassemble the profile, rather than assuming that the APP2
# markers appear in the correct sequence.
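The reassembly described above can be sketched as follows. Each APP2 payload starts with `ICC_PROFILE\0` followed by a one-byte sequence number and a one-byte chunk count; sorting by the sequence number, rather than trusting marker order, is what the ICC spec requires (the helper name is illustrative):

```python
ICC_HEADER = b"ICC_PROFILE\0"

def reassemble_icc(app2_payloads):
    """Join ICC profile fragments carried in APP2 markers, sorting
    by each fragment's sequence number."""
    chunks = []
    for payload in app2_payloads:
        if not payload.startswith(ICC_HEADER):
            continue                            # some other APP2 use
        seq = payload[len(ICC_HEADER)]          # 1-based sequence number
        chunks.append((seq, payload[len(ICC_HEADER) + 2:]))
    chunks.sort()                               # sort by sequence number
    return b"".join(data for _, data in chunks)
```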
# parse the image resource block
# resource code
# resource name (usually empty)
# name = s[offset+1:offset+1+name_len]
# align
# resource data block
# ResolutionInfo
# insufficient data
# extract Adobe custom properties
# extract MPO information
# offset is current location minus buffer size
# plus constant header size
# Comment marker.  Store these in the APP dictionary.
# Start of frame marker.  Defines the size and mode of the
# image.  JPEG is colour blind, so we use some simple
# heuristics to map the number of layers to an appropriate
# mode.  Note that this could be made a bit brighter, by
# looking for JFIF and Adobe APP markers.
# fixup icc profile
# sort by sequence number
# wrong number of fragments
# 4-tuples: id, vsamp, hsamp, qtable
# Define quantization table.  Note that there might be more
# than one table in each marker.
# FIXME: The quantization tables can be used to estimate the
# compression quality.
# in bytes
# the values are always big-endian
# JPEG marker table
# Magic number was taken from https://en.wikipedia.org/wiki/JPEG
# Image plugin for JPEG and JFIF images.
# Create attributes
# JPEG specifics (internal)
# Skip non-0xFF junk
# start of scan
# assume adobe conventions
# self.__offset = self.fp.tell()
# padded marker or junk; move on
# Skip extraneous data (escaped 0xFF)
# Premature EOF.
# Pretend file is finished adding EOI marker
# Protect from second call
# ALTERNATIVE: handle JPEGs via the IJG command line utilities
# If DPI isn't in JPEG header, fetch from EXIF
# truncated EXIF
# dpi not included
# invalid/unreadable EXIF
# dpi is an invalid float
# invalid dpi rational value
# Extract MP information.  This method was inspired by the "highly
# experimental" _getexif version that's been in use for years now,
# itself based on the ImageFileDirectory class in the TIFF plugin.
# The MP record essentially consists of a TIFF file embedded in a JPEG
# application marker.
# it's an error not to have a number of images
# get MP entries
# Next we should try and parse the individual image unique ID list;
# we don't because I've never seen this actually used in a real MPO
# file and so can't test it.
# stuff to save JPEG files
# There's no subsampling when images have only 1 layer
# (grayscale images) or when they are CMYK (4 layers),
# so set subsampling to the default value.
# NOTE: currently Pillow can't encode JPEG to YCCK format.
# If YCCK support is added in the future, subsampling code will have
# to be updated (here and in JpegEncode.c) to deal with 4 layers.
# For compatibility. Before Pillow 4.3, 4:1:1 actually meant 4:2:0.
# Set 4:2:0 if someone is still using that value.
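The compatibility rule above can be sketched as a small lookup; the integer codes follow the common libjpeg-style preset convention and should be treated as illustrative, as should the function name:

```python
def resolve_subsampling(value):
    """Map a subsampling name to an encoder preset code; for
    backwards compatibility '4:1:1' is treated as 4:2:0, since
    before Pillow 4.3 that name actually meant 4:2:0."""
    if value == "4:1:1":
        value = "4:2:0"                 # pre-4.3 alias
    presets = {"4:4:4": 0, "4:2:2": 1, "4:2:0": 2}
    return presets.get(value, -1)       # -1: let the encoder decide
```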
# b"http://ns.adobe.com/xap/1.0/\x00"
# b"ICC_PROFILE\0" + o8(i) + o8(len(markers))
# "progressive" is the official name, but older documentation
# says "progression"
# FIXME: issue a warning if the wrong form is used (post-1.1.7)
# get keyword arguments
# if we optimize, libjpeg needs a buffer big enough to hold the whole image
# in one shot. Guessing on the size, at im.size bytes (raw pixel size is
# channels*size); this is a value that's been used in a Django patch.
# https://github.com/matthewwithanm/django-imagekit/issues/50
# CMYK can be bigger
# keep sets quality to -1, but the actual value may be high.
# The EXIF info needs to be written as one block, + APP1, + one spare byte.
# Ensure that our buffer is big enough. Same with the icc_profile block.
# Factory for making JPEG and MPO instances
# Ultra HDR images are not yet supported
# It's actually an MPO
# Don't reload everything, just convert it.
# It is really a JPEG
# Copyright (C) 2008 Brailcom, o.p.s.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.
# config.py - A simple dialog based tool for basic configuration of Speech Dispatcher
# Copyright (C) 2008, 2010 Brailcom, o.p.s.
# Configuration and sound data paths
# Locale/gettext configuration
# two globals
# On plain enter, return default
# If a value was typed, check it and convert it
# Test whether Speech Dispatcher works
# Try to kill running Speech Dispatcher
# Attempt to workaround the psmisc 22.15 bug with 16 char max process names
# All debugging files are written to TMPDIR/speech-dispatcher/
# Start Speech Dispatcher with debugging enabled
# Parse config file in-place and replace the desired options+values
# Check if the current line contains any of the desired options
# Now count unknown words and try to judge if this is
# real configuration or just a comment
# If a foreign word went before our option identifier,
# we are not in code but in comments
# Only consider the line as the actual code when the keyword
# is followed by exactly one word value. Otherwise consider this
# line as plain comment and leave intact
# Convert value into string representation in spd_val
# Ask before touching things that we do not have to!
# Copy the original intact configuration files
# creating a conf/ subdirectory
# Ensure the files are writeable when copying from immutable directory.
# Now determine the most important config option
# Substitute given configuration options
# Basic objects
# that should possibly be refactored, test should not be passed
# Check for and/or create basic user configuration
# configobj.py
# A config file reader/writer that supports nested sections in config files.
# Copyright (C) 2005-2014:
# (name) : (email)
# Michael Foord: fuzzyman AT voidspace DOT org DOT uk
# Nicola Larosa: nico AT tekNico DOT net
# Rob Dennis: rdennis AT gmail DOT com
# Eli Courtwright: eli AT courtwright DOT org
# This software is licensed under the terms of the BSD license.
# http://opensource.org/licenses/BSD-3-Clause
# ConfigObj 5 - main repository for documentation and issue tracking:
# https://github.com/DiffSK/configobj
# imported lazily to avoid startup performance hit if it isn't used
# A dictionary mapping BOM to
# the encoding to decode with, and what to set the
# encoding attribute to.
# All legal variants of the BOM codecs.
# TODO: the list of aliases is not meant to be exhaustive, is there a
# Map of encodings to the BOM to write.
# Quote strings used for writing values
# Sentinel for use in getattr calls to replace hasattr
# option may be set to one of ('', ' ', '\t')
# An undefined Name
# this is supposed to be safe
# compiled regexp to use in self.interpolate()
# the Section instance that "owns" this engine
# short-cut
# Have we been here already?
# Yes - infinite loop detected
# Place a marker on our backtrail so we won't come back here again
# Now start the actual work
# The actual parsing of the match is implementation-dependent,
# so delegate to our helper function
# That's the signal that no further interpolation is needed
# Further interpolation may be needed to obtain final value
# Replace the matched string with its final value
# Pick up the next interpolation key, if any, for next time
# through the while loop
# Now safe to come back here again; remove marker from backtrail
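The backtrail scheme above (mark a key before recursing into its value, detect a loop if the key is seen again, unmark it afterwards) can be sketched with ConfigParser-style `%(name)s` references; this is a simplified stand-in for the engine, not its actual implementation:

```python
import re

_KEYCRE = re.compile(r"%\((\w+)\)s")

def interpolate(value, section, backtrail=None):
    """Recursively expand %(key)s references against ``section``,
    using a backtrail set to detect interpolation loops."""
    if backtrail is None:
        backtrail = set()

    def repl(match):
        key = match.group(1)
        if key in backtrail:
            # we've been here already: infinite loop detected
            raise ValueError(f"interpolation loop via {key!r}")
        backtrail.add(key)          # place a marker on the backtrail
        result = interpolate(section[key], section, backtrail)
        backtrail.discard(key)      # safe to come back here again
        return result

    return _KEYCRE.sub(repl, value)
```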
# Back in interpolate(), all we have to do is kick off the recursive
# function with appropriate starting values
# switch off interpolation before we try and fetch anything!
# Start at section that "owns" this InterpolationEngine
# try the current section first
# try "DEFAULT" next
# move up to parent and try again
# top-level's parent is itself
# reached top level, time to give up
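The lookup chain above (current section, then "DEFAULT", then the parent, until the top level, whose parent is itself) can be sketched with a minimal section type; both `Section` and `fetch` are hypothetical stand-ins:

```python
class Section(dict):
    """Minimal stand-in for a config section with a parent link;
    the top-level section's parent is itself."""
    def __init__(self, parent=None, **values):
        super().__init__(**values)
        self.parent = parent if parent is not None else self

def fetch(section, key):
    """Try the current section first, then its 'DEFAULT' subsection,
    then move up to the parent and repeat; give up at top level."""
    current = section
    while True:
        if key in current:
            return current[key]
        default = current.get("DEFAULT")
        if default is not None and key in default:
            return default[key]
        if current.parent is current:   # reached top level: give up
            raise KeyError(key)
        current = current.parent        # move up to parent and try again
```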
# restore interpolation to previous value before returning
# Valid name (in or out of braces): fetch value from section
# Escaped delimiter (e.g., $$): return single delimiter
# Return None for key and section to indicate it's time to stop
# Anything else: ignore completely, just return it unchanged
# Hack for pickle
# used for nesting level *and* interpolation
# used for the interpolation attribute
# level of nesting depth of this Section
# purely for information
# we do this explicitly so that __setitem__ is used properly
# (rather than just passing to ``dict.__init__``)
# the sequence of scalar values in this Section
# the sequence of sections in this Section
# for comments :-)
# the configspec
# for defaults
# do we already have an interpolation engine?
# not yet: first time running _interpolate(), so pick the engine
# note that "if name:" would be incorrect here
# backwards-compatibility: interpolation=True means use default
# so that "Template", "template", etc. all work
# invalid value for self.main.interpolation
# save reference to engine so we don't have to do this again
# let the engine do the actual work
# add the comment
# remove the entry from defaults
# First create the new depth level,
# then create the section
# Extra methods - not in a normal dictionary
# create a copy rather than a reference
# scalars first
# bound again in case name has changed
# then sections
# previous result is discarded
# TODO: Why do we raise a KeyError here?
# this regexp pulls list values out as a single string
# or single values and comments
# FIXME: this regex adds a '' to the end of comma terminated lists
# use findall to get the members of a list value
# this regexp is used for the value
# when lists are switched off
# regexes for finding triple quoted values on one line
# Used by the ``istrue`` Section method
# init the superclass
# TODO: check the values too.
# XXXX this ignores an explicit list_values = True in combination
# with _inspec. The user should *never* do that anyway, but still...
# raise an error if the file doesn't exist
# file doesn't already exist
# this is a good test that the filename specified
# isn't impossible - like on a non-existent device
# initialise self
# the Section class handles creating subsections
# get a copy of our ConfigObj
# This supports file like objects
# needs splitting into lines - but needs doing *after* decoding
# in case it's not an 8 bit encoding
# don't do it for the empty ConfigObj
# infile is now *always* a list
# Set the newlines attribute (first line ending it finds)
# and strip trailing '\n' or '\r' from lines
# if we had any errors, now is the time to raise them
# set the errors attribute; it's a list of tuples:
# (error_type, message, line_number)
# set the config attribute
# delete private attributes
# initialise a few variables
# Clear section attributes as well
# No need to check for a BOM
# the encoding specified doesn't have one
# just decode
# it's already decoded and there's no need to do anything
# else, just use the _decode utility method to handle
# listifying appropriately
# encoding explicitly supplied
# And it could have an associated BOM
# TODO: if encoding is just UTF16 - we ought to check for both
# TODO: big endian and little endian versions.
# For UTF16 we try big endian and little endian
# skip UTF8
### BOM discovered
##self.BOM = True
# Don't need to remove BOM
# If we get this far, will *probably* raise a DecodeError
# As it doesn't appear to start with a BOM
# Must be UTF8
# BOM removed
# No encoding specified - so we need to check for UTF8/UTF16
# didn't specify a BOM, or it's not a bytestring
# BOM discovered
# UTF8
# remove BOM
# UTF16 - have to decode
# No BOM discovered and no encoding specified, default to UTF-8
# NOTE: Could raise a ``UnicodeDecodeError``
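The BOM-sniffing logic above can be sketched with the `codecs` BOM constants; a simplified illustration that handles only UTF-8 and UTF-16, defaulting to UTF-8 when no BOM is found:

```python
import codecs

def decode_with_bom(data: bytes) -> str:
    """Sniff a UTF-8/UTF-16 BOM and decode accordingly; with no BOM,
    default to UTF-8 (which may raise UnicodeDecodeError)."""
    if data.startswith(codecs.BOM_UTF8):
        return data[len(codecs.BOM_UTF8):].decode("utf-8")   # remove BOM
    if data.startswith((codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)):
        return data.decode("utf-16")    # the BOM selects the byte order
    return data.decode("utf-8")         # no BOM: assume UTF-8
```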
# NOTE: The isinstance test here handles mixed lists of unicode/string
# NOTE: But the decode will break on any non-string values
# NOTE: Or could raise a ``UnicodeDecodeError``
# TODO: this may need to be modified
# intentionally 'str' because it's just whatever the "normal"
# string type is for the python version we're dealing with
# do we have anything on the line?
# preserve initial comment
# first we check if it's a section marker
# is a section line
# the new section is dropping back to a previous level
# the new section is a sibling of the current section
# the new section is a child of the current section
# create the new section
# it's not a section marker,
# so it should be a valid ``key = value`` line
# is a keyword value
# value will include any inline comment
# check for a multiline value
# extract comment and lists
# add the key.
# we set unrepr because if we have got this far we will never
# be creating a new section
# no indentation used, set the type accordingly
# preserve the final comment
# we've reached the top level already
# shouldn't get here
# raise the error - parsing stops here
# store the error
# reraise when parsing has finished
# should only happen during parsing of lists
# Only if multiline is set, so that it is used for values not
# keys, and not values that are part of a list
# we don't quote if ``list_values=False``
# for normal values either single or double quotes will do
# will only happen if multiline is off - e.g. '\n' in key
# if value has '\n' or "'" *and* '"', it will need triple quotes
# Parsing a configspec so don't handle comments
# do we look for lists in values ?
# NOTE: we don't unquote here
# the value is badly constructed, probably badly quoted,
# or an invalid list
# change this if you want to accept empty values
# NOTE: note there is no error handling from here if the regex
# is wrong: then incorrect values will slip through
# the single comma - meaning an empty list
# handle empty values
# FIXME: the '' is a workaround because our regex now matches
# not a list value
# somehow the triple quote is missing
# end of multiline, process it
# we've got to the end of the config, oops...
# a badly formed line
# FIXME: Should we check that the configspec was created with the
# FIXME: Should these errors have a reference
# copy comments
# Could be a scalar when we expect a section
# NOTE: the calls to self._quote here handles non-StringType values.
# Public methods
# this can be true if initialised from a dictionary
# don't write out default values
# a section
# output a list of lines
# might need to encode
# NOTE: This will *screw* UTF16, each line will start with the BOM
# Add the UTF8 BOM
# Turn the list to a string, joined with correct newlines
# Windows specific hack to avoid writing '\r\r\n'
# We do this once to remove a top level dependency on the validate module
# Which makes importing configobj faster
# section.default_values.clear() #??
# No default, bad default or validator has no 'get_default_value'
# (e.g. SimpleVal)
# preserve the error
# if we are doing type conversion
# or the value is a supplied default
# preserve lists
# convert the None from a default to a ''
# reserved names
# missing entries
# or entries from defaults
# Missing sections will have been created as empty ones when the
# configspec was read.
# FIXME: this means DEFAULT is not copied in copy mode
# If the section wasn't created (i.e. it wasn't missing)
# then we can't return False, we need to preserve errors
# If we are preserving errors, but all
# the failures are from missing sections / values
# then we can return False. Otherwise there is a
# real failure that we need to preserve.
# FIXME: Should be done by '_initialise', but ConfigObj constructor (and reload)
# Just to be sure ;-)
# first time called
# Go down one level
# Go up one level
# validate.py
# A Validator object
# Mark Andrews: mark AT la-la DOT com
# two groups
# one group
# import here to avoid it when ip_addr values are not used
# no need to intercept here, 4294967295L is fine
# this regex does the initial parsing of the checks
# this regex takes apart keyword arguments
# this regex finds keyword=list(....) type values
# this regex takes individual values out of lists - in one pass
# These regexes check a set of arguments for validity
# and then pull the members out
# tekNico: for use by ConfigObj
# no information needed here - to be handled by caller
# Special case a quoted None
# We call list and dict below to work with *copies* of the data
# rather than the original (which are mutable of course)
# Bad syntax
# pull out args of group 2
# args may need whitespace removing (before removing quotes)
# allows for function names without (args)
# Default must be deleted if the value is specified too,
# otherwise the check function will get a spurious "default" keyword arg
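The comment above notes that the parsed default must be dropped whenever an actual value is supplied, or the check function would receive a spurious `default` keyword argument. An illustrative sketch of that rule (`run_check` and its signature are hypothetical, not validate's API):

```python
def run_check(check_fn, value, keyword_args):
    kwargs = dict(keyword_args)  # copy: don't mutate the caller's dict
    if value is not None:
        # Default must be deleted if a real value was supplied too,
        # otherwise check_fn would get a spurious 'default' keyword arg.
        kwargs.pop("default", None)
    else:
        # No value supplied: fall back to the parsed default, if any.
        value = kwargs.pop("default", None)
    return check_fn(value, **kwargs)
```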
# built in checks
# you can override these by setting the appropriate name
# in Validator.functions
# note: if the params are specified wrongly in your input string,
# if it's a string - does it represent an integer ?
# if it's a string - does it represent a float ?
# we do an equality test rather than an identity test
# this ensures Python 2.2 compatibility
# and allows 0 and 1 to represent True and False
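The bool check described above deliberately uses `==` rather than `is`, since the integers 0 and 1 are equal, but not identical, to False and True. A sketch of the idea (the function name is illustrative):

```python
def as_bool(value):
    # Equality, not identity: 1 == True but 1 is not True,
    # so '== True' accepts both, while 'is True' would reject 1.
    if value == True:  # noqa: E712 (deliberate equality test)
        return True
    if value == False:
        return False
    raise ValueError(f"not a boolean: {value!r}")
```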
# run the code tests in doctest format
#: A `referencing.jsonschema.SchemaRegistry` containing all of the official
#: meta-schemas and vocabularies.
# type: ignore[import-not-found, no-redef]
# importlib.resources.abc.Traversable doesn't have nice ways to do this that
# I'm aware of...
# It can't recurse arbitrarily, e.g. no ``.glob()``.
# So this takes some liberties given the real layout of what we ship
# (only 2 levels of nesting, no directories within the second level).
# for backwards compatibility because this used to be a dict
# Sway only output fields:
# set simple properties
# XXX in 4.12, marks is an array (old property was a string "mark")
# XXX this is for compatibility with 4.8
# set complex properties
# safety string for i3-ipc
# in seconds
# XXX: Message shouldn't be any longer than the data
# for the auto_reconnect feature only
# XXX: can the socket path change between restarts?
# special case: ipc-shutdown is not in the protocol
# TODO deprecate this
# we have not implemented this event
# sway only
# not included in sway
# i3 didn't include the 'first' field in 4.15. See i3/i3#3271.
# sway-specific command types
# first try environment variables
# next try the root window property
# finally try the binaries
# could not find the socket path
# events have the highest bit set
# a reply
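The two comments above refer to the i3/sway IPC convention that event messages set the highest bit of the 32-bit message type, while replies leave it clear. A hedged sketch of how that bit can be tested and masked off:

```python
EVENT_MASK = 1 << 31  # top bit of the 32-bit i3-ipc message type

def classify(message_type: int) -> str:
    """Return 'event' if the highest bit is set, else 'reply'."""
    return "event" if message_type & EVENT_MASK else "reply"

def event_number(message_type: int) -> int:
    # Clearing the mask leaves the event's ordinal within the protocol.
    return message_type & ~EVENT_MASK
```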
# ===- __init__.py - Clang Python Bindings --------------------*- python -*--===#
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# ===------------------------------------------------------------------------===#
# ===- cindex.py - Python Indexing Library Bindings -----------*- python -*--===#
# TODO
# ====
# o API support for invalid translation units. Currently we can't even get the
# o fix memory management issues (currently client must hold on to index and
# o expose code completion APIs.
# o cleanup ctypes wrapping, would be nice to separate the ctypes details more
# o implement additional SourceLocation, SourceRange, and File methods.
# Python 3 strings are unicode, translate them to/from utf8 for C-interop.
# type: ignore [override]
# Support passing null to C functions expecting char arrays
# ctypes doesn't implicitly convert c_void_p to the appropriate wrapper
# object. This is a problem, because it means that from_param will see an
# integer and pass the wrong value on platforms where int != void*. Work around
# this by marshalling object arguments as void**.
### Exception Classes ###
# Indicates that an unknown error occurred. This typically indicates that
# I/O failed during save.
# Indicates that errors during translation prevented saving. The errors
# should be available via the TranslationUnit's diagnostics.
# Indicates that the translation unit was somehow invalid.
### Structures and Utility Classes ###
# type: ignore [no-any-return]
# FIXME: Eliminate this and make normal constructor? Requires hiding ctypes
# object.
# If we get no tokens, no memory was allocated. Be sure not to return
# anything and potentially call a destructor on nothing.
### Cursor Kinds ###
# Declaration Kinds
# A declaration whose specific kind is not exposed via this interface.
# Unexposed declarations have the same operations as any other kind of
# declaration; one can extract their location information, spelling, find
# their definitions, etc. However, the specific kind of the declaration is
# not reported.
# A C or C++ struct.
# A C or C++ union.
# A C++ class.
# An enumeration.
# A field (in C) or non-static data member (in C++) in a struct, union, or
# C++ class.
# An enumerator constant.
# A function.
# A variable.
# A function or method parameter.
# An Objective-C @interface.
# An Objective-C @interface for a category.
# An Objective-C @protocol declaration.
# An Objective-C @property declaration.
# An Objective-C instance variable.
# An Objective-C instance method.
# An Objective-C class method.
# An Objective-C @implementation.
# An Objective-C @implementation for a category.
# A typedef.
# A C++ class method.
# A C++ namespace.
# A linkage specification, e.g. 'extern "C"'.
# A C++ constructor.
# A C++ destructor.
# A C++ conversion function.
# A C++ template type parameter
# A C++ non-type template parameter.
# A C++ template template parameter.
# A C++ function template.
# A C++ class template.
# A C++ class template partial specialization.
# A C++ namespace alias declaration.
# A C++ using directive
# A C++ using declaration
# A Type alias decl.
# An Objective-C synthesize decl
# An Objective-C dynamic decl
# A C++ access specifier decl.
# Reference Kinds
# A reference to a type declaration.
# A type reference occurs anywhere where a type is named but not
# declared. For example, given:
# The typedef is a declaration of size_type (CXCursor_TypedefDecl),
# while the type of the variable "size" is referenced. The cursor
# referenced by the type of size is the typedef for size_type.
# A reference to a class template, function template, template
# template parameter, or class template partial specialization.
# A reference to a namespace or namespace alias.
# A reference to a member of a struct, union, or class that occurs in
# some non-expression context, e.g., a designated initializer.
# A reference to a labeled statement.
# A reference to a set of overloaded functions or function templates that
# has not yet been resolved to a specific function or function template.
# A reference to a variable that occurs in some non-expression
# context, e.g., a C++ lambda capture list.
# Invalid/Error Kinds
# Expression Kinds
# An expression whose specific kind is not exposed via this interface.
# Unexposed expressions have the same operations as any other kind of
# expression; one can extract their location information, spelling,
# children, etc.
# However, the specific kind of the expression is not reported.
# An expression that refers to some value declaration, such as a function,
# variable, or enumerator.
# An expression that refers to a member of a struct, union, class,
# Objective-C class, etc.
# An expression that calls a function.
# An expression that sends a message to an Objective-C object or class.
# An expression that represents a block literal.
# An integer literal.
# A floating point number literal.
# An imaginary number literal.
# A string literal.
# A character literal.
# A parenthesized expression, e.g. "(1)".
# This AST node is only formed if full location information is requested.
# This represents unary expressions (except sizeof and
# alignof).
# [C99 6.5.2.1] Array Subscripting.
# A builtin binary operation expression such as "x + y" or "x <= y".
# Compound assignment such as "+=".
# The ?: ternary operator.
# An explicit cast in C (C99 6.5.4) or a C-style cast in C++
# (C++ [expr.cast]), which uses the syntax (Type)expr.
# For example: (int)f.
# [C99 6.5.2.5]
# Describes a C or C++ initializer list.
# The GNU address of label extension, representing &&label.
# This is the GNU Statement Expression extension: ({int X=4; X;})
# Represents a C11 generic selection.
# Implements the GNU __null extension, which is a name for a null
# pointer constant that has integral type (e.g., int or long) and is the
# same size and alignment as a pointer.
# The __null extension is typically only used by system headers, which
# define NULL as __null in C++ rather than using 0 (which is an integer that
# may not match the size of a pointer).
# C++'s static_cast<> expression.
# C++'s dynamic_cast<> expression.
# C++'s reinterpret_cast<> expression.
# C++'s const_cast<> expression.
# Represents an explicit C++ type conversion that uses "functional"
# notation (C++ [expr.type.conv]).
# Example:
# \code
# \endcode
# A C++ typeid expression (C++ [expr.typeid]).
# [C++ 2.13.5] C++ Boolean Literal.
# [C++0x 2.14.7] C++ Pointer Literal.
# Represents the "this" expression in C++
# [C++ 15] C++ Throw Expression.
# This handles 'throw' and 'throw' assignment-expression. When
# assignment-expression isn't present, Op will be null.
# A new expression for memory allocation and constructor calls, e.g:
# "new CXXNewExpr(foo)".
# A delete expression for memory deallocation and destructor calls,
# e.g. "delete[] pArray".
# Represents a unary expression.
# ObjCStringLiteral, used for Objective-C string literals i.e. "foo".
# ObjCEncodeExpr, used for @encode in Objective-C.
# ObjCSelectorExpr, used for @selector in Objective-C.
# Objective-C's protocol expression.
# An Objective-C "bridged" cast expression, which casts between Objective-C
# pointers and C pointers, transferring ownership in the process.
# Represents a C++0x pack expansion that produces a sequence of
# expressions.
# A pack expansion expression contains a pattern (which itself is an
# expression) followed by an ellipsis. For example:
# Represents an expression that computes the length of a parameter
# pack.
# Represents a C++ lambda expression that produces a local function
# Objective-C Boolean Literal.
# Represents the "self" expression in a ObjC method.
# OpenMP 4.0 [2.4, Array Section].
# Represents an @available(...) check.
# Fixed point literal.
# OpenMP 5.0 [2.1.4, Array Shaping].
# OpenMP 5.0 [2.1.6 Iterators].
# OpenCL's addrspace_cast<> expression.
# Expression that references a C++20 concept.
# Expression that references a C++20 requires expression.
# Expression that references a C++20 parenthesized list aggregate
# initializer.
# Represents a C++26 pack indexing expression.
# A statement whose specific kind is not exposed via this interface.
# Unexposed statements have the same operations as any other kind of
# statement; one can extract their location information, spelling, children,
# etc. However, the specific kind of the statement is not reported.
# A labelled statement in a function.
# A compound statement
# A case statement.
# A default statement.
# An if statement.
# A switch statement.
# A while statement.
# A do statement.
# A for statement.
# A goto statement.
# An indirect goto statement.
# A continue statement.
# A break statement.
# A return statement.
# A GNU-style inline assembler statement.
# Objective-C's overall @try-@catch-@finally statement.
# Objective-C's @catch statement.
# Objective-C's @finally statement.
# Objective-C's @throw statement.
# Objective-C's @synchronized statement.
# Objective-C's autorelease pool statement.
# Objective-C's for collection statement.
# C++'s catch statement.
# C++'s try statement.
# C++'s for (* : *) statement.
# Windows Structured Exception Handling's try statement.
# Windows Structured Exception Handling's except statement.
# Windows Structured Exception Handling's finally statement.
# A MS inline assembly statement extension.
# The null statement.
# Adaptor class for mixing declarations with statements and expressions.
# OpenMP parallel directive.
# OpenMP SIMD directive.
# OpenMP for directive.
# OpenMP sections directive.
# OpenMP section directive.
# OpenMP single directive.
# OpenMP parallel for directive.
# OpenMP parallel sections directive.
# OpenMP task directive.
# OpenMP master directive.
# OpenMP critical directive.
# OpenMP taskyield directive.
# OpenMP barrier directive.
# OpenMP taskwait directive.
# OpenMP flush directive.
# Windows Structured Exception Handling's leave statement.
# OpenMP ordered directive.
# OpenMP atomic directive.
# OpenMP for SIMD directive.
# OpenMP parallel for SIMD directive.
# OpenMP target directive.
# OpenMP teams directive.
# OpenMP taskgroup directive.
# OpenMP cancellation point directive.
# OpenMP cancel directive.
# OpenMP target data directive.
# OpenMP taskloop directive.
# OpenMP taskloop simd directive.
# OpenMP distribute directive.
# OpenMP target enter data directive.
# OpenMP target exit data directive.
# OpenMP target parallel directive.
# OpenMP target parallel for directive.
# OpenMP target update directive.
# OpenMP distribute parallel for directive.
# OpenMP distribute parallel for simd directive.
# OpenMP distribute simd directive.
# OpenMP target parallel for simd directive.
# OpenMP target simd directive.
# OpenMP teams distribute directive.
# OpenMP teams distribute simd directive.
# OpenMP teams distribute parallel for simd directive.
# OpenMP teams distribute parallel for directive.
# OpenMP target teams directive.
# OpenMP target teams distribute directive.
# OpenMP target teams distribute parallel for directive.
# OpenMP target teams distribute parallel for simd directive.
# OpenMP target teams distribute simd directive.
# C++2a std::bit_cast expression.
# OpenMP master taskloop directive.
# OpenMP parallel master taskloop directive.
# OpenMP master taskloop simd directive.
# OpenMP parallel master taskloop simd directive.
# OpenMP parallel master directive.
# OpenMP depobj directive.
# OpenMP scan directive.
# OpenMP tile directive.
# OpenMP canonical loop.
# OpenMP interop directive.
# OpenMP dispatch directive.
# OpenMP masked directive.
# OpenMP unroll directive.
# OpenMP metadirective directive.
# OpenMP loop directive.
# OpenMP teams loop directive.
# OpenMP target teams loop directive.
# OpenMP parallel loop directive.
# OpenMP target parallel loop directive.
# OpenMP parallel masked directive.
# OpenMP masked taskloop directive.
# OpenMP masked taskloop simd directive.
# OpenMP parallel masked taskloop directive.
# OpenMP parallel masked taskloop simd directive.
# OpenMP error directive.
# OpenMP scope directive.
# OpenMP stripe directive.
# OpenACC Compute Construct.
# Other Kinds
# Cursor that represents the translation unit itself.
# The translation unit cursor exists primarily to act as the root cursor for
# traversing the contents of a translation unit.
# Attributes
# An attribute whose specific kind is not exposed via this interface
# Preprocessing
# Extra declaration
# A module import declaration.
# A type alias template declaration
# A static_assert or _Static_assert node
# A friend declaration
# A concept declaration
# A code completion overload candidate.
### Template Argument Kinds ###
### Exception Specification Kinds ###
### Cursors ###
# This function is not null-guarded because it is used in cursor_null_guard itself
# Not null-guarded for consistency with __eq__
# TODO: Should probably check that this is either a reference or
# declaration prior to issuing the lookup.
# Figure out the underlying type of the enum to know if it
# is a signed or unsigned quantity.
# If this triggers an AttributeError, the instance was not properly
# created.
# FIXME: Expose iteration from CIndex, PR6125.
# FIXME: Document this assertion in API.
# Create reference to TU so it isn't GC'd before Cursor.
# Store a reference to the TU in the Python object so it won't get GC'd
# before the Cursor.
### Availability Kinds ###
### C++ access specifiers ###
### Type Kinds ###
# FIXME Support slice objects.
# instantiated.
## CIndex Objects ##
# CIndex objects (derived from ClangObject) are essentially lightweight
# wrappers attached to some underlying object, which is exposed via CIndex as
# a void*.
# Function calls through the python interface are rather slow. Fortunately,
# for most symbols, we do not need to perform a function call. Their spelling
# never changes and is consequently provided by this spelling cache.
# 0: CompletionChunk.Kind("Optional"),
# 1: CompletionChunk.Kind("TypedText"),
# 2: CompletionChunk.Kind("Text"),
# 3: CompletionChunk.Kind("Placeholder"),
# 4: CompletionChunk.Kind("Informative"),
# 5 : CompletionChunk.Kind("CurrentParameter"),
# CompletionChunk.Kind("LeftParen"),
# CompletionChunk.Kind("RightParen"),
# CompletionChunk.Kind("LeftBracket"),
# CompletionChunk.Kind("RightBracket"),
# CompletionChunk.Kind("LeftBrace"),
# CompletionChunk.Kind("RightBrace"),
# CompletionChunk.Kind("LeftAngle"),
# CompletionChunk.Kind("RightAngle"),
# CompletionChunk.Kind("Comma"),
# 15: CompletionChunk.Kind("ResultType"),
# CompletionChunk.Kind("Colon"),
# CompletionChunk.Kind("SemiColon"),
# CompletionChunk.Kind("Equal"),
# CompletionChunk.Kind("HorizontalSpace"),
# 20: CompletionChunk.Kind("VerticalSpace")
# We do not use @CachedProperty here, as the manual implementation is
# apparently still significantly faster. Please profile carefully if you
# would like to add CachedProperty back.
# Defining __getitem__ and __len__ is enough to make an iterable
# but the typechecker doesn't understand that.
# Default parsing mode.
# Instruct the parser to create a detailed processing record containing
# metadata not normally retained.
# Indicates that the translation unit is incomplete. This is typically used
# when parsing headers.
# Instruct the parser to create a pre-compiled preamble for the translation
# unit. This caches the preamble (included files at top of source file).
# This is useful if the translation unit will be reparsed and you don't
# want to incur the overhead of reparsing the preamble.
# Cache code completion information on parse. This adds time to parsing but
# speeds up code completion.
# Flags with values 16 and 32 are deprecated and intentionally omitted.
# Do not parse function bodies. This is useful if you only care about
# searching for declarations/definitions.
# Used to indicate that brief documentation comments should be included
# into the set of code completions returned from this translation unit.
# Automatically adapt CIndex/ctype pointers to python objects
# Copy a reference to the TranslationUnit to prevent premature GC.
# An unknown error occurred
# The database could not be loaded
# Keep a reference to the originating CompileCommands
# to prevent garbage collection
# Now comes the plumbing to hook up the C library.
# Register callback types
# Functions strictly alphabetical order.
# ("clang_disposeCXTUResourceUsage",
# ("clang_getCXTUResourceUsage",
# A function may not exist, if these bindings are used with an older or
# incompatible version of libclang.so.
# If the object is already a plain string, skip __html__ check and string
# conversion. This is the most common use case.
# Use type(s) instead of s.__class__ because a proxy object may be reporting
# the __class__ of the proxied value.
# a tuple of arguments, each wrapped
# a mapping of arguments, wrapped
# a single argument, wrapped with the helper and a tuple
# Look for comments then tags separately. Otherwise, a comment that
# contains a tag would end early, leaving some of the comment behind.
# keep finding comment start marks
# find a comment end mark beyond the start, otherwise stop
# remove tags using the same method
# collapse spaces
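The comments above describe markupsafe's striptags ordering: remove HTML comments before tags, so a tag inside a comment cannot terminate the comment early, then collapse whitespace. A simplified regex sketch of the same idea (markupsafe's real implementation scans start/end marks manually rather than using regexes):

```python
import re

def striptags(value: str) -> str:
    # Remove comments first: a tag inside a comment must not end it early.
    value = re.sub(r"<!--.*?-->", "", value, flags=re.DOTALL)
    # Then remove tags with the same non-greedy approach.
    value = re.sub(r"<.*?>", "", value, flags=re.DOTALL)
    # Finally collapse runs of whitespace into single spaces.
    return " ".join(value.split())
```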
# We need to make sure the format spec is str here as
# otherwise the wrong callback methods are invoked.
#!/usr/bin/env python3
# /tmp/hypr moved to $XDG_RUNTIME_DIR/hypr in #5788
# We only use it on Hyprland
# We have no way to check it on sway
# 1. Mirroring is impossible to check in any way. We need to parse back the monitors.conf file, and it sucks.
# skip comments
# 2. This won't work w/ Hyprland <= 0.36.0
# split "Hz"
# to identify Gdk.Monitor
# We used to assign Gdk.Monitor to output on the basis of x and y coordinates, but it no longer works,
# starting from gtk3-1:3.24.42: all monitors have x=0, y=0. This is most likely a bug, but from now on
# we must rely on gdk monitors order.
# We will read all the meaningful lines if -n argument not given or >= number of lines.
# Binding workspaces to a monitor, e.g.:
# 'workspace=1,monitor:desc:AOC 2475WR F17H4QA000449' or
# 'workspace=1,monitor:HDMI-A-1'
# This was done by mistake, and the config file needs to be migrated to the proper path
# Create empty files if not found
# Active outputs, listed from the sway tree; stores name and all attributes.
# Just a dictionary "name": is_active - from get_outputs()
# "workspace_num": "display_name"
# Glade form fields
# Value from config adjusted to current view scale
# basic vocabulary (for en_US)
# translate if translation available
# localized vocabulary
# offset == distance of parent widget from edge of screen ...
# plus distance from pointer to edge of widget
# max_x, max_y both relative to the parent
# note that we're rounding down now so that these max values don't get
# rounded upward later and push the widget off the edge of its parent.
# x_root,y_root relative to screen
# x,y relative to parent (fixed widget)
# px,py stores previous values of x,y
# get starting values for x,y
# make sure the potential coordinates x,y:
# Just in case ;)
# not really from the widget, but from the global value
# This is just to set active_id
# Output properties
# self.modes = modes
# converts "enabled | disabled" to bool
# Button properties
# save config file
# If the output has just been turned back on, Gdk.Display.get_default() may need some time
# just active outputs have their buttons
# Check if the outputs file exists
# Load a backup to restore settings if needed
# avoid looking up the hardware name
# GtkLayerShell.set_keyboard_mode(confirm_win, GtkLayerShell.KeyboardMode.ON_DEMAND)
# Parse backup file back to commands and execute them
# omit comments & empty lines
# remove "{"
# convert multiple spaces into single
# execute line by line
# Don't execute any command here, just save the file and wait for Hyprland to notice and apply the change.
# Let's give it some time to do it before refreshing UI.
# migrate old config file, if not yet migrated
# if sway:
# Gtk.Fixed does not respect expand properties. That's why we need
# to scale the window automagically if opened as a floating_con
# SecretStorage module for Python
# Access passwords using the SecretService DBus API
# Author: Dmitry Shachnev, 2013-2018
# License: 3-clause BSD, see LICENSE file
# PKCS-7 style padding
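The "PKCS-7 style padding" noted above pads a message to a multiple of the cipher block size by appending N bytes, each with value N (secretstorage uses this with 16-byte AES blocks). A minimal sketch, with the usual validation on unpad:

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    # n is always in 1..block_size; a full extra block is added when
    # the input is already block-aligned.
    n = block_size - len(data) % block_size
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    n = data[-1]
    if not 1 <= n <= len(data) or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid PKCS-7 padding")
    return data[:-n]
```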
# Author: Dmitry Shachnev, 2013-2020
# os.environ['DBUS_SESSION_BUS_ADDRESS'] may raise it
# Author: Dmitry Shachnev, 2013-2016
# This file contains some common defines.
# Author: Dmitry Shachnev, 2014-2018
# Needed for mypy
# A standard 1024 bits (128 bytes) prime number for use in Diffie-Hellman exchange
# type: Optional[str]
# type: Optional[bytes]
# 128-bytes-long strong random number
# Prepend NULL bytes if needed
# HKDF with null salt, empty info and SHA-256 hash
# Resulting AES key should be 128-bit
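The key-derivation comments above describe HKDF with an all-zero salt, empty info, and SHA-256, yielding a 128-bit AES key. A from-scratch RFC 5869 sketch of that derivation (secretstorage itself uses a crypto library for this):

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, length: int = 16) -> bytes:
    """HKDF (RFC 5869) with an all-zero salt and empty info."""
    # A null salt is defined as hash-length zero bytes.
    salt = b"\x00" * hashlib.sha256().digest_size
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand step
        block = hmac.new(prk, block + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]
```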
# Author: Dmitry Shachnev, 2013-2022
# GNOME Keyring provides session collection where items
# are stored in process memory.
# Author: Dmitry Shachnev, 2012-2018
# This file is part of avahi.
# avahi is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation; either version 2 of the
# License, or (at your option) any later version.
# avahi is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
# License for more details.
# License along with avahi; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA.
# Some definitions matching those in avahi-common/defs.h
# Python 3: iterating over bytes yields ints
# Python 2: iterating over str yields str
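The two comments above flag the classic portability wrinkle: iterating over `bytes` yields ints in Python 3 but one-character strings in Python 2. A small helper in the spirit of the avahi module (the function name is illustrative):

```python
def byte_values(data):
    """Return the integer value of each byte, regardless of iteration type."""
    # ints pass through (Python 3 bytes); str elements go through ord().
    return [b if isinstance(b, int) else ord(b) for b in data]
```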
#!/usr/bin/python
# -*-python-*-
# copied from setuptools.logging, omitting monkeypatching
# Handle wrong byte order
# Xcode will not set the deployment target below 11.0.0
# for the arm64 architecture. Ignore the arm64 deployment
# in fat binaries when the target is 11.0.0, that way
# the other architectures can select a lower deployment
# target.
# This is safe because there is no arm64 variant for
# macOS 10.15 or earlier.
# the macosx platform tag does not support minor bugfix releases
# ensure Python logging is configured
# setuptools < ??
# pip pull request #3497
# packaging pull request #234
# TODO armv8l, packaging pull request #690 => this did not land
# in pip/packaging yet
# non-Windows
# Windows
# we want something like pypy36-pp73
# needed for correct `wheel_dist_name`
# Support legacy [wheel] section for setting universal
# please don't define this in your global configs
# bdist sets self.plat_name if unset, we should only use it for purepy
# wheels if the user supplied it.
# macosx contains system version in platform name so need special handle
# on macosx always limit the platform name to comply with any
# c-extension modules in bdist_dir, since the user can specify
# a higher MACOSX_DEPLOYMENT_TARGET via tools like CMake
# on other platforms, and on macosx if there are no c-extension
# modules, use the default platform name.
# We don't work on CPython 3.1, 3.0.
# issue gh-374: allow overriding plat_name
# A wheel without setuptools scripts is more cross-platform.
# Use the (undocumented) `no_ep` option to setuptools'
# install_scripts command to avoid creating entry point scripts.
# Use a custom scheme for the archive, because we have to decide
# at installation time which scheme to use.
# win32 barfs if any of these are ''; could be '.'?
# (distutils.command.install:change_roots bug)
# Make the archive
# Add to 'Distribution.dist_files' so that the "upload" command works
# like 3.7
# of the spec
# Doesn't work for bdist_wininst
# copied from dir_util, deleted
# Setuptools has resolved any patterns to actual file names
# Setuptools recognizes the license_files option but does not do globbing
# Prior to those, wheel is entirely responsible for handling license files
# There is no egg-info. This is probably because the egg-info
# file/directory is not named matching the distribution name used
# to name the archive file. Check for this case and report
# accordingly.
# .egg-info is a single file
# .egg-info is a directory
# ignore common egg metadata that is useless to wheel
# delete dependency_links if it is only whitespace
# Non-greedy matching of an optional build number may be too clever (more
# invalid wheel filenames will match). Separate regex for .dist-info?
# 1980-01-01 00:00:00 UTC
# Some applications need reproducible .whl files, but they can't do this without
# forcing the timestamp of the individual ZipInfo objects. See issue #143.
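As the comments above explain, reproducible wheels require pinning every ZipInfo timestamp (zip cannot store dates before 1980-01-01, so that epoch is the conventional fixed value). A sketch of the technique, independent of wheel's own code:

```python
import io
import zipfile

ZIP_EPOCH = (1980, 1, 1, 0, 0, 0)  # earliest date_time a zip can hold

def write_reproducible(entries: dict) -> bytes:
    """Build a zip whose bytes depend only on the entries' names/contents."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name in sorted(entries):  # fixed entry order
            info = zipfile.ZipInfo(name, date_time=ZIP_EPOCH)
            info.compress_type = zipfile.ZIP_DEFLATED
            info.external_attr = 0o644 << 16  # fixed unix permissions
            zf.writestr(info, entries[name])
    return buf.getvalue()
```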
# Ignore RECORD and any embedded wheel signatures
# Fill in the expected hashes by reading them from RECORD
# Monkey patch the _update_crc method to also check for the hash from
# RECORD
# Sort the directory names so that `os.walk` will walk them in a
# defined order on the next iteration.
# Write RECORD
# setuptools extra:condition syntax
# Those will be regenerated from `requires.txt`.
# if the first line of long_description is blank,
# the first line here will be indented.
# needed for console script
# To be able to run 'python wheel-0.9.whl/wheel':
# Better integration/compatibility with setuptools:
# in the case new fixes or PEPs are implemented in setuptools
# there is no need to backport them to the deprecated code base.
# This is useful in the case of old packages in the ecosystem
# that are still used but have low maintenance.
# Only used in the case of old setuptools versions.
# If the user wants to get the latest fixes/PEPs,
# they are encouraged to address the deprecation warning.
# Start changing as needed
# preserve the comment
# Find the .dist-info directory
# Determine the target wheel filename
# Read the tags and the existing build number from .dist-info/WHEEL
# Set the wheel file name and add/replace/remove the Build tag in .dist-info/WHEEL
# Reassemble the tags for the wheel file
# Repack the wheel
# Set permissions to the same values as they were set in the archive
# We have to do this manually due to
# https://github.com/python/cpython/issues/59999
# Binary wheels are assumed to be for CPython
# Skip pure directory entries
# Handle files in the egg-info directory specially, selectively moving
# them to the dist-info directory while converting as needed
# For any other file, just pass it through
# Determine the initial architecture and Python version from the file name
# (if possible)
# Look for an .egg-info directory and any .pyd files for more precise info
# Write the METADATA file
# Write the WHEEL file
# metaclass implementation idea from
# http://blog.ianbicking.org/more-on-python-metaprogramming-comment-14.html
# control verbosity of error output
# parse iid out of tag if needed
# check for transports that may be tunneled
# Copyright (c) 2006 Andy Gross.  See LICENSE.txt for details.
# Both the community strings "public" and "private"
# cannot be used to set variables using "snmpset"
# operations. Run the "snmpset" tests after replacing
# the following 'Community' string with any other
# configured community string from the snmpd.conf file.
# snmpset fails for the 'sysLocation' variable,
# as the syslocation token is configured in the
# snmpd.conf file, which disables write access
# to the variable.
# Hence using the 'sysName' variable for the set tests.
# GetBulk is not supported for v1
# snmpEngineID.0
# Copyright (C) 2020 The Psycopg Team
# If there is already a pipeline, ride it, in order to avoid
# sending unnecessary Sync.
# Otherwise, make a new one
# Try to cancel the query, then consume the results
# already received.
# Try to get out of ACTIVE state. Just do a single attempt, which
# should work to recover from an error or query cancelled.
# If a fresher result has been set on the cursor by the Copy object,
# read its properties (especially rowcount).
# Copyright (C) 2021 The Psycopg Team
# drop with Python 3.8
# Row factories
# Implementation detail: make sure this is the tuple type itself, not an
# equivalent function, because the C code fast-paths on it.
# "describe" in named cursors
# Record if a dumper or loader has an optimised version.
# implement the AdaptContext protocol too
# Register the dumper both as its format and as auto
# so that the last dumper registered is used in auto (%s) format
# Register the dumper by oid, if the oid of the dumper is fixed
# Fast path: the class has a known dumper.
# If the KeyError was caused by cls missing from dmap, let's
# look for different cases.
# Look for the right class, including looking at superclasses
# If the adapter is not found, look for its name as a string
# Replace the class name with the class itself
# Check if the class comes from psycopg.types and there is a class
# with the same name in psycopg_c._psycopg.
# Micro-optimization: copying these objects is faster than creating new dicts
# noqa: F401 import early to stabilize side effects
# Set the logger to a quiet default, can be enabled if needed
# DBAPI compliance
# register default adapters for PostgreSQL
# exposed by the package
# After the default ones, because these can deal with the bytea oid better
# Must come after all the types have been registered
# Note: defining the exported methods helps both Sphinx, in documenting that
# this is the canonical place to obtain them, and MyPy, so that function
# signatures are consistent with the documentation.
# DBAPI exports
# DBAPI type constructors and singletons
# Global objects with PostgreSQL builtins and globally registered user types.
# Global adapter maps with PostgreSQL types configuration
# Use tools/update_oids.py to update this data.
# autogenerated: start
# Generated from PostgreSQL 17.0
# autogenerated: end
# Both numpy Decimal and uint64 dumpers use the numeric oid, but the former
# covers the entire numeric domain, whereas the latter only deals with
# integers. For this reason, if we specify dumpers by oid, we want to make
# sure to get the Decimal dumper. We enforce that by registering the
# numeric dumpers last.
# Size of data to accumulate before sending it down the network. We fill a
# buffer this size field by field, and when it passes the threshold size
# we ship it, so it may end up being bigger than this.
# Maximum data size we want to queue to send to the libpq copy. Sending a
# buffer too big to be handled can cause an infinite loop in the libpq
# (#255) so we want to split it in more digestible chunks.
# Note: making this buffer too large, e.g.
# MAX_BUFFER_SIZE = 1024 * 1024
# makes operations *way* slower! Probably triggering some quadraticity
# in the libpq memory management and data sending.
# Max size of the write queue of buffers. More than that copy will block
# Each buffer should be around BUFFER_SIZE size.
# On certain systems, memmove seems particularly slow and flushing often
# performs better than accumulating a larger buffer. See #746 for details.
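The buffer-splitting policy described above can be sketched as follows; the size constants are assumptions standing in for the real tuning values:

```python
# Hypothetical sizes mirroring the comments above (values are assumptions).
BUFFER_SIZE = 32 * 1024            # accumulate roughly this much per send
MAX_BUFFER_SIZE = 4 * BUFFER_SIZE  # never hand the libpq more than this

def split_buffer(data: bytes, max_size: int = MAX_BUFFER_SIZE):
    """Split an oversized buffer into chunks the libpq can digest.

    The common case (a buffer within the limit) makes no copy at all;
    only buffers above the threshold are sliced into smaller pieces.
    """
    if len(data) <= max_size:
        return [data]  # most common path: no copy needed
    return [data[i : i + max_size] for i in range(0, len(data), max_size)]
```

Splitting before queueing keeps each individual send small, which is the point of the workaround for the libpq infinite-loop issue mentioned above.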
# High level copy protocol generators (state change of the Copy object)
# res is the final PGresult
# This result is a COMMAND_OK which has info about the number of rows
# returned, but not about the columns, which is instead an information
# that was received on the COPY_OUT result at the beginning of COPY.
# So, don't replace the results in the cursor, just update the rowcount.
# Get the final result to finish the copy operation
# true if the user is using write_row()
# Note down that we are writing in row mode: it means we will have
# to take care of the end-of-copy marker too
# Assume, for simplicity, that the user is not passing stupid
# things to the write function. If that's the case, things
# will fail downstream.
# If we have sent no data we need to send the signature
# and the trailer
# if we have sent data already, we have sent the signature
# too (either with the first row, or we assume that in
# block mode the signature is included).
# Write the trailer only if we are sending rows (with the
# assumption that whoever is copying binary data is sending the
# whole format).
# drop \n
# Signature
# extra length
# Override functions with fast versions if available
# Make them also the default dumpers when dumping by bytea oid
# Copyright (C) 2024 The Psycopg Team
# ASYNC:
# We couldn't resolve anything
# Order matters: first try all the load-balanced host in standby mode,
# then allow primary
# Local path, or no host to resolve
# Already resolved
# If the host is already an ip address don't try to resolve it
# COPY_OUT results have columns but no name
# these are tuples so they can be used as keys e.g. in prepared stmts
# The format requested by the user and the ones to really pass Postgres
# Avoid caching queries extremely long or with a huge number of
# parameters. They are usually generated by ORMs and have poor
# cacheability (e.g. INSERT ... VALUES (...), (...) with varying
# numbers of tuples).
# see https://github.com/psycopg/psycopg/discussions/628
# Try concrete types, then abstract types
# The type of the _query2pg() and _query2pg_nocache() methods
# last part
# Note: the cache size is 128 items, but someone has reported throwing ~12k
# queries (of type `INSERT ... VALUES (...), (...)` with a varying amount of
# records), and the resulting cache size is >100Mb. So, we will avoid caching
# large queries or queries with a large number of params. See
# https://github.com/sqlalchemy/sqlalchemy/discussions/10270
# pairs [(fragment, match)], with the last match None
# drop the "%%", validate
# unescape '%%' to '%' if necessary, then merge the parts
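The fragment/match scan described above can be sketched like this. It is a hypothetical re-implementation, not the real parser: it splits the query on `%`-tokens into pairs (last token `None`), validates them, unescapes `%%` to `%`, and swaps `%s` for numbered server-side placeholders:

```python
import re

def split_query(query: str) -> str:
    """Sketch of the placeholder scan: validate '%'-tokens, unescape
    '%%' to '%', and number the '%s' placeholders PostgreSQL-style."""
    pairs = []
    last = 0
    for m in re.finditer(r"%(.|$)", query):
        pairs.append((query[last:m.start()], m.group(0)))
        last = m.end()
    pairs.append((query[last:], None))  # the last token is None

    out = []
    n = 0
    for frag, tok in pairs:
        if tok is None:
            out.append(frag)
        elif tok == "%%":
            out.append(frag + "%")      # unescape '%%' to '%', then merge
        elif tok == "%s":
            n += 1
            out.append(frag + f"${n}")  # numbered placeholder
        else:
            # explicit message for a typical error: a lone or unknown '%'
            raise ValueError(f"invalid placeholder: {tok!r}")
    return "".join(out)
```

Note that `re.finditer` yields non-overlapping matches left to right, so `%%%s` is correctly read as an escaped percent followed by a placeholder.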
# explicit message for a typical error
# Index or name
# ASYNC
# Create a new exception with the same type as the last one, containing
# all attempt errors while preserving backward compatibility.
# try to rollback, but if there are problems (connection in a bad
# state) just warn without clobbering the exception bubbling up.
# Close the connection only if it doesn't belong to a pool.
# TODO: maybe send a cancel on close, if the connection is ACTIVE?
# Allow interrupting the wait with a signal by reducing a long timeout
# into shorter intervals.
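The interruptible-wait idea above (slice a long timeout into short sleeps so a signal can break in) could look like this; the function and interval names are hypothetical:

```python
import time

# Hypothetical slice length; the real value is an implementation detail.
_WAIT_INTERVAL = 0.1

def wait_until(predicate, timeout: float) -> bool:
    """Wait for predicate() to become true, sleeping in short slices so
    that a signal (e.g. Ctrl-C) interrupts a long timeout promptly.

    The predicate is checked before the deadline, so timeout=0 still
    polls at least once.
    """
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        remaining = deadline - time.monotonic()
        if remaining <= 0.0:
            return False
        # A short sleep is interruptible by KeyboardInterrupt on the
        # next bytecode boundary, unlike one long blocking wait.
        time.sleep(min(_WAIT_INTERVAL, remaining))
```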
# Remove the backlog deque for the duration of this critical
# section to avoid reporting notifies twice.
# if notifies were received when the generator was off,
# return them in a first batch.
# Emit the notifications received.
# Stop if we have received enough notifications.
# Check the deadline after the loop to ensure that timeout=0
# polls at least once.
# WARNING: reference loop, broken ahead.
# Currently only used internally by Cursor.executemany() in a branch
# in which we already established that the connection has no pipeline.
# If this changes we may relax the asserts.
# On Ctrl-C, try to cancel the query in the server, otherwise
# the connection will remain stuck in ACTIVE state.
# as expected
# This might result in a nested transaction. What we want is to leave
# the function with the connection in the state we found (either idle
# or intrans)
# to_regtype() introduced in PostgreSQL 9.4 and CockroachDB 22.2
# `to_regtype()` returns the type oid or NULL, unlike the :: operator,
# which returns the type or raises an exception, which requires
# a transaction rollback and leaves traces in the server logs.
# Make a shallow copy: it will become a proper copy if the registry
# is edited.
# Allow info to customise further their relation with the registry
# Time to write! so, copy.
# WARNING: this file is auto-generated by 'async_to_sync.py'
# from the original file '_copy_async.py'
# DO NOT CHANGE! Change the original file instead.
# Copyright (C) 2023 The Psycopg Team
# End user sync interface
# The server has already finished sending copy data. The connection
# is already in a good state.
# Throw a cancel to the server, then consume the rest of the copy data
# (which might or might not have been already transferred entirely to
# the client, so we won't necessarily see the exception associated with
# canceling).
# Most used path: we don't need to split the buffer in smaller
# bits, so don't make a copy.
# Copy a buffer too large in chunks to avoid causing a memory
# error in the libpq, which may cause an infinite loop (#255).
# The QueryCanceled is expected if we sent an exception message to
# pgconn.put_copy_end(). The Python exception that triggered that
# cancellation is more important, so don't clobber it.
# Propagate the error to the main thread.
# warning: reference loop, broken by _write_end
# If the worker thread raises an exception, re-raise it to the caller.
# break reference loops if any
# Check if the worker thread raised any exception before terminating.
# If issue #304 is detected, raise an error instead of dumping wrong data.
# Hack required on Python 3.8 because subclassing Queue[T] fails at runtime.
# https://stackoverflow.com/questions/45414066/mypy-how-to-define-a-generic-subclass
# Always specify a timeout to make the wait interruptible.
# from the original file '_pipeline_async.py'
# Don't clobber an exception raised in the block with this one
# The object that will be exposed by the module.
# Copyright (C) 2022 The Psycopg Team
# In pipeline mode always use PQsendQueryParams - see #314
# Multiple statements in the same query are not allowed anyway.
# If we can, let's use simple query protocol,
# as it can execute more than one statement in a single query.
# re-exports
# None if executemany() not executing, True/False according to returning state
# We return columns if we have nfields, but also if we don't but
# the query said we got tuples (mostly to handle the super useful
# query "SELECT ;")
# no-op
# Generators for the high level operations on the cursor
# Like for sync/async connections, these are implemented as generators
# so that different concurrency strategies (threads, asyncio) can use their
# own way of waiting (or better, `connection.wait()`).
# Check if the query is prepared or needs preparing
# The query must be executed without preparing
# If the query is not already prepared, prepare it.
# Then execute it.
# Update the prepare state of the query.
# If an operation requires flushing our prepared statements cache,
# it will be added to the maintenance commands to execute later.
# run the query
# End of single row results
# Errors, unexpected values
# The connection gets in an unrecoverable state if we attempt COPY in
# pipeline mode. Forbid it explicitly.
# Merge the params client-side
# Note: the only reason to override format is to correctly set
# binary loaders on server-side cursors, because send_describe_portal
# only returns a text result.
# COPY_OUT has never info about nrows. We need such result for the
# columns in order to return a `description`, but not overwrite the
# cursor rowcount (which was set by the Copy object).
# Received from execute()
# Received from executemany()
# In non-returning case, set rowcount to the cumulated number of
# rows of executed queries.
# Don't reset the query because it may be useful to investigate after
# an error.
# from the original file 'cursor_async.py'
# from the original file '_server_cursor_async.py'
# Postgres doesn't have a reliable way to report a cursor out of bounds
# To debug slowdown during connection:
# This call may read notifies: they will be saved in the
# PGconn buffer and passed to Python later, in `fetch()`.
# What might have happened here is that a previous error disconnected
# the connection, for example an idle-in-transaction timeout.
# Check if we had received an error before: if that's the case
# just exit the loop. Our callers will handle this error and raise
# it as an exception.
# After entering copy mode the libpq will create a phony result
# for every request so let's break the endless loop.
# PIPELINE_SYNC is not followed by a NULL, but we return it alone
# similarly to other result sets.
# This shouldn't happen, but insisting hard enough, it will.
# For instance, in test_executemany_badquery(), with the COPY
# statement and the AsyncClientCursor, which disables
# prepared statements.
# Bail out from the resulting infinite loop.
# Consume notifies
# would block
# some data
# Retrieve the final result of copy
# TODO: too brutal? Copy worked.
# Retry enqueuing data until successful.
# WARNING! This can cause an infinite loop if the buffer is too large. (see
# ticket #255). We avoid it in the Copy object by splitting a large buffer
# into smaller ones. We prefer to do it there instead of here in order to
# do it upstream of the queue, decoupling the writer task from the producer one.
# Flushing often has a good effect on macOS because memcpy operations
# seem expensive on this platform so accumulating a large buffer has a
# bad effect (see #745).
# Repeat until the message is flushed to the server
# Retry enqueuing end copy message until successful
# Nested pipeline case.
# Notice that this error might be pretty irrecoverable. It
# happens on COPY, for instance: even if sync succeeds, exiting
# fails with "cannot exit pipeline mode with uncollected results"
# No more results to fetch, but there may still be pending
# Wrappers to force numbers to be cast as specific PostgreSQL types
# These types are implemented here but exposed by `psycopg.types.numeric`.
# They are defined here to avoid a circular import.
# Yes, it may change on __enter__. No, I don't care, because the
# un-entered state is outside the public interface.
# Clobber an exception raised in the block with the exception
# caused by the out-of-order transaction detected, to make the
# behaviour consistent with _commit_gen and to make sure the
# user fixes this condition, which is unrelated to any
# operational error that might arise in the block.
# Also clear the prepared statements cache.
# Swallow the exception
# outer transaction: if no name it's only a begin, else
# there will be an additional savepoint
# inner transaction: it always has a name
# A single attempt to make. Don't mangle the conninfo string.
# Now all lists are either empty or have the same length
# TODO: check if in service
# Number of times a query is executed before it is prepared.
# Maximum number of prepared statements on the connection.
# Map (query, types) to the number of times the query was seen.
# Map (query, types) to the name of the statement if prepared.
# Counter to generate prepared statements names
# The user doesn't want this query to be prepared
# The query was already prepared in this session
# The query has been executed enough times and needs to be prepared
# The query is not to be prepared yet
# We cannot prepare a multiple statement
# We don't prepare failed queries or other weird results
# don't do anything if prepared statements are disabled
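The seen-count bookkeeping described above (prepare a query only after it has been executed enough times, up to a cap of prepared statements) might be sketched like this; the class, constants, and naming scheme are assumptions:

```python
from collections import OrderedDict

# Hypothetical thresholds standing in for the real configuration.
PREPARE_THRESHOLD = 5   # times a query is seen before it is prepared
PREPARED_MAX = 100      # max prepared statements on the connection

class PrepareManager:
    def __init__(self):
        # Map (query, types) to the number of times the query was seen.
        self._counts = {}
        # Map (query, types) to the statement name if prepared
        # (ordered, so the oldest entry can be evicted at PREPARED_MAX).
        self._names = OrderedDict()
        self._counter = 0  # counter to generate prepared statement names

    def should_prepare(self, key):
        """Return the statement name if already prepared in this session,
        True if the query has been seen enough times and needs preparing,
        False if it is not to be prepared yet."""
        if key in self._names:
            return self._names[key]
        count = self._counts.get(key, 0) + 1
        self._counts[key] = count
        if count < PREPARE_THRESHOLD:
            return False
        self._counter += 1
        name = b"_stmt_%d" % self._counter
        self._names[key] = name
        del self._counts[key]
        return True
```

A real implementation would also deallocate the oldest statement when `len(self._names)` exceeds `PREPARED_MAX`; that maintenance step is omitted here.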
# from the original file '_conninfo_attempts_async.py'
# WARNING: don't store context, or you'll create a loop with the Cursor
# mapping fmt, class -> Dumper instance
# mapping fmt, oid -> Dumper instance
# Not often used, so create it only if needed.
# mapping fmt, oid -> Loader instance
# sequence of load functions from value to python
# the length of the result columns
# mapping oid -> type sql representation
# If we have dumpers, it means set_dumper_types had been called, in
# which case self.types and self.formats are set to sequences of the
# right size.
# If the result is quoted, and the oid not unknown or text,
# add an explicit type cast.
# Check the last char because the first one might be 'E'.
# builtin: prefer "timestamptz" to "timestamp with time zone"
# Normally, the type of the object dictates how to dump it
# Reuse an existing Dumper class for objects of the same type
# If it's the first time we see this type, look for a dumper
# configured for it.
# Check if the dumper requires an upgrade to handle this specific value
# If it does, ask the dumper to create its own upgraded version
# As per docs: an empty string means the build default, not e.g.
# something configured by PGPORT
# https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-PORT
# Get the known defaults to avoid reporting them
# Not returned by the libpq. Bug? Bet we're using SSH.
# re-exported
# On windows we cannot use os.fstat() to check a socket.
# Check if it was a timeout or we were disconnected
# Use an event to block and restart after the fd state changes.
# Not sure this is the best implementation but it's a start.
# Assume the connection was closed
# Specialised implementation of wait functions.
# Unlikely: the exception should have been raised above
# This happens on macOS but not on Linux (the xl list is set)
# If not imported, don't import it.
# Choose the best wait strategy for the platform.
# the selectors objects have a generic interface but come with some overhead,
# so we also offer more finely tuned implementations.
# Allow the user to choose a specific function for testing
# On Windows, for the moment, avoid using wait_c, because it was reported to
# use excessive CPU (see #645).
# TODO: investigate why.
# On Windows, SelectSelector should be the default.
# On linux, EpollSelector is the default. However, it hangs if the fd is
# closed while polling.
# type: ignore  # dnspython is currently optional and mypy fails if missing
# If hostaddr is defined don't do any resolution.
# If only one port is specified, it applies to all the hosts.
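The one-port-for-all-hosts convention noted above can be sketched with a small helper (the function name and error message are assumptions modelled on the libpq behaviour):

```python
def expand_ports(hosts, ports):
    """Pair each host with a port, following the libpq convention:
    a single port applies to every host; otherwise the lists must
    have the same length."""
    if len(ports) == 1:
        ports = ports * len(hosts)
    if len(ports) != len(hosts):
        raise ValueError(
            "could not match %d port numbers to %d hosts"
            % (len(ports), len(hosts))
        )
    return list(zip(hosts, ports))
```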
# ProgrammingError would have been more appropriate, but this is
# what is raised if the libpq fails to connect in the same case.
# No SRV entry found. Delegate to the libpq a QNAME=target lookup
# If there is precisely one SRV RR, and its Target is "." (the root
# domain), abort.
# Nothing found, we ended up with an empty list
# Divide the entries by priority:
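The priority grouping above follows the SRV selection rules of RFC 2782: lower priority values first, and within a priority group, entries picked with probability proportional to their weight. A sketch (the entry layout is an assumption):

```python
import random
from collections import defaultdict

def order_srv(entries):
    """Order SRV records: ascending priority groups first, then a
    weighted random pick within each group (RFC 2782 sketch)."""
    by_priority = defaultdict(list)
    for e in entries:
        by_priority[e["priority"]].append(e)
    out = []
    for prio in sorted(by_priority):
        group = by_priority[prio]
        while group:
            # +1 so that zero-weight entries can still be selected
            weights = [e["weight"] + 1 for e in group]
            i = random.choices(range(len(group)), weights=weights)[0]
            out.append(group.pop(i))
    return out
```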
# buffer object
# TODO: this is probably not the right way to whitelist pre
# pyre complains. Will wait for mypy to complain too to fix.
# init super() now to make the __repr__ not explode in case of error
# Literals
# Insert the name as the second word
# command_status is empty if the result comes from
# describe_portal, which means that we have just executed the DECLARE,
# so we can assume we are at the first row.
# If the cursor is being reused, the previous one must be closed.
# Set the format, which will be used by describe and fetch operations
# The above result only returned COMMAND_OK. Get the cursor shape
# if the connection is not in a sane state, don't even try
# If we are IDLE, a WITHOUT HOLD cursor will surely have gone already.
# if we didn't declare the cursor ourselves we still have to close it
# but we must make sure it exists.
# pipeline mode otherwise, unsupported here.
# If we are stealing the cursor, make sure we know its shape
# An object implementing the buffer protocol
# Waiting protocol types
# Adaptation types
# Unparsed xid: return the gtrid.
# XA xid: mash together the components.
# from the original file 'connection_async.py'
# mypy: disable-error-code="import-not-found, attr-defined"
# Note: "c" must be the first attempt so that mypy associates the variable
# with the right module interface. It will not result in Optional, but hey.
# A couple of special cases used a bit everywhere.
# re-exports
# Default timeout for a connection attempt.
# Arbitrary timeout, what applied by the libpq on my computer.
# Your mileage won't vary.
# If no kwarg specified don't mung the conninfo but check if it's correct.
# Make sure to return a string, not a subtype, to avoid making Liskov sad.
# Override the conninfo with the parameters
# Drop the None arguments
# Verify the result is valid
# Follow the libpq convention:
# - 0 or less means no timeout (but we will use a default to simulate it)
# - at least 2 seconds are always applied.
# See connectDBComplete in fe-connect.c
# The sync connect function will stop on the default socket timeout
# Because in async connection mode we need to enforce the timeout
# ourselves, we need a finite value.
# Enforce a 2s min
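The timeout normalisation described above could be sketched as follows; the function name and the finite stand-in for "no timeout" are assumptions:

```python
def normalize_connect_timeout(value):
    """Follow the libpq convention for connect_timeout (sketch):
    0 or less means no timeout, but the async path must enforce the
    timeout itself, so it needs a finite value; and at least 2 seconds
    are always applied, as connectDBComplete() does in fe-connect.c."""
    # A large but finite stand-in for "no timeout" (one year, arbitrary).
    FOREVER = 365 * 24 * 60 * 60.0
    if value is None:
        return FOREVER
    timeout = float(value)
    if timeout <= 0:
        return FOREVER
    return max(timeout, 2.0)  # enforce the 2s minimum
```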
# Row Type variable for Cursor (when it needs to be distinguished from the
# connection's one)
# DBAPI2 exposed exceptions
# None, but set to a copy of the global adapters map as soon as requested.
# Number of transaction blocks currently entered
# closed by an explicit close()
# xid, prepared
# Gather notifies when the notifies() generator is not running.
# It will be set to None during `notifies()` generator run.
# Attribute is only set if the connection is from a pool so we can tell
# apart a connection in the pool too (when _pool = None)
# Time after which the connection should be closed
# If fails on connection we might not have this attribute yet
# Connection correctly closed
# Connection in a pool so terminating with the program is normal
# Raise an exception if we are in a transaction
# implement the AdaptContext protocol
# cancel() is a no-op if the connection is closed;
# this allows using the method as a callback handler without caring
# about its lifetime.
# `_notifies_backlog` is None if the `notifies()` generator is running
# Generators to perform high-level operations on the connection
# These operations are expressed in terms of non-blocking generators
# and the task of waiting when needed (when the generators yield) is left
# to the connections subclass, which might wait either in blocking mode
# or through asyncio.
# All these generators assume exclusive access to the connection: subclasses
# should have a lock and hold it before calling and consuming them.
# Unless needed, use the simple query protocol, e.g. to interact with
# pgbouncer. In pipeline mode we always use the advanced query protocol
# instead, see #350
# Get out of a "pipeline aborted" state
# TPC supported on every supported PostgreSQL version.
# Handle sqlstate codes for which we don't have a class.
# To make the exception picklable
# PGresult is a protocol, can't use isinstance
# Connection Exception
# Feature Not Supported
# XQuery Error
# Case Not Found
# Cardinality Violation
# Data Exception
# Integrity Constraint Violation
# Invalid Cursor State
# Invalid Transaction State
# Invalid SQL Statement Name *
# Triggered Data Change Violation
# Invalid Authorization Specification
# Dependent Privilege Descriptors Still Exist
# Invalid Transaction Termination
# SQL Routine Exception *
# Invalid Cursor Name *
# External Routine Exception *
# External Routine Invocation Exception *
# Savepoint Exception *
# Invalid Catalog Name
# Invalid Schema Name
# Transaction Rollback
# Syntax Error or Access Rule Violation
# WITH CHECK OPTION Violation
# Insufficient Resources
# Program Limit Exceeded
# Object Not In Prerequisite State
# Operator Intervention
# System Error (errors external to PostgreSQL itself)
# Configuration File Error
# Foreign Data Wrapper Error (SQL/MED)
# PL/pgSQL Error
# Internal Error
# Error classes generated by tools/update_errors.py
# Class 02 - No Data (this is also a warning class per the SQL standard)
# Class 03 - SQL Statement Not Yet Complete
# Class 08 - Connection Exception
# Class 09 - Triggered Action Exception
# Class 0A - Feature Not Supported
# Class 0B - Invalid Transaction Initiation
# Class 0F - Locator Exception
# Class 0L - Invalid Grantor
# Class 0P - Invalid Role Specification
# Class 0Z - Diagnostics Exception
# Class 10 - XQuery Error
# Class 20 - Case Not Found
# Class 21 - Cardinality Violation
# Class 22 - Data Exception
# Class 23 - Integrity Constraint Violation
# Class 24 - Invalid Cursor State
# Class 25 - Invalid Transaction State
# Class 26 - Invalid SQL Statement Name
# Class 27 - Triggered Data Change Violation
# Class 28 - Invalid Authorization Specification
# Class 2B - Dependent Privilege Descriptors Still Exist
# Class 2D - Invalid Transaction Termination
# Class 2F - SQL Routine Exception
# Class 34 - Invalid Cursor Name
# Class 38 - External Routine Exception
# Class 39 - External Routine Invocation Exception
# Class 3B - Savepoint Exception
# Class 3D - Invalid Catalog Name
# Class 3F - Invalid Schema Name
# Class 40 - Transaction Rollback
# Class 42 - Syntax Error or Access Rule Violation
# Class 44 - WITH CHECK OPTION Violation
# Class 53 - Insufficient Resources
# Class 54 - Program Limit Exceeded
# Class 55 - Object Not In Prerequisite State
# Class 57 - Operator Intervention
# Class 58 - System Error (errors external to PostgreSQL itself)
# Class 72 - Snapshot Failure
# Class F0 - Configuration File Error
# Class HV - Foreign Data Wrapper Error (SQL/MED)
# Class P0 - PL/pgSQL Error
# Class XX - Internal Error
# Don't show a complete traceback upon raising these exceptions.
# Usually the traceback starts from internal functions (for instance in the
# server communication callbacks) but, for the end user, it's more important
# to get the high level information about where the exception was raised, for
# instance in a certain `Cursor.execute()`.
# "EUC_TW": not available in Python
# "MULE_INTERNAL": not available in Python
# this actually means no encoding, see PostgreSQL docs
# it is special-cased by the text loader.
# Add an alias without underscore, for lenient lookups
# namedtuple fields cannot start with underscore. So...
# Objects exported here
# noqa: F401 # reexport
# escaping and quoting
# This path is taken when quote is asked without a connection,
# usually it means by psycopg.sql.quote() or by
# 'Composable.as_string(None)'. More often than not this is done by
# someone generating a SQL file to consume elsewhere.
# No quoting, only quote escaping, random bs escaping. See further.
# b"\\" in memoryview doesn't work so search for the ascii value
# If the string has no backslash, the result is correct and we
# don't need to bother with standard_conforming_strings.
# The libpq has a crazy behaviour: PQescapeString uses the last
# standard_conforming_strings setting seen on a connection. This
# means that backslashes might be escaped or might not.
# A syntax E'\\' works everywhere, whereas E'\' is an error. OTOH,
# if scs is off, '\\' raises a warning and '\' is an error.
# Check what the libpq does, and if it doesn't escape the backslash
# let's do it on our own. Never mind the race condition.
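The quoting fallback described above can be sketched from the raw string: quotes are escaped by doubling, and if any backslash is present, backslashes are doubled too and the `E'...'` syntax is used, since `E'\\'` is valid regardless of the `standard_conforming_strings` setting while `'\'` is not. A hypothetical minimal version:

```python
def quote_string(s: bytes) -> bytes:
    """Minimal sketch of no-connection quoting: double the quotes; if
    backslashes are present, double them too and use E'...' syntax,
    which works whether standard_conforming_strings is on or off."""
    out = s.replace(b"'", b"''")
    if b"\\" in out:
        out = out.replace(b"\\", b"\\\\")
        return b" E'" + out + b"'"
    return b"'" + out + b"'"
```

The real implementation additionally has to cope with the libpq having already escaped (or not) the backslashes according to the last connection it saw, as the comments above explain; this sketch sidesteps that by always escaping on its own.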
# Check in src/interfaces/libpq/libpq-fe.h for updates.
# Only for cancel connections.
# from src/include/postgres_ext.h
# (hopefully) temporary hack: libpq not in a standard place
# https://github.com/orgs/Homebrew/discussions/3595
# If pg_config is available and agrees, let's use its indications.
# Note: this function is exposed by the pq module and was documented, therefore
# we are not going to remove it, but we don't use it internally.
# Don't pass the encoding if not specified, because different classes have
# different defaults (conn has its own encoding. others default to utf8).
# Possible prefixes to strip for error messages, in the known localizations.
# This regular expression is generated from PostgreSQL sources using the
# `tools/update_error_prefixes.py` script
# Put together the [STATUS]
# Put together the (CONNECTION)
# It might happen if a new connection status appears
# before upgrading the ConnStatus enum.
# import these names into the module on success as side effect
# The best implementation: fast but requires the system libpq installed
# Second best implementation: fast and stand-alone
# Pure Python implementation, slow and requires the system libpq installed.
# mypy: ignore-errors
# note_on_del: functions called on __del__ are imported as local values to
# avoid warnings on interpreter shutdown in case the module is gc'd before the
# object is destroyed.
# Not a weak reference on PyPy.
# avoid destroying the pgresult_ptr
# Keep alive for the lifetime of PGconn
# Close the connection only if it was created in this process,
# not if this object is being GC'd after fork.
# see note_on_del
# repurpose this function with a cheeky replacement of query with name,
# drop the param_types from the result
# convert bytearray/memoryview to bytes
# PQflush segfaults if it receives a NULL connection
# TODO: do it without copy
# TODO: might be done without copy (however C does that)
# TODO: might be able to do without a copy but it's a mess.
# the C library does it better anyway, so maybe not worth optimising
# https://mail.python.org/pipermail/python-dev/2012-September/121780.html
# out includes final 0
# not needed, but let's keep it symmetric with the escaping:
# if a connection is passed in, it must be valid.
# importing the ssl module sets up Python's libcrypto callbacks
# disable libcrypto setup in libpq, so it won't stomp on the callbacks
# that have already been set up
# Display the return value only if the function is declared to return
# something other than None.
# Likely this is a system using musl libc, see the following bug:
# https://github.com/python/cpython/issues/65821
# Get the libpq version to define what functions are available.
# libpq data types
# Function definitions as explained in PostgreSQL 12 documentation
# 33.1. Database Connection Control Functions
# PQconnectdbParams: doesn't seem useful, won't wrap for now
# PQsetdbLogin: not useful
# PQsetdb: not useful
# PQconnectStartParams: not useful
# 33.2. Connection Status Functions
# TODO: PQsslAttribute, PQsslAttributeNames, PQsslStruct, PQgetssl
# 33.3. Command Execution Functions
# PQresStatus: not needed, we have pretty enums
# TODO: PQresultVerboseErrorMessage
# 33.3.2. Retrieving Query Result Information
# PQfnumber: useless and hard to use
# not a null-terminated string
# PQprint: pretty useless
# 33.3.3. Retrieving Other Result Information
# 33.3.4. Escaping Strings for Inclusion in SQL Commands
# TODO: raises "wrong type" error
# PQescapeStringConn.argtypes = [
# PQescapeString.argtypes = [c_char_p, c_char_p, c_size_t]
# actually POINTER(c_ubyte) but this is easier
# 33.4. Asynchronous Command Processing
# 32.6. Retrieving Query Results in Chunks
# 33.6. Canceling Queries in Progress
# PQcancel.argtypes = [PGcancel_ptr, POINTER(c_char), c_int]
# 33.8. Asynchronous Notification
# 33.9. Functions Associated with the COPY Command
# 33.10. Control Functions
# 33.11. Miscellaneous Functions
# 33.12. Notice Processing
# 34.5 Pipeline Mode
# 33.18. SSL Support
# An object implementing the buffer protocol (ish)
# These objects will be imported lazily
# Exposed here
# microseconds, sec tz offset
# microseconds, days, months
# NOTE: whatever the PostgreSQL DateStyle input format (DMY, MDY, YMD)
# the YYYY-MM-DD is always understood correctly.
# Use (cls,) to report the need to upgrade to a dumper for timetz (the
# Frankenstein of the data types).
# Use (cls,) to report the need to upgrade (downgrade, actually) to a
# dumper for naive timestamp.
# The comma is parsed ok by PostgreSQL but it's not documented
# and it seems brittle to rely on it. CRDB doesn't consume it well.
# sql_standard format needs explicit signs
# otherwise -1 day 1 sec will mean -1 sec
# ISO
# German
# SQL or Postgres
# Pad the fraction of second to get micros
# Pad the fraction of second to get the micros
# Calculate timezone
# SQL
# Postgres
# Calculate timezone offset
# The return value is a datetime with the timezone of the connection
# (in order to be consistent with the binary loader, which is the only
# thing it can return). So create a temporary datetime object, in utc,
# shift it by the offset parsed from the timestamp, and then move it to
# the connection timezone.
# If we have created the temporary 'dt' it means that we have a
# datetime close to max, the shift pushed it past max, overflowing.
# In this case return the datetime in a fixed offset timezone.
# If we were asked about a timestamp which would overflow in UTC,
# but not in the desired timezone (e.g. datetime.max at Chicago
# timezone) we can still save the day by shifting the value by the
# timezone offset and then replacing the timezone.
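The shift-then-convert technique described above can be sketched as follows. The `load_timestamptz` helper and its simplified input format are hypothetical; a real loader parses the Postgres wire text and handles more offset forms:

```python
from datetime import datetime, timedelta, timezone

def load_timestamptz(value: str, conn_tz: timezone) -> datetime:
    # Hypothetical simplified loader: parse the naive part and the offset,
    # build a temporary datetime in UTC shifted by the parsed offset,
    # then move it to the connection timezone.
    naive = datetime.fromisoformat(value[:19])
    sign = 1 if value[19] == "+" else -1
    offset = timedelta(hours=int(value[20:22]))
    utc = (naive - sign * offset).replace(tzinfo=timezone.utc)
    return utc.astimezone(conn_tz)
```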
# will raise downstream
# date is first token
# year is last token
# Pad to get microseconds from a fraction of seconds
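Padding a fractional-seconds token to microseconds can be sketched as (hypothetical helper name):

```python
def micros_from_fraction(frac: str) -> int:
    # Pad (or truncate) the fraction-of-second digits to 6 places,
    # e.g. "5" -> 500000, "000123" -> 123.
    return int(frac.ljust(6, "0")[:6])
```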
# first register dumpers for 'timetz' oid, then the proper ones on time type.
# first register dumpers for 'timestamp' oid, then the proper ones
# on the datetime type.
# the server will raise DataError subclass if the string contains 0x00
# The next are concrete dumpers, each one specifying the oid they dump to.
# return bytes for SQL_ASCII db
# We cannot use the base quoting because escape_bytea already returns
# the content of the quotes. If scs is off it will escape the backslashes
# in the format; otherwise it won't, but it doesn't tell us what quotes
# to use.
# We don't have a connection, so someone is using us to generate a file
# to use off-line or something like that. With PQescapeBytea, like its
# string counterpart, it is not predictable whether backslashes will be
# escaped.
# NOTE: the order the dumpers are registered is relevant. The last one
# registered becomes the default for each type. Usually, binary is the
# default dumper. For text we use the text dumper as default because it
# plays the role of unknown, and it can be cast automatically to other
# types. However, before that, we register dumper with 'text', 'varchar',
# 'name' oids, which will be used when a text dumper is looked up by oid.
# it's a hex string in binary
# A friendly error warning instead of an AttributeError in case fetch()
# failed and it wasn't noticed.
# Default binary dump
# Cache all dynamically-generated types to avoid leaks in case the types
# cannot be GC'd.
# Fast-path if too small to contain any data.
# Register arrays and type info
# Generate and register customized dumpers
# Register the loaders on the oid
# noqa[F401]
# If changing load function globally, just change the default on the
# global class
# If the scope is smaller than global, create subclasses and register
# them in the appropriate scope.
# If this function has no code or closure, optimistically assume that it's
# an ok object, stable enough that it will not cause a leak. For example,
# it might be a C function (such as `orjson.loads()`).
# If there is a closure, the same code might have different effects
# according to the closure arguments. We could do something funny like
# using the closure values to build a cache key, but I am not 100% sure
# about whether the closure objects are always `cell` (the type says it's
# `cell | Any`) and the solution would be partial anyway because of
# non-hashable closure objects, therefore let's just give a warning (which
# can be detected via logging) and avoid creating a leak.
# The globally used JSON dumps() function. It can be changed globally (by
# set_json_dumps) or by a subclass.
# The globally used JSON loads() function. It can be changed globally (by
# set_json_loads) or by a subclass.
# json.loads() cannot work on memoryview.
# Currently the json binary format is no different from text; maybe there
# is an extra memcopy we can avoid.
# Will be set by register() if the `factory` is a type
# Should be this, but it doesn't work
# oid = _oids.RECORD_OID
# Subclasses must set this info
# Note: this class is not a RecursiveDumper because it would use the
# same Transformer of the context, which would confuse dump_sequence()
# in case the composite contains another composite. Make sure to use
# a separate Transformer instance instead.
# If the final group ended in `,` there is a final NULL in the record
# that the regexp couldn't parse.
# Cache a transformer for each sequence of oid found.
# Usually there will be only one, but if there is more than one
# row in the same query (in different columns, or even in different
# records), oids might differ and we'd need separate transformers.
# generate and register a customized text loader
# generate and register a customized binary loader
# If the factory is a type, create and register dumpers for it
# Default to the text dumper because it is more flexible
# ndims, hasnull, elem oid
# dim, lower bound
# More than one type in the list. It might be still good, as long
# as they dump with the same oid (e.g. IPv4Network, IPv6Network).
# Checking for precise type. If the type is a subclass (e.g. Int4)
# we assume the user knows what type they are passing.
# If we got an int, let's see what is the biggest one in order to
# choose the smallest OID and allow Postgres to do the right cast.
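The magnitude-based choice can be sketched as follows. The OID constants are the standard built-in Postgres type OIDs; the function name is hypothetical:

```python
INT2_OID, INT4_OID, INT8_OID, NUMERIC_OID = 21, 23, 20, 1700

def oid_for_int(n: int) -> int:
    # Choose the smallest OID that can hold the value, so Postgres can
    # do the right cast; out-of-range ints fall back to numeric.
    if -(2**15) <= n < 2**15:
        return INT2_OID
    if -(2**31) <= n < 2**31:
        return INT4_OID
    if -(2**63) <= n < 2**63:
        return INT8_OID
    return NUMERIC_OID
```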
# If we have an oid we don't need to upgrade
# Empty lists can only be dumped as text if the type is unknown.
# We consider an array of unknowns as unknown, so we can dump empty
# lists or lists containing only None elements.
# Double quotes and backslashes embedded in element values will be
# backslash-escaped.
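The element quoting rule reads, as a sketch (hypothetical helper name):

```python
def quote_array_elem(s: str) -> str:
    # Backslash-escape embedded backslashes and double quotes,
    # then wrap the element in double quotes.
    return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'
```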
# Postgres won't take unknown for element oid: fall back on text
# placeholders to avoid a resize
# If we get here, the sub_dumper must have been set
# No need to make a new loader because the binary datum has all the info.
# Note: caching this function is really needed because, if the C extension
# is available, the resulting type cannot be GC'd, so calling
# register_array() in a loop results in a leak. See #647.
# The text dumper is more flexible as it can handle lists of mixed type,
# so register it later.
# Remove the dimensions information prefix (``[...]=``)
# for ndims > 1 we have to aggregate the array into sub-arrays
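Aggregating a flat element list into nested sub-arrays can be sketched as (hypothetical helper):

```python
def nest(flat: list, dims: list) -> list:
    # For ndims > 1, group the flat list into sub-arrays, innermost first:
    # nest([1, 2, 3, 4, 5, 6], [2, 3]) -> [[1, 2, 3], [4, 5, 6]]
    for dim in reversed(dims[1:]):
        flat = [flat[i:i + dim] for i in range(0, len(flat), dim)]
    return flat
```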
# Importing the uuid module is slow, so import it only on request.
# Hashable versions
# Will be set by register_enum()
# tolerate a missing enum, assuming it won't be used. If it is we
# will get a DataError on fetch.
# Binary Dumpers
# Map multirange ranges and subtypes to info
# Order is arbitrary but consistent
# Subclasses to specify a specific subtype. Usually not needed
# If we are a subclass whose oid is specified we don't need upgrade
# postgres won't cast int4range -> int8range so we must use
# text format and unknown oid here
# Work around the normal mapping where text is dumped as unknown
# can raise IndexError
# Text dumpers for builtin multirange types wrappers
# These are registered on specific subtypes so that the upgrade mechanism
# doesn't kick in.
# Binary dumpers for builtin multirange types wrappers
# Text loaders for builtin multirange types
# Binary loaders for builtin multirange types
# range is empty
# lower bound is inclusive
# upper bound is inclusive
# lower bound is -infinity
# upper bound is +infinity
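The flags above are single bits in the head byte of the binary range format; the bit values below are the ones PostgreSQL uses (rangetypes), and the decoder function is a hypothetical sketch:

```python
RANGE_EMPTY = 0x01   # range is empty
RANGE_LB_INC = 0x02  # lower bound is inclusive
RANGE_UB_INC = 0x04  # upper bound is inclusive
RANGE_LB_INF = 0x08  # lower bound is -infinity
RANGE_UB_INF = 0x10  # upper bound is +infinity

def describe_range_head(head: int) -> dict:
    # Decode the flag byte of a binary range value.
    return {
        "empty": bool(head & RANGE_EMPTY),
        "lower_inc": bool(head & RANGE_LB_INC),
        "upper_inc": bool(head & RANGE_UB_INC),
        "lower_inf": bool(head & RANGE_LB_INF),
        "upper_inf": bool(head & RANGE_UB_INF),
    }
```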
# Map ranges subtypes to info
# Make bounds consistent with infs
# It doesn't seem that Python has an ABC for ordered types.
# as the postgres docs describe for the server-side stuff,
# ordering is rather arbitrary, but will remain stable
# and consistent.
# Subclasses to specify a specific subtype. Usually not needed: only needed
# in binary copy, where switching to text is not an option.
# will replace the head later
# after the head
# Text dumpers for builtin range types wrappers
# Binary dumpers for builtin range types wrappers
# Text loaders for builtin range types
# Binary loaders for builtin range types
# Convert to int in order to dump IntEnum or numpy.integer correctly
# Ratio between number of bits required to store a number and number of pg
# decimal digits required.
# it supports bytes directly
# decimal digits per Postgres "digit"
# If numpy is available, the dumped object might be a numpy integer too
# Verify if numpy is available. If it is, we might have to dump
# its integers too.
# cover NaN and sNaN
# Weights of py digits into a pg digit according to their positions.
# Starting with an index wi != 0 is equivalent to prepending 0's to
# the digits tuple, but without really changing it.
# Find the last nonzero digit
# align the py digits to the pg digits if there's some py exponent
# Equivalent of 0-padding left to align the py digits to the pg digits
# but without changing the digits tuple.
# ndigits
# weight
# sign
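Packing base-10 digits into base-10000 Postgres "digits" with the positional weights can be sketched as (hypothetical helper; the real dumper also computes weight and sign):

```python
DEC_DIGITS = 4  # decimal digits per Postgres "digit" (base 10000)

def pack_pg_digits(digits: tuple, wi: int = 0) -> list:
    # Weights of py digits into a pg digit according to their positions.
    # Starting with wi != 0 is equivalent to prepending 0's to the
    # digits tuple, but without really changing it.
    weights = (1000, 100, 10, 1)
    out, acc = [], 0
    for d in digits:
        acc += weights[wi] * d
        wi += 1
        if wi == DEC_DIGITS:
            out.append(acc)
            acc = wi = 0
    if wi:
        out.append(acc)
    return out
```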
# The binary dumper is currently some 30% slower, so default to text
# (see tests/scripts/testdec.py for a rough benchmark)
# Also, must be after IntNumericDumper
# Used only by oid, can take both int and Decimal as input
# String must come after enum and none to map text oid -> string dumper
# Same adapters used by PostgreSQL, or a good starting point for customization
# Dump strings with text oid instead of unknown.
# Unlike PostgreSQL, CRDB seems able to cast text to most types.
# CRDB doesn't have json/jsonb: both names map to the jsonb oid
# Alias json -> jsonb.
# Alias integer -> int8
# special case, not generated
# Generated from CockroachDB 23.1.10
# By default, use CockroachDB adapters map
# Note: this is a bit more lax than actual PEP 440, to allow for a/b/rc/dev without a number
# Don't forget to update in `docs/conf.py`!
# Version tuple.
# Application.
# Shortcuts.
# Formatted text.
# Version info.
# When no control is given, use the current control if that's a BufferControl.
# Only if this control is searchable.
# Make sure to focus the search BufferControl
# Remember search link.
# If we're in Vi mode, make sure to go into insert mode.
# (Should not happen, but possible when `stop_search` is called
# when we're not searching.)
# Focus the original buffer again.
# Remove the search link.
# Reset content of search control.
# If we're in Vi mode, go back to navigation mode.
# Only search if the current control is a `BufferControl`.
# Update search_state.
# Apply search to current buffer.
# Update search state.
# Apply search.
# Add query to history of search line.
# Stop search and focus previous control again.
# Default value that should tell the output implementation to never send
# cursor shape escape sequences. This is the default right now, because
# before this `CursorShape` functionality was introduced into
# prompt_toolkit itself, people had workarounds to send cursor shapes
# escapes into the terminal, by monkey patching some of prompt_toolkit's
# internals. We don't want the default prompt_toolkit implementation to
# interfere with that. E.g., IPython patches the `ViState.input_mode`
# property. See: https://github.com/ipython/ipython/pull/13501/files
# like vi's INSERT
# Default
# No suggestion
# Consider only the last line for the suggestion.
# Only create a suggestion when this is not an empty line.
# Find first matching line in history.
# In memory storage for strings.
# History that's loaded already, in reverse order. Latest, most recent
# item first.
# Methods expected by `Buffer`.
# Implementation for specific backends.
# Lock for accessing/manipulating `_loaded_strings` and `_loaded`
# together in a consistent state.
# Events created by each `load()` call. Used to wait for new history
# entries from the loader thread.
# Start the load thread, if this is called for the first time.
# Consume the `_loaded_strings` list, using asyncio.
# Create threading Event so that we can wait for new items.
# Wait for new items to be available.
# (Use a timeout, because the executor thread is not a daemon
# thread. The "slow-history.py" example would otherwise hang if
# Control-C is pressed before the history is fully loaded,
# because there's still this non-daemon executor thread waiting
# for this event.)
# Read new items (in lock).
# Start with an empty list. In case `append_string()` was called
# before `load()` happened. Then `.store_string()` will have
# written these entries back to disk and we will reload it.
# All of the following are proxied to `self.history`.
# Emulating disk storage.
# Don't remember this.
# Join and drop trailing newline.
# Reverse the order, because newest items have to go first.
# Save to file.
# Regex for finding "words" in documents. (We consider a group of alnum
# characters a word, but also a group of special characters a word, as long as
# it doesn't contain a space.)
# (This is a 'word' in Vi.)
# Regex for finding "WORDS" in documents.
# (This is a 'WORD' in Vi.)
# Share the Document._cache between all Document instances.
# (Document instances are considered immutable. That means that if another
# `Document` is constructed with the same text, it should have the same
# `_DocumentCache`.)
# Maps document.text to DocumentCache instance.
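The sharing scheme above can be sketched as follows. This is a minimal illustration; the real `Document` class carries much more machinery:

```python
import weakref

class _DocumentCache:
    def __init__(self) -> None:
        self.lines = None         # list of lines for the text
        self.line_indexes = None  # start index of every line

# Maps text to its _DocumentCache instance. Entries disappear
# automatically once no Document holds the cache anymore.
_text_to_document_cache = weakref.WeakValueDictionary()

class Document:
    def __init__(self, text: str = "") -> None:
        self.text = text
        # Share the cache between Documents constructed with the same text.
        try:
            self._cache = _text_to_document_cache[text]
        except KeyError:
            self._cache = _DocumentCache()
            _text_to_document_cache[text] = self._cache
```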
#: List of lines for the Document text.
#: List of index positions, pointing to the start of all the lines.
# Check cursor position. It can also be right after the end. (Where we
# insert text.)
# By default, if no cursor position was given, make sure to put the
# cursor position at the end of the document. This is what makes sense
# in most places.
# Keep these attributes private. A `Document` really has to be
# considered immutable, because otherwise the caching will break
# things. Because of that, we wrap these in read-only properties.
# Cache for lines/indexes. (Shared with other Document instances that
# contain the same text.)
# XX: For some reason, above, we can't use 'WeakValueDictionary.setdefault'.
# self._cache = _text_to_document_cache.setdefault(self.text, _DocumentCache())
# assert self._cache
# Cache, because this one is reused very often.
# Cache, because this is often reused. (If it is used, it's often used
# many times. And this has to be fast for editing big documents!)
# Create list of line lengths.
# Calculate cumulative sums.
# Remove the last item. (This is not a new line.)
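The three steps above (line lengths, cumulative sums, drop the last item) can be sketched as (hypothetical helper name):

```python
from itertools import accumulate

def line_start_indexes(text: str) -> list:
    # Create a list of line lengths (+1 for the newline), take cumulative
    # sums, and drop the last item (it is not the start of a new line).
    lengths = [len(line) + 1 for line in text.split("\n")]
    return [0, *accumulate(lengths)][:-1]
```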
# (Don't use self.text_before_cursor to calculate this. Creating
# substrings and doing rsplit is too expensive for getting the cursor
# position.)
# Find start of this line.
# Keep in range. (len(self.text) is included, because the cursor can be
# right after the end of the text as well.)
# (Otherwise, we always get a match for the empty string.)
# Space before the cursor or no text before cursor.
# Reverse the text before the cursor, in order to do an efficient
# backwards search.
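The reversal trick can be sketched as follows (the helper name and the simplified word regex are assumptions; the real code distinguishes words and WORDs):

```python
import re
from typing import Optional

_WORD_RE = re.compile(r"([a-zA-Z0-9_]+|[^a-zA-Z0-9_\s]+)")

def start_of_previous_word(text_before_cursor: str) -> Optional[int]:
    # Reverse the text before the cursor so a plain forward search finds
    # the match nearest the cursor; return a negative offset from it.
    match = _WORD_RE.search(text_before_cursor[::-1])
    return -match.end() if match else None
```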
# When there is a match before and after, and we're not looking for
# WORDs, make sure that both the part before and after the cursor are
# either in the [a-zA-Z_] alphabet or not. Otherwise, drop the part
# before the cursor.
# Take first match, unless it's the word on which we're right now.
# Look forward.
# Look backward.
# Look for a match.
# XXX: shouldn't this return `None` if there is no selection???
# In case of a LINES selection, go to the start/end of the lines.
# In Vi mode, the upper boundary is always included. For Emacs,
# that's not the case.
# Take the intersection of the current line and the selection.
# Block selection doesn't cross this line.
# In Vi mode, the upper boundary is always included. For Emacs
# mode, that's not the case.
# In case of a LINES selection, don't include the trailing newline.
# Modifiers.
#: Document as it was when the completion started.
#: List of all the current Completion instances which are possible at
#: this point.
#: Position in the `completions` array.
#: This can be `None` to indicate "no completion", the original text.
# Position in the `_completions` array.
# Accept both filters and booleans as input.
# Filters. (Usually, used by the key bindings to drive the buffer.)
# Text width. (For wrapping, used by the Vi 'gq' operator.)
#: The command buffer history.
# Note that we shouldn't use a lazy 'or' here. bool(history) could be
# False when empty.
# Events
# Document cache. (Avoid creating new Document instances.)
# Create completer / auto suggestion / validation coroutines.
# Asyncio task for populating the history.
# Reset other attributes.
# `ValidationError` instance. (Will be set when the input is wrong.)
# State of the selection.
# Multiple cursor mode. (When we press 'I' or 'A' in visual-block mode,
# we can insert text on multiple lines at once. This is implemented by
# using multiple cursors.)
# When doing consecutive up/down movements, prefer to stay at this column.
# State of complete browser
# For interactive completion through Ctrl-N/Ctrl-P.
# State of Emacs yank-nth-arg completion.
# for yank-nth-arg.
# Remember the document that we had *right before* the last paste
# operation. This is used for rotating through the kill ring.
# Current suggestion.
# The history search text. (Used for filtering the history when we
# browse through it.)
# Undo/redo stacks (stack of `(text, cursor_position)`).
# Cancel history loader. If history loading was still ongoing.
# Cancel the `_load_history_task`, so that next repaint of the
# `BufferControl` we will repopulate it.
#: The working lines. Similar to history, except that this can be
#: modified. The user can press arrow_up and edit previous entries.
#: Ctrl-C should reset this, and copy the whole history back in here.
#: Enter should process the current command and append to the real
#: history.
# Ignore cancellation. But handle it, so that we don't get
# this traceback.
# Probably not needed, but we had situations where
# `GeneratorExit` was raised in `load_history` during
# cancellation.
# Log error if something goes wrong. (We don't have a
# caller to which we can propagate this exception.)
# <getters/setters>
# Return True when this text has been changed.
# For Python 2, it seems that when two strings have a different
# length and one is a prefix of the other, Python still scans
# character by character to see whether the strings are different.
# (Some benchmarking showed significant differences for big
# documents. >100,000 of lines.)
# Ensure cursor position remains within the size of the text.
# Don't allow editing of read-only buffers.
# Reset history search text.
# (Note that this doesn't need to happen when working_index
# Ensure cursor position is within the size of the text.
# Make sure to reset the cursor position, otherwise we end up in
# situations where the cursor position is out of the bounds of the
# text.
# Remove any validation errors and complete state.
# fire 'on_text_changed' event.
# Input validation.
# (This happens on all change events, unlike auto completion, also when
# deleting text.)
# Remove any complete state.
# (Input validation should only be undone when the cursor position
# changes.)
# Unset preferred_column. (Will be set after the cursor movement, if
# required.)
# Note that the cursor position can change if we have a selection: the
# new position of the cursor determines the end of the selection.
# fire 'on_cursor_position_changed' event.
# Set text and cursor position first.
# Now handle change events. (We do this when text/cursor position is
# both set and consistent.)
# End of <getters/setters>
# Save only if the text is different from the text at the top of the
# stack. If the text is the same, just update the cursor position.
# Saving anything to the undo stack, clears the redo stack.
# Split lines
# Apply transformation
# Remember the original column for the next up/down movement.
# Go to the start of the line?
# Set new Document atomically.
# Remove spaces.
# Get lines.
# Replace leading spaces with just one space.
# Set new document.
# Trigger event. This should eventually invalidate the layout.
# For every line of the whole history, find matches with the current line.
# When a new line has been found.
# Create completion.
# Set new completion
# Set text/cursor position
# (changing text/cursor position will unset complete_state.)
# If there was already a completion active, cancel that one.
# Insert text from the given completion.
# Go forward in history.
# If we found an entry, move cursor to the end of the first line.
# Go back in history.
# If we move to another entry, move cursor to the end of the line.
# Make sure we have a `YankNthArgState`.
# Get new history position.
# Take argument from line.
# Insert new argument.
# Save state again for next completion. (Note that the 'insert'
# operation from above clears `self.yank_nth_arg_state`.)
# Remember original document. This assignment should come at the end,
# because assigning to 'document' will erase it.
# Original text & cursor position.
# In insert/text mode.
# Don't overwrite the newline itself. Just before the line ending,
# it should act like insert mode.
# (Set text and cursor position at the same time. Otherwise, setting
# the text will fire a change event before the cursor position has been
# set. It works better to have this atomic.)
# Fire 'on_text_insert' event.
# XXX: rename to `start_complete`.
# Only complete when "complete_while_typing" is enabled.
# Call auto_suggest.
# Pop from the undo stack until we find a text that is different from
# the current text. (The current logic of `save_to_undo_stack` usually
# leaves the top of the undo stack equal to the current text, so in
# that case we have to pop twice.)
# Push current text to redo stack.
# Set new text/cursor_position.
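The pop-until-different logic can be sketched with a hypothetical minimal buffer (cursor position omitted for brevity):

```python
class MiniBuffer:
    # Hypothetical minimal buffer illustrating the undo logic.
    def __init__(self, text: str = "") -> None:
        self.text = text
        self._undo_stack = []
        self._redo_stack = []

    def save_to_undo_stack(self) -> None:
        if not self._undo_stack or self._undo_stack[-1] != self.text:
            self._undo_stack.append(self.text)
        self._redo_stack.clear()

    def undo(self) -> None:
        # Pop until we find a text different from the current one; the
        # top is usually the current text, so we may have to pop twice.
        while self._undo_stack:
            text = self._undo_stack.pop()
            if text != self.text:
                # Push current text to the redo stack, then restore.
                self._redo_stack.append(self.text)
                self.text = text
                return
```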
# Copy current state on undo stack.
# Pop state from redo stack.
# Don't call the validator again, if it was already called for the
# current input.
# Call validator.
# Set cursor position (don't allow invalid values.)
# Handle validation result.
# If the document changed during the validation, try again.
# Trigger redraw (display error).
# Save at the tail of the history. (But don't if the last entry in the
# history is already the same.)
# Try to find a match at the current input.
# No match, go forward in the history. (Include len+1 to wrap around.)
# (Here we should always include all cursor positions, because
# it's a different line.)
# No match, go back in the history. (Include -1 to wrap around.)
# Do 'count' search iterations.
# Nothing found.
# Keep selection, when `working_index` was not changed.
# Complex (directory) tempfile implementation.
# Revert to simple case.
# Try to make according to tempfile logic.
# Assume there is no issue creating dirs in this temp dir.
# Open the filename and write current text.
# Write current text to temporary file
# Open in editor
# (We need to use `run_in_terminal`, because not all editors go to
# the alternate screen buffer, and some could influence the cursor
# position.)
# Read content again.
# Drop trailing newline. (Editors are supposed to add it at the
# end, but we don't need it.)
# Accept the input.
# Clean up temp dir/file.
# If the 'VISUAL' or 'EDITOR' environment variable has been set, use that.
# Otherwise, fall back to the first available editor that we can find.
# Order of preference.
# Use 'shlex.split()', because $VISUAL can contain spaces
# and quotes.
# Executable does not exist, try the next one.
# Only one of these options can be selected.
# Don't complete when we already have completions.
# Create an empty CompletionState.
# Load.
# If the input text changes, abort.
# Always stop at 10k completions.
# Refresh one final time after we got everything.
# When there is only one completion, which has nothing to add, ignore it.
# Set completions if the text was not yet changed.
# When no completions were found, or when the user already selected a
# completion by using the arrow keys, don't do anything.
# When there are no completions, reset completion state anyway.
# Render the UI if the completion menu was shown; this is needed
# especially if there is one completion and it was deleted.
# Select first/last or insert common part, depending on the key
# binding. (For this we have to wait until all completions are
# loaded.)
# Insert the common part, update completions.
# (Don't call `async_completer` again, but
# recalculate completions. See:
# https://github.com/ipython/ipython/issues/9658)
# When we were asked to insert the "common"
# prefix, but there was no common suffix but
# still exactly one match, then select the
# first. (It could be that we have a completion
# which does * expansion, like '*.py', with
# exactly one match.)
# If the last operation was an insert, (not a delete), restart
# the completion coroutine.
# Nothing changed.
# Don't suggest when we already have a suggestion.
# Set suggestion only if the text was not yet changed.
# Set suggestion and redraw interface.
# Otherwise, restart thread.
# When the validation succeeded, accept the input.
# Don't start a new function, if the previous is still in progress.
# Apply transformation.
# Place cursor in the same position in text after indenting
# Place cursor in the same position in text after dedent
# Take indentation from the first line.
# `match` can't be None, actually.
# Now, take all the 'words' from the lines to be reshaped.
# And reshape.
# Apply result.
# Mouse up: This same event type is fired for all three events: left mouse
# up, right mouse up, or middle mouse up
# Mouse down: This implicitly refers to the left mouse down (this event is
# not fired upon pressing the middle or right mouse buttons).
# Triggered when the left mouse button is held down, and the mouse moves
# When we're scrolling, or just moving the mouse and not pressing a button.
# This is for when we don't know which mouse button was pressed, but we do
# know that one has been pressed during this mouse event (as opposed to
# scrolling, for example)
#: Characters. (Visual in Vi.)
#: Whole lines. (Visual-Line in Vi.)
#: A block selection. (Visual-Block in Vi.)
# Yank like emacs.
# When pressing 'p' in Vi.
# When pressing 'P' in Vi.
# Input/Output standard device numbers. Note that these are not handle objects.
# It's the `windll.kernel32.GetStdHandle` system call that turns them into a
# real handle object.
# Short
# word
# Unicode or ASCII.
# double word
# dword
# uint
# word  # Union.
# BOOL comes back as 'int'.
# Enter.
# Keep track of the current app session.
# See what output is active *right now*. We should do it at this point,
# before this `StdoutProxy` instance is possibly assigned to `sys.stdout`.
# Otherwise, if `patch_stdout` is used, and no `Output` instance has
# been created, then the default output creation code will see this
# proxy object as `sys.stdout`, and get in a recursive loop trying to
# access `StdoutProxy.isatty()` which will again retrieve the output.
# Flush thread
# Don't bother calling when we got an empty string.
# Read the rest of the queue if more data was queued up.
# If an application was running that requires repainting, then wait
# for a very short time, in order to bundle actual writes and avoid
# having to repaint too often.
# Ensure that autowrap is enabled before calling `write`.
# XXX: On Windows, the `Windows10_Output` enables/disables VT
# If an application is running, use `run_in_terminal`, otherwise
# call it directly.
# No loop, write immediately.
# Make sure `write_and_flush` is executed *in* the event loop, not
# in another thread.
# When there is a newline in the data, write everything before the
# newline, including the newline itself.
# Otherwise, cache in buffer.
# Pretend everything was written.
# Attributes for compatibility with sys.__stdout__:
# Also Control-[
# Also Control-Space.
# Tab
# Newline
# shift + tab
# Matches any key.
# Special.
# For internal use: key which is ignored.
# (The key binding for this key should not do anything.)
# Some 'Key' aliases (for backwards-compatibility).
# ShiftControl was renamed to ControlShift in
# 888fcb6fa4efea0de8333177e1bbc792f3ff3c24 (20 Feb 2020).
# Aliases.
# ShiftControl was renamed to ControlShift.
# The set of key bindings that is active.
#: Name of the search buffer.
#: Name of the default buffer.
#: Name of the system buffer.
# Used to ensure sphinx autodoc does not try to import platform-specific
# stuff when documenting win32.py modules.
# Add to list of event handlers.
# Minimum string length for considering it long.
# Maximum number of long strings to remember.
# Keep track of the "long" strings in this cache.
# Note: We use the `max(0, ...` because some non printable control
# Store in cache.
# Rotate long strings.
# (It's hard to tell what we can consider short...)
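The long-string interning scheme can be sketched as follows (class name and thresholds are hypothetical):

```python
from collections import deque

class CharCache:
    MIN_LONG = 16    # minimum string length for considering it long
    MAX_LONG = 1024  # maximum number of long strings to remember

    def __init__(self) -> None:
        self._cache = {}
        self._keys = deque()

    def get(self, s: str) -> str:
        if len(s) < self.MIN_LONG:
            return s  # short strings are cheap enough to keep as-is
        if s not in self._cache:
            # Store in cache.
            self._cache[s] = s
            self._keys.append(s)
            if len(self._keys) > self.MAX_LONG:
                # Rotate: forget the oldest long string.
                del self._cache[self._keys.popleft()]
        return self._cache[s]
```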
# Not 'darwin' or 'linux2'
# Import needs to be inline. Windows libraries are not always available.
# Remove items with zero-weight.
# Make sure that we have some items left.
# Each iteration of this loop, we fill up by (total_weight/max_weight).
# Look in cache first.
# Not found? Get it.
# Remove the oldest key when the size is exceeded.
# NOTE: This cache is used to cache `prompt_toolkit.layout.screen.Char` and
# Don't raise any exception.
# Call the validator only if the filter is active.
# XXX: drop is_done
#: Variable for capturing the output.
# Create locals for the most used output methods.
# (Save expensive attribute lookups.)
# Hide cursor before rendering. (Avoid flickering.)
# Forget last char after resetting attributes.
# Use newlines instead of CURSOR_DOWN, because this might add new lines.
# CURSOR_DOWN will never create new lines at the bottom.
# Also reset attributes, otherwise the newline could draw a
# background color.
# If the last printed character has the same style, don't output the
# style again.
# Look up `Attr` for this style string. Only set attributes if different.
# (Two style strings can still have the same formatting.)
# Note that an empty style string can have formatting that needs to
# be applied, because of style transformations.
# Render for the first time: reset styling.
# Disable autowrap. (When entering the alternate screen, or anytime when
# we have a prompt. - In the case of a REPL, like IPython, people can have
# background threads, and it's hard for debugging if their output is not
# wrapped.)
# When the previous screen has a different size, redraw everything anyway.
# Also when we are done. (We might take up fewer rows, so clearing is important.)
# XXX: also consider height??
# Get height of the screen.
# (height changes as we loop over data_buffer, so remember the current value.)
# (Also make sure to clip the height to the size of the output.)
# Loop over the rows.
# Loop over the columns.
# Column counter.
# When the old and new character at this position are different,
# draw the output. (Because of the performance, we don't call
# `Char.__ne__`, but inline the same expression.)
# Send injected escape sequences to output.
# If the new line is shorter, trim it.
# Correctly reserve vertical space as required by the layout.
# When this is a new screen (drawn for the first time), or for some reason
# higher than the previous one. Move the cursor once to the bottom of the
# output. That way, we're sure that the terminal scrolls up, even when the
# lower lines of the canvas just contain whitespace.
# The most obvious reason that we actually want this behavior is to avoid
# the artifact of the input scrolling when the completion menu is shown.
# (If the scrolling is actually wanted, the layout can still be built in a
# way to behave that way by setting a dynamic height.)
# Move cursor:
# Always reset the color attributes. This is important because a background
# thread could print data to stdout and we want that to be displayed in the
# default colors. (Also, if a background color has been set, many terminals
# give weird artifacts on resize events.)
# Time to wait until we consider CPR to be not supported.
# TODO: Move following state flags into `Vt100_Output`, similar to
# Future set when we are waiting for a CPR flag.
# Cache for the style.
# Reset position
# Remember the last screen instance between renderers. This way,
# we can create a `diff` between two screens and only output the
# difference. It's also to remember the last height. (To show for
# instance a toolbar at the bottom position.)
# Default MouseHandlers. (Just empty.)
#: Space from the top of the layout, until the bottom of the terminal.
#: We don't know this until a `report_absolute_cursor_row` call.
# In case of Windows, also make sure to scroll to the current cursor
# position. (Only when rendering the first time.)
# It does nothing for vt100 terminals.
# Quit alternate screen.
# Disable mouse support.
# Disable bracketed paste.
# NOTE: No need to set/reset cursor key mode here.
# Flush output. `disable_mouse_support` needs to write to stdout.
# Only do this request when the cursor is at the top row (after a
# clear or reset). We will rely on that in `report_absolute_cursor_row`.
# In full-screen mode, always use the total height as min-available-height.
# For Win32, we have an API call to get the number of rows below the
# cursor.
# Use CPR.
# Asks for a cursor position report (CPR).
# If we don't know whether CPR is supported, only do a request if
# none is pending, and test it, using a timer.
# Not set in the meantime -> not supported.
# Make sure to call this callback in the main thread.
# Calculate the amount of rows from the cursor position until the
# bottom of the terminal.
# Set the minimum available height.
# Pop and set waiting for CPR future.
# Received a CPR response without a pending CPR request.
# Make copy.
# When there are no CPRs in the queue, don't do anything.
# Got timeout, erase queue.
# Enter alternate screen.
# Enable bracketed paste.
# Reset cursor key mode.
# Enable/disable mouse support.
# Create screen and write layout to it.
# Hide cursor by default, unless one of the
# containers decides to display it.
# Calculate height.
# When we are done, we don't necessarily want to fill up until the bottom.
# When the size changes, don't consider the previous screen.
# When we render using another style or another color depth, do a full
# repaint. (Forget about the previous rendered screen.)
# (But note that we still use _last_screen to calculate the height.)
# When grayed, replace all styles in the new screen.
# Process diff and write to output.
# Handle cursor shapes.
# Flush buffered output.
# Set visible windows in layout.
# Erase current output first.
# Send "Erase Screen" command and go to (0, 0).
# Reset first.
# Print all (style_str, text) tuples.
# Set style attributes if something changed.
# Print escape sequences as raw output
# Eliminate carriage returns
# Insert a carriage return before every newline (important when the
# front-end is a telnet client).
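A minimal sketch of those two transformations together (the function name is illustrative):

```python
def prepare_output(data: str) -> str:
    """Normalize line endings for terminal output.

    Eliminate carriage returns first, then insert a carriage return
    before every newline (important when the front-end is a telnet
    client, which expects CRLF line endings).
    """
    return data.replace("\r", "").replace("\n", "\r\n")
```

Stripping `\r` first makes the transformation idempotent: already-normalized `\r\n` input comes out unchanged.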
# Reset again.
# key_bindings.
# key_processor
# digraphs for Unicode from RFC1345
# (also work for ISO-8859-1 aka latin1)
# euro
# rouble
# code points 0xe000 - 0xefff excluded, they have no assigned
# characters, only used in proposals.
# Vim 5.x compatible digraphs that don't conflict with the above
# currency symbol in ISO 8859-1
# multiplication symbol in ISO 8859-1
# Avoid circular imports.
# The only two return values for a mouse handler (and key bindings) are
# `None` and `NotImplemented`. For the type checker it's best to annotate
# this as `object`. (The consumer never expects a more specific instance:
# checking for NotImplemented can be done using `is NotImplemented`.)
# Other non-working options are:
# * Optional[Literal[NotImplemented]]
# * None
# * Any
# Key bindings can be regular functions or coroutines.
# In both cases, if they return `NotImplemented`, the UI won't be invalidated.
# This is mainly used in case of mouse move events, to prevent excessive
# repainting during mouse move events.
# If the handler is a coroutine, create an asyncio task.
# Sequence of keys presses.
# `add` and `remove` don't have to be part of this interface.
# For cache invalidation.
# When a filter is Never, it will always stay disabled, so in that
# case don't bother putting it in the key bindings. It will slow
# down every key press otherwise.
# We're adding an existing Binding object.
# Remove the given function.
# Remove this sequence of key bindings.
# No key binding found for this function. Raise ValueError.
# Place bindings that have more 'Any' occurrences in them at the end.
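That ordering can be expressed as a stable sort keyed on the number of wildcard keys (`ANY` here is a stand-in for `Keys.Any`):

```python
ANY = "<any>"  # Stand-in for Keys.Any.

def order_key_bindings(bindings):
    """Sort key sequences so that those with more ANY wildcards come last.

    Python's sort is stable, so bindings with the same wildcard count
    keep their registration order. Placing wildcard-heavy bindings last
    means more specific bindings get a chance to match first.
    """
    return sorted(bindings, key=lambda keys: sum(1 for k in keys if k == ANY))
```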
# Already a parse key? -> Return it.
# Lookup aliases.
# Replace 'space' by ' '
# Return as `Key` object when it's a special key.
# Final validation.
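Those steps could be sketched as follows; the alias table and special-key set here are illustrative stand-ins for the real (much larger) tables:

```python
# Illustrative subset; the real alias table and key set are larger.
KEY_ALIASES = {"s-tab": "backtab", "c-space": "c-@"}
SPECIAL_KEYS = {"enter", "escape", "backtab", "c-@", "up", "down", "tab"}

def parse_key(key: str) -> str:
    if key in SPECIAL_KEYS:
        return key                      # Already a parsed key -> return it.
    key = KEY_ALIASES.get(key, key)     # Lookup aliases.
    if key == "space":
        key = " "                       # Replace 'space' by ' '.
    if key in SPECIAL_KEYS:
        return key                      # Special key name.
    if len(key) != 1:                   # Final validation.
        raise ValueError(f"Invalid key: {key!r}")
    return key
```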
# `KeyBindings` to be synchronized with all the others.
# Proxy methods to self._bindings2.
# Copy all bindings from `self.key_bindings`, adding our condition.
# Empty key bindings.
# 'key' is a one character string.
# The queue of keys not yet sent to our _process generator/state machine.
# The key buffer that is matched in the generator state machine.
# (This is at most the number of keys that make up one key binding.)
#: Readline argument (for repetition of commands.)
#: https://www.gnu.org/software/bash/manual/html_node/Readline-Arguments.html
# Start the processor coroutine.
# Try match, with mode flag
# Get the filters for all the key bindings that have a longer match.
# Note that we transform it into a `set`, because we don't care about
# the actual bindings and executing it more than once doesn't make
# sense. (Many key bindings share the same filter.)
# When any key binding is active, return True.
# If we have some key presses, check for matches.
# When eager matches were found, give priority to them and also
# ignore all the longer matches.
# Exact matches found, call handler.
# Keep reference.
# No match found.
# Loop over the input, try longest match first and shift.
# When the application result is set, stop processing keys.  (E.g.
# if ENTER was received, followed by a few additional key strokes,
# leave the other keys in the queue.)
# But if there are still CPRResponse keys in the queue, these
# need to be processed.
# Only process CPR responses. Everything else is typeahead.
# Process next key.
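The strategy these comments describe — longest match first, shifting one key on failure — reduced to a plain function over a dict of bindings (a simplification; the real processor is a generator state machine that can wait for further input):

```python
def process_keys(buffer, bindings):
    """Consume key presses from `buffer`, trying the longest match first.

    `bindings` maps tuples of key names to handler callables. When no
    prefix of the buffer matches, the first key is discarded (shift)
    and matching is retried. Returns the list of handler results.
    """
    results = []
    keys = list(buffer)
    while keys:
        for length in range(len(keys), 0, -1):   # Longest match first.
            handler = bindings.get(tuple(keys[:length]))
            if handler is not None:
                results.append(handler())
                del keys[:length]
                break
        else:
            del keys[:1]                          # No match found: shift.
    return results
```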
# If for some reason something goes wrong in the parser, (maybe
# an exception was raised) restart the processor for next time.
# Skip timeout if the last key was flush.
# Filter out CPRs. We don't want to return these.
# Save the state of the current buffer.
# Call handler.
# When a key binding attempts to change a buffer which is
# read-only, we can ignore that. We sound a bell and go on.
# Record the key sequence in our macro. (Only if we're in macro mode
# before and after executing the key.)
# Should always be true, given that
# `was_recording_emacs` is set.
# Set the preferred_column for arrow up/down again.
# (This was cleared after changing the cursor position.)
# Not waiting for a text object and no argument has been given.
# This sleep can be cancelled. In that case we don't flush.
# (No keys pressed in the meantime.)
# Automatically flush keys.
#: True when the previous key sequence was handled by the same handler.
# Don't exceed a million.
# Load basic bindings.
# Load emacs bindings.
# Load Vi bindings.
# Make sure that the above key bindings are only active if the
# currently focused control is a `BufferControl`. For other controls, we
# don't want these key bindings to intervene. (This would break "ptterm"
# for instance, which handles 'Keys.Any' in the user control itself.)
# Active, even when no buffer has been focused.
# Simple macro recording. (Like Readline does.)
# (For Emacs mode.)
# Normal mode.
#: None or CharacterFind instance. (This is used to repeat the last
#: search in Vi mode, by pressing the 'n' or 'N' in navigation mode.)
# When an operator is given and we are waiting for text object,
# -- e.g. in the case of 'dw', after the 'd' --, an operator callback
# is set here.
#: Named registers. Maps register name (e.g. 'a') to
#: :class:`ClipboardData` instances.
#: The Vi mode we're currently in.
#: Waiting for digraph.
# (None or a symbol.)
#: When true, make ~ act as an operator.
#: Register in which we are recording a macro.
#: `None` when not recording anything.
# Note that the recording is only stored in the register after the
# recording is stopped. So we record in a separate `current_recording`
# variable.
# Temporary navigation (normal) mode.
# This happens when control-o has been pressed in insert or replace
# mode. The user can now do one navigation action and we'll return back
# to insert/replace.
# Go back to insert mode.
# Reset recording state.
# Only enable when a `Buffer` is focused, otherwise, we would catch keys
# when another widget is focused (like for instance `c-d` in a
# ptterm.Terminal).
# Typing.
# Registry that maps the Readline command names to their handlers.
# Commands for moving
# See: http://www.delorie.com/gnu/docs/readline/rlman_14.html
# Commands for manipulating the history.
# See: http://www.delorie.com/gnu/docs/readline/rlman_15.html
# Commands for changing text
# When a negative argument has been given, this should delete in front
# of the cursor.
# XXX: not DRY: see meta_c and meta_u!!
# Killing and yanking.
# Nothing found? Delete until the start of the document. (The
# input starts with whitespace and no words were found before the
# cursor.)
# If the previous key press was also Control-W, concatenate the deleted text.
# Nothing to delete. Bell.
# Completion.
# Keyboard macros.
# Insert the macro.
# TODO: Make the format suitable for the inputrc file.
# Miscellaneous Commands.
# Transform all lines.
# Accept input.
# ('first' should be true, because we want to insert it at the current
# position in the queue.)
# Accept the current input. (This will also redraw the interface in the
# 'done' state.)
# Set the new index at the start of the next run.
# When already navigating through completions, select the next one.
# Request completions.
# Calculate the common suffix.
# One completion: insert it.
# Multiple completions with common part.
# Otherwise: display all completions.
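The common part of the candidates can be computed with `os.path.commonprefix` over the candidate insertion texts (a sketch of the idea, not the real helper):

```python
import os.path

def common_completion_part(completions):
    """Return the prefix shared by all completion strings.

    With exactly one completion this is the whole completion (insert it);
    with multiple completions sharing a prefix, insert just that part;
    an empty result means nothing common -> display all completions.
    """
    return os.path.commonprefix(list(completions))
```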
# Get terminal dimensions.
# Calculate amount of required columns/rows for displaying the
# completions. (Keep in mind that completions are displayed
# alphabetically column-wise.)
# Note: math.ceil can return float on Python2.
# Display completions.
# Add padding.
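That arithmetic might look like the following (names illustrative; one column of padding assumed):

```python
import math

def completion_layout(completions, term_width):
    """Compute (columns, rows) for displaying completions column-wise.

    Completions are listed alphabetically down each column, so fix the
    column width first (longest completion plus padding), derive the
    column count from the terminal width, then the number of rows.
    """
    column_width = max(len(c) for c in completions) + 1   # Add padding.
    columns = max(1, term_width // column_width)
    # math.ceil could return a float on Python 2; int() keeps this safe.
    rows = int(math.ceil(len(completions) / float(columns)))
    return columns, rows
```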
# User interaction through an application generator function.
# Ask confirmation if it doesn't fit on the screen.
# Display pages.
# Display --MORE-- and go to the next page.
# Display all completions.
# Tab.
# pylint: disable=function-redefined
# If the motion is exclusive and the end of motion is on the first
# column, the end position becomes end of previous line.
# Select whole lines
# Get absolute cursor positions from the text object.
# Take the start of the lines.
# For Vi mode, the SelectionState does include the upper position,
# while `self.operator_range` does not. So, go one to the left, unless
# we're in the line mode, then we don't want to risk going to the
# previous line, and missing one line in the selection.
# Typevar for any text object function:
# Arguments are multiplied.
# Call the text object handler.
# Get the operator function.
# (Should never be None here, given the
# `vi_waiting_for_text_object_mode` filter state.)
# Call the operator function with the text object.
# Clear operator.
# Register a move operation. (Doesn't need an operator.)
# Register a move selection operation.
# Should not happen, because of the `vi_selection_mode` filter.
# When the text object has both a start and end position, like 'i(' or 'iw',
# turn this into a selection; otherwise, just move the cursor.
# Take selection positions from text object.
# Take selection type from text object.
# Make it possible to chain @text_object decorators.
# Typevar for any operator function:
# When this key binding is matched, only set the operator
# function in the ViState. We should execute it after a text
# object has been received.
# Create text object from selection.
# Execute operator.
# Quit selection mode.
# Note: Some key bindings have the "~IsReadOnly()" filter added. This
# prevents them from modifying read-only buffers.
# (Note: Always take the navigation bindings in read-only mode, even when
# Rot 13 transformation
# To lowercase
# To uppercase.
# Swap case.
# Insert a character literally (quoted insert).
# In navigation mode, pressing enter will always return the input.
# In insert mode, also accept input when enter is pressed, and the buffer
# has been marked as single line.
# ** In navigation mode **
# List of navigation commands: http://hea-www.harvard.edu/~fine/Tech/vi.html
# ~IsReadOnly, because we want to stay in navigation mode for
# read-only buffers.
# TODO: implement 'arg'
# We copy the whole line.
# But we delete after the whitespace
# Split string in before/deleted/after text.
# Set new text.
# Set text and cursor position.
# Cursor at the start of the first 'after' line, after the leading whitespace.
# Set clipboard data
# Store all cursor positions.
# Go to 'INSERT_MULTIPLE' mode.
# TODO: go to begin of sentence.
# XXX: should become text_object.
# TODO: go to end of sentence.
# *** Operators ***
# Set deleted/changed text to clipboard or named register.
# Only go back to insert mode in case of 'change'.
# Transform.
# Move cursor
# *** Text objects ***
# TODO: 'dat', 'dit' (tags, like XML).
# Quotes
# Brackets
# 'dab', 'dib'
# 'daB', 'diB'
# Note: We also need `no_selection_handler`, because we in
# When we find a Window that has BufferControl showing this window,
# move to the start of the visible area.
# Otherwise, move to the start of the input.
# move to the center of the visible area.
# move to the end of the visible area.
# Otherwise, move to the end of the input.
# We can safely set the scroll offset to zero; the Window will make
# sure that it scrolls at least enough to make the cursor visible
# again.
# Calculate the offset that we need in order to position the row
# containing the cursor in the center.
# If 'arg' has been given, the meaning of % is to go to the 'x%'
# row in the file.
# Do nothing.
# Move to the corresponding opening/closing bracket (()'s, []'s and {}'s).
# Move to the given line.
# Move to the top of the input.
# *** Other ***
# Construct new text.
# Shift all cursor positions.
# Set result.
# Don't delete across lines.
# Lookup.
# Try reversing.
# Unknown digraph.
# Insert digraph.
# Store and stop recording.
# Retrieve macro.
# Expand macro (which is a string in the register), in individual keys.
# Use vt100 parser for this.
# Now feed keys back to the input processor.
# Vi-style forward search.
# Vi-style backward search.
# Apply the search. (At the / or ? prompt.)
# Handle escape. This should accept the search, just like readline.
# `abort_search` would be a meaningful alternative.
# The incoming data looks like u'\x1b[35;1R'
# Parse row/col information.
# Report absolute cursor position to the renderer.
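A sketch of parsing that response with a regular expression:

```python
import re

# Matches a cursor position report like '\x1b[35;1R'.
_CPR_RE = re.compile(r"\x1b\[(\d+);(\d+)R")

def parse_cpr_response(data: str):
    """Parse a CPR response into a (row, col) pair of integers."""
    m = _CPR_RE.match(data)
    if m is None:
        raise ValueError(f"Not a CPR response: {data!r}")
    return int(m.group(1)), int(m.group(2))
```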
# Height to scroll.
# Calculate how many lines is equivalent to that vertical space.
# When the cursor is at the top, move to the next line. (Otherwise, only scroll.)
# When the cursor is at the bottom, move to the previous line. (Otherwise, only scroll.)
# Move cursor up, as many steps as the height of the first line.
# TODO: not entirely correct yet, in case of line wrapping and many long lines.
# Scroll window
# Scroll down one page.
# Put cursor at the first visible line. (But make sure that the cursor
# moves at least one line up.)
# Set the scroll offset. We can safely set it to zero; the Window will
# make sure that it scrolls at least until the cursor becomes visible.
# Overview of Readline emacs commands:
# http://www.catonmat.net/download/readline-emacs-editing-mode-cheat-sheet.pdf
# ControlQ does a quoted insert. Note that for vt100 terminals, you have to
# disable flow control by running ``stty -ixon``, otherwise Ctrl-Q and
# Ctrl-S are captured by the terminal.
# Meta + Enter: always accept input.
# Enter: accept input in single line mode.
# Also named 'character-search'
# Also named 'character-search-backward'
# List all completions.
# Insert them.
# Control-space or Control-@
# Take the current cursor position as the start of this selection.
# NOTE: We don't bind 'Escape' to 'abort_search'. The reason is that we
# Handling of escape.
# Like Readline, it's more natural to accept the search when escape has
# been pressed; alternatively, the following two bindings could be used
# #handle('escape', 'escape', eager=True)(search.abort_search)
# #handle('escape', 'enter', eager=True)(search.accept_search_and_accept_input)
# If Read-only: also include the following key bindings:
# '/' and '?' key bindings for searching, just like Vi mode.
# the other keys are handled through their readline command
# Both the dict lookup and `get_by_name` can raise KeyError.
# (`else` is not really needed here.)
# (It should always be a binding here)
# (`selection_state` should never be `None`, it is created by
# `start_selection`.)
# Then move the cursor
# Cursor didn't actually move - so cancel selection
# to avoid having an empty selection
# Just move the cursor, like shift was not pressed
# selection is now empty, so cancel selection
# moving the cursor in shift selection mode cancels the selection
# we then process the cursor movement
# left_up                       0+ + +  =0
# left_up     Shift             0+4+ +  =4
# left_up           Alt         0+ +8+  =8
# left_up     Shift Alt         0+4+8+  =12
# left_up               Control 0+ + +16=16
# left_up     Shift     Control 0+4+ +16=20
# left_up           Alt Control 0+ +8+16=24
# left_up     Shift Alt Control 0+4+8+16=28
# middle_up                     1+ + +  =1
# middle_up   Shift             1+4+ +  =5
# middle_up         Alt         1+ +8+  =9
# middle_up   Shift Alt         1+4+8+  =13
# middle_up             Control 1+ + +16=17
# middle_up   Shift     Control 1+4+ +16=21
# middle_up         Alt Control 1+ +8+16=25
# middle_up   Shift Alt Control 1+4+8+16=29
# right_up                      2+ + +  =2
# right_up    Shift             2+4+ +  =6
# right_up          Alt         2+ +8+  =10
# right_up    Shift Alt         2+4+8+  =14
# right_up              Control 2+ + +16=18
# right_up    Shift     Control 2+4+ +16=22
# right_up          Alt Control 2+ +8+16=26
# right_up    Shift Alt Control 2+4+8+16=30
# left_down                     0+ + +  =0
# left_down   Shift             0+4+ +  =4
# left_down         Alt         0+ +8+  =8
# left_down   Shift Alt         0+4+8+  =12
# left_down             Control 0+ + +16=16
# left_down   Shift     Control 0+4+ +16=20
# left_down         Alt Control 0+ +8+16=24
# left_down   Shift Alt Control 0+4+8+16=28
# middle_down                   1+ + +  =1
# middle_down Shift             1+4+ +  =5
# middle_down       Alt         1+ +8+  =9
# middle_down Shift Alt         1+4+8+  =13
# middle_down           Control 1+ + +16=17
# middle_down Shift     Control 1+4+ +16=21
# middle_down       Alt Control 1+ +8+16=25
# middle_down Shift Alt Control 1+4+8+16=29
# right_down                    2+ + +  =2
# right_down  Shift             2+4+ +  =6
# right_down        Alt         2+ +8+  =10
# right_down  Shift Alt         2+4+8+  =14
# right_down            Control 2+ + +16=18
# right_down  Shift     Control 2+4+ +16=22
# right_down        Alt Control 2+ +8+16=26
# right_down  Shift Alt Control 2+4+8+16=30
# left_drag                     32+ + +  =32
# left_drag   Shift             32+4+ +  =36
# left_drag         Alt         32+ +8+  =40
# left_drag   Shift Alt         32+4+8+  =44
# left_drag             Control 32+ + +16=48
# left_drag   Shift     Control 32+4+ +16=52
# left_drag         Alt Control 32+ +8+16=56
# left_drag   Shift Alt Control 32+4+8+16=60
# middle_drag                   33+ + +  =33
# middle_drag Shift             33+4+ +  =37
# middle_drag       Alt         33+ +8+  =41
# middle_drag Shift Alt         33+4+8+  =45
# middle_drag           Control 33+ + +16=49
# middle_drag Shift     Control 33+4+ +16=53
# middle_drag       Alt Control 33+ +8+16=57
# middle_drag Shift Alt Control 33+4+8+16=61
# right_drag                    34+ + +  =34
# right_drag  Shift             34+4+ +  =38
# right_drag        Alt         34+ +8+  =42
# right_drag  Shift Alt         34+4+8+  =46
# right_drag            Control 34+ + +16=50
# right_drag  Shift     Control 34+4+ +16=54
# right_drag        Alt Control 34+ +8+16=58
# right_drag  Shift Alt Control 34+4+8+16=62
# none_drag                     35+ + +  =35
# none_drag   Shift             35+4+ +  =39
# none_drag         Alt         35+ +8+  =43
# none_drag   Shift Alt         35+4+8+  =47
# none_drag             Control 35+ + +16=51
# none_drag   Shift     Control 35+4+ +16=55
# none_drag         Alt Control 35+ +8+16=59
# none_drag   Shift Alt Control 35+4+8+16=63
# scroll_up                     64+ + +  =64
# scroll_up   Shift             64+4+ +  =68
# scroll_up         Alt         64+ +8+  =72
# scroll_up   Shift Alt         64+4+8+  =76
# scroll_up             Control 64+ + +16=80
# scroll_up   Shift     Control 64+4+ +16=84
# scroll_up         Alt Control 64+ +8+16=88
# scroll_up   Shift Alt Control 64+4+8+16=92
# scroll_down                   65+ + +  =65
# scroll_down Shift             65+4+ +  =69
# scroll_down       Alt         65+ +8+  =73
# scroll_down Shift Alt         65+4+8+  =77
# scroll_down           Control 65+ + +16=81
# scroll_down Shift     Control 65+4+ +16=85
# scroll_down       Alt Control 65+ +8+16=89
# scroll_down Shift Alt Control 65+4+8+16=93
# fmt:on
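The table above encodes each event as a base code plus additive modifier bits, so decoding is plain bit masking:

```python
def decode_mouse_code(code: int) -> dict:
    """Split an xterm mouse event code into its components.

    Bit layout, per the table above: bits 0-1 select the button,
    bit 2 is Shift (+4), bit 3 Alt (+8), bit 4 Control (+16),
    bit 5 marks a drag event (+32), bit 6 a scroll event (+64).
    """
    return {
        "button": code & 0b11,
        "shift": bool(code & 4),
        "alt": bool(code & 8),
        "control": bool(code & 16),
        "drag": bool(code & 32),
        "scroll": bool(code & 64),
    }
```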
# Typical:   "Esc[MaB*"
# Urxvt:     "Esc[96;14;13M"
# Xterm SGR: "Esc[<64;85;12M"
# Parse incoming packet.
# Typical.
# TODO: Is it possible to add modifiers here?
# Handle situations where `PosixStdinReader` used surrogateescapes.
# Urxvt and Xterm SGR.
# When the '<' is not present, we are not using the Xterm SGR mode,
# but Urxvt instead.
# Extract coordinates.
# Parse event type.
# Some other terminals, like urxvt, Hyper terminal, ...
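A sketch of telling the last two formats apart and extracting the fields; the Urxvt offset of 32 on the event code is an assumption based on the packet example above (96 - 32 = 64, a scroll_up code in the table):

```python
import re

_SGR_RE = re.compile(r"\x1b\[<(\d+);(\d+);(\d+)([mM])")
_URXVT_RE = re.compile(r"\x1b\[(\d+);(\d+);(\d+)M")

def parse_mouse_packet(data: str) -> dict:
    """Parse an Xterm SGR or Urxvt mouse packet into its fields.

    When the '<' is present we are in Xterm SGR mode (a final 'm' marks
    a button release); otherwise Urxvt mode, where the event code is
    assumed to carry the usual +32 offset.
    """
    m = _SGR_RE.match(data)
    if m is not None:
        return {"mode": "sgr", "code": int(m.group(1)),
                "x": int(m.group(2)), "y": int(m.group(3)),
                "release": m.group(4) == "m"}
    m = _URXVT_RE.match(data)
    if m is not None:
        return {"mode": "urxvt", "code": int(m.group(1)) - 32,
                "x": int(m.group(2)), "y": int(m.group(3))}
    raise ValueError(f"Unknown mouse packet: {data!r}")
```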
# Only handle mouse events when we know the window height.
# Take region above the layout into account. The reported
# coordinates are absolute to the visible part of the terminal.
# Call the mouse handler from the renderer.
# Note: This can return `NotImplemented` if no mouse handler was
# We don't receive a cursor position, so we don't know which window to
# scroll. Just send an 'up' key press instead.
# This key binding should only exist for Windows.
# Parse data.
# Make coordinates absolute to the visible part of the terminal.
# Call the mouse event handler.
# (Can return `NotImplemented`.)
# No mouse handler found. Return `NotImplemented` so that we don't
# invalidate the UI.
# Also c-space.
# Readline-style bindings.
# Control-W should delete, using whitespace as separator, while M-Del
# should delete using [^a-zA-Z0-9] as a boundary.
# CTRL keys.
# Delete the word before the cursor.
# Global bindings.
# Be sure to use \n as line ending.
# Some terminals (Like iTerm2) seem to paste \r\n line endings in a
# bracketed paste. See: https://github.com/ipython/ipython/issues/9737
# Async generator
# Utils.
# Inputhooks.
# Do not import win32-specific stuff when generating documentation.
# Otherwise RTD would be unable to generate docs for this module.
# Manual reset event.
# Initial state.
# Unnamed event object.
# If no `max_postpone_time` has been given, schedule right now.
# When there are no other tasks scheduled in the event loop, run it right now.
# Notice: uvloop doesn't have this _ready attribute. In that case,
# If the timeout expired, run this now.
# Schedule again for later.
# call_exception_handler() is usually called indirectly
# from an except block. If it's not the case, the traceback
# is undefined...
# Deprecated!
# If there are tasks in the current event loop,
# don't run the input hook.
# Run selector in other thread.
# Call inputhook.
# The inputhook function is supposed to return when our selector
# becomes ready. The inputhook can do that by registering the fd in its
# own loop, or by checking the `input_is_ready` function regularly.
# Flush the read end of the pipe.
# Before calling 'os.read', call select.select. This is required
# when the gevent monkey patch has been applied. 'os.read' is never
# monkey patched and won't be cooperative, so that would block all
# other select() calls otherwise.
# See: http://www.gevent.org/gevent.os.html
# Note: On Windows, this is apparently not an issue.
# This happens when the window resizes and a SIGWINCH was received.
# We get 'Error: [Errno 4] Interrupted system call'
# Just ignore.
# Wait for the real selector to be done.
# By default, choose a buffer size that's a good balance between having enough
# throughput, but not consuming too much memory. We use this to consume a sync
# generator of completions as an async generator. If the queue size is very
# small (like 1), consuming the completions goes really slow (when there are a
# lot of items). If the queue size would be unlimited or too big, this can
# cause overconsumption of memory, and cause CPU time spent producing items
# that are no longer needed (if the consumption of the async generator stops at
# some point). We need a fixed size in order to get some back pressure from the
# async consumer to the sync producer. We choose 1000 by default here. If we
# have around 50k completions, measurements show that 1000 is still
# significantly faster than a buffer of 100.
# NOTE: We are limiting the queue size in order to have back-pressure.
# When this async generator was cancelled (closed), stop this
# thread.
# Start background thread.
# When this async generator is closed (GeneratorExit exception), stop
# the background thread as well; we don't need it anymore.
# Wait for the background thread to finish. (should happen right after
# the last item is yielded).
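The mechanism these comments describe — a bounded queue bridging a sync producer thread and an async consumer — can be sketched like this (a simplified version without cancellation handling):

```python
import asyncio
import queue
import threading

_DONE = object()  # Sentinel pushed when the sync iterable is exhausted.

async def generator_to_async_generator(get_iterable, buffer_size: int = 1000):
    """Consume a sync iterable in a background thread, yield items async.

    The bounded queue provides back-pressure: when the async consumer
    falls behind (or stops), the producer thread blocks on put() instead
    of racing ahead and wasting memory and CPU on unneeded items.
    """
    q: queue.Queue = queue.Queue(maxsize=buffer_size)
    loop = asyncio.get_running_loop()

    def producer() -> None:
        for item in get_iterable():
            q.put(item)          # Blocks when the queue is full.
        q.put(_DONE)

    threading.Thread(target=producer, daemon=True).start()

    while True:
        # q.get blocks, so run it in an executor to avoid blocking the loop.
        item = await loop.run_in_executor(None, q.get)
        if item is _DONE:
            break
        yield item
```

A buffer of around 1000 is the balance point the comments describe: much smaller is slow for large result sets, unbounded risks overconsumption.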
# PipeInput object, for sending input in the CLI.
# (This is something that we can use in the prompt_toolkit event loop,
# but still write data into manually.)
# Output object. Don't render to the real stdout, but write everything
# in the SSH channel.
# Channel not open for sending.
# Should not happen.
# Disable the line editing provided by asyncssh. Prompt_toolkit
# provides the line editing.
# Close the connection.
# Send resize event to the current application.
# No authentication.
# Iac Do Linemode
# Suppress Go Ahead. (This seems important for Putty to do correct echoing.)
# This will allow bi-directional operation.
# Iac sb
# IAC Will Echo
# Negotiate window size
# Negotiate terminal type
# Assume the client will accept the negotiation with `IAC + WILL + TTYPE`.
# We can then select the first terminal type supported by the client,
# which is generally the best type the client supports.
# The client should reply with an `IAC + SB + TTYPE + IS + ttype + IAC + SE`.
# Create "Output" object.
# Initialize.
# Create output.
# Connection closed by client.
# Add reader.
# Wait for vt100_output to be properly instantiated
# Make sure that when an application was active for this connection,
# that we print the text above the application.
# Create and bind socket
# Run forever, until cancelled.
# Wait for all applications to finish.
# (This is similar to
# `Application.cancel_and_wait_for_background_tasks`. We wait for the
# background tasks to complete, but don't propagate exceptions, because
# we can't use `ExceptionGroup` yet.)
# Already running.
# Run application for this connection.
# Happens either when the connection is closed by the client
# (e.g., when the user types 'control-]', then 'quit' in the
# telnet client) or when the user types control-d in a prompt
# and this is not handled by the interact function.
# Unhandled control-c propagated by a prompt.
# Telnet constants.
# NOTE: the first parameter of struct.unpack should be
# a 'str' object, on both Py2 and Py3. This crashes on OSX
# otherwise.
# NOP
# Go to state escaped.
# Handle simple commands.
# Handle IAC-[DO/DONT/WILL/WONT] commands.
# Subnegotiation
# Consume everything until next IAC-SE
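A simplified, non-incremental version of that parser (the real code handles IAC-IAC escaping and feeds bytes one at a time through a generator state machine):

```python
# Telnet constants (standard values from RFC 854 / RFC 855).
IAC, SB, SE = 255, 250, 240
DO, DONT, WILL, WONT = 253, 254, 251, 252

def parse_telnet(data: bytes):
    """Split a telnet byte stream into (kind, payload) events.

    Plain bytes become 'data' events; IAC-[DO/DONT/WILL/WONT] become
    'negotiation' events; IAC-SB consumes everything until the next
    IAC-SE as one 'subnegotiation' event.
    """
    events = []
    i = 0
    while i < len(data):
        b = data[i]
        if b != IAC:
            events.append(("data", bytes([b])))
            i += 1
            continue
        cmd = data[i + 1]
        if cmd in (DO, DONT, WILL, WONT):
            events.append(("negotiation", (cmd, data[i + 2])))
            i += 3
        elif cmd == SB:
            # Consume everything until the next IAC-SE.
            end = data.index(bytes([IAC, SE]), i + 2)
            events.append(("subnegotiation", data[i + 2 : end]))
            i = end + 2
        else:
            events.append(("command", cmd))  # Simple commands, e.g. NOP.
            i += 2
    return events
```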
# Name of the named group in the regex, matching trailing input.
# (Trailing input is when the input contains characters after the end of the
# expression has been matched.)
#: Dictionary that will map the regex names to Node instances.
# Maps regex group names to varnames.
# Compile regex strings.
# Compile the regex itself.
# Note that we don't need re.MULTILINE! (^ and $
# still represent the start and end of input text.)
# We compile one more set of regexes, similar to `_re_prefix`, but accept any trailing
# input. This will ensure that we can still highlight the input correctly, even when the
# input contains some additional characters at the end that don't match the grammar.
# Turn `AnyNode` into an OR.
# Concatenate a `NodeSequence`
# For Regex and Lookahead nodes, just insert them literally.
# A `Variable` wraps the children into a named group.
# `Repeat`.
# Generate separate pattern for all terms that contain variables
# within this OR. Terms that don't contain a variable can be merged
# together in one pattern.
# If we have a definition like:
# Then we want to be able to generate completions for both the
# name as well as the city. We do this by yielding two
# different regular expressions, because the engine won't
# follow multiple paths, if multiple are possible.
# Merge options without variable together.
# For a sequence, generate a pattern for each prefix that ends with
# a variable + one pattern of the complete sequence.
# (This is because, for autocompletion, we match the text before
# the cursor, and completions are given for the variable that we
# match right before the cursor.)
# For all components in the sequence, compute prefix patterns,
# as well as full patterns.
# If any child contains a variable, we should yield a
# pattern up to that point, so that we are sure this will be
# matched.
# If there are non-variable nodes, merge all the prefixes into
# one pattern. If the input is: "[part1] [part2] [part3]", then
# this gets compiled into:
# For nodes that contain a variable, we skip the "|partial"
# part here, because these are matched with the previous
# patterns.
# Start with complete patterns.
# Add prefix patterns.
# No need to yield a prefix for this one, we did
# the variable prefixes earlier.
# If this yields multiple, we should yield all combinations.
# Not sure what the correct semantics are in this case.
# (Probably it's not worth implementing this.)
# (Note that we should not append a '?' here; the 'transform'
# method will already recursively do that.)
# If we have a repetition of 8 times, the current input could have,
# for instance, 7 complete matches followed by a partial match.
# First try to match using `_re_prefix`. If nothing is found, use the patterns that
# also accept trailing characters.
# Find all regex group for the name _INVALID_TRAILING_INPUT.
# Take the smallest part. (Smaller trailing text means that a larger input has
# been matched, so that is better.)
# If this part goes until the end of the input string.
#: List of (varname, value, slice) tuples.
# If we have a `Lexer` instance for this part of the input.
# Tokenize recursively and apply tokens.
# Highlight trailing input.
# Unwrap text.
# Create a document, for the completions API (text/cursor_position)
# Call completer
# Wrap again.
# Parse input document.
# We use `match`, not `match_prefix`, because for validation, we want
# the actual, unambiguous interpretation of the input.
# Unescape text.
# Validate
# Regular expression for tokenizing other regular expressions.
# We add a closing brace because that represents the final pop of the stack.
# TODO: implement!
# Compile grammar.
# XXX: not entirely correct.
# Create GrammarCompleter
# Base.
# Defaults.
# Style.
# Style transformation.
# Pygments.
# Named colors.
#: Style attributes.
#: The default `Attrs`.
#: ``Attrs.bgcolor/fgcolor`` can be in either 'ffffff' format, or can be any of
#: the following in case we want to take colors from the 8/16 color palette.
#: Usually, in that case, the terminal application allows configuring the RGB
#: values for these names.
#: ISO 6429 colors
# Low intensity, dark.  (One or two components 0x80, the other 0x00.)
# High intensity, bright. (One or two components 0xff, the other 0x00. Not supported everywhere.)
# People don't use the same ANSI color names everywhere. In prompt_toolkit 1.0
# we used some unconventional names (which were contributed like that to
# Pygments). This is fixed now, but we still support the old names.
# The table below maps the old aliases to the current names.
# Always the same value.
# Reverse colors.
# Don't do anything if the whole brightness range is acceptable.
# This also avoids turning ansi colors into RGB sequences.
# If a foreground color is given without a background color.
# Calculate new RGB values.
# Do RGB lookup for ANSI colors.
# Parse RRGGBB format.
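The brightness adjustment described above (do nothing when the whole 0..1 range is acceptable, otherwise map each channel linearly into the min..max range) can be sketched as follows. The function name and tuple-based API are illustrative assumptions, not prompt_toolkit's actual interface:

```python
def interpolate_brightness(rgb, min_brightness, max_brightness):
    # Map each 0..255 channel linearly into the min..max brightness range.
    # (Hypothetical helper; prompt_toolkit's real transformation works on
    # its own Attrs/color types.)
    r, g, b = rgb

    def scale(v):
        ratio = v / 255.0
        new = min_brightness + ratio * (max_brightness - min_brightness)
        return int(round(new * 255))

    return (scale(r), scale(g), scale(b))
```

With `min_brightness=0.0, max_brightness=1.0` this is the identity, which is why the whole-range case can be skipped entirely (also avoiding the conversion of ANSI colors into RGB sequences).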
# NOTE: we don't have to support named colors here. They are already
# handled before this point.
# Always return the same hash for these dummy instances.
# Dictionary that maps ANSI color names to their opposite. This is useful for
# turning color schemes that are optimized for a black background usable for a
# white background.
# Because color/bgcolor can be None in `Attrs`.
# Special values.
# Try ANSI color names.
# Try 6 digit RGB colors.
#: Default styling. Mapping from classnames to their style definition.
# Highlighting of search matches in document.
# Incremental search.
# Highlighting of select text in document.
# Highlighting of matching brackets.
# Styling of other cursors, in case of block editing.
# Line numbers.
# Default prompt.
# Search toolbar.
# System toolbar
# "arg" toolbar.
# Validation toolbar.
# Completions toolbar.
# Completions menu.
# (Note: for the current completion, we use 'reverse' on top of fg/bg
# colors. This is to have proper rendering with NO_COLOR=1).
# Fuzzy matches in completion menu (for FuzzyCompleter).
# Styling of readline-like completions.
# Scrollbars.
# Start/end of scrollbars. Adding 'underline' here provides a nice little
# detail to the progress bar, but it doesn't look good on all terminals.
# ('scrollbar.start',                          'underline #ffffff'),
# ('scrollbar.end',                            'underline #000000'),
# Auto suggestion text.
# Trailing whitespace and tabs.
# When Control-C/D has been pressed. Grayed.
# Entering a Vi digraph.
# Control characters, like ^C, ^X.
# Non-breaking space.
# Default styling of HTML elements.
# It should be possible to use the style names in HTML.
# <reverse>...</reverse>  or <noreverse>...</noreverse>.
# Prompt bottom toolbar
# Style that will turn, for instance, the class 'red' into the color 'red'.
# Dialog windows.
# Scrollbars in dialogs.
# Buttons.
# Menu bars.
# Shadows.
# The default Pygments style, include this by default in case a Pygments lexer
# is used.
# Note: In Pygments, Token.String is an alias for Token.Literal.String,
# and Token.Number for Token.Literal.Number.
# Import inline.
# ANSI color names.
# 140 named colors.
# Replace by 'hex' value.
# Hex codes.
# Keep this for backwards-compatibility (Pygments does it).
# I don't like the '#' prefix for named colors.
# 6 digit hex color.
# 3 digit hex color.
# Default.
# Attributes, when they are not filled in by a style. None means that we take
# the value from the parent.
# Start from default Attrs.
# Now update with the given attributes.
# prompt_toolkit extensions. Not in Pygments.
# Pygments properties that we ignore.
# Ignore pieces in between square brackets. This is internal stuff.
# Like '[transparent]' or '[set-cursor-position]'.
# Colors.
# The 'fg:' prefix is optional.
# This one can't contain a comma!
# We don't support Python versions older than 3.6 anymore, so we can always
# depend on dictionary ordering. This is the default.
# Loop through the rules in the order they were defined.
# Rules that are defined later get priority.
# The order of the class names doesn't matter.
# (But the order of rules does matter.)
# Split on '.' and whitespace. Count elements.
# Apply default styling.
# Go from left to right through the style string. Things on the right
# take precedence.
# This part represents a class.
# Do lookup of this class name in the style definition, as well
# as all class combinations that we have so far.
# Expand all class names (comma separated list).
# Build a set of all possible class combinations to be applied.
# Apply the styles that match these class names.
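The "build a set of all possible class combinations" step can be illustrated with a small helper. The sorted, `.`-joined key format is an assumption for illustration; the real style engine has its own key representation:

```python
from itertools import combinations


def class_name_combinations(class_names):
    # All non-empty combinations of class names, joined in sorted order.
    # Because the order of class names doesn't matter (only the order of
    # rules does), sorting gives one canonical lookup key per combination.
    result = []
    names = sorted(class_names)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            result.append(".".join(combo))
    return result
```

Each generated key can then be looked up in the style definition, with later rules taking priority.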
# Process inline style.
# Should not happen, there's always one non-null value.
# NOTE: previously, we used an algorithm where we did not generate the
# Filesystem.
# Fuzzy
# Nested.
# Word completer.
# Deduplicate
#: Automatic completion while typing.
#: Used when completion is explicitly requested by pressing 'tab'.
# NOTE: Right now, we are consuming the `get_completions` generator in
# def get_all_in_thread() -> List[Completion]:
# completions = await get_running_loop().run_in_executor(None, get_all_in_thread)
# for completion in completions:
# Get all completions in a blocking way.
# Get all completions in a non-blocking way.
# Get all completions from the other completers in a blocking way.
# Get all completions from the other completers in a non-blocking way.
# Take only completions that don't change the text before the cursor.
# When there is at least one completion that changes the text before the
# cursor, don't return any common part.
# Return the common prefix.
# Similar to os.path.commonprefix
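The common-prefix computation mentioned above works like `os.path.commonprefix` applied to the completion texts. A minimal sketch (the function name is illustrative):

```python
def common_prefix(strings):
    # Return the longest common prefix of all strings, similar to
    # os.path.commonprefix for plain strings.
    if not strings:
        return ""
    shortest = min(strings, key=len)
    for i, ch in enumerate(shortest):
        # Stop at the first position where any string disagrees.
        if any(s[i] != ch for s in strings):
            return shortest[:i]
    return shortest
```

As described above, the prefix is only returned when no completion changes the text before the cursor.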
# Keep track of the document strings we'd get after applying any completion.
# Don't include completions that don't have any effect at all.
# Complete only when we have at least the minimal input length;
# otherwise, we can get too many results and autocompletion will
# become too heavy.
# Do tilde expansion.
# Directories where to look.
# Start of current file.
# Get all filenames.
# Look for matches in this directory.
# Sort
# Yield them.
# For directories, add a slash to the filename.
# (We don't add them to the `completion`. Users can type it
# to trigger the autocompletion themselves.)
# Get list of words.
# Get word/text before cursor.
# Get completions
# If word before the cursor is an empty string, consider all
# completions, without filtering everything with an empty regex
# pattern.
# lookahead regex to manage overlapping matches
# Prefer the match, closest to the left, then shortest.
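The lookahead-regex trick above (overlapping matches, preferring the leftmost and then the shortest) can be sketched like this; the helper name and return format are assumptions for illustration:

```python
import re


def best_fuzzy_match(word, text):
    # Fuzzy match: the characters of `word` must appear in order in `text`,
    # with anything in between. A zero-width lookahead is used so that
    # overlapping candidates are all found; we then prefer the match
    # closest to the left, and among those, the shortest.
    pat = ".*?".join(re.escape(c) for c in word)
    regex = re.compile("(?=(%s))" % pat)
    matches = [(m.start(1), len(m.group(1))) for m in regex.finditer(text)]
    if not matches:
        return None
    return min(matches, key=lambda t: (t[0], t[1]))
```

The returned `(start, length)` pair is what allows highlighting the matched region in the completion's display text.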
# Include these completions, but set the correct `display`
# attribute and `start_position`.
# We access the private `_display_meta` attribute, because that one is lazy.
# No highlighting when we have zero length matches (no input text).
# In this case, use the original display text (which can include
# additional styling or characters).
# Text before match.
# The match itself.
# Text after match.
# NestedDict = Mapping[str, Union['NestedDict', Set[str], None, Completer]]
# Split document.
# If there is a space, check for the first term, and use a
# subcompleter.
# If we have a sub completer, use this for the completions.
# No space in the input: behave exactly like `WordCompleter`.
# We are not importing `PyperclipClipboard` here, because it would require the
# `pyperclip` module to be present.
# from .pyperclip import PyperclipClipboard
# Not abstract.
# When the clipboard data is equal to what we copied last time, reuse
# the `ClipboardData` instance. That way we're sure to keep the same
# `SelectionType`.
# Pyperclip returned something else. Create a new `ClipboardData`
# instance.
# Add the very first item at the end.
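Rotating the clipboard ring ("add the very first item at the end") is a simple deque rotation. A sketch with an illustrative function name:

```python
from collections import deque


def rotate_ring(items):
    # Move the first item to the end, so that repeated rotation cycles
    # through all stored entries (kill-ring style).
    ring = deque(items)
    if ring:
        ring.append(ring.popleft())
    return list(ring)
```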
# Map search BufferControl back to the original BufferControl.
# This is used to keep track of when exactly we are searching, and for
# applying the search.
# When a link exists in this dictionary, that means the search is
# currently active.
# Map: search_buffer_control -> original buffer control.
# Mapping that maps the children in the layout to their parent.
# This relationship is calculated dynamically, each time when the UI
# is rendered.  (UI elements have only references to their children.)
# List of visible windows.
# List of `Window` objects.
# BufferControl by buffer name.
# BufferControl by buffer object.
# Focus UIControl.
# Otherwise, expecting any Container object.
# This is a `Window`: focus that.
# Focus a window in this container.
# If we have many windows as part of this container, and some
# of them have been focused before, take the last focused
# item. (This is very useful when the UI is composed of more
# complex sub components.)
# Take the first one that was focused before.
# None was focused before: take the very first focusable window.
# Check whether this "container" is focused. This is true if
# one of the elements inside is focused.
# Not every `UIControl` is a `BufferControl`. This only applies to
# `BufferControl`.
# focusable windows are windows that are visible, but also part of the
# modal container. Make sure to keep the ordering.
# Go up in the tree, and find the root. (it will be a part of the
# layout, if the focus is in a modal part.)
# Remove all search links when the UI starts.
# (Important, for instance when control-c has been pressed while
# searching.)
# When `skip_hidden` is set, don't go into disabled ConditionalContainer containers.
# yield from walk(c)
# Also cannot be a float.
# Smallest possible value.
# 0-values are allowed, so use "is None"
# Something huge.
# Don't allow situations where max < min. (This would be a bug.)
# Make sure that the 'preferred' size is always in the min..max range.
# If all dimensions are size zero. Return zero.
# (This is important for HSplit/VSplit, to report the right values to their
# parent when all children are invisible.)
# Ignore empty dimensions. (They should not reduce the size of others.)
# Take the highest minimum dimension.
# For the maximum, we would prefer not to go larger than the smallest
# 'max' value, unless other dimensions have a bigger preferred value.
# This seems to work best:
# If it doesn't work well enough, then it's up to the UI designer to
# explicitly pass dimensions.
# Make sure that min <= max. In some scenarios, when certain min..max
# ranges don't have any overlap, we can end up in such an impossible
# situation. In that case, give priority to the max value.
# E.g. taking (1..5) and (8..9) would return (8..5). Instead take (8..8).
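The dimension-merging rules above can be sketched with plain `(min, max, preferred)` tuples. This is a simplified illustration, not prompt_toolkit's actual `Dimension` API:

```python
def max_layout_dimensions(dims):
    # dims: list of (min, max, preferred) tuples.
    # Ignore empty dimensions, so they don't reduce the size of others.
    dims = [d for d in dims if d != (0, 0, 0)]
    if not dims:
        return (0, 0, 0)
    # Take the highest minimum dimension.
    min_ = max(d[0] for d in dims)
    # Prefer the smallest 'max' value, unless another dimension has a
    # bigger preferred value.
    max_ = max(min(d[1] for d in dims), max(d[2] for d in dims))
    # When min..max ranges don't overlap, give priority to the max value:
    # e.g. (1..5) and (8..9) would give (8..5); use (8..8) instead.
    if min_ > max_:
        max_ = min_
    # Keep 'preferred' within the min..max range.
    preferred = max(min_, min(max_, max(d[2] for d in dims)))
    return (min_, max_, preferred)
```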
# Anything that can be converted to a dimension.
# None is a valid dimension that will fit anything.
# Callable[[], 'AnyDimension']  # Recursive definition not supported by mypy.
# Assume it's a callable that doesn't take arguments.
# Common alias.
# For backward-compatibility.
# Layout.
# Dimensions.
# Containers.
# Controls.
# Margins.
# Menus.
# Get current line number.
# Construct margin.
# Only display line number if this line is not a continuation of the previous line.
# Current line.
# Left align current number in relative mode.
# Other lines.
# Fill with tildes.
# Up arrow.
# Scrollbar body.
# Give the last cell a different style, because we
# want to underline this.
# Down arrow
# Take the width from the first line.
# First line.
# Next lines.
# Dummy window.
# Padding Top.
# The children with padding.
# Padding right.
# Draw child panes.
# Fill in the remaining space. This happens when a child control
# refuses to take more space and we don't have any padding. Adding a
# dummy child control for this (in `self._all_children`) is not
# desired, because in some situations, it would take more space, even
# when it's not required. This is required to apply the styling.
# Calculate heights.
# Sum dimensions
# If there is not enough space for both.
# Don't do anything.
# Find optimal sizes. (Start with minimal size, increase until we cover
# the whole height.)
# Increase until we meet at least the 'preferred' size.
# Increase until we use all the available space. (or until "max")
# At the point where we want to calculate the heights, the widths have
# already been decided. So we can trust `width` to be the actual
# `width` that's going to be used for the rendering. So,
# `divide_widths` is supposed to use all of the available width.
# Using only the `preferred` width caused a bug where the reported
# height was more than required. (We had a `BufferControl` which
# wrapped lines because of the smaller width returned by `_divide_widths`.)
# Padding left.
# Calculate widths.
# Find optimal sizes. (Start with minimal size, increase until we cover
# the whole width.)
# Increase until we use all the available space.
# If there is not enough space.
# Calculate heights, take the largest possible, but not larger than
# write_position.height.
# Draw all child panes.
# z_index of a Float is computed by summing the z_index of the
# container and the `Float`.
# If the float that we have here, is positioned relative to the
# cursor position, but the Window that specifies the cursor
# position is not drawn yet, because it's a Float itself, we have
# to postpone this calculation. (This is a work-around, but good
# enough for now.)
# Draw as late as possible, but keep the order.
# When a menu_position was given, use this instead of the cursor
# position. (These cursor positions are absolute, translate again
# relative to the write_position.)
# Note: This should be inside the for-loop, because one float could
# Left & width given.
# Left & right given -> calculate width.
# Width & right given -> calculate left.
# Near x position of cursor.
# Only width given -> center horizontally.
# Otherwise, take preferred width from float content.
# Center horizontally.
# Trim.
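The horizontal placement rules above (left & width, left & right, width & right, centering, trimming) can be sketched as follows. The helper name and keyword API are illustrative, and cursor-relative placement is omitted:

```python
def float_xpos(container_width, content_width, left=None, right=None, width=None):
    # Resolve the horizontal position (xpos) and width of a float.
    if left is not None and width is not None:
        xpos = left                                  # Left & width given.
    elif left is not None and right is not None:
        xpos = left                                  # Calculate width.
        width = container_width - left - right
    elif width is not None and right is not None:
        xpos = container_width - right - width       # Calculate left.
    elif width is not None:
        xpos = (container_width - width) // 2        # Center horizontally.
    else:
        width = content_width                        # Preferred width.
        xpos = (container_width - width) // 2
    # Trim so the float doesn't extend past the container.
    width = min(width, container_width - max(0, xpos))
    return xpos, width
```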
# Top & height given.
# Top & bottom given -> calculate height.
# Height & bottom given -> calculate top.
# Near cursor.
# Reduce height if not enough space. (We can use the height
# when the content requires it.)
# When the space below the cursor is more than
# the space above, just reduce the height.
# Otherwise, fit the float above the cursor.
# Only height given -> center vertically.
# Otherwise, take preferred height from content.
# Center vertically.
# Write float.
# (xpos and ypos can be negative: a float can be partially visible.)
# Width without margins.
# row/col from input to absolute y/x
# screen coordinates.
# For `DummyControl` for instance, the content can be empty, and so
# will `_rowcol_to_yx` be. Return 0/0 by default.
# Get row where the cursor is displayed.
# For left/right, it probably doesn't make sense to return something.
# (We would have to calculate the widths of all the lines and keep
# double width characters in mind.)
# Cache for the screens generated by the margin.
#: Scrolling position of the main content.
# Vertical scroll 2: this is the vertical offset that a line is
# scrolled if a single line (the one that contains the cursor) consumes
# all of the vertical space.
#: Keep render information (mappings between buffer input and render
#: output.)
# Margin.get_width, needs to have a UIContent instance.
# Calculate the width of the margin.
# Window of the content. (Can be `None`.)
# Include width of the margins.
# Merge.
# When a preferred dimension was explicitly given to the Window,
# ignore the UIControl.
# Otherwise, calculate the preferred dimension from the UI control
# content.
# When a 'preferred' dimension is given by the UIControl, make sure
# that it stays within the bounds of the Window.
# When a `dont_extend` flag has been given, use the preferred dimension
# also as the max dimension.
# If dont_extend_width/height was given, reduce width/height in
# WritePosition if the parent wanted us to paint in a bigger area.
# (This happens if this window is bundled with another window in a
# HSplit/VSplit, but with different size requirements.)
# Draw
# When no z_index is given, draw right away.
# Otherwise, postpone.
# Don't bother writing invisible windows.
# (We save some time, but also avoid applying last-line styling.)
# Calculate margin sizes.
# Render UserControl.
# Scroll content.
# Erase background and fill with `char`.
# Resolve `align` attribute.
# Write body
# Remember render info. (Set before generating the margins. They need this.)
# Set mouse handlers.
# Don't handle mouse events outside of the current modal part of
# the UI.
# Find row/col position first.
# If clicked below the content area, look for a position in the
# last line instead.
# Try again. (When clicking on the right side of double
# width characters, or on the right side of the input.)
# Found position, call handler of UIControl.
# nobreak.
# (No x/y coordinate found for the content. This happens in
# case of a DummyControl, that does not have any content.
# Report (0,0) instead.)
# If it returns NotImplemented, handle it here.
# Render and copy margins.
# Retrieve margin fragments.
# Turn it into a UIContent object.
# (The margin has already rendered those fragments using this size.)
# (ConditionalMargin returns a zero width. -- Don't render.)
# Create screen for margin.
# Copy and shift X.
# Apply 'self.style'
# Tell the screen that this user control has been painted at this
# position.
# Map visible line number to (row, col) of input.
# 'col' will always be zero if line wrapping is off.
# Maps (row, col) from the input to (y, x) screen coordinates.
# Throwaway dictionary.
# Draw line prefix.
# Scroll horizontally.
# Characters skipped because of horizontal scrolling.
# Remove first character.
# When scrolling over double width character,
# this can end up being negative.
# Align this line. (Note that this doesn't work well when we use
# get_line_prefix and that function returns variable width prefixes.)
# Remember raw VT escape sequences. (E.g. FinalTerm's
# escape sequences.)
# Wrap when the line width is exceeded.
# Insert line prefix (continuation prompt).
# Break out of all for loops.
# Set character in screen and shift 'x'.
# When we print a multi width character, make sure
# to erase the neighbors positions in the screen.
# (The empty string is different from everything,
# so next redraw this cell will repaint anyway.)
# If this is a zero width characters, then it's
# probably part of a decomposed unicode character.
# See: https://en.wikipedia.org/wiki/Unicode_equivalence
# Merge it in the previous cell.
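Merging zero-width characters into the previous cell (the decomposed-unicode case above) can be sketched with the standard `unicodedata` module; the function name is an illustrative assumption:

```python
import unicodedata


def merge_zero_width(chars):
    # Append zero-width combining characters (as produced by NFD
    # decomposition, see Unicode equivalence) to the previous cell,
    # instead of giving them their own screen cell.
    cells = []
    for ch in chars:
        if unicodedata.combining(ch) and cells:
            cells[-1] += ch
        else:
            cells.append(ch)
    return cells
```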
# Handle all character widths. If the previous
# character is a multiwidth character, then
# merge it two positions back.
# Previous character width.
# Keep track of write position for each character.
# Copy content.
# Take the next line and copy it in the real screen.
# Copy margin and actual line.
# Normally this should never happen. (It is a bug, if it happens.)
# But to be sure, return (0, 0)
# raise ValueError(
# Set cursor and menu positions.
# Draw input characters from the input processor queue.
# Set menu position.
# Update output screen height.
# Apply `self.style`.
# Apply the 'last-line' class to the last line of each Window. This can
# be used to apply an 'underline' to the user control.
# The textual data for the given key. (Can be a VT100 escape
# sequence.)
# Display only if this is a 1 cell width character.
# Highlight cursor line.
# Highlight cursor column.
# Highlight color columns
# Only draw when visible.
# We don't have horizontal scrolling.
# When there is no space, reset `vertical_scroll_2` to zero and abort.
# This can happen if the margin is bigger than the window width.
# Otherwise the text height will become "infinite" (a big number) and
# the copy_line will spend a huge amount of iterations trying to render
# nothing.
# If the current line consumes more than the whole window height,
# then we have to scroll vertically inside this line. (We don't take
# the scroll offsets into account for this.)
# Also, ignore the scroll offsets in this case. Just set the vertical
# scroll to this line.
# Calculate the height of the text before the cursor (including
# line prefixes).
# Adjust scroll offset.
# Keep the cursor visible.
# Avoid blank lines at the bottom when scrolling up again.
# Current line doesn't consume the whole height. Take scroll offsets into account.
# Make sure that the cursor line is not below the bottom.
# (Calculate how many lines can be shown between the cursor and the bottom.)
# Make sure that the cursor line is not above the top.
# Scroll vertically. (Make sure that the whole line which contains the
# cursor is visible.)
# Note: the `min(topmost_visible, ...)` is to make sure that we
# don't require scrolling up because of the bottom scroll offset,
# when we are at the end of the document.
# Disallow scrolling beyond bottom?
# Without line wrapping, we will never have to scroll vertically inside
# a single line.
# Calculate the scroll offset to apply.
# This can obviously never be more than half the screen size. Also, when
# the cursor appears at the top or bottom, we don't apply the offset.
# Prevent negative scroll offsets.
# Scroll back if we scrolled too much and there's still space to show more of the document.
# Scroll up if cursor is before visible part.
# Scroll down if cursor is after visible part.
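The scroll-offset logic above (clamp the offset to half the window, scroll up when the cursor is before the visible part, scroll down when it is after) can be sketched like this; names and signature are illustrative assumptions:

```python
def scroll_to_keep_visible(scroll, cursor_row, height, content_height, offset=2):
    # The offset can never be more than half the window height.
    offset = min(offset, height // 2)
    # Scroll up if the cursor is before the visible part.
    if cursor_row < scroll + offset:
        scroll = max(0, cursor_row - offset)
    # Scroll down if the cursor is after the visible part.
    if cursor_row >= scroll + height - offset:
        scroll = cursor_row - height + 1 + offset
    # Don't scroll past the end of the content; prevent negative scroll.
    scroll = max(0, min(scroll, content_height - height))
    return scroll
```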
# When a preferred scroll is given, take that first into account.
# Update horizontal/vertical scroll to make sure that the cursor
# remains visible.
# We can only analyze the current line. Calculating the width of
# all the lines is too expensive.
# TODO: not entirely correct yet in case of line wrapping and long lines.
# Key bindings will be collected when `layout.walk()` finds the child
# container.
# Here we have to return the current active container itself, not its
# children. Otherwise, we run into issues where `layout.walk()` will
# never see an object of type `Window` if this contains a window. We
# can't/shouldn't proxy the "isinstance" check.
# Default reset. (Doesn't have to be implemented.)
# Cache for line heights. Maps cache key -> height
# Instead of using `get_line_prefix` as part of the cache key, we use
# the render counter. This is more reliable, because the function could
# stay the same while the content changes over time.
# Calculate line width first.
# Add prefix width.
# Slower path: compute the height when there's a line prefix.
# Keep wrapping as long as the line doesn't fit.
# Keep adding new prefixes for every wrapped line.
# Prefix doesn't fit.
# Fast path: compute height when there's no line prefix.
# Like math.ceil.
# Cache and return
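The fast path above is a ceiling division: the number of screen rows needed for a wrapped line. A sketch, with the "something huge" guard for a zero-width window (function name is illustrative):

```python
def wrapped_line_height(line_width, window_width):
    # Number of screen rows needed for a line of `line_width` cells in a
    # window of `window_width` cells, with line wrapping enabled.
    if window_width <= 0:
        return 10 ** 8  # "Something huge": avoids division by zero.
    if line_width == 0:
        return 1  # An empty line still occupies one row.
    # Integer ceiling division, like math.ceil(line_width / window_width).
    quotient, remainder = divmod(line_width, window_width)
    return quotient + bool(remainder)
```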
# No type check on 'text'. This is done dynamically.
# Key bindings.
#: Cache for the content.
# Only cache one fragment list. We don't need the previous item.
# Render info for the mouse support.
# Get fragments
# Strip mouse handlers from fragments.
# Keep track of the fragments with mouse handler, for later use in
# `mouse_handler`.
# If there is a `[SetCursorPosition]` in the fragment list, set the
# cursor position here.
# If there is a `[SetMenuPosition]`, set the menu over here.
# Create content, or take it from the cache.
# Read the generator.
# Find position in the fragment list.
# Find mouse handler for this character.
# Handler found. Call it.
# (Handler can return NotImplemented, so return
# that result.)
# Otherwise, don't handle here.
# Something very big.
#: Cache for the lexer.
#: Often, due to cursor movement, undo/redo and window resizing
#: operations, it happens that a short time, the same document has to be
#: lexed. This is a fairly easy way to cache such an expensive operation.
# Calculate the content height, if it was drawn on a screen with the
# given width.
# Pass a dummy '1' as height.
# When line wrapping is off, the height should be equal to the number
# of lines.
# When the number of lines exceeds the max_available_height, just
# return max_available_height. No need to calculate anything.
# Cache using `document.text`.
# Merge all input processors together.
# Get cursor position at this line.
# Trigger history loading of the buffer. We do this during the
# rendering of the UI here, because it needs to happen when an
# `Application` with its event loop is running. During the rendering of
# the buffer control is the earliest place we can achieve this, where
# we're sure the right event loop is active, and don't require user
# interaction (like in a key binding).
# Get the document to be shown. If we are currently searching (the
# search buffer has focus, and the preview_search filter is enabled),
# then use the search document, which has possibly a different
# text/cursor position.)
# Only if this feature is enabled.
# And something was typed in the associated search field.
# And we are searching in this control. (Many controls can point to
# the same search field, like in Pyvim.)
# Add a space at the end, because that is a possible cursor
# position. (When inserting after the input.) We should do this on
# all the lines, not just the line containing the cursor. (Because
# otherwise, line wrapping/scrolling could change when moving the
# cursor around.)
# If there is an auto completion going on, use that start point for a
# pop-up menu position. (But only when this buffer has the focus --
# there is only one place for a menu, determined by the focused buffer.)
# Position for completion menu.
# Note: We use 'min', because the original cursor position could be
# Focus buffer when clicked.
# Translate coordinates back to the cursor position of the
# original input.
# Set the cursor position.
# Click and drag to highlight a selection
# When the cursor was moved to another place, select the text.
# (The >1 is actually a small but acceptable workaround for
# selecting text in Vi navigation mode. In navigation mode,
# the cursor can never be after the text, so the cursor
# will be repositioned automatically.)
# Select word around cursor on double click.
# Two MOUSE_UP events in a short timespan are considered a double click.
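The double-click rule above (two MOUSE_UP events in a short timespan) can be sketched with a small stateful helper; the class name and `interval` default are illustrative assumptions:

```python
import time


class DoubleClickDetector:
    # Two clicks within `interval` seconds count as a double click.
    def __init__(self, interval=0.3):
        self.interval = interval
        self._last_click = None

    def click(self, now=None):
        # Pass `now` explicitly for testing; default to a monotonic clock.
        now = time.monotonic() if now is None else now
        is_double = (
            self._last_click is not None
            and now - self._last_click <= self.interval
        )
        self._last_click = now
        return is_double
```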
# Don't handle scroll events here.
# Not focused, but focusing on click events.
# Focus happens on mouseup. (If we did this on mousedown, the
# up event will be received at the point where this widget is
# focused and be handled anyway.)
# Whenever the buffer changes, the UI has to be updated.
# If this BufferControl is used as a search field for one or more other
# BufferControls, then represents the search state.
# Never go beyond this height, because performance will degrade.
# We're only scrolling vertical. So the preferred width is equal to
# that of the content.
# If a scrollbar needs to be displayed, add +1 to the content width.
# Prefer a height large enough so that it fits all the content. If not,
# we'll make the pane scrollable.
# If `show_scrollbar` is set, always reserve space for the scrollbar.
# Only take 'preferred' into account. Min/max can be anything.
# Compute preferred height again.
# Ensure virtual height is at least the available height.
# First, write the content to a virtual screen, then copy over the
# visible part to the real screen.
# If anything in the virtual screen is focused, move vertical scroll to
# No window focused here. Don't scroll.
# Make sure this window is visible.
# Copy over virtual screen and zero width escapes to real screen.
# Copy over mouse handlers.
# Set screen.width/height.
# Copy over window write positions.
# Copy over cursor positions, if they are visible.
# Copy over menu positions, but clip them to the visible area.
# Draw scrollbar.
# Cache mouse handlers when wrapping them. Very often the same mouse
# handler is registered for many positions.
# Copy handlers.
# TODO: if the window is only partly visible, then truncate width/height.
# Start with maximum allowed scroll range, and then reduce according to
# the focused window and cursor position.
# Reduce min/max scroll according to the cursor in the focused window.
# Reduce min/max scroll according to focused window position.
# If the window is small enough, both the top and bottom of the window
# should be visible.
# Window does not fit on the screen. Make sure at least the whole
# screen is occupied with this window, and nothing else is shown.
# Finally, properly clip the vertical scroll.
# Give the last cell a different style, because we want
# to underline this.
# If we end up having one of these special control sequences in the input string,
# we should display them as follows:
# Usually this happens after a "quoted insert".
# Control space
# Escape
# ASCII Delete (backspace).
# Special characters. All visualized like Vim does.
# For the non-breaking space: visualize like Emacs does by default.
# (Print a space, but attach the 'nbsp' class that applies the
# underline style.)
# If this character has to be displayed otherwise, take that one.
# Will be underlined.
# Calculate width. (We always need this, so better to store it directly
# as a member for performance.)
# In theory, `other` can be any type of object, but because of performance
# we don't want to do an `isinstance` check every time. We assume "other"
# is always a "Char".
# Not equal: We don't do `not char.__eq__` here, because of the
# performance of calling yet another function.
#: Escape sequences to be injected.
#: Position of the cursor.
# Map `Window` objects to `Point` objects.
#: Visibility of the cursor.
#: (Optional) Where to position the menu. E.g. at the start of a completion.
#: (We can't use the cursor position, because we don't want the
#: completion menu to change its position when we browse through all the
#: completions.)
#: Currently used width/height of the screen. This will increase when
#: data is written to the screen.
# Windows that have been drawn. (Each `Window` class will add itself to
# this list.)
# List of (z_index, draw_func)
# We keep looping because some draw functions could add new functions
# to this list. See `FloatContainer`.
# Sort the floats that we have so far by z_index.
# Draw only one at a time, then sort everything again. Now floats
# might have been added.
# xpos and ypos can be negative. (A float can be partially visible.)
# NOTE: Previously, the data structure was a dictionary mapping (x,y)
# to the handlers. This however would be more inefficient when copying
# over the mouse handlers of the visible region in the scrollable pane.
# Map y (row) to x (column) to handlers.
# Preferred minimum size of the menu control.
# (The CompletionsMenu class defines a width of 8, and there is a
# scrollbar of width 1.)
# Can be None!
# Calculate width of completions menu.
# If the amount of completions is over 200, compute the width based
# on the first 200 completions, otherwise this can be very slow.
# Select completion.
# Scroll up.
# Scroll down.
# When the text is too wide, trim it.
# Text fragments.
# NOTE: We use a pretty big z_index by default. Menus are supposed to be
# above anything else.
# Show when there are completions but not at the point we are
# returning the input.
# One extra padding on the right + space for arrows.
# Cache for column width computations. This computation is not cheap,
# so we don't want to do it over and over again while the user
# navigates through the completions.
# (map `completion_state` to `(completion_count, width)`. We remember
# the count, because a completer can add new completions to the
# `CompletionState` while loading.)
# Info of last rendering.
# When the desired width is still more than the maximum available,
# reduce by removing columns until we are less than the available
# width.
# Space required outside of the regular columns, for displaying the
# left and right arrow.
# There should be at least one column, but it cannot be wider than
# the available width.
# However, when the columns tend to be very wide, because there are
# some very wide entries, shrink it anyway.
# `column_width` can still be bigger than `suggested_max_column_width`,
# but if there is place for two columns, we divide by two.
# Make sure the current completion is always visible: update scroll offset.
# Write completions to screen.
# Draw left arrow if we have hidden completions on the left.
# Reserve one column empty space. (If there is a right
# arrow right now, there can be a left arrow as well.)
# Draw row content.
# Remember render position for mouse click handler.
# Draw trailing padding for this row.
# (_get_menu_item_fragments only returns padding on the left.)
# Draw right arrow if we have hidden completions on the right.
# Add line.
# Number of completions changed, recompute.
# Mouse click on left arrow.
# Mouse click on right arrow.
# Mouse click on completion.
# There need to be completions, and one needs to be selected.
# This menu needs to be visible.
# Calculate new complete index.
# NOTE: the is_global is required because the completion menu will
# Display filter: show when there are completions but not at the point
# we are returning the input.
# Create child windows.
# NOTE: We don't set style='class:completion-menu' to the
# Initialize split.
# When there are many completions, calling `get_cwidth` for
# every `display_meta_text` is too expensive. In this case,
# just return the max available width. There will be enough
# columns anyway so that the whole screen is filled with
# completions and `create_content` will then take up as much
# space as needed.
# TODO: When creating a copy() or [:], return also an _ExplodedList.
# In case of `OneStyleAndTextTuple`.
# When the fragments are already exploded, don't explode again.
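Exploding fragments means turning `(style, text)` tuples into one-character tuples, so a style can be swapped per character (e.g. for search highlighting). A minimal sketch, assuming this simple tuple representation:

```python
def explode_text_fragments(fragments):
    """Split (style, text) fragments into fragments of exactly one
    character, making per-character style replacement straightforward."""
    result = []
    for style, text in fragments:
        for ch in text:
            result.append((style, ch))
    return result
```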
# For each search match, replace the style string.
# Get cursor column.
# When the search buffer has focus, take that text.
# In case of selection, highlight all matches.
# When this is an empty line, insert a space in order to
# visualize the selection.
# Try for the character under the cursor.
# Try for the character before the cursor.
# Return a list of (row, col) tuples that need to be highlighted.
# pos is relative.
# When the application is in the 'done' state, don't highlight.
# Get the highlight positions.
# Apply if positions were found at this line.
# If any cursor appears on the current line, highlight that.
# Replace fragment.
# Cursor needs to be displayed after the current text.
# Get fragments.
# Insert fragments after the last line.
# Walk through all the fragments.
# Walk backwards through all the fragments and replace whitespace.
# Create separator for tabs.
# Transform fragments.
# Calculate how many characters we have to insert.
# Insert tab.
# Add `pos+1` to mapping, because the cursor can be right after the
# line as well.
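The tab expansion and the position mapping (including the `pos+1` entry for a cursor sitting right after the line) can be sketched like this. `expand_tabs` is an invented name, and the real processor works on fragments rather than plain strings:

```python
def expand_tabs(line: str, tabstop: int = 4):
    """Replace tabs by spaces up to the next tab stop and build a
    source-to-display position mapping."""
    result = []
    mapping = {}
    for pos, ch in enumerate(line):
        mapping[pos] = len(result)
        if ch == '\t':
            # Calculate how many characters we have to insert.
            count = tabstop - (len(result) % tabstop)
            result.extend(' ' * count)
        else:
            result.append(ch)
    # Add `pos + 1` to the mapping, because the cursor can be right
    # after the line as well.
    mapping[len(line)] = len(result)
    return ''.join(result), mapping
```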
# Emulate the BufferControl through which we are searching.
# For this we filter out some of the input processors.
# For a `_MergedProcessor`, check each individual processor, recursively.
# For a `ConditionalProcessor`, check the body.
# Otherwise, check the processor itself.
# Get the line from the original document for this search.
# Run processor when enabled.
# Nothing to merge.
# In the case of a nested _MergedProcessor, each processor wants to
# receive a 'source_to_display' function (as part of the
# TransformationInput) that has everything in the chain before
# included, because it can be called as part of the
# `apply_transformation` function. However, this first
# `source_to_display` should not be part of the output that we are
# returning. (This is the most consistent with `display_to_source`.)
# Current.
# Dummy.
# Run_in_terminal
# Can be None, True or False.
# I/O.
# If `enable_page_navigation_bindings` is not specified, enable it in
# case of full screen applications only. This can be overridden by the user.
# Events.
# List of 'extra' functions to execute before a Application.run.
#: Quoted insert. This flag is set if we go into quoted insert mode.
#: Vi state. (For Vi key bindings.)
#: When to flush the input (For flushing escape keys.) This is important
#: on terminals that use vt100 input. We can't distinguish the escape
#: key from for instance the left-arrow key, if we don't know what follows
#: after "\x1b". This little timer will consider "\x1b" to be escape if
#: nothing did follow in this time span.
#: This seems to work like the `ttimeoutlen` option in Vim.
# Seconds.
#: Like Vim's `timeoutlen` option. This can be `None` or a float.  For
#: instance, suppose that we have a key binding AB and a second key
#: binding A. If the user presses A and then waits, we don't handle
#: this binding yet (unless it was marked 'eager'), because we don't
#: know what will follow. This timeout is the maximum amount of time
#: that we wait until we call the handlers anyway. Pass `None` to
#: disable this timeout.
#: The `Renderer` instance.
# Make sure that the same stdout is used, when a custom renderer has been passed.
#: Render counter. This one is increased every time the UI is rendered.
#: It can be used as a key for caching certain information during one
#: rendering.
# Invalidate flag. When 'True', a repaint has been scheduled.
# Collection of 'invalidate' Event objects.
# Unix timestamp of last redraw. Used when
# `min_redraw_interval` is given.
#: The `InputProcessor` instance.
# If `run_in_terminal` was called. This will point to a `Future` that will be
# set at the point when the previous run finishes.
# Trigger initialize callback.
# Dummy buffer.
# Dummy search state.  (Don't return None!)
# Notice that we don't reset the buffers. (This happens just before
# returning.) When we have multiple buffers, we clearly want the
# content in the other buffers to remain unchanged between several
# calls of `run`. (And the same is true for the focus stack.)
# Trigger reset event.
# Make sure that we have a 'focusable' widget focused.
# (The `Layout` class can't determine this.)
# Don't schedule a redraw if we're not running.
# Otherwise, `get_running_loop()` in `call_soon_threadsafe` can fail.
# See: https://github.com/dbcli/mycli/issues/797
# `invalidate()` called if we don't have a loop yet (not running?), or
# after the event loop was closed.
# Never schedule a second redraw, when a previous one has not yet been
# executed. (This should protect against other threads calling
# 'invalidate' many times, resulting in 100% CPU.)
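The guard against repeated invalidation can be pictured as a flag protected by a lock. A simplified stand-in, where the `scheduled` counter stands in for the redraw actually being scheduled on the event loop:

```python
import threading

class Invalidator:
    """Schedule at most one redraw at a time, so other threads calling
    invalidate() repeatedly cannot pile up redraws and burn 100% CPU."""

    def __init__(self):
        self._lock = threading.Lock()
        self._invalidated = False
        self.scheduled = 0

    def invalidate(self):
        with self._lock:
            if self._invalidated:
                return  # A redraw is already pending; don't schedule another.
            self._invalidated = True
        # Here the real code would call `loop.call_soon_threadsafe(redraw)`.
        self.scheduled += 1

    def redraw_done(self):
        with self._lock:
            self._invalidated = False
```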
# Trigger event.
# When a minimum redraw interval is set, wait minimum this amount
# of time between redraws.
# Only draw when no sub application was started.
# Render
# Draw in 'done' state and reset renderer.
# Fire render event.
# NOTE: We want to make sure this Application is the active one. The
# Remove all the original event handlers. (Components can be removed
# from the UI.)
# Gather all new events.
# (All controls are able to invalidate themselves.)
# Erase, request position (when cursor is at the start position)
# and redraw again. -- The order is important.
# Process registered "pre_run_callables" and clear list.
# Handling signals in other threads is not supported.
# Also on Windows, `add_signal_handler(signal.SIGINT, ...)` raises
# `NotImplementedError`.
# See: https://github.com/prompt-toolkit/python-prompt-toolkit/issues/1553
# Counter for cancelling 'flush' timeouts. Every time when a key is
# pressed, we start a 'flush' timer for flushing our escape key. But
# when any subsequent input is received, a new timer is started and
# the current timer will be ignored.
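That counter-based cancellation can be sketched without any event-loop machinery. `FlushTimer` is a made-up name; in the real code the timer is an `asyncio` sleep that checks the counter when it wakes up:

```python
class FlushTimer:
    """Every key press starts a new flush timer; any timer that was started
    earlier is invalidated by comparing its token with the latest counter."""

    def __init__(self):
        self._counter = 0
        self.flush_count = 0

    def start(self) -> int:
        """Called on every key press: returns a token for the new timer."""
        self._counter += 1
        return self._counter

    def fire(self, token: int) -> bool:
        """Called when a timer expires; only the most recent timer flushes."""
        if token != self._counter:
            return False  # Superseded by more recent input: ignore.
        self.flush_count += 1
        return True
```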
# Reset.
# (`self.future` needs to be set when `pre_run` is called.)
# Feed type ahead input first.
# Ignore when we aren't running anymore. This callback will be
# removed from the loop next time. (It could be that it was
# still in the 'tasks' list of the loop.)
# Except: if we need to process incoming CPRs.
# Get keys from the input object.
# Feed to key processor.
# Quit when the input stream was closed.
# Ensure that key bindings callbacks are always executed in the
# current context. This is important when key bindings are
# accessing contextvars. (These callbacks are currently being
# called from a different context. Underneath,
# `loop.add_reader` is used to register the stdin FD.)
# (We copy the context to avoid a `RuntimeError` in case the
# context is already active.)
# Flush input after timeout.
# (Used for flushing the enter key.)
# This sleep can be cancelled, in that case we won't flush yet.
# Get keys, and feed to key processor.
# Enter raw mode, attach input and attach WINCH event handler.
# Draw UI.
# Wait for UI to finish.
# In any case, when the application finishes.
# (Successful, or because of an error.)
# _redraw has a good chance to fail if it calls widgets
# with bad code. Make sure to reset the renderer
# anyway.
# Unset `is_running`, this ensures that possibly
# scheduled draws won't paint during the following
# yield.
# Detach event handlers for invalidate events.
# (Important when a UIControl is embedded in multiple
# applications, like ptterm in pymux. An invalidate
# should not trigger a repaint in terminated
# applications.)
# Wait for CPR responses.
# Wait for the run-in-terminals to terminate.
# Store unprocessed input as typeahead for next time.
# Save SIGINT handlers (Python and OS level).
# See: https://github.com/prompt-toolkit/python-prompt-toolkit/issues/1576
# Set slow_callback_duration.
# Reset slow_callback_duration.
# XXX: make sure to set this before calling '_redraw'.
# Also remove the Future again. (This brings the
# application back to its initial state, where it also
# doesn't have a Future.)
# Make sure to set `_invalidated` to `False` to begin with,
# otherwise we're not going to paint anything. This can happen if
# this application had run before on a different event loop, and a
# paint was scheduled using `call_soon_threadsafe` with
# `max_postpone_time`.
# Wait for the background tasks to be done. This needs to
# go in the finally! If `_run_async` raises
# `KeyboardInterrupt`, we still want to wait for the
# background tasks.
# The `ExitStack` above is defined in typeshed in a way that it can
# swallow exceptions. Without next line, mypy would think that there's
# a possibility we don't return here. See:
# https://github.com/python/mypy/issues/7726
# Signal handling only works in the main thread.
# Create new event loop with given input hook and run the app.
# In Python 3.12, we can use asyncio.run(loop_factory=...)
# For now, use `run_until_complete()`.
# workaround to make input hooks work for IPython until
# https://github.com/ipython/ipython/pull/14241 is merged.
# IPython was setting the input hook by installing an event loop
# previously.
# See whether a loop was installed already. If so, use that.
# That's required for the input hooks to work, they are
# installed using `set_event_loop`.
# No loop installed. Run like usual.
# Use existing loop.
# For Python 2: we have to get traceback at this point, because
# we're still in the 'except:' block of the event loop where the
# traceback is still available. Moving this code in the
# 'print_exception' coroutine will lose the exception.
# Print output. Similar to 'loop.default_exception_handler',
# but don't use logger. (This works better on Python 2.)
# Inline import on purpose. We don't want to import pdb, if not needed.
# Hide application.
# Detach input and dispatch to debugger.
# Note: we don't render the application again here, because
# there's a good chance that there's a breakpoint on the next
# line. This paint/erase cycle would move the PDB prompt back
# to the middle of the screen.
# from .run_in_terminal import in_terminal
# async with in_terminal():
# Here we block the App's event loop thread until the
# debugger resumes. We could have used `with
# run_in_terminal.in_terminal():` like the commented
# code above, but it seems to work better if we
# completely stop the main event loop while debugging.
# Wait until the cancellation of the background tasks completes.
# `asyncio.wait()` does not propagate exceptions raised within any of
# these tasks, which is what we want. Otherwise, we can't distinguish
# between a `CancelledError` raised in this task because it got
# cancelled, and a `CancelledError` raised on this `await` checkpoint,
# because *we* got cancelled during the teardown of the application.
# (If we get cancelled here, then it's important to not suppress the
# `CancelledError`, and have it propagate.)
# NOTE: Currently, if we get cancelled at this point then we can't wait
# We know about this already.
# Note: only do this if the input queue is not empty, and a return
# value has not been set. Otherwise, we won't be able to read the
# response anyway.
# Try to use the same input/output file descriptors as the ones
# used to run this application.
# Run sub process.
# Wait for the user to press enter.
# Only suspend when the operating system supports it.
# (Not on Windows.)
# Send `SIGTSTP` to own process.
# This will cause it to suspend.
# Usually we want the whole process group to be suspended. This
# handles the case when input is piped from another process.
# Collect key bindings from currently focused control and all parent
# controls. Don't include key bindings of container parent controls.
# Include global bindings (starting at the top-model container).
# Add App key bindings
# Add mouse bindings.
# Reverse this list. The current control's key bindings should come
# last. They need priority.
# Control-c pressed. Don't propagate this error.
# The tricky part here is that signals are registered in the Unix event
# loop with a wakeup fd, but another application could have registered
# signals using signal.signal directly. For now, the implementation is
# hard-coded for the `asyncio.unix_events._UnixSelectorEventLoop`.
# No WINCH? Then don't do anything.
# Keep track of the previous handler.
# (Only UnixSelectorEventloop has `_signal_handlers`.)
# Restore the previous signal handler.
# The following functions are part of the stable ABI since Python 3.2.
# See: https://docs.python.org/3/c-api/sys.html#c.PyOS_getsig
# Inline import: these are not available on Pypy.
# GraalPy has the functions, but they don't work
# PyOS_sighandler_t PyOS_getsig(int i)
# PyOS_sighandler_t PyOS_setsig(int i, PyOS_sighandler_t h)
# When a previous `run_in_terminal` call was in progress. Wait for that
# to finish, before starting this one. Chain to previous call.
# Wait for the previous `run_in_terminal` to finish.
# Wait for all CPRs to arrive. We don't want to detach the input until
# all cursor position responses have arrived. Otherwise, the tty
# will echo its input and can show stuff like ^[[39;1R.
# Draw interface in 'done' state, or erase.
# Disable rendering.
# Detach input.
# Redraw interface again.
# (Check for `.done()`, because it can be that this future was
# cancelled.)
# The application will be set dynamically by the `set_app` context
# manager. This is called in the application itself.
# If no input/output is specified, fall back to the current input/output,
# if there was one that was set/created for the current session.
# (Note that we check `_input`/`_output` and not `input`/`output`. This is
# because we don't want to accidentally create a new input/output objects
# here and store it in the "parent" `AppSession`. Especially, when
# combining pytest's `capsys` fixture and `create_app_session`, sys.stdin
# and sys.stderr are patched for every test, so we don't want to leak
# those output objects across `AppSession`s.)
# Create new `AppSession` and activate.
# Color depth.
# Low intensity.
# High intensity.
# Don't use, 'default' doesn't really have a value.
# When we have a bit of saturation, avoid the gray-like colors; otherwise,
# the distance to a gray color is too often the smallest.
# Between 0..510
# Take the closest color.
# (Thanks to Pygments for this part.)
# "infinity" (>distance from #000000 to #ffffff)
# Turn color name into code.
# Build color table.
# colors 0..15: 16 basic colors
# 0
# 1
# 2
# 3
# 4
# 5
# 6
# 7
# 8
# 9
# 10
# 11
# 12
# 13
# 14
# 15
# colors 16..232: the 6x6x6 color cube
# colors 233..253: grayscale
# Find closest color.
# (Thanks to Pygments for this!)
# XXX: We ignore the 16 ANSI colors when mapping RGB
# to the 256 colors, because these highly depend on
# the color scheme of the terminal.
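The standard xterm palette and the closest-match search read roughly like the sketch below (after Pygments). The loop uses the conventional 6x6x6 cube levels and a 24-step gray ramp, and the first 16 ANSI slots are skipped when matching, as the comments above describe:

```python
def build_color_table():
    """Build the non-ANSI part of the xterm 256-color palette:
    the 6x6x6 color cube followed by the grayscale ramp."""
    levels = (0, 95, 135, 175, 215, 255)
    table = [(r, g, b) for r in levels for g in levels for b in levels]
    # Grayscale ramp.
    table.extend((8 + i * 10,) * 3 for i in range(24))
    return table

def closest_color(r, g, b, table):
    """Return the palette index with the smallest squared RGB distance.
    The result is offset by 16: the ANSI colors are ignored on purpose,
    because they depend on the terminal's color scheme."""
    best, distance = 0, 257 * 257 * 3  # "infinity"
    for i, (r2, g2, b2) in enumerate(table):
        d = (r - r2) ** 2 + (g - g2) ** 2 + (b - b2) ** 2
        if d < distance:
            distance, best = d, i
    return best + 16
```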
# When requesting ANSI colors only, and both fg/bg color were converted
# to ANSI, ensure that the foreground and background color are not the
# same. (Unless they were explicitly defined to be the same color.)
# 16 ANSI colors. (Given by name.)
# RGB colors. (Defined as 'ffffff'.)
# When only 16 colors are supported, use that.
# Background.
# Foreground.
# True colors. (Only when this feature is enabled.)
# 256 RGB colors.
# For the error messages. Only display "Output is not a terminal" once per
# file descriptor.
# Cache for escape codes.
# Keep track of whether the cursor shape was ever changed.
# (We don't restore the cursor shape if it was never changed - by
# default, we don't change them.)
# Don't hide/show the cursor when this was already done.
# (`None` means that we don't know whether the cursor is visible or
# not.)
# Normally, this requires a real TTY device, but people instantiate
# this class often during unit tests as well. For convenience, we print
# an error message, use standard dimensions, and go on.
# If terminal (incorrectly) reports its size as 0, pick a
# reasonable default.  See
# https://github.com/ipython/ipython/issues/10071
# It is possible that `stdout` is no longer a TTY device at this
# point. In that case we get an `OSError` in the ioctl call in
# `get_size`. See:
# https://github.com/prompt-toolkit/python-prompt-toolkit/pull/1021
# Not supported by the Linux console.
# Enable mouse-drag support.
# Enable urxvt Mouse mode. (For terminals that understand this.)
# Also enable Xterm SGR mouse mode. (For terminals that understand this.)
# Note: E.g. lxterminal understands 1000h, but not the urxvt or sgr
# Get current depth.
# Write escape character.
# Put the terminal in cursor mode. (Instead of application mode.)
# Note: Not the same as '\n', '\n' can cause the window content to
# '\x1b[D'
# Stop blinking cursor and show.
# (Only reset cursor shape, if we ever changed it.)
# Reset cursor shape.
# When the input is a tty, we assume that CPR is supported.
# It's not when the input is piped from Pexpect.
#: If True: write the output of the renderer also to the following file. This
#: is very useful for debugging. (e.g.: to see that we don't write more bytes
#: than required.)
# Are we running in 'xterm' on Windows, like git-bash for instance?
# Remember the default console colors.
# We take the width of the *visible* region as the size. Not the width
# of the complete screen buffer. (Unless use_complete_width has been
# set.)
# We avoid the right margin, because Windows will wrap otherwise.
# Create `Size` object.
# NOTE: We don't call the `GetConsoleScreenBufferInfo` API through
# The Python documentation contains the following - possibly related - warning:
# Also see:
# success = self._winapi(windll.kernel32.GetConsoleScreenBufferInfo,
# Reset attributes.
# Start from the default attributes.
# Override the last four bits: foreground color.
# Override the next four bits: background color.
# Reverse: swap these four bits groups.
# Not supported by Windows.
# Only flush stdout buffer. (It could be that Python still has
# something in its buffer. -- We want to be sure to print that in
# the correct color.)
# Print characters one by one. This appears to be the best solution
# in order to avoid traces of vertical lines when the completion
# menu disappears.
# Get current window size
# Scroll to the left.
# Scroll vertical
# No vertical scroll if the cursor is already on the screen.
# Scroll API
# Create a new console buffer and activate that one.
# This `ENABLE_QUICK_EDIT_MODE` flag needs to be cleared for mouse
# support to work, but it's possible that it was already cleared
# before.
# Get console handle
# Foreground color is intensified.
# Background color is intensified.
# Cache (map color string to foreground and background code).
# Consider TERM, PROMPT_TOOLKIT_BELL, and PROMPT_TOOLKIT_COLOR_DEPTH
# environment variables. Notice that PROMPT_TOOLKIT_COLOR_DEPTH value is
# the default that's used if the Application doesn't override it.
# By default, render to stdout. If the output is piped somewhere else,
# render to stderr.
# (This is `None` when using `pythonw.exe` on Windows.)
# If the patch_stdout context manager has been used, then sys.stdout is
# replaced by this proxy. For prompt_toolkit applications, we want to use
# the real stdout.
# If the output is still `None`, use a DummyOutput.
# This happens for instance on Windows, when running the application under
# `pythonw.exe`. In that case, there won't be a terminal Window, and
# stdin/stdout/stderr are `None`.
# Stdout is not a TTY? Render as plain text.
# This is mostly useful if stdout is redirected to a file, and
# `print_formatted_text` is used.
#: One color only.
#: ANSI Colors.
#: The default.
#: 24 bit True color.
# Disable color if a `NO_COLOR` environment variable is set.
# See: https://no-color.org/
# Check the `PROMPT_TOOLKIT_COLOR_DEPTH` environment variable.
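Putting those two environment checks together might look like this. A sketch: the string depth values mirror prompt_toolkit's `ColorDepth` member names, but treat them as placeholders rather than the library's actual return types.

```python
import os

_VALID_DEPTHS = ('DEPTH_1_BIT', 'DEPTH_4_BIT', 'DEPTH_8_BIT', 'DEPTH_24_BIT')

def default_color_depth(env=os.environ):
    """Pick a default color depth, honoring NO_COLOR (https://no-color.org)
    and the PROMPT_TOOLKIT_COLOR_DEPTH environment variable."""
    if env.get('NO_COLOR'):
        return 'DEPTH_1_BIT'  # Monochrome output.
    depth = env.get('PROMPT_TOOLKIT_COLOR_DEPTH')
    if depth in _VALID_DEPTHS:
        return depth
    return 'DEPTH_8_BIT'  # 256 colors: the default.
```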
# If the IO object has an `encoding` and `buffer` attribute, it means that
# we can access the underlying BinaryIO object and write into it in binary
# mode. This is preferred if possible.
# NOTE: When used in a Jupyter notebook, don't write binary.
# Ensure that `stdout` is made blocking when writing into it.
# Otherwise, when uvloop is activated (which makes stdout
# non-blocking), and we write big amounts of text, then we get a
# `BlockingIOError` here.
# (We try to encode ourselves, because that way we can replace
# characters that don't exist in the character set, avoiding
# UnicodeEncodeError crashes. E.g. u'\xb7' does not appear in 'ascii'.)
# My Arch Linux installation of July 2015 reported 'ANSI_X3.4-1968'
# for sys.stdout.encoding in xterm.
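Encoding with `errors='replace'` is the whole trick: characters missing from the target charset become '?' instead of raising `UnicodeEncodeError` (`encode_for_output` is an invented helper name):

```python
def encode_for_output(text: str, encoding: str = 'ascii') -> bytes:
    """Encode ourselves, so characters that don't exist in the character
    set (e.g. '\xb7' in ASCII) are replaced instead of crashing."""
    return text.encode(encoding, errors='replace')
```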
# Interrupted system call. Can happen in case of a window
# resize signal. (Just ignore. The resize handler will render
# again anyway.)
# This can happen when there is a lot of output and the user
# sends a KeyboardInterrupt by pressing Control-C. E.g. in
# a Python REPL when we execute "while True: print('test')".
# (The `ptpython` REPL uses this `Output` class instead of
# `stdout` directly -- in order to be network transparent.)
# So, just ignore.
# On Windows, the `os` module doesn't have a `get/set_blocking`
# Failed somewhere.
# `get_blocking` can raise `OSError`.
# The io object can raise `AttributeError` when no `fileno()` method is
# present (i.e., when it's not a real file object).
# Assume we're good, and don't do anything.
# Make blocking if we weren't blocking yet.
# Restore original blocking mode.
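A context manager capturing that make-blocking/restore dance might look like this. A sketch, assuming a Unix platform where `os.get_blocking`/`os.set_blocking` are available:

```python
import os
from contextlib import contextmanager

@contextmanager
def blocking_io(fd: int):
    """Temporarily put a file descriptor in blocking mode (e.g. when uvloop
    has made stdout non-blocking), restoring the original mode afterwards."""
    try:
        was_blocking = os.get_blocking(fd)
    except OSError:
        # `get_blocking` can raise `OSError`; assume we're good then.
        yield
        return
    if not was_blocking:
        os.set_blocking(fd, True)  # Make blocking if we weren't yet.
    try:
        yield
    finally:
        if not was_blocking:
            os.set_blocking(fd, False)  # Restore original blocking mode.
```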
# We don't need this on Windows.
# See: https://msdn.microsoft.com/pl-pl/library/windows/desktop/ms686033(v=vs.85).aspx
# Remember the previous console mode.
# Enable processing of vt100 sequences.
# Restore console mode.
# NOTE: Now that we use "virtual terminal input" on
# "enable_mouse_support",
# "disable_mouse_support",
# "enable_bracketed_paste",
# "disable_bracketed_paste",
# Previously, we used `DEPTH_4_BIT`, even on Windows 10. This was
# because true color support was added after "Console Virtual Terminal
# Sequences" support was added, and there was no good way to detect
# what support was given.
# 24bit color support was added in 2016, so let's assume it's safe to
# take that as a default:
# https://devblogs.microsoft.com/commandline/24-bit-color-in-the-windows-console/
# Get original console mode.
# Try to enable VT100 sequences.
# Toolbars.
# Dialogs.
# Writeable attributes.
# If no height was given, guarantee height of at least 1.
# There is no cursor navigation in a label, so it makes sense to always
# wrap lines by default.
# Note: `dont_extend_width` is False, because we want to allow buttons
# Notice: we use `Template` here, because `self.title` can be an
# `HTML` object for instance.
# Padding is required to make sure that if the content is
# too small, the right frame border is still aligned.
# Specifying the height here will increase the rendering speed.
# current_values will be used in multiple_selection,
# current_value will be used otherwise.
# Cursor index: take first selected item or first item otherwise.
# Vi-like.
# We first check values after the selected value, then all values.
# Control and window.
# Add mouse handler to all fragments.
# Remove last newline.
# We first draw the label, then the actual progress bar.  Right
# now, this is the only way to have the colors of the progress
# bar appear on top of the label. The problem is that our label
# can't be part of any `Window` below.
# When a button is selected, handle left/right key bindings.
# Add optional padding around the body.
# The buttons.
# Key bindings for whole dialog.
# Navigation through the main menu.
# Sub menu navigation.
# If this item does not have a sub menu, go up in the parent menu.
# Look for previous enabled items in this sub menu.
# Return to main menu.
# The titlebar.
# The 'body', like defined above.
# This is called during the rendering. When we discover that this
# widget doesn't have the focus anymore. Reset menu state.
# Generate text fragments for the main menu.
# Toggle focus.
# The arrow keys can't interact with menu items that are disabled.
# The mouse shouldn't be able to either.
# Note: The style needs to be applied to the toolbar as a whole, not
# Emacs
# Vi.
# Global bindings. (Listen to these bindings, even when this widget is
# not focused.)
# Width of the completions without the left/right arrows in the margins.
# Booleans indicating whether we stripped from the left/right
# Create Menu content.
# When there is no more room for the next completion
# If the current one was not yet displayed, page to the next sequence.
# If the current one is visible, stop here.
# Extend/strip until the content width.
# Return fragments
# Prompts.
# Progress bars.
# Choice selection.
# Keep text.
# Prefer to make this text area as big as possible, to avoid having a window
# that keeps resizing when we add text to it.
# Run the callback in the executor. When done, set a return value for the
# UI, so that it quits.
# Used by '_display_completions_like_readline'.
# Formatted text for the continuation prompt. It's the same like other
# formatted text, except that if it's a callable, it takes three arguments.
# (prompt_width, line_number, wrap_count) -> AnyFormattedText.
# Ensure backwards-compatibility, when `vi_mode` is passed.
# Store all settings in this class.
# Store attributes.
# (All except 'editing_mode'.)
# Create buffers, layout and Application.
# Create buffers list.
# Keep text, we call 'reset' later on.
# Make sure that complete_while_typing is disabled when
# enable_history_search is enabled. (First convert to Filter,
# to avoid doing bitwise operations on bool objects.)
# Create functions that will dynamically split the prompt. (If we have
# a multiline prompt.)
# Create processors list.
# Users can insert processors here.
# Create bottom toolbars.
# Build the layout.
# The main input, with completion menus floating on top of it.
# Completion menus.
# NOTE: Especially the multi-column menu needs to be
# The right prompt.
# Wrap the main input in a frame, if requested.
# In multiline mode, we use two toolbars for 'arg' and 'search'.
# Default key bindings.
# Create application
# During render time, make sure that we focus the right search control
# (if we are searching). - This could be useful if people make the
# 'multiline' property dynamic.
# When any of these arguments are passed, this value is overwritten
# in this PromptSession.
# `message` should go first, because people call it as
# positional argument.
# Following arguments are specific to the current `prompt()` call.
# NOTE: We used to create a backup of the PromptSession attributes and
# NOTE 2: YES, this is a lot of repetition below...
# This is not reactive.
# If we are using the default output, and have a dumb terminal. Use the
# dumb prompt.
# Send prompt to output.
# Key bindings for the dumb prompt: mostly the same as the full prompt.
# Create and run application.
# Render line ending.
# Validate and handle input. We use `call_from_executor` in
# order to run it "soon" (during the next iteration of the
# event loop), instead of right now. Otherwise, it won't
# display the default value.
# If there is an autocompletion menu to be shown, make sure that our
# layout has at least a minimal height in order to display it.
# Reserve the space, either when there are completions, or when
# `complete_while_typing` is true and we expect completions very
# soon.
# When the continuation prompt is not given, choose the same width as
# the actual prompt.
# First line: display the "arg" or the prompt.
# For the next lines, display the appropriate continuation.
# Should not happen because of the `has_arg` filter in the layout.
# Expose the Input and Output objects as attributes, mainly for
# backward-compatibility.
# The history is the only attribute that has to be passed to the
# `PromptSession`, it can't be passed into the `prompt()` method.
# Add an empty window between the selection input and the
# bottom toolbar, if the bottom toolbar is visible, in
# order to allow the bottom toolbar to be displayed at the
# bottom of the screen.
# Create Output object.
# Get color depth.
# Merges values.
# Normal lists which are not instances of `FormattedText` are
# considered plain text.
# Print output.
# Flush the output stream.
# If an application is running, print above the app. This does not require
# `patch_stdout`.
# `DummyInput` will cause the application to terminate immediately.
# Formatters.
# If no `cancel_callback` was given, and we're creating the progress
# bar from the main thread. Cancel by sending a `KeyboardInterrupt` to
# the main thread.
# Note that we use __stderr__ as default error output, because that
# works best with `patch_stdout`.
# Create UI Application.
# Needs to be passed as callable (partial) to the 'width'
# parameter, because we want to call it on every resize.
# Run application in different thread.
# Wait for the app to be started. Make sure we don't quit earlier,
# otherwise `self.app.exit` won't terminate the app because
# `self.app.future` has not yet been set.
# Quit UI application.
# Make sure that the key bindings work.
# We don't know the total length.
# Only done if we iterate to the very end.
# Ensure counter has stopped even if we did not iterate to the
# end (e.g. break or exceptions).
# This counter has not already been stopped.
# Clearing any previously set stop_time.
# It doesn't fit -> scroll task name.
# Compute pb_a based on done, total, or stopped states.
# 100% completed, irrespective of how much was actually marked as completed.
# Show percentage completed.
# Total is unknown and bar is still running.
# Compute percent based on the time.
# Subtract left, sym_b, and right.
# Scale percent by width
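The width arithmetic amounts to subtracting the decorations and scaling; a sketch with invented decoration defaults (`left`, `sym_b`, `right` stand for the bar's left bracket, tip symbol, and right bracket):

```python
def bar_widths(percent: float, total_width: int,
               left: str = '[', sym_b: str = '>', right: str = ']'):
    """Split a progress bar into completed and remaining cell counts.

    First subtract `left`, `sym_b` and `right` from the total width,
    then scale the completed part by the percentage (0..100).
    """
    width = total_width - len(left) - len(sym_b) - len(right)
    pb_a = int(percent * width / 100)  # Completed cells.
    return pb_a, width - pb_a
```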
# Get formatted text from nested formatter, and explode it in
# text/style tuples.
# Insert colors.
# app
# base.
# utils.
# NOTE: `has_focus` below should *not* be `memoized`. It can reference any user
# Consider focused when any window inside this container is
# focused.
# Turn nested _AndLists into one.
# Remove duplicates. This could speed up execution, and doesn't make a
# difference for the evaluation.
# If only one filter is left, return that without wrapping into an
# `_AndList`.
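Those three simplifications (flatten nested and-lists, remove duplicates, unwrap singletons) are easy to show on plain values; the real code operates on `Filter` objects, and `simplify_and_list` is a made-up name:

```python
def simplify_and_list(filters):
    """Flatten nested and-lists, remove duplicates (keeping order), and
    return the single remaining filter unwrapped when only one is left."""
    flat = []
    for f in filters:
        if isinstance(f, list):
            flat.extend(f)  # Turn nested and-lists into one.
        else:
            flat.append(f)
    # Deduplication doesn't change the result of an AND, but speeds it up.
    unique = list(dict.fromkeys(flat))
    if len(unique) == 1:
        return unique[0]
    return unique
```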
# Often used as type annotation.
# Old names.
# Keep the original classnames for backwards compatibility.
# No lambda here! (`has_focus` is a callable that returns a callable.)
# Never go more than this amount of lines backwards for synchronization.
# That would be too CPU intensive.
# Start lexing at the start, if we are in the first 'n' lines and no
# synchronization position was found.
# Scan upwards, until we find a point where we can start the syntax
# synchronization.
# No synchronization point found. If we aren't that far from the
# beginning, start at the very beginning, otherwise, just try to start
# at the current line.
# For Python, start highlighting at any class/def block.
# For HTML, start at any open/close tag definition.
# For javascript, start at a function.
# TODO: Add definitions for other languages.
# Minimum amount of lines to go backwards when starting the parser.
# This is important when the lines are retrieved in reverse order, or when
# scrolling upwards. (Due to the complexity of calculating the vertical
# scroll offset in the `Window` class, lines are not always retrieved in
# order.)
# When a parser was started this amount of lines back, read the parser
# until we get the current line. Otherwise, start a new parser.
# (This should probably be bigger than MIN_LINES_BACKWARDS.)
# Instantiate the Pygments lexer.
# Create syntax sync instance.
# Inline imports: the Pygments dependency is optional!
# Cache of already lexed lines.
# Pygments generators that are currently lexing.
# Map lexer generator to the line number.
# We call `get_text_fragments_unprocessed`, because `get_tokens` will
# still replace \r\n and \r by \n.  (We don't want that,
# Pygments should return exactly the same amount of text, as we
# have given as input.)
# Turn Pygments `Token` object into prompt_toolkit style
# Find closest line generator.
# No generator found. Determine starting point for the syntax
# synchronization first.
# Go at least x lines back. (Make scrolling upwards more
# efficient.)
# Find generator close to this point, or otherwise create a new one.
# If the column is not 0, ignore the first line. (Which is
# incomplete. This happens when the synchronization algorithm tells
# us to start parsing in the middle of a line.)
# Exhaust the generator, until we find the requested line.
# Remove the next item from the cache.
# (It could happen that it's already there, because of
# another generator that started filling these lines,
# but we want to synchronize these lines with the
# current lexer's state.)
# This needs to be true, so that the dummy input will trigger an
# `EOFError` immediately in the application.
# Call the callback immediately once after attaching.
# This tells the callback to call `read_keys` and check the
# `input.closed` flag, after which it won't receive any keys, but knows
# that `EOFError` should be raised. This unblocks `read_from_input` in
# `application.py`.
# For the error messages. Only display "Input is not a terminal" once per
# Test whether the given input object has a file descriptor.
# (Idle reports stdin to be a TTY, but fileno() is not implemented.)
# This should not raise, but can return 0.
# Even when we have a file descriptor, it doesn't mean it's a TTY.
# this class often during unit tests as well. They use, for instance,
# pexpect to pipe data into an application. For convenience, we print
# an error message and go on.
# Create a backup of the fileno(). We want this to work even if the
# underlying file is closed, so that `typeahead_hash()` keeps working.
# Buffer to collect the Key objects.
# Read text from stdin.
# Pass it through our vt100 parser.
# Return result.
# Flush all pending keys. (This is most important to flush the vt100
# 'Escape' key early when nothing else follows.)
# (loop, fd) -> current callback
# For `EPollSelector`, adding /dev/null to the event loop will raise
# `PermissionError` (that doesn't happen for `SelectSelector`
# apparently). Whenever we get a `PermissionError`, we can raise
# `EOFError`, because there's not more to be read anyway. `EOFError` is
# an exception that people expect in
# `prompt_toolkit.application.Application.run()`.
# To reproduce, do: `ptpython 0< /dev/null 1< /dev/null`
# There are several reasons for ignoring errors:
# 1. To avoid the "Inappropriate ioctl for device" crash if somebody would
# 2. Related, when stdin is an SSH pipe, and no full terminal was allocated.
# Ignore attribute errors.
# NOTE: On OS X systems, using pty.setraw() fails. Therefore we are using this:
# VMIN defines the number of characters read at a time in
# non-canonical mode. It seems to default to 1 on Linux, but on
# Solaris and derived operating systems it defaults to 4. (This is
# because the VMIN slot is the same as the VEOF slot, which
# defaults to ASCII EOT = Ctrl-D = 4.)
# Disable XON/XOFF flow control on output and input.
# (Don't capture Ctrl-S and Ctrl-Q.)
# Like executing: "stty -ixon."
# Don't translate carriage return into newline on input.
# # Put the terminal in application mode.
# self._stdout.write('\x1b[?1h')
# Turn the ICRNL flag back on. (Without this, calling `input()` in
# run_in_terminal doesn't work and displays ^M instead. Ptpython
# evaluates commands using `run_in_terminal`, so it's important that
# they translate ^M back into ^J.)
# Win32 Constants for MOUSE_EVENT_RECORD.
# See: https://docs.microsoft.com/en-us/windows/console/mouse-event-record-str
# The windows console doesn't depend on the file handle, so
# this is not used for the event loop (which uses the
# handle instead). But it's used in `Application.run_system_command`
# which opens a subprocess with a given stdin/stdout.
# Keys with character data.
# Control-Space (Also for Ctrl-@)
# Control-A (home)
# Control-B (emacs cursor left)
# Control-C (interrupt)
# Control-D (exit)
# Control-E (end)
# Control-F (cursor forward)
# Control-G
# Control-H (8) (Identical to '\b')
# Control-I (9) (Identical to '\t')
# Control-J (10) (Identical to '\n')
# Control-K (delete until end of line; vertical tab)
# Control-L (clear; form feed)
# Control-M (enter)
# Control-N (14) (history forward)
# Control-O (15)
# Control-P (16) (history back)
# Control-Q
# Control-R (18) (reverse search)
# Control-S (19) (forward search)
# Control-T
# Control-U
# Control-V
# Control-W
# Control-X
# Control-Y (25)
# Control-Z
# Both Control-\ and Ctrl-|
# Control-]
# Control-^
# Control-underscore (Also for Ctrl-hyphen.)
# (127) Backspace   (ASCII Delete.)
# Keys that don't carry character data.
# Home/End
# Arrows
# F-keys.
# When stdin is a tty, use that handle, otherwise, create a handle from
# CONIN$.
# Max events to read at the same time.
# Check whether there is some input to read. `ReadConsoleInputW` would
# block otherwise.
# (Actually, the event loop is responsible for making sure that this
# function is only called when there is something to read, but for some
# reason this happened in the asyncio_win32 loop, and it's better to be
# safe anyway.)
# Get the next batch of input events.
# First, get all the keys from the input buffer, in order to determine
# whether we should consider this a paste event or not.
# Fill in 'data' for key presses.
# Correct non-bmp characters that are passed as separate surrogate codes
# Pasting: if the current key consists of text or \n, turn it
# into a BracketedPaste.
# Method only needed for structural compatibility with `Vt100ConsoleInputReader`.
# Get the right EventType from the EVENT_RECORD.
# (For some reason the Windows console application 'cmder'
# [http://gooseberrycreative.com/cmder/] can return '0' for
# ir.EventType. -- Just ignore that.)
# Process if this is a key event. (We also have mouse, menu and
# focus events.)
# convert high surrogate + low surrogate to single character
# Consider paste when it contains at least one newline and at least one
# other character.
# Use surrogatepass because u_char may be an unmatched surrogate
# NOTE: We don't use `ev.uChar.AsciiChar`. That appears to be the
# unicode code point truncated to 1 byte. See also:
# https://github.com/ipython/ipython/issues/10004
# https://github.com/jonathanslenders/python-prompt-toolkit/issues/389
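The surrogate handling described above can be sketched like this: non-BMP characters arrive as two UTF-16 surrogate code units, and re-encoding with the `surrogatepass` error handler merges them back into one character. This is an illustrative sketch, not the actual console-reader code:

```python
# Combine high + low surrogate code units into a single character.
# 'surrogatepass' lets lone surrogates survive the round trip through
# UTF-16, where the decoder joins valid pairs into one code point.
def combine_surrogates(chars):
    text = ''.join(chars)
    return text.encode('utf-16-le', 'surrogatepass').decode('utf-16-le')
```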
# Windows sends \n, turn into \r for unix compatibility.
# First we handle Shift-Control-Arrow/Home/End (need to do this first)
# Correctly handle Control-Arrow/Home/End and Control-Insert/Delete keys.
# Turn 'Tab' into 'BackTab' when shift was pressed.
# Also handle other shift-key combinations.
# Turn 'Space' into 'ControlSpace' when control was pressed.
# Turn Control-Enter into META-Enter. (On a vt100 terminal, we cannot
# detect this combination. But it's really practical on Windows.)
# Return result. If alt was pressed, prefix the result with an
# 'Escape' key, just like unix VT100 terminals do.
# NOTE: Only replace the left alt with escape. The right alt key often
# Scroll events.
# Handle button state for non-scroll events.
# Move events.
# No key pressed anymore: mouse up.
# Some button pressed.
# No button pressed.
# Windows Events that are triggered when we have to stop watching this
# handle.
# Make sure to remove a previous registered handler first.
# Create remove event.
# Tell the callback that input's ready.
# Wait for the input to become ready.
# (Use an executor for this, the Windows asyncio event loop doesn't
# allow us to wait for handles like stdin.)
# Wait until either the handle becomes ready, or the remove event
# has been set.
# Ignore.
# Trigger remove events, so that the reader knows to stop.
# Remember original mode.
# Set raw
# Restore original mode
# Set cooked.
# Private constructor. Users should use the public `.create()` method.
# Identifier for every PipeInput for the hash.
# Only close the write-end of the pipe. This will unblock the reader
# callback (in vt100.py > _attached_input), which eventually will raise
# `EOFError`. If we'd also close the read-end, then the event loop
# won't wake up the corresponding callback because of this.
# If `stdin` was assigned `None` (which happens with pythonw.exe), use
# a `DummyInput`. This triggers `EOFError` in the application code.
# If no input TextIO is given, use stdin/stdout.
# If we can't access the file descriptor for the selected stdin, return
# a `DummyInput` instead. This can happen for instance in unit tests,
# when `sys.stdin` is patched by something that's not an actual file.
# (Instantiating `Vt100Input` would fail in this case.)
# Event (handle) for registering this input in the event loop.
# This event is set when there is data available to read from the pipe.
# Note: We use this approach instead of using a regular pipe, like
# Parser for incoming keys.
# Reset event.
# (If closed, the event should not reset.)
# Set event.
# By default, we want to 'ignore' errors here. The input stream can be full
# of junk.  One occurrence of this that I had was when using iTerm2 on OS X,
# with "Option as Meta" checked (You should choose "Option as +Esc".)
# Create incremental decoder for decoding stdin.
# We cannot just do `os.read(stdin.fileno(), 1024).decode('utf-8')`, because
# it could be that we are in the middle of a utf-8 byte sequence.
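The incremental-decoder approach mentioned above can be sketched as follows: bytes read from the file descriptor may end halfway through a UTF-8 sequence, so decoder state has to be kept between reads instead of calling `bytes.decode()` on each chunk:

```python
import codecs

# Keep decoder state between reads; incomplete trailing byte sequences
# are buffered until the next call instead of raising or producing junk.
decoder = codecs.getincrementaldecoder('utf-8')('ignore')

def decode_chunk(data: bytes) -> str:
    return decoder.decode(data)
```

With a plain `.decode('utf-8')`, the first half of a multi-byte character would raise `UnicodeDecodeError` (or be dropped); the incremental decoder instead returns it once the second half arrives.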
#: True when there is nothing anymore to read.
# By default we choose a rather small chunk size, because reading
# big amounts of input at once, causes the event loop to process
# all these key bindings also at once without going back to the
# loop. This will make the application feel unresponsive.
# Check whether there is some input to read. `os.read` would block
# otherwise. (The event loop should only call this when there is
# something to read, but for some reason this happens in certain
# situations.)
# Happens for instance when the file descriptor was closed.
# (We had this in ptterm, where the FD became ready, a callback was
# scheduled, but in the meantime another callback closed it already.)
# Note: the following works better than wrapping `self.stdin` like
# Nothing more to read, stream is closed.
# In case of SIGWINCH
# Mapping of vt100 escape codes to Keys.
# Control keys.
# Control-At (Also for Ctrl-Space)
# Control-M (13) (Identical to '\r')
# Both Control-\ (also Ctrl-| )
# ASCII Delete (0x7f)
# Vt220 (and Linux terminal) send this when pressing backspace. We map this
# to ControlH, because that will make it easier to create key bindings that
# work everywhere, with the trade-off that it's no longer possible to
# handle backspace and control-h individually for the few terminals that
# support it. (Most terminals send ControlH when backspace is pressed.)
# See: http://www.ibb.net/~anne/keyboard.html
# Various
# tmux
# xrvt
# Linux console
# Windows console
# Function keys.
# Linux console.
# rxvt-unicode
# Xterm
# '\x1b[1;2R': Keys.F15,  # Conflicts with CPR response.
# CSI 27 disambiguated modified "other" keys (xterm)
# Ref: https://invisible-island.net/xterm/modified-keys.html
# These are currently unsupported, so just re-map some common ones to the
# unmodified versions
# Shift + Enter
# Ctrl + Enter
# Ctrl + Shift + Enter
# Control + function keys.
# "\x1b[1;5R": Keys.ControlF3,  # Conflicts with CPR response.
# "\x1b[1;6R": Keys.ControlF15,  # Conflicts with CPR response.
# Tmux (Win32 subsystem) sends the following scroll events.
# Start of bracketed paste.
# Sequences generated by numpad 5. Not sure what they mean. (They don't
# appear in 'infocmp'. Just ignore them.)
# Xterm.
# Meta/control/escape + pageup/pagedown/insert/delete.
# xterm, gnome-terminal.
# Arrows.
# (Normal cursor mode).
# Tmux sends the following keystrokes when control+arrow is pressed, but
# Emacs ansi-term sends the same sequences for normal arrow keys. Consider
# it a normal arrow press, because that's more important.
# (Application cursor mode).
# Shift + arrows.
# Meta + arrow keys. Several terminals handle this differently.
# The following sequences are for xterm and gnome-terminal.
# Alt+shift+number.
# Control + arrows.
# Cursor Mode
# rxvt
# Control + shift + arrows.
# Control + Meta + arrows.
# Meta + Shift + arrows.
# Meta + arrow on (some?) Macs when using iTerm defaults (see issue #483).
# Control/shift/meta + number in mintty.
# (c-2 will actually send c-@ and c-6 will send c-^.)
# Regex matching any CPR response
# (Note that we use '\Z' instead of '$', because '$' could include a trailing
# newline.)
# Mouse events:
# Typical: "Esc[MaB*"  Urxvt: "Esc[96;14;13M" and for Xterm SGR: "Esc[<64;85;12M"
# Regex matching any valid prefix of a CPR response.
# (Note that it doesn't contain the last character, the 'R'. The prefix has to
# be shorter.)
# (hard coded) If this could be a prefix of a CPR response, return
# True.
# If this could be a prefix of anything else, also return True.
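The two CPR regexes described above could be sketched like this. A full Cursor Position Report looks like `"\x1b[24;80R"`; `\Z` is used instead of `$` so a trailing newline can't sneak into the match, and the prefix regex accepts every strict prefix (everything short of the final `'R'`). The exact patterns here are illustrative:

```python
import re

# Matches a complete CPR response.
cpr_response_re = re.compile(r'^\x1b\[\d+;\d+R\Z')

# Matches any strict prefix of a CPR response (never the full response,
# because the prefix has to be shorter).
cpr_response_prefix_re = re.compile(r'^\x1b(\[(\d+(;(\d+)?)?)?)?\Z')

def could_be_cpr_prefix(data: str) -> bool:
    return bool(cpr_response_prefix_re.match(data))
```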
# Lookup table of ANSI escape sequences for a VT100 terminal
# Hint: in order to know what sequences your terminal writes to stdin, run
# (hard coded) If we match a CPR response, return Keys.CPRResponse.
# (This one doesn't fit in the ANSI_SEQUENCES, because it contains
# integer variables.)
# Otherwise, use the mappings.
# Get next character.
# If we have some data, check for matches.
# Exact matches found; call the handlers.
# No exact match found.
# Loop over the input, try the longest match first and
# shift.
# Received ANSI sequence that corresponds with multiple keys
# (probably alt+something). Handle keys individually, but only pass
# data payload to first KeyPress (so that we won't insert it
# multiple times).
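The longest-match-first loop described above can be sketched as follows. `SEQUENCES` is a toy stand-in for the real escape-sequence lookup table:

```python
# Given buffered input that is not an exact escape sequence, repeatedly
# take the longest prefix that maps to a key, emit it, and shift the
# buffer. Unknown leading characters are dropped one at a time.
SEQUENCES = {'\x1b': 'escape', '\x1b[A': 'up', 'a': 'a', 'b': 'b'}

def flush_buffer(buf):
    keys = []
    while buf:
        for end in range(len(buf), 0, -1):  # longest match first
            if buf[:end] in SEQUENCES:
                keys.append(SEQUENCES[buf[:end]])
                buf = buf[end:]
                break
        else:
            buf = buf[1:]  # no match at all: shift one character
    return keys
```

This shows why an Esc followed by another key decodes as two key presses once no longer sequence can match.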
# Handle bracketed paste. (We bypass the parser that matches all other
# key presses and keep reading input until we see the end mark.)
# This is much faster than parsing character by character.
# Feed content to key bindings.
# Quit bracketed paste mode and handle remaining input.
# Handle normal input character by character.
# Quit loop and process from this position when the parser
# entered bracketed paste.
# HTML.
# ANSI.
# List of (style, text) tuples.
# Callable[[], 'AnyFormattedText']  # Recursive definition not supported by mypy.
# StyleAndTextTuples
# Apply extra style.
# Make sure the result is wrapped in a `FormattedText`. Among other
# reasons, this is important for `print_formatted_text` to work correctly
# and distinguish between lists and formatted text.
# Split the template in parts.
# Alias for 'fg'.
# Check for spaces in attributes. This would result in
# invalid style strings otherwise.
# The string interpolation functions also take integers and other types.
# Convert to string first.
# Default style attributes.
# Process received text.
# NOTE: CSI is a special token within a stream of characters that
# Everything between \001 and \002 should become a ZeroWidthEscape.
# Check for CSI
# Start of color escape sequence.
# Got a CSI sequence. Color codes are following.
# Construct number
# Eval number
# Limit and save number value
# Get delimiter token if present
# Check and evaluate color codes
# Set attributes and token.
# Check and evaluate cursor forward
# add <SPACE> using current style
# Ignore unsupported sequence.
# Add current character.
# NOTE: At this point, we could merge the current character
# Slow blink
# Fast blink
# Normal intensity
# Reset all style attributes
# 256 colors.
# True colors.
# Mapping of the ANSI color codes to their names.
# Mapping of the escape codes for 256colors to their 'ffffff' value.
# Always yield the last line, even when this is an empty line. This ensures
# that when `fragments` ends with a newline character, an additional empty
# line is yielded. (Otherwise, there's no way to differentiate between the
# cases where `fragments` does and doesn't end with a newline.)
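The always-yield-the-last-line behaviour described above can be sketched for `(style, text)` fragment lists. This is a simplified illustration, not the library's implementation:

```python
# Split a list of (style, text) fragments into lines. The final line is
# always yielded, even when empty, so input ending in '\n' and input
# not ending in '\n' produce distinguishable results.
def split_lines(fragments):
    line = []
    for style, text in fragments:
        pieces = text.split('\n')
        for piece in pieces[:-1]:
            if piece:
                line.append((style, piece))
            yield line
            line = []
        if pieces[-1]:
            line.append((style, pieces[-1]))
    # Always yield the last line, even when it is an empty line.
    yield line
```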
# This package shouldn't be imported before psycopg itself, or weird things
# will happen
# Copyright (C) 2025 The Psycopg Team
# Re-exports
# Give the class the same memory layout as the base class
# make the class writable
# Work with PEP8 and non-PEP8 versions of threading module.
# Thread objects in Python 2.4 and earlier do not have ident
# attrs.  Work around that.
# unique name is mostly about the current process, but must
# also contain the path -- otherwise, two adjacent locked
# files conflict (one file gets locked, creating lock-file and
# unique file, the other one gets locked, creating lock-file
# and overwriting the already existing lock-file, then one
# gets unlocked, deleting both lock-file and unique file,
# finally the last lock errors out upon releasing).
# This is a bit funky, but it's only for a while.  The way the unit tests
# are constructed this function winds up as an unbound method, so it
# actually takes three args, not two.  We want to toss out self.
# We are testing, avoid the first arg
# Lock file itself is a directory.  Place the unique file name into
# Already locked.
# Already locked by me.
# Someone else has the lock.
# Couldn't create the lock for some other reason
# Try and create a hard link to it.
# Link creation failed.  Maybe we've double-locked?
# The original link plus the one I created == 2.  We're
# good to go.
# Otherwise the lock creation failed.
# Link creation succeeded.  We're good to go.
# super(SymlinkLockFile).__init(...)
# split it back!
# Hopefully unnecessary for symlink.
# try:
# except IOError:
# Try and create a symbolic link to it.
# Linked to our unique name. Proceed.
# exists && link
# pidlockfile.py
# Copyright © 2008–2009 Ben Finney <ben+python@benfinney.id.au>
# This is free software: you may copy, modify, and/or distribute this work
# under the terms of the Python Software Foundation License, version 2 or
# later as published by the Python Software Foundation.
# No warranty expressed or implied. See the file LICENSE.PSF-2 for details.
# pid lockfiles don't support threaded operation, so always force
# False as the threaded arg.
# The lock creation failed.  Maybe sleep a bit.
# According to the FHS 2.3 section on PID files in /var/run:
# Not locked.  Try to lock it.
# Check to see if we are the only lock holder.
# Nope.  Someone else got there.  Remove our lock.
# Yup.  We're done, so go home.
# We're the locker, so go home.
# Maybe we should wait a bit longer.
# No more waiting.
# Someone else has the lock and we are impatient.
# Well, okay.  We'll give it a bit longer.
# We need to check if we've seen some resources already, because on
# some Linux systems (e.g. some Debian/Ubuntu variants) there are
# symlinks which alias other files in the environment.
# We hit a problem on Travis where enum34 was installed and doesn't
# have a provides attribute ...
# for case-insensitive comparisons
# additional features requested
# environment marker overrides
# Backward compatibility
# Requirement may contain extras - parse to lose those
# from what's passed to the matcher
# XXX compat-mode if cannot read the version
# case-insensitive
# Temporary - for Wheel 0.23 support
# Temporary - for legacy support
# Base location is parent dir of .dist-info dir
# base_location = os.path.dirname(self.path)
# base_location = os.path.abspath(base_location)
# if not os.path.isabs(path):
# do not put size and hash, as in PEP-376
# add the RECORD file itself
# Check if it is an absolute path  # XXX use relpath, add tests
# it's an absolute path?
# The file must be relative
# XXX add separator or use real relpath algo
# See http://docs.python.org/reference/datamodel#object.__hash__
# as we have no way of knowing, assume it was
# Need to be set before caching
# sectioned files have bare newlines (separating sections)
# FIXME handle the case where zipfile is not available
# look for top-level modules in top_level.txt, if present
# "./" is present as a marker between installed files
# and installation metadata files
# otherwise fall through and fail
# self.missing[distribution] = []
# multiple edges are allowed, so be careful
# Make a shallow copy of the adjacency list
# See what we can remove in this run
# What's left in alist (if anything) is a cycle.
# Remove from the adjacency list of others
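The pruning loop described above can be sketched like this: repeatedly remove dependency-free nodes from a copy of the adjacency list, and whatever survives belongs to a cycle. Names here are illustrative:

```python
# Work on a copy of the adjacency list so the caller's graph is untouched.
def find_cycle_nodes(adjacency):
    alist = {node: list(deps) for node, deps in adjacency.items()}
    while True:
        # See what we can remove in this run: nodes with no dependencies.
        to_remove = [n for n, deps in alist.items() if not deps]
        if not to_remove:
            break
        for n in to_remove:
            del alist[n]
            # Remove it from the adjacency lists of the others.
            for deps in alist.values():
                if n in deps:
                    deps.remove(n)
    # What's left in alist (if anything) is a cycle.
    return set(alist)
```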
# maps names to lists of (version, dist) tuples
# first, build the graph and find out what's provided
# now make the edges
# dependent distributions
# list of nodes we should inspect
# remove dist from dep, was there to prevent infinite loops
# required distributions
# already added to todo
# https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
# Reinstate the local version separator
# replace - with _ as a local version separator
# wv = wheel_metadata['Wheel-Version'].split('.', 1)
# file_version = tuple([int(i) for i in wv])
# if file_version < (1, 1):
# fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME,
# LEGACY_METADATA_FILENAME]
# fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME]
# Preserve any arguments after the interpreter
# make a copy, as mutated
# hasher = getattr(hashlib, self.hash_kind)
# First, stuff which is not in site-packages
# Now, stuff which is in site-packages, other than the
# distinfo stuff.
# At the top level only, save distinfo for later
# and skip it for now
# comment out next suite to leave .pyc files in
# Now distinfo. It may contain subdirectories (e.g. PEP 639)
# sort the entries by archive path. Not needed by any spec, but it
# keeps the archive listing and RECORD tidier than they would otherwise
# be. Use the number of path segments to keep directory entries together,
# and keep the dist-info stuff at the end.
# Now, at last, RECORD.
# Paths in here are archive paths - nothing else makes sense.
# Now, ready to build the zip file
# The signature file won't be in RECORD,
# and we don't currently do anything with it.
# We also skip directories, as they won't be in RECORD
# either. See:
# https://github.com/pypa/wheel/issues/294
# https://github.com/pypa/wheel/issues/287
# https://github.com/pypa/wheel/pull/289
# make a new instance rather than a copy of maker's,
# as we mutate it
# so we can rollback if needed
# Double negatives. Lovely!
# for RECORD writing
# for script copying/shebang processing
# set target dir later
# we default add_launchers to False, as the
# Python Launcher should be used instead
# meant for site-packages.
# Issue #147: permission bits aren't preserved. Using
# zf.extract(zinfo, libdir) should have worked, but didn't,
# see https://www.thetopsites.net/article/53834422.shtml
# So ... manually preserve permission bits as given in zinfo
# just set the normal permission bits
# Double check the digest of the written file
# Don't give up if byte-compilation fails,
# but log it and perhaps warn the user
# Generate scripts
# Try to get pydist.json so we can see if there are
# any commands to generate. If this fails (e.g. because
# of a legacy wheel), log a warning but don't give up.
# Use legacy info
# Write SHARED
# don't change passed in dict
# for now - metadata details TBD
# data_dir = '%s.data' % name_ver
# metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME)
# wv = message['Wheel-Version'].split('.', 1)
# TODO version verification
# See issue #115: some wheels have '..' in their entries, but only
# in the filename (e.g. __main__..py!), so the check is
# updated to look for '..' in the directory portions.
# Remember the version.
# Files extracted. Call the modifier.
# Something changed - need to build a new wheel.
# Add or update local version to signify changes.
# Decide where the new wheel goes.
# already there
# Most specific - our Python version, ABI and arch
# manylinux
# where no ABI / arch dependency, but IMP_PREFIX dependency
# no IMP_PREFIX, ABI or arch dependency
# assume it's a filename
# Copyright (C) 2012-2023 Vinay Sajip.
# There's a bug in the base version for some 3.2.x
# (e.g. 3.2.2 on Ubuntu Oneiric). If a Location header
# returns e.g. /abc, it bails because it says the scheme ''
# is bogus, when actually it should use the request's
# URL for the scheme. See Python issue #13696.
# Some servers (incorrectly) return multiple Location headers
# (so probably same goes for URI).  Use first header.
# A list of tags indicating which wheels you want to match. The default
# value of None matches against the tags compatible with the running
# Python. If you want to match other values, set wheel_tags on a locator
# instance to a list of tuples (pyver, abi, arch) which you want to match.
# Because of bugs in some of the handlers on some of the platforms,
# we use our own opener rather than just using urlopen.
# If get_project() is called from locate(), the matcher instance
# is set from the requirement passed to locate(). See issue #18 for
# why this can be useful to know.
# Just get the errors and throw them away
# downloadable extension
# urls and digests keys are present
# sometimes, versions are invalid
# logger.debug('%s did not match %r', matcher, k)
# slist.append(k)
# for now
# urls = d['urls']
# Now get other releases
# already done
# The following slightly hairy-looking regex just looks for the contents of
# an anchor link, which has an attribute "href" either immediately preceded
# or immediately followed by a "rel" attribute. The attribute values can be
# declared with double quotes, single quotes or no quotes - which leads to
# the length of the expression.
# We sort the result, hoping to bring the most recent versions
# to the front
# These are used to deal with various Content-Encoding schemes.
# See issue #45: we need to be resilient when the locator is used
# in a thread, e.g. with concurrent.futures. We can't use self._lock
# as it is for coordinating our internal threads - the ones created
# in _prepare_threads.
# See issue #112
# Note that you need two loops, since you can't say which
# thread will get each sentinel
# sentinel
# needed because self.result is shared
# e.g. after an error
# e.g. invalid versions
# always do this, to avoid hangs :-)
# logger.debug('Sentinel seen, quitting.')
# http://peak.telecommunity.com/DevCenter/EasyInstall#package-index-api
# fail if not found
# even if None (failure)
# We don't store summary in project metadata as it makes
# the data bigger for no benefit during dependency resolution.
# TODO SHA256 digest
# next line could overwrite result['urls'], result['digests']
# See issue #18. If any dists are found and we're looking
# for specific constraints, we only return something if
# a match is found. For example, if a DirectoryLocator
# returns just foo (1.0) while we're looking for
# foo (>= 2.0), we'll pretend there was nothing there so
# that subsequent locators can be queried. Otherwise we
# would just return foo (1.0) which would then lead to a
# failure to find foo (>= 2.0), because other locators
# weren't searched. Note that this only matters when
# merge=False.
# We use a legacy scheme simply because most of the dists on PyPI use legacy
# versions which don't conform to PEP 440.
# JSONLocator(), # don't use as PEP 426 is withdrawn
# can't replace other with provider
# can replace other with provider
# :meta: and :run: are implicitly included
# If no provider is found and we didn't consider
# prereleases, consider them now.
# see if other can be replaced by p
# Note: In PEP 345, the micro-language was Python compatible, so the ast
# module could be used to parse it. However, PEP 508 introduced operators such
# as ~= and === which aren't in Python, necessitating a different approach.
# value is either a callable or the name of a method
# by default, compatible => >=.
# this is a method only to support alternative implementations
# via overriding
# Could be a partial version (e.g. for '2.*') which
# won't parse as a version, so keep it as a string
# Just to check that vn is a valid version
# Should parse as a version, so we can create an
# instance for the comparison
# to ensure that numeric compares as > lexicographic, avoid
# comparing them directly, but encode a tuple which ensures
# correct sorting
# either before pre-release, or final release and after
# before pre-release
# to sort before a0
# to sort after all pre-releases
# now look at the state of post and dev.
# sort before 'a'
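The tuple-encoding trick described above can be sketched as follows. To keep comparisons working, each segment is encoded so that plain tuple comparison gives the intended order; the `'|'` marker (which sorts after every ASCII letter) is an illustrative assumption:

```python
# Encode a release plus optional pre-release marker so that tuples sort
# correctly: a final release must sort after all its pre-releases.
def release_key(release, pre=None):
    if pre is None:
        pre_key = ('|',)   # '|' sorts after any pre-release letter
    else:
        pre_key = pre      # e.g. ('a', 1) for a1, ('b', 2) for b2
    return (release, pre_key)
```

Comparing `release_key((1, 0), ('a', 1))` with `release_key((1, 0))` never compares an int against a string, which is the point of the encoding.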
# _normalized_key loses trailing zeroes in the release
# clause, since that's needed to ensure that X.Y == X.Y.0 == X.Y.0.0
# However, PEP 440 prefix matching needs it: for example,
# (~= 1.4.5.0) matches differently to (~= 1.4.5.0.0).
# must succeed
# both constraint and version are
# NormalizedVersion instances.
# If constraint does not have a local component,
# ensure the version doesn't, either.
# remove trailing puncts
# .N -> 0.N at start
# remove leading puncts
# remove leading v(ersion)
# multiple runs of '.'
# misspelt alpha
# standardise
# remove unwanted chars
# replace illegal chars
# trailing '.'
# Now look for numeric prefix, and separate it out from
# the rest.
# massage the suffix.
# already rational
# part of this could use maketrans
# if something ends with dev or pre, we add a 0
# if we have something like "b-2" or "a.2" at the end of the
# version, that is probably beta, alpha, etc
# let's remove the dash or dot
# 1.0-dev-r371 -> 1.0.dev371
# 0.1-dev-r79 -> 0.1.dev79
# Clean: 2.0.a.3, 2.0.b1, 0.9.0~c1
# Clean: v0.3, v1.0
# Clean leading '0's on numbers.
# TODO: unintended side-effect on, e.g., "2003.05.09"
# PyPI stats: 77 (~2%) better
# Clean a/b/c with no version. E.g. "1.0a" -> "1.0a0". Setuptools infers
# zero.
# PyPI stats: 245 (7.56%) better
# the 'dev-rNNN' tag is a dev tag
# clean the - when used as a pre delimiter
# a terminal "dev" or "devel" can be changed into ".dev0"
# a terminal "dev" can be changed into ".dev0"
# a terminal "final" or "stable" can be removed
# The 'r' and the '-' tags are post release tags
# Clean 'r' instead of 'dev' usage:
# PyPI stats:  ~150 (~4%) better
# Clean '.pre' (normalized from '-pre' above) instead of 'c' usage:
# PyPI stats: ~21 (0.62%) better
# Tcl/Tk uses "px" for their post release markers
# We can't compare ints and strings on Python 3, so fudge it
# by zero-filling numeric values to simulate a numeric comparison
# choose the '|' and '*' so that versions sort correctly
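The zero-filling fudge mentioned above can be sketched like this (the pad width of 8 is an arbitrary assumption for illustration):

```python
# Pad numeric segments to a fixed width so everything compares as
# strings, simulating a numeric comparison without mixing int and str.
def comparable_part(part):
    return part.zfill(8) if part.isdigit() else part

def comparable_key(version):
    return tuple(comparable_part(p) for p in version.split('.'))
```

Without the padding, `'9' < '10'` would be False as strings; with it, `'00000009' < '00000010'` holds as intended.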
# See issue #140. Be tolerant of a single trailing comma.
# Copyright (C) 2012 The Python Software Foundation.
# public API of this module
# Encoding used for the PKG-INFO files
# preferred version. Hopefully will be changed
# to 1.2 once PEP 345 is supported everywhere
# See issue #106: Sometimes 'Requires' and 'Provides' occur wrongly in
# the metadata. Include them in the tuple literal below to allow them
# (for now).
# Ditto for Obsoletes - see issue #140.
# avoid adding field names if already there
# return _426_FIELDS
# 2.0 removed
# first let's try to see if a field is not part of one of the version
# In 2.1, description allowed after headers
# if key not in _426_FIELDS and '2.0' in possible_versions:
# possible_versions.remove('2.0')
# logger.debug('Removed 2.0 due to %s', key)
# possible_version contains qualified versions
# found!
# let's see if one unique marker is found
# is_2_0 = '2.0' in possible_versions and _has_marker(keys, _426_MARKERS)
# we have the choice, 1.0, or 1.2, 2.1 or 2.2
# we couldn't find any specific marker
# if is_2_2:
# return '2.2'
# This follows the rules about transforming keys as described in
# https://www.python.org/dev/peps/pep-0566/#id17
# For both name and version any runs of non-alphanumeric or '.'
# characters are replaced with a single '-'.  Additionally any
# spaces in the version string become '.'
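The PEP 566 transformation described above can be sketched directly:

```python
import re

# Runs of characters that are neither alphanumeric nor '.' collapse to
# a single '-'. In the version string, spaces become '.' first.
def canonical_name(name):
    return re.sub(r'[^A-Za-z0-9.]+', '-', name)

def canonical_version(version):
    return re.sub(r'[^A-Za-z0-9.]+', '-', version.replace(' ', '.'))
```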
# TODO document the mapping API and UNKNOWN default key
# Public API
# When reading, get all the fields we can
# we can have multiple lines
# single line
# PEP 566 specifies that the body be used for the description, if
# available
# logger.debug('Attempting to set metadata for %s', self)
# self.set_metadata_version()
# other is None or empty container
# check that the values are valid
# FIXME this rejects UNKNOWN, is that right?
# That's for Project-URL
# XXX should check the versions (if the file was loaded)
# required by PEP 345
# checking metadata 1.2 (XXX needs to check 1.1, 1.0)
# we can't have 1.1 metadata *and* Setuptools requires
# Mapping API
# TODO could add iter* variants
# Initialised with no args - to be added
# Note: MetadataUnrecognizedVersionError does not
# inherit from ValueError (it's a DistlibException,
# which should not inherit from ValueError).
# The ValueError comes from the json.load - if that
# succeeds and we get a validation error, we want
# that to propagate
# special cases for PEP 459
# unconditional
# Not extra-dependent - only environment-dependent
# Not excluded because of extras, check environment
# A recursive call, but it should terminate since 'test'
# has been removed from the extras
# skip missing ones
# author = {}
# maintainer = {}
# TODO: any other fields wanted
# Copyright (C) 2012-2023 Python Software Foundation.
# a \ followed by some spaces + EOL
# Due to the different results returned by fnmatch.translate, we need
# to do slightly different processing for Python 2.7 and 3.2 ... this needed
# to be brought in for Python 3.6 onwards.
# Avoid excess stat calls -- just one will do, thank you!
# make a copy!
# Parse the line: split it up, make sure the right number of words
# is there, and return the relevant words.  'action' is always
# defined: it's the first word of the line.  Which of the other
# three are defined depends on the action; it'll be either
# patterns, (dir and patterns), or (dirpattern).
# OK, now we know that the action is valid and we have the
# right number of words on the line for that action -- so we
# can proceed with minimal error-checking.
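A simplified sketch of that parse step, using the standard MANIFEST.in actions (the function name and return shape are assumptions for illustration):

```python
def parse_template_line(line):
    """Split a template line: the first word is always the action; the
    remaining words are patterns, (dir and patterns), or a dir-pattern."""
    words = line.split()
    action = words[0]
    if action in ('include', 'exclude', 'global-include', 'global-exclude'):
        if len(words) < 2:
            raise ValueError('%r expects <pattern1> <pattern2> ...' % action)
        return action, words[1:], None, None
    if action in ('recursive-include', 'recursive-exclude'):
        if len(words) < 3:
            raise ValueError('%r expects <dir> <pattern1> ...' % action)
        return action, words[2:], words[1], None
    if action in ('graft', 'prune'):
        if len(words) != 2:
            raise ValueError('%r expects a single <dir_pattern>' % action)
        return action, None, None, words[1]
    raise ValueError('unknown action %r' % action)
```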
# This should never happen, as it should be caught in
# _parse_template_line
# Private API
# no action given, let's use the default 'include'
# XXX docstring lying about what the special chars are?
# delayed loading of allfiles list
# ditch start and end characters
# ditch end of pattern character
# no prefix -- respect anchor flag
# '?' and '*' in the glob pattern become '.' and '.*' in the RE, which
# IMHO is wrong -- '?' and '*' aren't supposed to match slash in Unix,
# and by extension they shouldn't match such "special characters" under
# any OS.  So change all non-escaped dots in the RE to match any
# character except the special characters (currently: just os.sep).
# we're using a regex to manipulate a regex, so we need
# to escape the backslash twice
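The dot fix-up described above can be sketched like this (a simplified illustration built on `fnmatch.translate`; the example assumes a POSIX `/` separator):

```python
import fnmatch
import os
import re

def glob_to_re(pattern):
    """Translate a glob to a regex, then stop the '.'/'.*' produced by
    '?' and '*' from matching the path separator (sketch of the fix above)."""
    pattern_re = fnmatch.translate(pattern)
    sep = os.sep
    if os.sep == '\\':
        # we're using a regex to manipulate a regex: escape twice
        sep = r'\\\\'
    # replace every non-escaped '.' with a class excluding the separator
    return re.sub(r'((?<!\\)(\\\\)*)\.', r'\1[^%s]' % sep, pattern_re)
```

With this, `'a*b'` matches `'axxb'` but no longer matches `'a/b'`.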
# Use gpg by default rather than gpg2, as gpg2 insists on
# prompting for passwords
# We don't use communicate() here because we may need to
# get clever with interacting with the command
# The following code is equivalent to urlretrieve.
# We need to do it this way so that we can compute the
# digest of the file as we go.
# addinfourl is not a context manager on 2.x
# so we have to use try/finally
# check that we got the whole file, if we can
# if we have a digest, it must match.
# Adapted from packaging, which in turn was adapted from
# http://code.activestate.com/recipes/146306
# This is a wrapper module for different platform implementations
# This file is part of pySerial. https://github.com/pyserial/pyserial
# (C) 2001-2020 Chris Liechti <cliechti@gmx.net>
# SPDX-License-Identifier:    BSD-3-Clause
#~ SerialBase, SerialException, to_bytes, iterbytes
# pylint: disable=wrong-import-position
# choose an implementation, depending on OS
# sys.platform == 'win32':
# check and remove extra parameter to not confuse the Serial class
# the default is to use the native implementation
# it's not a string, use default
# if it is a URL, try to import the handler module from the list of possible packages
# instantiate and open when desired
#! python
# This module implements an RFC 2217 compatible client. RFC 2217 describes a
# protocol to access serial ports over TCP/IP and allows setting the baud rate,
# modem control lines etc.
# (C) 2001-2015 Chris Liechti <cliechti@gmx.net>
# - setting control line -> answer is not checked (had problems with one of the
# - write timeout not implemented at all
# ###########################################################################
# observations and issues with servers
# ===========================================================================
# sredird V2.2.1
# - http://www.ibiblio.org/pub/Linux/system/serial/   sredird-2.2.2.tar.gz
# - does not acknowledge SET_CONTROL (RTS/DTR) correctly, always responding
# - SET_BAUDRATE answer contains 4 extra null bytes -> probably for larger
# - To get the signature [COM_PORT_OPTION 0] has to be sent.
# - run a server: while true; do nc -l -p 7000 -c "sredird debug /dev/ttyUSB0 /var/lock/sredir"; done
# telnetcpcd (untested)
# - http://ftp.wayne.edu/kermit/sredird/telnetcpcd-1.09.tar.gz
# - To get the signature [COM_PORT_OPTION] w/o data has to be sent.
# ser2net
# - does not negotiate BINARY or COM_PORT_OPTION for its side but at least
# - The configuration may be that the server prints a banner. As this client
# - NOTIFY_MODEMSTATE: the poll interval of the server seems to be one
# - run a server: run ser2net daemon, in /etc/ser2net.conf:
# How to identify ports? pySerial might want to support other protocols in the
# future, so let's use a URL scheme.
# for RFC2217 compliant servers we will use this:
# options:
# - "logging" set log level print diagnostic messages (e.g. "logging=debug")
# - "ign_set_control": do not look at the answers to SET_CONTROL
# - "poll_modem": issue NOTIFY_MODEMSTATE requests when CTS/DTR/RI/CD is read.
# the order of the options is not relevant
# port string is expected to be something like this:
# rfc2217://host:port
# host may be an IP or including domain, whatever.
# port is 0...65535
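Parsing that URL shape with the standard library can be sketched as (the function name is hypothetical, not pySerial's API):

```python
from urllib.parse import urlparse

def parse_rfc2217_url(url):
    """Extract host and port from an rfc2217:// URL (sketch)."""
    parts = urlparse(url)
    if parts.scheme != 'rfc2217':
        raise ValueError('expected an rfc2217:// URL, got %r' % url)
    if parts.port is None or not 0 <= parts.port <= 65535:
        raise ValueError('port must be in 0...65535')
    return parts.hostname, parts.port

# parse_rfc2217_url('rfc2217://example.net:7000') -> ('example.net', 7000)
```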
# map log level names to constants. used in from_url()
# telnet protocol characters
# Subnegotiation End
# No Operation
# Data Mark
# Break
# Interrupt process
# Abort output
# Are You There
# Erase Character
# Erase Line
# Go Ahead
# Subnegotiation Begin
# Interpret As Command
# selected telnet options
# 8-bit data path
# echo
# suppress go ahead
# RFC2217
# Client to Access Server
# Request Com Port Flow Control Setting (outbound/both)
# Use No Flow Control (outbound/both)
# Use XON/XOFF Flow Control (outbound/both)
# Use HARDWARE Flow Control (outbound/both)
# Request BREAK State
# Set BREAK State ON
# Set BREAK State OFF
# Request DTR Signal State
# Set DTR Signal State ON
# Set DTR Signal State OFF
# Request RTS Signal State
# Set RTS Signal State ON
# Set RTS Signal State OFF
# Request Com Port Flow Control Setting (inbound)
# Use No Flow Control (inbound)
# Use XON/XOFF Flow Control (inbound)
# Use HARDWARE Flow Control (inbound)
# Use DCD Flow Control (outbound/both)
# Use DTR Flow Control (inbound)
# Use DSR Flow Control (outbound/both)
# Time-out Error
# Transfer Shift Register Empty
# Transfer Holding Register Empty
# Break-detect Error
# Framing Error
# Parity Error
# Overrun Error
# Data Ready
# Receive Line Signal Detect (also known as Carrier Detect)
# Ring Indicator
# Data-Set-Ready Signal State
# Clear-To-Send Signal State
# Delta Receive Line Signal Detect
# Trailing-edge Ring Detector
# Delta Data-Set-Ready
# Delta Clear-To-Send
# Purge access server receive data buffer
# Purge access server transmit data buffer
# Purge both the access server receive data
# buffer and the access server transmit data buffer
# Telnet filter states
# TelnetOption and TelnetSubnegotiation states
# add property to have a similar interface as TelnetOption
# prevent 100% CPU load
# error propagation done in is_ready
# must be last call in case of auto-open
# XXX good value?
# use a thread-safe queue as buffer. it also simplifies implementing
# the read timeout
# establish a lock to ensure that user writes do not interfere with
# internal telnet/rfc2217 option handling
# name the following separately so that, below, a check can be easily done
# all supported telnet options
# RFC 2217 specific states
# COM port settings
# There are more subnegotiation objects, combine all in one dictionary
# for easy access
# cache for line and modem states that the server sends to us
# RFC 2217 flow control between server and client
# must clean-up if open fails
# negotiate Telnet/RFC 2217 -> send initial requests
# now wait until important options are negotiated
# fine, go on, set RFC 2217 specific things
# all things set up, now do a clean start
# if self._timeout != 0 and self._interCharTimeout is not None:
# Setup the connection
# to get good performance, all parameter changes are sent first...
# and now wait until parameters are active
# ignore errors.
# XXX more than socket timeout
# in case of quick reconnects, give the server some time
# process options now, directly altering self
# XXX is that good to call it here?
# -> timeout
# empty read buffer
# - - - platform specific - - -
# None so far
# - - - RFC2217 specific - - -
# just need to get out of recv from time to time to check if
# still alive
# connection fails -> terminate loop
# lost connection
# interpret as command or as data
# store data in read buffer or sub option buffer
# depending on state
# interpret as command doubled -> insert character
# itself
# sub option start
# sub option end -> process it now
# negotiation
# other telnet commands
# DO, DONT, WILL, WONT was received, option now following
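The "doubled IAC means a literal data byte" rule used by this filter can be sketched as follows (IAC = 255 per RFC 854; helper names are illustrative):

```python
IAC = b'\xff'  # "Interpret As Command" (RFC 854)

def escape(data):
    """Double every IAC byte so the receiver treats it as data."""
    return data.replace(IAC, IAC + IAC)

def unescape(data):
    """Collapse doubled IACs back to single bytes (pure-data spans only)."""
    return data.replace(IAC + IAC, IAC)
```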
# - incoming telnet commands and options
# Currently none. RFC2217 only uses negotiation and subnegotiation.
# check our registered telnet options and forward command to them
# they know themselves if they have to answer or not
# can have more than one match, as some options are duplicated for
# 'us' and 'them'
# handle unknown options
# only answer to positive requests and deny them
# ensure it is a number
# update time when we think that a poll would make sense
#~ print "processing COM_PORT_OPTION: %r" % list(suboption[1:])
# - outgoing telnet commands and options
# transmit desired purge type
# wait for acknowledge from the server
# transmit desired control type
# answers are ignored when option is set. compatibility mode for
# servers that answer, but not the expected one... (or no answer
# at all) i.e. sredird
# this helps getting the unit tests passed
#~ if self._remote_suspend_flow:
#~     wait---
# active modem state polling enabled? is the value fresh enough?
# when it is older, request an update
# when expiration time is updated, it means that there is a new
# value
# even when there is a timeout, do not generate an error just
# return the last known value. this way we can support buggy
# servers that do not respond to polls, but send automatic
# updates.
# never received a notification from the server
#############################################################################
# The following is code that helps implementing an RFC 2217 server.
# filter state machine
# states for modem/line control events
# negotiate Telnet/RFC2217 -> send initial requests
# issue 1st modem state notification
# The callback is used for both 'we' and 'they', so if one party agrees,
# we're already happy. it seems not all servers do the negotiation
# correctly and there are probably incorrect clients too.. so be happy
# if the client answers one or the other positively.
# this is to ensure that the client gets a notification, even if there
# was no change
# - check modem lines, needs to be called periodically from user to
# establish polling
# check what has changed
# when last is None -> 0
# if new state is different and the mask allows this change, send
# notification. suppress notifications when client is not rfc2217
# save last state, but forget about deltas.
# otherwise it would also notify about changing deltas which is
# probably not very useful
# - outgoing data escaping
# - incoming data filter
# store data in sub option buffer or pass it to our
# consumer depending on state
# XXX needs cached value
#~ self.rfc2217_send_subnegotiation(SERVER_SET_CONTROL, SET_CONTROL_RTS_ON)
#~ elif suboption[2:3] == SET_CONTROL_REQ_FLOW_SETTING_IN:
#~ elif suboption[2:3] == SET_CONTROL_USE_NO_FLOW_CONTROL_IN:
#~ elif suboption[2:3] == SET_CONTROL_USE_SW_FLOW_CONTOL_IN:
#~ elif suboption[2:3] == SET_CONTROL_USE_HW_FLOW_CONTOL_IN:
#~ elif suboption[2:3] == SET_CONTROL_USE_DCD_FLOW_CONTROL:
#~ elif suboption[2:3] == SET_CONTROL_USE_DTR_FLOW_CONTROL:
#~ elif suboption[2:3] == SET_CONTROL_USE_DSR_FLOW_CONTROL:
# client polls for current state
# sorry, nothing like that implemented
# simple client test
# Constants and types for use with Windows API, used by serialwin32.py
# pylint: disable=invalid-name,too-few-public-methods,protected-access,too-many-instance-attributes
# some details of the windows API differ between 32 and 64 bit systems..
# ULONG_PTR is an ordinary number, not a pointer, and contrary to the name
# it is either 32 or 64 bits, depending on the type of windows...
# so test if this is a 32 bit windows...
# Fall back to the non wide char version for old OS...
# alias
# Variable c_int
# Variable c_ulong
# Variable c_long
# Variable c_uint
# backend for serial IO for POSIX compatible systems, like Linux, OSX
# parts based on code from Grant B. Edwards  <grante@visi.com>:
# references: http://www.easysw.com/~mike/serial/serial.html
# Collection of port names (was previously used by number_to_device,
# which was removed).
# - Linux                   /dev/ttyS%d (confirmed)
# - cygwin/win32            /dev/com%d (confirmed)
# - openbsd (OpenBSD)       /dev/cua%02d
# - bsd*, freebsd*          /dev/cuad%d
# - darwin (OS X)           /dev/cuad%d
# - netbsd                  /dev/dty%02d (NetBSD 1.6 testing by Erk)
# - irix (IRIX)             /dev/ttyf%d (partially tested) names depending on flow control
# - hp (HP-UX)              /dev/tty%dp0 (not tested)
# - sunos (Solaris/SunOS)   /dev/tty%c (letters, 'a'..'z') (confirmed)
# - aix (AIX)               /dev/tty%d
# pylint: disable=abstract-method
# some systems support an extra flag to enable the two parity settings
# unsupported by POSIX: MARK and SPACE
# default, for unsupported platforms, override below
# try to detect the OS so that a device can be selected...
# this code block should supply a device() and set_special_baudrate() function
# for the platform
# Linux (confirmed)  # noqa
# extra termios flags
# Use "stick" (mark/space) parity
# baudrate ioctls
# RS485 ioctls
# hang up
# get serial_struct
# set or unset ASYNC_LOW_LATENCY flag
# set serial_struct
# right size is 44 on x86_64, allow for some growth
# set custom speed
# flags, delaytx, delayrx, padding
# clear SER_RS485_ENABLED
# cygwin/win32 (confirmed)
# OS X
# _IOW('T', 2, speed_t)
# _IO('t', 123)
# _IO('t', 122)
# Tiger or above can support arbitrary serial speeds
# use IOKit-specific call to set up high speeds
# Only tested on FreeBSD:
# The baud rate may be passed in as
# a literal value.
# load some constants for later use.
# try to use values from termios, use defaults from linux otherwise
# TIOCM_LE = getattr(termios, 'TIOCM_LE', 0x001)
# TIOCM_ST = getattr(termios, 'TIOCM_ST', 0x008)
# TIOCM_SR = getattr(termios, 'TIOCM_SR', 0x010)
# TIOCM_OUT1 = getattr(termios, 'TIOCM_OUT1', 0x2000)
# TIOCM_OUT2 = getattr(termios, 'TIOCM_OUT2', 0x4000)
# open
#~ fcntl.fcntl(self.fd, fcntl.F_SETFL, 0)  # set blocking
# ignore Invalid argument and Inappropriate ioctl
# ignore any exception when closing the port
# also to keep original exception that happened when setting up
# if exclusive lock is requested, create it before we modify anything else
# timeout is done via select
# if a port is nonexistent but has a /dev file, it'll fail here
# set up raw mode / no echo / binary
# |termios.ECHOPRT
# netbsd workaround for Erk
# setup baud rate
#~ raise ValueError('Invalid baud rate: %r' % self._baudrate)
# See if BOTHER is defined for this platform; if it is, use
# this for a speed not defined in the baudrate constants list.
# may need custom baud rate, it isn't in our list.
# store for later
# setup char len
# setup stop bits
# XXX same as TWO.. there is no POSIX support for 1.5
# setup parity
# setup flow control
# xonxoff
# |termios.IXANY)
# rtscts
# try it with alternate constant name
# XXX should there be a warning if setting up rtscts (and xonxoff etc) fails??
# buffer
# vmin "minimal number of characters to be read. 0 for non blocking"
# vtime
# activate settings
# apply custom baud rate, if any
#~ s = fcntl.ioctl(self.fd, termios.FIONREAD, TIOCM_zero_str)
# select based implementation, proved to work on many systems
# If select was used with a timeout, and the timeout occurs, it
# returns with empty lists -> thus abort read operation.
# For timeout == 0 (non-blocking operation) also abort when
# there is nothing to read.
# timeout
# this is for Python 3.x, where select.error is a subclass of
# OSError. ignore BlockingIOErrors and EINTR; other errors are shown
# https://www.python.org/dev/peps/pep-0475.
# this is for Python 2.x
# ignore BlockingIOErrors and EINTR. all other errors are shown
# see also http://www.python.org/dev/peps/pep-3151/#select
# read should always return some data as select reported it was
# ready to read when we get to this point.
# Disconnected devices, at least on Linux, show the
# behavior that they are always ready to read immediately
# but reading returns nothing.
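The select-based loop described above, including the timeout and disconnected-device cases, can be sketched like this (the signature is an assumption for illustration, not pySerial's actual API):

```python
import os
import select

def read_with_timeout(fd, size, timeout):
    """Read up to `size` bytes, waiting at most `timeout` seconds per wait."""
    buf = bytearray()
    while len(buf) < size:
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            break  # timeout, or non-blocking (timeout == 0) with no data
        chunk = os.read(fd, size - len(buf))
        if not chunk:
            # select reported "ready" but the read returned nothing:
            # on Linux this is how a disconnected device behaves
            raise OSError('device reported ready but returned no data')
        buf.extend(chunk)
    return bytes(buf)
```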
# Zero timeout indicates non-blocking - simply return the
# number of bytes of data actually written
# when timeout is set, use select to wait for being ready
# with the time left as timeout
# wait for write operation
# - - platform specific - - - -
# print "\tread(): size",size, "have", len(read)    #debug
# wait until device becomes ready to read (or something fails)
# early abort on timeout
# clear O_NONBLOCK
# hack to make hasattr return false
# backend for Windows ("win32" incl. 32/64 bit support)
# Initial patch to use ctypes by Giovanni Bajo <rasky@develer.com>
# the "\\.\COMx" format is required for devices other than COM1-COM8
# not all versions of windows seem to support this properly
# so that the first few ports are used with the DOS device name
# for like COMnotanumber
# exclusive access
# no security
# 'cause __del__ is called anyway
#~ self._overlapped_write.hEvent = win32.CreateEvent(None, 1, 0, None)
# Setup a 4k buffer
# Save original timeout values:
# Clear buffers:
# Remove anything that was there
# Set Windows timeout values
# timeouts is a tuple with the following items:
# (ReadIntervalTimeout,ReadTotalTimeoutMultiplier,
# default of all zeros is OK
# Setup the connection info.
# Get state and modify it:
# Disable Parity Check
# Enable Parity Check
# Enable Binary Transmission
# Char. w/ Parity-Err are replaced with 0xff (if fErrorChar is set to TRUE)
# checks for unsupported settings
# XXX verify if platform really does not have a setting for those
#~ def __del__(self):
#~ self.close()
# Restore original timeout values:
#~ if not isinstance(data, (bytes, bytearray)):
#~ raise TypeError('expected %s or bytearray, got %s' % (bytes, type(data)))
# convert data (needed in case of memoryview instance: Py 3.1 io lib), ctypes doesn't like memoryview
#~ win32event.ResetEvent(self._overlapped_write.hEvent)
# if blocking (None) or w/ write timeout (>0)
# Wait for the write to complete.
#~ win32.WaitForSingleObject(self._overlapped_write.hEvent, win32.INFINITE)
# canceled IO is no error
# no info on true length provided by OS function in async mode
# XXX could also use WaitCommEvent with mask EV_TXEMPTY, but it would
# require overlapped IO and it's also only possible to set a single mask
# on the port---
# check if read operation is pending
# cancel, ignoring any errors (e.g. it may just have finished on its own)
#!jython
# Backend Jython with JavaComm
# (C) 2002-2015 Chris Liechti <cliechti@gmx.net>
# Java Communications API implementations
# http://mho.republika.pl/java/comm/
# Sun/IBM
# RXTX
# strings are taken directly
# numbers are transformed to a comport id obj
# Base class and support functions used by various backends.
# ``memoryview`` was introduced in Python 2.7 and ``bytes(some_memoryview)``
# isn't returning the contents (very unfortunate). Therefore we need special
# cases and test for it. Ensure that there is a ``memoryview`` object for older
# Python versions. This is easier than making every test dependent on its
# existence.
# implementation does not matter as we do not really use it.
# it just must not inherit from something else we might care for.
# pylint: disable=redefined-builtin,invalid-name
# for Python 3, pylint: disable=redefined-builtin,invalid-name
# "for byte in data" fails for Python 3 as it returns ints instead of bytes
# all Python versions prior to 3.x convert ``str([17])`` to '[17]' instead of '\x11'
# so a simple ``bytes(sequence)`` doesn't work for all versions
# handle list of integers and bytes (one or more items) for Python 2 and 3
# create control bytes
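A sketch of that conversion, accepting bytes-likes and integer sequences while rejecting unicode strings (simplified for illustration):

```python
def to_bytes(seq):
    """Convert a sequence to a bytes type."""
    if isinstance(seq, bytes):
        return seq
    if isinstance(seq, (bytearray, memoryview)):
        return bytes(seq)
    if isinstance(seq, str):
        raise TypeError('unicode strings are not supported, please encode to bytes')
    # e.g. [17] -> b'\x11'; the bytearray detour also works on Python 2
    return bytes(bytearray(seq))
```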
# Timeout implementation with time.monotonic(). This function is only
# supported by Python 3.3 and above. It returns a time in seconds
# (float) just as time.time(), but is not affected by system clock
# adjustments.
# Timeout implementation with time.time(). This is compatible with all
# Python versions but has issues if the clock is adjusted while the
# timeout is running.
# clock jumped, recalculate
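A minimal sketch of such a timeout helper built on `time.monotonic()` (the class shape is an assumption, not pySerial's exact implementation):

```python
import time

class Timeout:
    """Track a deadline; immune to system clock adjustments."""

    def __init__(self, duration):
        self.duration = duration               # None: wait forever
        self.is_infinite = duration is None
        self.is_non_blocking = duration == 0
        if not self.is_infinite:
            self.target_time = time.monotonic() + duration

    def expired(self):
        return not self.is_infinite and time.monotonic() >= self.target_time

    def time_left(self):
        """Remaining time in seconds, or None when waiting forever."""
        if self.is_infinite:
            return None
        return max(0, self.target_time - time.monotonic())
```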
# default values, may be overridden in subclasses that do not support all values
# correct values are assigned below through properties
# disabled by default
# assign values using get/set methods using the properties feature
# watch for backward compatible kwargs
# to be implemented by subclasses:
# def open(self):
# def close(self):
# test if it's a number, will throw a TypeError if not...
# if not set, keep backwards compatibility and follow rtscts setting
# if defined independently, follow its value
# functions useful for RS-485 adapters
# check against internal "_" value
# set non "_" value to use properties write function
# compatibility with io library
# pylint: disable=invalid-name,missing-docstring
# context manager
# backwards compatibility / deprecated functions
# additional functionality
# Backend for .NET/Mono (IronPython), .NET >= 2
# (C) 2008-2015 Chris Liechti <cliechti@gmx.net>
# must invoke function with byte array, make a helper to convert strings
# to byte arrays
# XXX will require adaptation when run with a 3.x compatible IronPython
# if RTS and/or DTR are not set before open, they default to True
#~ self._port_handle.ReceivedBytesThreshold = 1
# timeouts = (int(self._interCharTimeout * 1000),) + timeouts[1:]
# catch errors from illegal baudrate settings
# reserved keyword in Py3k
# ignore errors. can happen for unplugged USB serial devices
# must use single byte reads as this is the only way to read
# without applying encodings
# must call overloaded method with byte array argument
# as this is the only one not applying encodings
#~ return self._port_handle.XXX
# XXX an error would be better
# none
# RS485 support
# (C) 2015 Chris Liechti <cliechti@gmx.net>
# apply level for TX and optional delay
# write and wait for data to be written
# optional delay and apply level for RX
# redirect where the property stores the settings so that underlying Serial
# instance does not see them
# This is a thin wrapper to load the rfc2217 implementation.
# (C) 2011 Chris Liechti <cliechti@gmx.net>
# This module implements a loop back connection receiving itself what it sent.
# The purpose of this module is.. well... You can run the unit tests with it.
# and it was so easy to implement ;-)
# URL format:    loop://[option[/option...]]
# - "debug" print diagnostic messages
# not that there is anything to open, but the function applies the
# options found in the URL
# not that there is anything to configure...
# not that it's of any real use, but it helps in the unit tests
# attention: the logged value can differ from the return value in
# threaded environments...
# XXX inter char timeout
# check for timeout now, after data has been read.
# useful for timeout = 0 (non blocking) read
# calculate the approximate time that would be used to send the data
# when a write timeout is configured check if we would be successful
# (not sending anything, not even the part that would have time)
# must wait so that unit test succeeds
# This module implements a simple socket based client.
# It does not support changing any port parameters and will silently ignore any
# requests to do so.
# The purpose of this module is that applications using pySerial can connect to
# TCP/IP to serial port converters that do not support RFC 2217.
# URL format:    socket://<host>:<port>[/option[/option...]]
# timeout is used for write timeout support :/ and to get an initial connection timeout
# after connecting, switch to non-blocking, we're using select
# not that there is anything to configure...
# Poll the socket to see if it is ready for reading.
# If ready, at least one byte will be to read.
# select based implementation, similar to posix, but only using socket API
# to be portable, additionally handle socket timeout which is used to
# emulate write timeouts
# ready to read when we get to this point, unless it is EOF
# just use recv to remove input, while there is some
# works on Linux and probably all the other POSIX systems
# This module implements a special URL handler that wraps another port and
# prints the traffic for debugging purposes. With this, it is possible
# to debug the serial port traffic on every application that uses
# serial_for_url.
# URL format:    spy://port[?option[=value][&option[=value]]]
# - dev=X   a file or device to write to
# - color   use escape code to colorize output
# - raw     forward raw bytes instead of hexdump
# example:
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# This module implements a special URL handler that uses the port listing to
# find ports by searching the string descriptions.
# (C) 2011-2015 Chris Liechti <cliechti@gmx.net>
# URL format:    hwgrep://<regexp>&<option>
# where <regexp> is a Python regexp according to the re module
# violating the normal definition for URLs, the character `&` is used to
# separate parameters from the arguments (instead of `?`, but the question mark
# is heavily used in regexp'es)
# n=<N>     pick the N'th entry instead of the first one (numbering starts at 1)
# skip_busy tries to open port to check if it is busy, fails on posix as ports are not locked!
# python 3  pylint: disable=redefined-builtin
# pick n'th element
# open to test if port is available. not the nicest way..
# use a for loop to get the 1st element from the generator
# it has some error, skip this one
# This module implements a special URL handler that allows selecting an
# alternate implementation provided by some backends.
# URL format:    alt://port[?option[=value][&option[=value]]]
# - class=X  use class named X instead of Serial
# Backend for Silicon Labs CP2110/4 HID-to-UART devices.
# (C) 2019 Google LLC
# This backend implements support for HID-to-UART devices manufactured
# by Silicon Labs and marketed as CP2110 and CP2114. The
# implementation is (mostly) OS-independent and in userland. It relies
# on cython-hidapi (https://github.com/trezor/cython-hidapi).
# The HID-to-UART protocol implemented by CP2110/4 is described in the
# AN434 document from Silicon Labs:
# https://www.silabs.com/documents/public/application-notes/AN434-CP2110-4-Interface-Specification.pdf
# TODO items:
# - rtscts support is configured for hardware flow control, but the
# - Cancelling reads and writes is not supported.
# - Baudrate validation is not implemented, as it depends on model and configuration.
# hidapi
# Report IDs and related constant
# This is not quite correct. AN343 specifies that the minimum
# baudrate is different between CP2110 and CP2114, and it's halved
# when using non-8-bit symbols.
# cp2110://BUS:DEVICE:ENDPOINT, for libusb
# read timeout is 0.1
# Note that while AN434 states "There are no data bytes in
# the payload other than the Report ID", either hidapi or
# Linux does not seem to send the report otherwise.
# This is a module that gathers a list of serial ports including details on
# GNU/Linux systems.
# special handling for links
# check device type
# fill-in info for USB devices
# multi interface devices like FT4232
#~ elif self.subsystem in ('pnp', 'amba'):  # PCI based devices, raspi
# PCI based devices
# raspi
# built-in serial ports
# usb-serial with own driver
# xr-usb-serial port exar (DELL Edge 3001)
# usb-serial with CDC-ACM profile
# ARM internal port (raspi)
# BT serial devices
# Advantech multi-port serial controllers
# hide non-present internal serial ports
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# test
# Enumerate serial ports on Windows including a human readable description
# and hardware information.
# (C) 2001-2016 Chris Liechti <cliechti@gmx.net>
#~ LPBYTE = PBYTE = ctypes.POINTER(BYTE)
# XXX avoids error about types
# If the traversal depth is beyond the max, abandon attempting to find the serial number.
# Get the parent device instance.
# If there is no parent available, the child was the root device. We cannot traverse
# further.
# Get the ID of the parent device and parse it for vendor ID, product ID, and serial number.
# return early if we have no matches (likely malformed serial, traversed too far)
# store what we found as a fallback for malformed serial values up the chain
# Check that the USB serial number only contains alpha-numeric characters. It may be a windows
# device ID (ephemeral ID).
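That check can be sketched as a simple alphanumeric test (the helper name and the exact pattern are assumptions for illustration):

```python
import re

def looks_like_real_serial(value):
    """Accept only alphanumeric serial numbers; values containing '&',
    '.', etc. are likely ephemeral Windows-generated instance IDs."""
    return bool(value) and bool(re.fullmatch(r'[A-Za-z0-9]+', value))
```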
# If pid and vid are not available at this device level, continue to the parent.
# If the VID or PID has changed, we are no longer looking at the same physical device. The
# serial number is unknown.
# In this case, the vid and pid of the parent device are identical to the child. However, if
# there still isn't a serial number available, continue to the next parent.
# Finally, the VID and PID are identical to the child and a serial number is present, so return
# so far only seen one used, so hope 8 are enough...
# repeat for all possible GUIDs
# was DIGCF_PRESENT|DIGCF_DEVICEINTERFACE which misses CDC ports
# get the real com port name
# DIREG_DRV for SW info
# unfortunately this method also includes parallel ports.
# we could check for names starting with COM or just exclude LPT
# and hope that other "unknown" names are serial ports...
# hardware ID
# try to get ID that includes serial number
#~ ctypes.byref(szHardwareID),
# fall back to more generic hardware ID if that would fail
# Ignore ERROR_INSUFFICIENT_BUFFER
# stringify
# in case of USB, make a more readable string, similar to the form
# that we also generate on other platforms
# Check that the USB serial number only contains alpha-numeric characters. It
# may be a windows device ID (ephemeral ID) for composite devices.
# calculate a location string
# XXX how to determine correct bConfigurationValue?
# USB location is hidden by FDTI driver :(
# friendly name
#~ SPDRP_DEVICEDESC,
#~ else:
#~ if ctypes.GetLastError() != ERROR_INSUFFICIENT_BUFFER:
#~ raise IOError("failed to get details for %s (%s)" % (devinfo, szHardwareID.value))
# ignore errors and still include the port in the list, friendly name will be same as port name
# manufacturer
# This is a module that gathers a list of serial ports on POSIXy systems.
# For some specific implementations, see also list_ports_linux, list_ports_osx
# OS X (confirmed)
# cygwin/win32
# cygwin accepts /dev/com* in many contexts
# (such as 'open' call, explicit 'ls'), but 'glob.glob'
# and bare 'ls' do not; so use /dev/ttyS* instead
# OpenBSD
# NetBSD
# IRIX
# HP-UX (not tested)
# Solaris/SunOS
# AIX
# platform detection has failed...
# This is a helper module for the various platform dependent list_port
# implementations.
# USB specific data
# This is a codec to create and decode hexdumps with spaces between characters. Used by miniterm.
# (C) 2015-2016 Chris Liechti <cliechti@gmx.net>
# Codec APIs
# allow spaces to separate values
#~ _is_text_encoding=True,
# Very simple serial terminal
# (C)2002-2020 Chris Liechti <cliechti@gmx.net>
# pylint: disable=wrong-import-order,wrong-import-position
# in python3 it's "raw"
# context manager:
# switch terminal temporarily to normal mode (e.g. to get user input)
# F1
# F2
# F3
# F4
# F5
# F6
# F7
# F8
# F9
# F10
# UP
# DOWN
# LEFT
# RIGHT
# HOME
# END
# INSERT
# DELETE
# PGUP
# PGDN
# ANSI handling available through SetConsoleMode since Windows 10 v1511
# https://en.wikipedia.org/wiki/ANSI_escape_code#cite_note-win10th2-1
# PY2
# the change of the code page is not propagated to Python, manually fix it
# needed for input
# in case no _saved_cm
# CancelIo, CancelSynchronousIo do not seem to work when using
# getwch, so instead, send a key to the window with the console
# map the BS key (which yields DEL) to backspace
# DEL
# CSI
# visual space
# XXX make it configurable, use colorama?
# other ideas:
# - add date/time for each newline
# - insert newline after: a) timeout b) packet end character
# no transformation
# GS/CTRL+]
# Menu: CTRL+T
# start serial->console thread
# enter console->serial loop
# on RFC 2217 ports, it can happen if no modem state notification was
# yet received. ignore this error.
# read all that is there or wait for one byte
# XXX handle instead of re-raise?
# next char will be for menu
# exit app
#~ if self.raw:
# Menu/exit character again -> send itself
# CTRL+U -> upload file
# CTRL+H, h, H, ? -> Show help
# CTRL+R -> Toggle RTS
# CTRL+D -> Toggle DTR
# CTRL+B -> toggle BREAK condition
# CTRL+E -> toggle local echo
# CTRL+F -> edit filters
# CTRL+L -> EOL mode
# keys
# CTRL+A -> set encoding
# CTRL+I -> info
#~ elif c == '\x01':                       # CTRL+A -> cycle escape mode
#~ elif c == '\x0c':                       # CTRL+L -> cycle linefeed mode
# P -> change port
# S -> suspend / open port temporarily
# B -> change baudrate
# 8 -> change to 8 bits
# 7 -> change to 7 bits
# E -> change to even parity
# O -> change to odd parity
# M -> change to mark parity
# S -> change to space parity
# N -> change to no parity
# 1 -> change to 1 stop bits
# 2 -> change to 2 stop bits
# 3 -> change to 1.5 stop bits
# X -> change software flow control
# R -> change hardware flow control
# Q -> exit app
# Wait for output buffer to drain.
# Progress indicator.
# reader thread needs to be shut down
# save settings
# restore settings and open
# and restart the reader thread
# help text, starts with blank line!
# default args can be used to override when calling main() from another script
# e.g. to create a miniterm-my-device.py
# no port given on command line -> ask user now
# enable timeout for alive flag polling if cancel_read is not available
# Serial port enumeration. Console tool and backend selection.
#~ if sys.platform == 'cli':
#~ elif os.name == 'java':
# get iterator w/ or w/o filter
# list them
# This is a module that gathers a list of serial ports including details on OSX
# code originally from https://github.com/makerbot/pyserial/tree/master/serial/tools
# with contributions from cibomahto, dgs3, FarMcKon, tedbrandston
# and modifications by cliechti, hoihu, hardkrash
# (C) 2013-2020
# List all of the callout devices in OS/X by querying IOKit.
# See the following for a reference of how to do this:
# http://developer.apple.com/library/mac/#documentation/DeviceDrivers/Conceptual/WorkingWSerial/WWSerial_SerialDevs/SerialDevices.html#//apple_ref/doc/uid/TP30000384-CIHGEAFD
# More help from darwin_hid.py
# Also see the 'IORegistryExplorer' for an idea of what we are actually searching
# kIOMasterPortDefault is no longer exported in BigSur but no biggie, using NULL works just the same
# WAS: ctypes.c_void_p.in_dll(iokit, "kIOMasterPortDefault")
# defined in `IOKit/usb/USBSpec.h`
# `io_name_t` defined as `typedef char io_name_t[128];`
# in `device/device_types.h`
# defined in `mach/kern_return.h`
# kern_return_t defined as `typedef int kern_return_t;` in `mach/i386/kern_return.h`
# void CFRelease ( CFTypeRef cf );
# CFNumber type defines
# this works in python2 but may not be valid. Also I don't know if
# this encoding is guaranteed. It may be dependent on system locale.
# First, try to walk up the IOService tree to find a parent of this device that is a IOUSBDevice.
# If we weren't able to find a parent for the device, we're done.
# XXX include_links is currently ignored. are links in /dev even supported here?
# Scan for all iokit serial ports
# First, add the callout device file.
# If the serial port is implemented by IOUSBDevice
# NOTE IOUSBDevice was deprecated as of 10.11 and has finally been
# completely removed on Apple Silicon devices.  Thanks to @oskay for this patch.
# fetch some useful information from properties
# We know this is a usb device, so the
# IORegistryEntryName should always be aliased to the
# usb product name string descriptor.
# Working with threading and pySerial
# make read-only copy
# + is not the best choice but bytes does not support % or .format in py3 and we want a single write call
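The single-write concatenation the comment refers to can be sketched as follows (the helper name is assumed; older Python 3 releases lacked `%` and `.format` on bytes):

```python
def framed_write(write, payload, eol=b'\r\n'):
    # Concatenate once with b''.join() and hand the result to write()
    # in a single call, instead of issuing several small writes.
    write(b''.join((payload, eol)))
```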
# read all that is there or wait for one byte (blocking)
# probably some I/O problem such as disconnected USB serial
# adapters -> exit
# make a separated try-except for called user code
# use the lock to let other threads finish writing
# first stop reading, so that closing can be done on idle port
# - -  context manager, returns protocol
#~ PORT = 'spy:///dev/ttyUSB0'
# alternative usage
# Reexporting the utility funcs and classes
# This will be useful when `from cliutils import *`
# Caches the Node UUID in a global variable;
# executes the gluster system:: uuid get command only the
# first time this function is called
# Prints Success JSON output and exits with returncode zero
# Prints Error JSON output and exits with returncode zero
# Get the file name of the caller function. If the file name is peer_example.py
# then the Gluster peer command will be gluster system:: execute example.py
# Command name is without the peer_ prefix
# If file is symlink then find actual file
# Get the name of file without peer_
# JSON decode each line and construct one object with node id as key
# gluster pool list
# Iterate pool_list and merge all_nodes_data collected above
# If a peer node is down then set node_up = False
# Node is UP
# Copy file from current node to all peer nodes, fname
# is path after GLUSTERD_WORKDIR
# Required method. Raises NotImplementedError if the derived class
# does not implement this method
# Get the list of classes derived from class "Cmd" and create
# a subcommand as specified in the class name. Call the args
# method with the subcommand parser; a derived class can add
# arguments to the subcommand parser.
# Do not show in help message if subcommand starts with node-
# Apply common args if any
# A dict to save subcommands, key is name of the subcommand
# Hide node commands in Help message
# Get all parsed arguments
# Get the subcommand to execute
# Run
# pylint: disable=unsubscriptable-object
# default terminal
# Attribute access kept separate from invocation, to avoid
# swallowing AttributeErrors from the call which should bubble up.
# Check which frame of the animation is the widest
# Subtract the max spinner length from the current terminal size
# (-1 to leave room for the extra space between spinner and text)
# Add ellipsis if text is larger than terminal width and no animation was specified
# in case we're disabled or stream is closed while still rendering,
# we render the frame and increment the frame index, so the proper
# frame is rendered if we're reenabled or the stream opens again.
# Return first frame (can't return original text because at this point it might be ellipsed)
# TODO: using property and setter
# If column size is 0 either we are not connected
# to a terminal or something else went wrong. Fallback to 80.
# ``Requirement`` doesn't implement ``__eq__`` so we cannot compare reqs for
# equality directly but the string representation is stable.
# cyclical dependency, already checked.
# a requirement can have multiple extras but ``evaluate`` can
# only check one at a time.
# if the marker conditions are not met, we pretend that the
# dependency is satisfied.
# dependency is not installed in the environment.
# the installed version is incompatible.
# yields transitive dependencies that are not satisfied.
# Call ``realpath`` to prevent spurious warning from being emitted
# that the venv location has changed on Windows for the venv impl.
# The username is DOS-encoded in the output of tempfile - the location is the same
# but the representation of it is different, which confuses venv.
# Ref: https://bugs.python.org/issue46171
# uv is opt-in only.
# cleanup folder if creation fails
# in case the user already deleted skip remove
# Version to have added the `--python` option.
# `pip install --python` is nonfunctional on Gentoo debundled pip.
# Detect that by checking if the `pip._vendor` module exists.  However,
# searching for pip could yield warnings from _distutils_hack,
# so silence them.
# macOS 11+ name scheme change requires 20.3. Intel macOS 11.0 can be
# told to report 10.16 for backwards compatibility; but that also fixes
# earlier versions of pip so this is only needed for 11+.
# PEP-517 and manylinux1 was first implemented in 19.1
# The creator attributes are `pathlib.Path`s.
# Uninstall setuptools from the build env to prevent depending on it implicitly.
# Pythons 3.12 and up do not install setuptools, check if it exists first.
# pip does not honour environment markers in command line arguments
# but it does from requirement files.
# Using definition used by venv.main()
# Windows may support symlinks (setting in Windows 10)
# globally cached, copy before altering it
# Python distributors with custom default installation scheme can set a
# scheme that can't be used to expand the paths in a venv.
# This can happen if build itself is not installed in a venv.
# The distributors are encouraged to set a "venv" scheme to be used for this.
# See https://bugs.python.org/issue45413
# and https://github.com/pypa/virtualenv/issues/2208
# The Python that ships on Debian/Ubuntu varies the default scheme to
# install to /usr/local
# But it does not (yet) set the "venv" scheme.
# If the Debian "posix_local" scheme is available, but "venv"
# is not, we use "posix_prefix" instead which is venv-compatible there.
# The Python that ships with the macOS developer tools varies the
# default scheme depending on whether the ``sys.prefix`` is part of a framework.
# If the Apple-custom "osx_framework_library" scheme is available but "venv"
# is not, use "posix_prefix" instead, which is venv-compatible there.
# first install the build dependencies
# then get the extra required dependencies from the backend (which was installed in the call above :P)
# extract sdist
# Workaround for 3.14.0 beta 1, can remove once beta 2 is out
# Prevent argparse from taking up the entire width of the terminal window
# which impedes readability. Also keep the description formatted.
# Handle --config-json
# Handle --config-setting (original logic)
# outdir is relative to srcdir only if omitted.
# If pyproject.toml is missing (per PEP 517) or [build-system] is missing
# (per PEP 518), use default values
# If [build-system] is present, it must have a ``requires`` field (per PEP 518)
# If ``build-backend`` is missing, inject the legacy setuptools backend
# but leave ``requires`` intact to emulate pip
# prepare_metadata hook
# fallback to build_wheel hook
# Logging in sub-thread to more-or-less ensure order of stdout and stderr whilst also
# being able to distinguish between the two.
# Per https://peps.python.org/pep-0706/, the "data" filter will become
# the default in Python 3.14. The first series of releases with the filter
# had a broken filter that could not process symlinks correctly.
# helps bootstrapping when dependencies aren't installed
# deprecated compatibility alias FBO twine.
# preserve newlines
# PEP 241
# PEP 314
# PEP 345
#XXX PEP 426?
# PEP 566
# PEP 643
# PEP 685
# PEP 639
# See: https://bugs.launchpad.net/bugs/2066340
# version 1.0
# version 1.1
# version 1.2
# version 2.1
# version 2.2
# version 2.4
# egg?
# pragma: NO COVER
#pragma NO COVER Py3k
#pragma NO COVER Python2
#pragma NO COVER
# first dist wins
# requirements
# Work around Python 3.12 tarfile warning.
# See: https://bugs.launchpad.net/pkginfo/+bug/2084140
# capital E w/ acute accent
# 'extractMetadata' calls 'read', which subclasses must override.
# Subclasses must override 'read'.
# no raise
# Metadata version 1.1, defined in PEP 314.
# Metadata version 1.2, defined in PEP 345.
# Metadata version 2.1, defined in PEP 566.
# Metadata version 2.2, defined in PEP 643.
# Metadata version 2.4, defined in PEP 639.
# See: https://bugs.launchpad.net/pkginfo/+bug/2090840
# E.g., installed by a Linux distro
# raise if write
# raise if used
# No such process? Give up.
# The executable name would be encoded with the current code page if
# we're in ANSI mode (usually). Try to decode it into str/unicode,
# replacing invalid characters to be safe (not theoretically necessary,
# I think). Note that we need to use 'mbcs' instead of encoding
# settings from sys because this is from the Windows API, not Python
# internals (which those settings reflect). (pypa/pipenv#3382)
# Bourne.
# C.
# Common alternatives.
# Microsoft.
# More exotic.
# Based on QEMU docs: https://www.qemu.org/docs/master/user/main.html
# Login shell! Let's use this.
# If the current process is Rosetta or QEMU, this likely is a
# containerized process. Parse out the actual command instead.
# Command looks like a shell.
# FreeBSD: https://www.freebsd.org/cgi/man.cgi?query=procfs
# NetBSD: https://man.netbsd.org/NetBSD-9.3-STABLE/mount_procfs.8
# DragonFlyBSD: https://www.dragonflybsd.org/cgi/web-man?command=procfs
# See https://docs.kernel.org/filesystems/proc.html
# We only care about TTY and PPID -- both are numbers.
# XXX: Command line arguments can be arbitrary byte sequences, not
# necessarily decodable. For Shellingham's purpose, however, we don't
# care. (pypa/pipenv#2820)
# cmdline appends an extra NULL at the end, hence the [:-1].
# Inner generator function so we correctly throw an error eagerly if proc
# is not supported, rather than on the first call to the iterator. This
# allows the call site to detect the correct implementation.
# Python 2-compatible FileNotFoundError.
# `ps` can return 1 if the process list is completely empty.
# (sarugaku/shellingham#15)
# XXX: This is not right, but we are really out of options.
# ps does not offer a sane way to decode the argument display,
# and this is "Good Enough" for obtaining shell names. Hopefully
# people don't name their shell with a space, or have something
# like "/usr/bin/xonsh is uber". (sarugaku/shellingham#14)
# If the item exists in the cache we will just return it immediately;
# otherwise we will execute the given callback and cache the result
# of that execution for the given number of minutes in storage.
# of that execution forever.
# Serialize pickle dumps using the highest pickle protocol (binary, default
# uses ascii)
# If the file doesn't exist, we obviously can't return the cache so we will
# just return null. Otherwise, we'll get the contents of the file and read
# the expiration UNIX timestamp from the start of the file's contents.
# If the current time is greater than the expiration timestamp we will delete
# the file and return null. This helps clean up the old files and keeps
# this directory much cleaner for us as old files aren't hanging out.
# Next, we'll extract the number of minutes that are remaining for a cache
# so that we can properly retain the time for things like the increment
# operation that may be performed on the cache. We'll round this out.
# If the key does not exist, we return nothing
# the entry
# Removing potential "driver" key
# noqa: B019  # TODO: find a workaround for B019
# TODO: fix C417 Unnecessary use of map - use a generator expression instead.
# noqa: C417
# Copyright (C) 2009-2020 the sqlparse authors and contributors
# <see AUTHORS file>
# This module is part of python-sqlparse and is released under
# the BSD License: https://opensource.org/licenses/BSD-3-Clause
# Setup namespace
# This code is based on the SqlLexer in pygments.
# http://pygments.org/
# It's separated from the rest of pygments to increase performance
# and to allow some customizations.
# Development notes:
# - This class is prepared to be able to support additional SQL dialects
# - The lexer class uses an explicit singleton behavior with the
# TODO: Add CLI Tests
# TODO: Simplify formatter by using argparse `type` arguments
# read from stdin
# object() only supports "is" and is useful as a marker
# use this marker to specify that the given regex in SQL_REGEX
# shall be processed further through a lookup in the KEYWORDS dictionaries
# FIXME(andi): VALUES shouldn't be listed here
# see https://github.com/andialbrecht/sqlparse/pull/64
# AS and IN are special, it may be followed by a parenthesis, but
# are never functions, see issue183 and issue507
# see issue #39
# Spaces around period `schema . name` are valid identifier
# TODO: Spaces before period not implemented
# 'Name'.
# FIXME(atronah): never match,
# because `re.match` doesn't work with look-behind regexp feature
# .'Name'
# side effect: change kw to func
# not a real string literal in ANSI SQL:
# sqlite names can be escaped with [square brackets]. left bracket
# cannot be preceded by word character or a right bracket --
# otherwise it's probably an array index
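A hedged sketch of the rule described above — the exact pattern is an assumption modeled on the comment, not sqlparse's literal regex: a `[` that is not preceded by a word character or `]` opens a `[quoted identifier]`, otherwise it is probably an array index.

```python
import re

# '[' not preceded by a word character or ']' starts a bracketed name
BRACKET_NAME = re.compile(r'(?<![\w\]])(\[[^\]\[]+\])')


def find_bracket_names(sql):
    return BRACKET_NAME.findall(sql)
```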
# Check for keywords, also returns tokens.Name if regex matches
# but the match isn't a keyword.
# JSON operators
# 'C': tokens.Keyword,  # most likely this is an alias
# 'G': tokens.Keyword,
# 'K': tokens.Keyword,
# 'M': tokens.Keyword,
# 'STATE': tokens.Keyword,
# Name.Builtin
# 'groups' seems too common as a table name
# 'GROUPS': tokens.Keyword,
# MySQL
# PostgreSQL Syntax
# Hive Syntax
# This regular expression replaces the home-cooked parser that was here before.
# It is much faster, but requires an extra post-processing step to get the
# desired results (that are compatible with what you would expect from the
# str.splitlines() method).
# It matches groups of characters: newlines, quoted strings, or unquoted text,
# and splits on that basis. The post-processing step puts those back together
# into the actual lines of SQL.
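The approach can be sketched as follows — a simplified stand-in for the regex described above, splitting on newlines while keeping quoted strings (and the newlines inside them) intact:

```python
import re

# Match one of: a newline, plain text without quotes/newlines,
# a double-quoted string, or a single-quoted string.
SPLIT_REGEX = re.compile(r"""
(
 (?:\r\n|\r|\n)      |   # a single newline, or
 [^\r\n'"]+          |   # text without quotes or newlines, or
 "(?:[^"\\]|\\.)*"   |   # a double-quoted string, or
 '(?:[^'\\]|\\.)*'       # a single-quoted string
)
""", re.VERBOSE)

LINE_MATCH = re.compile(r'\r\n|\r|\n')


def split_unquoted_newlines(text):
    # Post-processing step: stitch the matched groups back together
    # into lines, starting a new line only at bare newline groups.
    outputlines = ['']
    for token in SPLIT_REGEX.split(text):
        if not token:
            continue
        elif LINE_MATCH.match(token):
            outputlines.append('')
        else:
            outputlines[-1] += token
    return outputlines
```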
# enforce reindent
# Token filter
# After grouping
# Serializer
# a.b
# "name AS alias"
# "name alias" or "complicated column expression alias"
# Pending tokenlist __len__ bug fix
# def __len__(self):
# TODO: Add test for regex with is_keyword = false
# weird bug
# this one is inconsistent, using Comment instead of T.Comment...
# TODO: May need to re-add default value to idx
# a lot of current code usage pre-compensates for this
# will be needed later for new group_clauses
# while skip_ws and tokens and tokens[-1].is_whitespace:
# An "empty" statement that either has no tokens at all
# or only whitespace tokens.
# The WITH keyword should be followed by either an Identifier or
# an IdentifierList containing the CTE definitions;  the actual
# DML keyword (e.g. SELECT, INSERT) will follow next.
# Hmm, probably invalid syntax, so return unknown.
# Use [1:-1] index to discard the square brackets
# Set mode from the current statement
# First condition without preceding WHEN
# Append token depending on the current mode
# Return cases list
# The Token implementation is based on pygment's token system written
# by Georg Brandl.
# don't mess with dunder
# self can be False only if it's the `root`, i.e. Token itself
# SQL specific tokens
# parenthesis increase/decrease a level
# if normal token return
# Everything after here is ttype = T.Keyword
# Also note: once you have entered an if statement you are done and
# basically returning
# three keywords begin with CREATE, but only one of them is DDL
# DDL Create though can contain more words such as "or replace"
# can have nested declare inside of being...
# FIXME(andi): This makes no sense.  ## this comment neither
# BEGIN and CASE/WHEN both end with END
# Run over all stream tokens
# Yield token if we finished a statement and there's no whitespaces
# It counts a newline token as non-whitespace; in this context
# "whitespace" ignores newlines.
# why don't multi line comments also count?
# Reset filter and prepare to process next statement
# Change current split level (increase, decrease or remain equal)
# Append the token to the current statement
# Check if we get the end of a statement
# Issue762: Allow GO (or "GO 2") as statement splitter.
# When implementing a language toggle, it's not only about adding
# keywords but also about changing some rules, like this splitting
# rule.
# Yield pending statement (if any)
# ~50% of tokens will be whitespace. Checking early for them
# avoids 3 comparisons, but then adds 1 more comparison
# for the other ~50% of tokens...
# Check inside previously grouped tokens (i.e. parenthesis) whether a
# group of a different type is nested inside (e.g. case); ideally this
# should check for all open/close tokens at once to avoid recursion
# this indicates invalid sql and unbalanced tokens.
# instead of break, continue in case other "valid" groups exist
# definitely not complete, see e.g.:
# https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/interval-literal-syntax
# https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/interval-literals
# https://www.postgresql.org/docs/9.1/datatype-datetime.html
# https://www.postgresql.org/docs/9.1/functions-datetime.html
# issue261, allow invalid next token
# next_ validation is being performed here. issue261
# TODO: convert this to eidx instead of end token.
# i think above values are len(tlist) and eidx-1
# _group_matching
# tidx shouldn't get negative
# Process token stream
# Output: Stream processed Statements
# offset = 1 represent a single space after SELECT
# add two for the space and parenthesis
# process the main query body
# if this isn't a subquery, don't re-indent
# process the inside of the parenthesis
# de-indent last parenthesis
# columns being selected
# align the end as well
# cond is None when 'else or end'
# treat "BETWEEN x and y" as a single statement
# joins, group/order by are special case. only consider the first
# word as aligner
# process any sub-sub statements
# HACK: make "group/order by" work. Longer than max_len.
# TODO(andi) Comment types should be unified, see related issue38
# See issue484 why line breaks should be preserved.
# Note: The actual value for a line break is replaced by \n
# in SerializerUnicode which will be executed in the
# postprocessing state.
# skipping token remove if token is a SQL-Hint. issue262
# using current index as start index to search next token for
# preventing infinite loop in cases when token type is a
# "SQL-Hint" and has to be skipped
# Replace by whitespace if prev and next exist and if they're not
# whitespaces. This doesn't apply if prev or next is a parenthesis.
# Insert a whitespace to ensure the following SQL produces
# a valid SQL (see #425).
# Removes newlines before commas, see issue140
# next_ = tlist.token_next(token, skip_ws=False)
# if (next_ and not next_.is_whitespace and
# save to remove the last whitespace
# has to shift since token inserted before it
# assert tlist.token_index(token) == tidx
# ---------------------------
# postprocess
# SQL query assignment to varname
# Print the tokens on the quote
# Token is a new line separator
# Close quote and add a new line
# Quote header on secondary lines
# Indentation
# Token has escape chars
# Put the token
# Close quote
# SQL query assignment to varname (quote header)
# Now take current offset into account and return relative offset.
# only break if it's not the first token
# issue121, errors in statement fixed??
# Add 1 for the "," separator
# ensure whitespace
# Line breaks on group level are done. let's add an offset of
# len "when ", "then ", "else "
# FIXME: Doesn't work
# sql.TypeCast, sql.Identifier, sql.Alias,
# return
# group.tokens = self._process(group, group.tokens)
# Copyright 2008-2023 Canonical Ltd.
# for redirecting stdout in msg() and write_to_file()
# We support different protocols these days and only some combinations are
# valid
# quick and dirty test
# Check netmask specified via '/'
# socket.inet_pton() should raise an exception, but let's be sure
# valid_address()
# Remove host netmasks
# Not valid cidr, so just use the dotted quads
# Convert to packed binary, then convert back
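A minimal sketch of the "convert to packed binary" validation mentioned above, using `socket.inet_pton()` for both address families (the function name is illustrative):

```python
import socket


def valid_address(addr):
    # inet_pton raises OSError on malformed input; try IPv4 then IPv6.
    for family in (socket.AF_INET, socket.AF_INET6):
        try:
            socket.inet_pton(family, addr)
            return True
        except OSError:
            pass
    return False
```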
# Redirect our writes to stdout to msg_output, if it is set
# cover not in python3, so can't test for this
# TODO: this is pretty horrible. We should be using only unicode strings
# Depends on python version
# LP: #1101304
# 9983 (cmd) S 923 ...
# 9983 (cmd with spaces) S 923 ...
# LP: #2015645
# 229 (cmd(withparen)) S 228 ...
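The cases above (LP: #1101304, LP: #2015645) can be handled by splitting on the *last* closing parenthesis instead of on whitespace, since the command field may itself contain spaces or parentheses. A hedged sketch with an illustrative helper name:

```python
def parse_stat_ppid(stat_line):
    # /proc/<pid>/stat wraps the command in parentheses; everything
    # after the last ')' is a plain space-separated field list:
    # state, ppid, pgrp, ...
    rest = stat_line.rsplit(')', 1)[1].split()
    return int(rest[1])
```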
# pid '1' is 'init' and '0' is the kernel. This should still work when
# pid randomization is in use, but needs to be checked.
# unit tests might be run remotely, so can't test for either
# Internal helper functions
# _dotted_netmask_to_cidr()
# Returns:
# Raises exception if cidr cannot be found
# python3 doesn't have long(). We could technically use int() here
# since python2 guarantees at least 32 bits for int(), but this helps
# future-proof.
# _cidr_to_dotted_netmask()
# Raises exception if dotted netmask cannot be found
# The above socket.inet_ntoa() should raise an error, but let's be sure
# Now have dotted quad host and nm, find the network
# Get the host bits
# python3 doesn't have long()
# Create netmask bits
# Apply the netmask to the host to determine the network
# Break the network into chunks suitable for repacking
# Create the network string
# Now apply the network's netmask to the address
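The dotted-quad network computation the comments walk through — pack host and netmask to 32-bit integers, AND them, and repack — can be sketched as (function name assumed):

```python
import socket
import struct


def network_address(host, netmask):
    # Pack both dotted quads into unsigned 32-bit big-endian integers
    host_bits = struct.unpack('!L', socket.inet_aton(host))[0]
    mask_bits = struct.unpack('!L', socket.inet_aton(netmask))[0]
    # Apply the netmask to the host to determine the network
    net_bits = host_bits & mask_bits
    return socket.inet_ntoa(struct.pack('!L', net_bits))
```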
# We prefer to hardcode the iptables dir in common.py, but we do not import
# common.py here. While internally ufw always uses common.py to determine the
# path, _find_system_iptables() is implemented for get_iptables_version() and
# get_netfilter_capabilities() so as to not break API for external consumers
# since these have historically used a default for 'exe'.
# must be root, so don't report coverage in unit tests
# Use a unique chain name (with our locking code, this shouldn't be
# needed, but this is a cheap safeguard in case the chain happens to
# still be lying around. We do this to avoid a separate call to
# iptables to check for existence)
# First install a test chain
# Now test for various capabilities. We won't test for everything, just
# the stuff we know isn't supported everywhere but we want to support.
# recent-set
# recent-update
# Cleanup
# d[proto][port] -> list of dicts:
# we may not have an IPv6 address, so no coverage
# this can fail for certain devices, so just skip them
# skip stuff we can't read or that goes away
# can't test for this
# need root for this, so turn off in unit tests
# /
# hexlify returns a bytes string (e.g. b'ab12cd') so decode that to ascii
# to get output identical to python2
# unhexlify requires an even length string, which should normally happen
# since hex_encode() will create a string and decode to ascii, which has 2
# bytes per character. If we happen to get an odd length string, instead of
# tracing back, truncate it by one character and move on. This works
# reasonably well in some cases, but might result in a UnicodeDecodeError,
# so use backslashreplace in that case.
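The encode/decode pair described above can be sketched like this (helper names are illustrative): decode hexlify's bytes to ASCII, truncate odd-length input by one character instead of tracing back, and fall back to backslashreplace on a UnicodeDecodeError.

```python
import binascii


def hex_encode(data):
    # hexlify returns bytes (b'ab12cd'); decode to ascii for str output
    return binascii.hexlify(data).decode('ascii')


def hex_decode(s):
    # unhexlify needs an even-length string; truncate one char if odd
    if len(s) % 2 != 0:
        s = s[:-1]
    raw = binascii.unhexlify(s)
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        return raw.decode('utf-8', 'backslashreplace')
```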
# If the lock is already closed, ignore the exception. This should
# never happen but let's guard against it in case something changes
# Basic commands
# Application commands
# Logging commands
# Default commands
# Status commands ('status', 'status verbose', 'status numbered')
# Show commands
# Rule commands
# Don't require the user to have to specify 'rule' as the command. Instead
# insert 'rule' into the arguments if this is a rule command.
# Initialize input strings for translations
# Update the config files when toggling enable/disable
# Revert config files when toggling enable/disable and
# firewall failed to start
# Report the error
# Create an incoming rule since matching outgoing and
# forward rules doesn't make sense for this report.
# Get the non-tuple rule from get_matching(), and then
# add its corresponding CLI command.
# Don't need UFWCommandRule here either
# Approximate the order the rules were added. Since rules is
# internally rules4 + rules6, IPv6 only rules will show up after
# other rules. In terms of rule ordering in the kernel, this is
# an equivalent ordering.
# Only add rules that are different by more than v6 (we
# will handle 'ip_version == both' specially, below).
# Don't process removal of non-existing application rules
# Reverse the order of rules for inserted or prepended
# rules, so they are inserted in the right order
# prepend
# user specified position
# The user specified a v6 rule, so try to find a
# match in the v4 rules and use its position.
# If not found, then add the rule
# We need to readjust the position since the number
# of ipv4 rules increased
# The user specified a v4 rule, so try to find a
# match in the v6 rules and use its position.
# Subtract count since the list is reversed
# Readjust position to send to set_rule
# Just return the last result if no error
# If no error, and just one rule, error out
# If error and more than one rule, delete the successfully added
# rules in reverse order
# Don't fail, so we can try to backout more
# allow case insensitive matches for application rules
# allow for the profile being deleted (LP: #407810)
# Don't reload the firewall if running under ssh
# If for some reason we get an exception trying to find the parent
# pid, err on the side of caution and don't automatically reload
# the firewall. LP: #424528
# parser.py: parser class for ufw
# Copyright 2009-2018 Canonical Ltd.
# Adding New Commands
# 1. Create a new UFWCommandFoo object that implements UFWCommand
# 2. Create UFWCommandFoo.parse() to return a UFWParserResponse object
# 3. Create UFWCommandFoo.help() to display help for this command
# 4. Register this command with the parser using:
# Extending Existing Commands
# 1. Register the new command with an existing UFWCommand via
# 2. Update UFWCommandExisting.parse() for new_command
# 3. Update UFWCommandExisting.help() for new_command
# TODO: break this out
# return quickly if deleting by rule number
# Using position '0' appends the rule while '-1' prepends,
# which is potentially confusing for the end user
# strip out 'insert NUM' and parse as normal
# set/strip
# strip out direction if not an interface rule
# strip out 'on' as in 'allow in on eth0 ...'
# strip out 'log' or 'log-all' and parse as normal
# TODO: properly support "'" in the comment string. See r949 for
# details
# Short form where only app or port/proto is given
# Check if name collision with /etc/services. If so, use
# /etc/services instead of application profile
# Full form with PF-style syntax
# quick check
# This can't normally be reached because of nargs
# checks above, but leave it here in case our parsing
# changes
# Figure out the type of rule (IPv4, IPv6, or both) this is
# Adjust protocol
# This can't normally be reached because of set_port()
# Verify found proto with specified proto
# adjust type as needed
# Now verify the rule
# Short syntax
# Full syntax
# If still haven't added more than action, direction and/or
# logtype, then we have a very generic rule, so add 'to any' to
# mark it as extended form.
# 'ufw delete NUM' is the correct usage, not 'ufw route delete NUM'
# 'route delete NUM' is unsupported
# 'route delete RULE' is supported
# Let's use as much as UFWCommandRule.parse() as possible. The only
# difference with our rules is that argv[0] is 'route' and we support
# both 'in on <interface>' and 'out on <interface>' in our rules.
# Because UFWCommandRule.parse() expects that the interface clause is
# specified first, strip out the second clause and add it later
# eg: ['route', 'allow', 'in', 'on', 'eth0', 'out', 'on', 'eth1']
# Remove 2nd interface clause from argv and add it to the rule
# later. Because we searched for " <strip> on " in our joined
# string we are guaranteed to have argv[argv.index(<strip>) + 2]
# Specifying a direction without an interface doesn't make any
# sense with route rules. Application names could be 'in' or 'out'
# so don't artificially limit those names.
# Handle quoted name with spaces in it by stripping Python's ['...']
# list as string text.
# Basic sanity check
# Set the direction
# Set the policy
# Discover the type
# Skip any inherited commands that inherit from
# UFWCommandRule since they must have more than one
# argument to be valid and used
# If the command is empty, then use 'type' as command
# Copyright 2008-2018 Canonical Ltd.
# Be sure to update dup_rule accordingly...
# Protocol is handled below
# follow TCP's default and send RST
# Caller needs to change this
# Format the comment string, and quote it just in case
# Limitation of iptables
# Port range
# libxtables/xtables.c xtables_parse_interface() specifies
# - < 16
# - not empty
# - doesn't contain ' '
# - doesn't contain '/'
# net/core/dev.c from the kernel specifies:
# - != '.' or '..'
# - doesn't contain '/', ':' or whitespace
# Separate a few of the invalid checks out so we can give a nice error
# We are going to limit this even further to avoid shell metacharacters
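# The checks listed above (libxtables' limits plus the kernel's
# net/core/dev.c rules) can be collected into one validator. This is an
# illustrative sketch, not ufw's actual function; the name and boolean
# return style are assumptions.

```python
def valid_interface_name(name: str) -> bool:
    # Combined xtables/kernel interface-name rules: non-empty, shorter
    # than 16 characters, not '.' or '..', and free of '/', ':' and
    # whitespace (which also covers the plain space check).
    if not name or len(name) >= 16:
        return False
    if name in (".", ".."):
        return False
    return not any(c in "/:" or c.isspace() for c in name)
```

A caller would typically run this before handing the name to iptables so the user gets a readable error instead of an iptables failure.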
# -1 prepend
# >0 insert
# Ok if exact match
# Direction must match
# forward must match
# Protocols must match or y must be 'any'
# Destination ports must match or y must be 'any'
# If destination interface is not specified, destination addresses
# must match or x must be contained in y
# if x and y interfaces are not specified, and x.dst is
# anywhere then ok
# If destination interface is specified, then:
# if we made it here, it is a fuzzy match
# if neither interface exists, add the direction
# otherwise, add the interfaces
# Verify protocol not specified with application rule
# Can't use these protocols with v6 addresses
# 10MB
# Try to gracefully handle huge files for the user (no security
# benefit, just usability)
# If multiple occurrences of profile name, use the last one
#debug("add '%s' = '%s' to '%s'" % (key, value, p))
# Reserved profile name
# Don't allow integers (ports)
# Require first character be alpha, so we can avoid collisions with port
# numbers.
# quick checks if error in profile
# Initialize via initcaps only when we need it (LP: #1044361)
# Only initialize if not initialized already
# Set defaults for dryrun, non-root, etc
# historical default for the testsuite
# Try to get capabilities from the running system if root
# v4
# v6 (skip capabilities check for ipv6 if ipv6 is disabled in ufw
# because the system may not have ipv6 support (LP: #1039729))
# IPv6 may be disabled, so ignore sysctl output
# Don't do coverage on this cause we don't run the unit tests as root
# Not needed on Linux, but who knows the places we will go...
# Use these so we only warn once
# snaps and clicks unpack to this, so handle it
# do some default policy sanity checking
# Perform this here so we can present a nice error to the user rather
# than a traceback
# Add the entry if not found
# Now that the files are written out, update value in memory
# Just use the same ports as dst for src when they are the
# same to avoid duplicate rules
# Remember, self.rules is from user[6].rules, and not the running
# firewall.
# We assume that the rules are in app rule order. Specifically,
# if app rule has multiple rules, they are one after the other.
# If the rule ordering changes, the below will have to change.
# Skip the rule if seen this tuple already (ie, it is part
# of a known tuple).
# Have a new tuple, so find and insert new app rules here
# ipv4
# ipv6
# Invalid search (v6 rule with too low position)
# Invalid search (v4 rule with too high position)
# self.rules[6] is a list of tuples. Some application rules have
# multiple tuples but the user specifies by ufw rule, not application
# tuple, so we need to find how many tuples there are leading up to
# the specified position, which we can then use as an offset for
# getting the proper match_rule.
# API overrides
# when rootdir/datadir are not set, ufw-init is in the same area as
# the lock files (ufw.common.state_dir, aka /lib/ufw), but when set,
# ufw-init is in rootdir/lib/ufw (ro) and the lockfiles in
# datadir/lib/ufw (rw)
# The default log rate limiting rule (the 'ufw[6]-user-limit' chain
# should be prepended before use)
# Switch logging message in catch-all rules
# Initialize the capabilities database
# Is the firewall loaded at all?
# Show the protocol if Anywhere to Anywhere, we have a
# protocol, and source and dest ports are any
# Show the protocol if we have a protocol, and source
# and dest ports are any
# Add v6 if we have a port but no addresses so it doesn't look
# like a duplicate of the v4 rule
# Reporting the interfaces is different in route rules and
# non-route rules. With route rules, the reporting should be
# relative to how packets flow through the firewall, with
# other rules the reporting should be relative to the firewall
# system as endpoint. As such, for route rules, report the
# incoming interface under 'From' and the outgoing interface
# under 'To', and for non-route rules, report the incoming
# interface under 'To', and the outgoing interface under
# 'From'.
# why is the direction added to attribs if shown in action?
# now construct the rule output string
# Show the list in the order given if a numbered list, otherwise
# split incoming and outgoing rules
# Add the loglevel if not valid
# first flush the user logging chains
# then restore the system rules
# adjust reject and protocol 'all'
# adjust for logging rules
# adjust for limit
# split the string such that the log prefix can contain spaces
# comment= should always be last, so just strip it out
# set direction to "in" to support upgrades
# from old format, which only had 6 or 8 fields.
# in_eth0!out_eth1
# in_eth0
# out_eth0
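# The three encodings above ('in_eth0!out_eth1', 'in_eth0', 'out_eth0')
# suggest a small decoder. The function name and the (in, out) tuple
# return shape are illustrative assumptions, not ufw's real helper.

```python
def split_interfaces(field):
    # Decode the rules-file interface field: 'in_eth0!out_eth1' carries
    # both directions, while 'in_eth0' or 'out_eth0' carries only one.
    iface_in = iface_out = ""
    for part in field.split("!"):
        if part.startswith("in_"):
            iface_in = part[len("in_"):]
        elif part.startswith("out_"):
            iface_out = part[len("out_"):]
    return iface_in, iface_out
```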
# route rules use 'route:<action> ...'
# Remove leading [sd]app_ and unescape spaces
# Write header
# Rate limiting is runtime supported
# Write rules
# Write footer
# Add logging rules, skipping any delete ('-D') rules
# bail if we have a bad position
# First construct the new rules list
# insert the rule if:
# 1. the last rule was not an application rule
# 2. the current rule is not an application rule
# 3. the last application rule is different than the current
# If find the rule, add it if it's not to be removed, otherwise
# skip it.
# Allow removing a rule if the comment is empty
# If only the action is different, replace the rule if it's not
# to be removed.
# Add rule to the end if it was not already added.
# Don't process non-existing or unchanged pre-existing rules
# Update the user rules file
# We wrote out the rules, so set reasonable string. We will change
# this below when operating on the live firewall.
# Operate on the chains
# Reload the chain
# TODO: we only need to reload on delete when there are
# overlapping proto-specific and 'proto any' rules, but for
# now, unconditionally reload with all deletes. LP: #1933117
# Is the firewall running?
# delete any lingering RETURN rules (needed for upgrades)
# Don't update the running firewall if not enabled
# make sure all the chains are here, it's redundant but helps make
# sure the chains are in a consistent state
# Flush all the logging chains except 'user'
# Add logging rules to running firewall
# Always delete these and re-add them so that we don't have extras
# when off, insert a RETURN rule at the top of user rules, thus
# preserving the rules
# when on, remove the RETURN rule at the top of user rules, thus
# honoring the log rules
# log levels of low and higher log blocked packets
# Setup the policy violation logging chains
# log levels under high use limit
# Setup the miscellaneous logging chains
# only log INVALID in medium and higher
# Setup the audit logging chains
# loglevel full logs all packets without limit
# loglevel high logs all packets with limit
# loglevel medium logs all new packets with limit
# First make sure we have all the original files
# This implementation will intentionally traceback if someone tries to
# do something to take advantage of the race conditions here.
# Don't do anything if the files already exist
# Move the old to the new
# Copy files into place
# decode to str
# NOQA: PTH100
# While process is running, read lines from stdout/stderr
# and write them to this process's stdout/stderr if isatty
# You can do a simple loop reading in real-time:
# Use .poll() to check if the child has exited
# Read one line from stdout, if available
# Read one line from stderr, if available
# If no more data from pipes and process ended, break
# At this point, the process has finished
# Join captured stderr lines for your exception
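# The loop sketched in the comments above (poll the child, read a line
# from each pipe, echo when attached to a tty, then join captured stderr
# for the exception) can be written roughly like this. The function name
# and error type are illustrative; a production version would want
# select() or reader threads, since a blocking readline() on one pipe
# can stall while the other pipe has data.

```python
import subprocess
import sys

def run_streaming(cmd):
    # Run cmd, echoing child stdout/stderr line by line while it runs,
    # and collecting stderr so a failure can report it.
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    out_lines, err_lines = [], []
    while True:
        out = proc.stdout.readline()  # one line from stdout, if available
        err = proc.stderr.readline()  # one line from stderr, if available
        if out:
            out_lines.append(out)
            if sys.stdout.isatty():
                sys.stdout.write(out)
        if err:
            err_lines.append(err)
            if sys.stderr.isatty():
                sys.stderr.write(err)
        # no more pipe data and the child has exited: we're done
        if not out and not err and proc.poll() is not None:
            break
    # at this point, the process has finished
    if proc.returncode != 0:
        raise RuntimeError("".join(err_lines))
    return "".join(out_lines)
```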
# NOQA: PTH110, PTH111
# NOQA: T201 RUF100
# flake8: NOQA: F401
# flake8: NOQA
# mypy: allow-untyped-calls
# NOQA: F401
# F401
# NOQA: TRY300
# IPython < 0.11
# Explicitly pass an empty list as arguments, because otherwise
# IPython would use sys.argv from this script.
# Notebook not supported for IPython < 0.11.
# prompt_toolkit < v0.27
# Try activating rlcompleter, because it's handy.
# We don't have to wrap the following import in a 'try', because
# we already know 'readline' was imported successfully.
# type:ignore
# Enable tab completion on systems using libedit (e.g. macOS).
# These lines are copied from Lib/site.py on Python 3.4.
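# The libedit-vs-GNU-readline distinction mentioned above comes down to
# which bind command is issued (the trick Lib/site.py uses). A minimal
# sketch, with the detection factored into a pure function; the function
# names here are illustrative:

```python
def completion_binding(readline_doc):
    # libedit's readline shim advertises itself in the module docstring
    # and needs an editline-style bind command instead of the GNU one.
    if readline_doc and "libedit" in readline_doc:
        return "bind ^I rl_complete"
    return "tab: complete"

def enable_tab_completion():
    try:
        import readline
        import rlcompleter  # noqa: F401  (importing registers the completer)
    except ImportError:
        return  # no readline on this platform
    readline.parse_and_bind(completion_binding(readline.__doc__))
```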
# We want to honor both $PYTHONSTARTUP and .pythonrc.py, so follow system
# conventions and get $PYTHONSTARTUP first then .pythonrc.py.
# Match the behavior of the cpython shell where an error in
# PYTHONSTARTUP prints an exception and continues.
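# The $PYTHONSTARTUP behavior described above (run the file, and on
# error print the exception and continue, like the cpython shell) could
# look like the sketch below. The function name is an assumption, and
# the .pythonrc.py fallback mentioned above is omitted for brevity.

```python
import os
import traceback

def run_pythonstartup(namespace):
    # Execute $PYTHONSTARTUP in the given namespace; mimic the cpython
    # interactive shell by printing any exception and carrying on.
    path = os.environ.get("PYTHONSTARTUP")
    if not path or not os.path.isfile(path):
        return
    try:
        with open(path) as f:
            code = compile(f.read(), path, "exec")
        exec(code, namespace)
    except Exception:
        traceback.print_exc()
```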
# Also allowing passing shell='code' to force using code.interact
#: Minimum version of tmux required to run libtmux
#: Most recent version of tmux supported
#: Minimum version of libtmux required to run libtmux
#: Most recent version of libtmux supported
#: Minimum version of tmuxp required to use plugins
#: Most recent version of tmuxp
# Dependency versions
# Blue
# Green
# if no logger exists, make one
# setup logger handlers
# E501
# Return last path as default if none of the previous ones matched
# if purename, resolve to config dir
# if relative, fill in full path
# no extension, scan
# validation.validate_schema(session_config)
# we want to run the before_script file cwd'd from the
# session start directory, if it exists.
# if first window, use window 1
# If the first pane specifies a start_directory, use that instead.
# If the first pane specifies a shell, use that instead.
# Falling back to use the environment of the first pane for the window
# creation is nice but yields misleading error messages.
# do not move to the new window
# Just issue a warning when the environment comes from the pane
# configuration as a warning for the window was already issued when
# the window was created.
# recurse into window and pane workspace items
# If all panes have same path, set 'start_directory' instead
# of using 'cd' shell commands.
# TODO support for height/width
# NOQA: PTH111
# Note: cli.py will expand workspaces relative to project's workspace directory
# for the first cwd argument.
# Any workspace section, session, window, pane that can contain the
# 'shell_command' value
# if window has a session, or pane has a window with a
# start_directory of . or ./, make sure the start_directory can be
# relative to the parent.
# This is for the case where you may be loading a workspace from
# outside your shell current directory.
# prepends a pane's ``shell_command`` list with the window and sessions'
# ``shell_command_before``.
# Prepend start_directory to relative window commands
# We only need to trickle to the window, workspace builder checks wconf
# If panes were NOT specified for a window, assume that a single pane
# with no shell commands is desired
# Prepend shell_command_before to commands
# pane_dict['shell_command'] = commands_before
# verify session_name
# tmuxp ran from inside tmux
# unset TMUX, save it, e.g. '/tmp/tmux-1000/default,30668,0'
# switch client to new session
# set TMUX back again
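# The unset/save/switch/restore dance above can be sketched as follows;
# `run_cmd` is a hypothetical command runner, and the function name is
# illustrative rather than tmuxp's actual API.

```python
import os

def switch_client(run_cmd, session_name):
    # tmux refuses nested attach/switch while $TMUX is set, so unset it,
    # switch the client, then put it back.
    saved = os.environ.pop("TMUX", None)  # e.g. '/tmp/tmux-1000/default,30668,0'
    try:
        run_cmd(["tmux", "switch-client", "-t", session_name])
    finally:
        if saved is not None:
            os.environ["TMUX"] = saved
```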
# get the canonical path, eliminating any symlinks
# ConfigReader allows us to open a yaml or json file as a dict
# shapes workspaces relative to config / profile file location
# Overridden session name
# propagate workspace inheritance (e.g. session -> window, window -> pane)
# create tmux server object
# raise exception if tmux not found
# load WorkspaceBuilder object for tmuxp workspace / tmux server
# if the session already exists, prompt the user to attach
# append and answer_yes have no meaning if specified together
# If inside a server, detect socket_path
# shell: code
# shell: ptpython, ptipython
# tmux environment / libtmux variables
# dest = dest_prompt
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: Copyright 2023-2025 kramo
# SPDX-FileCopyrightText: Copyright 2024-2025 kramo
# if system == "Darwin":
# pyright: ignore[reportAttributeAccessIssue]
# Present the window only after it has loaded or after a 1s timeout
# pyright: ignore[reportIncompatibleMethodOverride]
# This is so props.is_remote works
# OpenGL doesn't work on macOS properly
# pyright: ignore[reportAttributeAccessIssue, reportOptionalMemberAccess]
# SPDX-FileCopyrightText: 2019 The GNOME Music developers
# A lot of the code is taken from GNOME Music
# https://gitlab.gnome.org/GNOME/gnome-music/-/blob/6a32efb74ff4107d1e4a288184e21c43f5dd877f/gnomemusic/mpris.py
# pyright: ignore[reportOptionalMemberAccess]
# out_args is at least (signature1). We therefore always wrap the
# result as a tuple.
# Reference:
# https://bugzilla.gnome.org/show_bug.cgi?id=765603
# Some clients (for example GSConnect) try to access the volume
# property. This results in a crash at startup.
# Return nothing to prevent it.
# Copied from Workbench
# https://github.com/workbenchdev/Workbench/blob/1ebbe1e3915aabfd172c166c88ca23ad08861d15/src/Previewer/previewer.vala#L36
# TODO: Can I always assume that 72 is the default unscaled DPI? Probably not…
# noqa: N802
# noqa: N803
# SPDX-FileCopyrightText: Copyright 2023-2024 Sophie Herold
# SPDX-FileCopyrightText: Copyright 2023 FineFindus
# Taken from Loupe, rewritten in PyGObject
# https://gitlab.gnome.org/GNOME/loupe/-/blob/d66dd0f16bf45b3cd46e3a084409513eaa1c9af5/src/widgets/drag_overlay.rs
# For large enough monitors, occupy 40% of the screen area
# when opening a window with a video
# Screens with this resolution or smaller are handled as small
# For small monitors, occupy 80% of the screen area
# So that seeking isn't too rough
# Unfullscreen on Escape
# Force playback controls and progress bar to be Left-to-Right
# Algorithm copied from Loupe
# https://gitlab.gnome.org/GNOME/loupe/-/blob/4ca5f9e03d18667db5d72325597cebc02887777a/src/widgets/image/rendering.rs#L151
# Duration
# Remaining
# Cursor moved
# Cursor is hovering controls
# Active popover
# Active restore buttons
# Add a timeout to not interfere with loading the stream too much
# Only show a spinner if buffering for more than a second
# TODO: This can probably be done only every second instead
# Add a timeout to reduce things happening at once while the video is loading
# since the user won't want to change languages/subtitles within 500ms anyway
# pyright: ignore[reportCallIssue]
# This is so media that is still partially playable doesn't get interrupted
# https://gstreamer.freedesktop.org/documentation/additional/design/missing-plugins.html#partially-missing-plugins
# Get the debug info from the log files
# Translators: Replace this with your name for it to show up in the about dialog
# pyright: ignore[reportArgumentType]
# SPDX-FileCopyrightText: Copyright 2025 Jamie Gravendeel
# Don't try to rebuild the menu multiple times
# when the media info has many changes
# Translators: The variable is the number of channels
# in an audio track
# HACK: This is to make the item insensitive
# I don't know if there is a better way to do this
# NOTE: The existing 'number' test matches booleans and floats
# NOTE: The existing 'number' test matches booleans and integers
# noqa B018
#: list of lorem ipsum words used by the lipsum() helper function
# In Python 3.10+ ast.literal_eval removes leading spaces/tabs
# from the given string. For backwards compatibility we need to
# parse the string ourselves without removing leading spaces/tabs.
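# The workaround described above is to hand ast.literal_eval a
# pre-parsed node instead of a string, so the Python 3.10+ whitespace
# stripping in the string path never runs. A sketch (the function name
# is illustrative):

```python
import ast

def strict_literal_eval(src):
    # Parse the string ourselves so leading spaces/tabs stay significant
    # (a syntax error), instead of letting literal_eval strip them.
    return ast.literal_eval(ast.parse(src, mode="eval"))
```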
# Only optimize if the frame is not volatile
# noqa E721
# the parent of this frame
# in some dynamic inheritance situations the compiler needs to add
# write tests around output statements.
# inside some tags we are using a buffer rather than yield statements.
# this for example affects {% filter %} or {% macro %}.  If a frame
# is buffered this variable points to the name of the list used as
# buffer.
# the name of the block we're in, otherwise None.
# a toplevel frame is the root + soft frames such as if conditions.
# the root frame is basically just the outermost frame, so no if
# conditions.  This information is used to optimize inheritance
# situations.
# variables set inside of loops and blocks should not affect outer frames,
# but they still need to be kept track of as part of the active context.
# track whether the frame is being used in an if-statement or conditional
# expression as it determines which errors should be raised during runtime
# or compile time.
# aliases for imports
# a registry for all blocks.  Because blocks are moved out
# into the global python scope they are registered here
# the number of extends statements so far
# some templates have a rootlevel extends.  In this case we
# can safely assume that we're a child template and do some
# more optimizations.
# the current line number
# registry of all filters and tests (global, not block local)
# the debug information
# the number of new lines before the next write()
# the line number of the last written statement
# true if nothing was written so far.
# used by the `temporary_identifier` method to get new
# unique, temporary identifier
# the current indentation
# Tracks toplevel assignments
# Tracks parameter definition blocks
# Tracks the current context.
# -- Various compilation helpers
# if any of the given keyword arguments is a python keyword
# we have to make sure that no invalid call is created.
# add check during runtime that dependencies used inside of executed
# blocks are defined, as this step may be skipped during compile time
# In older Jinja versions there was a bug that allowed caller
# to retain the special behavior even if it was mentioned in
# the argument list.  However thankfully this was only really
# working if it was the last argument.  So we are explicitly
# checking this now and error out if it is anywhere else in
# the argument list.
# macros are delayed, they never require output checks
# always use the standard Undefined class for the implicit else of
# conditional expressions
# -- Statement Visitors
# if we want a deferred initialization we cannot move the
# environment into a local name
# do we have an extends tag at all?  If not, we can save some
# overhead by just not processing any inheritance code.
# find all blocks
# find all imports and import them
# add the load name
# generate the root render function.
# process the root
# make sure that the parent root is called.
# at this point we now have the blocks collected and can visit them too.
# It's important that we do not make this frame a child of the
# toplevel template.  This would cause a variety of
# interesting issues with identifier tracking.
# if we know that we are a child template, there is no need to
# check if we are one
# if the number of extends statements in general is zero so
# far, we don't have to add a check if something extended
# the template before this one.
# if we have a known extends we just add a template runtime
# error into the generated code.  We could catch that at compile
# time too, but i welcome it not to confuse users by throwing the
# same error at different times just "because we can".
# if we have a known extends already we don't need that code here
# as we know that the template execution will end here.
# if this extends statement was in the root level we can take
# advantage of that information and simplify the generated code
# in the top level from this point onwards
# and now we have one more
# The position will contain the template name, and will be formatted
# into a string that will be compiled into an f-string. Curly braces
# in the name must be replaced with escapes so that they will not be
# executed as part of the f-string.
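# The brace-escaping step described above is simply doubling each curly
# brace so the template name survives f-string compilation; a minimal
# sketch (the helper name is an assumption):

```python
def escape_braces(template_name):
    # Double each brace so the name is emitted literally rather than
    # interpreted as an f-string replacement field.
    return template_name.replace("{", "{{").replace("}", "}}")
```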
# try to figure out if we have an extended loop.  An extended loop
# is necessary if the loop is in recursive mode, if the special loop
# variable is accessed in the body, or if the body is a scoped block.
# if we don't have a recursive loop we have to find the shadowed
# variables at that point.  Because loops can be nested but the loop
# variable is a special one we have to enforce aliasing for it.
# Use the same buffer for the else frame
# make sure the loop variable is a special one and raise a template
# assertion error if a loop tries to write to loop
# if the node was recursive we have to return the buffer contents
# and start the iteration code
# at the end of the iteration, clear any assignments made in the
# loop from the top level
# Template data doesn't go through finalize.
# If an extends is active, don't render outside a block.
# A top-level extends is known to exist at compile time.
# Evaluate constants at compile time if possible. Each item in
# body will be either a list of static data or a node to be
# evaluated at runtime.
# If the finalize function requires runtime context,
# constants can't be evaluated at compile time.
# Unless it's basic template data that won't be
# finalized anyway.
# The node was not constant and needs to be evaluated at
# runtime. Or another error was raised, which is easier
# to debug at runtime.
# A group of constant data to join and output.
# A node to be evaluated at runtime.
# ``a.b`` is allowed for assignment, and is parsed as an NSRef. However,
# it is only valid if it references a Namespace object. Emit a check for
# that for each ref here, before assignment code is emitted. This can't
# be done in visit_NSRef as the ref could be in the middle of a tuple.
# Only emit the check for each reference once, in case the same
# ref is used multiple times in a tuple, `ns.a, ns.b = c, d`.
# This is a special case.  Since a set block always captures we
# will disable output checks.  This way one can use set blocks
# toplevel even in extended templates.
# -- Expression Visitors
# If we are looking up a variable we might have to deal with the
# case where it's undefined.  We can skip that case if the load
# instruction indicates a parameter which are always defined.
# NSRef is a dotted assignment target a.b=c, but uses a[b]=c internally.
# visit_Assign emits code to validate that each ref is to a Namespace
# object only. That can't be emitted here as the ref could be in the
# middle of a tuple assignment.
# slices bypass the environment getitem method.
# When inside an If or CondExpr frame, allow the filter to be
# undefined at compile time and only raise an error if it's
# actually called at runtime. See pull_dependencies.
# Back to the visitor function to handle visiting the target of
# the filter or test.
# if the filter node is None we are inside a filter block
# and want to write to the current buffer
# -- Unused nodes for extensions
# generated by scripts/generate_identifier_pattern.py
# noqa: B950
# Do constant folding. Some other nodes besides Expr have
# as_const, but folding them causes errors later on.
# did not work out, remove the token we pushed by accident
# from the stack so that the unknown tag fail function can
# produce a proper error message.
# the first token may be a colon for python compatibility
# in the future it would be possible to add whole code sections
# by adding some sort of end of statement token and parsing those here.
# we reached the end of the template too early, the subparser
# does not check for this, so we do that now
# common problem people encounter when switching from django
# to jinja.  we do not support hyphens in block names, so let's
# raise a nicer error message in that case.
# enforce that required blocks only contain whitespace or comments
# by asserting that the body, if not empty, is just TemplateData nodes
# with whitespace data
# If namespace attributes are allowed at this point, and the next
# token is a dot, produce a namespace reference.
# if we don't have explicit parentheses, an empty tuple is
# not a valid expression.  This would mean nothing (literally
# nothing) in the spot of an expression would be an empty tuple.
# calls are valid both after postfix expressions (getattr
# and getitem) as well as filters and tests
# noqa: B026
# support for trailing comma
# Parsing a kwarg
# Parsing an arg
# The lparen will be expected in parse_call_args, but the lineno
# needs to be recorded before the stream is advanced.
# Take the doc and annotations from the sync function, but the
# name from the async function. Pallets-Sphinx-Themes
# build_function_directive expects __wrapped__ to point to the
# sync function.
# Avoid a costly call to isawaitable
# cache for the lexers. Exists in order to be able to have multiple
# environments with the same lexer
# static regular expressions
# intern the tokens and keep references to them
# bind operators to token types
# here we do a regular string equality check as test_any is usually
# passed an iterable of non-interned strings.
# type: ignore[type-arg]
# Even though it looks like a no-op, creating instances fails
# without this.
# shortcuts
# lexing rules for tags
# assemble the root lexing rule. because "|" is ungreedy
# we have to sort by length so that the lexer continues working
# as expected when we have parsing rules like <% for block and
# <%= for variables. (if someone wants asp like syntax)
# variables are just part of the rules if variable processing
# is required.
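# The sort-by-length trick above is what makes the "|" alternation work:
# regex alternatives are tried left to right, so longer delimiters must
# come first. A generic sketch with illustrative names:

```python
import re

def root_tag_regex(tag_starts):
    # Sort alternatives longest first so e.g. '<%=' is tried before
    # '<%'; otherwise the shorter prefix always wins and the variable
    # delimiter is never matched.
    ordered = sorted(tag_starts, key=len, reverse=True)
    return re.compile("|".join(re.escape(t) for t in ordered))
```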
# block suffix if trimming is enabled
# global lexing rules
# directives
# comments
# blocks
# variables
# raw block
# line statements
# line comments
# we are not interested in those tokens in the parser
# try to unescape string
# remove all "_" first to support more Python versions
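# The underscore removal mentioned above is a one-liner: drop the PEP
# 515 digit separators before converting, so the literal parses even on
# Python versions whose int() doesn't accept underscores. The function
# name is illustrative:

```python
def parse_integer(literal):
    # "1_000" -> "1000" -> 1000; removing "_" first supports more
    # Python versions than relying on int() to accept separators
    return int(literal.replace("_", ""))
```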
# tokenizer loop
# if no match we try again with the next rule
# we only match blocks and variables if braces / parentheses
# are balanced. continue parsing with the lower rule which
# is the operator rule. do this only if the end tags look
# like operators
# tuples support more options
# Rule supports lstrip. Match will look like
# text, block type, whitespace control, type, control, ...
# Skipping the text and first type, every other group is the
# whitespace control for each type. One of the groups will be
# -, +, or empty string instead of None.
# Strip all whitespace between the text and the tag.
# Not marked for preserving whitespace.
# lstrip is enabled.
# Not a variable expression.
# The start of text between the last newline and the tag.
# If there's only whitespace between the newline and the
# tag, strip it.
# failure group
# bygroup is a bit more complex, in that case we
# yield for the current token the first named
# group that matched
# normal group
# strings as tokens are just yielded as-is.
# update brace/parentheses balance
# yield items
# fetch new position into new variable so that we can check
# if there is an internal parsing error which would result
# in an infinite loop
# handle state changes
# remove the uppermost state
# resolve the new state by group checking
# direct state name given
# we are still at the same position and no stack change.
# this means a loop without a break condition; avoid that and
# raise an error
# if loop terminated without break we haven't found a single match
# either we are at the end of the file or we have a problem
# end of text
# something went wrong
# defaults for the parser / lexer
# default filters, tests and namespace
# default policies
# Magic bytes to identify Jinja bytecode cache files. Contains the
# Python major and minor version to avoid loading incompatible bytecode
# if a project upgrades its Python version.
# make sure the magic header is correct
# the source code of the file changed, we need to reload
# if marshal_load fails then we need to reload
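# The three reload conditions above (bad magic header, changed source,
# failed unmarshal) fit a small loader. The header layout, names, and
# 40-byte checksum here are illustrative assumptions in the spirit of
# the comments, not the actual cache format.

```python
import marshal
import sys

# Hypothetical magic header: a fixed tag plus the interpreter's
# (major, minor) version, so a Python upgrade makes old cache files
# fail the header check and get rebuilt from source.
BC_MAGIC = b"j2cache" + bytes(sys.version_info[:2])

def load_bytecode(f, source_checksum):
    # Any mismatch means "reload from source", signalled by None.
    if f.read(len(BC_MAGIC)) != BC_MAGIC:
        return None  # wrong or outdated magic header
    if f.read(len(source_checksum)) != source_checksum:
        return None  # the source code of the file changed
    try:
        return marshal.load(f)
    except (EOFError, ValueError, TypeError):
        return None  # marshal data corrupt or truncated
```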
# On Windows the temporary directory is per-user unless
# explicitly forced otherwise.  We can just use that.
# Don't test for existence before opening the file, since the
# file could disappear after the test before the open.
# PermissionError can occur on Windows when an operation is
# in progress, such as calling clear().
# Write to a temporary file, then rename to the real name after
# writing. This avoids another process reading the file before
# it is fully written.
# Another process may have called clear(). On Windows,
# another program may be holding the file open.
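# The write-then-rename pattern described above can be sketched
# generically; the function name is illustrative, and the real cache
# code would also handle the Windows PermissionError cases noted in the
# surrounding comments.

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temporary file in the destination directory, then
    # rename it over the real name: readers either see the old file or
    # the complete new one, never a partially written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)  # atomic on POSIX filesystems
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
```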
# imported lazily here because google app-engine doesn't support
# write access on the file system and the function does not exist
# normally.
# for direct template usage we have up to ten living environments
#: if this environment is sandboxed.  Modifying this variable won't make
#: the environment sandboxed though.  For a real sandboxed environment
#: have a look at jinja2.sandbox.  This flag alone controls the code
#: generation by the compiler.
#: True if the environment is just an overlay
#: the environment this environment is linked to if it is an overlay
#: shared environments have this set to `True`.  A shared environment
#: must not be modified
#: the class that is used for code generation.  See
#: :class:`~jinja2.compiler.CodeGenerator` for more information.
#: the context class that is used for templates.  See
#: :class:`~jinja2.runtime.Context` for more information.
# !!Important notice!!
# lexer / parser information
# runtime information
# set the loader provided
# configurable policies
# load extensions
# template.globals is a ChainMap, modifying it will only
# affect the template, not the environment globals.
#: Type of environment to create when creating a template directly
#: rather than through an existing environment.
# it returns a `Template`, but this breaks the sphinx build...
# render function and module
# debug and loader helpers
# store the reference
# we can't use async with aclosing(...) because that's only
# in 3.10+
# hook in default template class.  if anyone reads this comment: ignore that
# it's possible to use custom templates ;-)
# these variables are exported to the template runtime
# if the parent is shared a copy should be created because
# we don't want to modify the dict passed
# create the initial mapping of blocks.  Whenever template inheritance
# takes place the runtime will update this mapping with the new blocks
# from the template.
# noqa: B902
# Allow callable classes to take a context
# noqa: B004
# the active context should have access to variables set in
# loops and blocks without mutating the context itself
#: Current iteration of the loop, starting at 0.
#: How many levels deep a recursive loop currently is, starting at 0.
# This requires a bit of explanation.  In the past we used to
# decide largely based on compile-time information if a macro is
# safe or unsafe.  While there was a volatile mode it was largely
# unused for deciding on escaping.  This turns out to be
# problematic for macros because whether a macro is safe depends not
# on the escape mode when it was defined, but rather when it was used.
# Because however we export macros from the module system and
# there are historic callers that do not pass an eval context (and
# will continue to not pass one), we need to perform an instance
# check here.
# This is considered safe because an eval context is not a valid
# argument to callables otherwise anyway.  Worst case here is
# that if no eval context is passed we fall back to the compile
# time autoescape flag.
# try to consume the positional arguments
# For information why this is necessary refer to the handling
# of caller in the `macro_body` handler in the compiler.
# if the number of arguments consumed is not the number of
# arguments expected we start filling in keyword arguments
# and defaults.
# it's important that the order of these arguments does not change
# if not also changed in the compiler's `function_scoping` method.
# the order is caller, keyword arguments, positional arguments!
# Raise AttributeError on requests for names that appear to be unimplemented
# dunder methods.  This keeps Python's internal protocol probing behaviors
# working properly in cases where another exception type could cause
# unexpected or difficult-to-diagnose failures, and avoids confusing Python
# with truthy non-method objects that do not implement the protocol being
# probed for. e.g., copy.copy(Undefined()) fails spectacularly if
# getattr(Undefined(), '__setstate__') returns an Undefined object instead
# of raising AttributeError to signal that it does not support that style
# of object initialization.
#: maximum number of items a range may produce
#: Unsafe function attributes.
#: Unsafe method attributes. Function attributes are unsafe for methods too.
#: unsafe generator attributes.
#: unsafe attributes on coroutines
#: unsafe attributes on async generators
#: default callback table for the binary operators.  A copy of this is
#: available on each instance of a sandboxed environment as
#: :attr:`binop_table`
#: default callback table for the unary operators.  A copy of this is
#: available on each instance of a sandboxed environment as
#: :attr:`unop_table`
#: a set of binary operators that should be intercepted.  Each operator
#: that is added to this set (empty by default) is delegated to the
#: :meth:`call_binop` method that will perform the operator.  The default
#: operator callback is specified by :attr:`binop_table`.
#: The following binary operators are interceptable:
#: ``//``, ``%``, ``+``, ``*``, ``-``, ``/``, and ``**``
#: The default operation from the operator table corresponds to the
#: builtin function.  Intercepted calls are always slower than the native
#: operator call, so make sure only to intercept the ones you are
#: interested in.
#: .. versionadded:: 2.6
#: a set of unary operators that should be intercepted.  Each operator
#: that is added to this set (empty by default) is delegated to the
#: :meth:`call_unop` method that will perform the operator.  The default
#: operator callback is specified by :attr:`unop_table`.
#: The following unary operators are interceptable: ``+``, ``-``
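The table-plus-interception scheme described above can be sketched in plain Python. This is a simplified stand-in, not Jinja's actual SandboxedEnvironment; the names `binop_table`, `intercepted_binops`, and `call_binop` mirror the attributes mentioned in the comments, but the class itself is hypothetical:

```python
import operator

# Default callback table for the binary operators; each sandbox instance
# keeps its own copy so subclasses can tweak it.
default_binop_table = {
    "+": operator.add, "-": operator.sub, "*": operator.mul,
    "/": operator.truediv, "//": operator.floordiv,
    "%": operator.mod, "**": operator.pow,
}

class MiniSandbox:
    # Operators in this set are routed through call_binop instead of the
    # native operator (empty by default, so everything stays fast).
    intercepted_binops = frozenset()

    def __init__(self):
        self.binop_table = dict(default_binop_table)

    def call_binop(self, op, left, right):
        return self.binop_table[op](left, right)

    def apply(self, op, left, right):
        if op in self.intercepted_binops:
            return self.call_binop(op, left, right)
        return default_binop_table[op](left, right)

class NoPow(MiniSandbox):
    # Intercept only '**'; everything else keeps the fast native path.
    intercepted_binops = frozenset(["**"])

    def call_binop(self, op, left, right):
        if op == "**":
            raise RuntimeError("the power operator is unavailable")
        return super().call_binop(op, left, right)
```

As the comments advise, intercepted calls are slower than native operators, so only the operators you actually care about should go into the set.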
# the double prefixes are to avoid double keyword argument
# errors when proxying the call.
# If we have not seen the name referenced yet, we need to figure
# out what to set it to.
# If there is a parent scope we check if the name has a
# reference there.  If it does it means we might have to alias
# to a variable there.
# Otherwise we can just set it to undefined.
# I18N functions available in Jinja templates. If the I18N library
# provides ugettext, it will be assigned to gettext.
#: if this extension parses tokens, this is the list of tags it's listening to.
#: the priority of that extension.  This is especially useful for
#: extensions that preprocess values.  A lower value means higher
#: priority.
#: .. versionadded:: 2.4
# Always treat as a format string, even if there are no
# variables. This makes translation strings more consistent
# and predictable. This requires escaping
# Always treat as a format string, see gettext comment above.
# TODO: the i18n extension is currently reevaluating values in a few
# situations.  Take this example:
#   {% trans count=something() %}...{% pluralize %}...{% endtrans %}
# something is called twice here.  One time for the gettext value and
# the other time for the n-parameter of the ngettext function.
# ugettext and ungettext are preferred in case the I18N library
# is providing compatibility with older Python versions.
# find all the variables referenced.  Additionally a variable can be
# defined in the body of the trans block too, but this is checked at
# a later stage.
# skip colon for python compatibility
# expressions
# now parse until endtrans or pluralize
# if we have a pluralize block, we parse that too
# register free names as simple name expressions
# no variables referenced?  no need to escape for old style
# gettext invocations only if there are vars.
# in case newstyle gettext is used, the method is powerful
# enough to handle the variable expansion and autoescape
# handling itself
# the function adds that later anyways in case num was
# called num, so just skip it.
# otherwise do that here
# mark the return value as safe if we are in an
# environment with autoescaping turned on
# Set the depth since the intent is to show the top few names.
# skip templates with syntax errors
#: nicer import names
#: if set to `False` it indicates that the loader cannot provide access
#: to the source of templates.
# first we try to get the source for this template together
# with the filename and the uptodate function.
# try to load the code from the bytecode cache if there is a
# bytecode cache configured.
# if we don't have code so far (not cached, or no longer up to
# date, etc.) we compile the template
# if the bytecode cache is available and the bucket doesn't
# have code yet, we give the bucket the new code and put
# it back into the bytecode cache.
# Use posixpath even on Windows to avoid "drive:" or UNC
# segments breaking out of the search directory.
# Use normpath to convert Windows altsep to sep.
# normpath preserves ".", which isn't valid in zip paths.
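The path-sanitizing idea in the comments above can be sketched as a small helper. This is an illustrative reconstruction of the loader-side check, not the library's exact code; the function name `split_template_path` is an assumption:

```python
import os

def split_template_path(template: str) -> list[str]:
    # Split on '/' and reject pieces containing the OS separator, the
    # altsep, or '..', so "drive:" / UNC / parent segments cannot escape
    # the search directory.  '.' pieces (which normpath would preserve)
    # and empty pieces are simply dropped.
    pieces = []
    for piece in template.split("/"):
        if (os.path.sep in piece
                or (os.path.altsep and os.path.altsep in piece)
                or piece == os.path.pardir):
            raise ValueError(f"unsafe template path: {template!r}")
        if piece and piece != ".":
            pieces.append(piece)
    return pieces
```

Splitting on `/` and validating each piece is safer than normalizing the whole string, because a single `normpath` call can silently fold a hostile segment into the search root.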
# Make sure the package exists. This also makes namespace
# packages work, otherwise get_loader returns None.
# One element for regular packages, multiple for namespace
# packages, or None for single module file.
# A single module file, use the parent directory instead.
# Use posixpath even on Windows to avoid "drive:" or UNC
# segments breaking out of the search directory. Use normpath to
# convert Windows altsep to sep.
# Package is a directory.
# Package is a zip file.
# Could use the zip's mtime for all template mtimes, but
# would need to safely reload the module if it's out of
# date, so just report it as always current.
# Find names under the templates directory that aren't directories.
# re-raise the exception with the correct filename here.
# (the one that includes the prefix)
# create a fake module that looks for the templates in the
# path given.
# the only strong reference, the sys.modules entry is weak
# so that the garbage collector can remove it once the
# loader that created it goes out of business.
# remove the entry from sys.modules, we only want the attribute
# on the module object we have stored on the loader.
# Unlike lead, which is anchored to the start of the string,
# need to check that the string ends with any of the characters
# before trying to match all of them, to avoid backtracking.
# Prefer balancing parentheses in URLs instead of ignoring a
# trailing character.
# Balanced, or lighter on the left
# Move as many as possible from the tail to balance
# Move anything in the tail before the end char too
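The parenthesis-balancing behavior described above can be sketched as follows. The punctuation set is a small illustrative subset, not the library's full trailing-character list, and `split_trailing` is a hypothetical helper name:

```python
import re

# Trailing punctuation that should not be part of the URL
# (an illustrative subset only).
_TRAILING = re.compile(r"[.,)\]!?]+$")

def split_trailing(word: str) -> tuple[str, str]:
    # Anchoring at the end of the string avoids backtracking: we only
    # try the match if the word actually ends in one of the characters.
    match = _TRAILING.search(word)
    middle, tail = (word[:match.start()], match.group()) if match else (word, "")
    # Prefer balancing parentheses over treating ')' as punctuation:
    # move ')' back from the tail while the URL is lighter on the right.
    while tail.startswith(")") and middle.count("(") > middle.count(")"):
        middle += ")"
        tail = tail[1:]
    return middle, tail
```

So a Wikipedia-style link such as `http://example.com/x_(y).` keeps its closing parenthesis while the sentence-ending period is still stripped.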
# ignore values like `@a@b`
# each paragraph consists of 20 to 100 words.
# add commas
# add end of sentences
# ensure that the paragraph ends with a dot.
# this is fast for small capacities (something below 1000) but doesn't
# scale.  But as long as it's only used as storage for templates this
# won't do any harm.
# alias all queue methods for faster lookup
# if something removed the key from the container
# when we read, ignore the ValueError that we would
# get otherwise.
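The small-capacity cache described above can be sketched with an `OrderedDict` instead of the internal deque; this is a simplified stand-in for the real cache class, assuming only get/set/containment are needed:

```python
from collections import OrderedDict

class SimpleLRU:
    # Fine for small capacities (a few hundred templates); a real
    # implementation would also need locking for thread safety.
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def __setitem__(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)          # mark as most recently used
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the oldest entry

    def __getitem__(self, key):
        value = self._data[key]
        self._data.move_to_end(key)          # reading also refreshes the key
        return value

    def __contains__(self, key):
        return key in self._data
```

Because `OrderedDict` keeps insertion order natively, there is no separate queue to fall out of sync with the mapping, which sidesteps the "key removed from the container while we read" race the comments mention (at the cost of still not being thread safe).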
# __class__ is needed for the awaitable check in async mode
# intercepted operators cannot be folded at compile time
# We don't need any special checks here; NSRef assignments have a
# runtime check to ensure the target is a namespace object which will
# have been checked already as it is created using a normal assignment
# which goes through a `Name` node.
# if we evaluate to an undefined object, we better do that at runtime
# Helpers for extensions
# make sure nobody creates custom nodes
# Remove the old traceback, otherwise the frames from the
# compiler still show up.
# Outside of runtime, so the frame isn't executing template
# code, but it still needs to point at the template.
# Skip the frame for the render function.
# Build the stack of traceback objects, replacing any in template
# code with the source file and line information.
# Skip frames decorated with @internalcode. These are internal
# calls that aren't useful in template debugging output.
# Assign tb_next in reverse to avoid circular references.
# Replace the real locals with the context that would be
# available at that point in the template.
# Raise an exception at the correct line number.
# Build a new code object that points to the template file and
# replaces the location with a block name.
# Execute the new code, which is guaranteed to raise, and return
# the new traceback without this frame.
# Start with the current template context.
# Might be in a derived context that only sets local variables
# rather than pushing a context. Local variables follow the scheme
# l_depth_name. Find the highest-depth local that has a value for
# each name.
# Not a template variable, or no longer relevant.
# Modify the context with any derived context.
# Silence the Python warning about message being deprecated since
# it's not valid here.
# this is set to True if the debug.translate_syntax_error
# function translated the syntax error into a new traceback
# for translated errors we only return the message
# otherwise attach some stuff
# if the source is set, add the line to the output
# https://bugs.python.org/issue1692335 Exceptions that take
# multiple required arguments have problems with pickling.
# Without this, raises TypeError: __init__() missing 1 required
# positional argument: 'lineno'
# a tuple with some non-consts in there
# something const, only yield the strings and ignore
# non-string consts that really just make no sense
# something dynamic in there
# something dynamic we don't know about here
# constant is a basestring, direct template name
# a tuple or list (the latter *should* not happen) made of consts,
# yield the consts that are strings.  We could warn here for
# non-string values
# something else we don't care about, we could warn here
# Check for characters that would move the parser state from key to value.
# https://html.spec.whatwg.org/#attribute-name-state
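The attribute-name check referenced above can be sketched with a regex over the characters that, per the WHATWG attribute-name state, end the name and start the value. The helper name `check_attr_key` is an assumption:

```python
import re

# Characters that would move an HTML parser from the attribute-name
# state into the value state: whitespace, '/', '>' and '='.
_ATTR_KEY = re.compile(r"[\s/>=]")

def check_attr_key(key: str) -> str:
    # Rejecting these characters prevents a key like 'a onclick=...'
    # from smuggling extra attributes into the rendered tag.
    if _ATTR_KEY.search(key):
        raise ValueError(f"invalid attribute name: {key!r}")
    return key
```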
# no automatic escaping?  joining is a lot easier then
# if the delimiter doesn't have an html representation we check
# if any of the items has.  If yes we do a coercion to Markup
# no html involved, do normal joining
# No async do_last, it may not be safe in async mode.
# this quirk is necessary for the splitlines method
# textwrap.wrap doesn't consider existing newlines when wrapping.
# If the string has a newline before width, wrap will still insert
# a newline at width, resulting in a short line. Instead, split and
# wrap each paragraph individually.
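The split-then-wrap approach described above can be shown with `textwrap` directly; `wrap_keep_paragraphs` is an illustrative helper, not the library's actual filter:

```python
import textwrap

def wrap_keep_paragraphs(text: str, width: int) -> str:
    # textwrap.wrap ignores existing newlines and re-wraps across them,
    # which inserts a break at `width` and leaves a short line behind.
    # Wrapping each original line separately preserves intended breaks.
    return "\n".join(
        "\n".join(textwrap.wrap(line, width)) if line.strip() else line
        for line in text.splitlines()
    )
```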
# this quirk is necessary so that "42.23"|int gives 42.
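The quirk can be illustrated with a simplified version of the int-coercion filter (a sketch, not the exact library code):

```python
def do_int(value, default=0, base=10):
    # int("42.23") raises ValueError, so fall back to float first;
    # that is what makes "42.23"|int give 42 instead of the default.
    try:
        if isinstance(value, str):
            return int(value, base)
        return int(value)
    except (TypeError, ValueError):
        try:
            return int(float(value))
        except (TypeError, ValueError):
            return default
```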
# Use the regular tuple repr to hide this subclass if users print
# out the value during debugging.
# Return the real key from the first value instead of the lowercase key.
# type: ignore[no-any-return, call-overload]
# Environment.getattr will fall back to obj[name] if obj.name doesn't exist.
# But we want to call env.getattr to get behavior such as sandboxing.
# Determine if the attr exists first, so we know the fallback won't trigger.
# This avoids executing properties/descriptors, but misses __getattr__
# and __getattribute__ dynamic attrs.
# This finds dynamic attrs, and we know it's not a descriptor at this point.
# get_running_loop, new in 3.7, is preferred to get_event_loop
# Deprecation warning since 3.10
# First process data that was previously read - if it matches, we don't need
# async stuff.
# Command was fully submitted, now wait for the next prompt
# We got the continuation prompt - command was incomplete
# Found a match
# N.B. If this gets called, async will close the pipe (the spawn object)
# for us
# We may get here without eof_received being called, e.g on Linux
# On Unix, these are available at the top level for backwards compatibility
# vim: set shiftround expandtab tabstop=4 shiftwidth=4 ft=python autoindent :
# This assumes EOF or TIMEOUT will eventually cause run to terminate.
# child.after may have been a TIMEOUT or EOF,
# which we don't want appended to the list.
# make sure fd is a valid file descriptor
# These four methods are left around for backwards compatibility, but not
# documented as part of fdpexpect. You're encouraged to use os.write
# directly.
# references:
# The 'Do.*' functions are helper functions for the ANSI class.
# Should be 4
# screen.setReplaceMode ()
#self.screen = screen (24,80)
# Selects application keypad.
# ELB means Escape Left Bracket. That is ^[[
### It gets worse... the 'm' code can have an infinite number of
### number;number;number groups before it. I've never seen more than two,
### but the specs say it's allowed. crap!
### LED control. Same implementation problem as 'm' code.
# \E[?47h switch to alternate screen
# \E[?47l restores to normal screen from alternate screen.
#RM   Reset Mode                Esc [ Ps l                   none
### LED control. Same problem as 'm' code.
# Create a state for 'q' and 'm' which allows an infinite number of ignored numbers
# \r and \n produce calls to cr() and lf(), respectively.
# pylint: disable=unused-import
# flake8: noqa: F401
# this assumes async def/await are more stable
# Existing spawn instance has echo enabled, disable it
# to prevent our input from being repeated to output.
# Split up multiline commands and feed them in bit-by-bit
# splitlines ignores trailing newlines - add it back in manually
# If the user runs 'env', the value of PS1 will be in the output. To avoid
# replwrap seeing that as the next prompt, we'll embed the marker characters
# for invisible characters in the prompt; these show up when inspecting the
# environment variable, but not when bash displays the prompt.
# This is purely informational now - changing it has no effect
# The pid and child_fd of this object get set by this method.
# Note that it is difficult for this method to fail.
# You cannot detect if the child process cannot start.
# So the only way you can tell if the child process started
# or not is to try to read from the file descriptor. If you get
# EOF immediately then it means that the child is already dead.
# That may not necessarily be bad because you may have spawned a child
# that performs some task; creates no stdout output; and then dies.
# If command is an int type then it may represent a file descriptor.
# Make a shallow copy of the args list.
# Encode command line using the specified encoding
# PtyProcessError may be raised if it is not possible to terminate
# the child.
# Update exit status from ptyproc
# If there is data available to read right now, read as much as
# we can. We do this to increase performance if there are a lot
# of bytes to be read. This also avoids calling isalive() too
# often. See also:
# * https://github.com/pexpect/pexpect/pull/304
# * http://trac.sagemath.org/ticket/10295
# Maybe the child is dead: update some attributes in that case
# Don't raise EOF, just return what we read so far.
# The process is dead, but there may or may not be data
# available to read. Note that some systems such as Solaris
# do not give an EOF when the child dies. In fact, you can
# still try to read from the child_fd -- it will block
# forever or until TIMEOUT. For that reason, it's important
# to do this check before calling select() with timeout.
# Irix takes a long time before it realizes a child was terminated.
# Make sure that the timeout is at least 2 seconds.
# FIXME So does this mean Irix systems are forced to always have
# FIXME a 2 second delay when calling read_nonblocking? That sucks.
# Because of the select(0) check above, we know that no data
# is available right now. But if a non-zero timeout is given
# (possibly timeout=None), we call select() with a timeout.
# Some platforms, such as Irix, will claim that their
# processes are alive; timeout on the select; and
# then finally admit that they are not alive.
# I think there are kernel timing issues that sometimes cause
# this to happen. I think isalive() reports True, but the
# process is dead to the kernel.
# Make one last attempt to see if the kernel is up to date.
# exception may occur if "Is some other process attempting
# "job control with our child pid?"
# Same as os.kill, but the pid is given for you.
# Flush the buffer.
# Linux-style EOF
# BSD-style EOF
# A value of -1 means to use the figure from spawn, which should
# be None or a positive number.
# First call from a new call to expect_loop or expect_async.
# self.searchwindowsize may have changed.
# Treat all data as fresh.
# A subsequent call, after a call to existing_data.
# search lookback + new data.
# copy the whole buffer (really slow for large datasets).
# in Python 3.x we can use "raise exc from None"
# No match at this point
# Still have time left, so read more data
# Keep reading until exception or return.
# 'freshlen' helps a lot here. Further optimizations could
# possibly include:
# using something like the Boyer-Moore Fast String Searching
# Algorithm; pre-compiling the search through a list of
# strings into something that can scan the input once to
# search for all N strings; realize that if we search for
# ['bar', 'baz'] and the input is '...foo' we need not bother
# rescanning until we've read three more bytes.
# Sadly, I don't know enough about this interesting topic. /grahn
# the match, if any, can only be in the fresh data,
# or at the very end of the old data
# better obey searchwindowsize
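The "match can only be in the fresh data or at the very end of the old data" observation can be sketched for plain-string needles (a simplified model of the optimization, with hypothetical names; the real searcher also honors searchwindowsize and returns which needle matched):

```python
def find_first(old: bytes, fresh: bytes, needles: list[bytes]) -> int:
    # A match can only start inside the fresh data, or within the last
    # len(longest needle) - 1 bytes of the old data (where it could
    # straddle the boundary), so skip everything before that instead of
    # rescanning the whole buffer on every read.
    data = old + fresh
    longest = max(len(n) for n in needles)
    offset = max(0, len(old) - longest + 1)
    best = -1
    for needle in needles:
        idx = data.find(needle, offset)
        if idx != -1 and (best == -1 or idx < best):
            best = idx
    return best
```

As the comments note, this shortcut does not carry over to regex patterns, since a regex match has no predictable length.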
#ss = [(n, '    %d: re.compile("%s")' %
# 'freshlen' doesn't help here -- we cannot predict the
# length of a match, and the re module provides no help.
# status returned by os.waitpid
# the child file descriptor is initially closed
# input from child (read_nonblocking)
# output to send (send, sendline)
# max bytes to read at one time into buffer
# Data before searchwindowsize point is preserved, but not searched.
# Delay used before sending data to child. Time in seconds.
# Set this to None to skip the time.sleep() call completely.
# Used by close() to give kernel time to update process status.
# Time in seconds.
# Used by terminate() to give kernel time to update process status.
# Delay in seconds to sleep after each call to read_nonblocking().
# Set this to None to skip the time.sleep() call completely: that
# would restore the behavior from pexpect-2.0 (for performance
# reasons or because you don't want to release Python's global
# interpreter lock).
# Unicode interface
# bytes mode (accepts some unicode for backwards compatibility)
# If stdout has been replaced, it may not have .buffer
# analysis:ignore
# unicode mode
# This can handle unicode in both Python 2 and 3
# storage for async transport
# This is the read buffer. See maxread.
# The buffer may be trimmed for efficiency reasons.  This is the
# untrimmed buffer, used to create the before attribute.
# For backwards compatibility, in bytes mode (when encoding is None)
# unicode is accepted for send and expect. Unicode mode is strictly unicode
# only.
# In bytes mode, regex patterns should also be of bytes type
# And vice-versa
# This property is provided for backwards compatibility (self.buffer used
# to be a string/bytes object)
# Allow dot to match \n
# delimiter default is EOF
# I could have done this more directly by not using expect(), but
# I deliberately decided to couple read() to expect() so that
# I would catch any bugs early and ensure consistent behavior.
# It's a little less efficient, but there is less for me to
# worry about if I have to later modify read() or expect().
# Note, it's OK if size==-1 in the regex. That just means it
# will never match anything in which case we stop only on EOF.
### FIXME self.before should be ''. Should I assert this?
# For 'with spawn(...) as child:'
# We rely on subclasses to implement close(). If they don't, it's not
# clear what a context manager should do.
# Fill character; ignored on input.
# Transmit answerback message.
# Ring the bell.
# Move cursor left.
# Move cursor to next tab stop.
# Line feed.
# Same as LF.
# Move cursor to left margin or newline.
# Invoke G1 character set.
# Invoke G0 character set.
# Resume transmission.
# Halt transmission.
# Cancel escape sequence.
# Same as CAN.
# Introduce a control sequence.
# Space or blank character.
# <ESC>[{ROW};{COLUMN}H
# <ESC>[{COUNT}D (not confused with down)
# <ESC>[{COUNT}B (not confused with back)
# <ESC>[{COUNT}C
# <ESC>[{COUNT}A
# <ESC> M   (called RI -- Reverse Index)
# <ESC>[{ROW};{COLUMN}f
# <ESC>[s
# <ESC>[u
# <ESC>7
# <ESC>8
# <ESC>[r
# <ESC>[{start};{end}r
# <ESC>D
# Screen is indexed from 1, but arrays are indexed from 0.
# <ESC>M
# <ESC>[0K -or- <ESC>[K
# <ESC>[1K
# <ESC>[2K
# <ESC>[0J -or- <ESC>[J
# <ESC>[1J
# <ESC>[2J
# <ESC>H
# <ESC>[g
# <ESC>[3g
# Map (input_symbol, current_state) --> (action, next_state).
# Map (current_state) --> (action, next_state).
# The following is an example that demonstrates the use of the FSM class to
# process an RPN expression. Run this module from the command line. You will
# get a prompt > for input. Enter an RPN Expression. Numbers may be integers.
# Operators are * / + -.  Use the = sign to evaluate and print the expression.
# will print:
# These define the actions.
# Note that "memory" is a list being used as a stack.
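The two transition maps and the RPN demo described above can be condensed into a minimal sketch. This is not the module's actual FSM class; numbers are limited to single digits here for brevity, and `evaluate_rpn` is an illustrative wrapper:

```python
import operator as op

class FSM:
    # table maps (input_symbol, current_state) -> (action, next_state);
    # defaults maps current_state -> (action, next_state) as a fallback.
    def __init__(self, initial_state, memory=None):
        self.state = initial_state
        self.memory = memory if memory is not None else []  # used as a stack
        self.table = {}
        self.defaults = {}

    def add_transition(self, symbol, state, action, next_state):
        self.table[(symbol, state)] = (action, next_state)

    def set_default_transition(self, state, action, next_state):
        self.defaults[state] = (action, next_state)

    def process(self, symbol):
        action, next_state = self.table.get(
            (symbol, self.state), self.defaults[self.state])
        if action is not None:
            action(self, symbol)
        self.state = next_state

# RPN actions: digits push onto the memory stack, operators pop two
# operands and push the result.
OPS = {"*": op.mul, "/": op.truediv, "+": op.add, "-": op.sub}

def push(fsm, symbol):
    fsm.memory.append(int(symbol))

def apply_op(fsm, symbol):
    right, left = fsm.memory.pop(), fsm.memory.pop()
    fsm.memory.append(OPS[symbol](left, right))

def evaluate_rpn(expr: str):
    fsm = FSM("INIT")
    for digit in "0123456789":
        fsm.add_transition(digit, "INIT", push, "INIT")
    for sym in OPS:
        fsm.add_transition(sym, "INIT", apply_op, "INIT")
    fsm.set_default_transition("INIT", None, "INIT")  # ignore anything else
    for ch in expr:
        fsm.process(ch)
    return fsm.memory[-1]
```

For example, `evaluate_rpn("23*4+")` computes 2·3 and then adds 4.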
#SUBTLE HACK ALERT! Note that the command that SETS the prompt uses a
#slightly different string than the regular expression to match it. This
#is because when you set the prompt the command will echo back, but we
#don't want to match the echoed command. So if we make the set command
#slightly different than the regex we eliminate the problem. To make the
#set command different we add a backslash in front of $. The $ doesn't
#need to be escaped, but it doesn't hurt and serves to make the set
#prompt command different than the regex.
# used to match the command-line prompt
# used to set shell command-line prompt to UNIQUE_PROMPT.
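The hack can be made concrete with constants in the shape pxssh uses (shown here to illustrate the idea; treat the exact strings as an approximation of the library's values):

```python
import re

# Regex used to MATCH the prompt once the shell renders PS1.
UNIQUE_PROMPT = r"\[PEXPECT\][\$\#] "
# Shell command used to SET the prompt.  Note the backslash before '$':
# the echoed command therefore contains ']\$', which the match regex
# (expecting ']$' or ']#') will not fire on -- only the real prompt does.
PROMPT_SET_SH = r"PS1='[PEXPECT]\$ '"

def saw_prompt(output: str) -> bool:
    return re.search(UNIQUE_PROMPT, output) is not None
```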
# Disabling host key checking, makes you vulnerable to MITM attacks.
# Disabling X11 forwarding gets rid of the annoying SSH_ASKPASS from
# displaying a GUI password dialog. I have not figured out how to
# disable only SSH_ASKPASS without also disabling X11 forwarding.
# Unsetting SSH_ASKPASS on the remote side doesn't disable it! Annoying!
#self.SSH_OPTS = "-x -o 'PubkeyAuthentication=no'"
# User defined SSH options, eg,
# ssh.options = dict(StrictHostKeyChecking="no",UserKnownHostsFile="/dev/null")
# maximum time allowed to read the first response
# maximum time allowed between subsequent characters
# maximum time for reading the entire prompt
# updated total time expired
# All of these timing pace values are magic.
# I came up with these based on what seemed reliable for
# connecting to a heavily loaded machine I have.
# Clear the buffer before getting the prompt.
### TODO: This is getting messy and I'm pretty sure this isn't perfect.
### TODO: I need to draw a flow chart for this.
### TODO: Unit tests for SSH tunnels, remote SSH command exec, disabling original prompt sync
# Allow forwarding our SSH key to the current session
# SSH tunnels: make sure you know what you're putting into the lists
# under each heading. Do not expect these to open 100% of the time;
# the port you're requesting might be bound.
# The structure should be like this:
# { 'local': ['2424:localhost:22'],  # Local SSH tunnels
# 'remote': ['2525:localhost:22'],   # Remote SSH tunnels
# 'dynamic': [8888] } # Dynamic/SOCKS tunnels
# make sure ssh_config has an entry for the server with a username
# insurance
# we have left the relevant section
# Are we asking for a local ssh command or to spawn one in another session?
# This does not distinguish between a remote server 'password' prompt
# and a local ssh 'passphrase' prompt (for unlocking a private key).
# First phase
# New certificate -- always accept it.
# This is what you get if SSH does not have the remote host's
# public key stored in the 'known_hosts' cache.
# password or passphrase
# Second phase
# This is weird. This should not happen twice in a row.
# can occur if you have a public key pair set to authenticate.
### TODO: May NOT be OK if expect() got tricked and matched a false prompt.
# password prompt again
# For incorrect passwords, some ssh servers will
# ask for the password again, others return 'denied' right away.
# If we get the password prompt again then this means
# we didn't get the password right the first time.
# permission denied -- password was bad.
# terminal type again? WTF?
# Timeout
#This is tricky... I presume that we are at the command-line prompt.
#It may be that the shell prompt was so weird that we couldn't match
#it. Or it may be that we couldn't log in for some other reason. I
#can't be sure, but it's safe to guess that we did login because if
#I presume wrong and we are not logged in then this should be caught
#later when I try to set the shell prompt.
# Connection closed by remote host
# Unexpected
# We appear to be in.
# set shell prompt to something unique.
# sh-style
# csh-style
# zsh-style
# vi:ts=4:sw=4:expandtab:ft=python:
# Note that `SpawnBase` initializes `self.crlf` to `\r\n`
# because the default behaviour for a PTY is to convert
# incoming LF to `\r\n` (see the `onlcr` flag and
# https://stackoverflow.com/a/35887657/5397009). Here we set
# it to `os.linesep` because that is what the spawned
# application outputs by default and `popen` doesn't translate
# anything.
# We have already finished reading. Use up any buffered data,
# then raise EOF
# This indicates we have reached EOF
# On Python 2, .write() returns None, so we return the length of
# bytes written ourselves. This assumes they all got written.
# Alias Python2 exception to Python3
# follow symlinks,
# non-files (directories, fifo, etc.)
# When root on Solaris, os.X_OK is True for *all* files, regardless
# of their executability -- instead, any permission bit of any user,
# group, or other is enough.
# (This may be true for other "Unix98" OS's such as HP-UX and AIX)
# Special case where filename contains an explicit path.
# Constants to name the states we can be in.
# The state when consuming whitespace between commands.
# Escape the next character
# Handle single quote
# Handle double quote
# Add arg to arg_list if we aren't in the middle of whitespace.
# if select() is interrupted by a signal (errno==EINTR) then
# we loop back and enter the select() again.
# if we loop back we have to subtract the
# amount of time we already waited.
# something else caused the select.error, so
# this actually is an exception.
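The retry-with-shrinking-timeout pattern described above can be sketched like this (`select_retry_eintr` is a hypothetical helper name):

```python
import errno
import select
import time

def select_retry_eintr(rfds, timeout):
    # If select() is interrupted by a signal (errno == EINTR), loop back
    # and enter select() again, subtracting the time already waited so
    # the overall deadline still holds.
    deadline = None if timeout is None else time.monotonic() + timeout
    while True:
        try:
            return select.select(rfds, [], [], timeout)
        except OSError as err:
            if err.args[0] != errno.EINTR:
                raise  # something else caused the error; re-raise it
            if deadline is not None:
                timeout = max(0.0, deadline - time.monotonic())
```

On Python 3.5+, PEP 475 makes `select.select` retry EINTR internally (recomputing the timeout itself), so the explicit loop mostly documents the behavior needed on older interpreters.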
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
# make vendored top-level modules accessible EARLY
# Note: Do not add any code to this file.  The ansible module may be
# a namespace package when using Ansible-2.1+.  Anything in this file may not
# be available if one of the other packages in the namespace is loaded first.
# This is for backwards compat.  Code should be ported to get these from
# ansible.release instead of from here.
# Copyright: (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# initialize config manager/config data to read/store global settings
# and generate 'pseudo constants' for app consumption.
# CONSTANTS ### yes, actual ones
# The following are hard-coded action names
# http://nezzen.net/2008/06/23/colored-text-in-python-using-ansi-escape-sequences/
# this is concatenated with other config settings as lists; cannot be tuple
# characters included in auto-generated passwords
# FIXME: expand to other plugins, but never doc fragments
# NOTE: always update the docs/docsite/Makefile to match
# ignore during module search
# This matches a string that cannot be used as a valid python variable name, i.e. 'not-valid', 'not!valid@either', '1_nor_This'
# FIXME: remove once play_context mangling is removed
# the magic variable mapping dictionary below is used to translate
# host/inventory variables to fields in the PlayContext
# object. The dictionary values are tuples, to account for aliases
# in variable names.
# base
# connection common
# networking modules
# ssh TODO: remove
# docker TODO: remove
# POPULATE SETTINGS FROM CONFIG ###
# Copyright: (c) 2021, Matt Martz <matt@sivel.net>
# Copyright: (c) 2018, Toshio Kuratomi <tkuratomi@ansible.com>
# Note: this is not the singleton version.  The Singleton is only created once the program has
# actually parsed the args
# This should be called immediately after cli_args are processed (parsed, validated, and any
# normalization performed on them).  No other code should call it
# (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
# Tries to determine if a path is inside a role; the last dir must be 'tasks'.
# This is not perfect, but people should really avoid 'tasks' dirs outside roles when using Ansible.
# NOTE: not effective with forks as the main copy does not get updated.
# avoids rereading files
# NOTE: not thread safe, also issues with forks not returning data to main proc
# used to keep track of temp files for cleaning
# initialize the vault stuff with an empty password
# TODO: replace with a ref to something that can get the password
# TODO: since we can query vault_secrets late, we could provide this to DataLoader init
# DTFIX-FUTURE: consider deprecating this in favor of tagging Origin on data
# DTFIX-FUTURE: consider future deprecation, but would need RedactAnnotatedSourceContext public
# Resolve the file name
# Log the file being loaded
# Check if the file has been cached and use the cached data if available
# only tagging the container, used by include_vars to determine if vars should be shown or not
# this is a temporary measure until a proper data sensitivity system is in place
# Cache the file contents for next time based on the cache option
# Return the parsed data, optionally deep-copied for safety
# obj intentionally omitted since there's no value in showing its contents
# DTFIX-FUTURE: why not just let the builtin one fly?
# I have full path, nothing else needs to be looked at
# base role/play path + templates/files/vars + relative filename
# not told whether this is a role, but detect if it is one; if so, make sure we get the correct base path
# resolved base role/play path + templates/files/vars + relative filename
# look in role's tasks dir w/o dirname
# try to create absolute path for loader basedir + templates/files/vars + filename
# try to create absolute path for loader basedir
# try to create absolute path for dirname + filename
# try to create absolute path for filename
# path is absolute, no relative needed, check existence and return source
# if path is in role and 'tasks' not there already, add it into the search
# don't add dirname if user already is using it in source
# always append basedir as last resort
# Limit how much of the file is read since we do not know
# whether this is a vault file and therefore it could be very
# large.
# if the file is encrypted and no password was specified,
# the decrypt call would throw an error, but we check first
# since the decrypt function doesn't know the file name
# Make a temp file
# Look for file with no extension first to find dir before file
# add valid extensions to name
# skip hidden and backups
# recursive search if dir
# only consider files with valid extensions or no extension
# Copyright: (c) 2018, Ansible Project
# from ansible.utils.display import Display as _Display
# deprecated: description='deprecate ajson' core_version='2.23'
# _Display().deprecated(
# Imported for backward compat
# NOTE: now unused, but kept for backwards compat
# DTFIX-FUTURE: find a better pattern for this (can we use the new optional error behavior?)
# skip errors can happen when trying to use the normal code
# examples 'can' be yaml, but even if so, we don't want to parse them as such here
# as it can create undesired 'objects' that don't display well as docs.
# string should be yaml if already not a dict
# DTFIX-FUTURE: better pattern to conditionally raise/display
# NOTE: adjacency of doc file to code file is responsibility of caller
# because seealso is specially processed from 'doc' later on
# TODO: stop any other 'overloaded' implementation in main doc
# start capturing the stub until indentation returns
# Detect that the short_description continues on the next line if it's indented more
# than short_description itself.
# (c) 2014 James Cammarata, <jcammarata@ansible.com>
# Decode escapes adapted from rspeer's answer here:
# http://stackoverflow.com/questions/4020539/process-escape-sequences-in-a-string-in-python
# NB: adjusting the column number is left as an exercise for the reader
# ran out of string, but we must have some escaped equals,
# so replace those and append this to the list of raw params
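The escape decoding adapted from that answer works by substituting only recognized escape sequences, so stray backslashes pass through untouched (a sketch close to the linked answer):

```python
import codecs
import re

_ESCAPE_RE = re.compile(r'''
    \\( U........        # 8-digit hex escape
    | u....              # 4-digit hex escape
    | x..                # 2-digit hex escape
    | [0-7]{1,3}         # octal escape
    | N\{[^}]+\}         # Unicode character by name
    | [\\'"abfnrtv]      # single-character escapes
    )''', re.VERBOSE)

def decode_escapes(s):
    """Decode escape sequences without mangling the rest of the string."""
    return _ESCAPE_RE.sub(
        lambda m: codecs.decode(m.group(0), 'unicode_escape'), s)
```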
# FIXME: make the retrieval of this list of shell/command options a function, so the list is centralized
# recombine the free-form params, if any were found, and assign
# them to a special option for use later by the shell/command module
# the char before the current one, used to see if
# the current character is escaped
# the list of params parsed out of the arg string
# this is going to be the result value when we are done
# Initial split on newlines
# iterate over the tokens, and reassemble any that may have been
# split on a space inside a jinja2 block.
# e.g. if tokens are "{{", "foo", "}}" these go together
# These variables are used
# to keep track of the state of the parsing, since blocks and quotes
# may be nested within each other.
# used to count nested jinja2 {{ }} blocks
# used to count nested jinja2 {% %} blocks
# used to count nested jinja2 {# #} blocks
# now we loop over each split chunk, coalescing tokens if the white space
# split occurred within quotes or a jinja2 block of some kind
# we split on spaces and newlines separately, so that we
# can tell which character we split on for reassembly
# inside quotation characters
# Empty entries means we have subsequent spaces
# We want to hold onto them so we can reconstruct them later
# Make sure there is a params item to store result in.
# if we hit a line continuation character, but
# we're not inside quotes, ignore it and continue
# on to the next token while setting a flag
# store the previous quoting state for checking later
# multiple conditions may append a token to the list of params,
# so we keep track with this flag to make sure it only happens once
# append means add to the end of the list, don't append means concatenate
# it to the end of the last token
# if we're inside quotes now, but weren't before, append the token
# to the end of the list, since we'll tack on more to it later
# otherwise, if we're inside any jinja2 block, inside quotes, or we were
# inside quotes (but aren't now) concat this token to the last param
# if the number of paired block tags is not the same, the depth has changed, so we calculate that here
# and may append the current token to the params (if we haven't previously done so)
# finally, if we're at zero depth for all blocks and not inside quotes, and have not
# yet appended anything to the list of params, we do so now
# if this was the last token in the list, and we have more than
# one item (meaning we split on newlines), add a newline back here
# to preserve the original structure
# If we're done and things are not at zero depth or we're still inside quotes,
# raise an error to indicate that the args were unbalanced
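The coalescing described above can be sketched as follows (a simplified illustration that tracks only jinja2 block depth; the real parser also handles quotes, newlines, and line continuations):

```python
def split_preserving_jinja(args):
    """Split on spaces, but keep jinja2 {{ }}, {% %}, {# #} blocks intact."""
    params = []
    print_depth = block_depth = comment_depth = 0
    for token in args.split(' '):
        glued = False
        if print_depth or block_depth or comment_depth:
            # inside a jinja2 block: glue this token onto the previous param
            params[-1] = params[-1] + ' ' + token
            glued = True
        # adjust depths by counting paired open/close markers in this token
        print_depth += token.count('{{') - token.count('}}')
        block_depth += token.count('{%') - token.count('%}')
        comment_depth += token.count('{#') - token.count('#}')
        if not glued:
            params.append(token)
    return params
```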
# (c) 2014 Michael DeHaan, <michael@ansible.com>
# modules formatted for user msg
# For filtering out modules correctly below, use all permutations
# delayed local imports to prevent circular import
# store the valid Task/Handler attrs for quick access
# HACK: why are these not FieldAttributes on task with a post-validate to check usage?
# final args are the ones we'll eventually return, so first update
# them with any additional args specified, which have lower priority
# than those which may be parsed/normalized next
# how we normalize depends if we figured out what the module name is
# yet.  If we have already figured it out, it's a 'new style' invocation.
# otherwise, it's not
# this can occasionally happen, simplify
# only internal variables can start with an underscore, so
# we don't allow users to set them directly in arguments
# finally, update the args we're going to return with the ones
# which were normalized above
# form is like: { xyz: { x: 2, y: 3 } }
# form is like: copy: src=a dest=b
# k=v parsing intentionally omitted
# this can happen with modules which take no params, like ping:
# form is like:  action: { module: 'copy', src: 'a', dest: 'b' }
# form is like:  action: copy src=a dest=b
# need a dict or a string, so giving up
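The shapes noted above can be sketched with a hypothetical normalizer (illustrative only; the real k=v parsing happens elsewhere, as the comment notes):

```python
def normalize_module_entry(module_name, value):
    """Normalize the two accepted shapes for a module entry's value."""
    if isinstance(value, dict):
        # form is like: { xyz: { x: 2, y: 3 } }
        return module_name, dict(value)
    if isinstance(value, str) or value is None:
        # form is like: copy: src=a dest=b  (or no params at all, like ping:)
        return module_name, {'_raw_params': value or ''}
    # need a dict or a string, so giving up
    raise TypeError('need a dict or a string, so giving up')
```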
# This is the standard YAML form for command-type modules. We grab
# the args and pass them in as additional arguments, which can/will
# be overwritten via dict updates from the other arg sources below
# We can have one of action, local_action, or module specified
# action
# an old school 'action' statement
# local_action
# local_action is similar but also implies a delegate_to
# module: <stuff> is the more new-style invocation
# filter out task attributes so we're only querying unrecognized keys as actions/modules
# walk the filtered input dictionary to see if we recognize a module name
# DTFIX-FUTURE: extract to a helper method, shared with Task.post_validate_args
# finding more than one module name is a problem
# if we didn't see any module in the task at all, it's not a task really
# there was one non-task action, but we couldn't find it
# Copyright 2015 Abhijit Menon-Sen <ams@2ndQuadrant.com>
# Components that match a numeric or alphanumeric begin:end or begin:end:step
# range expression inside square brackets.
# Components that match a 16-bit portion of an IPv6 address in hexadecimal
# notation (0..ffff) or an 8-bit portion of an IPv4 address in decimal notation
# (0..255) or an [x:y(:z)] numeric range.
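A pattern in the spirit described above might look like this (a simplified, hypothetical regex; the production patterns are stricter):

```python
import re

# matches [x:y] or [x:y:z] range expressions inside square brackets,
# with numeric or alphanumeric begin/end and an optional numeric step
RANGE_RE = re.compile(
    r'''\[
        [0-9a-z]+        # begin (numeric or alphanumeric)
        :
        [0-9a-z]*        # end (may be empty, e.g. [1:])
        (?::[0-9]+)?     # optional numeric step
    \]''', re.IGNORECASE | re.VERBOSE)

def has_range(pattern):
    return bool(RANGE_RE.search(pattern))
```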
# A hostname label, e.g. 'foo' in 'foo.example.com'. Consists of alphanumeric
# characters plus dashes (and underscores) or valid ranges. The label may not
# start or end with a hyphen or an underscore. This is interpolated into the
# hostname pattern below. We don't try to enforce the 63-char length limit.
# This matches a square-bracketed expression with a port specification. What
# is inside the square brackets is validated later.
# This matches a bare IPv4 address or hostname (or host pattern including
# [x:y(:z)] ranges) with a port specification.
# This matches an IPv4 address, but also permits range expressions.
# This matches an IPv6 address, but also permits range expressions.
# This expression looks complex, but it really only spells out the various
# combinations in which the basic unit of an IPv6 address (0..ffff) can be
# written, from :: to 1:2:3:4:5:6:7:8, plus the IPv4-in-IPv6 variants such
# as ::ffff:192.0.2.3.
# Note that we can't just use ipaddress.ip_address() because we also have to
# accept ranges in place of each component.
# This matches a hostname or host pattern including [x:y(:z)] ranges.
# We roughly follow DNS rules here, but also allow ranges (and underscores).
# In the past, no systematic rules were enforced about inventory hostnames,
# but the parsing context (e.g. shlex.split(), fnmatch.fnmatch()) excluded
# various metacharacters anyway.
# We don't enforce DNS length restrictions here (63 characters per label,
# 253 characters total) or make any attempt to process IDNs.
# First, we extract the port number if one is specified.
# What we're left with now must be an IPv4 or IPv6 address, possibly with
# numeric ranges, or a hostname with alphanumeric ranges.
# If it isn't any of the above, we don't understand it.
# If we get to this point, we know that any included ranges are valid.
# If the caller is prepared to handle them, all is well.
# Otherwise we treat it as a parse failure.
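The port-first parse described above can be sketched as follows (a hypothetical, simplified helper; bracketed and bare forms are handled separately, and what is inside the brackets would still be validated afterwards):

```python
import re

def split_host_port(pattern):
    """Return (host, port); port is None when not specified."""
    # bracketed form, e.g. [2001:db8::1]:22 (required for IPv6 with a port)
    m = re.match(r'^\[(.+)\](?::(\d+))?$', pattern)
    if m:
        return m.group(1), int(m.group(2)) if m.group(2) else None
    # bare form, e.g. host.example.com:22 (a single colon separates the port)
    m = re.match(r'^([^:\[\]]+):(\d+)$', pattern)
    if m:
        return m.group(1), int(m.group(2))
    return pattern, None
```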
# deprecated: description='Deprecate vault_secrets, it has no effect.' core_version='2.23'
# FUTURE: provide Ansible-specific top-level APIs to expose JSON and YAML serialization/deserialization to hide the error handling logic
# we first try to load this data as JSON.
# Fixes issues with extra vars json strings not being parsed correctly by the yaml parser
# DTFIX-FUTURE: how can we indicate in Origin that the data is in-memory only, to support context information -- is that useful?
# deprecated: description='enable deprecation of everything in this module', core_version='2.23'
# from ansible.utils.display import Display
# Display().deprecated(
# (c) 2014, James Tanner <tanner.jc@gmail.com>
# (c) 2016, Adrian Likins <alikins@redhat.com>
# (c) 2016 Toshio Kuratomi <tkuratomi@ansible.com>
# See also CIPHER_MAPPING at the bottom of the file which maps cipher strings
# (used in VaultFile header) to a cipher class
# Make sure we have a byte string and that it only contains ascii
# bytes.
# The vault format is pure ascii so if we failed to encode to bytes
# via ascii we know that this is not vault data.
# Similarly, if it's not a string, it's not vault data
# read the header and reset the file stream to where it started
# Only attempt to find vault_id if the vault file is version 1.2 or newer
# if self.b_version == b'1.2':
# DTFIX7: possible candidate for propagate_origin
# used by decrypt
# If we specify a vault_id, use format version 1.2. For no vault_id, stick to 1.1
# SPLIT SALT, DIGEST, AND DATA
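The split above follows the vault 1.1 payload layout: the hex-decoded body is newline-separated, hex-encoded salt, HMAC digest, and ciphertext. A sketch (layout assumed from the format description; not the production code):

```python
import binascii

def split_vault_payload(b_vaulttext):
    """Split a hex-envelope vault body into (salt, digest, ciphertext)."""
    # the outer layer is hex; peeling it off reveals three hex fields
    b_salt_hex, b_digest_hex, b_data_hex = binascii.unhexlify(b_vaulttext).split(b'\n', 2)
    return (binascii.unhexlify(b_salt_hex),
            binascii.unhexlify(b_digest_hex),
            binascii.unhexlify(b_data_hex))
```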
# FIXME: ? that seems wrong... Unset etc?
# Make sure the passwords match by comparing them all to the first password
# enforce no newline chars at the end of passwords
# FIXME: more specific exception
# if password script is 'something-client' or 'something-client.[sh|py|rb|etc]'
# script_name can still have '.' or could be entire filename if there is no ext
# TODO: for now, this is entirely based on filename
# we unfrack but do not follow the full path/context to the possible vault script
# so when the script uses an 'adjacent' file for configuration or similar
# it still works (as inventory scripts often also do).
# while files from --vault-password-file are already unfracked, other sources are not
# is it a script?
# this is special script type that handles vault ids
# TODO: pass vault_id_name to script via cli
# just a plain vault password script. No args, returns a byte array
# TODO: mv these classes to a separate file so we don't pollute vault with 'subprocess' etc
# We could load from file here, but that is eventually a pain to test
# TODO: replace with use of self.loader
# STDERR not captured to make it easier for users to prompt for input in their scripts
# raise exception?
# See if the --encrypt-vault-id matches a vault-id
# return the best match for --encrypt-vault-id
# If we specified an encrypt_vault_id and we couldn't find it, don't
# fall back to using the first/best secret
# Find the best/first secret from secrets since we didn't specify otherwise
# ie, consider all the available secrets as matches
# may be an empty list without any tuples
# encrypt data
# format the data for output to the file
# enforce vaulttext is str/bytes, keep type check if removing type conversion
# create the cipher object, note that the cipher used for decrypt can
# be different than the cipher used for encrypt
# WARNING: Currently, the vault id is not required to match the vault id in the vault blob to
# iterate over all the applicable secrets (all of them by default) until one works...
# if we specify a vault_id, only the corresponding vault secret is checked and
# we check it first.
# Not adding the other secrets to vault_secret_ids enforces a match between the vault_id from the vault_text and
# the known vault secrets.
# Add all of the known vault_ids as candidates for decrypting a vault.
# for vault_secret_id in vault_secret_ids:
# secret = self.secrets[vault_secret_id]
# TODO: it may be more useful to just make VaultSecrets and index of VaultLib objects...
# TODO: mv shred file stuff to its own class
# avoid work when file was empty
# get a random chunk of data, each pass with other length
# FIXME remove this assert once we have unittests to check its accuracy
# file is already gone
# shred is not available on this system, or some other error occurred.
# ValueError caught because macOS El Capitan is raising an
# exception big enough to hit a limit in python2-2.7.11 and below.
# Symptom is ValueError: insecure pickle when shred is not
# installed there.
# we could not successfully execute unix shred; therefore, do custom shred.
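The custom shred fallback can be sketched as follows (a minimal illustration; the chunk sizing and pass count here are arbitrary choices, not the real implementation's):

```python
import os

def shred_file(path, passes=3):
    """Overwrite `path` with random data a few times, then unlink it."""
    file_len = os.path.getsize(path)
    if file_len > 0:  # avoid work when the file is empty
        max_chunk_len = min(1024 * 1024, file_len)
        for _ in range(passes):
            with open(path, 'wb') as fh:
                remaining = file_len
                while remaining > 0:
                    # write a fresh random chunk each time
                    chunk = os.urandom(min(max_chunk_len, remaining))
                    fh.write(chunk)
                    remaining -= len(chunk)
                fh.flush()
                os.fsync(fh.fileno())
    os.remove(path)
```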
# Create a tempfile
# if an error happens, destroy the decrypted file
# drop the user into an editor on the tmp file
# Do nothing if the content has not changed
# encrypt new data and write out to tmp
# An existing vaultfile will always be UTF-8,
# so decode to unicode here
# shuffle tmp file into place
# always shred temp, jic
# '-' is special to VaultEditor, don't expand it.
# A file to be encrypted into a vaultfile could be any encoding
# so treat the contents as a byte string.
# follow the symlink
# FIXME: If we can raise an error here, we can probably just make it
# behave like edit instead.
# vault or yaml files are always utf8
# vaulttext gets converted back to bytes, but alas
# TODO: return the vault_id that worked?
# Figure out the vault id from the file, to select the right secret to re-encrypt it
# (duplicates parts of decrypt, but alas...)
# vault id here may not be the vault id actually used for decrypting
# as when the edited file has no vault-id but is decrypted by non-default id in secrets
# (vault_id=default, while a different vault-id decrypted)
# we want to get rid of files encrypted with the AES cipher
# Keep the same vault-id (and version) as in the header
# FIXME/TODO: make this use VaultSecret
# This is more or less an assert, see #18247
# FIXME: VaultContext...?  could rekey to a different vault_id in the same VaultSecrets
# Need a new VaultLib because the new vault data can be a different
# vault lib format or cipher (for ex, when we migrate 1.0 style vault data to
# 1.1 style data we change the version and the cipher). This is where a VaultContext might help
# the new vault will only be used for encrypting, so it doesn't need the vault secrets
# (we will pass one in directly to encrypt)
# preserve permissions
# TODO: add docstrings for arg types since this code is picky about that
# FIXME: do we need this now? data_bytes should always be a utf-8 byte string
# check if we have a file descriptor instead of a path
# if passed descriptor, use that to ensure secure access, otherwise it is a string.
# assumes the fd is securely opened by caller (mkstemp)
# get a ref to either sys.stdout.buffer for py3 or plain old sys.stdout for py2
# We need sys.stdout.buffer on py3 so we can write bytes to it since the plaintext
# of the vaulted object could be anything/binary/etc
# file names are insecure and prone to race conditions, so remove and create securely
# setting a new umask returns the previous one
# create file with secure permissions
# now write to the file and ensure ours is only data in it
# Make sure the file descriptor is always closed and reset umask
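The umask/descriptor dance described above, as a sketch (a hypothetical helper; `O_EXCL` right after the unlink guarantees the file we open is the one we just created):

```python
import os

def write_secure(path, data):
    """Remove any existing file and recreate it with restrictive permissions."""
    if os.path.exists(path):
        os.remove(path)  # file names are racy; remove and recreate securely
    old_umask = os.umask(0o077)  # setting a new umask returns the previous one
    try:
        # O_EXCL ensures we created this file ourselves; mode 0600 keeps it private
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
        try:
            os.write(fd, data)
        finally:
            os.close(fd)
    finally:
        # always restore the umask, even on failure
        os.umask(old_umask)
```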
# overwrite dest with src
# old file 'dest' was encrypted, no need to _shred_file
# reset permissions if needed
# TODO: selinux, ACLs, xattr?
########################################
# http://www.daemonology.net/blog/2009-06-11-cryptographic-right-answers.html
# Note: strings in this class should be byte strings by default.
# Concurrent first-use by multiple threads will all execute the method body.
# 16 for AES 128, 32 for AES256
# AES is a 128-bit block cipher, so IVs and counter nonces are 16 bytes
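The key and IV sizes above fit a single PBKDF2 derivation: one call yields the cipher key, the HMAC signing key, and the 16-byte counter IV back to back. A sketch (PBKDF2-HMAC-SHA256; the iteration count is illustrative, not authoritative):

```python
import hashlib

def gen_key_initctr(b_password, b_salt, key_length=32, iv_length=16,
                    iterations=10000):
    """Derive (cipher_key, hmac_key, iv) from a password and salt."""
    derived = hashlib.pbkdf2_hmac('sha256', b_password, b_salt, iterations,
                                  dklen=2 * key_length + iv_length)
    b_key1 = derived[:key_length]                # cipher key (32 for AES256)
    b_key2 = derived[key_length:2 * key_length]  # HMAC signing key
    b_iv = derived[2 * key_length:]              # 16-byte CTR nonce/IV
    return b_key1, b_key2, b_iv
```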
# COMBINE SALT, DIGEST AND DATA
# Unnecessary but getting rid of it is a backwards incompatible vault
# format change
# b_key1, b_key2, b_iv = self._gen_key_initctr(b_password, b_salt)
# EXIT EARLY IF DIGEST DOESN'T MATCH
# http://codahale.com/a-lesson-in-timing-attacks/
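The linked timing-attack lesson is why this comparison must be constant time; a sketch using the stdlib helper instead of `==`:

```python
import hashlib
import hmac

def digests_match(b_key, b_ciphertext, b_expected_digest):
    """Constant-time check that the ciphertext's HMAC matches."""
    actual = hmac.new(b_key, b_ciphertext, hashlib.sha256).digest()
    # hmac.compare_digest avoids the early-exit that leaks timing info
    return hmac.compare_digest(actual, b_expected_digest)
```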
# TODO: would be nice if a VaultSecret could be passed directly to _decrypt_*
# though, likely needs to be python cryptography specific impl that basically
# creates a Cipher() with b_key1, a Mode.CTR() with b_iv, and a HMAC() with sign key b_key2
# Keys could be made bytes later if the code that gets the data is more
# naturally byte-oriented
# pylint: disable=unidiomatic-typecheck
# In 2.18 and earlier, vaulted values were not trusted.
# This maintains backwards compatibility with that.
# Additionally, supporting templating on vaulted values could be problematic for a few cases:
# 1) There's no way to compose YAML tags, so you can't use `!unsafe` and `!vault` together.
# 2) It would make composing `EncryptedString` with a possible future `TemplateString` more difficult.
# static, but implemented for simplicity/consistency
# use the utility method to ensure that origin tags are available
# raises if the ciphertext cannot be decrypted
# propagate source value tags plus VaultedValue for round-tripping ciphertext
# ciphertext has tags but value does not
# avoid wasteful raise/except of Marker when calling get_tag below
# main block fields containing the task lists
# for future consideration? this would be functionally
# similar to the 'else' clause for exceptions
# otherwise = FieldAttribute(isa='list')
# FIXME: these do nothing but augment the exception message; DRY and nuke
# If task._parent is the same as new_block, just replace it
# task may not be a direct child of new_block, search for the correct place to insert new_block
# import is here to avoid import loops
# we don't want the full set of attributes (the task lists), as that
# would lead to a serialize/deserialize loop
# if there was a serialized role, unpack it too
# omit self, and only get parent values
# If parent is static, we can grab attrs from the parent
# otherwise, defer to the grandparent
# Entries in the datastructure of a playbook may
# be either a play or an include statement
# set the loaders basedir
# check for errors and restore the basedir in case this error is caught and handled
# Parse the playbook entries. For plays, we simply parse them
# using the Play() object, and includes are parsed using the
# PlaybookInclude() object
# restore the basedir in case this error is caught and handled
# we're done, so restore the old basedir in the loader
# initialize the data loader and variable manager, which will be provided
# later when the object is actually loaded
# other internal params
# every object gets a random uuid:
# cache the datastructure internally
# the variable manager class is used to manage and merge variables
# down to a single dictionary for reference in templating, etc.
# the data loader class is used to parse data from strings and files
# call the preprocess_data() function to massage the data into
# something we can more easily parse, and then call the validation
# function on it to ensure there are no incorrect key values
# Walk all attributes in the class. We sort them based on their priority
# so that certain fields can be loaded before others, if they are dependent.
# copy the value over unless a _load_field method is defined
# run early, non-critical validation
# return the constructed object
# walk all fields in the object
# run validator only if present
# and make sure the attribute is of the type it should be
# module_defaults do not use the 'collections' keyword, so actions and
# action_groups that are not fully qualified are part of the 'ansible.legacy'
# collection. Update those entries here, so module_defaults contains
# fully qualified entries.
# The resolved action_groups cache is saved on the current Play
# If the defaults_entry is an ansible.legacy plugin, these defaults
# are inheritable by the 'ansible.builtin' subset, but are not
# required to exist.
# Should never happen, but handle gracefully by returning None, just in case
# Check if the group has already been resolved and cached
# The collection may or may not use the fully qualified name
# Don't fail if the group doesn't exist in the collection
# Everything should be a string except the metadata entry
# Bad entries may have triggered a warning above, but prevent tracebacks by setting the value back to the acceptable type.
# The collection may or may not use the fully qualified name.
# If not, it's part of the current collection.
# Resolve extended groups last, after caching the group in case they recursively refer to each other
# if the ds value was set on the object, copy it to the new copy too
# special value, which may be an integer or float
# with an optional '%' at the end
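Parsing that special value can be sketched as follows (a hypothetical helper for illustration):

```python
def parse_percent(value):
    """Return (number, is_percent) for values like 3, '30%', or '0.5%'."""
    if isinstance(value, (int, float)):
        return float(value), False
    s = str(value).strip()
    if s.endswith('%'):
        # optional '%' at the end marks a percentage
        return float(s[:-1]), True
    return float(s), False
```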
# setting to sentinel will trigger 'default/default()' on getter
# mostly playcontext as only tasks/handlers/blocks really resolve parent
# and assign the massaged value back to the attribute field
# DTFIX-FUTURE: this can probably be used in many getattr cases below, but the value may be out-of-date in some cases
# we save this original (likely Origin-tagged) value to pass as `obj` for errors
# we don't template 'vars' but allow template as values for later use
# import_role
# normal field attributes should not go through post validation on import_role/import_tasks
# only import_role is checked here because import_tasks never reaches this point
# Skip post validation unless always_post_validate is True, or the object requires post validation.
# Intermediate objects like Play() won't have their fields validated by
# default, as their values are often inherited by other objects and validated
# later, so we don't want them to fail out early
# Run the post-validator if present. These methods are responsible for
# using the given templar to template the values, if required.
# if the attribute contains a variable, template it now
# If this evaluated to the omit value, set the value back to inherited by context
# or default specified in the FieldAttribute and move on
# returning the value results in assigning the massaged value back to the attribute field
# no useful information to contribute, raise the original exception
# Due to where _extend_value may run for some attributes
# it is possible to end up with Sentinel in the list of values
# ensure we strip them
# overridden dump_attrs in derived types may dump attributes which are not field attributes
# from_attrs is only used to create a finalized task
# from attrs from the Worker/TaskExecutor
# Those attrs are finalized and squashed in the TE
# and controller side use needs to reflect that
# serialize the uuid field
# restore the UUID field
# connection/transport
# module default params
# flags and misc. settings
# explicitly invoke a debugger on tasks
# Privilege escalation
# used to hold sudo/su stuff
# type: list[str]
# inside role: add the dependency chain from current to dependent
# add path of task itself, unless it is already in the list
# Copyright: (c) 2019, Ansible Project
# Will be None when used as the default
# FIXME: exclude role tasks?
# if there's something in the list, ensure that builtin or legacy is always there too
# this needs to be populated before we can resolve tasks/roles/etc
# We are always a mixin with Base, so we can validate this untemplated
# field early on to guarantee we are dealing with a list.
# this will only be called if someone specified a value; call the shared value
# don't return an empty collection list, just return None
# from ansible.inventory.host import Host
# if the task result was skipped or failed, continue
# get search path for this task to pass to lookup plugins that may be used in pathing to
# the included file
# ensure basedir is always in (dwim already searches here but we need to display it)
# handle relative includes by walking up the list of parent include
# tasks and checking the relative result to see if it exists
# FUTURE: Since the parent include path has already been resolved, it should be used here.
# may throw OSError
# or select the task file if it exists
# template the included role's name here
# pos is relative to idx since we are slicing
# use idx + pos due to relative indexing
# The host already exists for this include, advance forward, this is a new include
# Copyright The Ansible project
# directly assigned
# used to populate from dict in role
# assigned to matching property
# all valid args
# =================================================================================
# ATTRIBUTES
# private as this is a 'module options' vs a task property
# only need play passed in when dynamic
# build role
# add role to play
# save this for later use
# compile role with parent roles as dependencies to ensure they inherit
# collections value is not inherited; override with the value we calculated during role setup
# HACK: parent inheritance doesn't seem to have a way to handle this intermediate override until squashed/finalized
# update available handlers in play
# Validate options
# name is needed, or use role as alias
# validate bad args, otherwise we silently ignore
# build options for role include/import tasks
# apply is only valid for includes, not imports as they inherit directly
# manual list as otherwise the options would set other task parameters we don't want.
# load_<attribute_name> and
# validate_<attribute_name>
# will be used if defined
# might be possible to define others
# NOTE: ONLY set defaults on task attributes that are not inheritable,
# inheritance is only triggered if the 'current value' is Sentinel,
# default can be set at play/top level object and inheritance will take its course.
# default is set in TaskExecutor
# deprecated, used to be loop and loop_args but loop has been repurposed
# some strategies may trigger this error when templating task.action, but backstop here if not
# always str or None
# TaskArgsFinalizer performs more thorough type checking, but this provides a friendlier error message for a subset of detected cases.
# FUTURE: validate meta and return an enum instead of a str
# meta currently does not support being templated, so we can cheat
# the new, cleaned datastructure, which will have legacy items reduced to a standard structure suitable for the
# attributes of the task class; copy any tagged data to preserve things like origin
# since this affects the task action parsing, we have to resolve in preprocess instead of in typical validator
# use the parent value if our ds doesn't define it
# Validate this untemplated field early on to guarantee we are dealing with a list.
# This is also done in CollectionSearch._load_collections() but this runs before that call.
# FIXME: and not a collections role
# use the args parsing class to determine the action, args,
# and the delegate_to value from the various possible forms
# supported as legacy
# if the raised exception was created with obj=ds, it already includes the detail,
# so we don't need to add it and can just re-raise.
# But if it wasn't, we can add the yaml object now to get more detail
# we handle any 'vars' specified in the ds here, as we may
# be adding things to them below (special handling for includes).
# When that deprecated feature is removed, this can be too.
# _load_vars is defined in Base, and is used to load a dictionary
# or list of dictionaries in a standard way
# we don't want to re-assign these values, which were determined by the ModuleArgsParser() above
# transform into loop property
# FUTURE: kill this with fire
# skip this value
# ignore as fact gathering is required for 'env' facts
# NB: the environment FieldAttribute definition ensures that value is always a list
# vars are always inheritable, other attributes might not be for the parent but still should be for other ancestors
# this makes isdisjoint work for untagged
# default, tasks to run
# Check for tags that we need to skip
# Facts
# Variable Attributes
# Role Attributes
# Block (Task) Lists Attributes
# Flag/Setting Attributes
# Only validate 'hosts' if a value was passed in to original data set.
# Make sure each item in the sequence is a valid string
# The use of 'user' in the Play datastructure was deprecated to
# line up with the same change for Tasks, due to the fact that
# 'user' conflicted with the user module.
# this should never happen, but error out with a helpful message
# to the user if it does...
# DTFIX-FUTURE: these do nothing but augment the exception message; DRY and nuke
# avoid circular dep
# Don't insert tasks from ``import/include_role``, preventing
# duplicate execution at the wrong time
# create a block containing a single flush handlers meta
# task, so we can be sure to run handlers at certain points
# of the playbook execution
# Avoid calling flush_handlers in case the whole play is skipped on tags,
# this could be performance improvement since calling flush_handlers on
# large inventories could be expensive even if no hosts are notified
# since we call flush_handlers per host.
# Block.filter_tagged_tasks ignores evaluating tags on implicit meta
# tasks so we need to explicitly call Task.evaluate_tags here.
# NOTE keep flush_handlers tasks even if a section has no regular tasks,
# NB: higher priority numbers sort first
# NOTE this appears to be not used in the codebase,
# _get_attr_connection has been replaced by ConnectionFieldAttribute.
# Leaving it here for test_attr_method from
# test/units/playbook/test_base.py to pass and for backwards compat.
# NOTE this appears to be not needed in the codebase,
# leaving it here for test_attr_int_del from
# test/units/playbook/test_base.py to pass.
# see if SSH can support ControlPersist if not use paramiko
# if someone did `connection: persistent`, default it to using a persistent paramiko connection to avoid problems
# TODO: remove
# TODO: ???
# connection fields, some are inherited from Base:
# (connection, port, remote_user, environment, no_log)
# FIXME: docker - remove these
# privilege escalation fields
# "PlayContext.force_handlers should not be used, the calling code should be using play itself instead"
# Note: play is really not optional.  The only time it could be omitted is when we create
# a PlayContext just so we can invoke its deserialize method to load it from a serialized
# data source.
# a file descriptor to be used during locking operations
# set options before play to allow play to override them
# generic derived from connection plugin, temporary for backwards compat, in the end we should not set play_context properties
# get options for plugins
# From the command line.  These should probably be used directly by plugins instead
# For now, they are likely to be moved to FieldAttribute defaults
# Else default
# Not every cli that uses PlayContext has these command line args so have a default
# loop through a subset of attributes on the task object and set
# connection fields based on their values
# next, use the MAGIC_VARIABLE_MAPPING dictionary to update this
# connection info object with 'magic' variables from the variable list.
# If the value 'ansible_delegated_vars' is in the variables, it means
# we have a delegated-to host, so we check there first before looking
# at the variables in general
# In the case of a loop, the delegated_to host may have been
# templated based on the loop variable, so we try and locate
# the host name in the delegated variable dictionary here
# make sure this delegated_to host has something set for its remote
# address, otherwise we default to connecting to it by name. This
# may happen when users put an IP entry into their inventory, or if
# they rely on DNS for a non-inventory hostname
# reset the port back to the default if none was specified, to prevent
# the delegated host from inheriting the original host's setting
# and likewise for the remote user
# setup shell
# if delegation task ONLY use delegated host vars, avoid delegated FOR host vars
# no else, as no other vars should be considered
# become legacy updates -- from inventory file (inventory overrides
# commandline)
# make sure we get port defaults if needed
# special overrides for the connection setting
# in the event that we were using local before make sure to reset the
# connection type to the default transport for the delegated-to host,
# if not otherwise specified
# we store original in 'connection_user' for use of network/other modules that fallback to it as login user
# connection_user to be deprecated once connection=local is removed, as local resets remote_user
# for case in which connection plugin still uses pc.remote_addr and in its own options
# specifies 'default: inventory_hostname', but never added to vars:
# we import here to prevent a circular dependency with imports
# Implicit blocks are created by bare tasks listed in a play without
# an explicit block statement. If we have two implicit blocks in a row,
# squash them down to a single block to save processing time later.
# Advance the iterator, so we don't repeat
# Loop both implicit blocks and block_ds as block_ds is the next in the list
# DTFIX-FUTURE: this *should* be unnecessary- check code coverage.
# check to see if this include is dynamic or static:
# we set a flag to indicate this include was static
# since we can't send callbacks here, we display a message directly in
# the same fashion used by the on_include callback. We also do it here,
# because the recursive nature of helper methods means we may be loading
# nested includes, and we want the include order printed correctly
# now we extend the tags on each of the included blocks
# FIXME - END
# FIXME: handlers shouldn't need this special handling, but do
# template the role name now, if needed
# uses compiled list from object
# passes task object itself for latter generation of list
# This check doesn't handle ``include`` as we have no idea at this point if it is static or not
# manually post_validate to get free arg validation/coercion
# import here to avoid a dependency loop
# first, we use the original parent method to correctly load the object
# via the load_data/preprocess_data system we normally use for other
# playbook objects
# then we use the object to load a Playbook
# check for FQCN
# not FQCN try path
# might still be collection playbook
# it is a collection playbook, setup default collections
# it is NOT a collection playbook, setup adjacent paths
# broken, see: https://github.com/ansible/ansible/issues/85357
# finally, update each loaded playbook entry with any variables specified
# on the included playbook and/or any tags which may have been set
# conditional includes on a playbook need a marker to skip gathering
# Check to see if we need to forward the conditionals on to the included
# plays. If so, we can take a shortcut here and simply prepend them to
# those attached to each block (if any)
# NOTE: This import is only needed for the type-checking in __init__. While there's an alternative
# TODO: this should be a utility function, but can't be a member of
# Any container is unhashable if it contains unhashable items (for
# instance, tuple() is a Hashable subclass but if it contains a dict, it
# cannot be hashed)
# Optimistically hope the contents are all hashable
# Hash each entry individually
# This is just a guess.
# Note: We do not handle unhashable scalars but our only choice would be
# to raise an error there anyway.
# includes (static=false) default to private, while imports (static=true) default to public
# but both can be overridden by global config if set
# Indicates whether this role was included via include/import_role
# Purposefully using realpath for canonical path
# TODO: need to fix cycle detection in role load (maybe use an empty dict
# see https://github.com/ansible/ansible/issues/61527
# Using the role path as a cache key is done to improve performance when a large number of roles
# are in use in the play
# copy over all field attributes from the RoleInclude
# update self._attr directly, to avoid squashing
# vars and default vars are regular dictionaries
# load the role's other files, if they exist
# reset collections list; roles do not inherit collections from parents, just use the defaults
# FUTURE: use a private config default for this so we can allow it to be overridden later
# configure plugin/collection loading; either prepend the current role's collection or configure legacy plugin loading
# FIXME: need exception for explicit ansible.legacy?
# this is a collection-hosted role
# this is a legacy role, but set the default collection if there is one
# legacy role, ensure all plugin dirs under the role are added to plugin search path
# collections can be specified in metadata for legacy or collection-hosted roles
# if any collections were specified, ensure that core or legacy synthetic collections are always included
# default append collection is core for collection-hosted roles, legacy for others
# Note: _load_role_yaml() takes care of rebuilding the path.
# We did not find the meta/argument_specs.[yml|yaml] file, so use the spec
# dict from the role meta data, if it exists. Ansible 2.11 and later will
# have the 'argument_specs' attribute, but earlier versions will not.
# Determine the role entry point so we can retrieve the correct argument spec.
# This comes from the `tasks_from` value to include_role or import_role.
# Prepend our validate_argument_spec action to happen before any tasks provided by the role.
# 'any tasks' can and does include 0 or None tasks, in which case we create a list of tasks and add our
# validate_argument_spec task
# If the arg spec provides a short description, use it to flesh out the validation task name
# Pass only the 'options' portion of the arg spec to the module.
# Valid extensions and ordering for roles is hard-coded to maintain portability
# same as default for YAML_FILENAME_EXTENSIONS
# look for files w/o extensions before/after bare name depending on it being set or not
# keep 'main' as original to figure out errors if no files found
# not really 'find_vars_files' but find_files_with_extensions_default_to_yaml_filename_extensions
# found data so no need to continue unless we want to merge
# this won't trigger with default only when <subdir>_from is specified
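The extension-ordered file search described in these comments can be sketched roughly as follows (the function name and extension list are illustrative, not the real implementation):

```python
import os


def find_role_file(basedir, main='main', extensions=('.yml', '.yaml', '.json', '')):
    """Try each valid extension in order and return the first existing file."""
    for ext in extensions:
        candidate = os.path.join(basedir, main + ext)
        if os.path.isfile(candidate):
            return candidate
    # return None so callers can report which basename was not found
    return None
```

The hard-coded ordering means behavior is identical across platforms regardless of directory listing order.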
# other functions
# get role_vars: from parent objects
# TODO: is this right precedence for inherited role_vars?
# get exported variables from meta/dependencies
# Avoid rerunning dupe deps since they can have vars from previous invocations and they accumulate in deps
# TODO: re-examine dep loading to see if we are somehow improperly adding the same dep too many times
# only take 'exportable' vars from deps
# role_vars come from vars/ in a role
# include_params are 'inline variables' in role invocation. - {role: x, varname: value}
# TODO: add deprecation notice
# these come from vars: keyword in role invocation. - {role: x, vars: {varname: value}}
# Do not recreate this list each time ``get_handler_blocks`` is called.
# Cache the results so that we don't potentially overwrite with copied duplicates
# ``get_handler_blocks`` may be called when handling ``import_role`` during parsing
# as well as with ``Play.compile_roles_handlers`` from ``TaskExecutor``
# update the dependency chain here
# gets the role name out of a repo URL like
# "http://git.example.com/repos/repo.git" => "repo"
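A rough sketch of that URL-to-role-name derivation (function name hypothetical; the real helper also understands other galaxy spec formats):

```python
def repo_url_to_role_name(repo_url):
    """Derive a role name from a repo URL, e.g. '.../repo.git' => 'repo'."""
    # take the last path component and strip a trailing .git suffix
    name = repo_url.rstrip('/').split('/')[-1]
    if name.endswith('.git'):
        name = name[:-len('.git')]
    # galaxy-style 'src,version,name' specs: keep only the part before the comma
    name = name.split(',')[0]
    return name
```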
# New style: { src: 'galaxy.role,version,name', other_vars: "here" }
# FIXME: consolidate with ansible-galaxy to keep this in sync
# role_def is new style: { src: 'galaxy.role,version,name', other_vars: "here" }
# if the calling role has a collections search path defined, consult it
# if the calling role is a collection role, ensure that its containing collection is searched first
# ensure fallback role search works
# def __repr__(self):
# role names that are simply numbers can be parsed by PyYAML
# as integers even when quoted, so turn it into a string type
# save the original ds for use later
# first we pull the role name out of the data structure,
# and then use that to determine the role path (which may
# result in a new role name, if it was a file path)
# next, we split the role params out from the valid role
# attributes and update the new datastructure with that
# result and the role name
# set the role name in the new ds
# we store the role path internally
# and return the cleaned-up data structure
# if we have the required datastructures, and if the role_name
# contains a variable, try and template it now
# create a templar class to template the dependency names, in
# case they contain variables
# try to load as a collection-based role first
# we found it, stash collection data and return the name/path tuple
# We didn't find a collection role, look in defined role paths
# FUTURE: refactor this to be callable from internal so we can properly order
# ansible.legacy searches with the collections keyword
# we always start the search for roles in the base directory of the playbook
# also search in the configured roles path
# next, append the roles basedir, if it was set, so we can
# search relative to that directory for dependent roles
# finally as a last resort we look in the current basedir as set
# in the loader (which should be the playbook dir itself) but without
# the roles/ dir appended
# now iterate through the possible paths and return the first one we find
# if not found elsewhere try to extract path from name
# use the list of FieldAttribute values to determine what is and is not
# an extra parameter for this role (or sub-class of this role)
# FIXME: hard-coded list of exception key names here corresponds to the
# this key does not match a field attribute, so it must be a role param
# this is a field attribute, so copy it over directly
# (c) 2014, Toshio Kuratomi <tkuratomi@ansible.com>
# Copyright: Contributors to the Ansible project
# config definition fields
# DTFIX-FUTURE: collapse this with the one in collection loader, once we can
# handle both int and bool (which is an int)
# use Decimal for all other source type conversions; non-zero mantissa is a failure
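The int/bool handling described above can be sketched like this (a simplification of the real conversion; `to_int` is a hypothetical name):

```python
from decimal import Decimal


def to_int(value):
    """Convert a config value to int.

    bool is an int subclass, so handle it explicitly first; all other source
    types go through Decimal, and a non-zero mantissa is a failure.
    """
    if isinstance(value, bool):
        return int(value)  # True -> 1, False -> 0
    if isinstance(value, int):
        return value
    dec = Decimal(str(value))
    integral = int(dec)
    if dec != integral:
        raise ValueError(f"cannot convert {value!r} to int without loss")
    return integral
```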
# FIXME: define and document a pass-through value_type (None, 'raw', 'object', '', ...) and then deprecate acceptance of unknown types
# return non-str values of unknown value_type as-is
# FIXME: see if this can live in utils/path
# allow users to force CWD using 'magic' {{CWD}}
# FIXME: generic file type?
# FIXME: eventually deprecate ini configs
# Note: In this case, warnings does nothing
# A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later
# We can't use None because we could set path to None.
# Environment setting
# Current working directory
# Working directory is world writable so we'll skip it.
# Still have to look for a file here, though, so that we know if we have to warn
# If we can't access cwd, we'll simply skip it as a possible config source
# Per user location
# System location
# Emit a warning if all the following are true:
# * We did not use a config from ANSIBLE_CONFIG
# * There's an ansible.cfg in the current working directory that we skipped
# type: list[tuple[str, dict[str, str]]]
# type: set[str]
# set config using ini
# consume configuration
# initialize parser and read config
# ensure we always have config def entry
# ensure we always have a default timeout
# Filter out empty strings and other non-truthy values, since an empty server list env var is equal to [''].
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
# template default values if possible
# NOTE: cannot use is_template due to circular dep
# FIXME: This really should be using an immutable sandboxed native environment, not just native environment
# TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD
# Currently this is only used with absolute paths to the `ansible/config` directory
# TODO: take list of files with merge/nomerge
# FIXME: this should eventually handle yaml config files
# elif ftype == 'yaml':
# ignore 'test' config entries, they should not change runtime behaviors
# only set if entry is defined in container
# deal with deprecation of setting source, if used
# use default config
# Note: sources that are lists are ordered from low to high precedence (last one wins)
# direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults
# Use 'variable overrides' if present, highest precedence, but only present when querying running play
# use playbook keywords if we have them
# automap to keywords
# TODO: deprecate these in favor of explicit keyword above
# avoid circular import .. until valid
# env vars are next precedence
# try config file entries next, if we have one
# attempt to read from config file
# load from config
# set value and origin
# set default if we got here w/o a value
# ensure correct type, can raise exceptions on mismatched types
# an empty env var for a non-string type, so we can fall back to the default
# deal with restricted values
# assume the worst!
# for a list type, check that all values in the list are allowed
# these should be only the simple data types (string, int, bool, float, etc) .. ignore dicts for now
# deal with deprecation of the setting
# TODO: choose to deprecate either singular or plural
# do not collapse a chained event with sub-events, since they would be lost
# do not collapse a chained event with different details, since they would be lost
# do not collapse a chained event which has a chain with a different msg_reason
# prefer a single-line message with help text when there is no source context
# DTFIX5: needs unit test coverage
# preserves tags since this is an instance of EncryptedString; if tags should be discarded from str, another entry will handle it
# DTFIX-FUTURE: this is really fragile- disordered/incorrect imports (among other things) can mess it up. Consider a hosting-env-managed context
# Copyright (c) 2013-2023, Graham Dumpleton
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright notice, this
# * Redistributions in binary form must reproduce the above copyright notice,
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
# copied from https://github.com/GrahamDumpleton/wrapt/blob/1.15.0/src/wrapt/wrappers.py
# LOCAL PATCHES:
# - disabled optional relative import of the _wrappers C extension; we shouldn't need it
# The following makes it easier for us to script updates of the bundled code
# We use properties to override the values of __module__ and
# __doc__. If we add these in ObjectProxy, the derived class
# __dict__ will still be setup to have string variants of these
# attributes and the rules of descriptors means that they appear to
# take precedence over the properties in the base class. To avoid
# that, we copy the properties into the derived class type itself
# via a meta class. In that way the properties will always take
# precedence.
# We similarly use a property for __dict__. We need __dict__ to be
# explicit to ensure that vars() works as expected.
# Need to also propagate the special __weakref__ attribute for the
# case where we are decorating classes which will define this. If we
# do not define it and a function like inspect.getmembers() is used on
# a decorated class, it will fail. This can't be in the derived classes.
# Copy our special properties into the class so that they
# always take precedence over attributes of the same name added
# during construction of a derived class. This is to save
# duplicating the implementation for them in all derived classes.
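The metaclass trick described above can be sketched as follows (a minimal sketch, not wrapt's actual code; only `__module__` and `__doc__` are shown):

```python
class _ProxyMethods:
    # properties forwarding to the wrapped object; defined once here and
    # copied into each proxy class by the metaclass below
    @property
    def __module__(self):
        return self.__wrapped__.__module__

    @property
    def __doc__(self):
        return self.__wrapped__.__doc__


class _ProxyMeta(type):
    def __new__(mcls, name, bases, namespace):
        # copy the properties into every class body so they shadow the plain
        # string __module__/__doc__ entries the class body itself creates
        for attr in ('__module__', '__doc__'):
            namespace[attr] = vars(_ProxyMethods)[attr]
        return super().__new__(mcls, name, bases, namespace)


class Proxy(metaclass=_ProxyMeta):
    def __init__(self, wrapped):
        object.__setattr__(self, '__wrapped__', wrapped)


def target():
    """docstring from the wrapped function"""
```

Because the copy happens in the metaclass, every subclass of `Proxy` gets the properties again, so they keep precedence over the per-class string attributes.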
# Python 3.2+ has the __qualname__ attribute, but it does not
# allow it to be overridden using a property; it must instead
# be an actual string object.
# Python 3.10 onwards also does not allow it to be overridden
# using a property and it must instead be set explicitly.
# If we are being asked to look up '__wrapped__' then the
# '__init__()' method cannot have been called.
# This method is actually doing double duty for both unbound and
# bound derived wrapper classes. It should possibly be broken up
# and the distinct functionality moved into the derived classes.
# Can't do that straight away due to some legacy code which is
# relying on it being here in this base class.
# The distinguishing attribute which determines whether we are
# being called in an unbound or bound wrapper is the parent
# attribute. If binding has never occurred, then the parent will
# be None.
# First therefore, is if we are called in an unbound wrapper. In
# this case we perform the binding.
# We have one special case to worry about here. This is where we
# are decorating a nested class. In this case the wrapped class
# would not have a __get__() method to call. In that case we
# simply return self.
# Note that we otherwise still do binding even if instance is
# None and accessing an unbound instance method from a class.
# This is because we need to be able to later detect that
# specific case as we will need to extract the instance from the
# first argument of those passed in.
# Now we have the case of binding occurring a second time on what
# was already a bound function. In this case we would usually
# return ourselves again. This mirrors what Python does.
# The special case this time is where we were originally bound
# with an instance of None and we were likely an instance
# method. In that case we rebind against the original wrapped
# function from the parent again.
# If enabled has been specified, then evaluate it at this point
# and if the wrapper is not to be executed, then simply return
# the bound function rather than a bound wrapper for the bound
# function. When evaluating enabled, if it is callable we call
# it, otherwise we evaluate it as a boolean.
# This can occur where initial function wrapper was applied to
# a function that was already bound to an instance. In that case
# we want to extract the instance from the function and use it.
# This is generally invoked when the wrapped function is being
# called as a normal function and is not bound to a class as an
# instance method. This is also invoked in the case where the
# wrapped function was a method, but this wrapper was in turn
# wrapped using the staticmethod decorator.
# This is a special method use to supply information to
# descriptors about what the name of variable in a class
# definition is. Not wanting to add this to ObjectProxy as not
# sure of broader implications of doing that. Thus restrict to
# FunctionWrapper used by decorators.
# This is a special method used by isinstance() to make checks
# against an instance of the `__wrapped__` object.
# This is a special method used by issubclass() to make checks
# about inheritance of classes. We need to unwrap any object
# proxy. Not wanting to add this to ObjectProxy as not sure of
# broader implications of doing that. Thus restrict to
# We need to do things different depending on whether we are
# likely wrapping an instance method vs a static method or classmethod.
# This situation can occur where someone is calling the
# instancemethod via the class type and passing the instance
# as the first argument. We need to shift the args before
# making the call to the wrapper and effectively bind the
# instance to the wrapped function using a partial so the
# wrapper doesn't see anything as being different.
# As in this case we would be dealing with a classmethod or
# staticmethod, then _self_instance will only tell us whether
# the classmethod or staticmethod was called via an instance of the
# class it is bound to, and not the case where it was called via the
# class type itself. We thus ignore _self_instance
# and use the __self__ attribute of the bound function instead.
# For a classmethod, this means instance will be the class type
# and for a staticmethod it will be None. This is probably the
# more useful thing we can pass through even though we lose
# knowledge of whether they were called on the instance vs the
# class type, as it reflects what they have available in the
# decorated function.
# What it is we are wrapping here could be anything. We need to
# try and detect specific cases though. In particular, we need
# to detect when we are given something that is a method of a
# class. Further, we need to know when it is likely an instance
# method, as opposed to a class or static method. This can
# become problematic though as there isn't strictly a foolproof
# method of knowing.
# The situations we could encounter when wrapping a method are:
# 1. The wrapper is being applied as part of a decorator which
# is a part of the class definition. In this case what we are
# given is the raw unbound function, classmethod or staticmethod
# wrapper objects.
# The problem here is that we will not know we are being applied
# in the context of the class being set up. This becomes
# important later for the case of an instance method, because in
# that case we just see it as a raw function and can't
# distinguish it from wrapping a normal function outside of
# a class context.
# 2. The wrapper is being applied when performing monkey
# patching of the class type afterwards and the method to be
# wrapped was retrieved direct from the __dict__ of the class
# type. This is effectively the same as (1) above.
# 3. The wrapper is being applied when performing monkey
# patching of the class type afterwards and the method to be
# wrapped was retrieved from the class type. In this case
# binding will have been performed where the instance against
# which the method is bound will be None at that point.
# This case is a problem because we can no longer tell if the
# method was a static method, plus if using Python3, we cannot
# tell if it was an instance method as the concept of an
# unbound method no longer exists.
# 4. The wrapper is being applied when performing monkey
# patching of an instance of a class. In this case binding will
# have been performed where the instance was not None.
# This case is again a problem because we can no longer tell if the
# method was a static method.
# Overall, the best we can do is look at the original type of the
# object which was wrapped prior to any binding being done and
# see if it is an instance of classmethod or staticmethod. In
# the case where other decorators are between us and them, if
# they do not propagate the __class__ attribute so that the
# isinstance() checks work, then likely this will do the wrong
# thing where classmethod and staticmethod are used.
# Since it is likely to be very rare that anyone even puts
# decorators around classmethod and staticmethod, likelihood of
# that being an issue is very small, so we accept it and suggest
# that those other decorators be fixed. It is also only an issue
# if a decorator wants to actually do things with the arguments.
# As to not being able to identify static methods properly, we
# just hope that that isn't something people are going to want
# to wrap, or if they do suggest they do it the correct way by
# ensuring that it is decorated in the class definition itself,
# or patch it in the __dict__ of the class type.
# So to get the best outcome we can, whenever we aren't sure what
# it is, we label it as a 'function'. If it was already bound and
# that is rebound later, we assume that it will be an instance
# method and try and cope with the possibility that the 'self'
# argument is being passed as an explicit argument and shuffle
# the arguments around to extract 'self' for use as the instance.
# disabled support for native extension; we likely don't need it
# except ImportError:
# Helper functions for applying wrappers to existing functions.
# We can't just always use getattr() because in doing
# that on a class it will cause binding to occur which
# will complicate things later and cause some things not
# to work. For the case of a class we therefore access
# the __dict__ directly. To cope though with the wrong
# class being given to us, or a method being moved into
# a base class, we need to walk the class hierarchy to
# work out exactly which __dict__ the method was defined
# in, as accessing it from __dict__ will fail if it was
# not actually on the class given. Fallback to using
# getattr() if we can't find it. If it truly doesn't
# exist, then that will fail.
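The `__dict__` walk described above can be sketched as follows (hypothetical helper name):

```python
import inspect


def lookup_attribute(parent, name):
    """Find `name` on a class without triggering descriptor binding.

    Walk the MRO to locate the defining class and read the raw entry out
    of its __dict__; fall back to getattr() for non-class targets or when
    the attribute truly can't be found in any __dict__.
    """
    if inspect.isclass(parent):
        for klass in inspect.getmro(parent):
            if name in vars(klass):
                # the raw, unbound entry from the defining class
                return vars(klass)[name]
    return getattr(parent, name)
```

For classes this returns the plain function (or classmethod/staticmethod wrapper object), not a bound method, which is what a monkey-patching wrapper needs to see.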
# Function for applying a proxy object to an attribute of a class
# instance. The wrapper works by defining an attribute of the same name
# on the class which is a descriptor and which intercepts access to the
# instance attribute. Note that this cannot be used on attributes which
# are themselves defined by a property object.
# Functions for creating a simple decorator using a FunctionWrapper,
# plus short cut functions for applying wrappers to functions. These are
# for use when doing monkey patching. For a more featured way of
# creating decorators see the decorator decorator instead.
# A weak function proxy. This will work on instance methods, class
# methods, static methods and regular functions. Special treatment is
# needed for the method types because the bound method is effectively a
# transient object and applying a weak reference to one will immediately
# result in it being destroyed and the weakref callback called. The weak
# reference is therefore applied to the instance the method is bound to
# and the original function. The function is then rebound at the point
# of a call via the weak function proxy.
# This could raise an exception. We let it propagate back and let
# the weakref.proxy() deal with it, at which point it generally
# prints out a short error message direct to stderr and keeps going.
# We need to determine if the wrapped function is actually a
# bound method. In the case of a bound method, we need to keep a
# reference to the original unbound function and the instance.
# This is necessary because if we hold a reference to the bound
# function, it will be the only reference and given it is a
# temporary object, it will almost immediately expire and
# the weakref callback triggered. So what is done is that we
# hold a reference to the instance and unbound function and
# when called bind the function to the instance once again and
# then call it. Note that we avoid using a nested function for
# the callback here so as not to cause any odd reference cycles.
# We perform a boolean check here on the instance and wrapped
# function as that will trigger the reference error prior to
# calling if the reference had expired.
# If the wrapped function was originally a bound function, for
# which we retained a reference to the instance and the unbound
# function, we need to rebind the function and then call it. If
# not, we just call the wrapped function.
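The weak-method handling described above can be sketched like this (a simplified stand-in for wrapt's WeakFunctionProxy, without the ObjectProxy machinery):

```python
import weakref


class WeakFunctionProxy:
    """For a bound method, keep weak references to the instance and the
    unbound function, and rebind at call time; holding the transient bound
    method itself would let it expire almost immediately."""

    def __init__(self, wrapped, callback=None):
        if hasattr(wrapped, '__self__') and hasattr(wrapped, '__func__'):
            # bound method: reference the instance and the unbound function
            self.instance = weakref.ref(wrapped.__self__, callback)
            self.function = weakref.ref(wrapped.__func__, callback)
        else:
            # plain function (or staticmethod result)
            self.instance = None
            self.function = weakref.ref(wrapped, callback)

    def __call__(self, *args, **kwargs):
        function = self.function()
        if self.instance is None:
            if function is None:
                raise ReferenceError('weakly-referenced object no longer exists')
            return function(*args, **kwargs)
        instance = self.instance()
        if instance is None or function is None:
            raise ReferenceError('weakly-referenced object no longer exists')
        # bind the function to the instance once again, then call it
        return function.__get__(instance, type(instance))(*args, **kwargs)
```

Once the instance is garbage collected, calls through the proxy raise `ReferenceError` instead of silently keeping the object alive.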
# EncryptedString can hide a template
# DTFIX-FUTURE: once we start implementing nested scoped contexts for our own bookkeeping, this should be an interface facade that forwards to the nearest
# DTFIX-FUTURE: move this to an AmbientContext-derived TaskContext (once it exists)
# DTFIX-FUTURE: return a read-only list proxy instead
# DTFIX-FUTURE: the logic for omitting date/version doesn't apply to the payload, so it shows up in vars in some cases when it should not
# indeterminate has no resolved_name or type
# collections have a resolved_name but no type
# core deprecations from base classes (the API) have no plugin name, only 'ansible.builtin'
# seconds
# BSD 3 Clause License (see licenses/BSD-3-Clause.txt or https://opensource.org/license/bsd-3-clause/)
# Responses
# Constraints
# Requests
# NOTE: Classes below somewhat represent "Data Type Representations Used in the SSH Protocols"
# DTFIX-FUTURE: most of this isn't JSON specific, find a better home
# supports StateTrackingMixIn
# apply compatibility behavior
# key=None prevents state tracking from seeing the key as value
# handle EncryptedString conversion before more generic transformation and native conversions
# DTFIX-FUTURE: Visitor generally ignores dict/mapping keys by default except for debugging and schema-aware checking.
# DTFIX7: de-duplicate and optimize; extract inline generator expressions and fallback function or mapping for native type calculation?
# check mappings first, because they're also collections
# supported scalar type that requires no special handling, just return as-is
# apply shared instance default origin tag
# if not emitting unsafe markers, bypass custom unsafe serialization and just return the raw value
# decrypt and return the plaintext (or fail trying)
# backward compatibility with `ansible.module_utils.basic.jsonify`
# DTFIX5: does the variable visitor need to support conversion of sequence/mapping for inventory?
# legacy stdlib-compatible key behavior
# existing devel behavior
# always failed pre-2.18, so okay to include for consistency
# for VaultedValue tagged str
# equivalent to AnsibleJSONEncoder(preprocess_unsafe=True) in devel
# type: ignore[method-assign]  # legacy stdlib-compatible key behavior
# DTFIX7: these conversion args probably aren't needed
# NB: these can only be sampled properly when loading strings, eg, `json.loads`; the global `json.load` function does not expose the file-like to us
# noinspection PyProtectedMember
# convert tagged strings and path-like values to a native str
# Since VaultedValue stores the encrypted representation of the value on which it is tagged,
# it is incorrect to propagate the tag to a value which is not equal to the original.
# If the tag were copied to another value and subsequently serialized as the original encrypted value,
# the result would then differ from the value on which the tag was applied.
# Comparisons which can trigger an exception are indicative of a bug and should not be handled here.
# * When `src` is an undecryptable `EncryptedString` -- it is not valid to apply this tag to that type.
# * When `value` is a `Marker` -- this requires a templating, but vaulted values do not support templating.
# assume the tag was correctly applied to src
# same plaintext value, tag propagation with same ciphertext is safe
# different value, preserve the existing tag, if any
# FUTURE: consider deprecating some/all usages of this method; they generally imply a code smell or pattern we shouldn't be supporting
# PyYAML + libyaml barfs on str/bytes subclasses
# provide feature parity with the Python implementation (yaml.reader.Reader provides name)
# provided by the YAML parser, which retrieves it from the stream
# always an ordered dictionary on py3.7+
# Delegate to built-in implementation to construct the mapping.
# This is done before checking for duplicates to leverage existing error checking on the input node.
# Now that the node is known to be a valid mapping, handle any duplicate keys.
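The duplicate-key pass described above can be sketched as follows (a hypothetical helper operating on the already-validated (key, value) pairs of a mapping node):

```python
def find_duplicate_keys(pairs):
    """Return keys that appear more than once, in first-duplicate order."""
    seen = set()
    duplicates = []
    for key, _value in pairs:
        if key in seen and key not in duplicates:
            duplicates.append(key)
        seen.add(key)
    return duplicates
```

Running this after the built-in construction means malformed mappings (e.g. unhashable keys) are already rejected with the parser's own error messages before duplicates are reported.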
# Override the default string handling function
# to always return unicode objects
# DTFIX-FUTURE: is this to_text conversion still necessary under Py3?
# NB: since we're not context aware, this will happily add trust to dictionary keys; this is actually necessary for
# use a copied node to avoid mutating existing node and tripping the recursion check in construct_object
# repeat implicit resolution process to determine the proper tag for the value in the unsafe node
# re-entrant call using the correct tag
# non-deferred construction of hierarchical nodes so the result is a fully realized object, and so our stateful unsafe propagation behavior works
# the line number where the previous token has ended (plus empty lines)
# Add one so that the first line is line 1 rather than line 0
# volatile state var used during recursive construction of a value tagged unsafe
# automatically decrypts encrypted strings
# hide the underlying cause message, it's included by `handle_exception` as needed
# for these cases, we don't need to distinguish between None and empty string
# FIXME: Do all this by walking the parsed YAML doc stream. Using regexes is a dead-end; YAML's just too flexible to not have a
# noinspection PyUnboundLocalVariable
# unexpected exception, don't use special analysis of exception
# raised internally by ansible code, don't use special analysis of exception
# Check for tabs.
# There may be cases where there is a valid tab in a line that has other errors.
# That's OK, users should "fix" their tab usage anyway -- at which point later error handling logic will hopefully find the real issue.
# Check for unquoted templates.
# FIXME: Use the captured value to show the actual fix required.
# Check for common unquoted colon mistakes.
# ignore lines starting with only whitespace and a colon
# find the value after list/dict preamble
# ignore properly quoted values
# look for an unquoted colon in the value
# Check for common quoting mistakes.
# "foo" in bar
# FIXME: Use the captured value to show the actual fix required, and use that same logic to improve the origin further.
# "foo" and "bar"
# marked YAML error, pull out the useful messages while omitting the noise
# unexpected error, use the exception message (normally hidden by overriding include_cause_message)
# The plugin name was invalid or no plugin was found by that name.
# An unexpected exception occurred.
# dynamic container
# ensure list conversion occurs under the call context
# DTFIX-FUTURE: which name to use? PluginInfo?
# convert the args tuple to a list, since some plugins make a poor assumption that `run.args` is a list
# if the lookup doesn't understand `Marker` and there's at least one in the top level, short-circuit by returning the first one we found
# don't pass these through to the lookup
# for backwards compat, only trust constant templates in lookup terms
# Force lazy marker support on for this call; the plugin's understanding is irrelevant, as is any existing context, since this backward
# compat code always understands markers.
# since embedded template support is enabled, repeat the check for `Marker` on lookup_terms, since a template may render as a `Marker`
# The lookup context currently only supports the internal use-case where `first_found` requires extra info when invoked via `with_first_found`.
# The context may be public API in the future, but for now, other plugins should not implement this kind of dynamic behavior,
# though we're stuck with it for backward compatibility on `first_found`.
# DTFIX-FUTURE: Consider allowing/requiring lookup plugins to declare how their result should be handled.
# DTFIX-FUTURE: deprecate return types which are not a list
# DTFIX-FUTURE: convert this to the new error/warn/ignore context manager
# when wantlist=False the lookup result is either partially delaizified (single element) or fully delaizified (multiple elements)
# for backwards compatibility, attempt to join `ran` into single string
# for backwards compatibility, return `ran` as-is when the sequence contains non-string values
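The `wantlist=False` squash described above amounts to something like this sketch (`squash_lookup_result` is an illustrative name):

```python
def squash_lookup_result(ran, wantlist=False):
    """For backwards compatibility, join a lookup's result list into a
    single comma-separated string -- unless the caller asked for a list,
    or the sequence contains non-string values, in which case it is
    returned as-is (illustrative sketch of the behavior described above)."""
    if wantlist:
        return ran
    try:
        return ",".join(ran)
    except TypeError:
        # non-string values present; return the sequence unchanged
        return ran
```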
# deprecated: description='disable embedded templates by default and deprecate the feature' core_version='2.23'
# deprecated: description='embedded Jinja constant string template support' core_version='2.23'
# without an origin, we need to include what context we do have (the template)
# record only the first access for each deprecated tag in a given context
# when the current template input is a container, provide a descriptive string with origin propagated (if possible)
# DTFIX-FUTURE: ascend the template stack to try and find the nearest string source template
# DTFIX-FUTURE: this should probably use a synthesized description value on the tag
# -[DEPRECATION WARNING]: `something_old` is deprecated, don't use it! This feature will be removed in version 1.2.3.
# +[DEPRECATION WARNING]: While processing '<<container>>': `something_old` is deprecated, don't use it! This feature will be removed in ...
# scalar singletons (see jinja2.nodes.Name.can_assign)
# other
# unary operator always applicable to names
# AnsibleEnvironment overrides this default, so don't use the Jinja default here
# overridden by _dataclass_validation._inject_post_init_validation
# DTFIX-FUTURE: calculate default/non-default during __post_init__
# DTFIX-FUTURE: this is inefficient, use a compiled regex instead
# narrow the type specified by the base
# prevent Jinja from dumping vars in case this gets repr'd
# HACK: always include a sacrificial plain-dict on the bottom layer, since Jinja's debug and stacktrace rewrite code invokes
# `__setitem__` outside a call context; this will ensure that it always occurs on a plain dict instead of a lazy one.
# noinspection PyShadowingBuiltins
# this is a clone of Jinja's impl of derived, but using our lazy-aware _new_context
# DTFIX-FUTURE: this still isn't working reliably; something else must be keeping the template object alive
# DTFIX-FUTURE: contribute this upstream as a fix to Jinja's native support
# NB: This is slightly more efficient than Jinja's _output_const_repr, which generates a throw-away list instance to pass to join.
# NB: This is actually more efficient than Jinja's visit_Const, which contains obsolete (as of Py2.7/3.1) float conversion instance checks. Before
# if we have no context, Jinja's doing a nested compile at runtime (eg, import/include); historically, no backslash escaping is performed
# capture the previously raised exception
# raise the newly synthesized exception before capturing it
# always raise an AnsibleTemplateError/subclass
# DTFIX-FUTURE: Look through the TemplateContext hierarchy to find the most recent non-template
# Allow bitwise operations on int until bitwise filters are available.
# see: https://github.com/ansible/ansible/issues/85204
# DTFIX-FUTURE: bikeshed a name/mechanism to control template debugging
# type: ignore[assignment,arg-type]
# future Jinja releases may default-enable autoescape; force-disable to prevent the problems it could cause
# see https://github.com/pallets/jinja/blob/3.1.2/docs/api.rst?plain=1#L69
# the sandboxed environment limits range in ways that may cause us problems; use the real Python one
# Disabling the optimizer prevents compile-time constant expression folding, which would otherwise
# defeat our visit_Const recursive inline template expansion tricks: since Jinja is ignorant of our
# embedded templates, expressions containing them would be folded away as fully-constant,
# eg {{ "{{'hi'}}" == "hi" }}. As of Jinja ~3.1, this specifically avoids cases where the
# @optimizeconst visitor decorator performs constant folding, which bypasses our visit_Const impl
# and causes embedded templates to be lost.
# See also optimizeconst impl: https://github.com/pallets/jinja/blob/3.1.0/src/jinja2/compiler.py#L48-L49
# only present if debugging is enabled
# facilitate deletion of the temp file when template_obj is deleted
# deprecated: description="remove relaxed template sandbox mode support" core_version="2.23"
# DTFIX-FUTURE: optimization - we should pre-generate the default cached lexer before forking, not leave it to chance (e.g. simple playbooks)
# DTFIX-FUTURE: need better logic to handle non-list/non-dict inputs for args/kwargs
# compile_expression parses and passes the tree to from_string; for debug support, activate the context here to capture the intermediate results
# if debugging is enabled, use existing context when present (e.g., from compile_expression)
# this code is complemented by our tweaked CodeGenerator _output_const_repr that ensures that literal constants
# in templates aren't double-repr'd in the generated code
# In order to ensure that all markers are tripped, do a recursive finalize before we repr (otherwise we can end up
# repr'ing a Marker). This requires two passes, but avoids the need for a parallel reimplementation of all repr methods.
# return the first Marker encountered
# warnings will be issued when lookup terms processing occurs, to avoid false positives
# example template that uses this: "{{ some.thing }}" -- obj is the "some" dict, attribute is "thing"
# Both `_lookup` and `_query` handle arg proxying and `Marker` args internally.
# Performing either before calling them will interfere with that processing.
# Jinja's generated macro code handles Markers, so preemptive raise on Marker args and lazy retrieval should be disabled for the macro invocation.
# Preserve the ability to do `range(1000000000) | random` by not converting range objects to lists.
# Historically, range objects were only converted on Jinja finalize and filter outputs, so they've always been floating around in templating
# code and visible to user plugins.
# DTFIX-FUTURE: We should be able to determine if truncation occurred by having the code generator smuggle out the number of expected nodes.
# avoid yielding `None`-valued nodes to avoid literal "None" in stringified template results
# noinspection PyUnresolvedReferences
# don't propagate empty dictionary layers
# Omit values set to Jinja's internal `missing` sentinel; they are locals that have not yet been
# initialized in the current context, and should not be exposed to child contexts. e.g.: {% import 'a' as b with context %}.
# The `b` local will be `missing` in the `a` context and should not be propagated as a local to the child context we're creating.
# Even though we don't currently support templating globals, it's easier to ensure that everything is template-able rather than trying to
# pick apart the ChainMaps to enforce non-template-able globals, or to risk things that *should* be template-able not being lazified.
# ensure we have at least one layer (which should be lazy), since _flatten_and_lazify_vars eliminates most empty layers
# only return a ChainMap if we're combining layers, or we have none
# the `parent` cast is only to satisfy Jinja's overly-strict type hint
# Jinja passes these into filters/tests via @pass_environment
# DTFIX5: this should check all supported scalar subclasses, not just JSON ones (also, does the JSON serializer handle these cases?)
# we don't want to show the object value, and it can't be Origin-tagged; send the current template value for best effort
# DTFIX5: add tests to ensure this method doesn't drift from allowed types
# DTFIX-FUTURE: provide an optional way to check for trusted templates leaking out of templating (injected, but not passed through templar.template)
# prevent _JinjaConstTemplate from leaking into finalized results
# silently convert known mapping types to dict
# silently convert known sequence types to list
# this early return assumes handle_marker follows our variable type rules
# unsupported type (do not raise)
# early abort for disallowed types that would otherwise be handled below
# since isinstance checks are slower, this is separate from the exact type check above
# DTFIX-FUTURE: sanity test to ensure this doesn't drift from requirements
# deprecated: description="update the ansible.windows collection to inline this logic instead of calling this internal function" core_version="2.23"
# only inject the config default value if the variable wasn't set
# deprecated: description="remove the `_generate_ansible_managed` function and use a constant instead" core_version="2.23"
# IMPORTANT: These values must be constant strings to avoid template injection.
# apply Jinja2 patches before types are declared that are dependent on the changes
# Ansible doesn't set this argument or consume the attribute it is stored under.
# These are the methods StrictUndefined already intercepts.
# '__getitem__',  # using a custom implementation that propagates self instead
# These additional methods should be intercepted, even though they are not intercepted by StrictUndefined.
# CAUTION: This function is exposed in public API as ansible.template.get_first_marker_arg.
# DTFIX-FUTURE: find a home for this as a general-purpose utility method and expose it after some API review
# omit == obliterate - matches historical behavior where dict layers were squashed before templating was applied
# DTFIX-FUTURE: this enum ideally wouldn't exist - revisit/rename before making public
# DTFIX-FUTURE: these aren't really overrides anymore, rename the dataclass and this field
# inherit marker behavior from the active template context's templar unless otherwise specified
# Uncommon cases: zero length string and string containing only newlines
# DTFIX3: ensure that we're always accessing this as a shallow container-level snapshot, and eliminate uses of anything
# allow empty strings and integers
# DTFIX-FUTURE: once we settle the new/old API boundaries, rename this (here and in other methods)
# quickly ignore supported scalar types which are not to be templated
# let default Jinja marker behavior apply, since we're descending into a new template
# transforms are currently limited to non-str types as an optimization
# When the template result is Omit, raise an AnsibleValueOmittedError if value_for_omit is Omit, otherwise return value_for_omit.
# Other occurrences of Omit will simply drop out of containers during _finalize_template_result.
# trust that value_for_omit is an allowed type
# Use of stop_on_container implies the caller will perform necessary checks on values,
# most likely by passing them back into the templating system.
# non-lazy containers are returned as-is
# MarkerError is never suitable for use as the cause of another exception, it is merely a raiseable container for the source marker
# used for flow control (so its stack trace is rarely useful). However, if the source derives from a ExceptionMarker, its contained
# exception (previously raised) should be used as the cause. Other sources do not contain exceptions, so cannot provide a cause.
# when the exception to raise is the active exception, just re-raise it
# preserve the exception's cause, if any, otherwise no cause will be used
# always raise from something to avoid the currently active exception becoming __context__
# NOTE: Creating an overlay that lives only inside _compile_template means that overrides are not applied
# when templating nested variables, where Templar.environment is used, not the overlay. They are, however,
# applied to includes and imports.
# a template/expression compile error always results in a single node representing the compile error
# The low level calls above do not preserve the newline
# characters at the end of the input data, so we
# calculate the difference in newlines and append them
# to the resulting output for parity
# Using AnsibleEnvironment's keep_trailing_newline instead would
# result in change in behavior when trailing newlines
# would be kept also for included templates, for example:
# "Hello {% include 'world.txt' %}!" would render as
# "Hello world\n!\n" instead of "Hello world!\n".
# If the input string template was source-tagged and the result is not, propagate the source tag to the new value.
# This provides further contextual information when a template-derived value/var causes an error.
# best effort- if we can't, oh well
# Leading/trailing whitespace on conditional expressions is not a problem, except we can't tell if the expression is empty (which *is* a problem).
# Always strip conditional input strings. Neither conditional expressions nor all-template conditionals have legit reasons to preserve
# surrounding whitespace, and they complicate detection and processing of all-template fallback cases.
# deprecated backward-compatible behavior; None/empty input conditionals are always True
# this must follow `_strip_conditional_handle_empty`, since None/empty are coerced to bool (deprecated)
# because the input isn't a string, the result will never be a bool; the broken conditional warning in the caller will apply on the result
# Indirection of trusted expressions is always allowed. If the expression appears to be entirely wrapped in template delimiters,
# we must resolve it. e.g. `when: "{{ some_var_resolving_to_a_trusted_expression_string }}"`.
# Some invalid meta-templating corner cases may sneak through here (e.g., `when: '{{ "foo" }} == {{ "bar" }}'`); these will
# result in an untrusted expression error.
# not an expression
# The only allowed use of templates for conditionals is for indirect usage of an expression.
# Any other usage should simply be an expression, not an attempt at meta templating.
# Disable escape_backslashes when processing conditionals, to maintain backwards compatibility.
# This is necessary because conditionals were previously evaluated using {% %}, which was *NOT* affected by escape_backslashes.
# Now that conditionals use expressions, they would be affected by escape_backslashes if it was not disabled.
# DTFIX-FUTURE: we're only augmenting the message for context here; once we have proper contextual tracking, we can dump the re-raise
# FAIL_ON_MARKER_BEHAVIOR
# _DETONATE_MARKER_BEHAVIOR - internal singleton since it's the default and nobody should need to reference it, or make it an actual singleton
# no sense in making many instances...
# didn't exist; create it
# we ignore the token, since this should live for the life of the thread/async ctx
# example: hostvars
# example: hostvars.localhost | select
# example: range(20) | list  # triggered on retrieval of `range` type from globals
# example: range(20) | list  # triggered when returning a `range` instance from a call
# example: undef() | default("blah")
# example: ansible_facts.get | type_debug
# example: inventory_hostname.upper | type_debug  # using `startswith` to resolve `builtin_function_or_method`
# example: '{% import "importme.j2" as im %}{{ im | type_debug }}'
# There are several operations performed by lazy containers, with some variation between types.
# Columns: D=dict, L=list, T=tuple
# D  L  T  Feature      Description
# -  -  -  -----------  ---------------------------------------------------------------
# l  l  n  propagation  when container items which are containers become lazy instances
# l  l  n  transform    when transforms are applied to container items
# l  l  n  templating   when templating is performed on container items
# l  l  l  access       when access calls are performed on container items
# populated by __init_subclass__
# from AnsibleTaggedObject
# never revert to the native type when no tags remain
# Try to use exact type match first to determine which wrapper (if any) to apply; isinstance checks
# are extremely expensive, so try to avoid them for our commonly-supported types.
# Create a generator that yields the elements of `item` wrapped in a `_LazyValue` wrapper.
# The wrapper is used to signal to the lazy container that the value must be processed before being returned.
# Values added to the lazy container later through other means will be returned as-is, without any special processing.
# DTFIX-FUTURE: check relative performance of method-local vs stored generator expressions on implementations of this method
# type: ignore  # pylint: disable=unnecessary-dunder-call
# consumers of lazy collections rely heavily on the concrete types being final
# inefficient, but avoids mutating the current instance (to make debugging practical)
# We're using the base implementation, but must override `__iter__` to skip `dict` fast-path copy, which would bypass lazy behavior.
# See: https://github.com/python/cpython/blob/ffcc450a9b8b6927549b501eff7ac14abc238448/Objects/dictobject.c#L3861-L3864
# DTFIX-FUTURE: support preservation of laziness when possible like we do for list
# Both sides end up going through _proxy_or_render_lazy_value, so there's no Templar preservation needed.
# In the future this could be made more lazy when both Templar instances are the same, or if per-value Templar tracking was used.
# When other is lazy with a different templar/options, it cannot be lazily combined with self and a plain list must be returned.
# If other is a list, de-lazify both, otherwise just let the operation fail.
# For all other cases, the new list inherits our templar and all values stay lazy.
# We use list.__add__ to avoid implementing all its error behavior.
# DTFIX5: ensure we have tests that explicitly verify this behavior
# nonempty __slots__ not supported for subtype of 'tuple'
# allow pass through of omit for later handling after top-level finalize completes
# since config can't expand this yet, we need the post-processed version
# DTFIX-FUTURE: plumb through normal config fallback
# accepts a list of literal expressions (no templating), evaluates with no failure on undefined, returns all results
# On BSD based systems, alarm is implemented using setitimer.
# If out-of-bounds values are passed to alarm, they will return -1, which would be interpreted as an existing timer being set.
# To avoid that, bounds checking is performed in advance.
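The advance bounds check described above might look like this sketch (the bound constant and function name are assumptions, not the real implementation or platform limit):

```python
import signal

_MAX_ALARM_SECONDS = 2 ** 31 - 1  # illustrative bound, not the actual platform limit


def set_bounded_alarm(seconds: int) -> int:
    """Arm a SIGALRM after validating bounds first: on BSD-based systems
    alarm() is implemented via setitimer(), and out-of-bounds values make
    it return -1, indistinguishable from "a timer was already set"."""
    if not 0 < seconds <= _MAX_ALARM_SECONDS:
        raise ValueError(f"timeout of {seconds} seconds is out of bounds")
    return signal.alarm(seconds)
```

Note that `signal.alarm` is only available on Unix-like platforms.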
# execute the context manager's body
# no timeout to deal with, exit immediately
# Disable the alarm.
# If the alarm fires inside this finally block, the alarm is still disabled.
# This guarantees the cleanup code in the outer finally block runs without risk of encountering the `TaskTimeoutError` from the alarm.
# FUTURE: add sanity test to detect use of skip_on_ignore without Skippable (and vice versa)
# only mask a _SkipException, allow all others to raise
# skipping ignored action
# ErrorAction.IGNORE
# completed skippable action, ensures the `Skippable` context was used
# avoid circular import due to AnsibleError import
# a captured error provides its own cause event, it never has a normal __cause__
# deprecated: description='remove support for orig_exc (deprecated in 2.23)' core_version='2.27'
# pylint: disable=broad-except  # if config is broken, this can raise things other than ImportError
# encourage the use of `raise ... from` before deprecating `orig_exc`
# DTFIX-FUTURE: warn if failed is present and not a bool, or exception is present without failed being True
# translate non-ErrorDetail errors
# even though error detail was normalized, only return it if the result indicated failure
# DTFIX-FUTURE: cleanup/share width
# avoid circular import
# DTFIX-FUTURE: support referencing the column after the end of the target line, so we can indicate where a missing character (quote) needs to be added
# DTFIX-FUTURE: Implement line wrapping and match annotated line width to the terminal display width.
# message omitted since lack of line number is obvious from pos
# if near start of file
# universal newline default mode on `open` ensures we'll never see anything but \n
# mixed tab/space handling is intentionally disabled since we're both format and display config agnostic
# if nothing contributed `msg`, generate one from the exception messages
# shebang placeholder
# For the test-module.py script to tell this is an ANSIBALLZ_WRAPPER
# This code is part of Ansible, but is an independent component.
# The code in this particular templatable string, and this templatable string
# only, is BSD licensed.  Modules which end up using this snippet, which is
# dynamically combined together by Ansible still belong to the author of the
# module, and they may assign their own license to the complete work.
# Copyright (c), James Cammarata, 2016
# Copyright (c), Toshio Kuratomi, 2016
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Access to the working directory is required by Python when using pipelining, as well as for the coverage module.
# Some platforms, such as macOS, may not allow querying the working directory when using become to drop privileges.
# adjust soft limit subject to existing hard limit
# some platforms (eg macOS) lie about their hard limit
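The soft-limit adjustment described above can be sketched as follows (the function name is illustrative; the fallback handles platforms that report an unhonored hard limit):

```python
import resource


def raise_soft_nofile_limit(desired: int) -> int:
    """Raise the soft RLIMIT_NOFILE toward `desired`, capped at the current
    hard limit. Some platforms (eg macOS) report a hard limit they will not
    actually honor, so a failed setrlimit() falls back to the current soft
    limit instead of erroring. Returns the effective soft limit."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard != resource.RLIM_INFINITY:
        desired = min(desired, hard)  # adjust subject to the hard limit
    if desired > soft:
        try:
            resource.setrlimit(resource.RLIMIT_NOFILE, (desired, hard))
        except (OSError, ValueError):
            return soft  # the platform lied about its hard limit
        soft = desired
    return soft
```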
# For some distros and python versions we pick up this script in the temporary
# directory.  This leads to problems when the ansible module masks a python
# library that another import needs.  We have not figured out what about the
# specific distros and python versions causes this to behave differently.
# Tested distros:
# Fedora23 with python3.4  Works
# Ubuntu15.10 with python2.7  Works
# Ubuntu15.10 with python3.4  Fails without this
# Ubuntu16.04.1 with python3.5  Fails without this
# To test on another platform:
# * use the copy module (since this shadows the stdlib copy module)
# * Turn off pipelining
# * Make sure that the destination file does not exist
# * ansible ubuntu16-test -m copy -a 'src=/etc/motd dest=/var/tmp/m'
# This will traceback in shutil.  Looking at the complete traceback will show
# that shutil is importing copy which finds the ansible module instead of the
# stdlib module
# Some platforms don't set __file__ when reading from stdin
# OSX raises OSError if using abspath() in a directory we don't have
# permission to read (realpath calls abspath)
# Strip cwd from sys.path to avoid potential permissions issues
# When installed via setuptools (including python setup.py install),
# ansible may be installed with an easy-install.pth file.  That file
# may load the system-wide install of ansible rather than the one in
# the module.  sitecustomize is the only way to override that setting.
# py3: modlib_path will be text, py2: it's bytes.  Need bytes at the end
# Use a ZipInfo to work around zipfile limitation on hosts with
# clocks set to a pre-1980 year (for instance, Raspberry Pi)
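The ZipInfo workaround noted above can be sketched like this (the helper name and archive member path are illustrative): `zipfile` raises ValueError for file timestamps before 1980 (the ZIP epoch), so supplying an explicit ZipInfo with a valid date avoids consulting the host clock at all.

```python
import io
import zipfile


def add_to_zip_with_safe_date(archive: zipfile.ZipFile, name: str, data: bytes) -> None:
    # a fixed, valid date sidesteps hosts whose clocks predate 1980
    # (for instance, a Raspberry Pi without a battery-backed RTC)
    info = zipfile.ZipInfo(name, date_time=(1980, 1, 1, 0, 0, 0))
    archive.writestr(info, data)


buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    add_to_zip_with_safe_date(zf, "payload/module.py", b"print('hi')\n")
```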
# Put the zipped up module_utils we got from the controller first in the python path so that we
# can monkeypatch the right basic
# The code here normally doesn't run.  It's only used for debugging on the
# remote machine.
# The subcommands in this function make it easier to debug ansiballz
# modules.  Here's the basic steps:
# Run ansible with the environment variable: ANSIBLE_KEEP_REMOTE_FILES=1 and -vvv
# to save the module file remotely::
# Part of the verbose output will tell you where on the remote machine the
# module was written to::
# Log in to the remote machine and run the module file from the previous
# step with the explode subcommand to extract the module payload into
# source files::
# You can now edit the source files to instrument the code or experiment with
# different parameter values.  When you're ready to run the code you've modified
# (instead of the code from the actual zipped module), use the execute subcommand like this::
# Okay to use __file__ here because we're running from a kept file
# transform the ZIPDATA into an exploded directory of code and then
# print the path to the code.  This is an easy way for people to look
# at the code on the remote machine for debugging it in that
# environment
# write the args file
# Execute the exploded code instead of executing the module from the
# embedded ZIPDATA.  This allows people to easily run their modified
# code on the remote machine to see how changes will affect it.
# Set pythonpath to the debug dir
# read in the args file which the user may have modified
# See comments in the debug() method for information on debugging
# There's a race condition with the controller removing the
# remote_tmpdir and this module executing under async.  So we cannot
# store this in remote_tmpdir (use system tempdir instead)
# Only need to use [ansible_module]_payload_ in the temp_path until we move to zipimport
# (this helps ansible-test produce coverage stats)
# IMPORTANT: The real path must be used here to ensure a remote debugger such as PyCharm (using pydevd) can resolve paths correctly.
# (c) 2019 Ansible Project
# whether at least one coll_filter is a namespace-only filter
# deprecated: description='enable top-level facts deprecation' core_version='2.20'
# _DEPRECATE_TOP_LEVEL_FACT_TAG = _tags.Deprecated(
# return _DEPRECATE_TOP_LEVEL_FACT_TAG.tag(value)
# FIXME: this does not properly handle omit, undefined, or dynamic structure from templated `vars` ; templating should be done earlier
# If the basedir is specified as the empty string then it results in cwd being used.
# This is not a safe location to load vars from.
# load extra vars
# load fact cache
# bad cache plugin is not fatal error
# fallback to builtin memory cache plugin
# use FQCN to ensure the builtin version is used
# FIXME: this no longer does any tracking, only a slight optimization for empty new_data
# default for all cases
# avoid adhoc/console loading cwd
# get role defaults (lowest precedence)
# set basedirs
# should be default
# only option in 2.4.0
# preserves default basedirs, only option pre 2.3
# if we have a task in this context, and that task has a role, make
# sure it sees its defaults above any other roles, as we previously
# (v1) made sure each task had a copy of its roles default vars
# TODO: investigate why we need the play or include_role check?
# the 'all' group and the rest of the groups for a host, used below
# internal functions that actually do the work
# configurable functions that are sortable via config, remember to add to _ALLOWED if expanding this list
# Merge groups as per precedence config
# only allow to call the functions we want exposed
# host vars, from inventory, inventory adjacent and play adjacent via plugins
# finally, the facts caches for this host, if it exists
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
# push facts to main namespace
# always 'promote' ansible_local
# create a set of temporary vars here, which incorporate the extra
# and magic vars so we can properly template the vars_files entries
# NOTE: this makes them depend on host vars/facts so things like
# we assume each item in the list is itself a list, as we
# support "conditional includes" for vars_files, which mimics
# the with_first_found mechanism.
# now we iterate through the (potential) files, and break out
# as soon as we read one from the list. If none are found, we
# raise an error, which is silently ignored at this point.
# we continue on loader failures
# We now merge in all exported vars from all roles in the play (very high precedence)
# next, we merge in the vars from the role, which will specifically
# follow the role dependency chain, and then we merge in the tasks
# vars (which will look at parent blocks/task includes)
# next, we merge in the vars cache (include vars) and nonpersistent
# facts cache (set_fact/register), in that order
# include_vars non-persistent cache
# fact non-persistent cache
# next, we merge in role params and task include params
# special case for include tasks, where the include params
# may be specified in the vars field for the task, which should
# have higher precedence than the vars/np facts above
# extra vars
# before we add 'reserved vars', check we didn't add any reserved vars
# magic variables
# special case for the 'environment' magic variable, as someone
# may have set it as a variable and we don't want to stomp on it
# 'vars' magic var
# has to be copy, otherwise recursive ref
# using role_cache as play.roles only has 'public' roles for vars exporting
# ansible_role_names includes all role names, dependent or directly referenced by the play
# ansible_play_role_names includes the names of all roles directly referenced by this play
# roles that are implicitly referenced via dependencies are not listed.
# ansible_dependent_role_names includes the names of all roles that are referenced via dependencies
# dependencies that are also explicitly named as roles are included in this list
# TODO: data tagging!!! DEPRECATED: role_names should be deprecated in favor of ansible_ prefixed ones
# add the list of hosts in the play, as adjusted for limit/filters
# use a static tag instead of `deprecate_value` to avoid stackwalk in a hot code path
# Set options vars
# sentinel value distinct from empty/None, which are errors
# bypass for unspecified value/omit
# check if the address matches, or if both the delegated_to host
# and the current host are in the list of localhost aliases
# We get to set this as new
# Update the existing facts
# Save the facts back to the backing store
# Copyright (c) 2017 Ansible Project
# All keys starting with _ansible_ are internal, so change the 'dirty' mapping and remove them.
# listify to avoid updating dict while iterating over it
# remove bad/empty internal keys
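The cleanup the comments above describe can be sketched as a small helper. The function name and exact key handling here are illustrative assumptions; the real implementation lives in ansible's vars-cleaning code and handles more cases:

```python
def strip_internal_keys(dirty):
    # list() snapshots the keys ("listify") so we can safely delete
    # entries from the dict while iterating over it.
    for key in list(dirty.keys()):
        # keys starting with _ansible_ are internal bookkeeping, not facts
        if isinstance(key, str) and key.startswith('_ansible_'):
            del dirty[key]
    return dirty
```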
# cleanse fact values that are allowed from actions but not modules
# first we add all of our magic variable names to the set of
# keys we want to remove from facts
# NOTE: these will eventually disappear in favor of others below
# remove common connection vars
# next we remove any connection plugin specific vars
# most lightweight VM or container tech creates devices with this pattern, this avoids filtering them out
# remove some KNOWN keys
# finally, we search for interpreter keys to remove
# then we remove them (except for ssh host keys)
# Copyright (c) 2018 Ansible Project
# find 3rd party legacy vars plugins once, and look them up by name subsequently
# optimized for stateless plugins; non-stateless plugin instances will fall out quickly
# if a plugin-specific setting has not been provided, use the global setting
# older/non-shipped plugins that don't support the plugin-specific setting should also use the global setting
# plugin didn't declare a preference; consult global config
# cache has been reset, reload all()
# Warn if a collection plugin has REQUIRES_ENABLED because it has no effect.
# skip host lists
# always pass the directory of the inventory source file
# does not use inventory.hosts, so it can create localhost on demand
# this object may be stored in a var dict that is itself deep copied, but since the underlying data
# is supposed to be immutable, we don't need to actually copy the data
# this may be stored in a var dict that is itself deep copied, but since the underlying data
# is supposed to be immutable, we don't need to actually copy the data
# DTFIX-FUTURE: is there a better way to add this to the ignorable types in the module_utils code
# (c) 2017 Ansible By Red Hat
# FIXME: find a way to 'not hardcode', possibly need role deps/includes
# build ordered list to loop over and dict with attributes
# local_action is implicit with action
# loop implies with_
# FIXME: remove after with_ is not only deprecated but removed
# we add this one internally, so safe to ignore
# Ensure the varname used for obj is the tagged one from myvars and not the untagged one from reserved.
# This can occur because tags do not affect value equality, and intersection can return values from either the left or right side.
# (C) 2015, Brian Coca <bcoca@ansible.com>
# TODO: eventually remove this as it contains a mishmash of properties that aren't really global
# roles_path needs to be a list and will be by default
# cli option handling is responsible for splitting roles_path
# load data path for resource usage
# (C) 2013, James Cammarata <jcammarata@ansible.com>
# TODO: Allow user-configuration
# Galaxy rate limit error code (Cloudflare unknown error)
# Common error from galaxy that may represent any number of transient backend issues
# Note: cloud.redhat.com masks rate limit errors with 403 (Forbidden) error codes.
# Since 403 could reflect the actual problem (such as an expired token), we should
# not retry by default.
# URLError is often a proxy for an underlying error, handle wrapped exceptions
# Handle common URL related errors
# Determine the type of Galaxy server we are talking to. First try it unauthenticated then with Bearer
# auth for Automation Hub.
# Either the URL doesn't exist, or other error. Or the URL exists, but isn't a galaxy API
# root (not JSON, no 'available_versions') so try appending '/api/'
# Let exceptions here bubble up but raise the original if this returns a 404 (/api/ wasn't found).
# Update api_server to point to the "real" API root, which in this case could have been the configured
# url + '/api/' appended.
# Default to only supporting v1, if only v1 is returned we also assume that v2 is available even though
# it isn't returned in the available_versions dict.
# Verify that the API versions the function works with are available on the server specified.
# Warn only when we know we are talking to a collections API
# While the URL is probably invalid, let the caller figure that out when using it
# Cannot use netloc because it could contain credentials if the server specified had them in there.
# Set the cache after we've cleared the existing entries
# Defaults are set below, we just need to make sure 1 error is present.
# v1 and unknown API endpoints
# Keep the raw string results for the date. It's too complex to parse as a datetime object and the various APIs return
# them in different formats.
# type: (GalaxyAPI) -> str
# type: (GalaxyAPI, GalaxyAPI) -> bool
# type: ignore[misc]  # https://github.com/python/mypy/issues/1362
# Calling g_connect will populate self._available_api_versions
# Got a hit on the cache and we aren't getting a paginated response
# Technically some v3 paginated APIs return in 'data' but the caller checks the keys for this so
# always returning the cache under results is fine.
# The cache entry had expired or does not exist, start a new blank entry to be filled later.
# v3 can return data or results for paginated results. Scan the result so we can determine what to cache.
# Don't add the auth token if one is already present
# https://github.com/ansible/ansible/issues/64355
# api_server contains part of the API path but next_link includes the /api part so strip it out.
# Collection APIs #
# Construct the appropriate URL per version
# The import job may not have started, and as such, the task url may not yet exist
# poor man's exponential backoff algo so we don't flood the Galaxy API, cap at 30 seconds.
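The "poor man's exponential backoff" pattern described above can be sketched as a delay generator (hypothetical helper name; the real polling loop sleeps between Galaxy API calls rather than precomputing delays):

```python
def backoff_delays(attempts, base=2, cap=30):
    # Yield a doubling delay for each retry attempt, capped at `cap`
    # seconds so we never wait more than 30s between API polls.
    delay = base
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)
```

A caller would `time.sleep()` on each yielded value while polling the import task status.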
# galaxy does a lot of redirects, with much more complex pathing than we use
# within this codebase, without updating _call_galaxy to be able to return
# the final URL, we can't reliably build a relative URL.
# AH pagination results are relative, not absolute, URIs.
# We should only rely on the cache if the collection has not changed. This may slow things down but it ensures
# we are not waiting a day before finding any new collections that have been published.
# No collection found, return an empty list to keep things consistent with the various APIs
# v3 doesn't raise a 404 so we need to mimic the empty response from APIs that do.
# v3 automation-hub is the only known API that uses `data`
# since v3 pulp_ansible does not, we cannot rely on version
# to indicate which key to use
# The following code works around the broken data_filter implementation
# in tarfile. See https://github.com/python/cpython/issues/107845 for more information.
# deprecated: description='probing broken data filter implementation' python_version='3.11'
# We explicitly check if tarfile.data_filter is broken or not
# Look for a meta/main.ya?ml inside the potential role dir in case
# use the first path by default
# first grab the file and save it to a temp location
# create tar file from scm url
# Container Role
# convert the version names to LooseVersion objects
# and sort them to get the latest version. If there
# are no versions in the list, we'll grab the head
# of the master branch
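The version selection described above can be sketched as below. The real code sorts LooseVersion objects; this self-contained sketch uses a plain integer-tuple key instead, and returning None stands in for "fall back to the head of the default branch":

```python
def latest_version(names):
    # Pick the highest X.Y.Z-style tag; an empty list means no releases,
    # so the caller would use the repository head instead.
    def key(version):
        return tuple(int(part) for part in version.lstrip('v').split('.'))
    return max(names, key=key) if names else None
```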
# check if there's a source link/url for our role_version
# verify the role's meta file
# next find the metadata file
# Look for parent of meta/main.yml
# Due to possibility of sub roles each containing meta/main.yml
# look for shortest length parent
# path can be passed through __init__
# FIXME should this be done in __init__?
# using --force, remove the old path
# We strip off any higher-level directories for all of the files
# contained within the tar file here. The default is 'github_repo-target'.
# Gerrit instances, on the other hand, do not have a parent directory at all.
# we only extract files, and remove any relative path
# bits that might be in the file for security purposes
# and drop any containing directory, as mentioned above
# Symlinks are relative to the link
# Normalize paths that start with the archive dir
# remove leading os.sep
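The member-name normalization described in the comments above can be sketched as follows. The function name is hypothetical; the real extraction code does this inline while iterating tar members:

```python
import os

def sanitize_member_name(name, archive_dir):
    # Strip leading separators so no member extracts to an absolute path.
    name = name.lstrip('/')
    # Drop the containing directory (e.g. 'github_repo-target/') when present;
    # Gerrit-style archives without a parent directory pass through unchanged.
    if name.startswith(archive_dir + '/'):
        name = name[len(archive_dir) + 1:]
    # Reject any relative-path bits that would escape the extraction root.
    parts = os.path.normpath(name).split(os.sep)
    if '..' in parts:
        raise ValueError('unsafe path in archive: %s' % name)
    return os.path.join(*parts)
```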
# deprecated: description='extract fallback without filter' python_version='3.11'
# type: ignore[call-arg]
# Remove along with manual path filter once Python 3.12 is minimum supported version
# write out the install info file for later use
# return the parsed yaml metadata
# (C) 2015, Chris Houseknecht <chouse@ansible.com>
# So that we have a buffer, expire the token in ~2/3 the given value
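The ~2/3 lifetime buffer mentioned above amounts to a one-line calculation; the helper name and signature here are illustrative:

```python
import time

def token_expiration(expires_in, now=None):
    # Treat the token as expired after roughly 2/3 of its advertised
    # lifetime, leaving a buffer before the server-side expiry.
    now = time.time() if now is None else now
    return now + expires_in * 2 / 3
```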
# Done so the config file is only opened when set/get/save is called
# Prioritise the token passed into the constructor
# token file not found, create and chmod u+rw
# owner has +rw
# Copyright: (c) 2020-2021, Ansible Project
# type: t.Iterable[GalaxyAPI]
# type: ConcreteArtifactsManager
# type: t.Iterable[Candidate]
# type: bool
# type: (...) -> CollectionDependencyResolver
# TODO: add python requirements to ansible-test's ansible-core distribution info and remove the hardcoded lowerbound/upperbound fallback
# type: CollectionDependencyProviderBase
# type: MultiGalaxyAPIProxy
# type: (...) -> None
# type: (t.Union[Candidate, Requirement]) -> str
# type: (t.Any, t.Any) -> t.Union[float, int]
# type: (list[Candidate]) -> t.Union[float, int]
# NOTE: Prefer pre-installed candidates over newer versions
# NOTE: available from Galaxy or other sources.
# type: (t.Any, t.Any) -> list[Candidate]
# type: (list[Requirement]) -> list[Candidate]
# FIXME: The first requirement may be a Git repo followed by
# FIXME: its cloned tmp dir. Using only the first one creates
# FIXME: loops that prevent any further dependency exploration.
# FIXME: We need to figure out how to prevent this.
# The fqcn is guaranteed to be the same
# If we're upgrading collections, we can't calculate preinstalled_candidates until the latest matches are found.
# Otherwise, we can potentially avoid a Galaxy API call by doing this first.
# type: t.Iterable[t.Tuple[str, GalaxyAPI]]
# Non-hashable versions will cause a TypeError
# Unexpected error from a Galaxy server
# FIXME: do we assume that all the following artifacts are also concrete?
# FIXME: does using fqcn==None cause us problems here?
# Ensure the version found in the concrete artifact is SemVer-compliant
# NOTE: The known cases causing the version to be a non-string object come from
# NOTE: the differences in how the YAML parser normalizes ambiguous values and
# NOTE: how the end-users sometimes expect them to be parsed. Unless the users
# NOTE: explicitly use the double quotes of one of the multiline string syntaxes
# NOTE: in the collection metadata file, PyYAML will parse a value containing
# NOTE: two dot-separated integers as `float`, a single integer as `int`, and 3+
# NOTE: integers as a `str`. In some cases, they may also use an empty value
# NOTE: which is normalized as `null` and turned into `None` in the Python-land.
# NOTE: Another known mistake is setting a minor part of the SemVer notation
# NOTE: skipping the "patch" bit like "1.0" which is assumed non-compliant even
# NOTE: after the conversion to string.
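The normalization described in the notes above can be sketched as a coerce-then-validate helper (hypothetical name and regex; the real code uses a full SemVer implementation):

```python
import re

# Minimal SemVer shape: MAJOR.MINOR.PATCH with optional pre-release/build suffix.
SEMVER_RE = re.compile(r'^\d+\.\d+\.\d+(?:[-+].*)?$')

def normalize_version(value):
    # PyYAML parses `version: 1.0` as float, `version: 1` as int, and an
    # empty value as None; coerce to str first, then check compliance.
    # Note "1.0" stays non-compliant even after conversion (no patch part).
    text = str(value) if value is not None else ''
    if not SEMVER_RE.match(text):
        raise ValueError('collection version %r is not SemVer-compliant' % text)
    return text
```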
# NOTE: The optimization of conditionally looping over the requirements
# NOTE: is used to skip having to compute the pinned status of all
# NOTE: requirements and apply version normalization to the found ones.
# NOTE: Pinned versions can start with a number, but also with an
# NOTE: equals sign. Stripping it at the beginning should be
# NOTE: enough. If there's a space after equals, the second strip
# NOTE: will take care of it.
# NOTE: Without this conversion, requirements versions like
# NOTE: '1.2.3-alpha.4' work, but '=1.2.3-alpha.4' don't.
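The two-step strip described above is compact enough to show directly (helper name is illustrative):

```python
def strip_pin_operator(version):
    # Pinned versions may start with an equals sign; lstrip removes it,
    # and the second strip handles a space after the equals sign.
    return version.lstrip('=').strip()
```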
# NOTE: Do not discard pre-release candidates in the
# NOTE: following cases:
# NOTE:   * the end-user requested pre-releases explicitly;
# NOTE:   * the candidate is a concrete artifact (e.g. a
# NOTE:     Git repository, subdirs, a tarball URL, or a
# NOTE:     local dir or file etc.);
# NOTE:   * the candidate's pre-release version exactly
# NOTE:     matches a version specifically requested by one
# NOTE:     of the requirements in the current match
# NOTE:     discovery round (i.e. matching a requirement
# NOTE:     that is not a range but an explicit specific
# NOTE:     version pin). This works when some requirements
# NOTE:     request version ranges but others (possibly on
# NOTE:     different dependency tree level depths) demand
# NOTE:     pre-release dependency versions, even if those
# NOTE:     dependencies are transitive.
# candidate_is_from_requested_source = (
# if not candidate_is_from_requested_source:
# candidate satisfies requirements, `break` never happened
# prefer newer versions over older ones
# check if an upgrade is necessary
# check if an upgrade is preferred
# type: (Requirement, Candidate) -> bool
# NOTE: This is a set of Pipenv-inspired optimizations. Ref:
# https://github.com/sarugaku/passa/blob/2ac00f1/src/passa/models/providers.py#L58-L74
# type: (Candidate) -> list[Candidate]
# FIXME: If there's several galaxy servers set, there may be a
# FIXME: situation when the metadata of the same collection
# FIXME: differs. So how do we resolve this case? Priority?
# FIXME: Taking into account a pinned hash? Exploding on
# FIXME: any differences?
# NOTE: The underlying implementation currently uses first found
# NOTE: This guard expression MUST perform an early exit only
# NOTE: after the `get_collection_dependencies()` call because
# NOTE: internally it populates the artifact URL of the candidate,
# NOTE: its SHA hash and the Galaxy API token. These are still
# NOTE: necessary with `--no-deps` because even with the disabled
# NOTE: dependency resolution the outer layer will still need to
# NOTE: know how to download and validate the artifact.
# NOTE: Virtual candidates should always return dependencies
# NOTE: because they are ephemeral and non-installable.
# Classes to handle resolvelib API changes between minor versions for 0.X
# type: (t.Optional[Candidate], list[Candidate], list[t.NamedTuple]) -> t.Union[float, int]
# type: (str, t.Mapping[str, t.Iterator[Requirement]], t.Mapping[str, t.Iterator[Requirement]]) -> list[Candidate]
# type: (str, t.Mapping[str, Candidate], t.Mapping[str, t.Iterator[Candidate]], t.Iterator[t.NamedTuple]) -> t.Union[float, int]
# type: (str, t.Mapping[str, Candidate], t.Mapping[str, t.Iterator[Candidate]], t.Iterator[t.NamedTuple], t.Sequence) -> t.Union[float, int]
# type: () -> CollectionDependencyProviderBase
# Inspired by https://github.com/pypa/pip/commit/9731131
# resolvelib >= 0.9.0
# resolvelib < 0.9.0
# FIXME: add caching all over the place
# NOTE: This is a feature flag
# b'*',  # namespace is supposed to be top-level per spec
# collection name
# NOTE: Maintain the checks to be sorted from light to heavy:
# type: t.Type[Collection]
# type: bytes
# type: (...)  -> Collection
# special handling for bundled collections without manifests, e.g., ansible._protomatter
# Looks like installed/source dir but isn't: doesn't have valid metadata.
# There is no metadata, but it isn't required for a functional collection. Determine the namespace.name from the path.
# Arg is a file path or URL to a collection, or just a collection
# packaging doesn't know what this is, let it fly, better errors happen in from_requirement_dict
# TODO: decide how to deprecate the old src API behavior
# FIXME: decide on the future behavior:
# NOTE: leading LFs are for concat
# NOTE: I'd prefer a ValueError instead
# Note that ``download`` requires a dir with a ``galaxy.yml`` and fails if it
# doesn't exist, but if a ``MANIFEST.json`` also exists, it would be used
# instead of the ``galaxy.yml``.
# No name for a virtual req or "namespace."?
# NOTE: this is never supposed to be hit
# TODO: fix the cache key in artifacts manager?
# Virtual collection
# Not a dir or isn't on-disk
# Store Galaxy metadata adjacent to the namespace of the collection
# Chop off the last two parts of the path (/ns/coll) to get the dir containing the ns
# ns.coll-1.0.0.info
# collections/ansible_collections/ns.coll-1.0.0.info/GALAXY.yml
# FIXME: use LRU cache
# type: ignore[name-match]
# type: (Candidate) -> Candidate
# Copyright: (c) 2019-2020, Ansible Project
# type: (str) -> bool
# type: (str, str) -> bool
# The loop was broken early, it does not meet all the requirements
# Copyright: (c) 2019-2021, Ansible Project
# type: ignore[import]
# collection meta:
# files meta:
# Allow a dict representing this dataclass to be splatted directly.
# Requires attrs to have a default value, so anything with a default
# of None is swapped for its, potentially mutable, default
# 6 chars
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
# type: (Candidate, t.Optional[Candidate], ConcreteArtifactsManager) -> CollectionVerifyResult
# type: list[ModifiedContent]
# partial away the local FS detail so we can just ask generically during validation
# Compare installed version versus requirement version
# since we're not downloading this, just seed it with the value from disk
# fetch remote
# NOTE: AnsibleError is raised on URLError
# partial away the tarball details so we can just ask generically during validation
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
# Use the manifest to verify the file manifest checksum
# Verify the file manifest before using it to verify individual files
# Use the file manifest to verify individual file checksums
# Find any paths not in the FILES.json
# type: (str, str, list[str], str, str, list[str]) -> bool
# Do not include ignored errors in either the failed or successful count
# type: (str, str, str, list[str]) -> None
# Get error status (dict key) from the class (dict value)
# No errors and rc is 0, verify was successful
# type: (str, str, bool) -> str
# type: t.Iterable[Requirement]
# type: str
# Avoid overhead getting signatures since they are not currently applicable to downloaded collections
# FIXME: move into the provider
# FIXME: Consider using a more specific upgraded format
# FIXME: having FQCN in the name field, with src field
# FIXME: pointing to the file path, and explicitly set
# FIXME: type. If version and name are set, it'd
# FIXME: perform validation against the actual metadata
# FIXME: in the artifact src points at.
# Galaxy returns a url fragment which differs between v2 and v3.  The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
# NOTE: Don't attempt to reevaluate already installed deps
# NOTE: unless `--force` or `--force-with-deps` is passed
# FIXME: This probably needs to be improved to
# FIXME: properly match differing src/type.
# NOTE: No need to include signatures if the collection is already installed
# Duplicate warning msgs are not displayed
# NOTE: imported in ansible.cli.galaxy
# type: t.Iterable[str]
# type: (...) -> list[CollectionVerifyResult]
# type: list[CollectionVerifyResult]
# NOTE: Verify local collection exists before
# NOTE: downloading its source artifact from
# NOTE: a galaxy server.
# Download collection on a galaxy server for comparison
# NOTE: If there are no signatures, trigger the lookup. If found,
# NOTE: it'll cache download URL and token in artifact manager.
# NOTE: If there are no Galaxy server signatures, only user-provided signature URLs,
# NOTE: those alone validate the MANIFEST.json and the remote collection is not downloaded.
# NOTE: The remote MANIFEST.json is only used in verification if there are no signatures.
# FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
# Display a message from the main thread
# Temporarily override the global display class with our own, which adds the calls to a queue for the thread to call.
# The exception is re-raised so we can be sure the thread is finished and not using the display anymore
# type: (bytes, str, str, list[str], dict[str, t.Any], t.Optional[str]) -> FilesManifestType
# type: (bytes, str, str, dict[str, t.Any], t.Optional[str]) -> FilesManifestType
# type: (bytes, str, str, list[str]) -> FilesManifestType
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
# Ignore ansible-test result output directory.
# Ignores previously built artifacts in the root dir.
# Handling of file symlinks occurs in _build_collection_tar; the manifest for a symlink is the same as for
# a normal file.
# FIXME: accept a dict produced from `galaxy.yml` instead of separate args
# Handle galaxy.yml having an empty string (None)
# Filled out in _build_collection_tar
# type: CollectionManifestType
# type: FilesManifestType
# type: (...) -> str
# Add the MANIFEST.json and FILES.json file to the archive
# arcname expects a native string, cannot be bytes
# Dealing with a normal file, just add it by name.
# Write contents to the files
# ensure symlinks to dirs are not translated to empty dirs
# do not follow symlinks to ensure the original link is used
# avoid setting specific permissions on symlinks, since the operation
# cannot avoid following the link and will throw an exception if the
# symlink target does not exist
# This is annoying, but GalaxyCLI._resolve_path did it
# FIXME: mv to dataclasses?
# type: (Candidate, str, ConcreteArtifactsManager) -> None
# type: (Candidate, bytes, ConcreteArtifactsManager) -> None
# Ensure we don't leave the dir behind in case of a failure.
# type: (str, list[str], str, str, list[str]) -> None
# get 'ns' and 'coll' from /path/to/ns/coll/MANIFEST.json
# Verify the signature on the MANIFEST.json before extracting anything else
# installed collection, not src
# FIXME: optimize this? use a different process? copy instead of build?
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
# Default to rw-r--r-- and only add execute if the tar file has execute.
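The mode derivation described above can be sketched as below (hypothetical helper name; the exact bits the real code checks and sets may differ slightly):

```python
def extracted_file_mode(tar_mode):
    # Default extracted files to rw-r--r-- (0o644); add execute bits for
    # everyone only when the archived file had the owner-execute bit set.
    mode = 0o644
    if tar_mode & 0o100:
        mode |= 0o111
    return mode
```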
# type: (bytes, str) -> str
# If link_name is specified, path is the source of the link and we need to resolve the absolute path.
# type: t.Iterable[Candidate] | None
# type: (...) -> dict[str, Candidate]
# TODO: replace the hardcoded versions with a warning if the dist info is missing
# display.warning("Unable to find 'ansible-core' distribution requirements to verify the resolvelib version is supported.")
# NOTE: same constant pip uses
# Copyright: (c) 2022, Ansible Project
# type: (str, t.Optional[Display]) -> str
# type: Display
# type: (...) -> tuple[str, int]
# running the gpg command will create the keyring if it does not exist
# type: (str) -> t.Iterator[GpgBaseError]
# type: (bytes, bool, str, int, str, list[str]) -> None
# type: dict[bytes, bytes]
# type: dict[Candidate | Requirement, bytes]
# type: dict[bytes, dict[str, str | list[str] | dict[str, str] | None | t.Type[Sentinel]]]
# type: dict[Candidate | Requirement, tuple[str, str, GalaxyToken]]
# type: dict[Candidate, tuple[str, list[dict[str, str]]]]
# type: dict[str, str]
# type: int
# type: () -> bool
# type: (bool) -> None
# type: (Candidate) -> dict[str, t.Union[str, list[dict[str, str]]]]
# type: (t.Union[Candidate, Requirement]) -> bytes
# type: (Collection) -> bytes
# NOTE: SCM needs to be special-cased as it may contain either
# NOTE: one collection in its root, or a number of top-level
# NOTE: collection directories instead.
# NOTE: The idea is to store the SCM collection as unpacked
# NOTE: directory structure under the temporary location and use
# NOTE: a "virtual" collection that has pinned requirements on
# NOTE: the directories under that SCM checkout that correspond
# NOTE: to collections.
# NOTE: This brings us to the idea that we need two separate
# NOTE: virtual Requirement/Candidate types --
# NOTE: (single) dir + (multidir) subdirs
# NOTE: URLs don't support checksums
# NOTE: This may happen `if collection.is_online_index_pointer`
# type: (Candidate) -> bytes
# type: (Candidate) -> t.Optional[str]
# type: (Collection) -> t.Optional[str]
# NOTE: should it be something like "<virtual>"?
# type: ignore[type-var]
# type: (Collection) -> str
# type: (t.Union[Candidate, Requirement]) -> dict[str, str]
# type: (Collection) -> dict[str, t.Union[str, dict[str, str], list[str], None, t.Type[Sentinel]]]
# FIXME: use unique collection identifier as a cache key?
# should we just build a coll instead?
# FIXME: what if there's subdirs?
# NOTE: Dropping b_artifact_path since it's based on src anyway
# type: (Candidate, str, str, GalaxyToken, str, list[dict[str, str]]) -> None
# type: (...) -> t.Iterator[ConcreteArtifactsManager]
# NOTE: Can't use `with tempfile.TemporaryDirectory:`
# NOTE: because it's not in Python 2 stdlib.
# Perform a shallow clone if simply cloning HEAD
# FIXME: '--branch', version
# should probably be LookupError
# FIXME: use random subdirs while preserving the file names
# type: (str, bytes, t.Optional[str], bool, GalaxyToken, int) -> bytes
# ^ NOTE: used in download and verify_collections ^
# NOTE: Galaxy redirects downloads to S3 which rejects the request
# NOTE: if an Authorization header is attached so don't redirect it
# type: t.BinaryIO
# type: (t.BinaryIO, t.BinaryIO) -> str
# type: dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
# type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
# type: list[dict[str, t.Any]]  # FIXME: <--
# FIXME: 👆maybe precise type: list[dict[str, t.Union[bool, str, list[str]]]]
# Add the defaults if they have not been set
# NOTE: `version: null` is only allowed for `galaxy.yml`
# NOTE: and not `MANIFEST.json`. The use-case for it is collections
# NOTE: that generate the version from Git before building a
# NOTE: distributable tarball artifact.
# Valid build metadata is not required by ansible-galaxy list. Raise ValueError to fall back to implicit metadata.
# type: (...) -> dict
# type: tarfile.TarFile
# type: tarfile.TarInfo
# type: (...) -> t.Iterator[tuple[tarfile.TarInfo, t.Optional[t.IO[bytes]]]]
# Prevent all GalaxyAPI calls
# `verify` doesn't use `get_collection_versions` since the version is already known.
# Do the same as `install` and `download` by trying all APIs before failing.
# Warn for debugging purposes, since the Galaxy server may be unexpectedly down.
# FIXME: return Requirement instances instead?
# (c) 2012, Daniel Hokka Zakrisson <daniel@hozac.com>
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com> and others
# (c) 2017, Toshio Kuratomi <tkuratomi@ansible.com>
# Global so that all instances of a PluginLoader will share the caches
# type: dict[str, dict[str, types.ModuleType]]
# type: dict[str, list[_t_loader.PluginPathContext] | None]
# type: dict[str, dict[str, dict[str, _t_loader.PluginPathContext]]]
# Set by plugin loader
# allow extra passthrough parameters
# some plugins don't use set_option(s) and cannot use direct settings, so this populates the local copy for them
# callers expect key error on missing
# let it populate _options
# allow extras/wildcards from vars that are not directly consumed in configuration
# this is needed to support things like winrm that can have extended protocol options we don't directly handle
# these are largely unvalidated passthroughs, either plugin or underlying API will validate
# TODO: deprecate and remove, most plugins that needed this don't use this facility anymore
# FIXME: standardize required check based on config
# Declare support for markers. Plugins with `False` here will never be invoked with markers for top-level arguments.
# (c) Ansible Project
# not real plugins
# ptype: names
# create FQCN
# TODO: update to use importlib.resources
# skip hidden/special dirs
# hidden or python internal file/dir
# it's a dir, recurse
# don't recurse for synthetic unless init.py present
# actually recurse dirs
# general files to ignore
# general extensions to ignore
# ignore docs files
# plugin in reject list
# skip aliases, author should document in 'aliases' field
# ignore no ext when looking for docs files
# NOTE: pass the composite resource to ensure any relative
# imports it contains are interpreted in the correct context
# Kept for backwards compatibility.
# get plugins for each collection
# dirs from ansible install, but not configured paths
# configured paths + search paths (should include basedirs/-M)
# search path in this case is for locating the collection itself
# acr = AnsibleCollectionRef.try_parse_fqcr(collection, ptype)
# if acr:
# no 'invalid' tests for modules
# detect invalid plugin candidates AND add loaded object to return data
# Add in any builtin Jinja2 plugins that have not been shadowed in Ansible.
# {plugin_name: (filepath, class), ...}
# list all collections, add synthetic ones
# add builtin, since legacy also resolves to these
# wrappers
# (c) 2017 Ansible Project
# TODO: take the packaging dep, or vendor SpecifierSet?
# type: t.DefaultDict[str, frozenset]
# default to sh
# mostly for backwards compat
# empty string for resolved plugins from user-supplied paths
# The `or ''` instead of using `.get(..., '')` makes sure that even if the user explicitly
# sets `warning_text` to `~` (None) or `false`, we still get an empty string.
# If both removal_date and removal_version are specified, use removal_date
# pylint: disable=ansible-deprecated-date-not-permitted,ansible-deprecated-unnecessary-collection-name
# Unlike other plugin types, filter and test plugin names are independent of the file where they are defined.
# As a result, the Python module name must be derived from the full path of the plugin.
# This prevents accidental shadowing of unrelated plugins of the same type.
# FIXME: remove alias dict in favor of alias by symlink?
# hold dirs added at runtime outside of config
# caches
# reset global caches
# reset internal caches
# Uses a list to get the order right
# FIXME: This is potentially buggy if subdirs is sometimes True and sometimes False.
# In current usage, everything calls this with subdirs=True except for module_utils_loader and ansible-doc
# which always calls it with subdirs=False. So there currently isn't a problem with this caching.
# look in any configured plugin paths, allow one level deep for subcategories
# look for any plugins installed in the package subtree
# Note package path always gets added last so that every other type of
# path is searched before it.
# HACK: because powershell modules are in the same directory
# hierarchy as other modules we have to process them last.  This is
# because powershell only works on windows but the other modules work
# anywhere (possibly including windows if the correct language
# interpreter is installed).  the non-powershell modules can have any
# file extension, and thus powershell modules are picked up in that search.
# The non-hack way to fix this is to have powershell modules be
# a different PluginLoader/ModuleLoader.  But that requires changing
# other things too (known thing to change would be PATHS_CACHE,
# PLUGIN_PATHS_CACHE, and MODULE_CACHE).  Since those three dicts key
# on the class_name and neither regular modules nor powershell modules
# would have class_names, they would not work as written.
# The expected sort order is paths in the order in 'ret' with paths ending in '/windows' at the end,
# also in the original order they were found in 'ret'.
# The .sort() method is guaranteed to be stable, so original order is preserved.
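The stable-sort trick described above can be sketched as follows (the sample paths and the '/windows' suffix check are illustrative, not the loader's actual data):

```python
# Sorting on a boolean key moves '/windows' paths to the end, while the
# guaranteed stability of list.sort() preserves the original relative
# order within each group.
paths = ['/a/modules', '/b/windows', '/c/modules', '/d/windows']
paths.sort(key=lambda p: p.endswith('/windows'))
# non-windows paths first (original order kept), windows paths last
```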
# cache and return the result
# plugins w/o class name don't support config
# if type name != 'module_doc_fragment':
# trust-tagged source propagates to loaded values; expressions and templates in config require trust
# TODO: allow configurable plugins to use sidecar
# if not dstring:
# append the directory and invalidate the path cache
# FIXME: shouldn't need this...
# force any type-specific metadata postprocessing to occur
# this will be created by the collection PEP302 loader
# TODO: add subdirs support
# check for extension-specific entry first (eg 'setup.ps1')
# TODO: str/bytes on extension/name munging
# try for extension-agnostic entry
# check collection metadata to see if any special handling is required for this plugin
# TODO: factor this into a wrapper method
# this will no-op if there's no deprecation metadata for this plugin
# FIXME: clean up text gen
# Prevent mystery redirects that would be determined by the collections keyword
# FIXME: remove once this is covered in debug or whatever
# The name doing the redirection is added at the beginning of _resolve_plugin_step,
# but if the unqualified name is used in conjunction with the collections keyword, only
# the unqualified name is in the redirect list.
# TODO: non-FQCN case, do we support `.` prefix for current collection, assume it with no dots, require it for subdirs in current, or ?
# we want this before the extension is added
# FIXME: there must be cheaper/safer way to do this
# FIXME: and is file or file link or ...
# the request was extension-specific, don't try for an extensionless match
# look for any matching extension in the package location (sans filter)
# sort to ensure deterministic results, with the shortest match first
# FIXME: store structured deprecation data in PluginLoadContext and use display.deprecate
# if plugin_load_context.deprecated and C.config.get_config_value('DEPRECATION_WARNINGS'):
# Ansible plugins that run in the controller process (most plugins)
# Only Ansible Modules.  Ansible modules can be any executable so
# they can have any suffix
# FIXME: need this right now so we can still load shipped PS module_utils- come up with a more robust solution
# HACK: refactor this properly
# 'ansible.legacy' refers to the plugin finding behavior used before collections existed.
# They need to search 'library' and the various '*_plugins' directories in order to find the file.
# 'ansible.builtin' should be handled here. This means only internal, or builtin, paths are searched.
# Pending redirects are added to the redirect_list at the beginning of _resolve_plugin_step.
# Once redirects are resolved, ensure the final FQCN is added here.
# e.g. 'ns.coll.module' is included rather than only 'module' if a collections list is provided:
# - module:
# if we got an answer or need to chase down a redirect, return
# these are generally fatal, let them fly
# DTFIX-FUTURE: can we deprecate/remove these stringified versions?
# if we got here, there's no collection list and it's not an FQ name, so do legacy lookup
# The particular cache to look for modules within.  This matches the
# requested mod_type
# Cache miss.  Now let's find the plugin
# TODO: Instead of using the self._paths cache (PATH_CACHE) and
# HACK: We have no way of executing python byte compiled files as ansible modules so specifically exclude them
# FIXME: I believe this is only correct for modules and module_utils.
# For all other plugins we want .pyc and .pyo to be valid
# everything downstream expects unicode
# Module found, now enter it into the caches that match this file
# Didn't find the plugin in this directory. Load modules from the next one
# if nothing is found, try finding alias/deprecated
# last ditch, if it's something that can be redirected, look for a builtin redirect before giving up
# log and continue, likely an innocuous type/package loading failure in collections import
# Avoids double loading, See https://github.com/ansible/ansible/issues/13110
# FIXME: this still has issues if the module was previously imported but not "cached",
# mimic import machinery; make the module-being-loaded available in sys.modules during import
# and remove if there's a failure...
# DTFIX-FUTURE: clean this up- standardize types, document, split/remove redundant bits
# set extra info on the module, in case we want it later
# reverse list so best name comes first
# pylint: disable=ansible-deprecated-no-version
# Resolving the FQCN is slow, even if we've passed in the resolved FQCN.
# Short-circuit here if we've previously resolved this name.
# This will need to be restricted if non-vars plugins start using the cache, since
# some non-FQCN plugins need to be resolved again with the collections list.
# FIXME: this is probably an error (eg removed plugin)
# This is unused by vars plugins, but it's here in case the instance cache expands to other plugin types.
# We get here if we've seen this plugin before, but it wasn't called with the resolved FQCN.
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
# Check whether this obj has the required base class.
# FIXME: update this to use the load context
# A plugin may need to use its _load_name in __init__ (for example, to set
# or get options from config), so update the object before using the constructor
# pylint: disable=unnecessary-dunder-call
# Abstract Base Class or incomplete plugin, don't load
# The cache doubles as the load order, so record the FQCN even if the plugin hasn't set is_stateless = True
# TODO: Change the signature of this method to:
# def all(return_type='instance', args=None, kwargs=None):
# Having both path_only and class_only is a coding bug
# we sort within each path, but keep path precedence from config
# j2 plugins get processed in own class, here they would just be container files
# cache has legacy 'base.py' file, which is wrapper for __init__.py
# for j2 this is 'same file', other plugins it is basename
# Here just in case, but we don't call all() multiple times for vars plugins, so this should not be used.
# Use get_with_context to cache the plugin the first time we see it.
# When the resolved name hasn't been cached, do so.
# Functions that have aliases will appear more than once, and we don't need to overwrite them.
# use 'parent' loader class to find files, but cannot return this as it can contain multiple plugins per file
# FUTURE: now that the resulting plugins are closer, refactor base class method with some extra
# hooks so we can avoid all the duplicated plugin metadata logic, and also cache the collection results properly here
# pop N/A kwargs to avoid passthrough to parent methods
# avoid collection path for legacy
# check for stuff loaded via legacy/builtin paths first
# follow the meta!
# no collection
# TODO: implement cycle detection (unified across collection redir as well)
# check deprecations
# check removal
# check redirects
# use 'parent' loader class to find files, but cannot return this as it can contain
# multiple plugins per file
# TODO: load anyway into CACHE so we only match each at end of loop
# context will have filename, which for tests/filters might not be correct
# FIXME: once we start caching these results, we'll be missing functions that would have loaded later
# go to next file as it can override if dupe (don't break both loops)
# basically ignored for test/filters since they are functions
# get plugins from files in configured paths (multiple in each)
# p_map is really object from file with class that holds multiple plugins
# the plugin class returned by the loader may host multiple Jinja plugins, but we wrap each plugin in
# its own surrogate wrapper instance here to ease the bookkeeping...
# Try to convert for people specifying version as a float instead of string
# Modules and action plugins share the same reject list since the difference between the
# two isn't visible to the users
# Special-case the stat module, as Ansible can run very few things if stat is rejected
# since we don't want the actual collection loader understanding metadata, we'll do it in an event handler
# ignore prerelease/postrelease/beta/dev flags for simplicity
# this must be a Python warning so that it can be filtered out by the import sanity test
# insert the internal ansible._protomatter collection up front
# this should succeed now
# TODO: Evaluate making these class instantiations lazy, but keep them in the global scope
# doc fragments first
# NB: dedicated loader is currently necessary because PS module_utils expects "with subdir" lookup where
# regular module_utils doesn't. This can be revisited once we have more granular loaders.
# Copyright: (c) , Ansible Project
# WARNING: this is mostly here as a convenience for documenting core behaviours, no plugin outside of ansible-core should use this file
# requires action_common
# also requires core above
# Copyright (c) 2019 Ansible Project
# Common options for Ansible.ModuleUtils.WebRequest
# Copyright: (c) 2016, Ansible, Inc
# Standard documentation fragment
# Copyright: (c) 2015, Ansible, Inc
# Windows shell documentation fragment
# FIXME: set_module_language doesn't belong here but must be set so it doesn't fail when someone
# common shell documentation fragment
# Standard template documentation fragment, used by template and win_template.
# Copyright: (c) 2018, John Barker <gundalow@redhat.com>
# Standard files documentation fragment
# Copyright (c) 2021 Ansible Project
# Copyright: (c) 2017, Brian Coca <bcoca@redhat.com>
# Copyright: (c) 2019,  Ansible Project
# Copyright: Ansible Project
# Copyright (c) 2024 ShIRann Chen <shirannx@gmail.com>
# inventory cache
# Copyright: (c) 2014, Matt Martz <matt@sivel.net>
# Note: mode is overridden by the copy and template modules so if you change the description
# here, you should also change it there.
# (c) 2013, Javier Candeira <javier@candeira.com>
# (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# getattr from string expands things like "ascii_letters" and "digits"
# into a set of characters.
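The getattr-from-string expansion described above can be sketched like this (the `expand` helper name is illustrative):

```python
import string

# Character-class names such as "ascii_letters" or "digits" are looked up
# on the string module by name and expanded into a set of characters.
def expand(name):
    return set(getattr(string, name))

chars = expand('digits') | expand('ascii_letters')
# 10 digits + 52 letters = 62 distinct characters
```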
# No salt
# no ident
# At this point, the calling code should have assured us that there is a salt value.
# if the lock is held by another process, wait until it's released
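A minimal POSIX sketch of the wait-until-released behaviour described above, using an exclusive `flock()` (the lock file here is a throwaway temp file, not the plugin's actual lockfile):

```python
import fcntl
import tempfile

acquired = False
with tempfile.NamedTemporaryFile() as fh:
    # LOCK_EX blocks until any other process holding the lock releases it,
    # so only one process runs the critical section at a time.
    fcntl.flock(fh, fcntl.LOCK_EX)
    acquired = True
    # ... do the work exactly once ...
    fcntl.flock(fh, fcntl.LOCK_UN)  # let waiting processes continue
```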
# Only a single argument given, therefore it's a path
# Spaces in the path?
# Check that we parsed the params correctly
# Likely, the user had a non-parameter following a parameter.
# Reject this as a user typo
# No _raw_params means we already found the complete path when
# we split it initially
# Check for invalid parameters.  Probably a user typo
# update options with what we got
# chars still might need more
# return processed params
# make sure only one process finishes all the job first
# let other processes continue
# Backwards compat: self._display isn't really needed, just import the global display and use that.
# TODO: placeholder to deprecate in future version allowing for long transition period
# self._display.deprecated('Passing inline k=v values embedded in a string to this lookup. Use direct ,k=v, k2=v2 syntax instead.', version='2.18')
# (c) 2012-17 Ansible Project
# (c) 2015, Yannig Perre <yannig.perre(at)gmail.com>
# TODO: deprecate this method
# TODO: check kv_parser to see if it can handle spaces this same way
# initialize for 'lookup item'
# update current key if used
# if first term or key does not exist
# append to existing key
# return list of values
# Retrieve all values from a section using a regexp
# Retrieve a single value
# parameters specified?
# only take first, this format never supported multiple keys inline
# bad params passed
# only passed options in inline string
# TODO: look to use cache to avoid redoing this for every term if they use same file
# Retrieve file path
# Create StringIO later used to parse ini
# Special case for java properties
# https://docs.python.org/3/library/subprocess.html#popen-constructor
# The shell argument (which defaults to False) specifies whether to use the
# shell as the program to execute. If shell is True, it is recommended to pass
# args as a string rather than as a sequence
# https://github.com/ansible/ansible/issues/6550
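The Popen convention quoted above can be demonstrated directly (POSIX `echo` assumed; both invocations produce the same output):

```python
import subprocess

# With shell=True, pass args as a single string for the shell to parse;
# with the default shell=False, pass a list of argv elements.
out1 = subprocess.run('echo hello', shell=True,
                      capture_output=True, text=True).stdout
out2 = subprocess.run(['echo', 'hello'],
                      capture_output=True, text=True).stdout
```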
# (c) 2013, Jan-Piet Mens <jpmens(at)gmail.com>
# populate options
# parameters override per term using k/v
# default is just placeholder for real tab
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Copyright: (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Copyright: (c) 2012-17, Ansible Project
# capture options
# set jinja2 internal search path for includes
# our search paths aren't actually the proper ones for jinja includes.
# We want to search into the 'templates' subdir of each search path in
# addition to our original search paths.
# The template will have access to all existing variables,
# plus some added by ansible (e.g., template_{path,mtime}),
# plus anything passed to the lookup with the template_vars=
# FIXME: why isn't this a chainmap with a sacrificial bottom layer?
# do not clobber ansible_managed when set by the user
# use the internal template API to avoid forced top-level finalization behavior imposed by the public API
# Find the file in the expected search path
# TODO: only add search info if abs path?
# (c) 2015, Brian Coca <bcoca@ansible.com>
# (c) 2013, Serge van Ginderachter <serge@vanginderachter.be>
# check lookup terms - check number of terms
# first term should be a list (or dict), second a string holding the subkey
# convert to list:
# the registered result was completely skipped
# check for optional flags in third term
# build_items
# this particular item is to be skipped
# lastsubkey
# (c) 2014, Kent R. Spillner <kspillner@acm.org>
# Expect any type of Mapping, notably hostvars
# (c) 2013, Michael DeHaan <michael.dehaan@gmail.com>
# (c) 2013, Steven Dossett <sdossett@panath.com>
# (c) 2012, Jan-Piet Mens <jpmens(at)gmail.com>
# the `default` arg can accept undefined values
# plugin creates settings on load, this is cached so not too expensive to redo
# this is not needed, but added to have all 3 options stated
# no dir, just file, so use paths and 'files' paths instead
# (c) 2013, seth vidal <skvidal@fedoraproject.org> red hat, inc
# added since options will already listify
# terms are allowed to be undefined
# can use a dict instead of list item to pass inline config
# magic extra splitting to create lists
# create search structure
# NOTE: this is now 'extend', previously it would clobber all options, but we deemed that a bug
# we're being invoked by TaskExecutor.get_loop_items(), special backwards compatibility behavior
# recursively drop undefined values from terms for backwards compatibility
# invoked_as_with shouldn't be possible outside a TaskContext
# FIXME: this value has not been templated, it should be (historical problem)...
# based on the presence of `var`/`template`/`file` in the enclosing task action name, choose a subdir to search
# convert to the matching directory name
# undefined values are only omitted when invoked using `with`
# NOTE: during refactor noticed that the 'using a dict' as term
# is designed to only work with 'one' otherwise inconsistencies will appear.
# see other notes below.
# get subdir if set by task executor, default to files otherwise
# exit if we find one!
# if we get here, no file was found
# NOTE: global skip won't matter, only last 'skip' value in dict term
# (c) 2020 Ansible Project
# (c) 2013, Jayson Vantuyl <jayson@aggressive.ly>
# shortcut format
# Group 0
# Group 1: Start
# Group 2: End
# Group 3
# Group 4: Stride
# Group 5, Group 6: Format String
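A hypothetical illustration (not the plugin's actual pattern) of how a "start-end/stride:format" shortcut can be split into the groups listed above:

```python
import re

# Named positional groups: start, end, optional stride, optional format
# string -- a simplified stand-in for the shortcut-format regex.
SHORTCUT = re.compile(r'^(\d+)-(\d+)(?:/(\d+))?(?::(.+))?$')
m = SHORTCUT.match('2-10/2:host%02d')
groups = m.groups()
```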
# convert count to end
# All of the necessary arguments can be provided as keywords, but we still need something to loop over
# set defaults/global
# (c) 2013, Bradley Young <young.bradley@gmail.com>
# (c) 2014, Chris Church <chris@ninemoreminutes.com>
# This was added in pywinrm 0.5.0, we just use our no-op exception for
# older versions which won't be able to handle this scenario.
# used to try and parse the hostname and detect if IPv6 is being used
# this used to be in set_options, as win_reboot needs to be able to
# override the conn timeout, we need to be able to build the args
# after setting individual options. This is called by _connect before
# starting the WinRM connection
# old behaviour, scheme should default to http if not set and the port
# is 5985 otherwise https
# for legacy versions of pywinrm, use the values we know are supported
# calculate transport if needed
# TODO: figure out what we want to do with auto-transport selection in the face of NTLM/Kerb/CredSSP/Cert/Basic
# if kerberos is among our transports and there's a password specified, we're managing the tickets
# HACK: ideally, remove multi-transport stuff
# arg names we're going to pass directly
# warn for kwargs unsupported by the installed version of pywinrm
# pass through matching extras, excluding the list we want to treat specially
# Until pykerberos has enough goodies to implement a rudimentary kinit/klist, simplest way is to let each connection
# auth itself with a private CCACHE.
# Add any explicit environment vars into the krb5env block
# Stores various flags to call with kinit, these could be explicit args set by 'ansible_winrm_kinit_args' OR
# '-f' if kerberos delegation is requested (ansible_winrm_kerberos_delegation).
# It is important to use start_new_session which spawns the process
# with setsid() to avoid it inheriting the current tty. On macOS it
# will force it to read from stdin rather than the tty.
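A minimal POSIX-only sketch of the behaviour described above: `start_new_session=True` makes Popen call `setsid()` in the child, so the child leads its own session and is detached from the current tty.

```python
import subprocess
import sys

# The child reports whether it is its own session leader; with
# start_new_session=True its session id equals its pid.
proc = subprocess.Popen(
    [sys.executable, '-c', 'import os; print(os.getsid(0) == os.getpid())'],
    stdout=subprocess.PIPE,
    start_new_session=True,
)
result = proc.communicate()[0].decode().strip()
```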
# one last attempt at making sure the password does not exist
# in the output
# open the shell from connect so we know we're able to talk to the server
# A WSMan OperationTimeout can be received for a Send
# operation when the server is under severe load. On manual
# testing the input is still processed and it's safe to
# continue. As the calling method still tries to wait for
# the proc to end if this failed it shouldn't hurt to just
# treat this as a warning.
# Error 170 == ERROR_BUSY. This could be the result of a
# timed out Send from above still being processed on the
# server. Add a 5 second delay and try up to 3 times before
# fully giving up.
# pywinrm does not expose the internal WSMan fault details
# through an actual object but embeds it as a repr.
# If we were able to get output at least once then we should be
# able to get the rest.
# This is an expected error when waiting for a long-running process,
# just silently retry if we haven't been set to do one attempt.
# Even on a failure above we try at least once to get the output
# in case the stdin was actually written and it ran normally.
# This is done after logging so we can still see the raw stderr for
# debugging purposes.
# There are cases where the stdin input failed but the WinRM service still processed it. We attempt to
# see if stdout contains a valid json return value so we can ignore this error
# stdout does not contain a return response, stdin input was a fatal error
# Due to a bug in how pywinrm works with message encryption we
# ignore a 400 error which can occur when a task timeout is
# set and the code tries to clean up the command. This happens
# as the cleanup msg is sent over a new socket but still uses
# the already encrypted payload bound to the other socket
# causing the server to reply with 400 Bad Request.
# 0x803381A6 == ERROR_WSMAN_QUOTA_MAX_OPERATIONS
# WinRS does not decrement the operation count for commands,
# only way to avoid this is to re-create the shell. This is
# important for action plugins that might be running multiple
# processes in the same connection.
# build the kwargs from the options set
# Avoid double encoding the script, the first means we are already
# running the standard PowerShell command, the latter is used for
# the no pipeline case where it uses type to pipe the script into
# powershell which is known to work without re-encoding as pwsh.
# TODO: display something meaningful here
# FUTURE: determine buffer size at runtime via remote winrm config?
# yes, we're double-encoding over the wire in this case- we want to ensure that the data shipped to the end PS pipeline is still b64-encoded
# cough up the data, as well as an indicator if this is the last chunk so winrm_send knows to set the End signal
# empty file, return an empty buffer + eof to close it
# stdout does not contain a valid response
# consistent with other connection plugins, we assume the caller has created the target dir
# 0.5MB chunks
# If out_path is a directory and we're expecting a file, bail out now.
# (c) 2015 Toshio Kuratomi <tkuratomi@ansible.com>
# (c) 2017, Peter Sprygada <psprygad@redhat.com>
# eg, winrm
# for interacting with become plugins
# When running over this connection type, prefer modules written in a certain language
# as discovered by the specified file extension.  An empty string as the
# language means any language.
# the following control whether or not the connection supports the
# persistent connection framework or not
# All these hasattrs allow subclasses to override these parameters
# Backwards compat: self._play_context isn't really needed, using set_options/get_option
# we always must have shell
# In Python3, shlex.split doesn't work on a byte string.
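The limitation noted above is easy to show: `shlex.split()` only accepts text in Python 3, so a byte string must be decoded first.

```python
import shlex

# Decode bytes before splitting; passing the raw bytes raises an error.
cmd = b'ls -l /tmp'
argv = shlex.split(cmd.decode('utf-8'))
```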
# don't update existing
# no secrets!
# it's me mom!
# it's my cousin ...
# deal with generic options if the plugin supports them (for example not all connections have a remote user)
# for these variables there should be only one option
# fallback to play_context, unless become related. TODO: in the end, should come from task/play and not pc
# It was not defined; fine to ignore
# create dict of 'templated vars'
# add extras if plugin supports them
# TODO: deprecate always_pipeline_modules and has_native_async in favor of each plugin overriding this function
# enabled via config or forced via connection (eg winrm)
# user wants remote files
# async does not normally support pipelining unless it does (eg winrm)
# Do not use _remote_is_local in other connections
# reconstruct the socket_path and set instance values accordingly
# SSH Options Regex
# don't print the prompt string since the user cannot respond
# to the question anyway
# existing implementation below:
# host keys are actually saved in close() function below
# in order to control ordering.
# keep connection objects on a per host basis to avoid repeated attempts to reconnect
# pylint: disable=ansible-deprecated-unnecessary-collection-name
# entire plugin being removed; this improves the messaging
# Set pubkey and hostkey algorithms to disable, the only manipulation allowed currently
# is keeping or omitting rsa-sha2 algorithms
# default_keys: t.Tuple[str] = ()
# override paramiko's default logger name
# TODO: check if we need to look at several possible locations, possible for loop
# file was not found, but not required to function
# paramiko 2.2 introduced auth_timeout parameter
# paramiko 1.15 introduced banner timeout parameter
# sudo usually requires a PTY (cf. requiretty option), therefore
# we give it one by default (pty=True in ansible.cfg), and we try
# to initialise from the calling environment when sudoable is enabled
# raise AnsibleError('ssh connection closed waiting for password prompt')
# need to check every line because we might get lectured
# and we might get the middle of a line in a chunk
# was f.write
# add any new SSH host keys -- warning -- this could be slow
# (This doesn't acquire the connection lock because it needs
# to exclude only other known_hosts writers, not connections
# that are starting up.)
# just in case any were added recently
# gather information about the current key file, so
# we can ensure the new file has the correct mode/owner
# Save the new keys to a temporary file and move it into place
# rather than rewriting the file. We set delete=False because
# the file will be moved into place rather than cleaned up.
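A sketch of the save pattern described above (the `save_atomically` helper name and known_hosts content are illustrative): write to a temporary file with `delete=False`, since it will be moved into place rather than cleaned up, then replace the target atomically.

```python
import os
import tempfile

def save_atomically(path, data):
    # The temp file lives in the target's directory so the final move
    # stays on one filesystem, keeping os.replace() atomic on POSIX.
    with tempfile.NamedTemporaryFile(
            mode='w', dir=os.path.dirname(path) or '.', delete=False) as tmp:
        tmp.write(data)
    os.replace(tmp.name, path)  # move into place, never a partial rewrite

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, 'known_hosts')
    save_atomically(target, 'example.com ssh-ed25519 AAAA...\n')
    content = open(target).read()
```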
# unable to save keys, including scenario when key was invalid
# and caught earlier
# Satisfies mypy as this connection only ever runs with this plugin
# create our pseudo host to capture the exit code and host output
# Try our best to ensure the runspace is closed to free up server side resources
# There's a good chance the connection was already closed so just log the error and move on
# This is a PowerShell script encoded by the shell plugin, we will
# decode the script and execute it in the runspace instead of
# starting a new interpreter to save on time
# ANSIBALLZ wrapper, we need to get the interpreter and execute
# that as the script - note this won't work as basic.py relies
# on packages not available on Windows, once fixed we can enable
# this path
# script = "$input | &'%s' -" % interpreter
# call build_module_command to get the bootstrap wrapper text
# Do not display to the user each invocation of the bootstrap wrapper
# trailing space is on purpose
# Used when executing a script file, we will execute it in the runspace process
# instead of a new subprocess
# Using shlex isn't perfect but it's good enough.
# In other cases we want to execute the cmd as the script. We add on the 'exit $LASTEXITCODE' to ensure the
# rc is propagated back to the connection plugin.
# Get the buffer size of each fragment to send, subtract 82 for the fragment, message, and other header info
# fields that PSRP adds. Adjust to size of the base64 encoded bytes length.
# PSRP technically supports sending raw bytes but that method requires a larger CLIXML message.
# Sending base64 is still more efficient here.
# because we are dealing with base64 data we need to get the max size
# of the bytes that the base64 size would equal
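The sizing arithmetic described above can be sketched as follows (the `fragment_size` value is a hypothetical per-fragment budget, not the protocol's actual figure): base64 expands every 3 raw bytes into 4 characters, so the largest raw chunk whose encoding still fits is `fragment_size // 4 * 3`.

```python
import base64

fragment_size = 81920            # hypothetical space left in one fragment
max_raw = fragment_size // 4 * 3  # largest raw payload that fits once encoded
encoded = base64.b64encode(b'x' * max_raw)
# len(encoded) exactly fills the budget: 61440 raw bytes -> 81920 chars
```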
# Call poll once to get the first output telling us if it's a file/dir/failure
# to be consistent with other connection plugins, we assume the caller has created the target dir
# cert validation can either be a bool or a path to the cert
# Check if there's a command on the current pipeline that still needs to be closed.
# Current pypsrp versions raise an exception if the current state was not RUNNING. We manually set it so we
# can call stop without any issues.
# We should really call .stop() on all pipelines that are run to decrement the concurrent command counter on
# PSSession but that involves another round trip and is done when the runspace is closed. We instead store the
# last pipeline which is closed if another command is run on the runspace.
# we try and get the rc from our host implementation, this is set if
# exit or $host.SetShouldExit() is called in our pipeline, if not we
# set to 0 if the pipeline had no errors and 1 if it did
# TODO: figure out a better way of merging this with the host output
# Not all pipeline outputs are a string or contain a __str__ value,
# we will create our own output based on the properties of the
# complex object if that is the case.
# the error record is not as fully fleshed out like we usually get
# in PS, we will manually create it here
# NativeCommandError and NativeCommandErrorMessage are special
# cases used for stderr from a subprocess, we will just print the
# error message
# This can be removed once Server 2016 is EOL and no longer
# supported. PS 5.1 on 2016 will emit 1 error record under
# NativeCommandError being the first line, subsequent records
# are the raw stderr up to 4096 chars. Each entry is the raw
# stderr value without any newlines appended so we just use the
# value as is. We know it's 2016 as the target_name is empty in
# this scenario.
# reset the host output back to defaults, needed if running
# multiple pipelines on the same RunspacePool
# (c) 2015, 2017 Toshio Kuratomi <tkuratomi@ansible.com>
# Because we haven't made any remote connection we're running as
# the local user, rather than as whatever is configured in remote_user.
# Create a pty if sudoable for privilege escalation that needs it.
# Falls back to using a standard pipe if this fails, which may
# cause the command to fail in certain situations where we are escalating
# privileges or the command otherwise needs a pty.
# if we created a pty, we can close the other half of the pty now, otherwise primary is stdin
# preserve output from privilege escalation stage as `bytes`; it may contain actual output (eg `raw`) or error messages
# finally, close the other half of the pty, if it was created
# _id is set by build_become_command, if it was not called, assume no become
# map the buffers to their associated stream for the selector reads
# we only reach end of stream after all descriptors are EOF
# ignoring remaining output after timeout to prevent hanging
# read all content (non-blocking) from streams that signaled available input and append to the associated buffer
# EOF on this obj, stop polling it
# Copyright (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Copyright 2017 Toshio Kuratomi <tkuratomi@ansible.com>
# error messages that indicate 255 return code is not from ssh itself.
# Python-2.6 when there's an exception
# PHP always returns with an error
# chmod, but really only on AIX
# chmod, other AIX
# sshpass errors
# Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account.
# sshpass return codes are 1-6. We handle 5 above, so this catches other scenarios.
# No exception is raised, so the connection is retried - except when attempting to use
# sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly.
# 1 == stdout, 2 == stderr
# For other errors, no exception is raised so the connection is retried and we only log the messages
# If this is a retry, the fd/pipe for sshpass is closed, and we need a new one
# TODO: this should come from task
# 0 = success
# 1-254 = remote command return code
# 255 could be a failure from the ssh command itself
# Retry one more time because of the ControlPersist broken pipe (see #16731)
# This is a retry, so the fd/pipe for sshpass is closed, and we need a new one
# 5 = Invalid/incorrect password from sshpass
# Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries
# deprecated: description='unneeded due to track argument for SharedMemory' python_version='3.12'
# There is a resource tracking issue where the resource is deleted, but tracking still has a record
# This will effectively overwrite the record and remove it
# TODO: all should come from get_option(), but might not be set at this point yet
# Windows operates differently from a POSIX connection/shell plugin,
# we need to set various properties to ensure SSH on Windows continues
# to work
# parser to discover 'passed options', used later on for pipelining resolution
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
# We test once if sshpass is available, and remember the result.
# Write the public key to disk, to be provided as IdentityFile.
# This allows ssh to pick an explicit key in the agent to use,
# preventing ssh from attempting all keys in the agent.
# move atomically to prevent race conditions, silently succeeds if the target exists
# First, the command to invoke
# If we want to use sshpass for password authentication, we have to set up a pipe to
# write the password to sshpass.
# Set default password prompt for pkcs11_provider to make it clear it's a PIN
# Next, additional arguments based on the configuration.
# pkcs11 mode allows the use of Smartcards or Yubikey devices
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using password auth
# Next, we add ssh_args
# Now we add various arguments that have their own specific settings defined in docs above.
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
# The directory must exist and be writable.
# Finally, we add any caller-supplied extras.
# The ssh connection may have already terminated at this point, with a more useful error
# Only raise AnsibleConnectionFailure if the ssh process is still alive
# Used by _run() to kill processes on failures
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
# display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
# skip lines from ssh debug output to avoid false matches
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
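The chunk handling described above (complete lines are processed, a trailing partial line is carried over) can be sketched as follows; the function name and shape are illustrative, not Ansible's actual API:

```python
def split_complete_lines(buffer: bytes, chunk: bytes):
    """Append chunk to buffer; return (complete_lines, remainder)."""
    buffer += chunk
    if buffer.endswith(b"\n"):
        return buffer.splitlines(True), b""
    lines = buffer.splitlines(True)
    # the last element has no trailing newline: retain it so it can be
    # processed together with the next chunk
    return lines[:-1], lines[-1]
```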
# deprecated: description='track argument for SharedMemory always available' python_version='3.12'
# SSH_ASKPASS_REQUIRE was added in openssh 8.4, prior to 8.4 there must be no tty, and DISPLAY must be set
# If the user has DISPLAY set, assume it is there for a reason
# start_new_session runs setsid which detaches the tty to support the use of ASKPASS prior to openssh 8.4
# We don't use _shell.quote as this is run on the controller and independent from the shell plugin chosen
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
# Make sure stdin is a proper pty to avoid tcgetattr errors
# type: ignore[assignment] # stdin will be set and not None due to the calls above
# Ignore broken pipe errors if the sshpass process has exited.
# SSH state machine
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise, we can send initial data straight away.
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
# TODO: bcoca would like to use SelectSelector() when open
# select is faster when the number of filehandles is low and we only ever handle 1.
# If we can send initial data without waiting for anything, we do so
# before we start polling
# We pay attention to timeouts only while negotiating a prompt.
# We timed out
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
# stdout has been closed, stop watching it
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
# that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
# stderr has been closed, stop watching it
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
# On python3 stdin is a BufferedWriter, and we don't have a guarantee
# that the write will happen without a flush
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
# We should not see further writes to the stdout/stderr file
# descriptors after the process has closed, set the select
# timeout to gather any last writes we may have missed.
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
# Otherwise there may still be outstanding data to read.
# close stdin, stdout, and stderr after process is terminated and
# stdout/stderr are read completely (see also issues #848, #64768).
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
# Windows does not support dd so we cannot use the piped method
# Transfer methods to try
# Use the transfer_method option if set
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
# Check the return code and rollover to next method if failed
# If not in smart mode, the data will be printed by the raise below
# If using a root path then we need to start with /
# Convert all '\' to '/'
# Main public methods
# Become method 'runas' is done in the wrapper that is executed,
# need to disable sudoable so the bare_run is not waiting for a
# prompt that will not occur
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
# -tt can cause various issues in some environments so allow the user
# to disable it as a troubleshooting method.
# When running on Windows, stderr may contain CLIXML encoded output
# type: ignore[override]  # Used by tests and would break API
# need to add / if path is rooted
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
# only run the reset if the ControlPath already exists or if it isn't configured and ControlPersist is set
# 'check' will determine this.
# check if we require tty (only from our args, cannot see options in configuration files)
# Copyright 2014, Brian Coca <bcoca@ansible.com>
# Copyright 2017, Ken Celenza <ken@networktocode.com>
# Copyright 2017, Jason Edelman <jason@networktocode.com>
# Copyright 2017, Ansible Project
# Use case_sensitive=None as a sentinel value, so we raise an error only when
# explicitly set and cannot be handled (by Jinja2 w/o 'unique' or fallback version)
# handle Jinja2 specific attributes when using Ansible's version
# Note: if new_obj[key_elem] exists it will always be a non-empty dict (it will at
# minimum contain {key: key_elem})
# exponents and logarithms
# set theory
# combinatorial
# computer theory
# zip
# If a query is supplied, make sure it's valid then return the results.
# If no option is supplied, return the entire dictionary.
# ---- Ansible filters ----
# (c) 2012, Jeroen Hoekx <jeroen@hoekx.be>
# deprecated: description='deprecate vault_to_text' core_version='2.23'
# deprecated: description='deprecate preprocess_unsafe' core_version='2.23'
# TODO separators can be potentially exposed to the user as well
# CAUTION: Do not put non-string values here since they can have unwanted logical equality, such as 1.0 (equal to 1 and True) or 0.0 (equal to 0 and False).
# accept mixed case variants
# bool is also an int
# accept int (0, 1) and bool (True, False) -- not just string versions
# if we're still here, the value is unsupported; always fire a deprecation warning
# backwards compatibility with the old code which checked: value in ('yes', 'on', '1', 'true', 1)
# NB: update the doc string to reflect reality once this fallback is removed
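The acceptance rules sketched in the comments above (mixed-case strings, plus int and bool forms) can be illustrated with a minimal converter; the names and exact accepted sets here are assumptions, not the real ansible implementation:

```python
TRUE_STRINGS = frozenset(("yes", "on", "1", "true"))
FALSE_STRINGS = frozenset(("no", "off", "0", "false"))

def to_bool(value):
    # bool is checked first because bool is also an int
    if isinstance(value, bool):
        return value
    # accept int (0, 1) and bool (True, False) -- not just string versions
    if isinstance(value, int):
        if value in (0, 1):
            return bool(value)
    elif isinstance(value, str):
        # accept mixed case variants
        normalized = value.strip().lower()
        if normalized in TRUE_STRINGS:
            return True
        if normalized in FALSE_STRINGS:
            return False
    raise TypeError(f"unsupported boolean value: {value!r}")
```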
# list of BRE special chars:
# https://en.wikibooks.org/wiki/Regular_Expressions/POSIX_Basic_Regular_Expressions
# TODO: implement posix_extended
# It's similar to, but different from python regex, which is similar to,
# but different from PCRE.  It's possible that re.escape would work here.
# https://remram44.github.io/regex-cheatsheet/regex.html#programs
# backward compatibility; ensure consistent result between classic/native Jinja for None/empty string input
# hash is not supported?
# uuid.uuid5() requires bytes on Python 2 and bytes or text on Python 3
# DTFIX-FUTURE: deprecate this filter; there are much better ways via undef, etc...
# allow the user to do `[dict1, dict2, ...] | combine`
# merge all the dicts so that the dict at the end of the array have precedence
# over the dict at the beginning.
# we merge the dicts from the highest to the lowest priority because there is
# a huge probability that the lowest priority dict will be the biggest in size
# (as the low prio dict will hold the "default" values and the others will be "patches")
# and merge_hash creates a copy of its first argument.
# so high/right -> low/left is more efficient than low/left -> high/right
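The semantics described above (later dicts take precedence, nested dicts merged recursively) can be sketched as below; this ignores the right-to-left efficiency optimization the comments describe, and the function bodies are illustrative rather than the real merge_hash implementation:

```python
def merge_hash(low: dict, high: dict) -> dict:
    """Recursively merge 'high' over a copy of 'low' (simplified sketch)."""
    result = dict(low)
    for key, value in high.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            # nested dicts are merged, not replaced wholesale
            result[key] = merge_hash(result[key], value)
        else:
            result[key] = value
    return result

def combine(dicts: list) -> dict:
    # `[dict1, dict2, ...] | combine` — dicts at the end of the list win
    result = {}
    for d in dicts:
        result = merge_hash(result, d)
    return result
```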
# Predefined comment types
# Pointer to the right comment type
# Default params
# Update default params
# Compose substrings for the final string
# Prepend each line of the text with the decorator
# Remove trailing spaces when only decorator is on the line
# Return the final string
# ignore null items
# decrement as we go down the stack
# DTFIX-FUTURE: make these dumb wrappers more dynamic
# base 64
# uuid
# json
# yaml
# path
# file glob
# date formatting
# quote string for shell usage
# hash filters
# md5 hex digest of string
# sha1 hex digest of string
# checksum of string as used by ansible for checksumming files
# generic hashing
# regex
# ? : ;
# random stuff
# comment-style decoration
# debug
# Data structures
# FDI038 - replace this with a standard type compat shim
# Jinja builtins that need special arg handling
# replaces the implementation instead of wrapping it
# Copyright: (c) 2012, Dag Wieers (@dagwieers) <dag@wieers.com>
# Copyright: (c) 2021, Ansible Project
# This list can be an exact match, or start of string bound
# does not accept regex
# relay unexpected errors so bugs in display are reported and don't cause workers to hang
# We don't know the host yet, copy the previous states, for lookup after we process new results
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
# rollback host state
# redo
# Matches KeyboardInterrupt from bin/ansible
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
# internal counters
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
# create the result processing thread for reading results in the background
# holds the list of active (persistent) connections to be shutdown at
# play completion
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
# close active persistent connections
# most likely socket is already closed
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be IteratingStates.COMPLETE by
# this point, though the strategy may not advance the hosts itself.
# return the appropriate code, depending on the status hosts after the run
# create a templar and template things we need later for the queuing process
# and then queue the new task
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
# Pass WorkerProcess its strategy worker number so it can send an identifier along with intra-task requests
# most likely an abort
# This should only happen due to an implicit task created by the
# TaskExecutor, restrict this behavior to the explicit use case
# of an implicit async_status task
# iterate in reversed order since last handler loaded with the same name wins
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
# all host status messages contain 2 entries: (msg, task_result)
# save the current state before failing it for later inspection
# if we're using run_once, we have to fail every host here
# if we're iterating on the rescue portion of a block then
# we save the failed task in a special var for use
# within the rescue/always
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
# only ensure that notified handlers exist, if so save the notifications for when
# handlers are actually flushed so the last defined handlers are executed,
# otherwise depending on the setting either error or warn
# we're currently iterating handlers, so we need to expand this now
# NOTE even with notifications deduplicated this can still happen in case of handlers being
# notified multiple times using different names, like role name or fqcn
# this task added a new host (add_host module)
# ensure host is available for subsequent plays
# this task added a new group (group_by module)
# if delegated fact and we are delegating facts, we need to change target host for them
# Set facts that should always be on the delegated hosts
# find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
# finally, send the ok for this task
# register final results
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
# TODO:  and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the role cache to make sure we're dealing
# with the correct object and mark it as executed
# _post_validate_args is never called for meta actions, so resolved_action hasn't been set
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
# These don't support "when" conditionals
# actually notify proper handlers based on all notifications up to this point
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist. This 'mostly' works here cause meta
# disregards the loop, but should not really use play_context at all
# multiple lines
# cmd.Cmd is old-style class
# pick up any tasks left after clear_host_errors
# prevent infinite loop
# iterate over each task, while there is one left to run
# queue up this task for each host in the inventory
# skip control
# flag set if task is set to any_errors_fatal
# test to see if the task across all hosts points to an action plugin which
# sets BYPASS_HOST_LOOP to true, or if it has run_once enabled. If so, we
# will only send this task to the first host in the list.
# we don't care here, because the action may simply not have a
# corresponding action plugin
# for the linear strategy, we run meta tasks just once and for
# all hosts currently being iterated over rather than one host
# handle step if needed, skip meta actions as they are used internally
# if we're bypassing the host loop, break out now
# go to next host/task group
# let PlayIterator know about any new handlers included via include_role or
# import_role within include_role/include_tasks
# FIXME: send the error to the callback; don't directly write to display here
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
# finally go through all of the hosts and append the
# accumulated blocks to their list of tasks
# don't double-mark hosts, or the iterator will potentially
# fail them out of the rescue/always states
# removed unnecessary exception handler, don't want to mis-attribute the entire code block by changing indentation
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
# This strategy manages throttling on its own, so we don't want it done in queue_task
# the last host to be given a task
# start with all workers being counted as being free
# assume we have no more work to do
# save current position so we know when we've looped back around and need to break
# try and find an unblocked host with a task to run
# peek at the next task for the host, to see if there's
# anything to do for this host
# check if there is work to do, either there is a task or the host is still blocked which could
# mean that it is processing an include task and after its result is processed there might be
# more tasks to run
# set the flag so the outer loop knows we've still found
# some work which needs to be done
# check to see if this host is blocked (still executing a previous task)
# advance the host, mark the host blocked, and queue it
# each task is counted as a worker being busy
# all workers have tasks to do (and the current host isn't done with the play).
# loop back to starting host and break out
# move on to the next host and make sure we
# haven't gone past the end of our hosts list
# if we've looped around back to the start, break out
# each result is counted as a worker being free again
# pause briefly so we don't spin lock
# collect all the final results
# (c) 2018 Red Hat Inc.
# Stored auth appears to be invalid, clear and retry
# Unauthorized and there's no token. Return an error
# (c) 2012-2014, Ansible, Inc
# TREE_DIR comes from the CLI option --tree, only available for adhoc
# Characters that libyaml/pyyaml consider breaks
# NL, NEL, LS, PS
# regex representation of libyaml/pyyaml of a space followed by a break character
# exact type checks occur first against representers, then subclasses against multi-representers
# This method of searching is faster than using a regex
# "special character" logic from pyyaml yaml.emitter.Emitter.analyze_scalar, translated to decimal
# for perf w/ str.translate
# we care more about readability than accuracy, so...
# ...libyaml/pyyaml does not permit trailing spaces for block scalars
# ...libyaml/pyyaml does not permit tabs for block scalars
# ...libyaml/pyyaml only permits special characters for double quoted scalars
# ...libyaml/pyyaml only permits spaces followed by breaks for double quoted scalars
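The "faster than a regex" search mentioned above can be done with str.translate: deleting the break characters and comparing lengths reveals whether any were present. This sketch is illustrative; the real emitter checks more character classes:

```python
# break characters libyaml/pyyaml consider beyond '\n': NEL, LS, PS
BREAKS = "\n\x85\u2028\u2029"
# translation table mapping every break character to None (deletion)
DELETE_BREAKS = str.maketrans("", "", BREAKS)

def contains_break(scalar: str) -> bool:
    # if deleting break characters shrinks the string, one was present
    return len(scalar.translate(DELETE_BREAKS)) != len(scalar)
```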
# FUTURE: fix double-loading of non-collection stdout callback plugins that don't set CALLBACK_NEEDS_ENABLED
# FUTURE: this code is jacked for 2.x- it should just use the type names and always assume 2.0+ for normal cases
# helper for callbacks, so they don't all have to include deepcopy
# v2 method directly implemented by subclass
# no corresponding v1 method
# v1 method directly implemented by subclass
# avoid including v1 on_any in the v1 deprecation below
# show delegated host
# in case we have 'extra resolution'
# Callback does not declare result_format nor extend result_format_callback
# Callback does not declare pretty_results nor extend result_format_callback
# pretty_results=False overrides any specified indentation
# All result keys starting with _ansible_ are internal, so remove them from the result before we output anything.
# remove invocation unless specifically wanting it
# remove diff information from screen output
# remove error/warning values; the stdout callback should have already handled them
# ensure the dumped view matches the transformed view a playbook sees
# Just return ``abridged_result`` without going through serialization
# to permit callbacks to take advantage of ``_dump_results``
# that want to further modify the result, or use custom serialization
# None is a sentinel in this case that indicates default behavior
# default behavior for yaml is to prettify results
# if we already have stdout, we don't need stdout_lines
# if we already have stderr, we don't need stderr_lines
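The two pruning rules above can be sketched in one small helper (an illustrative name, not the callback's actual method):

```python
def drop_redundant_lines(result: dict) -> dict:
    """Drop stdout_lines/stderr_lines when stdout/stderr are present."""
    abridged = dict(result)
    for key in ("stdout", "stderr"):
        # if we already have stdout/stderr, the *_lines copy is redundant
        if key in abridged:
            abridged.pop(f"{key}_lines", None)
    return abridged
```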
# sort_keys=sort_keys  # This requires PyYAML>=5.1
# DTFIX5: add test to exercise this case
# display warnings from the current task result if `warnings` was not removed from `result` (or made falsey)
# display deprecations from the current task result if `deprecations` was not removed from `result` (or made falsey)
# display exception from the current task result if `exception` was not removed from `result` (or made falsey)
# DTFIX5: make/doc/porting-guide a public version of this method?
# format complex structures into 'files'
# just remove them as now they get handled by individual callbacks
# mostly controls that debug only outputs what it was meant to
# FIXME: this is a terrible heuristic to format debug's output- it masks exception detail
# msg should be alone
# 'var' value as field, so eliminate others and what is left should be varname
# V2 METHODS, by default they call v1 counterparts if possible
# FIXME, get real clock
# Attempt to get the async job ID. If the job does not finish before the
# async timeout value, the ID may be within the unparsed 'async_result' dict.
# no v1 correspondence
# (c) 2016 Matt Clay <matt@mystile.com>
# ignore failure if expected and toggle result if asked for
# concatenate task include output from multiple items
# FIXME: this method should not exist, delegate "suggested keys to display" to the plugin or something... As-is, the placement of this
# Cache output prefix for task if provided
# This is needed to properly display 'RUNNING HANDLER' and similar
# when hiding skipped/ok task results
# Preserve task name, as all vars may not be available for templating
# when we need it later
# Explicitly set to None for strategy free/host_pinned to account for any cached
# task title from a previous non-free play
# Display the task banner immediately if we're not doing any filtering based on task result
# args can be specified as no_log in several places: in the task or in
# the argument spec.  We can check whether the task is no_log but the
# argument spec can't be because that is only run on the target
# machine and we haven't run it there yet at this time.
# So we give people a config option to affect display of the args so
# that they can secure this if they feel that their stdout is insecure
# (shoulder surfing, logging stdout straight to a file, etc).
# FIXME: the no_log value is not templated at this point, so any template will be considered truthy
# Use cached task name
# print custom stats if required
# per host
# TODO: come up with 'pretty format'
# print per run custom stats
# show CLI arguments
# transform to a string
# extract just the actual error message from the exception text
# (c) 2017 Red Hat Inc.
# This appears to be benign.
# (c) 2016 Red Hat Inc.
#: compiled bytes regular expressions as stdout
# type: list[re.Pattern]
#: compiled bytes regular expressions as stderr
#: compiled bytes regular expressions to remove ANSI codes
# CSI ? 1 h ESC =
# [Backspace] .
# ANSI reset code
#: terminal initial prompt
#: terminal initial answer
#: Send newline after prompt match
# messages for detecting prompted password issues
# this could be simplified, but kept as is for now for backwards string matching
# handle -XnxxX flags only
# type: str | None
# type: tuple[str, ...]
# many connection plugins cannot provide tty, set to True if your become
# plugin requires a tty, e.g. su
# plugin allows for pipelining execution
# prompt to match
# TODO: add deprecation warning for ValueError in devel that removes the playcontext fallback
# this is a noop, the 'real' runas is implemented
# inside the windows powershell execution subsystem
# See ansible.executor.powershell.become_wrapper.ps1 for the
# parameter names
# Colon or unicode fullwidth colon
# preserve the actual matched string so we can scrub the output
# Prompt handling for ``su`` is more complicated, this
# is used to satisfy the connection plugin
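The "colon or Unicode fullwidth colon" prompt match above can be sketched with a character class containing both U+003A and U+FF1A; the prompt wording and pattern here are illustrative, since the real text varies by locale:

```python
import re

# ASCII colon or fullwidth colon (U+FF1A), optionally followed by spaces
SU_PROMPT = re.compile(r"(?i)password\s*[:\uff1a]\s*$")

def check_password_prompt(output: str) -> bool:
    # preserve-and-scrub of the matched string is omitted in this sketch
    return SU_PROMPT.search(output) is not None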
# (c) 2014, Serge van Ginderachter <serge@vanginderachter.be>
# Copyright 2017 RedHat, inc
#############################################
# type: dict[str, list[str]]
# ignore empty files
# realpath is expensive
# avoid 'chroot' type inventory hostnames /path/to/chroot
# load vars
# cache missing dirs so we don't have to keep looking for things beneath the
# cache non-directory matches
# (c) 2017,  Red Hat, inc
# along with Ansible.  If not, see <https://www.gnu.org/licenses/>.
# Helper methods
# placeholder for backwards compat
# A hostname such as db[1:6]-node is considered to consist of
# three parts:
# head: 'db'
# nrange: [1:6]; range() is a built-in. Can't use the name
# tail: '-node'
# Add support for multiple ranges in a host so:
# db[01:10:3]node-[01:10]
# - to do this we split off at the first [...] set, getting the list
# - also add an optional third parameter which contains the step. (Default: 1)
# range length formatting hint
# range sequence
# not an alpha range
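The expansion described above (split at the first [...] set, recurse for multiple ranges, honor an optional step, keep the zero-padding hint) can be sketched roughly; this is a simplified illustration, not the real parser:

```python
import string

def expand_hostname_range(pattern: str) -> list:
    """Expand one [x:y(:step)] range, recursing for any further ranges."""
    if "[" not in pattern:
        return [pattern]
    head, rest = pattern.split("[", 1)      # split off at the first [...] set
    nrange, tail = rest.split("]", 1)
    parts = nrange.split(":")
    start, end = parts[0], parts[1]
    step = int(parts[2]) if len(parts) > 2 else 1  # optional step, default 1
    hosts = []
    if start.isdigit():
        # a leading zero is the range length formatting hint
        width = len(start) if start.startswith("0") else 0
        for i in range(int(start), int(end) + 1, step):
            # recurse to support multiple ranges: db[01:10:3]node-[01:10]
            hosts.extend(expand_hostname_range(f"{head}{i:0{width}d}{tail}"))
    else:
        # not a numeric range: treat as an alpha range, e.g. [a:c]
        letters = string.ascii_lowercase
        for c in letters[letters.index(start):letters.index(end) + 1:step]:
            hosts.extend(expand_hostname_range(f"{head}{c}{tail}"))
    return hosts
```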
# 3rd party plugins redefine this to
# use custom group name sanitization
# since constructed features enforce
# it by default.
# These attributes are set by the parse() method on this (base) class.
# avoid loader cache so meta: refresh_inventory can pick up config changes
# if we read more than once, fs cache should be good enough
# a plugin can be loaded via many different names with redirection- if so, we want to accept any of those names
# no data
# this is not my config file
# configs are dictionaries
# Can the given hostpattern be parsed as a host with an optional port
# specification?
# not a recognizable host pattern
# Once we have separated the pattern, we expand it into list of one or
# more hostnames, depending on whether it contains any [x:y] ranges.
# process each 'group entry'
# ensure group exists, use sanitized name
# add host to group
# if list item is empty, 'default_value' will be used as group name
# key's value is empty
# exclude case of empty list and dictionary, because these are valid constructions
# simply no groups need to be constructed, but are still falsy
# template trust is applied internally to strings
# hardcode exclusion for TOML to prevent partial parsing of things we know we don't want
# Read in the hosts, groups, and variables defined in the inventory file.
# Faster to do to_text once on a long string than many
# times on smaller strings
# Handle non-utf8 in comment lines: https://github.com/ansible/ansible/issues/17593
# Replace is okay for comment lines
# data.append(to_text(line, errors='surrogate_then_replace'))
# Currently we only need these lines for accurate lineno in errors
# Non-comment lines still have to be valid utf-8
# We behave as though the first line of the inventory is '[ungrouped]',
# and begin to look for host definitions. We make a single pass through
# each line of the inventory, building up self.groups and adding hosts,
# subgroups, and setting variables as we go.
# Skip empty lines and comments
# Is this a [section] header? That tells us what group we're parsing
# definitions for, and what kind of definitions to expect.
# If we haven't seen this group before, we add a new Group.
# Either [groupname] or [groupname:children] is sufficient to declare a group,
# but [groupname:vars] is allowed only if the group is declared elsewhere.
# We add the group anyway, but make a note in pending_declarations to check at the end.
# It's possible that a group is previously pending due to being defined as a child
# group, in that case we simply pass so that the logic below to process pending
# declarations will take the appropriate action for a pending child group instead of
# incorrectly handling it as a var state pending declaration
# When we see a declaration that we've been waiting for, we process and delete.
# It's not a section, so the current state tells us what kind of
# definition it must be. The individual parsers will raise an
# error if we feed them something they can't digest.
# [groupname] contains host definitions that must be added to
# the current group.
# [groupname:vars] contains variable definitions that must be
# applied to the current group.
# [groupname:children] contains subgroup names that must be
# added as children of the current group. The subgroup names
# must themselves be declared as groups, but as before, they
# may only be declared later.
# This can happen only if the state checker accepts a state that isn't handled above.
# Any entries in pending_declarations not removed by a group declaration above mean that there was an unresolved reference.
# We report only the first such error here.
# TODO: We parse variable assignments as a key (anything to the left of
# an '='), an '=', and a value (anything left) and leave the value to
# _parse_value to sort out. We should be more systematic here about
# defining what is acceptable, how quotes work, and so on.
# A host definition comprises (1) a non-whitespace hostname or range,
# optionally followed by (2) a series of key="some value" assignments.
# We ignore any trailing whitespace and/or comments. For example, here
# are a series of host definitions in a group:
# [groupname]
# alpha
# beta:2345 user=admin      # we'll tell shlex
# gamma sudo=True user=root # to ignore comments
# Try to process anything remaining as a series of key=value pairs.
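A tokenizer along these lines, using shlex in POSIX mode with `#` as a comment character, behaves as the examples above describe (a simplified sketch, not the plugin's exact configuration):

```python
import shlex

def split_host_line(line):
    # POSIX-mode shlex with whitespace splitting; '#' starts a comment,
    # so trailing comments are dropped and quoted values stay intact.
    lexer = shlex.shlex(line, posix=True)
    lexer.whitespace_split = True
    lexer.commenters = '#'
    return list(lexer)
```

For example, `split_host_line('gamma sudo=True user=root # to ignore comments')` yields `['gamma', 'sudo=True', 'user=root']`.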
# some YAML parsing prevention checks
# NB: intentional coercion of tuple/set to list, deal with it
# FIXME: enforce keys are strings
# literal_eval parses ellipsis, but it's not a supported variable type
# convert unsupported variable types recognized by literal_eval back to str
# Using explicit exceptions.
# Likely a string that literal_eval does not like. We will then just set it.
# For some reason this was thought to be malformed.
# Is this a hash with an equals at the end?
# this is mostly unnecessary, but prevents the (possible) case of bytes literals showing up in inventory
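The fallback behaviour described above amounts to roughly this (a simplified sketch; the real value parser handles more cases):

```python
import ast

def parse_value(v):
    # Try literal_eval first; on failure, or for result types we don't
    # support (Ellipsis, bytes), just keep the original string.
    try:
        result = ast.literal_eval(v)
    except (ValueError, SyntaxError):
        # Likely a plain string that literal_eval does not like.
        return v
    if result is Ellipsis or isinstance(result, bytes):
        return v
    return result
```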
# Section names are square-bracketed expressions at the beginning of a
# line, comprising (1) a group name optionally followed by (2) a tag
# that specifies the contents of the section. We ignore any trailing
# whitespace and/or comments. For example:
# [somegroup:vars]
# [naughty:children] # only get coal in their stockings
# FIXME: What are the real restrictions on group names, or rather, what
# should they be? At the moment, they must be non-empty sequences of
# non-whitespace characters excluding ':' and ']', but we should define more
# precise rules in order to support better diagnostics.
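A pattern matching the rules as currently stated might look like this (illustrative; not the plugin's actual regex):

```python
import re

# Group name: non-empty run of characters excluding ':', ']' and whitespace;
# an optional ':tag' selects the section kind (vars/children/...).
SECTION = re.compile(r'^\[([^:\]\s]+)(?::(\w+))?\]')

def parse_section(line):
    m = SECTION.match(line)
    return m.groups() if m else None
```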
# advanced_host_list does not set vars, so needs no special trust assistance from the inventory API
# implicit trust behavior is already added by the YAML parser invoked by the loader
# Allow pass-through of data structures for templating later (if applicable).
# This limitation was part of the original plugin implementation and was updated to maintain feature parity with the new templating API.
# Copyright (c) 2018 Matt Martz <matt@sivel.net>
# we need the inventory system to mark trust for us, since we're not manually traversing var assignments
# Copyright (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
# obj will be added by inventory manager
# if no other errors happened, and you want to force displaying stderr, do so now
# A "_meta" subelement may contain a variable "hostvars" which contains a hash for each host
# if this "hostvars" exists at all then do not call --host for each host.
# This is for efficiency and scripts should still return data
# if called with --host for backwards compat with 1.2 and earlier.
# Use standard legacy trust inversion here.
# Unlike the normal inventory output, everything here is considered a variable and thus supports trust (and trust inversion).
# DTFIX-FUTURE: another use case for the "not quite help text, definitely not message" diagnostic output on errors
# host_list does not set vars, so needs no special trust assistance from the inventory API
# no need to set trusted_by_default, since the consumers of this value will always consult the real plugin substituted during our parse()
# unfortunate magic to swap the real plugin type we're proxying here into the inventory data API wrapper, so the wrapper can make the right compat
# decisions based on the metadata the real plugin provides instead of our metadata
# We expect top level keys to correspond to groups, iterate over them
# to get host, vars and subgroups (which we iterate over recursively)
# make sure they are dicts
# convert strings to dicts as these are allowed
# Go over hosts (less var copies)
# get available variables to templar
# adds facts if cache is active
# create composite vars
# refetch host vars in case new ones have been created above
# constructed groups based on conditionals
# constructed groups based on variable values
# Copyright (c) 2014, Chris Church <chris@ninemoreminutes.com>
# Common shell filenames that this plugin handles.
# Note: sh is the default shell plugin so this plugin may also be selected
# if the filename is not listed in any Shell plugin.
# This code needs to be SH-compliant. BASH-isms will not work if /bin/sh points to a non-BASH shell.
# Family of shells this has.  Must match the filename without extension
# commonly used
# How to end lines in a python script one-liner
# In the following test, each condition is a check and logical
# comparison (|| or &&) that sets the rc value.  Every check is run so
# the last check in the series to fail will be the rc that is returned.
# If a check fails we error before invoking the hash functions because
# hash functions may successfully take the hash of a directory on BSDs
# (UFS filesystem?) which is not what the rest of the ansible code expects
# If all of the available hashing methods fail we fail with an rc of 0.
# This logic is added to the end of the cmd at the bottom of this function.
# Return codes:
# checksum: success!
# 0: Unknown error
# 1: Remote file does not exist
# 2: No read permissions on the file
# 3: File is a directory
# 4: No python interpreter
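On the controller side, those sentinel values have to be distinguished from a real checksum; roughly (illustrative helper, not Ansible's actual error handling):

```python
# Map the single-character sentinel outputs listed above to messages.
CHECKSUM_ERRORS = {
    '0': 'unknown error',
    '1': 'remote file does not exist',
    '2': 'no read permission on remote file',
    '3': 'remote path is a directory',
    '4': 'python interpreter not found',
}

def interpret_checksum(output):
    out = output.strip()
    if out in CHECKSUM_ERRORS:
        raise RuntimeError(CHECKSUM_ERRORS[out])
    return out  # an actual checksum string
```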
# Quoting gets complex here.  We're writing a python string that's
# used by a variety of shells on the remote host to invoke a python
# "one-liner".
# NOQA
# (c) 2016 RedHat
# This file is part of Ansible.
# Not used but here for backwards compatibility.
# ansible.posix.fish references (but does not actually use) this value.
# https://github.com/ansible-collections/ansible.posix/blob/f41f08e9e3d3129e709e122540b5ae6bc19932be/plugins/shell/fish.py#L38-L39
# Normalize the tmp directory strings. We don't use expanduser/expandvars because those
# can vary between remote user and become user.  Therefore the safest practice will be for
# this to always be specified as full paths.
# Make sure all system_tmpdirs are absolute otherwise they'd be relative to the login dir
# which is almost certainly going to fail in a cornercase.
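The absoluteness check can be as simple as the following sketch (filtering shown for illustration; the real code errors out on non-absolute entries instead):

```python
import os.path

system_tmpdirs = ['/tmp', '/var/tmp', 'relative/tmp']

# Drop anything that would be resolved relative to the login directory.
absolute_tmpdirs = [d for d in system_tmpdirs if os.path.isabs(d)]
```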
# We can remove the try: except in the future when we make ShellBase a proper subset of
# *all* shells.  Right now powershell and third party shells which do not use the
# shell_common documentation fragment (and so do not have system_tmpdirs) will fail
# some shells (eg, powershell) are snooty about filenames/extensions, this lets the shell plugin have a say
# When system is specified we have to create this in a directory where
# other users can read and access the tmp directory.
# This is because we use system to create tmp dirs for unprivileged users who are
# sudo'ing to a second unprivileged user.
# The 'system_tmpdirs' setting defines directories we can use for this purpose
# the defaults are /tmp and /var/tmp.
# So we only allow one of those locations if system=True, using the
# passed in tmpdir if it is valid or the first one from the setting if not.
# use mkdir -p to ensure parents exist, but mkdir fullpath to ensure last one is created by us
# change the umask in a subshell to achieve the desired mode
# also for directories created with `mkdir -p`
# Check that the user_path to expand is safe
# if present the user name is appended to resolve "that user's home"
# these are the metachars that have a special meaning in cmd that we want to escape when quoting
# Common shell filenames that this plugin handles
# type: frozenset[str]
# Used by various parts of Ansible to do Windows specific changes
# cmd does not support single quotes that the shlex_quote uses. We need to override the quoting behaviour to
# better match cmd.exe.
# https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/
# Return an empty argument
# Escape the metachars as we are quoting the string to stop cmd from interpreting that metachar. For example
# 'file &whoami.exe' would result in 'file $(whoami.exe)' instead of the literal string
# https://stackoverflow.com/questions/3411771/multiple-character-replace-with-python
# '^' must be the first char that we scan and replace
# I can't find any docs that explicitly say this but to escape ", it needs to be prefixed with \^.
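A caret-escaping pass along these lines illustrates the ordering constraint; the metachar set and the special `\^` handling for `"` noted above are simplified here:

```python
# '^' must be escaped first so the carets we add are not escaped again.
CMD_METACHARS = '^()%!"<>&|'

def escape_cmd(value):
    for c in CMD_METACHARS:
        value = value.replace(c, '^' + c)
    return value
```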
# This is weird, we are matching on byte sequences that match the utf-16-be
# matches for '_x(a-fA-F0-9){4}_'. The \x00 and {4} will match the hex sequence
# when it is encoded as utf-16-be byte sequence.
# If the line does not contain the closing CLIXML tag, we just
# add the found header line and this line without trying to parse.
# While we expect the stderr to be UTF-8 encoded, we fallback to
# the most common "ANSI" codepage used by Windows cp437 if it is
# not valid UTF-8.
# cp437 can decode any sequence and once we have the string, we
# can encode any cp437 chars to UTF-8.
# Any errors and we just add the original CLIXML header and
# line back in.
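The decode strategy described above is a sketch away; cp437 maps every byte value, so the fallback cannot fail:

```python
def decode_stderr_line(data):
    # Prefer UTF-8; fall back to the Windows OEM codepage cp437.
    try:
        return data.decode('utf-8')
    except UnicodeDecodeError:
        return data.decode('cp437')
```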
# The next line should contain the full CLIXML data.
# This should never happen but if there was a CLIXML header without a newline
# following it, we need to add it back.
# A serialized string will serialize control chars and surrogate pairs as
# _xDDDD_ values where DDDD is the hex representation of a big endian
# UTF-16 code unit. As a surrogate pair uses 2 UTF-16 code units, we need
# to operate our text replacement on the utf-16-be byte encoding of the raw
# text. This allows us to replace the _xDDDD_ values with the actual byte
# values and then decode that back to a string from the utf-16-be bytes.
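A sketch of that utf-16-be replacement (simplified relative to the real CLIXML handling):

```python
import re

# Match '_xDDDD_' as it appears in the utf-16-be byte stream: every ASCII
# char becomes \x00 followed by the char.
_ESCAPE = re.compile(b'\x00_\x00x((?:\x00[0-9a-fA-F]){4})\x00_')

def deserialize_clixml_string(text):
    data = text.encode('utf-16-be')

    def repl(match):
        # 'D83D' etc. -> the raw big-endian UTF-16 code unit bytes.
        return bytes.fromhex(match.group(1).decode('utf-16-be'))

    return _ESCAPE.sub(repl, data).decode('utf-16-be')
```

A control char round-trips (`'_x000A_'` becomes a newline) and a surrogate pair (`'_xD83D__xDE00_'`) decodes to a single code point.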
# There are some scenarios where the stderr contains a nested CLIXML element like
# '<# CLIXML\r\n<# CLIXML\r\n<Objs>...</Objs><Objs>...</Objs>'.
# Parse each individual <Objs> element and add the error strings to our stderr list.
# https://github.com/ansible/ansible/issues/69550
# If this is a new CLIXML element, add a newline to separate the messages.
# Powershell is handled differently.  It's selected when winrm is the
# connection
# We try catch as some connection plugins don't have a console (PSRP).
# TODO: add binary module support
# powershell/winrm env handling is handled in the exec wrapper
# use normpath() to remove doubled slashed and convert forward to backslashes
# Because ntpath.join treats any component that begins with a backslash as an absolute path,
# we have to strip slashes from at least the beginning, otherwise join will ignore all previous
# path components except for the drive.
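ntpath's behaviour here is easy to demonstrate:

```python
import ntpath

# A component with a leading backslash resets the path, keeping only the drive:
joined_abs = ntpath.join('C:\\temp\\ansible', '\\module.ps1')
# After stripping the leading slash, components accumulate as expected:
joined_rel = ntpath.join('C:\\temp\\ansible', 'module.ps1')
```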
# powershell requires that script files end with .ps1
# Allow Windows paths to be specified using either slash.
# This is not called in Ansible anymore but it is kept for backwards
# compatibility in case other action plugins outside Ansible call this.
# Windows does not have an equivalent for the system temp files, so
# the param is ignored
# compatibility in case other action plugins outside Ansible called this.
# pipelining bypass
# non-pipelining
# Running a module without the exec_wrapper and with an argument
# Running a module with ANSIBLE_KEEP_REMOTE_FILES=true, the script
# arg is actually the input manifest JSON to provide to the bootstrap
# wrapper.
# The module is assumed to be a binary
# There are 5 chars that need to be escaped inside a single-quoted string.
# https://github.com/PowerShell/PowerShell/blob/b7cb335f03fe2992d0cbd61699de9d9aafa1d7c1/src/System.Management.Automation/engine/parser/CharTraits.cs#L265-L272
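The five characters are the ASCII apostrophe plus four Unicode single-quote variants, each escaped by doubling inside a single-quoted string (a sketch based on the linked CharTraits list; not necessarily Ansible's exact implementation):

```python
# U+0027 plus the left/right/low-9/high-reversed-9 single quotation marks.
_PS_SINGLE_QUOTES = "'\u2018\u2019\u201a\u201b"

def ps_single_quote(value):
    for q in _PS_SINGLE_QUOTES:
        value = value.replace(q, q * 2)
    return "'" + value + "'"
```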
# try to propagate exit code if present- won't work with begin/process/end-style scripts (ala put_file)
# NB: the exit code returned may be incorrect in the case of a successful command followed by an invalid command
# file testing
# (c) 2016, Ansible, Inc
# numbers
# some modules return a 'results' key
# For async tasks, return status
# For non-async tasks, warn user, but return as if started
# For non-async tasks, warn user, but return as if finished
# failure testing
# changed testing
# skip testing
# async testing
# version comparison
# lists
# truthiness
# vault
# overrides that require special arg handling
# (c) 2015, Ansible, Inc
# path testing
# paramiko and gssapi are incompatible and raise AttributeError not ImportError
# When running in FIPS mode, cryptography raises InternalError
# https://bugzilla.redhat.com/show_bug.cgi?id=1778939
# In case xml reply is transformed or namespace is removed in
# ncclient device specific handler return modified xml response
# if data node is present in xml response return the xml string
# with data node as root
# return raw xml string received from host with rpc-reply as the root node
# TODO Restore .xml, when ncclient supports it for all platforms
# Copyright: (c) 2016, Allen Sanabria <asanabria@linuxdynasty.org>
# convert/validate extensions list
# tmp no longer has any effect
# Validate arguments
# set internal vars from args
# origin.path is not present for ad-hoc tasks
# Depth 1 is the root, relative_to omits the root
# support empty files, but not falsey values
# Never include main.yml from a role, as that is the default included by the role
# Copyright: (c) 2016-2018, Matt Davis <mdavis@ansible.com>
# Copyright: (c) 2018, Sam Doran <sdoran@redhat.com>
# No args were provided
# Convert seconds to minutes. If less than 60, set it to 0.
# FIXME: only execute the module if we don't already have the facts we need
# Convert bare strings to a list
# prevent collection search by calling with ansible.legacy (still allows library/ override of find)
# override connection timeout from defaults to the custom value
# try and get boot time
# FreeBSD returns an empty string immediately before reboot so adding a length
# check to prevent prematurely assuming system has rebooted
# may need to reset the connection in case another reboot occurred
# which has invalidated our connection
# Use exponential backoff with a max timeout, plus a little bit of randomness
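The retry schedule described can be sketched as follows (constants are illustrative, not the plugin's settings):

```python
import random

def backoff_delays(attempts, cap=12.0):
    # Double the base delay each attempt, clamp at the cap, add jitter.
    delay = 1.0
    for _ in range(attempts):
        yield min(delay, cap) + random.uniform(0, 1)
        delay *= 2
```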
# If the connection is closed too quickly due to the system being shutdown, carry on
# keep on checking system boot_time with short connection responses
# Get the connect_timeout set on the connection to compare to the original
# reset the connection to clear the custom connection timeout
# finally run test command to ensure everything is working
# FUTURE: add a stability check (system must remain up for N seconds) to deal with self-multi-reboot updates
# If running with local connection, fail so we don't reboot ourselves
# Get current boot time
# Get the original connection_timeout option var so it can be reset after
# Initiate reboot
# Make sure reboot was successful
# coding: utf-8
# Purposefully not using to_bytes here for performance reasons
# A set of valid arguments
# behavioral attributes
# avoid circular global import since PluginLoader needs ActionBase
# shared_loader_obj was just a ref to `ansible.plugins.loader` anyway; this lets us inherit its type
# interpreter discovery state
# does not default to {'changed': False, 'failed': False}, as it used to break async
# Error if invalid argument is passed
# Fail for validation errors, even in check mode
# Search module path(s) for named module.
# Check to determine if PowerShell modules are supported, and apply
# some fixes (hacks) to module name + args.
# FIXME: This should be temporary and moved to an exec subsystem plugin where we can define the mapping
# for each subsystem.
# async_status, win_stat, win_file, win_copy, and win_ping are not just like their
# python counterparts but they are compatible enough for our
# internal usage
# NB: we only rewrite the module if it's not being called by the user (eg, an action calling something else)
# and if it's unqualified or FQ to a builtin
# TODO: move this tweak down to the modules, not extensible here
# Remove extra quotes surrounding path parameters before sending to module.
# take the last one in the redirect list, we may have successfully jumped through N other redirects
# This is a for-else: http://bit.ly/1ElPkyg
# insert shared code and arguments into the module
# modify_module will exit early if interpreter discovery is required; re-run after if necessary
# update the local task_vars with the discovered interpreter (which might be None);
# we'll propagate back to the controller in the task result
# update the local vars copy for the retry
# TODO: this condition prevents 'wrong host' from being updated
# but in future we would want to be able to update 'delegated host facts'
# irrespective of task settings
# store in local task_vars facts collection for the retry and any other usages in this worker
# preserve this so _execute_module can propagate back to controller as a fact
# The order of environments matters to make sure we merge
# in the parent's values first so those in the block then
# task 'win' in precedence
# very deliberately using update here instead of combine_vars, as
# these environment settings should not need to merge sub-dicts
# plugin does not have, fallback to play_context
# TODO: use 'current user running ansible' as fallback when moving away from play_context
# pwd.getpwuid(os.getuid()).pw_name
# plugin does not have remote_user option, fallback to default and/play_context
# plugin does not use config system, fallback to old play_context
# if we don't use become then we know we aren't switching to a
# different unprivileged user
# if we use become and the user is not an admin (or same user) then
# we need to return become_unprivileged as True
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths, as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
# NOTE: shell plugins should populate this setting anyway, but they don't do remote expansion, which
# we need for 'non posix' systems like cloud-init and solaris
# error handling on this seems a little aggressive?
# stdout was empty or just space, set to / to trigger error in next if
# Catch failure conditions, files should never be
# written to locations in /.
# If we have gotten here we have a working connection configuration.
# If the connection breaks we could leave tmp directories out on the remote system.
# Step 1: Are we on windows?
# This won't work on Powershell as-is, so we'll just completely
# skip until we have a need for it, at which point we'll have to do
# something different.
# Step 2: If we're not becoming an unprivileged user, we are roughly
# done. Make the files +x if we're asked to, and return.
# Can't depend on the file being transferred with required permissions.
# Only need user perms because no become was used here
# If we're still here, we have an unprivileged user that's different
# than the ssh user.
# Try to use file system acls to make the files readable for sudo'd
# user
# Apple patches their "file_cmds" chmod with ACL support
# POSIX-draft ACL specification. Solaris, maybe others.
# See chmod(1) on something Solaris-based for syntax details.
# TODO: this form fails silently on freebsd.  We currently
# never call _fixup_perms2() with execute=False but if we
# start to we'll have to fix this.
# Apple
# POSIX-draft
# Step 3a: Are we able to use setfacl to add user ACLs to the file?
# Step 3b: Set execute if we need to. We do this before anything else
# because some of the methods below might work but not let us set
# permissions as part of them.
# Step 3c: File system ACLs failed above; try falling back to chown.
# Check if we are an admin/root user. If we are and got here, it means
# we failed to chown as root and something weird has happened.
# Step 3d: Try macOS's special chmod + ACL
# macOS chmod's +a flag takes its own argument. As a slight hack, we
# pass that argument as the first element of remote_paths. So we end
# up running `chmod +a [that argument] [file 1] [file 2] ...`
# Solaris-based chmod will return 5 when it sees an invalid mode,
# and +a is invalid there. Because it returns 5, which is the same
# thing sshpass returns on auth failure, our sshpass code will
# assume that auth failed. If we don't handle that case here, none
# of the other logic below will get run. This is fairly hacky and a
# corner case, but probably one that shows up pretty often in
# Solaris-based environments (and possibly others).
# Step 3e: Try Solaris/OpenSolaris/OpenIndiana-sans-setfacl chmod
# Similar to macOS above, Solaris 11.4 drops setfacl and takes file ACLs
# via chmod instead. OpenSolaris and illumos-based distros allow for
# using either setfacl or chmod, and compatibility depends on filesystem.
# It should be possible to debug this branch by installing OpenIndiana
# (use ZFS) and going unpriv -> unpriv.
# we'll need this down here
# Step 3f: Common group
# Otherwise, we're a normal user. We failed to chown the paths to the
# unprivileged user, but if we have a common group with them, we should
# be able to chown it to that.
# Note that we have no way of knowing if this will actually work... just
# because chgrp exits successfully does not mean that Ansible will work.
# We could check if the become user is in the group, but this would
# create an extra round trip.
# Also note that due to the above, this can prevent the
# world_readable_temp logic below from ever getting called. We
# leave this up to the user to rectify if they have both of these
# features enabled.
# warn user that something might go weirdly here.
# Step 4: World-readable temp directory
# chown and fs acls failed -- do things this insecure way only if
# the user opted in in the config file
# No longer used
# ansible.windows.win_stat added this in 1.11.0
# Unknown opts are ignored as module_args could be specific to the
# module that is being executed.
# empty might be matched, 1 should never match, also backwards compatible
# happens sometimes when it is a dir and not on bsd
# We only expand ~/path and ~username/path
# Per Jborean, we don't have to worry about Windows as we don't have a notion of user's home
# dir there.
# use remote user instead, if none set default to current user
# use shell to construct appropriate command and execute
# Something went wrong trying to expand the path remotely. Try using pwd, if not, return
# the original string
# set check mode in the module arguments, if required
# set no log in the module arguments, if required
# set debug in the module arguments, if required
# let module know we are in diff mode
# let module know our verbosity
# give the module information about the ansible version
# give the module information about its name
# set the syslog facility to be used in the module
# let module know about filesystems that selinux treats specially
# give the module the socket for persistent connections
# make sure all commands use the designated shell executable
# make sure modules are aware if they need to keep the remote files
# make sure all commands use the designated temporary directory if created
# force fallback on remote_tmp as user cannot normally write to dir
# make sure the remote_tmp value is sent through in case modules needs to create their own
# tells the module to ignore options that are not in its argspec.
# allow user to insert string to add context to remote logging
# We set the module_style to new here so the remote_tmp is created
# before the module args are built if remote_tmp is needed (async).
# If the module_style turns out to not be new and we didn't create the
# remote tmp here, it will still be created. This must be done before
# calling self._update_module_args() so the module wrapper has the
# correct remote_tmp value set
# if a module name was not specified for this execution, use the action from the task
# FUTURE: refactor this along with module build process to better encapsulate "smart wrapper" functionality
# we might need remote tmp dir
# we'll also need a tmp file to hold our module arguments
# we need to dump the module args to a k=v string in a file on
# the remote system, which can be read and parsed by the module
# remove the ANSIBLE_ASYNC_DIR env entry if we added a temporary one for
# the async_wrapper task.
# configure, upload, and chmod the async_wrapper module
# call the interpreter for async_wrapper directly
# this permits use of a script for an interpreter on non-Linux platforms
# maintain a fixed number of positional parameters for async_wrapper
# Fix permissions of the tmpdir path and tmpdir files. This should be called after all
# files have been transferred.
# remove none/empty
# actually execute
# parse the main result
# NOTE: INTERNAL KEYS ONLY ACCESSIBLE HERE
# get internal info before cleaning
# NOTE: dnf returns results .. but that made it 'compatible' with squashing, so we allow mappings, for now
# remove internal keys
# async_wrapper will clean up its tmpdir on its own so we want the controller side to
# forget about it now
# FIXME: for backwards compat, figure out if still makes sense
# pre-split stdout/stderr into lines if needed
# if the value is 'False', a default won't catch it.
# propagate interpreter discovery results back to the controller
# this must occur after normalize_result_exception, since it checks the type of data to ensure it's a dict
# "not found" case is currently not tested; it was once reproducible
# see: https://github.com/ansible/ansible/pull/53534
# cause context *might* be useful in the traceback, but the JSON deserialization failure message is not
# Because the underlying action API is built on result dicts instead of exceptions (for all but the most catastrophic failures),
# we're using a tweaked version of the module exception handler to get new ErrorDetail-backed errors from this part of the code.
# Ideally this would raise immediately on failure, but this would likely break actions that assume `ActionBase._execute_module()`
# does not raise on module failure.
# FIXME: move to connection base
# if not cmd:
# https://github.com/ansible/ansible/issues/68054
# if sudoable and have become
# if not using network_cli
# if we allow same user PE or users are different and either is set
# mitigation for SSH race which can drop stdout (https://github.com/ansible/ansible/issues/13876)
# only applied for the default executable to avoid interfering with the raw action
# Change directory to basedir of task for command execution when connection is local
# stdout and stderr may be either a file-like or a bytes object.
# Convert either one to a text type
# be sure to remove the BECOME-SUCCESS message now
# Note: Since we do not diff the source and destination before we transform from bytes into
# text the diff between source and destination may not be accurate.  To fix this, we'd need
# to move the diffing from the callback plugins into here.
# Example of data which would cause trouble is src_content == b'\xff' and dest_content ==
# b'\xfe'.  Neither of those are valid utf-8 so both get turned into the replacement
# character: diff['before'] = u'�' ; diff['after'] = u'�'  When the callback plugin later
# diffs before and after it shows an empty diff.
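The failure mode described is easy to reproduce: two different invalid byte sequences both decode to the replacement character, so the before/after texts compare equal even though the bytes differ.

```python
# b'\xff' and b'\xfe' are each invalid UTF-8, so both become U+FFFD.
before = b'\xff'.decode('utf-8', errors='replace')
after = b'\xfe'.decode('utf-8', errors='replace')
```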
# dwim already deals with playbook basedirs
# if missing it will return a file not found exception
# deal with 'setup specific arguments'
# TODO: remove in favor of controller side argspec detecting valid arguments
# network facts modules must support gather_subset
# Strip out keys with ``None`` values, effectively mimicking ``omit`` behavior
# This ensures we don't pass a ``None`` value as an argument expecting a specific type
# handle module defaults
# on conflict the last plugin processed wins, but try to do deep merge and append to lists.
# remove as this will cause 'module not found' errors
# most don't realize how setup works with networking connection plugins (forced_local)
# no network OS and setup not in list, add setup by default since 'smart'
# copy the value with list() so we don't mutate the config
# serially execute each module
# just one module, no need for fancy async
# TODO: use gather_timeout to cut module execution if module itself does not support gather_timeout
# do it async, aka parallel
# TODO: make this action complain about async/async settings, use parallel option instead .. or remove parallel in favor of async settings?
# restore value for post processing
# tell executor facts were gathered
# hack to keep --verbose from showing all the setup module result
# (c) 2015, Brian Coca  <briancoca+dev@gmail.com>
# (c) 2018, Matt Martz  <matt@sivel.net>
# everything is remote, so we just execute the module
# without changing any of the module arguments
# call with ansible.legacy prefix to prevent collections collisions while allowing local override
# Copyright: (c) 2023, Ansible Project
# Carry-over concept from the package action plugin
# if we delegate, we should use delegated host's facts
# could not get it from template!
# eliminate collisions with collections search while still allowing local override
# Copyright 2012, Seth Vidal <skvidal@fedoraproject.org>
# We need to be able to modify the inventory
# TODO: create 'conflict' detection in base class to deal with repeats and aliases and warn user
# Parse out any hostname:port patterns
# not a parsable hostname, but might still be usable
# add it to the group if that was specified
# Add any variables to the new_host
# individual modules might disagree, but as the generic action plugin, pass at this point.
# do work!
# moved from setup module as now we filter out all _ansible_ from result
# remove a temporary path we created
# explicitly call `ansible.legacy.command` for backcompat to allow library/ override of `command` while not allowing
# collections search for an unqualified `command` module
# Copyright 2012, Tim Bielawa <tbielawa@redhat.com>
# Don't break backwards compat, allow floats, by using int callable
# Add a note saying the output is hidden if echo is disabled
# If no custom prompt is specified, set a default prompt
# Begin the hard work!
# show the timer and control prompts
# show the prompt specified in the task
# corner case where enter does not continue, wait for timeout/interrupt only
# don't complete on LF/CR; we expect a timeout/interrupt and ignore user input when a pause duration is specified
# Only echo input if no timeout is specified
# wait specified duration
# user interrupt
# Copyright 2012, Jeroen Hoekx <jeroen@hoekx.be>
# Copyright: (c) 2015, Michael DeHaan <michael.dehaan@gmail.com>
# Options type validation
# strings
# booleans
# assign to local vars for ease of use
# We need to convert unescaped sequences to proper escaped sequences for Jinja2
# logical validation
# template the source data locally & get ready to transfer
# add ansible 'template' vars
# mode is either the mode from task.args or the mode of the source file if the task.args
# mode == 'preserve'
# remove 'template only' options:
# call with ansible.legacy prefix to eliminate collisions with collections while still allowing local override
# Shell module is implemented via command with a special arg
# Shell shares the same module code as command. Fail if command
# specific options are set.
# FIXME: validate source and dest are strings; use basic.py and module specs
# Get checksum for the remote file. Don't bother if using become as slurp will be used.
# Follow symlinks because fetch always follows symlinks
# Historically, these don't fail because you may want to transfer
# a log file that possibly MAY exist but keep going to fetch other
# log files. Today, this is better achieved by adding
# ignore_errors or failed_when to the task.  Control the behaviour
# via fail_when_missing
# use slurp if permissions are lacking or privilege escalation is needed
# calculate the destination name
# ensure we only use file name, avoid relative paths
# TODO: ? dest = os.path.expanduser(dest.replace(('../','')))
# if the path ends with "/", we'll use the source filename as the
# destination filename
# if dest does not start with "/", we'll assume a relative path
# files are saved in dest dir, with a subdir for each host, then the filename
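The destination calculation above can be sketched as follows; `build_fetch_dest` and its exact rules are illustrative assumptions, not the real fetch action code (which has further rules such as flat mode):

```python
import os

def build_fetch_dest(dest, src, hostname):
    # Hypothetical helper sketching the layout described above:
    # files land under dest, in a subdir per host, using only the file name.
    filename = os.path.basename(src)               # never a relative path
    return os.path.join(dest, hostname, filename)  # dest/<host>/<filename>
```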
# calculate checksum for the local file
# create the containing directories, if needed
# fetch the file and check for changes
# For backwards compatibility. We'll return None on FIPS enabled systems
# (c) 2017 Toshio Kuratomi <tkuratomi@ansible.com>
# Supplement the FILE_COMMON_ARGUMENTS with arguments that are specific to file
# Convert the path segments into byte strings
# Dereference the symlink
# Add the file pointed to by the symlink
# Mark this file as a symlink to copy
# Just a normal file
# Just insert the symlink if the target directory
# exists inside of the copy already
# Walk the dirpath to find all parent directories.
# Reached the point at which the directory
# tree is already known.  Don't add any
# more or we might go to an ancestor that
# isn't being copied.
# This was a circular symlink.  So add it as
# a symlink
# Walk the directory pointed to by the symlink
# Add the symlink to the destination
# Just a normal directory
# Check if the source ends with a "/" so that we know which directory
# level to work at (similar to rsync)
# Calculate the offset needed to strip the base_path to make relative
# paths
# Make sure we're making the new paths relative
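The rsync-like offset calculation can be sketched like this; the function name and simplifications are assumptions:

```python
import os

def walk_offset(source):
    # A trailing "/" means "copy the contents", so relative paths start
    # below source itself; without it the final path component is kept.
    if source.endswith("/"):
        base_path = source.rstrip("/")
    else:
        base_path = os.path.dirname(source)
    return len(base_path) + 1  # offset that strips "base_path/" from full paths
```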
# Actually walk the directory hierarchy
# NOTE: adding invocation arguments here needs to be kept in sync with
# any no_log specified in the argument_spec in the module.
# This is not automatic.
# NOTE: do not add to this. This should be made a generic function for action plugins.
# This should also use the same argspec as the module instead of keeping it in sync.
# NOTE: Should be removed in the future. For now keep this broken
# behaviour, have a look at PR 51582
# If the local file does not exist, get_real_file() raises AnsibleFileNotFound
# Get the local mode and set if user wanted it preserved
# https://github.com/ansible/ansible-modules-core/issues/1124
# This is kind of optimization - if user told us destination is
# dir, do path manipulation right away, otherwise we still check
# for dest being a dir via remote call below.
# Attempt to get remote file info
# The dest is a directory.
# If source was defined as content remove the temporary file and fail out.
# Append the relative source location to the destination and get remote stats again
# remote_file exists so continue to next iteration.
# Generate a hash of the local file.
# The checksums don't match and we will change or error out.
# Define a remote directory that we will copy the file to.
# ensure we keep suffix for validate
# We have copied the file remotely and no longer require our content_tempfile
# FIXME: I don't think this is needed when PIPELINING=0 because the source is created
# world readable.  Access to the directory itself is controlled via fixup_perms2() as
# part of executing the module. Check that umask with scp/sftp/piped doesn't cause
# a problem before acting on this idea. (This idea would save a round-trip)
# fix file permissions when the copy is done as a different user
# Continue to next iteration if raw is defined.
# Run the copy module
# src and dest here come after original and override them
# we pass dest only to make sure it includes trailing slash in case of recursive copy
# no need to transfer the file, already correct hash, but still need to call
# the file module in case we want to change attributes
# Fix for https://github.com/ansible/ansible-modules-core/issues/1568.
# If checksums match, and follow = True, find out if 'dest' is a link. If so,
# change it to point to the source of the link.
# Build temporary module_args.
# src is sent to the file module in _original_basename, not in src
# Execute the file module.
# ensure user is not setting internal parameters
# Define content_tempfile in case we set it after finding content populated.
# If content is defined make a tmp file and write the content into it.
# If content comes to us as a dict it should be decoded json.
# We need to encode it back into a string to write it out.
# if we have first_available_file in our vars
# look up the files and use the first one we find as src
# find_needle returns a path that may not have a trailing slash on
# a directory so we need to determine that now (we use it just
# like rsync does to figure out whether to include the directory
# or only the files inside the directory)
# find in expected paths
# A list of source file tuples (full_path, relative_path) which will try to copy to the destination
# If source is a directory populate our list else source is a file and translate it to a tuple.
# Get a list of the files we want to replicate on the remote side
# If it's recursive copy, destination is always a dir,
# explicitly mark it so (note - copy module relies on this).
# FIXME: Can we optimize cases where there's only one file, no
# symlinks and any number of directories?  In the original code,
# empty directories are not copied....
# A register for if we executed a module.
# Used to cut down on command calls when not recursive.
# expand any user home dir specifier
# copy files over.  This happens first as directories that have
# a file do not need to be created later
# We only follow symlinks for files in the non-recursive case
# Find directories that are leaves as they might not have been
# created yet.
# Use file module to create these
# Copy symlinks over
# Only follow remote symlinks in the non-recursive case
# file module cannot deal with 'preserve' mode and is meaningless
# for symlinks anyway, so just don't pass it.
# the file module returns the file path as 'path', but
# the copy module uses 'dest', so add it if it's not there
# Delete tmp path
# (c) 2013-2016, Michael DeHaan <michael.dehaan@gmail.com>
# always put a newline between fragments if the previous fragment didn't end with a newline.
# delimiters should only appear between fragments
# un-escape anything like newlines
# always make sure there's a newline after the
# delimiter, so lines don't run together
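The assembly rules above can be sketched in a few lines; this is an assumed simplification, not the real assemble module:

```python
def assemble(fragments, delimiter=None):
    # Sketch: newline between fragments that don't end with one, delimiter
    # only between fragments, and always a newline after the delimiter.
    out = []
    for i, frag in enumerate(fragments):
        if i > 0:
            if out and not out[-1].endswith("\n"):
                out.append("\n")  # keep fragments on separate lines
            if delimiter is not None:
                # un-escape sequences like "\n" given as literal text
                d = delimiter.encode().decode("unicode_escape")
                if not d.endswith("\n"):
                    d += "\n"     # so lines don't run together
                out.append(d)
        out.append(frag)
    return "".join(out)
```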
# call assemble via ansible.legacy to allow library/ overrides of the module without collection search
# Does all work assembling the file
# setup args for running modules
# clean assemble specific options
# Copyright 2012, Dag Wieers <dag@wieers.com>
# `that` is the only key requiring special handling; delegate to base handling otherwise
# if `that` is not a string, we don't need to attempt to resolve it as a template before validation (which will also listify it)
# if `that` is entirely a string template, we only want to resolve to the container and avoid templating the container contents
# only use `templated_that` if it is a list
# explicitly not validating types `elements` here to let type rules for conditionals apply
# (c) 2012, Dag Wieers <dag@wieers.com>
# (c) 2015, Ansible Inc.
# HACK: list of unqualified service manager names that are/were built-in, we'll prefix these with `ansible.legacy` to
# avoid collisions with collections search
# run the 'service' module
# get defaults for specific module
# collection prefix known internal modules to avoid collisions from collections search, while still allowing library/ overrides
# use config
# no use, no config, get from facts
# we had no facts, so generate them
# very expensive step, we actually run fact gathering because we don't have facts for this host.
# actually get from facts
# run the 'package' module
# prefix with ansible.legacy to eliminate external collisions while still allowing library/ override
# avoid de-dent all on refactor
# (c) 2017, Dag Wieers <dag@wieers.com>
# CI-required python3 boilerplate
# PY3 compatibility to store exception for use outside of this block
# re-run interpreter discovery if we ran it in the first iteration
# call connection reset between runs if it's there
# Test module output
# If the connection has a transport_test method, use it first
# Use the ping module test to determine end-to-end connectivity
# On Windows platform, absolute paths begin with a (back)slash
# after chopping off a potential drive letter.
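That check can be illustrated as follows; `is_windows_absolute` is an assumed helper name, not the real implementation:

```python
import re

def is_windows_absolute(path):
    # Chop off a potential drive letter, then look for a leading
    # slash or backslash.
    path = re.sub(r"^[A-Za-z]:", "", path)
    return path.startswith(("/", "\\"))
```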
# do not run the command if the line contains creates=filename
# and the filename already exists. This allows idempotence
# of command executions.
# do not run the command if the line contains removes=filename
# and the filename does not exist. This allows idempotence
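The creates/removes idempotence checks boil down to two existence tests; this is a sketch with assumed names:

```python
import os

def should_run(creates=None, removes=None):
    # Skip the command when running it again could not change anything.
    if creates is not None and os.path.exists(creates):
        return False  # 'creates' target already exists
    if removes is not None and not os.path.exists(removes):
        return False  # 'removes' target is already gone
    return True
```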
# The chdir must be absolute, because a relative path would rely on
# remote node behaviour & user config.
# Powershell is the only Windows-path aware shell
# Every other shell is unix-path-aware.
# Split out the script as the first item in raw_params using
# shlex.split() in order to support paths and files with spaces in the name.
# Any arguments passed to the script will be added back later.
# Support executable paths and files with spaces in the name.
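A minimal illustration of that split, assuming POSIX quoting rules:

```python
import shlex

# shlex.split() keeps a quoted script path with spaces as a single token,
# so the first item is the script and the rest are its arguments.
raw_params = "'my script.sh' --flag value"  # illustrative input
script = shlex.split(raw_params)[0]
```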
# check mode is supported if 'creates' or 'removes' are provided
# the task has already been skipped if a change would not occur
# If the script doesn't return changed in the result, it defaults to True,
# but since the script may override 'changed', just skip instead of guessing.
# transfer the file to a remote tmp location
# Convert raw_params to text for the purpose of replacing the script since
# parts and tmp_src are both unicode strings and raw_params will be different
# depending on Python version.
# Once everything is encoded consistently, replace the script path on the remote
# system with the remainder of the raw_params. This preserves quoting in parameters
# that would have been removed by shlex.split().
# set file permissions, more permissive when the copy is done as a different user
# add preparation steps to one ssh roundtrip executing the script
# PowerShell runs the script in a special wrapper to enable things
# like become and environment args
# FUTURE: use a more public method to get the exec payload
# the profile doesn't really matter since the module args dict is empty
# build the necessary exec wrapper command
# FUTURE: this still doesn't let script work on Windows with non-pipelined connections or
# full manual exec of KEEP_REMOTE_FILES
# now we execute script, always assume changed.
# in --check mode, always skip this module execution
# Copyright 2013 Dag Wieers <dag@wieers.com>
# a rare case where key templating is allowed; backward-compatibility for dynamic storage
# just as _facts actions, we don't set changed=true as we are not modifying the actual host
# this should not happen, but JIC we get here
# Copyright 2016 Ansible (RedHat, Inc)
# TODO: document this in non-empty set_stats.py module
# set boolean options, defaults are set above in stats init
# Copyright 2016, Toshio Kuratomi <tkuratomi@ansible.com>
# get task verbosity
# If var name is same as result, try to template it
# force flag to make debug output module always verbose
# propagate any warnings in the task result unless we're skipping the task
# Copyright 2021 Red Hat
# This action can be called from anywhere, so pass in some info about what it is
# validating args for so the error results make some sense
# Get the task var called argument_spec. This will contain the arg spec
# data dict (for the proper entry point for a role).
# the values that were passed in and will be checked against argument_spec
# async directory based on the shell option
# initialize response
# Backwards compat shim for when started/finished were ints,
# mostly to work with ansible.windows.async_status
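That compat shim can be sketched as a simple normalization; the function name and exact behaviour are assumptions:

```python
def normalize_async_result(result):
    # Older modules report started/finished as ints (0/1);
    # normalize them to bools for consistent handling.
    for key in ("started", "finished"):
        if key in result:
            result[key] = bool(result[key])
    return result
```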
# (c) 2013, Dylan Martin <dmartin@seattlecentral.edu>
# "copy" is deprecated in favor of "remote_src".
# They are mutually exclusive.
# We will take the information from copy and store it in
# the remote_src var to use later in this file.
# CCTODO: Fix path for Windows hosts.
# handle diff mode client side
# handle check mode client side
# remove action plugin only keys
# execute the unarchive module now, with the updated args (using ansible.legacy prefix to eliminate
# collisions with collections while still allowing local override)
# (c) 2014, Michael DeHaan <michael.dehaan@gmail.com>
# (c) 2018, Ansible Project
# Backwards compat only.  Just import the global display instead
# When using a prefix we must remove it from the key name before
# checking the expiry and returning it to the caller. Keys that do not
# share the same prefix cannot be fetched from the cache.
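The prefix handling can be sketched like this; `keys_for_prefix` is an assumed helper, not the real cache plugin API:

```python
def keys_for_prefix(cache, prefix):
    # Strip the prefix before returning keys to the caller; entries
    # without the shared prefix are not visible at all.
    plen = len(prefix)
    return [k[plen:] for k in cache if k.startswith(prefix)]
```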
# TODO: only pass on non existing?
# (c) 2017, ansible by Red Hat
# moved actual classes to __init__ kept here for backward compat with 3rd parties
# (c) 2014, Brian Coca, Josh Drake, et al
# prevent unnecessary JSON serialization and key munging
# NOTE not in use anymore
# Gathering facts with run_once would copy the facts from one host to
# the others.
# Unless play is specifically tagged, gathering should 'always' run
# Default options to gather
# short circuit fact gathering if the entire playbook is conditional
# keep flatten (no blocks) list of all tasks from the play
# used for the lockstep mechanism in the linear strategy
# keep list of all handlers, it is copied into each HostState
# at the beginning of IteratingStates.HANDLERS
# the copy happens at each flush in order to restore the original
# list and remove any included handlers that might not be notified
# at the particular flush
# if we're looking to start at a specific task, iterate through
# the tasks for this host until we find the specified task
# finally, reset the host's state to IteratingStates.SETUP
# we have our match, so clear the start_at_task field on the
# play context to flag that we've started at a task (and future
# plays won't try to advance)
# Since we're using the PlayIterator to carry forward failed hosts,
# in the event that a previous host was not in the current inventory
# we create a stub state for it now
# try and find the next task, given the current state.
# try to get the current block from the list of blocks, and
# if we run past the end of the list we know we're done with
# this block
# First, we check to see if we were pending setup. If not, this is
# the first trip through IteratingStates.SETUP, so we set the pending_setup
# flag and try to determine if we do in fact want to gather facts for
# the specified host.
# Gather facts if the default is 'smart' and we have not yet
# done it for this host; or if 'explicit' and the play sets
# gather_facts to True; or if 'implicit' and the play does
# NOT explicitly set gather_facts to False.
# The setup block is always self._blocks[0], as we inject it
# during the play compilation in __init__ above.
# This is the second trip through IteratingStates.SETUP, so we clear
# the flag and move onto the next block in the list while setting
# the run state to IteratingStates.TASKS
# clear the pending setup flag, since we're past that and it didn't fail
# First, we check for a child task state that is not failed, and if we
# have one recurse into it for the next task. If we're done with the child
# state, we clear it and drop back to getting the next task from the list.
# failed child state, so clear it and move into the rescue portion
# get the next task recursively
# we're done with the child state, so clear it and continue
# back to the top of the loop to get the next task
# First here, we check to see if we've failed anywhere down the chain
# of states we have, and if so we move onto the rescue portion. Otherwise,
# we check to see if we've moved past the end of the list of tasks. If so,
# we move into the always portion of the block, otherwise we get the next
# task from the list.
# if the current task is actually a child block, create a child
# state for us to recurse into on the next pass
# since we've created the child state, clear the task
# so we can pick up the child state on the next pass
# The process here is identical to IteratingStates.TASKS, except instead
# we move into the always portion of the block.
# And again, the process here is identical to IteratingStates.TASKS, except
# instead we either move onto the next block in the list, or we set the
# run state to IteratingStates.COMPLETE in the event of any errors, or when we
# have hit the end of the list of blocks.
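The transitions described above can be sketched as a tiny state function; this is a heavily simplified assumption, not the real PlayIterator logic:

```python
def next_run_state(state, failed):
    # Running past the task list enters the always portion (or rescue on
    # failure); past the always portion we either advance to the next
    # block's tasks or finish on errors/end of blocks.
    if state == "tasks":
        return "rescue" if failed else "always"
    if state == "rescue":
        return "always"
    if state == "always":
        return "complete" if failed else "tasks"  # next block restarts tasks
    return state
```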
# reset handlers for HostState since handlers from include_tasks
# might be there from previous flush
# if something above set the task, break out of the loop now
# skip implicit flush_handlers if there are no handlers notified
# the state store in the `state` variable could be a nested state,
# notifications are always stored in the top level state, get it here
# in case handlers notify other handlers, the notifications are not
# saved in `handler_notifications` and handlers are notified directly
# to prevent duplicate handler runs, so check whether any handler
# is notified
# we are failing `meta: flush_handlers`, so just reset the state to whatever
# it was before and let `_set_failed_state` figure out the next state
# if we've failed at all, or if the task list is empty, just return the current state
# preserve order
# This is a special case for when ending a host occurs in rescue.
# By definition the meta task responsible for ending the host
# is the last task, so we need to clear the fail state to mark
# the host as rescued.
# The reason we need to do that is that this operation is normally
# done when PlayIterator transitions from rescue to always, which is
# the only point at which we can say that the rescue didn't fail; but
# when ending a host via a meta task we never reach that transition.
# save the error raised here for use later
# create the overall result item
# loop through the item results and set the global changed/failed/skipped result flags based on any item.
# ensure no_log processing recognizes at least one item needs to be censored
# FIXME: normalize `failed` to a bool, warn if the action/module used non-bool
# ensure to accumulate these
# make sure changed is set in the result, if it's not present
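That aggregation can be sketched as follows; the function name and exact rules are illustrative assumptions:

```python
def aggregate_loop_results(item_results):
    # Derive the overall changed/failed/skipped flags from the
    # individual item results.
    overall = {"results": item_results, "changed": False}
    if any(r.get("failed") for r in item_results):
        overall["failed"] = True
    if any(r.get("changed") for r in item_results):
        overall["changed"] = True
    if item_results and all(r.get("skipped") for r in item_results):
        overall["skipped"] = True
    return overall
```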
# get search path for this task to pass to lookup plugins
# Smuggle a special wrapped lookup invocation in as a local variable for its exclusive use when being evaluated as `with_(lookup)`.
# This value will not be visible to other users of this templar or its `available_variables`.
# ensure we always have a label
# Update template vars to reflect current loop iteration
# pause between loop iterations
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
# NB: this swap-a-dee-doo confuses some type checkers about the type of tmp_task/self._task
# Ensure per loop iteration results are registered in case `_execute()`
# returns early (when conditional, failure, ...).
# This is needed in case the registered variable is used in the loop label template.
# now update the result with the item info, and append the result
# to the list of results
# gets templated here unlike rest of loop_control fields, depends on loop_var above
# if plugin is loaded, get resolved name, otherwise leave original task connection
# break loop if break_when conditions are met
# delete loop vars before exiting loop
# done with loop var, remove for next iteration
# clear 'connection related' plugin variables for next iteration
# NOTE: run_once cannot contain loop vars because it's templated earlier also
# This is saving the post-validated field from the last loop so the strategy can use the templated value post task execution
# At the point this is executed it is safe to mutate self._task,
# since `self._task` is either a copy referred to by `tmp_task` in `_run_loop`
# or just a singular non-looped task
# always override, since a templated result could be an omit (-> None)
# DTFIX-FUTURE: improve error handling to prioritize the earliest exception, turning the remaining ones into warnings
# TaskTimeoutError is BaseException
# The warnings/deprecations in the result have already been captured in the DeferredWarningContext by _apply_task_result_compat.
# The captured warnings/deprecations are a superset of the ones from the result, and may have been converted from a dict to a dataclass.
# These are then used to supersede the entries in the result.
# a certain subset of variables exist.
# TODO: remove play_context as this does not take delegation nor loops correctly into account,
# the task itself should hold the correct values for connection/shell/become/terminal plugin options to finalize.
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
# loop error takes precedence
# Display the error from the conditional as well to prevent
# losing information useful for debugging.
# pylint: disable=raising-bad-type
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
# if we ran into an error while setting up the PlayContext, raise it now, unless is known issue with delegation
# and undefined vars (correct values are in cvars later on and connection plugins, if still error, blows up there)
# DTFIX-FUTURE: this should probably be declaratively handled in post_validate (or better, get rid of play_context)
# parser error, might be caused by undef too
# DTFIX-FUTURE: should not be possible to hit this now (all are AnsibleFieldAttributeError)?
# set templar to use temp variables until loop is evaluated
# Now we do final validation on the task, which sets all fields to their final values.
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
# if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
# free tempvars up, not used anymore, cvars and vars_copy should be mainly used after this point
# updating the original 'variables' at the end
# setup cvars copy, used for all connection related templating
# use vars from delegated host (which already include task vars) instead of original host
# just use normal host vars
# use magic var if it exists, if not, let task inheritance do its thing.
# get the connection and the handler for this execution
# pc compare, left here for old plugins, but should be irrelevant for those
# using get_option, since they are cleared each iteration.
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
# make a copy of the job vars here, as we update them here and later,
# but don't want to pollute original
# update with connection info (i.e. ansible_host/ansible_user)
# TODO: eventually remove as pc is taken out of the resolution path
# feed back into pc to ensure plugins not using get_option can get correct value
# TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules, right now rely on remote_is_local connection
# special handling for python interpreter for network_os, default to ansible python unless overridden
# this also avoids 'python discovery'
# get handler
# includes the default actual run + retries set by user/default
# the default is not set in FA because we need to differentiate "unset" value
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# set the failed property if it was missing.
# rc is here for backwards compatibility and modules that use it instead of 'failed'
# Make attempts and retries available early to allow their use in changed/failed_when
# set the changed property if it was missing.
# re-update the local copy of vars with the registered value, if specified,
# This gives changed/failed_when access to additional recently modified
# attributes of result
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
# DTFIX-FUTURE: error normalization has not yet occurred; this means that the expressions used for until/failed_when/changed_when/break_when
# no conditional check, or it failed, so sleep for the specified time
# we ran out of attempts, so mark the result as failed
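The retry loop described here can be sketched like this; `run_until` and its signature are assumptions (retries is assumed >= 1):

```python
import time

def run_until(run, until, retries, delay, sleep=time.sleep):
    # Run the task, evaluate the until condition, sleep between attempts,
    # and mark the result failed when attempts are exhausted.
    for attempt in range(1, retries + 1):
        result = run()
        result["attempts"] = attempt
        if until(result):
            return result
        if attempt < retries:
            sleep(delay)       # condition not met yet: wait and retry
    result["failed"] = True    # out of attempts
    return result
```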
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
# DTFIX-FUTURE: why is this happening twice, esp since we're post-fork and these will be discarded?
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
# also now add connection vars results when delegating
# note: here for callbacks that rely on this info to display delegation
# and return
# translate non-WarningMessageDetail messages
# translate non-DeprecationSummary message dicts
# deprecated: description='enable the deprecation message for collection_name' core_version='2.23'
# CAUTION: This deprecation cannot be enabled until the replacement (deprecator) has been documented, and the schema finalized.
# self.deprecated('The `collection_name` key in the `deprecations` dictionary is deprecated.', version='2.27')
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparsable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
# Little hack to raise the exception if we've exhausted the timeout period
# If the async task finished, automatically cleanup the temporary
# status file left behind.
# TODO: play context has logic to update the connection for 'smart'
# (default value, will choose between ssh and paramiko) and 'persistent'
# (really paramiko), eventually this should move to task object itself.
# No longer used, kept for backwards compat for plugins that explicitly accept this as an arg
# Also backwards compat call for those still using play_context
# load become plugin if needed
# If become is not enabled on the task it needs to be removed from the connection plugin
# https://github.com/ansible/ansible/issues/78425
# Older connection plugin that does not support set_become_plugin
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
# Some plugins are assigned to private attrs, ``become`` is not
# network_cli's "real" connection plugin is not named connection
# to avoid the confusion of having connection.connection
# TODO move to task method?
# keep list of variable names possibly consumed
# grab list of usable vars for this plugin
# The task_keys 'timeout' attr is the task's timeout, not the connection timeout.
# The connection timeout is threaded through the play_context for now.
# The connection password is threaded through the play_context for
# now. This is something we ultimately want to avoid, but the first
# step is to get connection plugins pulling the password through the
# config system instead of directly accessing play_context.
# Prevent task retries from overriding connection retries
# set options with 'templated vars' specific to this plugin and dependent ones
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
# The become pass is already in the play_context if given on
# the CLI (-K). Make the plugin aware of it in this case.
# FOR BACKWARDS COMPAT:
# some plugins don't support all base flags
# deals with networking sub_plugins (network_cli/httpapi/netconf)
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
# Check if the module has specified an action handler
# let action plugin override module, fallback to 'normal' action plugin otherwise
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
# until then, we don't want the task's collection list to be consulted; use the builtin
# networking/persistent connections handling
# check handler in case we don't need to do all the work to set up a persistent connection
# for persistent connections, initialize socket path and start connection manager
# TODO: set self._connection to dummy/noop connection, using local for now
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, the persistent connection helper won't always see the same plugins the controller
# can.
# Note: We run this here to cache whether the default ansible ssh
# executable supports control persist.  Sometime in the future we may
# need to enhance this to check that ansible_ssh_executable specified
# in inventory is also cached.  We can't do this caching at the point
# where it is used (in task_executor) because that is post-fork and
# therefore would be discarded after every task.
# preload become/connection/shell to set config defs cached
# deal with FQCN
# not fqcn, but might still be collection playbook
# FIXME: move out of inventory self._inventory.set_playbook_basedir(os.path.realpath(os.path.dirname(playbook_path)))
# we are doing a listing
# make sure the tqm has callbacks loaded
# clear any filters which may have been applied to the inventory
# Allow variables to be used in vars_prompt fields.
# FIXME: this should be a play 'sub object' like loop_control
# we are either in --list-<option> or syntax check
# Post validate so any play level variables are templated
# we are just doing a listing
# we are actually running plays
# restrict the inventory to the hosts in the serialized batch
# and run it...
# break the play if the result equals the special return code
# check the number of failures here and break out if the entire batch failed
# update the previous counts so they don't accumulate incorrectly
# over multiple serial batches
# save the unreachable hosts from this batch
# per play
# per playbook
# send the stats callback for this playbook
# if the last result wasn't zero, break out of the playbook file name loop
# make sure we have a unique list of hosts
# the serial value can be listed as a scalar or a list of
# scalars, so we make sure it's a list here
# get the serial value from current item in the list
# if the serial count was not specified or is invalid, default to
# a list of all hosts, otherwise grab a chunk of the hosts equal
# to the current serial item size
# increment the current batch list item number, and if we've hit
# the end keep using the last element until we've consumed all of
# the hosts in the inventory
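The batching rules above can be sketched as follows; `get_serial_batches` is an assumed name and simplification:

```python
def get_serial_batches(hosts, serial_items):
    # serial may be given as a scalar or a list of scalars; the caller
    # is assumed to have normalized it to a list here. An empty list
    # means one batch of all hosts; the last list element is reused
    # until every host is consumed.
    if not serial_items:
        return [hosts[:]]
    batches, i, pos = [], 0, 0
    while pos < len(hosts):
        size = serial_items[min(i, len(serial_items) - 1)]
        batches.append(hosts[pos:pos + size])
        pos += size
        i += 1  # past the end, min() keeps using the last element
    return batches
```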
# user defined stats, which can be per host or global
# This should never happen, but let's be safe
# mismatching types
# let overloaded + take care of other types
# Copyright: (c) 2018 Ansible Project
# fallback value
# not all command -v impls accept a list of commands, so we have to call it once per python
# FUTURE: in most cases we probably don't want to use become, but maybe sometimes we do?
# this is lame, but returning None or throwing an exception is uglier
# the current ssh plugin implementation always has stderr, making coverage of the false case difficult
# (c) 2013-2014, Michael DeHaan <michael.dehaan@gmail.com>
# module_common is relative to module_utils, so fix the path
# ******************************************************************************
# Strip comments and blank lines from the wrapper
# Keep comments when KEEP_REMOTE_FILES is set.  That way users will see
# the comments with some nice usage instructions.
# Otherwise, strip comments for smaller over the wire size.
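The comment-stripping behavior described above can be sketched like this. The function name and flag are hypothetical stand-ins for the real wrapper-minification code, which operates on the generated wrapper source:

```python
def strip_comments(source, keep_remote_files=False):
    """Strip comments and blank lines from wrapper source (sketch).

    When keep_remote_files is set, comments are kept so that users
    inspecting the remote payload still see the usage instructions;
    otherwise they are dropped for a smaller over-the-wire size.
    """
    if keep_remote_files:
        return source
    lines = []
    for line in source.splitlines():
        stripped = line.strip()
        # drop blank lines and whole-line comments
        if not stripped or stripped.startswith('#'):
            continue
        lines.append(line)
    return '\n'.join(lines)
```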
# read during startup to prevent individual workers from doing so
# dirname(dirname(dirname(site-packages/ansible/executor/module_common.py))) == site-packages

# Do this instead of getting site-packages from distutils.sysconfig so we work when we
# haven't been installed
# Detect new-style Python modules by looking for required imports:
# import ansible_collections.[my_ns.my_col.plugins.module_utils.my_module_util]
# from ansible_collections.[my_ns.my_col.plugins.module_utils import my_module_util]
# import ansible.module_utils[.basic]
# from ansible.module_utils[ import basic]
# from ansible.module_utils[.basic import AnsibleModule]
# from ..module_utils[ import basic]
# from ..module_utils[.basic import AnsibleModule]
# Relative imports
# Collection absolute imports:
# Core absolute imports
# DTFIX-FUTURE: add support for ignoring imports with a "controller only" comment, this will allow replacing import_controller_module with standard imports
# squirrel this away so we can compare node parents to it
# if the import's parent is the root document, it's a required import, otherwise it's optional
# FIXME: These should all get skipped:
# from ansible.executor import module_common
# from ...executor import module_common
# from ... import executor (Currently it gives a non-helpful error)
# if we're in a package init, we have to add one to the node level (and make it none if 0 to preserve the right slicing behavior)
# relative import: from .module import x
# relative import: from . import x
# fall back to an absolute import
# absolute import: from module import x
# Special case: six needs dedicated handling because of its
# import logic
# from ansible.module_utils.MODULE1[.MODULEn] import IDENTIFIER [as asname]
# from ansible.module_utils.MODULE1[.MODULEn] import MODULEn+1 [as asname]
# from ansible.module_utils.MODULE1[.MODULEn] import MODULEn+1 [,IDENTIFIER] [as asname]
# from ansible.module_utils import MODULE1 [,MODULEn] [as asname]
# from ansible_collections.ns.coll.plugins.module_utils import MODULE [as aname] [,MODULE2] [as aname]
# from ansible_collections.ns.coll.plugins.module_utils.MODULE import IDENTIFIER [as aname]
# FIXME: Unhandled cornercase (needs to be ignored):
# from ansible_collections.ns.coll.plugins.[!module_utils].[FOO].plugins.module_utils import IDENTIFIER
# Not from module_utils so ignore.  for instance:
# from ansible_collections.ns.coll.plugins.lookup import IDENTIFIER
# FUTURE: add logical equivalence for python3 in the case of py3-only modules
# name for interpreter var
# key for config
# looking for python, rest rely on matching vars
# skip detection for network os execution, use playbook supplied one if possible
# a config def exists for this interpreter type; consult config for the value
# handle interpreter discovery if requested or empty interpreter was provided
# interpreter discovery is desired, but has not been run for this host
# for non python we consult vars for a possible direct override
# nothing matched(None) or in case someone configures empty string or empty interpreter
# set shebang
# a child package redirection could cause intermediate package levels to be missing, eg
# from ansible.module_utils.x.y.z import foo; if x.y.z.foo is redirected, we may not have packages on disk for
# the intermediate packages x.y.z, so we'll need to supply empty packages for those
# for ambiguous imports, we should only test for things more than one level below module_utils
# this lets us detect erroneous imports and redirections earlier
# only allow redirects from below module_utils- if above that, bail out (eg, parent package names)
# collection not found or some other error related to collection load
# FIXME: add deprecation warning support
# treat all redirects as packages
# expand FQCN redirects
# assume it's an FQCN, expand it
# ns
# coll
# sub-module_utils remainder
# subclasses should override to return the name parts after module_utils
# return the remainder parts as a package string
# didn't find what we were looking for- last chance for packages whose parents were redirected
# make fake packages
# nope, just bail
# FIXME: add __repr__ impl
# FIXME: handle the ansible.module_utils.six._six case with a redirect or an internal _six attr on six itself?
# six creates its submodules at runtime; convert all these to just 'ansible.module_utils.six'
# legacy module utils always look in ansible.builtin for redirects
# let local stuff override redirects for legacy
# eg, foo.bar for ansible.module_utils.foo.bar
# no redirection; try to find the module
# direct child of module_utils, just search the top-level dirs we were given
# a nested submodule of module_utils, extend the paths given with the intermediate package names
# extend the MU paths with the relative bit
# find_spec needs the full module name
# synthesize empty inits for packages down through module_utils- we don't want to allow those to be shipped over, but the
# package hierarchy needs to exist
# NB: we can't use pkgutil.get_data safely here, since we don't want to import/execute package/module code on
# the controller while analyzing/assembling the module, so we'll have to manually import the collection's
# Python package to locate it (import root collection, reassemble resource path beneath, fetch source)
# look for package_dir first, then module
# TODO: we might want to synthesize fake inits for py3-style packages, for now they're required beneath module_utils
# empty string is OK
# TODO: this feels brittle and funky; we should be able to more definitively assure the source path
# DTFIX-FUTURE: not sure if this case is even reachable
# eg, foo.bar for ansible_collections.ns.coll.plugins.module_utils.foo.bar
# experimental module metadata; off by default
# py_module_cache maps python module names to a tuple of the code in the module
# and the pathname to the module.
# Here we pre-load it with modules which we create without bothering to
# read from actual files (In some cases, these need to differ from what ansible
# ships because they're namespace packages in the module)
# FIXME: do we actually want ns pkg behavior for these? Seems like they should just be forced to emptyish pkg stubs
# the format of this set is a tuple of the module name and whether the import is ambiguous as a module name
# or an attribute of a module (e.g. from x.y import z <-- is z a module or an attribute of x.y?)
# include module_utils that are always required
# we'll be adding new modules inline as we discover them, so just keep going until we've processed them all
# not strictly necessary, but nice to process things in predictable and repeatable order
# this is normal; we'll often see the same module imported many times, but we only need to process it once
# FIXME: dot-joined result
# Could not find the module.  Construct a helpful error message.
# this was a best-effort optional import that we couldn't find, oh well, move along...
# FIXME: use dot-joined candidate names
# check the cache one more time with the module we actually found, since the name could be different than the input
# eg, imported name vs module
# we've processed this item, add it to the output list
# ensure we process all ancestor package inits
# we're accumulating this across iterations
# extra machinations to get a hashable type (list is not)
# compile the source, process all relevant imported modules
# Is this a core module?
# Is this a module in a collection?
# We can tell the FQN for core modules and collection modules
# FQNs must be valid as python identifiers.  This sanity check has failed.
# we could check other things as well
# Currently we do not handle modules in roles so we can end up here for that reason
# Write the module
# Write the __init__.py's necessary to get there
# The ansible namespace is setup as part of the module_utils setup...
# ... but ansible_collections and other toplevels are not
# If a collections module uses module_utils from a collection then most packages will have already been added by recursive_finder.
# Note: We don't want to include more than one ansible module in a payload at this time
# so no need to fill the __init__.py with namespace code
# FIXME: switch this to use a locked down pickle config or don't use pickle- easy to mess up and reach objects that shouldn't be pickled
# module_style is something important to calling code (ActionBase).  It
# determines how arguments are formatted (json vs k=v) and whether
# a separate arguments file needs to be sent over the wire.
# module_substyle is extra information that's useful internally.  It tells
# us what we have to look to substitute in the module files and whether
# we're using module replacer or ansiballz to format the module itself.
# Do REPLACER before from ansible.module_utils because we need to make sure
# we substitute "from ansible.module_utils basic" for REPLACER
# Neither old-style, non_native_want_json nor binary modules should be modified
# except for the shebang line (Done by modify_module)
# Modules in roles currently are not found by the fqn heuristic so we
# fallback to this.  This means that relative imports inside a module from
# a role may fail.  Absolute imports should be used for future-proofness.
# People should start writing collections instead of modules in roles so we
# may never fix this
# FIXME: add integration test to validate that builtins and legacy modules with the same name are tracked separately by the caching mechanism
# FIXME: surrogate FQN should be unique per source path- role-packaged modules with name collisions can still be aliased
# Optimization -- don't lock if the module has already been cached
# Check that no other process has created this while we were
# waiting for the lock
# Create the module zip data
# walk the module imports, looking for module_utils to send- they'll be added to the zipfile
# Write the assembled module to a temp file (write to temp
# so that no one looking for the file reads a partially
# written file)
# Another process wrote the file while we were waiting for
# the write lock.  Go ahead and read the data from disk
# instead of re-creating it.
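The caching pattern spelled out above (skip the lock when the artifact is already cached, re-check under the lock, then publish via a temp file plus rename so no reader sees a partial file) can be sketched as a generic helper. The names here are hypothetical; this is the shape of the logic, not Ansible's actual implementation:

```python
import os
import tempfile
import threading

_cache_lock = threading.Lock()

def cached_build(cache_path, build):
    """Build an artifact at most once across concurrent callers (sketch)."""
    # Optimization -- don't lock if the artifact has already been cached
    if os.path.exists(cache_path):
        with open(cache_path, 'rb') as f:
            return f.read()
    with _cache_lock:
        # Check that no other caller created this while we were
        # waiting for the lock
        if os.path.exists(cache_path):
            with open(cache_path, 'rb') as f:
                return f.read()
        data = build()
        # write to a temp file so that no one looking for the cache
        # file reads a partially written file, then rename atomically
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(cache_path) or '.')
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
        os.rename(tmp, cache_path)
    return data
```

On POSIX filesystems `os.rename` within the same directory is atomic, which is what makes the publish step safe.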
# FUTURE: the module cache entry should be invalidated if we got this value from a host-dependent source
# DTFIX-FUTURE: support serialization profiles for PowerShell modules
# Powershell/winrm don't actually make use of shebang so we can
# safely set this here.  If we let the fallback code handle this
# it can fail in the presence of the UTF8 BOM commonly added by
# Windows text editors
# create the common exec wrapper payload and set that as the module_data
# these strings could be included in a third-party module but
# officially they were included in the 'basic' snippet for new-style
# python modules (which has been replaced with something else in
# ansiballz) If we remove them from jsonargs-style module replacer
# then we can remove them everywhere.
# The main event -- substitute the JSON args string into the module
# shlex.split needs text on Python 3
# convert args to text
# read in the module source
# No interpreter/shebang, assume a binary module?
# update shebang
# Get the list of groups that contain this action
# Merge latest defaults into dict, since they are a list of dicts
# handle specific action defaults
# stuff callbacks need
# for debug and other actions, to always expand data (pretty jsonification)
# to know actual 'item' variable
# jic we didn't clean up well enough, DON'T LOG
# controls display of ansible_facts, gathering would be very noisy with -v otherwise
# FIXME: this should be immutable, but strategy result processing mutates it in some corner cases
# deprecated: description='Deprecate `_host` in favor of `host`' core_version='2.23'
# deprecated: description='Deprecate `_task` in favor of `task`' core_version='2.23'
# deprecated: description='Deprecate `_task_fields` in favor of `task`' core_version='2.23'
# Loop tasks are only considered skipped if all items were skipped.
# some squashed results (eg, dnf) are not dicts and can't be skipped individually
# regular tasks and squashed non-dict results
# statuses are already reflected on the event type
# debug is verbose by default to display vars, no need to add invocation
# preserve subset for later
# DTFIX-FUTURE: is checking no_log here redundant now that we use _ansible_no_log everywhere?
# maintain shape for loop results so callback behavior recognizes a loop was performed
# remove almost ALL internal keys, keep ones relevant to callback
# keep subset
# deprecated: description='Deprecate `_result` in favor of `result`' core_version='2.23'
# never leaves PlaybookExecutor.run
# never leaves PlaybookExecutor.run, intentionally includes the bit value for 8
# make sure any module paths (if specified) are added to the module_loader
# a special flag to help us exit cleanly
# dictionaries to keep track of failed/unreachable hosts
# Done in tqm, and not display, because this is only needed for commands that execute tasks
# A temporary file (opened pre-fork) used by connection
# plugins for inter-process locking.
# get all configured loadable callbacks (adjacent, builtin)
# add enabled callbacks that refer to collections, which might not appear in normal listing
# load all, as collection ones might be using short/redirected names and not a fqcn
# avoids incorrect and dupes possible due to collections
# for each callback in the list see if we should add it to 'active callbacks' used in the play
# try to get collection world name first
# store the name the plugin was loaded as, as that's what we'll need to compare to the configured callback list later
# fallback to 'old loader name'
# we only allow one callback of type 'stdout' to be loaded,
# TODO: remove special case for tree, which is an adhoc cli option --tree
# only run if not adhoc, or adhoc was specifically configured to run + check enabled list
# 2.x plugins shipped with ansible should require enabling, older or non shipped should load automatically
# guard against a bad plugin not returning an object; only needed because we do class_only load and bypass loader checks,
# really a bug in the plugin itself, which we ignore since callback errors are not supposed to be fatal.
# build the iterator
# adjust the number of workers to the configured forks or the size of the batch, whichever is lower
# load the specified strategy (or the default linear one)
# Because the TQM may survive multiple play runs, we start by marking
# any hosts as failed in the iterator here which may have been marked
# as failed in previous runs. Then we clear the internal list of failed
# hosts so we know what failed this round.
# during initialization, the PlayContext will clear the start_at_task
# field to signal that a matching task was found, so check that here
# and remember it so we don't try to skip tasks on future plays
# and run the play using the strategy and cleanup on way out
# now re-save the hosts that failed from the iterator to our internal list
# We no longer flush on every write in ``Display.display``
# just ensure we've flushed during cleanup
# [<WorkerProcess(WorkerProcess-2, stopped[SIGKILL])>,
# <WorkerProcess(WorkerProcess-2, stopped[SIGTERM])>
# We always send events to stdout callback first, rest should follow config order
# a plugin that set self.disabled to True will not be called
# see osx_say.py example for such a plugin
# a plugin can opt in to implicit tasks (such as meta). It does this
# by declaring self.wants_implicit_tasks = True.
# send clean copies
# FIXME: add play/task cleaners
# this state hack requires that no callback ever accepts > 1 TaskResult object
# Using an `assert` in integration tests is useful.
# Production code should never use `assert` or raise `AssertionError`.
# clear temporary instance storage hack
# (c) 2018 Ansible Project
# This is also used by validate-modules to get a module's required utils in base and a collection.
# Reference C# module_util in another C# util, this must always be the fully qualified name.
# 'using ansible_collections.{namespace}.{collection}.plugins.module_utils.{name}'
# Reference C# module_util in a PowerShell module
# '#AnsibleRequires -CSharpUtil Ansible.{name}'
# '#AnsibleRequires -CSharpUtil ansible_collections.{namespace}.{collection}.plugins.module_utils.{name}'
# '#AnsibleRequires -CSharpUtil ..module_utils.{name}'
# Can have '-Optional' at the end to denote the util is optional
# Original way of referencing a builtin module_util
# '#Requires -Module Ansible.ModuleUtils.{name}'
# New way of referencing a builtin and collection module_util
# '#AnsibleRequires -PowerShell Ansible.ModuleUtils.{name}'
# '#AnsibleRequires -PowerShell ansible_collections.{namespace}.{collection}.plugins.module_utils.{name}'
# '#AnsibleRequires -PowerShell ..module_utils.{name}'
# scans lib/ansible/executor/powershell for scripts used in the module
# exec side. It also scans these scripts for any dependencies
# PS module contains '#Requires -Module Ansible.ModuleUtils.*'
# PS module contains '#AnsibleRequires -Powershell Ansible.*' (or collections module_utils ref)
# PS module contains '#AnsibleRequires -CSharpUtil Ansible.*' (or collections module_utils ref)
# CS module contains 'using Ansible.*;' or 'using ansible_collections.ns.coll.plugins.module_utils.*;'
# tolerate windows line endings by stripping any remaining
# newline chars
# once become is set, no need to keep on checking recursively
# Builtin util, or the old role module_utils reference.
# Collection util, load the package data based on the util import.
# Get the path of the util which is required for coverage collection.
# This should never happen with a collection but we are just being defensive about it.
# This is important to be set before scan_module is called to avoid
# recursive dependencies.
# It is important this is set before calling scan_module to ensure
# recursive dependencies don't result in an infinite loop.
# Any C# code requires the AddType.psm1 module to load.
# PowerShell cannot cast a string of "1" to Version, it must have at
# least the major.minor for it to be valid so we append 0
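The normalization described above, plus the "determine which is the latest version" step that follows, can be sketched in Python terms. These helper names are hypothetical; the real logic runs on the PowerShell side against the [Version] type:

```python
def normalize_ps_version(version):
    """Pad a bare major version so it is a valid Version (sketch).

    PowerShell cannot cast a string of "1" to Version; it needs at
    least major.minor to be valid, so "1" becomes "1.0".
    """
    if '.' not in version:
        version += '.0'
    return version

def latest_version(versions):
    """Pick the highest version string after normalization (sketch)."""
    return max(
        (normalize_ps_version(v) for v in versions),
        key=lambda v: tuple(int(p) for p in v.split('.')),
    )
```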
# determine which is the latest version and set that
# takes a task queue manager as the sole param:
# NOTE: this works due to fork, if switching to threads this should change to per thread storage of temp files
# clear var to ensure we only delete files for this child
# FUTURE: this lock can be removed once a more generalized pre-fork thread pause is in place
# Since setsid is called later, if the worker is terminated
# it won't terminate the new process group
# register a handler to propagate the signal
# If the cause of the fault is OSError being generated by stdio,
# attempting to log a debug message may trigger another OSError.
# Try printing once then give up.
# Create new fds for stdin/stdout/stderr, but also capture python uses of sys.stdout/stderr
# Close stdin so we don't get hanging workers
# We use sys.stdin.close() for places where sys.stdin is used,
# to give better errors, and to prevent fd 0 reuse
# Set the queue on Display so calls to Display.display are proxied over the queue
# ignore the real task result and don't allow result object contribution from the exception (in case the pickling error was related)
# The failure pickling may have been caused by the task attrs, omit for safety
# used by TQM, must be bit-flag safe
# TQM-sourced, must be bit-flag safe
# FIXME: CLI-sourced, conflicts with HOST_UNREACHABLE
# obsolete, no longer used
# DTFIX-FUTURE: these fallback cases mask incorrect use of AnsibleError.message, what should we do?
# deprecated: description='deprecate support for orig_exc, callers should use `raise ... from` only' core_version='2.23'
# deprecated: description='remove support for orig_exc' core_version='2.27'
# FIXME: This exception is used for many non-CLI related errors.
# deprecated: description='add deprecation warnings for these aliases' core_version='2.23'
# avoid an incorrect error message when `obj` is a type
# this is left as a module import to facilitate easier unit test patching
# Converts 'bad' characters in a string to underscores (or provided replacer) so they can be used as Ansible hosts or groups
# when deserializing we might not have name yet
# __slots__ = [ 'name', 'hosts', 'vars', 'child_groups', 'parent_groups', 'depth', '_hosts_cache' ]
# don't add if it's already there
# prepare list of group's new ancestors this edge creates
# update the depth of the child
# update the depth of the grandchildren
# now add self to child's parent_groups list, but only if there
# isn't already a group with the same name
# self.depth could change over loop
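The ancestor/depth bookkeeping described above can be sketched with a minimal group class. This is a simplified illustration (no priority handling, no name-collision checks), not the real inventory Group implementation:

```python
class Group:
    """Minimal inventory group sketch for depth propagation."""

    def __init__(self, name):
        self.name = name
        self.depth = 0
        self.parent_groups = []
        self.child_groups = []

    def add_child_group(self, child):
        # don't add if it's already there
        if child in self.child_groups:
            return
        self.child_groups.append(child)
        child.parent_groups.append(self)
        # update the depth of the child (and its grandchildren)
        child._update_depth(self.depth + 1)

    def _update_depth(self, depth):
        # a group's depth only grows; propagate any increase down
        # through the grandchildren recursively
        if depth > self.depth:
            self.depth = depth
            for grandchild in self.child_groups:
                grandchild._update_depth(depth + 1)
```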
# FIXME: warn about invalid priority
# FIXME: this goes away if we apply patterns incrementally or by groups
# if no regular pattern was given, hence only exclude and/or intersection
# make that magically work
# when applying the host selectors, run those without the "&" or "!"
# first, then the &s, then the !s.
# flatten the results
# If it's got commas in it, we'll treat it as a straightforward
# comma-separated list of patterns.
# If it doesn't, it could still be a single pattern. This accounts for
# non-separator uses of colons: IPv6 addresses and [x:y] host ranges.
# The only other case we accept is a ':'-separated list of patterns.
# This mishandles IPv6 addresses, and is retained only for backwards compatibility.
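The splitting rules above (commas first; colons only as a fallback, since they also appear in IPv6 addresses and [x:y] ranges) can be sketched as follows. This is a hypothetical, simplified version of the pattern splitter, with a cruder IPv6 heuristic than the real one:

```python
import re

def split_host_pattern(pattern):
    """Split a host pattern string into individual patterns (sketch).

    Commas are treated as a straightforward separator. Colons are only
    treated as separators when the string does not look like an [x:y]
    range or an IPv6 address (two or more colons, in this sketch).
    """
    if ',' in pattern:
        return [p.strip() for p in pattern.split(',') if p.strip()]
    if re.search(r'\[.*:.*\]', pattern) or pattern.count(':') >= 2:
        return [pattern]
    return [p.strip() for p in pattern.split(':') if p.strip()]
```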
# base objects
# a list of host(names) to contain current inquiries to
# resolved full patterns
# resolved individual patterns
# the inventory dirs, files, script paths or lists of hosts
# get to work!
# allow for multiple inventory parsing
# do post processing
# use binary for path functions
# process directories as a collection of inventories
# Skip hidden files and stuff we explicitly ignore
# recursively deal with directory entries
# left with strings or files, let plugins figure it out
# set so new hosts can use for inventory_file/dir vars
# try source with each plugin
# initialize and figure out if plugin wants to attempt parsing this file
# have this tag ready to apply to errors or output; str-ify source since it is often tagged by the CLI
# FUTURE: now that we have a wrapper around inventory, we can have it use ChainMaps to preview the in-progress inventory,
# some plugins might not implement caching
# DTFIX-FUTURE: fix this error handling to correctly deal with messaging
# omit line number to prevent contextual display of script or possibly sensitive info
# only warn/error if NOT using the default or using it and the file is present
# TODO: handle 'non file' inventory and detect vs hardcode default
# only if no plugin processed files should we show errors.
# `obj` should always be set
# final error/warning on inventory source failure
# clear up, jic
# compile patterns
# apply patterns
# Check if pattern already computed
# This is only used as a hash key in the self._hosts_patterns_cache dict
# a tuple is faster than stringifying
# mainly useful for hostvars[host] access
# exclude hosts not in a subset, if defined
# exclude hosts mentioned in any restriction (ex: failed hosts)
# sort hosts list if needed (should only happen when called from strategy)
# avoid resolving a pattern that is a plain host
# Do not parse regexes for enumeration info
# We want a pattern followed by an integer or range subscript.
# (We can't be more restrictive about the expression because the
# fnmatch semantics permit [\[:\]] to occur.)
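The "pattern followed by an integer or range subscript" parsing described above can be sketched with one deliberately permissive regex (permissive because fnmatch patterns may themselves contain brackets). The names and exact subscript grammar here are simplified assumptions, not the real implementation:

```python
import re

# a base pattern followed by [i], [i:j], [i-j] or [i:] (sketch);
# the base is non-greedy so bracketed fnmatch classes backtrack correctly
SUBSCRIPT_RE = re.compile(
    r'''^
        (.+?)                                 # base pattern (non-greedy)
        \[ (-?\d+) (?: ([:-]) (-?\d+)? )? \]  # the subscript
        $''',
    re.VERBOSE,
)

def split_subscript(pattern):
    """Return (base, (start, end)) or (pattern, None) (sketch)."""
    m = SUBSCRIPT_RE.match(pattern)
    if not m:
        return pattern, None
    base, start, sep, end = m.groups()
    if sep is None:
        idx = int(start)
        return base, (idx, idx)
    return base, (int(start), int(end) if end is not None else None)
```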
# check if pattern matches group
# check hosts if no groups matched or it is a regex/glob pattern
# pattern might match host
# get_host autocreates implicit when needed
# Display warning if specified host pattern did not match any groups or hosts
# no need to write 'ignore' state
# FIXME: cache?
# allow implicit localhost if pattern matches and no other results
# allow Unix style @filename data
# Check if host in inventory, add if not
# Set/update the vars for this host
# reconcile inventory, ensures inventory rules are followed
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
# host was removed from inventory during refresh, we should not process
# create the new group and add it to inventory
# declared as class attrs to signal to ObjectProxy that we want them stored on the proxy, not the wrapped value
# fallback origin to ensure that vars are tagged with at least the file they came from
# __slots__ = [ 'name', 'vars', 'groups' ]
# populate ancestors
# populate ancestors first
# actually add group
# remove exclusive ancestors, except all!
# FUTURE: these values should be dynamically calculated on access ala the rest of magic vars
# provides 'groups' magic var, host object has group_names
# current localhost, implicit or explicit
# Always create the 'all' and 'ungrouped' groups,
# set localhost defaults
# sys.executable is not set in some cornercases. see issue #13585
# set group vars from group_vars/ files and vars plugins
# ensure all groups inherit from 'all'
# get host vars from host_vars/ files and vars plugins
# clear ungrouped of any incorrectly stored by parser
# add ungrouped hosts to ungrouped, except implicit
# special case for implicit hosts
# warn if overloading identifier as both group and host
# if host is not in hosts dict
# might need to create implicit localhost
# the group object may have sanitized the group name; use whatever it has
# TODO: add to_safe_host_name
# set to 'first source' in which host was encountered
# set default localhost from inventory to avoid creating an implicit one. Last localhost defined 'wins'.
# (c) 2017, Ansible by RedHat Inc,
# Copyright: (c) 2014, James Tanner <tanner.jc@gmail.com>
# PYTHON_ARGCOMPLETE_OK
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
# hardcoded from ascii values
# 'NORMAL': '\x1b[0m',  # newer?
# 'REVERSE':"\033[;7m",  # newer?
# previously existing string identifiers
# Potential locations of the role arg spec file in the meta subdir, with main.yml
# having the lowest priority.
# Check all potential spec files
# we keep error info, but let caller deal with it
# Check each subdir for an argument spec file
# select first-found role
# None here stands for 'collection', which standalone roles don't have
# makes downstream code simpler by having same structure as collection roles
# only read first existing spec
# Name filters might contain a collection FQCN or not.
# If we didn't add any entry points (b/c of filtering), ignore this entry.
# assume that if it is relative, it is for docsite, ignore rest
# ignore most styles, but some already had 'identifier strings'
# assumes refs are also always colors
# start specific style and 'end' with normal
# default ignore list for detailed views
# Warning: If you add more elements here, you also need to add it to the docsite build (in the
# ansible-community/antsibull repo)
# helper for unescaping
# rst specific
# general formatting
# no ascii code for this
# M(word) => [word]
# U(word) => word
# L(word, url) => word <url>
# P(word#type) => [word]
# R(word, sphinx-ref) => word
# O(expr)
# V(expr)
# E(expr)
# RV(expr)
# HORIZONTALLINE => -------
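A few of the macro conversions listed above can be sketched as regex substitutions. This is a simplified illustration covering only M(), U(), L() and HORIZONTALLINE; the real converter handles the full macro set and escaping:

```python
import re

# simplified conversions for a subset of the doc markup macros
_MACROS = [
    (re.compile(r'M\(([^)]+)\)'), r'[\1]'),                 # M(word) => [word]
    (re.compile(r'U\(([^)]+)\)'), r'\1'),                   # U(word) => word
    (re.compile(r'L\(([^,]+),\s*([^)]+)\)'), r'\1 <\2>'),   # L(word, url) => word <url>
    (re.compile(r'HORIZONTALLINE'), '-------'),             # => -------
]

def tty_ify(text):
    """Convert doc markup macros to plain text (sketch)."""
    for pattern, repl in _MACROS:
        text = pattern.sub(repl, text)
    return text
```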
# remove rst
# seealso to See also:
# .. note:: to note:
# remove :ref: and other tags, keep tilde to match ending one
# remove .. stuff:: in general
# handle docsite refs
# targets
# formatting
# TODO: warn if not used with -t roles
# role-specific options
# exclusive modifiers
# TODO: warn with --json as it is incompatible
# TODO: warn when arg/plugin is passed
# generic again
# format for user
# format display per option
# list plugin file names
# handle deprecated for builtin/legacy
# list plugin names and short desc
# TODO: add mark for deprecated collection plugins
# Handle deprecated ansible.builtin plugins
# display results
# to find max len
# TODO: warn and skip role?
# simplify loops, don't want to handle every with_<lookup> combo
# because async became reserved in Python we had to rename it internally
# if no desc, the TypeError raised ends this block
# get playbook objects for keyword and use first to get keyword attributes
# we should only need these once
# TODO: make this a field attribute property,
# would also help with the warnings on {{}} stacking
# those that require no processing
# remove None keys
# Remove the internal ansible._protomatter plugins if getting all plugins
# get appropriate content depending on option
# reset for next iteration
# add short-named versions for lookup
# get the docs for plugins in the command line list
# The doc section existed but was empty
# Check whether JSON serialization would break
# pylint:disable=broad-except
# Dynamically build a doc stub for any Jinja2 builtin plugin we haven't
# explicitly documented.
# add to plugin paths from command line
# save only top level paths for errors
# reset so we can use subdirs later
# we always dump all types, ignore restrictions
# reset list after each type to avoid pollution
# here we require a name
# display specific plugin docs
# Display the docs
# Some changes to how plain text docs are formatted
# if the plugin lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs
# Removed plugins don't have any documentation
# generate extra data
# is there corresponding action plugin?
# return everything as one dictionary
# these do not take a yaml config that we can write a snippet for
# TODO: do we really want this?
# add_collection_to_versions_and_dates(doc, '(unknown)', is_module=(plugin_type == 'module'))
# remove_current_collection_from_versions_and_dates(doc, collection_name, is_module=(plugin_type == 'module'))
# remove_current_collection_from_versions_and_dates(
# assign from other sections
# TODO: move to plugin itself i.e: plugin.get_desc()
# plugin file was empty or had error, lets try other options
# handle test/filters that are in file with diff name
# Do a final fallback to see if the plugin is a shadowed Jinja2 plugin
# without any explicit documentation.
# In ansible-core, version_added can be 'historical'
# Create a copy so we don't modify the original (in case YAML anchors have been used)
# required is used as indicator and removed
# description is specifically formatted and can either be string or list of strings
# TODO: push this to top of for and sort by size, create indent on largest key?
# sanitize config items
# reformat cli options
# add custom header for conf
# these we handle at the end of generic option processing
# general processing for options
# generic elements we will handle identically
# use empty indent since this affects the start of the yaml doc, not its keys
# Create a copy so we don't modify the original
# this is actually a usable task!
# just a comment, hopefully useful yaml file
# these are 'list of arguments'
# Copyright: (c) 2016, Toshio Kuratomi <tkuratomi@ansible.com>
# We overload the ``ansible`` adhoc command to provide the functionality for
# ``SSH_ASKPASS``. This code is here, and not in ``adhoc.py`` to bypass
# unnecessary code. The program provided to ``SSH_ASKPASS`` can only be invoked
# as a singular command, ``python -m`` doesn't work for that use case, and we
# aren't adding a new entrypoint at this time. Assume that if we are executing
# and there is only a single item in argv plus the executable, and the env var
# is set we are in ``SSH_ASKPASS`` mode
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
# noinspection PyBroadException
# not a real file handle, such as during the import sanity test
# do not remove or defer; ensures controller-specific state is set early
# -F (quit-if-one-screen) -R (allow raw ansi control chars)
# -S (chop long lines) -X (disable termcap init and de-init)
# Initialize plugin loader after parse, so that the init code can utilize parsed arguments
# In some contexts ``collections_path`` is singular
# return (before_@, after_@)
# if no @, return whole string as after_
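The split described above can be sketched as follows. The helper name and exact split semantics are illustrative assumptions, not the real CLI code:

```python
# Sketch (hedged): splitting a --vault-id argument of the form "label@source".
def split_vault_id(vault_id):
    """Return (before_@, after_@); before_@ is None when there is no '@'."""
    if '@' not in vault_id:
        # no @, return whole string as the source part
        return (None, vault_id)
    label, _, source = vault_id.partition('@')
    return (label, source)
```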
# convert vault_password_files into vault_ids slugs
# note this makes --vault-id higher precedence than --vault-password-file
# if we want to intertwingle them in order probably need a cli callback to populate vault_ids
# used by --vault-id and --vault-password-file
# if an action needs an encrypt password (create_new_password=True) and we don't
# have other secrets setup, then automatically add a password prompt as well.
# prompts can't/shouldn't work without a tty, so don't add prompt secrets
# list of tuples
# Depending on the vault_id value (including how --ask-vault-pass / --vault-password-file create a vault_id)
# we need to show different prompts. This is for compat with older Towers that expect a
# certain vault password prompt format, so 'prompt_ask_vault_pass' vault_id gets the old format.
# If there are configured default vault identities, they are considered 'first'
# so we prepend them to vault_ids (from cli) here
# 2.3 format prompts for --ask-vault-pass
# The format when we use just --ask-vault-pass needs to match 'Vault password:\s*?$'
# --vault-id some_name@prompt_ask_vault_pass --vault-id other_name@prompt_ask_vault_pass will be a little
# confusing since it will use the old format without the vault id in the prompt
# choose the prompt based on --vault-id=prompt or --ask-vault-pass. --ask-vault-pass
# always gets the old format for Tower compatibility.
# ie, we used --ask-vault-pass, so we need to use the old vault password prompt
# format since Tower needs to match on that format.
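The prompt selection described above can be sketched as below. The function name and exact prompt strings are assumptions for illustration; only the compatibility constraint (the bare `Vault password:` format for the old-style id) comes from the comments:

```python
# Sketch (hedged): pick the prompt text based on the vault id.
# Older Tower releases match on the bare 'Vault password:' format, so the
# special 'prompt_ask_vault_pass' id keeps the 2.3-era prompt; named ids
# get a prompt that includes the id.
def vault_prompt(vault_id):
    if vault_id == 'prompt_ask_vault_pass':
        return 'Vault password: '
    return 'Vault password (%s): ' % vault_id
```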
# an empty or invalid password from the prompt will warn and continue to the next
# without erroring globally
# update loader with new secrets incrementally, so we can load a vault password
# that is encrypted with a vault secret provided earlier
# assuming anything else is a password file
# read vault_pass from a file
# update loader with as-yet-known vault secrets
# An invalid or missing password file will error globally
# if no valid vault secret was found.
# process tags
# optparse defaults do not do what's expected
# More specifically, we want `--tags` to be additive. So we cannot
# simply change C.TAGS_RUN's default to ["all"] because then passing
# --tags foo would cause us to have ['all', 'foo']
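The additive-tags gotcha described above can be sketched with plain argparse. This is a hedged illustration of the problem, not the real option plumbing:

```python
# Sketch: with action='append', the default cannot simply be ['all'],
# because user-supplied tags would be appended to it ( ['all', 'foo'] ).
# Defaulting to [] and applying 'all' afterwards keeps --tags additive.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--tags', dest='tags', action='append', default=[])

args = parser.parse_args(['--tags', 'foo'])
tags = args.tags or ['all']  # 'all' only when the user passed no tags
```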
# process skip_tags
# Make sure path argument doesn't have a backslash
# process inventory options except for CLIs that require their own processing
# should always be list
# Ensure full paths when needed
# expensive call, use with care
# this is a much simpler form of what is in pydoc.py
# TODO: evaluate moving all of the code that touches ``AnsibleCollectionConfig``
# into ``init_plugin_loader`` so that we can specifically remove
# ``AnsibleCollectionConfig.playbook_paths`` to make it immutable after instantiation
# all needs loader
# create the inventory, and filter it based on the subset specified (if any)
# create the variable manager, which will be shared throughout
# the code, ensuring a consistent view of global variables
# flush fact cache if requested
# Empty inventory
# ensure its read as bytes
# DTFIX-FUTURE: clean this up so we're not hacking the internals- re-wrap in an AnsibleCLIUnhandledError that always shows TB, or?
# Copyright: (c) 2017, Brian Coca <bcoca@ansible.com>
# remove unused default options
# Actions
# graph
# list
# self.parser.add_argument("--ignore-vars-plugins", action="store_true", default=False, dest='ignore_vars_plugins',
# there can be only one! and, at least, one!
# set host pattern to default if not supplied
# Initialize needed objects
# FIXME: should we template first?
# not doing single host, set limit in general if given
# FIXME: pager?
# get info from inventory source
# Always load vars plugins
# only get vars defined directly on the host
# get all vars flattened by host, but skip magic hostvars
# remove empty keys
# remove empty groups
# populate meta
# initialize group + vars
# subgroups
# hosts for group
# observe limit
# avoid defining host vars more than once
# options unique to ansible ad-hoc
# avoid adding to tasks that don't support it, unless set, then give user an error
# only thing left should be host pattern
# handle password prompts
# get basic objects
# get list of hosts to execute against
# just listing hosts?
# verify we have arguments if we know we need em
# Avoid modules that don't work with ad-hoc
# construct playbook objects to wrap task
# used in start callback
# Respect custom 'stdout_callback' only with enabled 'bin_ansible_callbacks'
# now create a task queue manager to execute the play
# create parser for CLI options
# ansible playbook specific opts
# for listing, we need to know if user had tag input
# capture here as parent function sets defaults for tags
# default to all tags (including never), when listing tags
# unless user specified tags
# Note: slightly wrong, this is written so that implicit localhost
# manages passwords
# initial error check, to make sure all specified playbooks are accessible
# before we start running anything through the playbook executor
# also prep plugin paths
# resolve if it is collection playbook with FQCN notation, if not, leaves unchanged
# not an FQCN so must be a file
# check if playbook is from collection (path can be passed directly)
# don't add collection playbooks to adjacency search path
# setup dirs to enable loading plugins from all playbooks in case they add callbacks/inventory/etc
# allow collections adjacent to these playbooks
# we use list copy to avoid opening up 'adjacency' in the previous loop
# don't deal with privilege escalation or passwords when we don't need to
# create base objects
# (which is not returned in list_hosts()) is taken into account for
# warning if inventory is empty.  But it can't be taken into account for
# checking if limit doesn't match any hosts.  Instead we don't worry about
# limit if only implicit localhost was in inventory to start with.
# Fix this when we rewrite inventory by making localhost a real host (and thus show up in list_hosts())
# create the playbook executor, which manages running the plays via a task queue manager
# show host list if we were able to template into a list
# For encrypting actions, we can also specify which of multiple vault ids should be used for encrypting
# prompting from stdin and reading from stdin are mutually exclusive, if stdin is still provided, it is ignored
# should only trigger if prompt + either - or encrypt string stdin name were provided
# set default restrictive umask
# there are 3 types of actions, those that just 'read' (decrypt, view) and only
# need to ask for a password once, and those that 'write' (create, encrypt) that
# ask for a new password and confirm it, and 'read/write' (rekey) that asks for the
# old password, then asks for a new one and confirms it.
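The three categories above can be sketched as a prompt table. The action and prompt names are illustrative assumptions:

```python
# Sketch (hedged): how many password prompts each vault action category needs.
# 'read' asks once; 'write' asks for a new password plus confirmation;
# 'read/write' (rekey) asks for the old one, then a new one and confirms it.
ACTION_PROMPTS = {
    'decrypt': ['Vault password'],
    'view': ['Vault password'],
    'create': ['New Vault password', 'Confirm New Vault password'],
    'encrypt': ['New Vault password', 'Confirm New Vault password'],
    'rekey': ['Vault password', 'New Vault password', 'Confirm New Vault password'],
}
```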
# TODO: instead of prompting for these before, we could let VaultEditor
# no --encrypt-vault-id context.CLIARGS['encrypt_vault_id'] for 'edit'
# only one secret for encrypt for now, use the first vault_id and use its first secret
# TODO: exception if more than one?
# print('encrypt_vault_id: %s' % encrypt_vault_id)
# print('default_encrypt_vault_id: %s' % default_encrypt_vault_id)
# new_vault_ids should only ever be one item, from
# load the default vault ids if we are using encrypt-vault-id
# There is only one new_vault_id currently and one new_vault_secret, or we
# use the id specified in --encrypt-vault-id
# FIXME: do we need to create VaultEditor here? its not reused
# and restore umask
# FIXME: use the correct vau
# Holds tuples (the_text, the_source_of_the_string, the variable name if it's provided).
# remove the non-option '-' arg (used to indicate 'read from stdin') from the candidate args so
# we don't add it to the plaintext list
# We can prompt and read input, or read from stdin, but not both.
# TODO: enforce var naming rules?
# TODO: could prompt for which vault_id to use for each plaintext string
# defaults to None
# use any leftover args as strings to encrypt
# Try to match args up to --name options
# Some but not enough --name's to name each var
# Trying to avoid ever showing the plaintext in the output, so this warning is vague to avoid that.
# Add the rest of the args without specifying a name
# if no --names are provided, just use the args without a name.
# Convert the plaintext text objects to bytestrings and collect
# TODO: specify vault_id per string?
# Format the encrypted strings and any corresponding stderr output
# The output must end with a newline to play nice with terminal representation.
# Refs:
# * https://stackoverflow.com/a/729795/595220
# * https://github.com/ansible/ansible/issues/78932
# TODO: offer block or string ala eyaml
# If we are only showing one item in the output, we don't need to include commented
# delimiters in the text
# list of dicts {'out': '', 'err': ''}
# Encrypt the plaintext, and format it into a yaml block that can be pasted into a playbook.
# For more than one input, show some differentiating info in the stderr output so we can tell them
# apart. If we have a var name, we include that in the yaml
# (the text itself, which input it came from, its name)
# block formatting
# Note: vault should return byte strings because it could encrypt
# and decrypt binary files.  We are responsible for changing it to
# unicode here because we are displaying it and therefore can make
# the decision that the display doesn't have to be precisely what
# the input was (leave that to decrypt instead)
# FIXME: plumb in vault_id, use the default new_vault_secret for now
# initialize each galaxy server's options from known listed servers
# clean list, reused later here
# run the requested action
# build list
# iterate over class instances
# alias or deprecated
# this dumps main/common configs
# for base and all, we include galaxy servers
# now each plugin type
# only for requested types
# python lists are not valid env ones
# list of other stuff
# TODO: might need quoting and value coercion depending on type
# recursed into one of the few settings that is a mapping, now hitting its strings
# it's a plugin
# avoid dupes
# python lists are not valid ini ones
# TODO: add yaml once that config option is added
# proceed normally
# should include '_terms', '_input', etc
# Add base
# convert to settings
# prep loading
# accumulators
# in case of deprecation they diverge
# skip alias
# deprecated, but use 'nice name'
# default entries per plugin
# populate config entries by loading plugin
# actually get the values
# not all cases will be error
# pretty please!
# avoid header for empty lists (only changed!)
# add galaxy servers
# deal with base
# add all plugins
# deal with specific plugin
# validate ini config since it is found
# Also from plugins
# check for valid sections
# check keys in valid sections
# validate any 'ANSIBLE_' env vars found
# we found discrepancies!
# all's good
# signature is different from parent as caller should not need to add usage/desc
# Do not add check_options as there's a conflict with --checkout/-C
# options unique to pull
# TODO: resolve conflict with check mode, added manually below
# Overloaded with adhoc ... but really passthrough to adhoc
# add a subset of the check_opts flag group manually, as the full set's
# shortcodes conflict with above --checkout/-C, see to-do above
# use a hostname dependent directory, in case of $HOME on nfs
# log command line
# Build Checkout command
# Now construct the ansible command
# Attempt to use the inventory passed in as an argument
# It might not yet have been downloaded so use localhost as default
# avoid interpreter discovery since we already know which interpreter to use on localhost
# SCM specific options
# options common to all supported SCMs
# hardcode local and inventory/host as this is just meant to fetch the repo
# Nap?
# RUN the Checkout command
# detect json/yaml/header, any count as 'changed'
# no change, we bail
# Build playbook command
# redo inventory options as new files might exist now
# RUN THE PLAYBOOK COMMAND
# Copyright: (c) 2013, James Cammarata <jcammarata@ansible.com>
# Copyright: (c) 2018-2021, Ansible Project
# FIXME: use validate_certs context from Galaxy servers when downloading collections
# .get used here for when this is used in a non-CLI context
# Make sure that the number of dashes is at least the width of the header
# Make sure the width isn't smaller than the header
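The two width rules above can be sketched in one expression. The function name is illustrative:

```python
# Sketch: the dashed rule under a header must be at least as wide as the
# header text, regardless of the requested column width.
def separator_for(header, requested_width):
    width = max(requested_width, len(header))
    return '-' * width
```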
# Inject role into sys.argv[1] as a backwards compatibility step
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
# Common arguments that apply to more than 1 action
# Hidden argument that should only be used in our tests
# --timeout uses the default None to handle two different scenarios.
# * --timeout > C.GALAXY_SERVER_TIMEOUT for non-configured servers
# * --timeout > server-specific timeout > C.GALAXY_SERVER_TIMEOUT for configured servers.
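The two precedence chains above can be sketched as follows. The names and the `60` stand-in default are assumptions; the real value comes from `C.GALAXY_SERVER_TIMEOUT`:

```python
# Sketch (hedged): resolve the effective Galaxy server timeout.
GALAXY_SERVER_TIMEOUT = 60  # stand-in for C.GALAXY_SERVER_TIMEOUT

def resolve_timeout(cli_timeout, server_timeout=None):
    # CLI flag wins, then any server-specific setting, then the global default.
    if cli_timeout is not None:
        return cli_timeout
    if server_timeout is not None:
        return server_timeout
    return GALAXY_SERVER_TIMEOUT
```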
# Add sub parser for the Galaxy role type (role or collection)
# Add sub parser for the Galaxy collection actions
# to satisfy doc build
# Add sub parser for the Galaxy role actions
# Eventually default to ~/.ansible/pubring.kbx?
# might install both roles and collections
# -r, -fr
# --role-file foo, --role-file=foo
# Any collections in the requirements files will also be installed
# ensure we have 'usable' cli option
# the default if validate_certs is None
# dynamically add per server config depending on declared servers
# Need to filter out empty strings or non truthy values as an empty server list env var is equal to [''].
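The empty-string filtering mentioned above can be sketched as:

```python
# Sketch: an empty server-list env var splits to [''], so falsy entries
# must be filtered out before treating each entry as a server name.
def parse_server_list(value):
    return [s.strip() for s in value.split(',') if s.strip()]
```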
# resolve the config created options above with existing config and user options
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as kwarg to GalaxyApi, same for others we pop here
# This allows a user to explicitly force use of an API version when
# multiple versions are supported. This was added for testing
# against pulp_ansible and I'm not sure it has a practical purpose
# outside of this use case. As such, this option is not documented
# as of now
# default case if no auth info is provided.
# The galaxy v1 / github / django / 'Token'
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
# Default to C.GALAXY_SERVER if no servers were defined
# checks api versions once a GalaxyRole makes an api call
# self.api can be used to evaluate the best server immediately
# Older format that contains only roles
# Newer format with a collections and/or roles key
# Assume it's a string:
# Try and match up the requirement source with our list of Galaxy API
# servers defined in the config, otherwise create a server with that
# URL without any auth.
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
# make sure we have a trailing newline returned
############################
# execute actions
# To satisfy doc build
# x.y
# delete the contents rather than the collection root in case init was run from the root (--init-path ../../)
# create role directory
# A collection can contain templates in playbooks/*/templates and roles/*/templates
# Filter out ignored directory names
# Use [:] to mutate the list os.walk uses
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
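The copy-and-rename step described above can be sketched as below. The helper name is illustrative:

```python
# Sketch: the template expects a 'name' key, but the inject data carries
# 'collection_name'; copy the mapping rather than mutating the original.
def with_name_key(inject_data):
    data = dict(inject_data)
    data['name'] = data.pop('collection_name')
    return data
```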
# Role does not exist in Ansible Galaxy
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
# We only want to display a warning if 'ansible-galaxy install -r ... -p ...'. Other cases the user
# was explicit about the type and shouldn't care that collections were skipped.
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
# only process roles in roles files when names matches if given
# query the galaxy API for the role data
# install dependencies, if we want them
# NOTE: the meta file is also required for installing the role, not just dependencies
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
# show the requested role, if it exists
# Do not warn if the role was found in any of the search paths
# list a specific collection
# don't warn for missing default paths
# Do not warn if the specific collection was found in any of the search paths
# allows unit test override
# Submit an import request
# found multiple roles associated with github_user/github_repo
# found a single role as expected
# Get the status of the import
# List existing integration secrets
# None found
# Remove a secret
# deprecated: description='Python 3.13 and later support track' python_version='3.12'
# This SharedMemory instance is intentionally not closed or unlinked.
# Closing will occur naturally in the SharedMemory finalizer.
# Unlinking is the responsibility of the process which created it.
# When track=False is not available, we must unregister explicitly, since it otherwise only occurs during unlink.
# This avoids resource tracker noise on stderr during process exit.
# Report the password provided by the SharedMemory instance.
# The contents are left untouched after consumption to allow subsequent attempts to succeed.
# This can occur when multiple password prompting methods are enabled, such as password and keyboard-interactive, which is the default on macOS.
# Copyright: (c) 2014, Nandor Sivok <dominis@haxor.hu>
# Copyright: (c) 2016, Redhat Inc
# type: list[str] | None
# use specific to console, but fallback to highlight for backwards compatibility
# Defaults for these are set from the CLI in run()
# options unique to shell
# we found module!
# hosts
# Defaults from the command line
# set module path if needed
# dynamically add 'canonical' modules as commands, aliases could be used and dynamically loaded
# This hack is to work around readline issues on a mac:
# If this is a relative path (~ gets expanded later) then plug the
# key's path on to the directory we originally came from, so we can
# find it now that our cwd is /
# socket.accept() will raise EINTR if the socket.close() is called
# allow time for any exception msg send over socket to receive at other end before shutting down
# when done, close the connection properly and cleanup the socket file so it can be recreated
# initialize verbosity
# Need stdin as a byte stream
# Note: update the below log capture code after Display.display() is refactored.
# read the play context data via stdin, which means depickling it
# create the persistent connection dir if need be and create the paths
# which we will be using later
# Only network_cli has update_play context and set_check_prompt, so missing this is
# not fatal e.g. netconf
# Special purpose OptionParsers
# pylint: disable=ansible-invalid-deprecated-version
# get the action by name, or use as-is (assume it's a subclass of argparse.Action)
# type: ignore[misc, valid-type]
# Callbacks to validate and normalize Options
# Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
# There is a possibility that the .git file contains an absolute path.
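The `.git`-as-a-file case can be sketched as below. This is a hedged illustration assuming the standard `gitdir: <path>` file format used by submodules and worktrees:

```python
# Sketch: resolve the real git dir when .git is a file (submodule layout).
# The file holds a line like 'gitdir: <path>', relative or absolute.
import os

def resolve_git_dir(repo_path):
    dot_git = os.path.join(repo_path, '.git')
    if not os.path.isfile(dot_git):
        return dot_git  # normal repo: .git is a directory
    with open(dot_git) as f:
        gitdir = f.read().split('gitdir:', 1)[1].strip()
    if not os.path.isabs(gitdir):
        gitdir = os.path.join(repo_path, gitdir)
    return gitdir
```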
# detached HEAD
# pylint: disable=broad-except
# Functions to add pre-canned options to an OptionParser
# base opts
# ssh only
# consolidated privilege escalation (become)
# Values which are not mutated are automatically trusted for templating.
# The `is` reference equality is critically important, as other types may only alter the tags, so object equality is
# not sufficient to prevent them being tagged as trusted when they should not.
# Explicitly include all usages using the `str` type factory since it strips tags.
# simplify debugging by attaching the argument name to the function
# This package exists to host vendored top-level Python packages for downstream packaging. Any Python packages
# installed beneath this one will be masked from the Ansible loader, and available from the front of sys.path.
# It is expected that the vendored packages will be loaded very early, so a warning will be fired on import of
# the top-level ansible package if any packages beneath this are already loaded at that point.
# Python packages may be installed here during downstream packaging using something like:
# pip install --upgrade -t (path to this dir) cryptography pyyaml packaging jinja2
# mask vendored content below this package from being accessed as an ansible subpackage
# patch our vendored dir onto sys.path
# m[1] == m.name
# patch us early to load vendored deps transparently
# handle reload case by removing the existing entry, wherever it might be
# DTFIX-FUTURE: come up with a better way to handle this so it can be deprecated
# Commonly abused by numerous collection lookup plugins and the Ceph Ansible `config_template` action.
# Abused by cloud.common, community.general and felixfontein.tools collections to create a new Templar instance.
# backward compatibility: filter out None values from overrides, even though it is a valid value for some of them
# noinspection PyUnusedLocal
# DTFIX-FUTURE: offer a public version of TemplateOverrides to support an optional strongly typed `overrides` argument
# Skipping a deferred deprecation due to minimal usage outside ansible-core.
# Use `hasattr(templar, 'evaluate_expression')` to determine if `template` or `evaluate_expression` should be used.
# The pre-2.19 config fallback is ignored for content portability.
# Use `hasattr(templar, 'evaluate_expression')` as a surrogate check to determine if `convert_data` is accepted.
# Skipping a deferred deprecation due to no known usage outside ansible-core.
# Use `hasattr(templar, 'evaluate_expression')` as a surrogate check to determine if `disable_lookups` is accepted.
# pre-2.19 compat
# deprecated description="deprecate `generate_ansible_template_vars`, collections should inline the necessary variables" core_version="2.23"
# covers TextIO and BinaryIO at runtime, but type checking disagrees
# Copyright: (c) 2014, Matthew Vernon <mcv21@cam.ac.uk>
# Makes sure public host keys are present or absent in the given known_hosts
# Arguments
# Find the ssh-keygen binary
# Trailing newline in files gets lost, so re-add if necessary
# check if we are trying to remove a non matching key,
# in that case return with no change to the host
# We will change state if found==True & state!="present"
# or found==False & state=="present"
# i.e. found XOR (state=="present")
# Alternatively, if replace is true (i.e. key present, and we must change
# it)
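The decision above condenses to a single boolean expression:

```python
# Sketch: we will change state when found XOR (state == "present"),
# or when replace is true (key present but must be changed).
def will_change(found, state, replace):
    return (found != (state == "present")) or replace
```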
# Now do the work.
# Only remove whole host if found and no key provided
# Next, add a new (or replacing) entry
# skip this line to replace its key
# If no key supplied, we're doing a removal, and have nothing to check here.
# Rather than parsing the key ourselves, get ssh-keygen to do it
# (this is essential for hashed keys, but otherwise useful, as the
# key question is whether ssh-keygen thinks the key matches the host).
# The approach is to write the key to a temporary file,
# and then attempt to look up the specified host in that file.
# host not found
# openssh >=6.4 has changed ssh-keygen behaviour such that it returns
# 1 if no host is found, whereas previously it returned 0
# host not found, no other errors
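The lookup and return-code handling described above can be sketched as below. This is a hedged simplification; the real module also deals with hashed hosts and key normalization, and the binary path is a parameter:

```python
# Sketch: ask ssh-keygen whether a host appears in a key file, treating
# rc 1 with no stderr as "host not found" (OpenSSH >= 6.4 behavior).
import subprocess

def host_in_file(keygen_path, host, keyfile):
    proc = subprocess.run([keygen_path, '-F', host, '-f', keyfile],
                          capture_output=True, text=True)
    if proc.returncode == 0:
        return True
    if proc.returncode == 1 and not proc.stderr:
        return False  # host not found, no other errors
    raise RuntimeError('ssh-keygen failed: %s' % proc.stderr)
```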
# If user supplied no key, we don't want to try and replace anything with it
# info output from ssh-keygen; contains the line number where key was found
# This output format has been hardcoded in ssh-keygen since at least OpenSSH 4.0
# It always outputs the non-localized comment before the found key
# found a match
# found exactly the same key, don't replace
# do not change host hash if already hashed
# found a different key for the same key type
# No match found, return found and replace, but no line
# @ indicates the optional marker field used for @cert-authority or @revoked
# trim trailing newline
# The optional "marker" field, used for @cert-authority or @revoked
# TODO: deprecate returning everything that was passed in
# Copyright:  Ansible Project
# Copyright: (c) 2012, Stephen Fromm <sfromm@gmail.com>
# The grp module does not distinguish between local and directory accounts.
# Its output cannot be used to determine whether or not a group exists locally.
# It returns True if the group exists locally or in the directory, so instead
# look in the local GROUP file for an existing account.
# ===========================================
# modify the group if cmd will do anything
# check for lowest available system gid (< 500)
# Since there is no groupmod command, modify /etc/group directly
# Copyright: (c) 2013, Romeo Theriault <romeot () hawaii.edu>
# List of response key names we do not want sanitize_keys() to change.
# Turn a list of lists into a list of tuples that urlencode accepts
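The conversion above can be sketched in one line:

```python
# Sketch: body_format=form-urlencoded data may arrive as a list of
# two-element lists; convert to two-tuples before passing to urlencode.
from urllib.parse import urlencode

def encode_body(pairs):
    return urlencode([tuple(p) for p in pairs])
```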
# if dest is set and is a directory, let's check if we get redirected and
# set the filename from that url
# if destination file already exist, only download if file newer
# Try to close the open file handle
# Encode the body unless it's a string, then assume it is pre-formatted JSON
# and the filename already exists.  This allows idempotence
# of uri executions.
# and the filename does not exist.  This allows idempotence
# Make the request
# r may be None for some errors
# r.fp may be None depending on the error, which means there are no headers either
# https://www.rfc-editor.org/rfc/rfc6839#section-3.1
# there was no content, but the error read()
# may have been stored in the info as 'body'
# Write the file out if requested
# allow file attribute changes
# Transmogrify the headers, replacing '-' with '_', since variables don't
# work with dashes.
# In python3, the headers are title cased.  Lowercase them to be
# compatible with the python2 behaviour.
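The header normalization described above can be sketched as:

```python
# Sketch: turn response header names into usable variable names by
# lowercasing them (Python 3 title-cases headers) and replacing '-' with '_'.
def transmogrify(headers):
    return {k.lower().replace('-', '_'): v for k, v in headers.items()}
```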
# Default content_encoding to try
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <i.gnatenko.brain@gmail.com>
# Copyright 2018 Adam Miller <admiller@redhat.com>
# FIXME: NOTE dnf Python bindings import is postponed, see DnfModule._ensure_dnf(),
# This populates instance vars for all argument spec params
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
# envra format for backwards compat
# keep nevra key for backwards compat as it was previously
# defined with a value in envra format
# probe well-known system Python locations for accessible bindings, favoring py3
# respawn under the interpreter where the bindings should be found
# end of the line for this module, the process will exit here once the respawned module completes
# done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed)
# Change the configuration file path if provided, this must be done before conf.read() is called
# Fail if we can't read the configuration file.
# Read the configuration file
# Turn off debug messages in the output
# Set whether to check gpg signatures
# Don't prompt for user confirmations
# Set certificate validation
# Set installroot
# Load substitutions from the filesystem
# Handle the immutable vs. mutable config datatypes of different DNF versions
# dnf v1/v2/v3
# In DNF < 3.0 these are lists, and modifying them works
# In DNF >= 3.0, < 3.6 these are lists, but modifying them doesn't work
# In DNF >= 3.6 they have been turned into tuples, to communicate that modifying them doesn't work
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
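A version-agnostic way to handle the list/tuple split above is to re-assign rather than mutate. The attribute name is illustrative:

```python
# Sketch (hedged): append to a config attribute that is a mutable list in
# some DNF versions and an immutable tuple in others; building a fresh list
# and re-assigning works in every case.
def append_setting(conf, attr, values):
    current = getattr(conf, attr, ())
    setattr(conf, attr, list(current) + list(values))
```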
# Set excludes
# Set disable_excludes
# Set releasever
# values of conf.substitutions are expected to be strings
# setting this to an empty string instead of None appears to mimic the DNF CLI behavior
# Honor installroot for dnf directories
# This will also perform variable substitutions in the paths
# Set skip_broken (in dnf this is strict=0)
# best and nobest are mutually exclusive
# Default in dnf upstream is true
# Default in dnf (and module default) is True
# Disable repositories
# Enable repositories
# Rename updates to upgrades
# Return the corresponding packages
# Return the enabled repository ids
# Return any matching packages
# expects a versioned package spec
# Special case for package specs that contain glob characters.
# For these we skip `is_installed` and `is_newer_version_installed` tests that allow for the
# allow_downgrade feature and pass the package specs to dnf.
# Since allow_downgrade is not available in dnf and while it is relatively easy to implement it for
# package specs that evaluate to a single package, trying to mimic what would the dnf machinery do
# for glob package specs and then filtering those for allow_downgrade appears to always
# result in naive/inferior solution.
# NOTE this has historically never worked even before https://github.com/ansible/ansible/pull/82725
# where our (buggy) custom code ignored wildcards for the installed checks.
# TODO: research how feasible it is to implement the above
# for upgrade we pass the spec to both upgrade and install, to satisfy both available and installed
# packages evaluated from the glob spec
# Only load this if necessary, it's slow
# dnf install /usr/bin/vi
# should be only one?
# not installed, pass the filename for dnf to process
# The provided stream was found
# The provided stream was not found
# No stream provided, but module found
# seems like a logical default
# Accumulate failures.  Package management modules install what they can
# and fail with a message about what they can't.
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
# Install files.
# Install modules
# Install groups.
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised.  We want to capture
# this but still install as much as possible.
# This means that the group or env wasn't found in comps
# Install packages.
# "latest" is same as "installed" for filenames.
# Upgrade modules
# If not already installed, try to install.
# state == absent
# Remove modules
# Group is already uninstalled.
# Environment is already uninstalled.
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
# NOTE for people who go down the rabbit hole of figuring out why
# resolve() throws DepsolveError here on dep conflict, but not when
# called from the CLI: It's controlled by conf.best. When best is
# set, Hawkey will fail the goal, and resolve() in dnf.base.Base
# will throw. Otherwise if it's not set, the update (install) will
# be (almost silently) removed from the goal, and Hawkey will report
# success. Note that in this case, similar to the CLI, skip_broken
# does nothing to help here, so we don't take it into account at all.
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
# validated successfully
# validation failed, install cert?
# fatal error
# No further work left to do, and the results were already updated above.
# Just return them.
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
# Note: base takes a long time to run so we want to check for failure
# before running it.
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
# informational commands:
# -*- mode: python -*-
# Copyright: (c) 2012, Seth Vidal (@skvidal)
# Copyright: Ansible Team
# (c) 2017, Brian Coca <bcoca@ansible.com>
# (c) 2017, Adam Miller <admiller@redhat.com>
# ensure service exists, get script name
# locate binaries for service management
# Keeps track of the service status for various runlevels because we can
# operate on multiple runlevels at once
# figure out enable status
# figure out started status, everyone does it different!
# user knows other methods fail and supplied pattern
# standard tool that has been 'destandardized' by reimplementation in other OS/distros
# maybe script implements status (not LSB)
# special case
# check output messages, messy but sadly more reliable than rc
# hope rc is not lying to us, use often used 'bad' returns
# hail mary
# ps for luck, can only assure positive match
###########################################################################
# BEGIN: Enable/Disable
# Perform enable/disable here
# Assigned above, might be useful if something goes sideways
# END: Enable/Disable
# BEGIN: state
# how to run
# FIXME: ERRORS
# cannot rely on existing 'restart' in init script
# END: state
# import module snippets
# TODO: this mimics existing behavior where gather_subset=["!all"] actually means
# TODO: decide what '!all' means, I lean towards making it mean none, but likely needs
# rename namespace_name to root_key?
# bad subset given, collector, idk, deps declared but not found
# Copyright: (c) 2012, Daniel Hokka Zakrisson <daniel@hozac.com>
# Copyright: (c) 2014, Ahti Kitsik <ak@ahtik.com>
# index[0] is the line num where regexp has been found
# index[1] is the line num where insertafter/insertbefore has been found
# The module's doc says
# "If regular expressions are passed to both regexp and
# insertafter, insertafter is only honored if no match for regexp is found."
# Therefore:
# 1. regexp or search_string was found -> ignore insertafter, replace the found line
# 2. regexp or search_string was not found -> insert the line after 'insertafter' or 'insertbefore' line
# Given the above:
# 1. First check that there is no match for regexp:
# 2. Second check that there is no match for search_string:
# 3. When no match found on the previous step,
# parse for searching insertafter/insertbefore:
# + 1 for the next line
# index[1] for the previous line
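The precedence rule spelled out above can be sketched as follows; the helper name and return shape are illustrative, not the module's actual internals:

```python
import re


def choose_action(lines, regexp, insertafter):
    """Sketch of the lineinfile precedence rule: a regexp match always
    wins over insertafter.

    Returns ('replace', idx) when regexp matched line idx,
    ('insert', idx + 1) when only insertafter matched,
    or ('append', len(lines)) when neither matched.
    """
    match_idx = insert_idx = None
    for i, line in enumerate(lines):
        if re.search(regexp, line):
            match_idx = i          # index[0]: where regexp was found
        if insertafter and re.search(insertafter, line):
            insert_idx = i         # index[1]: where insertafter was found
    if match_idx is not None:
        # regexp was found -> ignore insertafter, replace the found line
        return ('replace', match_idx)
    if insert_idx is not None:
        return ('insert', insert_idx + 1)   # + 1 for the next line
    return ('append', len(lines))
```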
# Exact line or Regexp matched a line in the file
# Don't do backref expansion if not asked.
# If no regexp or search_string was given and no line match is found anywhere in the file,
# insert the line appropriately if using insertbefore or insertafter
# Insert lines
# Ensure there is a line separator after the found string
# at the end of the file.
# If the line to insert after is at the end of the file
# use the appropriate index value.
# If the line to insert before is at the beginning of the file
# Do absolutely nothing, since it's not safe generating the line
# without the regexp matching to populate the backrefs.
# Add it to the beginning of the file
# Add it to the end of the file if requested or
# if insertafter/insertbefore didn't match anything
# (so default behaviour is to add at the end)
# If the file is not empty then ensure there's a newline before the added line
# Don't insert the line if it already matches at the index.
# If the line to insert after is at the end of the file use the appropriate index value.
# insert matched, but not the regexp or search_string
# Deal with the insertafter default value manually, to avoid errors
# because of the mutually_exclusive mechanism.
# Copyright: (c) 2016, Krzysztof Magosa <krzysztof@magosa.pl>
# Copyright: (c) 2014, Brian Coca <briancoca+ansible@gmail.com>
# line is a collection of tab separated values
# No password found, return a blank password
# If correct question and question type found, return password value
# Fail safe
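A sketch of the password lookup these comments describe, assuming each selections line holds tab-separated fields (package, question, type, value); `find_password` is a hypothetical helper, not the module's code:

```python
def find_password(selections, question):
    """Return the stored value for `question` when its question type is
    'password', or '' (a blank password) when none is found."""
    for line in selections.splitlines():
        fields = line.split('\t')
        if len(fields) < 4:
            continue  # fail safe: skip malformed lines
        _pkg, quest, qtype, value = fields[:4]
        # Only return the value for the correct question and question type.
        if quest == question and qtype == 'password':
            return value
    return ''  # no password found, return a blank password
```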
# TODO: enable passing array of options and/or debconf file from get-selections dump
# ensure we compare booleans supplied to the way debconf sees them (true/false strings)
# if question doesn't exist, value cannot match
# Copyright: (c) 2013, Evan Kaufman <evan@digitalflophouse.com>
# We should always follow symlinks so that we change the real file
# Copyright: (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>, and others
# the command module is the one ansible module that does not take key=value args
# hence don't copy this one if you are looking to build others!
# NOTE: ensure splitter.py is kept in sync for exceptions
# The default for this really comes from the action plugin
# we promised these in 'always' ( _lines get auto-added on action plugin)
# All args must be strings
# check_mode partial support, since it only really works in checking creates/removes
# special skips for idempotence if file exists (assumes command creates)
# TODO: deprecate
# special skips for idempotence if file does not exist (assumes command removes)
# actually executes command (or not ...)
# this is partial check_mode support, since we end up skipping if we get here
# skipped=True and changed=True are mutually exclusive
# convert to text for jsonization and usability
# these are datetime objects, but need them as strings to pass back
# Copyright: (c) 2012, Jeroen Hoekx (@jhoekx)
# Get current settings.
# encoding: utf-8
# Copyright: (c) 2012, Matt Wright <matt@nobien.net>
# Copyright: (c) 2013, Alexander Saltanov <asd@mokote.com>
# Copyright: (c) 2014, Rutger Spiertz <rutger@kumina.nl>
# Simple version of aptsources.sourceslist.SourcesList.
# No advanced logic and no backups inside.
# group sources by file
# internal DS for tracking symlinks
# Repositories that we're adding -- used to implement mode param
# read sources.list if it exists
# read sources.list.d
# Drop options and protocols.
# split line into valid keywords
# Drop usernames and passwords
# Check for another "#" in the line and treat the part after it as a comment.
# Split a source into substrings to make sure that it is a source spec.
# Duplicated whitespaces in a valid source spec will be removed.
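A sketch of the comment/options/credentials/whitespace steps above (protocol handling omitted); the helper and its regexes are illustrative, not the module's parser:

```python
import re


def clean_source_line(line):
    """Normalize one apt source line as described above."""
    # Treat everything after the first '#' as a comment.
    line = line.split('#', 1)[0]
    # Drop bracketed options such as [arch=amd64 signed-by=/key.gpg].
    line = re.sub(r'\[[^\]]*\]', '', line)
    # Drop usernames and passwords embedded in the URI.
    line = re.sub(r'://[^/@\s]+@', '://', line)
    # Duplicated whitespace in a valid source spec is removed.
    return ' '.join(line.split())
```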
# Write to symlink target instead of replacing symlink as a normal file
# allow the user to override the default mode
# We'll try to reuse disabled source if we have it.
# If we have more than one entry, we will enable them all - no advanced logic, remember.
# Prefer separate files for new sources.
# If we have more than one entry, we will remove them all (not comment, remove!)
# prefer api.launchpad.net over launchpad.net/api
# see: https://github.com/ansible/ansible/pull/81978#issuecomment-1767062178
# main gpg repo for apt
# add other known sources of gpg sigs for apt, skip hidden files
# https://www.linuxuprising.com/2021/01/apt-key-is-deprecated-how-to-add.html
# repository already exists
# add gpg sig if needed
# TODO: report file that would have been added if not check_mode
# use first available key dir, in order of preference
# using gpg we must write keyfile ourselves
# apt source file
# First remove any new files that were created:
# Now revert the existing files to their former state:
# This should not be needed, but exists as a failsafe
# Note: mode is referenced in SourcesList class via the passed in module (self here)
# This interpreter can't see the apt Python library- we'll do the following to try and fix that:
# 1) look in common locations for system-owned interpreters that can see it; if we find one, respawn under it
# 2) finding none, try to install a matching python-apt package for the current interpreter version;
# 3) if we installed a support package, try to respawn under what we think is the right interpreter (could be
#    the current interpreter again, but we'll let it respawn anyway for simplicity)
# 4) if still not working, return an error and give up (some corner cases not covered, but this shouldn't be
#    needed often)
# this shouldn't be possible; short-circuit early if it happens...
# found the Python bindings; respawn this module under the interpreter where we found them
# this is the end of the line for this process, it will exit here once the respawned module has completed
# don't make changes if we're in check_mode
# try again to find the bindings in common places
# NB: respawn is somewhat wasteful if it's this interpreter, but simplifies the code
# we've done all we can do; just tell the user it's busted and get out
# Use exponential backoff with a max fail count, plus a little bit of randomness
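The backoff scheme named above can be sketched like this; the function, its defaults, and the cap are illustrative assumptions, not the module's actual values:

```python
import random


def backoff_delays(max_fails=5, base=2, cap=30, jitter=1.0, rng=random.random):
    """Yield exponential backoff delays with a max fail count, plus a
    little bit of randomness: base ** attempt grows exponentially, `cap`
    bounds the wait, and a random fraction of `jitter` spreads retries."""
    for attempt in range(max_fails):
        yield min(base ** attempt, cap) + rng() * jitter
```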
# This is catching a generic Exception, due to packaging on EL7 raising a TypeError on import
# type: ignore[misc,assignment]
#: Python one-liners to be run at the command line that will determine the
# installed version for these special libraries.  These are libraries that
# don't end up in the output of pip freeze.
# rebuild input name to a flat list so we can tolerate any combination of input
# reconstruct the names
# Try 'pip list' command first.
# If there was an error (pip version too old) then use 'pip freeze'.
# If you define your own executable that executable should be the only candidate.
# As noted in the docs, executable doesn't work with virtualenvs.
# If no executable or virtualenv were specified, use the pip module for the current Python interpreter if available.
# For-else: Means that we did not break out of the loop
# (therefore, that pip was not found)
# If we're using a virtualenv we must use the pip from the
# virtualenv
# type: ignore[assignment] # type: ignore[no-redef]
# type: ignore[truthy-function]
# noinspection PyDeprecation
# Find the binary for the command in the PATH
# and switch the command for the explicit path.
# Add the system-site-packages option if that
# is enabled, otherwise explicitly set the option
# to not use system-site-packages if that is an
# option provided by the command's help function.
# -p is a virtualenv option, not compatible with pyenv or venv
# this conditional validates if the command being used is not any of them
# This code mimics the upstream behaviour of using the python
# which invoked virtualenv to determine which python is used
# inside of the virtualenv (when none are specified).
# if venv or pyvenv are used and virtualenv_python is defined, then
# virtualenv_python is ignored, this has to be acknowledged
# old pkg_resource will replace 'setuptools' with 'distribute' when it's already installed
# old setuptools has no specifier, do fallback
# this is done to avoid permissions issues with privilege escalation and virtualenvs
# If there's a virtualenv we want things we install to be able to use other
# installations that exist as binaries within this virtualenv. Example: we
# install cython and then gevent -- gevent needs to use the cython binary,
# not just a python package that will be found by calling the right python.
# So if there's a virtualenv, we add that bin/ to the beginning of the PATH
# in run_command by setting path_prefix here.
# Automatically apply -e option to extra_args when source is a VCS url. VCS
# includes those beginning with svn+, git+, hg+ or bzr+
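The VCS-URL check above reduces to a prefix test; the helper name is hypothetical:

```python
def needs_editable(name):
    """True when a pip requirement is a VCS URL (beginning with svn+,
    git+, hg+ or bzr+), for which the -e option is applied."""
    return name.startswith(('svn+', 'git+', 'hg+', 'bzr+'))
```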
# convert raw input package names to Package instances
# check invalid combination of arguments
# if the version specifier is provided by version, append that into the package
# used if extra_args is not used at all
# Ok, we will reconstruct the option string
# Using an env var instead of the `--break-system-packages` option, to avoid failing under pip 23.0.0 and earlier.
# See: https://github.com/pypa/pip/pull/11780
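A sketch of the workaround above: pip options map to `PIP_*` environment variables, and pip versions that don't know an option simply ignore the corresponding variable, unlike an unknown command-line flag. The helper is hypothetical:

```python
import os


def pip_env(break_system_packages=False, environ=os.environ):
    """Build the environment for a pip invocation, using the env var
    instead of the --break-system-packages flag so pip 23.0.0 and
    earlier (which lack the flag) don't fail."""
    env = dict(environ)
    if break_system_packages:
        env['PIP_BREAK_SYSTEM_PACKAGES'] = '1'
    return env
```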
# Older versions of pip (pre-1.3) do not have pip list.
# pip freeze does not list setuptools or pip in its output
# So we need to get those via a special case
# rc is 1 when attempting to uninstall non-installed package
# This is a virtual module that is entirely implemented as an action plugin and runs on the controller
# There is no actual shell module source, when you use 'shell' in ansible,
# it runs the 'command' module with special arguments and it behaves differently.
# See the command source and the comment "#USE_SHELL".
# There will only be a single AnsibleModule object per module
# When path is a directory, rewrite the pathname to be the file inside of the directory
# TODO: Why do we exclude link?  Why don't we exclude directory?  Should we exclude touch?
# I think this is where we want to be in the future:
# when isdir(path):
# if state == absent:  Remove the directory
# if state == touch:   Touch the directory
# if state == directory: Assert the directory is the same as the one specified
# if state == file:    place inside of the directory (use _original_basename)
# if state == link:    place inside of the directory (use _original_basename.  Fallback to src?)
# if state == hard:    place inside of the directory (use _original_basename.  Fallback to src?)
# state should default to file, but since that creates many conflicts,
# default state to 'current' when it exists.
# make sure the target path is a directory when we're doing a recursive operation
# Fail if 'src' but no 'state' is specified
# could be many other things, but defaulting to file
# This should be moved into the common file utilities
# Change perms on the link
# The link target could be nonexistent
# Link is a directory so change perms on the directory's contents
# Change perms on the file pointed to by the link
# on Python3 "RecursionError" is raised which is derived from "RuntimeError"
# TODO once this function is moved into the common file utilities, this should probably raise more general exception
# States
# When mtime and atime are set to 'now', rely on utime(path, None) which does not require ownership of the file
# https://github.com/ansible/ansible/issues/50943
# It's not exact but we can't rely on os.stat(path).st_mtime after setting os.utime(path, None) as it may
# not be updated. Just use the current time for the diff values
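A minimal sketch of the rule above; the helper and the `'now'` sentinel are illustrative, not the module's exact interface:

```python
import os
import time


def update_times(path, mtime='now', atime='now'):
    """When both timestamps mean 'now', rely on os.utime(path, None),
    which does not require ownership of the file; otherwise pass
    explicit epoch seconds. Returns the value to use in diff output."""
    if mtime == 'now' and atime == 'now':
        os.utime(path, None)
        # st_mtime may not be refreshed right away; use the current
        # time for the diff values instead of re-reading it.
        return time.time()
    now = time.time()
    os.utime(path, (now if atime == 'now' else atime,
                    now if mtime == 'now' else mtime))
    return mtime
```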
# If both parameters are None 'preserve', nothing to do
# If both timestamps are already ok, nothing to do
# If we can't read the file, we're okay assuming it's text
# If the file did not already exist
# if we are in check mode and the file is absent
# we can set the changed status to True and return
# Create an empty file
# Update the attributes on the file
# this is the exit code passed to sys.exit, not a constant -- pylint: disable=using-constant-test
# We take this to mean that fail_json() was called from
# somewhere in basic.py
# If we just created the file we can safely remove it
# follow symlink and operate on original
# file is not absent and any other state is a conflict
# For followed symlinks, we need to operate on the target of the link
# Create directory and assign permissions to it
# Split the path so we can apply filesystem attributes recursively
# from the root (/) directory for absolute paths or the base path
# of a relative path.  We can then walk the appropriate directory
# path to apply attributes.
# Something like mkdir -p with mode applied to all of the newly created directories
# Remove leading slash if we're creating a relative path
# Possibly something else created the dir since the os.path.exists
# check above. As long as it's a dir, we don't need to error out.
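The mkdir -p behaviour described above can be sketched as follows, assuming a hypothetical helper; the real code additionally applies full ownership/attribute handling at each step:

```python
import os


def makedirs_with_mode(path, mode):
    """Split the path and walk it from the root (or the base of a
    relative path), applying `mode` only to directories this call
    newly creates."""
    curpath = ''
    # Remove leading slash if we're creating a relative path.
    for part in path.strip(os.sep).split(os.sep):
        if curpath:
            curpath = os.path.join(curpath, part)
        else:
            curpath = os.sep + part if path.startswith(os.sep) else part
        if not os.path.exists(curpath):
            try:
                os.mkdir(curpath)
            except FileExistsError:
                # Something else created the dir since the check above;
                # as long as it's a dir, we don't need to error out.
                pass
            else:
                os.chmod(curpath, mode)  # exact mode, unaffected by umask
```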
# We already know prev_state is not 'absent', therefore it exists in some form.
# previous state == directory
# source is both the source of a symlink or an informational passing of the src for a template module
# or copy module, even if this module never uses it, it is needed to key off some things
# use the current target of the link as the source
# If src is None that means we are expecting to update an existing link.
# refuse to replace a directory that has files in it
# try to replace atomically
# Now that we might have created the symlink, get the arguments.
# We need to do it now so we can properly follow the symlink if needed
# because load_file_common_arguments sets 'path' according
# the value of follow and the symlink existence.
# Whenever we create a link to a nonexistent target we know that the nonexistent target
# cannot have any permissions set on it.  Skip setting those and emit a warning (the user
# can set follow=False to remove the warning)
# src is the source of a hardlink.  We require it if we are creating a new hardlink.
# We require path in the argument_spec so we know it is present at this point.
# Even if the link already exists, if src was specified it needs to exist.
# The inode number will be compared to ensure the link has the correct target.
# Internal use only, for recursive ops
# Note: Should not be in file_common_args in future
# Note: Different default than file_common_args
# Internal use only, for internal checks in the action plugins
# short-circuit for diff_peek
# Copyright: (c) 2014, Brian Coca <brian.coca+dev@gmail.com>
# more than one result for same key, ensure we store in a list
# new key/value, just assign
# originally copied from AWX's scan_services module to bring this functionality
# into Core
# This function is not intended to run on Red Hat but it could happen
# if `chkconfig` is not installed. `service` on RHEL9 returns rc 4
# when /etc/init.d is missing, add the extra guard of checking /etc/init.d
# instead of solely relying on rc == 4
# Check for special cases where stdout does not fit pattern
# Try extra flags " -l --allservices" needed for SLES11
# Extra flag needed for RHEL5
# elif rc in (1,3):
# Skip lines which are not service names
# find cli tools if available
# TODO: review conditionals ... they should not be this 'exclusive'
# list units as systemd sees them
# systemd sometimes gives misleading status
# check all fields for bad states
# except description
# active/inactive
# now try unit files for complete picture and final 'status'
# there is one more column (VENDOR PRESET) from `systemctl list-unit-files` for systemd >= 245
# Skipping because we expected more data
# Skip header
# populate services with all possible info
# Override the state for services which are marked as 'failed'
# Based on the list of services that are enabled/failed, determine which are disabled
# and do the same for those that aren't running
# does not always get pid output
# freebsd is not compatible but will match other classes
# (c) 2015-2016, Jiri Tyr <jiri.tyr@gmail.com>
# async is a Python keyword
# make copy of params as we need to split them into yum repo only and file params
# TODO: Consolidate with the other methods calling set_*_if_different method, this is inefficient.
# recurse into subdirectory
# not checking because of daisy chain to file module
# used to handle 'dest is a directory' via template, a slight hack
# Make sure we always have a directory component for later processing
# Preserve is usually handled in the action plugin but mode + remote_src has to be done on the
# remote host
# Backwards compat only.  This will be None in FIPS mode
# Special handling for recursive copy - create intermediate dirs
# os.path.exists() can return false in some
# circumstances where the directory does not have
# the execute bit for the current user set, in
# which case the stat() call will raise an OSError
# allow for conversion from symlink.
# if we have a mode, make sure we set it on the temporary
# file source as some validations may require it
# at this point we should always have tmp file
# If neither have checksums, both src and dest are directories.
# byte indexing differs on Python 2 and 3,
# use indexbytes for compat
# chr(10) == '\n'
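The portability point above: indexing a bytes object yields an int on Python 3 (`data[-1] == 10`, i.e. chr(10) == '\n') but a one-character str on Python 2, whereas slicing yields bytes on both. A sketch using slicing, with a hypothetical helper name:

```python
def ends_with_newline(data):
    """True when the byte string ends with a newline; data[-1:] slices
    to bytes on both Python 2 and 3, so the comparison is portable."""
    return data[-1:] == b'\n'
```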
# cleanup just in case
# don't error on possible race conditions, but keep warning
# Options that are for the action plugin, but ignored by the module itself.
# We have them here so that the tests pass without ignores, which
# reduces the likelihood of further bugs added.
# Backwards compat.  This won't return data if FIPS mode is active
# handle file permissions (check mode aware)
# Mission complete
# Copyright: (c) 2012, Dag Wieers <dag@wieers.com>
# if we already moved the .git dir, roll it back
# https://github.com/ansible/ansible-modules-core/pull/907
# copied from ansible.utils.path
# or: git submodule [--quiet] update [--init] [-N|--no-fetch]
# [-f|--force] [--rebase] [--reference <repository>] [--merge]
# [--recursive] [--] [<path>...]
# run a bad submodule command to get valid params
# make sure we have full permission to the module_dir, which
# may not be the case if we're sudo'ing to a non-root user
# use existing git_ssh/ssh_command, fallback to 'ssh'
# write it
# set execute
# ensure we cleanup after ourselves
# initialise to existing ssh opts and/or append user provided
# hostkey acceptance
# avoid prompts
# deal with key file
# older than 2.3 does not know how to use git_ssh_command,
# so we force it into get_ssh var
# https://github.com/gitster/git/commit/09d60d785c68c8fa65094ecbe46fbc2a38d0fc1f
# for use in wrapper
# these versions don't support GIT_SSH_OPTS so have to write wrapper
# force use of git_ssh_opts via wrapper, git_ssh cannot handle arguments
# we construct full finalized command string here
# git_ssh_command can handle arguments to ssh
# only use depth if the remote object is branch or tag (i.e. fetchable)
# git before 1.7.5 doesn't have separate-git-dir argument, do fallback
# Ensure we have the object we are referring to during git diff !
# cloning the repo, just get the remote's HEAD version
# appears to be a sha1.  return as-is since it appears
# cannot check for a specific sha1 on remote
# Find the dereferenced tag if this is an annotated tag.
# Check if the .git is a file. If it is a file, it means that the repository is in external directory respective to the working copy (e.g. we are in a
# submodule structure).
# Use original destination directory with data from .git file.
# No repo path found
# ``.git`` file does not have a valid format for detached Git dir.
# Read .git/HEAD for the name of the branch.
# If we're in a detached HEAD state, look up the branch associated with
# the remote HEAD in .git/refs/remotes/<remote>/HEAD
# There was an issue getting remote URL, most likely
# command is not available in this version of Git.
# Return if remote URL isn't changing.
# Return False if remote_url is None to maintain previous behavior
# for Git versions prior to 1.7.5 that lack required functionality.
# try to find the minimal set of refs we need to fetch to get a
# successful checkout
# this workaround is only needed for older git versions
# 1.8.3 is broken, 1.9.x works
# ensure that remote branch is available as both local and remote ref
# if refspecs is empty, i.e. version is neither heads nor tags
# assume it is a version hash
# fall back to a full clone, otherwise we might not be able to checkout
# don't try to be minimalistic but do a full clone
# also do this if depth is given, but version is something that can't be fetched directly
# ensure all tags are fetched
# old git versions have a bug in --tags that prevents updating existing tags
# no submodules
# Check for new submodules
# Check that dest/path/.git exists
# Check for updates to existing modules
# Fetch updates
# Compare against submodule HEAD
# FIXME: determine this from .gitmodules
# Compare against the superproject's expectation
# get the valid submodule params
# skip submodule commands if .gitmodules is not present
# FIXME check for local_branch first, should have been fetched already
# git clone --depth implies --single-branch, which makes
# the checkout fail if the version changes
# fetch the remote branch, to be able to check it out next
# if signed with a subkey, this contains the primary key fingerprint
# one could fail_json here, but the version info is not that important,
# so let's try to fail only on actual git commands
# If git archive file exists, then compare it with new git archive file.
# if match, do nothing
# if does not match, then replace existing with temp archive file.
# filecmp is supposed to be more efficient than an md5sum checksum
# Cleanup before exiting
# Perform archive from local directory
# evaluate and set the umask before doing anything else
# Certain features such as depth require a file:/// protocol for path based urls
# so force a protocol here ...
# We screenscrape a huge amount of git commands so use C locale anytime we
# call run_command()
# iface changes so need it to make decisions
# GIT_SSH=<path> as an environment variable, might create sh wrapper script for older versions.
# if there is no git configuration, do a clone operation unless:
# * the user requested no clone (they just want info)
# * we're doing a check mode test
# In those cases we do an ls-remote
# there's no git config, so clone
# Just return having found a repo already in the dest path
# this does no checking that the repo is the actual repo
# requested.
# Git archive is not supported by all git servers, so
# we will first clone and perform git archive from local directory
# else do a pull
# failure should happen regardless of check mode
# if force and in non-check mode, do a reset
# exit if already at desired sha version
# FIXME: This diff should fail since the new remote_head is not fetched yet?!
# switch to version specified regardless of whether
# we got new revisions from the repository
# Deal with submodules
# Switch to version specified
# determine if we changed anything
# Copyright: (c) 2013, Hiroaki Nakamura <hnakamur@gmail.com>
# Must set the permanent hostname prior to current to avoid NetworkManager complaints
# about setting the hostname outside of NetworkManager
# Replace all these characters with a single dash
# Replace multiple dashes with a single dash
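The two normalization steps above in sketch form; the exact character class is an assumption (standard hostname-safe characters), and the helper name is hypothetical:

```python
import re


def sanitize_hostname(name):
    """Make a string safe for use as a hostname."""
    # Replace all these characters with a single dash
    name = re.sub(r'[^a-zA-Z0-9-]', '-', name)
    # Replace multiple dashes with a single dash
    return re.sub(r'-+', '-', name)
```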
# Get all the current host name values in the order of self.name_types
# Get the expected host name values based on the order in self.name_types
# Ensure all three names are updated
# type: t.Type[BaseStrategy]
# This is Linux and systemd is active
# cast to float may raise ValueError on non-SLES systems; we use float for a little more safety over int
# NOTE: socket.getfqdn() calls gethostbyaddr(socket.gethostname()), which can be
# slow to return if the name does not resolve correctly.
# Not a file, nor a URL, just pass it through
# non-deb822 args
# Make a copy, so we don't mutate module.params to avoid future issues
# popped non-deb822 args
# The distutils module is not shipped with SUNWPython on Solaris.
# It's in the SUNWPython-devel package which also contains development files
# that don't belong on production boxes.  Since our Solaris code doesn't
# depend on LooseVersion, do not import it on Solaris.
# Platform specific methods (must be replaced by subclass).
# Generic methods that should be used on all platforms.
# Most things don't need to be daemonized
# chkconfig localizes messages and we're screen scraping so make
# sure we use the C locale
# This is complex because daemonization is hard for people.
# What we do is daemonize a part of this module, the daemon runs the
# command, picks up the return code and output, and returns it to the
# main process.
# Set stdin/stdout/stderr to /dev/null
# Make us a daemon. Yes, that's all it takes.
# Start the command
# In either of the above cases, pass a list of byte strings to Popen
# Wait for all output, or until the main process is dead and its output is done.
# Return a JSON blob to parent
# Wait for data from daemon process and process it.
# Set ps flags
# Find ps binary
# If rc is 0, set running as appropriate
# so as to not confuse ./hacking/test-module.py
# Find out if state has changed
# Only do something if state will change
# Control service
# If nothing needs to change just say all is well
# Build a list containing the possibly modified file.
# Parse line removing whitespaces, quotes, etc.
# Since the proper entry already exists we can stop iterating.
# We found the key but the value is wrong, replace with new entry.
# Add line to the list.
# If we did not see any trace of our entry we need to add it.
# Create a temporary file next to the current rc.conf (so we stay on the same filesystem).
# This way the replacement operation is atomic.
# Write out the contents of the list into our temporary file.
# Close temporary file.
# Replace previous rc.conf.
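The atomic-replace pattern those steps describe, as a simplified sketch (the real code also handles quoting and stops early when the proper entry already exists; the helper is hypothetical):

```python
import os
import tempfile


def set_rcconf_value(path, key, value):
    """Rewrite `path` so it contains key="value", via a temporary file
    created next to it (same filesystem) and an atomic rename."""
    lines, seen = [], False
    with open(path) as f:
        for line in f:
            name, _, _ = line.partition('=')
            if name.strip() == key:
                # We found the key; replace the line with the new entry.
                line = '%s="%s"\n' % (key, value)
                seen = True
            lines.append(line)
    if not seen:
        # No trace of our entry was seen, so add it.
        lines.append('%s="%s"\n' % (key, value))
    # Temp file next to rc.conf keeps the rename on one filesystem,
    # which makes the replacement operation atomic.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    with os.fdopen(fd, 'w') as f:
        f.writelines(lines)
    os.rename(tmp, path)
```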
# tools must be installed
# Locate a tool to enable/disable a service
# service is managed by systemd
# service is managed by upstart
# set the upstart version based on the output of 'initctl version'
# we'll use the default of 0.0.0
# service is managed by OpenRC
# already have service start/stop tool too!
# service is managed by with SysV init scripts
# and uses update-rc.d
# and uses insserv
# and uses chkconfig
# If no service control tool selected yet, try to see if 'service' is available
# couldn't find anything yet
# Check status first as show will not fail if service does not exist
# systemd fields that are shell commands can be multi-line
# We take a value that begins with a "{" as the start of
# a shell command and a line that ends with "}" as the end of
# the command
# run-once services (for which a single successful exit indicates
# that they are running as designed) should not be restarted here.
# Thus, we are not checking d['SubState'].
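The `{` / `}` rule above, sketched as a parser for `systemctl show` key=value output; the helper name and the exact continuation handling are assumptions:

```python
def parse_systemctl_show(lines):
    """Parse systemctl show output where a value opening with '{' may
    span multiple lines until one ends with '}'."""
    parsed, key, value = {}, None, ''
    for line in lines:
        if key:
            # Inside a multi-line { ... } shell-command value.
            value += '\n' + line
            if line.rstrip().endswith('}'):
                parsed[key], key, value = value, None, ''
            continue
        k, _, v = line.partition('=')
        if v.lstrip().startswith('{') and not v.rstrip().endswith('}'):
            key, value = k, v  # start of a multi-line value
        else:
            parsed[k] = v
    return parsed
```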
# if we have decided the service is managed by upstart, we check for some additional output...
# check the job status by upstart response
# Prefer a non-zero return code. For reference, see:
# http://refspecs.linuxbase.org/LSB_4.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
# if the job status is still not known check it by status output keywords
# Only check keywords if there's only one line of output (some init
# scripts will output verbosely in case of error and those can emit
# keywords that are picked up as false positives)
# first transform the status output that could irritate keyword matching
# if the job status is still not known and we got a zero for the
# return code, assume here that the service is running
# if the job status is still not known check it by special conditions
# iptables status command output is lame
# TODO: lookup if we can use a return code for this instead?
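A sketch of the heuristics above; the LSB return codes and the keyword lists here are illustrative, not the module's exact sets:

```python
def guess_running(rc, out):
    """Guess whether an init script's status output means 'running'."""
    # Prefer a non-zero return code: LSB reserves these for dead or
    # not-running states.
    if rc in (1, 2, 3, 4):
        return False
    lines = out.strip().lower().splitlines()
    # Only check keywords if there's only one line of output; verbose
    # error output can emit keywords that are false positives.
    if len(lines) == 1:
        line = lines[0]
        if 'not running' in line or 'stopped' in line or 'dead' in line:
            return False
        if 'running' in line or 'started' in line:
            return True
    # Status still unknown and rc is zero: assume the service is running.
    return rc == 0
```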
# Upstart's initctl
# Check to see if files contain the manual line in .conf and fail if True
# Remove manual stanza if present and service enabled
# Add manual stanza if not present and service disabled
# service already in desired state
# Add file with manual stanza if service disabled
# The initctl method of enabling and disabling services is much
# different than for the other service methods.  So actually
# committing the change is done in this conditional and then we
# skip the boilerplate at the bottom of the method
# SysV's chkconfig
# TODO: look back on why this is here
# state = out.split()[-1]
# Check if we're already in the correct state
# Systemd's systemctl
# self.changed should already be true
# OpenRC's rc-update
# service already enabled for the runlevel
# service already disabled for the runlevel
# service already disabled altogether
# update-rc.d style
# insserv (Debian <=7, SLES, others)
# If we've gotten to the end, the service needs to be updated
# we change argument order depending on real binary used:
# rc-update and systemctl need the argument order reversed
# Decide what command to run
# initctl commands take the form <cmd> <action> <name>
# SysV and OpenRC take the form <cmd> <name> <action>
# systemd commands take the form <cmd> <action> <name>
# upstart
# In OpenRC, if a service crashed, we need to reset its status to
# stopped with the zap command, before we can start it back.
# upstart or systemd or OpenRC
# SysV
# All services in OpenRC support restart.
# In other systems, not all services support restart. Do it the hard way.
# upstart or systemd
# merge return information
# TODO: add a warning to the output with the failure
# In rare cases, i.e. sendmail, rcvar can return several key=value pairs
# Usually there is just one, however.  In other rare cases, i.e. uwsgi,
# rcvar can return extra uncommented data that is not at all related to
# the rcvar.  We will just take the first key=value pair we come across
# and hope for the best.
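Taking the first key=value pair from `rcvar` output, as described above, might look like this (a sketch; the function name is an assumption):

```python
def first_rcvar(output):
    """Return the first key=value pair from rcvar(8) output, skipping
    comments and unrelated noise. Per the comments above, sendmail can
    emit several pairs and uwsgi can emit extra uncommented data, so we
    just take the first pair we come across."""
    for line in output.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank line or comment
        if "=" in line:
            key, _, value = line.partition("=")
            return key, value.strip('"')
    return None
```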
# FreeBSD >= 9.2
# it can happen that rcvar is not set (e.g. on a system coming from the ports collection)
# so we will fall back on the default
# sysrc does not exit with code 1 on permission error => validate successful change using service(8)
# rc = 0 indicates enabled service, rc = 1 indicates disabled service
# Legacy (FreeBSD < 9.2)
# Overkill?
# should be explicit False at this point
# better: $rc_directories - how to get in here? Run: sh -c '. /etc/rc.conf ; echo $rc_directories'
# Support for synchronous restart/refresh is only supported on
# Oracle Solaris >= 11.2
# Only 'online' is considered properly running. Everything else is off
# or has some sort of problem.
# status is one of: online, offline, degraded, disabled, maintenance, uninitialized
# see man svcs(1)
# Get current service enablement status
# look for enabled line, which could be one of:
# Mark service as started or stopped (this will have the side effect of
# actually stopping or starting the service)
# if starting or reloading, clear maintenance states
# Only 'active' is considered properly running. Everything else is off
# Check subsystem status
# If check for subsystem is not ok, check if service name is a
# group subsystem
# Check all subsystem status, if one subsystem is not active
# the group is considered not active.
# status is one of: active, inoperative
# Check if service name is a subsystem of a group subsystem
# Define if service name parameter:
# -s subsystem or -g group subsystem
# Main control flow
# Find service management tools
# Enable/disable service startup at boot if requested
# FIXME: ideally this should detect if we need to toggle the enablement state, though
# it's unlikely the changed handler would need to fire in this case so it's a minor thing.
# Not changing the running state, so bail out now.
# Collect service status
# Calculate if request will change service state
# Modify service state if necessary
# upstart got confused, one such possibility is MySQL on Ubuntu 12.04
# where status may report it has no start/stop links and we could
# not get accurate status
# as we may have just bounced the service the service command may not
# report accurate state at this moment so just show what we ran
# (c) 2015, Matt Martz <matt@sivel.net>
# Prefer pexpect.run from pexpect>=4
# Use pexpect._run in pexpect>=3.3,<4
# pexpect.run doesn't support `echo`
# pexpect.runu doesn't support encoding=None
# This should catch all insufficient versions of pexpect
# We deem them insufficient for their inability to specify that
# responses should not be echoed via the run/runu functions, which
# could potentially leak sensitive information
# Copyright: (c) 2012, Jeroen Hoekx <jeroen@hoekx.be>
# just because we can import it on Linux doesn't mean we will use it
# Process is Zombie or other error state
# Subclass: Linux
# first wait for the stop condition
# Conditions not yet met, wait and try again
# wait for start condition
# If anything except file not present, throw an error
# file doesn't exist yet, so continue
# File exists.  Are there additional things to check?
# nope, succeed!
# cannot mmap this file, try normal read
# Failed to connect within connect_timeout; wait and try again
# Connected -- are there additional conditions?
# No new data.  Probably means our timeout
# expired
# Server shutdown
# Shutdown the client socket
# else, the server broke the connection on its end, assume it's not ready
# Found our string, success!
# Connection established, success!
# while-else
# Timeout expired
# wait until all active connections are gone
# (c) 2015, Ansible Project
# back to ansible
# Platform dependent flags:
# Some Linux
# Some Berkeley based
# RISCOS
# main stat data
# process base results
# resolved permissions
# symlink info
# user data
# group data
# checksums
# try to get mime data if requested
# try to get attr data
# Copyright: (c) 2015, Linus Unnebäck <linus@folkdatorn.se>
# Copyright: (c) 2017, Sébastien DA ROCHA <sebastien@da-rocha.net>
# Check if wait option is supported
# Flush the table
# Delete the chain if there is no rule in the arguments
# Create the chain if there are no rule arguments
# Check if target is up to date
# Target is already up to date
# Modify if not check_mode
# Copyright: (c) 2012, Dane Summers <dsummers@pinedesk.biz>
# Copyright: (c) 2013, Mike Grozak  <mike.grozak@gmail.com>
# Copyright: (c) 2013, Patrick Callahan <pmc@patrickcallahan.com>
# Copyright: (c) 2015, Evan Kaufman <evan@digitalflophouse.com>
# Copyright: (c) 2015, Luca Berruti <nadirio@gmail.com>
# Read in the crontab from the system
# read the cronfile
# cron file does not exist
# FIXME: using safely quoted shell for now, but this really should be two non-shell calls instead.
# 1 can mean that there are no jobs.
# return if making a backup
# Add the entire crontab back to the user crontab
# FIXME: quoting shell args for now but really this should be two non-shell calls.
# set SELinux permissions
# Add the comment
# Add the job
# attempt to find job by 'Ansible:' header comment
# failing that, attempt to find job by exact match
# if no leading ansible header, insert one
# if a leading blank ansible header AND job has a name, update header
# normalize any leading/trailing newlines (ansible/ansible-modules-core#3791)
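The two-stage job lookup described above (header comment first, then exact match) can be sketched as follows; names are illustrative, not the cron module's actual code:

```python
def find_job(lines, name, job):
    """Locate a cron job in crontab lines: first by its
    '# Ansible: <name>' header comment, then by exact command match."""
    comment = "# Ansible: " + name
    # attempt to find job by 'Ansible:' header comment
    for i, line in enumerate(lines):
        if line == comment and i + 1 < len(lines):
            return [line, lines[i + 1]]
    # failing that, attempt to find job by exact match
    for line in lines:
        if line == job:
            return [line]  # no leading ansible header
    return []
```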
# TODO add some more error testing
# The following example playbooks:
# - cron: name="check dirs" hour="5,2" job="ls -alh > /dev/null"
# - name: do the job
# - name: no job
# - name: sets env
# Would produce:
# PATH=/bin:/usr/bin
# # Ansible: check dirs
# * * 5,2 * * ls -alh > /dev/null
# # Ansible: do the job
# * * 5,2 * * /some/dir/job.sh
# Ensure all files generated are only writable by the owning user.  Primarily relevant for the cron_file option.
# --- user input validation ---
# cannot support special_time on solaris
# if requested make a backup before making a change
# no changes to env/job, but existing crontab needs a terminating newline
# retain the backup only if crontab or cron file have changed
# --- should never get here
# Copyright: (c) 2016, Brian Coca <bcoca@ansible.com>
# The output of 'systemctl show' can contain values that span multiple lines. At first glance it
# appears that such values are always surrounded by {}, so the previous version of this code
# assumed that any value starting with { was a multi-line value; it would then consume lines
# until it saw a line that ended with }. However, it is possible to have a single-line value
# that starts with { but does not end with } (this could happen in the value for Description=,
# for example), and the previous version of this code would then consume all remaining lines as
# part of that value. Cryptically, this would lead to Ansible reporting that the service file
# couldn't be found.
# To avoid this issue, the following code only accepts multi-line values for keys whose names
# start with Exec (e.g., ExecStart=), since these are the only keys whose values are known to
# span multiple lines.
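The parsing rule explained above (multi-line values accepted only for Exec* keys) can be sketched like this; a simplified illustration, not the systemd module's exact code:

```python
def parse_systemctl_show(lines):
    """Parse `systemctl show` output into a dict, consuming continuation
    lines only for keys starting with 'Exec' whose value opens with '{'
    but does not close with '}' on the same line."""
    parsed = {}
    multival = []
    k = None
    for line in lines:
        if k is None:
            if "=" not in line:
                continue  # not a key=value line
            k, _, v = line.partition("=")
            if (k.startswith("Exec") and v.lstrip().startswith("{")
                    and not v.rstrip().endswith("}")):
                multival.append(v)  # only Exec* keys may span lines
                continue
            parsed[k] = v.strip()
            k = None
        else:
            multival.append(line)
            if line.rstrip().endswith("}"):
                parsed[k] = "\n".join(multival).strip()
                k = None
                multival = []
    return parsed
```

A Description= value that starts with `{` but never closes it stays a single-line value, which is exactly the failure mode the comments above describe.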
# initialize
# Set CLI options depending on params
# if scope is 'system' or None, we can ignore as there is no extra switch.
# The other choices match the corresponding switch
# Run daemon-reload first, if requested
# Run daemon-reexec
# check service data, cannot error out on rc as it changes across versions, assume not found
# load return of systemctl show into dictionary for easy access and return
# Check for loading error
# Workaround for https://github.com/ansible/ansible/issues/71528
# list taken from man systemctl(1) for systemd 244
# fallback list-unit-files as show does not work on some systems (chroot)
# not used as primary as it skips some services (like those using init.d) and requires .service/etc notation
# Check for systemctl command
# Does service exist?
# mask/unmask the service, if requested, can operate on services before they are installed
# state is not masked unless systemd affirms otherwise
# some versions of systemd CAN mask/unmask non-existing services; we only fail on missing if they don't
# here if service was not missing, but failed for other reasons
# do we need to enable the service?
# check systemctl result or if it is a init script
# https://www.freedesktop.org/software/systemd/man/systemctl.html#is-enabled%20UNIT%E2%80%A6
# transiently enabled but we're trying to set a permanent enabled
# We've been asked to enable this unit so do so despite possible reasons
# that systemctl may have for thinking it's enabled already.
# Let systemd handle the alias as we can't be sure what's needed.
# if not a user or global user service, and both an init script and a unit file exist, stdout should have enabled/disabled; otherwise use rc entries
# default to current state
# Change enable/disable if needed
# set service state if requested
# default to desired state
# What is current service state?
# remove 'ed' from restarted/reloaded
# check for chroot
# this should not happen?
# Copyright: (c) 2017, Dag Wieers (@dagwieers) <dag@wieers.com>
# Copyright: (c) 2014, 2015 YAEGASHI Takeshi <yaegashi@debian.org>
# insertbefore=BOF
# insertafter=EOF
# Ensure there is a line separator before the block of lines to be inserted
# Before the block: check if we need to prepend a blank line
# If yes, we need to add the blank line if we are not at the beginning of the file
# and the previous line is not a blank line
# In both cases, we need to shift by one on the right the inserting position of the block
# Insert the block
# After the block: check if we need to append a blank line
# If yes, we need to add the blank line if we are not at the end of the file
# and the line right after is not a blank line
# Example text matched by the regexp:
# switch to ensure we are pointing at correct repo.
# it also updates!
# The --quiet option will return only modified files.
# Match only revisioned files, i.e. ignore status '?'.
# Has local mods if more than 0 modified revisioned files.
# We screenscrape a huge amount of svn commands so use the C locale anytime we run them
# Order matters. Need to get local mods before switch to avoid false
# positives. Need to switch before revert to ensure we are reverting to
# correct repo.
# Copyright: (c) 2014, Ruggero Marchei <ruggero.marchei@daemonzone.net>
# Copyright: (c) 2015, Brian Coca <bcoca@ansible.com>
# Copyright: (c) 2016-2017, Konstantin Shalygin <k0ste@k0ste.ru>
# Set the default match pattern to either a match-all glob or
# regex depending on use_regex being set.  This makes sure if you
# set excludes: without a pattern, pfilter gets something it can match against
# convert age to seconds:
# convert size to bytes:
# Setting `topdown=True` to explicitly guarantee matches are made from the shallowest directory first
# Empty the list used by os.walk to avoid traversing deeper unnecessarily
# Breaks out of directory files loop only
# Copyright: (c) 2012, Ansible Project
# (c) 2016, Toshio Kuratomi <tkuratomi@ansible.com>
# Copyright 2023 Ansible Project
# Through dnf5-5.2.12 all exceptions raised through swig became RuntimeError
# Disable checking whether SPEC is a binary -> `/usr/(s)bin/<SPEC>`,
# this prevents scenarios like the following:
# due to settings.set_with_filenames(True) being the default.
# If users wish to target the `sssd` binary they can, by specifying the full path `name=/usr/sbin/sssd` explicitly
# Disable checking whether SPEC is provided by an installed package.
# Consider following real scenario from the rpmfusion repo:
# We disable provides only for this `is_installed` check, for actual installation we leave the default
# setting to mirror the dnf cmdline behavior.
# dnf5 < 5.2.0.0
# FIXME https://github.com/rpm-software-management/dnf5/issues/1104
# dnf module compat
# https://github.com/rpm-software-management/dnf5/issues/1460
# plugins functionality requires python3-libdnf5 5.2.0.0+
# silently ignore here, the module will fail later when
# base.enable_disable_plugins is attempted to be used if
# user specifies enable_plugin/disable_plugin
# probe well-known system Python locations for accessible bindings
# raises AttributeError only on getter if not available
# pylint: disable=pointless-statement
# dnf5 < 5.2.7.0
# needed for installroot
# FIXME hardcoding the filename does not seem right, should libdnf5 expose the default file name?
# ignore packages that are of a different type, for backwards compat
# FIXME use `is_glob_pattern` function when available:
# https://github.com/rpm-software-management/dnf5/issues/1563
# NOTE dnf module compat
# This is a virtual module that is entirely implemented server side
# pipe for communication between forked process and parent
# daemonizing code: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66012
# exit first parent
# decouple from parent environment (does not chdir / to keep the directory context the same as for non async tasks)
# do second fork
# TODO: print 'async_wrapper_pid': pid, but careful as it will pollute expected output.
# NB: this function copied from module_utils/json_utils.py. Ensure any changes are propagated there.
# FUTURE: AnsibleModule-ify this module so it's Ansiballz-compatible and can use the module_utils copy of this function.
# Filter initial junk
# Filter trailing junk
# Trailing junk is uncommon and can point to things the user might
# want to change.  So print a warning if we find any
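The junk filtering described above can be sketched as follows; a simplified stand-in for the shared helper, with assumed names:

```python
def filter_non_json_lines(data):
    """Strip leading and trailing non-JSON lines around module output,
    returning the JSON text plus any trailing-junk lines so the caller
    can warn about them."""
    lines = data.splitlines()
    # Filter initial junk: find the first line opening the JSON object
    for start, line in enumerate(lines):
        if line.lstrip().startswith("{"):
            break
    else:
        raise ValueError("No start of json char found")
    # Filter trailing junk: find the last line closing the JSON object
    for end in range(len(lines) - 1, start - 1, -1):
        if lines[end].rstrip().endswith("}"):
            break
    else:
        raise ValueError("No end of json char found")
    trailing_junk = [l for l in lines[end + 1:] if l.strip()]
    return "\n".join(lines[start:end + 1]), trailing_junk
```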
# DTFIX-FUTURE: needs rework for serialization profiles
# signal grandchild process started and isolated from being terminated
# by the connection being closed sending a signal to the job group
# call the module interpreter directly (for non-binary modules)
# merge JSON junk warnings with any existing module warnings
# this relies on the controller's fallback conversion of string warnings to WarningMessageDetail instances, and assumes
# that the module result and warning collection are basic JSON datatypes (eg, no tags or other custom collections).
# temporary notice only
# consider underscore as no argsfile so we can support passing of additional positional parameters
# setup job output directory
# TODO: Add checks for permissions on path.
# NB: task executor compat will coerce to the correct dataclass type
# immediately exit this process, leaving an orphaned process
# running which immediately forks a supervisory timing process
# Notify the overlord that the async process started
# we need to not return immediately such that the launched command has an attempt
# to initialize PRIOR to ansible trying to clean up the launch directory (and argsfile)
# this probably could be done with some IPC later.  Modules should always read
# the argsfile at the very first start of their execution anyway
# close off notifier handle in grandparent, probably unnecessary as
# this process doesn't hang around long enough
# allow waiting up to 2.5 seconds in total should be long enough for worst
# loaded environment in practice.
# The actual wrapper process
# close off the receiving end of the pipe from child process
# Daemonize, so we keep on running
# we are now daemonized, create a supervisory process
# close off inherited pipe handles
# the parent stops the process after the time limit
# set the child process group id to kill all children
# ensure we leave response in poll location
# actually kill it
# the child process runs the actual module
# Copyright: (c) 2013, Dag Wieers (@dagwieers) <dag@wieers.com>
# Copyright: (c) 2012, Flowroute LLC
# Written by Matthew Williams <matthew@flowroute.com>
# Based on yum module written by Seth Vidal <skvidal at fedoraproject.org>
# added to stave off future warnings about apt api
# "Del python3-q 2.4-1 [24 kB]"
# we need the module for later use (eg. fail_json)
# if policy_rc_d is null then we don't need to modify policy-rc.d
# if the /usr/sbin/policy-rc.d already exists
# we will back it up during package installation
# then restore it
# if the /usr/sbin/policy-rc.d already exists we back it up
# we write /usr/sbin/policy-rc.d so it always exits with code policy_rc_d
# if /usr/sbin/policy-rc.d already exists before the call to __enter__
# we restore it (from the backup done in __enter__)
# if there wasn't a /usr/sbin/policy-rc.d file before the call to __enter__
# we just remove the file
# 990 is the priority used in `apt-get -t`
# Installing a specific version from command line overrides all pinning
# We don't mimic this exactly, but instead set a priority which is higher than all APT built-in pin priorities.
# Even though we put in a pin policy, it can be ignored if there is no
# possible candidate.
# get the package from the cache, as well as the
# low-level apt_pkg.Package object which contains
# state fields not directly accessible from the
# higher-level apt.package.Package object.
# the low-level package object
# When this is a virtual package satisfied by only
# one installed package, return the status of the target
# package to avoid requesting re-install
# Otherwise return nothing so apt will sort out
# what package to satisfy this with
# python-apt version too old to detect virtual packages
# mark as not installed and let apt-get install deal with it
# older python-apt cannot be used to determine non-purged
# python-apt 0.7.X has very weak low-level object
# might not be necessary as python-apt post-0.7.X should have current_state property
# assume older version of python-apt is installed
# check if the version is matched as well
# Note: apt-get does implicit regex matching when an exact package name
# match is not found.  Something like this:
# matches = [pkg.name for pkg in cache if re.match(pkgspec, pkg.name)]
# (Should also deal with the ':' for multiarch like the fnmatch code below)
# We have decided not to do similar implicit regex matching but might take
# a PR to add some sort of explicit regex matching:
# https://github.com/ansible/ansible-modules-core/issues/1258
# note that none of these chars is allowed in a (debian) pkgname
# handle multiarch pkgnames, the idea is that "apt*" should
# only select native packages. But "apt*:i386" should still work
# Filter the multiarch packages from the cache only once
# pylint: disable=used-before-assignment
# noqa: F841
# Create a cache of pkg_names including multiarch only once
# No wildcards in name
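The multiarch wildcard rule above ("apt*" selects only native packages, "apt*:i386" still works) can be sketched with fnmatch; an illustration, not the apt module's exact code:

```python
import fnmatch

def expand_pkgspec(pkgspec, pkg_names):
    """Expand a wildcard package spec against known package names.
    A bare pattern matches only native names (no ':'), while a pattern
    containing ':' matches multiarch names."""
    if not any(ch in pkgspec for ch in "*?[]"):
        return [pkgspec]  # no wildcards in name
    if ":" in pkgspec:
        candidates = [n for n in pkg_names if ":" in n]
    else:
        # filter the multiarch packages out so "apt*" stays native-only
        candidates = [n for n in pkg_names if ":" not in n]
    return fnmatch.filter(candidates, pkgspec)
```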
# check for start marker from aptitude
# check for start marker from apt-get
# show everything
# check for end marker line from both apt-get and aptitude
# https://github.com/ansible/ansible/issues/40531
# Let apt decide what to install
# only_upgrade upgrades packages that are already installed
# since this package is not installed, skip it
# This happens when the package is installed, a newer version is
# available, and the version is a wildcard that matches both
# This is legacy behavior, and isn't documented (in fact it does
# things the documentation says it shouldn't). It should not be
# relied upon.
# install_recommends is None uses the OS default
# Does not need to down-/upgrade, move on to next package
# Must not be installed, continue with installation
# Check if package is installable
# add any missing deps to the list of deps we need
# to install so they're all done in one shot
# Install 'Recommends' of this deb file
# and add this deb to the list of packages to install
# install the deps through apt
# apt-get dist-upgrade
# aptitude full-upgrade
# aptitude safe-upgrade # mode=yes # default
# https://github.com/ansible/ansible-modules-core/issues/2951
# update cache until files are fixed or retries exceeded
# We screenscrape apt-get and aptitude output for information so we need
# to make sure we use the best parsable locale when running commands
# also set apt specific vars for desired behaviour
# APT related constants
# 2) finding none, try to install a matching python3-apt package for the current interpreter version;
# We skip the cache update when auto-installing the dependency if the
# user explicitly declared it with update_cache=no.
# try to install the apt Python binding
# If there is nothing else to do, exit. This will set state as changed based on whether the cache was updated
# max times we'll retry
# keep running on lock issues unless timeout or resolution is hit.
# Get the cache object, this has 3 retries built in
# reopen cache w/ modified config
# Cache valid time defaults to 0, which will update the cache if needed
# Retry to update the cache with exponential backoff
# Use exponential backoff plus a little bit of randomness
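The retry strategy above (exponential backoff plus a little randomness) can be sketched like this; names and the retry count are illustrative assumptions:

```python
import random
import time

def update_cache_with_backoff(update, retries=4, base_delay=2):
    """Call `update` until it succeeds or retries are exhausted, sleeping
    2, 4, 8, ... seconds between attempts with up to one extra second of
    jitter to avoid synchronized retry storms."""
    for attempt in range(retries):
        if update():
            return True
        if attempt < retries - 1:
            time.sleep(base_delay ** (attempt + 1) + random.random())
    return False
```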
# Store if the cache has been updated
# Store when the update time was last
# got here w/o exception and/or exit???
# (c) 2017, Ansible Project
# most of it copied from AWX's scan_packages module
# Store the cache to avoid running pkg_cache() for each item in the comprehension, which is very slow
# parse values of details that might extend over several lines
# append value to previous detail
# get supported pkg managers
# add aliases
# start work
# choices are not set for 'manager' as they are computed dynamically and validated below instead of in argspec
# keep order from user, we do dedupe below
# substitute aliases for aliased
# dedupe as per above
# only consider 'found' if it results in something
# Set the facts, this will override the facts in ansible_facts that might exist from previous runs
# when using operating system level or distribution package managers
# Copyright: (c) 2016, Ansible RedHat, Inc
# Copyright: (c) 2012, Jayson Vantuyl <jayson@aggressive.ly>
# Make sure the key_id is valid hexadecimal
# apt key format
# gpg format
# invalid line, skip
# note: validate_certs and other args are pulled from module directly
# assume we only want first key?
# check for proxy
# add recv argument as last one
# Out of retries
# internal vars
# ensure we have requirements met
# initialize result dict
# invalid key should fail well before this point, but JIC ...
# get existing keys to verify if we need to change
# this also takes care of url if key_id was not provided
# we hit this branch only if key_id is supplied with url
# verify it got added
# we use the "short" id: key_id[-8:], short_format=True
# it's a workaround for https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1481871
# Copyright: (c) 2012 Dag Wieers <dag@wieers.com>
# Copyright: (c) 2016, Ansible, a Red Hat company
# passed in from the async_status action plugin
# setup logging directory
# NOT in cleanup mode, assume regular status mode
# no remote kill mode currently exists, but probably should
# consider log_path + ".pid" file and also unlink that above
# file not written yet?  That means it is running
# just write the module output directly to stdout and exit; bypass other processing done by exit_json since it's already been done
# pylint: disable=ansible-bad-function
# Copyright: (c) 2013, Dylan Martin <dmartin@seattlecentral.edu>
# Copyright: (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
# Copyright: (c) 2016, Dag Wieers <dag@wieers.com>
# Python < 3.9
# String from tar that shows the tar contents are different from the
# filesystem
# NEWER_DIFF_RE = re.compile(r' is newer or same age.$')
# Python >= 3.12
# The unzip utility does not support setting the stST bits
# Python 2.4 can't handle zipfiles with > 64K files.  Try using
# /usr/bin/unzip instead
# Assume epoch date
# BSD unzip doesn't support zipinfo listings with timestamp.
# Get some information related to user/group ownership
# Get current user and group information
# Get future user ownership
# Get future group ownership
# no need to check isdigit() explicitly here, if we fail to
# parse, the ValueError will be caught.
# Too few fields... probably a piece of the header or footer
# Check first and seventh field in order to skip header/footer
# 7 or 8 are FAT, 10 is normal unix perms
# Possible entries:
# Skip excluded files
# Itemized change requires L for symlink
# Some files may be storing FAT permissions, not Unix permissions
# For FAT permissions, we will use a base permissions set of 777 if the item is a directory or has the execute bit set.  Otherwise, 666.
# BSD always applies the Umask, even to Unix permissions.
# For Unix style permissions on Linux or Mac, we want to use them directly.
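The FAT-versus-Unix permission rule above can be sketched as a pure function; a simplified illustration (special setuid/setgid/sticky bits and the BSD umask-on-Unix-perms case are omitted):

```python
def derive_mode(ftype, permstr, umask):
    """Derive a file mode from a zipinfo permission string. FAT entries
    (7- or 8-char strings) get a base of 0o777 when the item is a
    directory or has an execute bit, else 0o666, with the umask applied.
    A 10-char string is normal Unix perms, used directly."""
    if len(permstr) in (7, 8):  # FAT permissions
        base = 0o777 if ftype == "d" or "x" in permstr else 0o666
        return base & ~umask
    # 10 chars: normal unix perms, e.g. '-rw-r--r--'
    mode = 0
    for i, flag in enumerate(permstr[1:10]):
        if flag != "-":
            mode |= 1 << (8 - i)
    return mode
```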
# Test string conformity
# DEBUG
# Compare file types
# Note: this timestamp calculation has a rounding error
# somewhere... unzip and this timestamp can be one second off
# When that happens, we report a change and re-unzip the file
# Compare file timestamps
# Add to excluded files, ignore other changes
# Compare file sizes
# Compare file checksums
# Compare file permissions
# Do not handle permissions of symlinks
# Use the new mode provided with the action, if there is one
# Only special files require no umask-handling
# Compare file user ownership
# If we are not root and requested owner is not our user, fail
# Compare file group ownership
# Register changed files and finalize diff output
# NOTE: Including (changed) files as arguments is problematic (limits on command line/arguments)
# if self.includes:
# NOTE: Command unzip has this strange behaviour where it expects quoted filenames to also be escaped
# cmd.extend(map(shell_escape, self.includes))
# Compensate for locale-related problems in gtar output (octal unicode representation) #11348
# filename = filename.decode('string_escape')
# We don't allow absolute filenames.  If the user wants to unarchive rooted in "/"
# they need to use "dest: '/'".  This follows the defaults for gtar, pax, etc.
# Allowing absolute filenames here also causes bugs: https://github.com/ansible/ansible/issues/21397
# Check whether the differences are in something that we're
# setting anyway
# What is different
# When unarchiving as a user, or when owner/group/mode is supplied --diff is insufficient
# Only way to be sure is to check request with what is on disk (as we do for zip)
# Leave this up to set_fs_attributes_if_different() instead of inducing a (false) change
# FIXME: Remove the bogus lines from error-output as well !
# Ignore bogus errors on empty filenames (when using --split-component)
# Prefer gtar (GNU tar) as it supports the compression options -z, -j and -J
# Fallback to tar
# On errors, or when no files are found in the archive, assume that we
# weren't able to properly unarchive it
# Class to handle tar files that aren't compressed
# argument to tar
# Class to handle bzip2 compressed tar files
# Class to handle xz compressed tar files
# Class to handle zstd compressed tar files
# GNU Tar supports the --use-compress-program option to
# specify which executable to use for
# compression/decompression.
# Note: some flavors of BSD tar support --zstd (e.g., FreeBSD
# 12.2), but the TgzArchive class only supports GNU Tar.
# NOTE: adds 'l', which is default on most linux but not all implementations
# Ensure unzip -Z is available before we use it in is_unarchive
# try handlers in order and return the one that works or bail if none work
# We have them here so that the sanity tests pass without ignores, which
# check-mode only works for zip files, we cover that later
# did tar file arrive?
# If remote_src=true, and src= contains ://, try and download the file to a temp directory.
# ensure src is an absolute path before picking handlers
# skip working with 0 size archives
# is dest OK to receive tar file?
# do we need to do unpack?
# res_args['check_results'] = check_results
# do the unpack
# Get diff if required
# Run only if we found differences (idempotence) or diff was missing
# do we need to change perms?
# make sure top folders have the right permissions
# https://github.com/ansible/ansible/issues/35426
# Ansible module to import third party repo keys to your rpm db
# Copyright: (c) 2013, Héctor Acosta <hector.acosta@gazzang.com>
# If the key is a URL, we need to check if it's present to be idempotent;
# to do that, we need to check the keyid, which we can get from the armor.
# As mentioned here,
# https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob_plain;f=doc/DETAILS
# The description of the `fpr` field says
# "fpr :: Fingerprint (fingerprint is in field 10)"
# No key is installed on system
# Change the argument_spec in 2.14 and remove this warning
# required_by={'append': ['groups']}
# Darwin needs cleartext password, so skip validation
# Allow setting certain passwords in order to disable the account
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
# contains character outside the crypto constraint
# md5
# sha256
# sha512
# yescrypt
# cast all args to strings ansible-modules-core/issues/4397
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
# luseradd uses -n instead of -N
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
# If the specified path to the user home contains parent directories that
# do not exist and create_home is True first create the parent directory
# since useradd cannot create it.
# Convert seconds since Epoch to days since Epoch
# check if this version of usermod can append groups
# for some reason, usermod --help cannot be used by non root
# on RH/Fedora, due to lack of execute bit for others
# check if --append exists
# get a list of all groups for the user, including the primary
# Convert days since Epoch to seconds since Epoch as struct_time
# Current expires is negative or we compare year, month, and day only
# Lock if no password or unlocked, unlock only if locked
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
# Remove options that are mutually exclusive with -p
# Lock the account and set the hash in a single command
# skip if no usermod changes to be made
# Try group as a gid first
# Exclude the user's primary group by default
# The pwd module does not distinguish between local and directory accounts.
# Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
# target state already reached
# The key was created between us checking for existence and now
# If the keys were successfully created, we should be able
# to tweak ownership.
# by default we use the create_user_useradd method
# by default we use the remove_user_userdel method
# by default we use the modify_user_usermod method
# get umask from /etc/login.defs and set correct home mode
# fallback if neither HOME_MODE nor UMASK are set;
# follow behaviour of useradd initializing UMASK = 022
# HOME_MODE has higher precedence than UMASK
# higher precedence
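The precedence rule above (HOME_MODE over UMASK, with useradd's 022 fallback) might be implemented like this; a sketch with an assumed function name, parsing the text of /etc/login.defs:

```python
def home_mode_from_login_defs(text):
    """Compute the mode for a newly created home directory from
    /etc/login.defs content: HOME_MODE wins if set, otherwise derive the
    mode from UMASK, otherwise fall back to useradd's default UMASK of
    022."""
    values = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        parts = line.split()
        if len(parts) == 2:
            values[parts[0]] = parts[1]
    if "HOME_MODE" in values:  # higher precedence than UMASK
        return int(values["HOME_MODE"], 8)
    umask = int(values.get("UMASK", "022"), 8)
    return 0o777 & ~umask
```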
# system cannot be handled currently - should we error if its requested?
# create the user
# we have to set the password in a second command
# we have to lock/unlock the password in a distinct command
# find current login class
# act only if login_class change
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
# modify the user if cmd will do anything
# skip if no changes to be made
# Read password aging defaults
# The line contains a hash / comment
# we have to set the password by editing the /etc/shadow file
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
# make the user hidden if option is set or defer to system option
# add hidden to processing if set
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
# sys.stderr.write('*** |%s| %s -> %s\n' %  (property, out, lines))
# some documentation on how passwords are stored on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
# We need to pass a string to dscl
# http://support.apple.com/kb/HT5017?viewlocale=en_US
# returned value is
# Make the Gecos (alias display name) default to username
# Make user group default to 'staff'
# Homedir is not created by default
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
# here we don't care about change status since it is a creation,
# thus changed is always true.
# set password with chpasswd
# Get password and lastupdate lines which come after the username
# Sanity check the lines because sometimes both are not present
# Add to additional groups
# Manage group membership
# Manage password
# following options are specific to macOS
# following options are specific to selinux
# following options are specific to userdel
# following options are specific to useradd
# following options are specific to usermod
# following are specific to ssh key generation
# Check to see if the provided home path contains parent directories
# that do not exist.
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
# modify user (note: this function is check mode aware)
# handle missing homedirs
# deal with ssh key
# generate ssh key (note: this function is check mode aware)
# target state reached, nothing to do
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# ==============================================================
# url handling
# Exceptions in fetch_url may result in a status of -1, which ensures a proper error is reported to the user in all cases
# create a temporary file and copy content to do checksum-based replacement
# tmp_dest should be an existing dir
# Since shutil.copyfileobj() will read from HTTPResponse in chunks, HTTPResponse.read() will not recognize
# if the entire content-length of data was not read. We need to do that validation here, unless a 'chunked'
# transfer-encoding was used, in which case we will not know content-length because it will not be returned.
# But in that case, HTTPResponse will behave correctly and recognize an IncompleteRead.
# If data is decompressed, then content-length won't match the amount of data we've read, so skip.
# Avoid directory traversal
# Only a single line with a single string
# treat it as a checksum only file
# The assumption here is the file is in the format of
# checksum filename
# main
# setup aliases
# checksum specified, parse for algorithm and checksum
# download checksum file to checksum_tmpsrc
# Look through each line in the checksum file for a hash corresponding to
# the filename in the url, returning the first hash that is found.
# Remove any non-alphanumeric characters, including the infamous
# Unicode zero-width space
# Ensure the checksum portion is a hexdigest
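A hedged sketch of the lookup described above; the function name and the `*` prefix handling (sha256sum's binary-mode marker) are illustrative assumptions, not the module's exact implementation:

```python
import os
import posixpath
import re
from urllib.parse import urlsplit

def find_checksum_for_url(checksum_lines, url):
    """Scan '<checksum> <filename>' lines for the file named in the URL."""
    filename = posixpath.basename(urlsplit(url).path)
    if len(checksum_lines) == 1 and len(checksum_lines[0].split()) == 1:
        # a single line holding a single token is treated as a bare checksum
        candidate = checksum_lines[0]
    else:
        candidate = None
        for line in checksum_lines:
            parts = line.split()
            # last field is the filename; '*' marks binary mode in sha*sum output
            if len(parts) >= 2 and os.path.basename(parts[-1].lstrip('*')) == filename:
                candidate = parts[0]
                break
    if candidate is None:
        return None
    # strip non-alphanumerics such as a zero-width space, then require a hexdigest
    candidate = re.sub(r'\W+', '', candidate).lower()
    if not re.fullmatch(r'[0-9a-f]+', candidate):
        raise ValueError('checksum is not a hexadecimal digest: %r' % candidate)
    return candidate
```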
# If the download is not forced and there is a checksum, allow
# checksum match to skip the download.
# Not forcing redownload, unless checksum does not match
# If the file already exists, prepare the last modified time for the
# request.
# If the checksum does not match we have to force the download
# because last_mod_time may be newer than on remote
# download to tmpsrc
# Now the request has completed, we can finally generate the final
# destination file name from the info dict.
# Fall back to extracting the filename from the URL.
# Pluck the URL from the info, since a redirect could have changed
# raise an error if there is no tmpsrc file
# check if there is no dest file
# raise an error if copy has no permission on dest
# If a checksum was provided, ensure that the temporary file matches this checksum
# before moving it to the destination.
# Copy temporary file to destination if necessary
# Backwards compat only.  We'll return None on FIPS enabled systems
# AIX and BSD don't have a file-based dynamic source, so the module also supports running a mount binary to collect these.
# Pattern for Linux, including OpenBSD and NetBSD
# Pattern for other BSD including FreeBSD, DragonFlyBSD, and MacOS
# Pattern for AIX, example in https://www.ibm.com/docs/en/aix/7.2?topic=m-mount-command
# a snippet of the output of the udevadm command below will be:
# ...
# ID_FS_TYPE=ext4
# ID_FS_USAGE=filesystem
# ID_FS_UUID=57b1a3e7-9019-4747-9809-7ec52bba9179
# TODO: NetBSD and FreeBSD can have UUIDs in /etc/fstab,
# but none of these methods work (mount always displays the label though)
# type: ignore # Missing return statement
# AIX has a couple header lines for some reason
# MacOS "map" lines are skipped (e.g. "map auto_home on /System/Volumes/Data/home (autofs, automounted, nobrowse)")
# TODO: include MacOS lines
# the group containing fstype is comma separated, and may include whitespace
# the last two fields are optional
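The per-platform patterns described above might look roughly like this; these regexes are simplified illustrations (the real module's patterns handle more edge cases, such as whitespace inside the fstype group):

```python
import re

# Linux-style: "device on mount type fstype (options)"
LINUX_MOUNT_RE = re.compile(
    r'^(?P<device>\S+) on (?P<mount>\S+) type (?P<fstype>\S+) \((?P<options>[^)]*)\)'
)

# BSD/MacOS-style: "device on mount (fstype, options...)"; options are optional
BSD_MOUNT_RE = re.compile(
    r'^(?P<device>\S+) on (?P<mount>\S+) \((?P<fstype>[^,)\s]+)(?:, (?P<options>[^)]*))?\)'
)
```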
# Expected for Linux, return an empty list since this doesn't appear to be AIX /etc/filesystems
# snip trailing :
# avoid clobbering the mount point with the AIX mount option "mount"
# mypy error: misc: Incompatible types in "yield from" (actual type "object", expected type "Union[MountInfo, MountInfoOptions]")
# only works if either
# * the list of functions excludes gen_aix_filesystems_entries
# * the list of functions only contains gen_aix_filesystems_entries
# Convert UUIDs in Linux /etc/fstab to device paths
# TODO need similar for OpenBSD which lists UUIDS (without the UUID= prefix) in /etc/fstab, needs another approach though.
# this will be set to False if curses.setupterm() fails
# avoid circular import at runtime
# Set argtypes, to avoid segfault if the wrong type is provided,
# restype is assumed to be c_int
# Max for c_int
# DEPRECATED_VALUE implies DEPRECATED
# DTFIX-FUTURE: move this capability into config using an AmbientContext-derived TaskContext (once it exists)
# A few characters result in a subtraction of length:
# BS, DEL, CCH, ESC
# ESC is slightly different: while it is non-printable itself, it begins an
# escape sequence, so the whole sequence counts as a single non-printable
# length
# -1 signifies a non-printable character
# use 0 here as a best effort
# It doesn't make sense to have a negative printable width
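A best-effort sketch of that width calculation; the exact control-character set handled here (BS, DEL, and CSI escape sequences) is an assumption based on the comments above, not the real implementation:

```python
import re

# CSI escape sequences such as color codes: ESC [ params letter
ANSI_ESCAPE = re.compile(r'\x1b\[[0-9;]*[A-Za-z]')

def printable_width(text):
    """Best-effort printable width: escape sequences count 0, BS/DEL subtract."""
    width = 0
    for chunk in ANSI_ESCAPE.split(text):  # drop whole escape sequences first
        for ch in chunk:
            if ch in ('\b', '\x7f'):  # BS and DEL erase a previous character
                width -= 1
            elif ch.isprintable():
                width += 1
    # it doesn't make sense to have a negative printable width
    return max(width, 0)
```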
# people like to make containers w/o actual valid passwd/shadow and use host uids
# TODO: make this a callback event instead
# NOTE: level is kept at INFO to avoid security disclosures caused by certain libraries when using DEBUG
# DO NOT set to logging.DEBUG
# map color to log levels, in order of priority (low to high)
# BSD path for cowsay
# MacPorts path for cowsay
# monkeypatching the underlying file-like object isn't great, but likely safer than subclassing
# Only set stdout to raw mode if it is a TTY. This is needed when redirecting
# stdout to a file since a file cannot be set to raw mode.
# Nest the try except since curses.error is not available if curses did not import
# curses.tigetstr() returns None in some circumstances
# NB: this lock is used to both prevent intermingled output between threads and to block writes during forks.
# Do not change the type of this lock or upgrade to a shared lock (eg multiprocessing.RLock).
# list of all deprecation messages to prevent duplicate display
# could not execute cowsay for some reason
# NB: we're relying on the display singleton behavior to ensure this only runs once
# This can't be removed as long as we have the possibility of encountering un-renderable strings
# created with `surrogateescape`; the alternative of having display methods hard fail is untenable.
# If _final_q is set, that means we are in a WorkerProcess
# and instead of displaying messages directly from the fork
# we will proxy them through the queue
# Convert Windows newlines to Unix newlines.
# Some environments, such as Azure Pipelines, render `\r` as an additional `\n`.
# Note: After Display() class is refactored need to update the log capture
# code in 'cli/scripts/ansible_connection_cli_stub.py' (and other relevant places).
# With locks, and the fact that we aren't printing from forks
# just write, and let the system flush. Everything should come out peachy
# I've left this code for historical purposes, or in case we need to add this
# back at a later date. For now ``TaskQueueManager.cleanup`` will perform a
# final flush at shutdown.
# except OSError as e:
# set logger level based on color (not great)
# but last resort and backwards compatible
# this should not happen if mapping is updated with new color configs, but JIC
# actually log
# we send to log if log was configured with higher verbosity
# DTFIX3: are there any deprecation calls where the feature is switching from enabled to disabled, rather than being removed entirely?
# DTFIX3: are there deprecated features which should going through deferred deprecation instead?
# This is the post-proxy half of the `deprecated` implementation.
# Any logic that must occur in the primary controller process needs to be implemented here.
# deprecated: description='The formatted argument has no effect.' core_version='2.23'
# This is the pre-proxy half of the `warning` implementation.
# Any logic that must occur on workers needs to be implemented here.
# This is the post-proxy half of the `warning` implementation.
# deprecated: description='The wrap_text argument has no effect.' core_version='2.23'
# deprecated: description='The stderr argument has no effect.' core_version='2.23'
# This is the pre-proxy half of the `error` implementation.
# This is the post-proxy half of the `error` implementation.
# if result is false and default is not None
# Circular import because encrypt needs a display class
# handle utf-8 chars
# to maintain backward compatibility, assume these values are safe to template
# Compare the current process group to the process group associated
# with terminal of the given file descriptor to determine if the process
# is running in the background.
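The comparison described above can be sketched with a couple of POSIX calls; the function name is illustrative and the fallback behavior on non-terminal descriptors is an assumption:

```python
import os
import sys

def is_backgrounded(fd=None):
    """True if our process group differs from the terminal's foreground group."""
    if fd is None:
        fd = sys.stdin.fileno()
    try:
        # compare our process group to the terminal's foreground process group
        return os.getpgrp() != os.tcgetpgrp(fd)
    except OSError:
        # fd is not attached to a terminal; treat as backgrounded
        return True
```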
# When seconds/interrupt_input/complete_input are all None, this does mostly the same thing as input/getpass,
# but self.prompt may raise a KeyboardInterrupt, which must be caught in the main thread.
# If the main thread handled this, it would also need to send a newline to the tty of any hanging pids.
# if seconds is None and interrupt_input is None and complete_input is None:
# flush the buffer to make sure no previous key presses
# are read in below
# read input 1 char at a time until the optional timeout or complete/interrupt condition is met
# restore the old settings for the duped stdin fd
# value for Ctrl+C
# unsupported/not present, use default
# throttle to prevent excess CPU consumption
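A hedged sketch of that polling read loop (POSIX only); the function name, byte values, and the 0.05 s poll interval are illustrative, not the actual implementation:

```python
import os
import select
import time

def read_chars(fd, timeout=None, complete=b'\r', interrupt=b'\x03'):
    """Read one byte at a time until complete/interrupt/timeout is hit."""
    buf = b''
    deadline = None if timeout is None else time.monotonic() + timeout
    while deadline is None or time.monotonic() < deadline:
        # poll briefly instead of blocking, to throttle CPU consumption
        ready, _, _ = select.select([fd], [], [], 0.05)
        if not ready:
            continue
        ch = os.read(fd, 1)
        if ch == interrupt:  # \x03 is the value for Ctrl+C
            raise KeyboardInterrupt
        if not ch or ch == complete:
            break
        buf += ch
    return buf
```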
# tuple with name and options
# pylint: disable=ansible-deprecated-unnecessary-collection-name,ansible-invalid-deprecated-version
# emit any warnings or deprecations
# in the event config fails before display is up, we'll lose warnings -- but that's OK, since everything is broken anyway
# If this becomes generally needed, change the signature to operate on
# a variable number of arguments instead.
# HASH_BEHAVIOUR == 'replace'
# verify x & y are dicts
# to speed things up: if x is empty or equal to y, return y
# (this `if` can be removed without impacting the function)
# in the following we will copy elements from y to x, but
# we don't want to modify x, so we create a copy of it
# to speed things up: use dict.update if possible
# insert each element of y in x, overriding the one in x
# (as y has higher priority)
# we copy elements from y to x instead of x to y because
# there is a high probability x will be the "default" dict the user
# wants to "patch" with y
# therefore x will have much more elements than y
# if `key` isn't in x
# update x and move on to the next element of y
# from this point we know `key` is in x
# if both x's element and y's element are dicts
# recursively "combine" them or override x's with y's element
# depending on the `recursive` argument
# and move on to the next element of y
# if both x's element and y's element are lists
# "merge" them depending on the `list_merge` argument
# replace x value by y's one as it has higher priority
# append all elements from y_value (high prio) to x_value (low prio)
# and remove x_value elements that are also in y_value
# we don't remove elements from x_value or y_value that were already duplicated
# (we assume there is a reason if such duplicates exist)
# _rp stands for "remove present"
# same as 'append_rp' but y_value elements are prepended
# else 'keep'
# else just override x's element with y's one
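The merge strategy the comments above walk through can be sketched as a small recursive function; the name and signature are illustrative, not Ansible's actual `combine`/`merge_hash` implementation:

```python
def combine(x, y, recursive=True, list_merge='replace'):
    """Return a new dict with y merged into a copy of x (y wins on conflict)."""
    if not x or x == y:
        return y.copy()  # speed-up: nothing of x survives or nothing differs
    result = x.copy()  # never mutate the caller's dict
    for key, y_value in y.items():
        if key not in result:
            result[key] = y_value
            continue
        x_value = result[key]
        if recursive and isinstance(x_value, dict) and isinstance(y_value, dict):
            result[key] = combine(x_value, y_value, recursive, list_merge)
        elif isinstance(x_value, list) and isinstance(y_value, list):
            if list_merge == 'replace':
                result[key] = y_value
            elif list_merge == 'append':
                result[key] = x_value + y_value
            elif list_merge == 'prepend':
                result[key] = y_value + x_value
            elif list_merge == 'append_rp':
                # "rp" = remove present: drop x items that reappear in y
                result[key] = [v for v in x_value if v not in y_value] + y_value
            elif list_merge == 'prepend_rp':
                result[key] = y_value + [v for v in x_value if v not in y_value]
            # else 'keep': leave x_value as-is
        else:
            result[key] = y_value  # y has higher priority
    return result
```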
# Argument is a YAML file (JSON is a subset of YAML)
# Arguments as YAML
# Arguments as Key-value
# deprecated: description='Use validate_variable_name instead.' core_version='2.23'
# show common scalar key names as strings
# show the type name of all non-string keys
# ensure that keys are also converted
# (c) 2018, Toshio Kuratomi <a.badger@gmail.com>
# curses library was not found
# curses returns an error (e.g. could not find terminal)
# --- begin "pretty"
# pretty - A miniature library that provides a Python print and stdout
# wrapper that makes colored terminal text easier to use (e.g. without
# having to mess around with ANSI escape sequences). This code is public
# domain - there is no license except that you must leave this header.
# Copyright (C) 2008 Brian Nez <thedude at bri1 dot com>
# This option is provided for use in cases when the
# formatting of a command line prompt is needed, such as
# `ansible-console`. As said in `readline` sources:
# readline/display.c:321
# /* Current implementation:
# assumes both structures have the same type
# used in module deprecations
# used in plugin option deprecations
# doc_fragments are allowed to specify a fragment var other than DOCUMENTATION
# with a . separator; this is complicated by collections-hosted doc_fragments that
# use the same separator. Assume it's collection-hosted normally first, try to load
# as-specified. If failure, assume the right-most component is a var, split it off,
# and retry the load.
# if it's asking for something specific that's missing, that's an error
# TODO: this is still an error later since we require 'options' below...
# ensure options themselves are directly merged
# merge rest of the sections
# TODO deprecate is_module argument, now that we have 'type'
# add collection name to versions and dates
# add fragments to documentation
# check to see if it's an X.Y.0 non-rc prerelease or dev release, if so, assume devel (since the X.Y doctree
# isn't published until beta-ish)
# exclude rc; we should have the X.Y doctree live by rc1
# this should only affect filters/tests
# we're looking for an adjacent file, skip this since it's identical
# should only happen for filters/test
# only look for adjacent if plugin file does not support documents
# find plugin doc file, if it doesn't exist this will throw error, we let it through
# can raise exception and short circuit when 'not found'
# no good? try adjacent
# add extra data to docs[0] (aka 'DOCUMENTATION')
# (c) 2012-2014, Toshio Kuratomi <a.badger@gmail.com>
# Copyright (c) 2020 Matt Martz <matt@sivel.net>
# Python2 doesn't have ``nonlocal``
# assign the actual lock to ``_lock``
# don't follow symlinks for basedir, enables source reuse
# Importing here to avoid circular import
# child is shorter than parent so cannot be subpath
# deprecated: description="deprecate unsafe_proxy module" core_version="2.23"
# DTFIX5: add full unit test coverage
# maintain backward compat by recursively *un* marking TrustedAsTemplate
# subprocess should be passed byte strings.
# only break out if we've emptied the pipes, or there is nothing to
# read from and the process has finished.
# Calling wait while there are still pipes to read can cause a lock
# Strings first because they are also sequences
# (c) 2016, James Tanner
# type: dict[str, bool]
# If we've already checked this executable
# deal with 'smart' connection .. one time ..
# FIXME: object identity/singleton-ness does not appear to survive pickle roundtrip (eg, copy.copy, copy.deepcopy), implement __reduce_ex__ and __copy__?
# Regular expression taken from
# https://semver.org/#is-there-a-suggested-regular-expression-regex-to-check-a-semver-string
# Extra is everything to the right of the core version
# Major version zero (0.y.z) is for initial development. Anything MAY change at any time.
# The public API SHOULD NOT be considered stable.
# https://semver.org/#spec-item-4
# if the core version doesn't match
# prerelease and buildmetadata don't matter
# Build metadata MUST be ignored when determining version precedence
# https://semver.org/#spec-item-10
# With the above in mind it is ignored here
# If we have made it here, things should be equal
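The precedence rules referenced above (semver spec items 4, 9-11) can be sketched as a sort key; the regex here is a simplified form of the one suggested at semver.org, and the function name is illustrative:

```python
import re

SEMVER_RE = re.compile(
    r'^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)'
    r'(?:-(?P<prerelease>[0-9A-Za-z.-]+))?(?:\+(?P<buildmetadata>[0-9A-Za-z.-]+))?$'
)

def semver_key(version):
    """Sort key implementing core precedence; build metadata is ignored."""
    m = SEMVER_RE.match(version)
    if not m:
        raise ValueError('invalid semver: %r' % version)
    core = tuple(int(m.group(g)) for g in ('major', 'minor', 'patch'))
    pre = m.group('prerelease')
    if pre is None:
        # a release outranks any prerelease of the same core version
        return core + (1, ())
    # numeric identifiers always have lower precedence than alphanumeric ones
    ids = tuple((0, int(i)) if i.isdigit() else (1, i) for i in pre.split('.'))
    return core + (0, ids)
```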
# The Py2 and Py3 implementations of distutils.version.Version
# are quite different, this makes the Py2 and Py3 implementations
# the same
# (c) 2015, Marius Gedminas <marius@gedmin.as>
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
# shlex.split() wants Unicode (i.e. ``str``) input on Python 3
# noinspection PyPep8Naming
# Add specific options for ignoring certificates if requested
# Assume we're running in FIPS mode here
# Backwards compat functions.  Some modules include md5s in their return values
# Continue to support that for now.  As of ansible-1.8, all of those modules
# should also return "checksum" (sha1 for now)
# Do not use md5 unless it is needed for:
# 1) Optional backwards compatibility
# 2) Compliance with a third party protocol
# MD5 will not work on systems which are FIPS-140-2 compliant.
# deprecated: description='warning suppression only required for Python 3.12 and earlier' python_version='3.12'
# Note passlib salt values must be pure ascii so we can't let the user
# configure this
# Ensure the salt has the correct padding
# The default rounds used by passlib depend on the passlib version.
# For consistency ensure that passlib behaves the same as crypt in case no rounds were specified.
# Thus use the crypt defaults.
# Not every hash algorithm supports every parameter.
# Thus create the settings dict only with set parameters.
# starting with passlib 1.7 'using' and 'hash' should be used instead of 'encrypt'
# passlib.hash should always return something or raise an exception.
# Still ensure that there is always a result.
# Otherwise an empty password might be assumed by some modules, like the user module.
# Hashes from passlib.hash should be represented as ascii strings of hex
# digits so this should not traceback.  If it's not representable as such
# we need to traceback and then block such algorithms because it may
# impact calling code.
# type: set[object]
# json-rpc standard errors (-32768 .. -32000)
# For Backward compatibility
# (c) 2016, Ansible by Red Hat <info@ansible.com>
# Copyright (c) 2019 Matt Martz <matt@sivel.net>
# Explicit multiprocessing context using the fork start method
# This exists as a compat layer now that Python3.8 has changed the default
# start method for macOS to ``spawn`` which is incompatible with our
# code base currently
# This exists in utils to allow it to be easily imported into various places
# without causing circular import or dependency problems
# deprecated: description="Calling listify_lookup_plugin_terms function is not necessary; the function should be deprecated." core_version="2.23"
# display.deprecated(
# (c) 2020, Felix Fontein <felix@fontein.de>
# CAUTION: This implementation of the collection loader is used by ansible-test.
# DTFIX-FUTURE: collapse this with the one in config, once we can
# FUTURE: remove this method when _to_bytes is removed
# FUTURE: remove this method and rely on automatic str -> bytes conversions of filesystem methods instead
# FIXME: decide what of this we want to actually be public/toplevel, put other stuff on a utility class?
# NB: content_id is passed in, but not used by this implementation
# Import the `yaml` module only when needed, as it is not available for the module/module_utils import sanity tests.
# This also avoids use of shared YAML infrastructure to eliminate any Ansible dependencies outside the collection loader itself.
# Using BaseLoader ensures that all scalars are strings.
# Doing so avoids parsing unquoted versions as floats, dates as datetime.date, etc.
# if we return True, we want the caller to re-raise
# concrete class of our metaclass type that defines the class properties we want
# DO NOT add new non-stdlib import deps here, this loader is used by external tools (eg ansible-test import sanity)
# that only allow stdlib and module_utils
# Available on Python >= 3.11
# We ignore the import error that will trigger when running mypy with
# older Python versions.
# Used with Python 3.9 and 3.10 only
# This member is still available as an alias up until Python 3.14 but
# is deprecated as of Python 3.12.
# deprecated: description='TraversableResources move' python_version='3.10'
# deprecated: description='TraversableResources fallback' python_version='3.8'
# type: ignore[assignment,misc]
# NB: this supports import sanity test providing a different impl
# spec
# Short circuit our loaders
# Don't use ``spec_from_loader`` here, because that will point
# to exactly 1 location for a namespace. Use ``find_spec``
# to get a list of all locations for the namespace
# TODO: accept metadata loader override
# expand any placeholders in configured paths
# add syspaths if needed
# ensure we always have ansible_collections
# remove any path hooks that look like ours
# zap any cached path importer cache entries that might refer to us
# validate via the public property that we really killed it
# track visited paths; we have to preserve the dir order as-passed in case there are duplicate collections (first one wins)
# de-dupe
# HACK: playbook CLI sets this relatively late, so we've already loaded some packages whose paths might depend on this. Fix those up.
# NB: this should NOT be used for late additions; ideally we'd fix the playbook dir setup earlier in Ansible init
# to prevent this from occurring
# not interested in anything other than ansible_collections (and limited cases under ansible)
# sanity check what we're getting from import, canonicalize path values
# seed the path to the configured collection roots
# something under the ansible package, delegate to our internal loader in case of redirections
# ns pkg eg, ansible_collections, ansible_collections.somens
# collection pkg eg, ansible_collections.somens.somecoll
# anything below the collection
# NB: actual "find"ing is delegated to the constructors on the various loaders; they'll ImportError if not found
# TODO: log attempt to load context
# Figure out what's being asked for, and delegate to a special-purpose loader
# Implements a path_hook finder for iter_modules (since it's only path based). This finder does not need to actually
# function as a finder in most cases, since our meta_path finder is consulted first for *almost* everything, except
# pkgutil.iter_modules, and under py2, pkgutil.get_data if the parent package passed has not been loaded yet.
# when called from a path_hook, find_module doesn't usually get the path arg, so this provides our context
# cache the native FileFinder (take advantage of its filesystem cache for future find/load requests)
# class init is fun- this method has a self arg that won't get used
# try to find the FileFinder hook to call for fallback path-based imports in Py3
# collections content? delegate to the collection finder
# Something else; we'd normally restrict this to `ansible` descendent modules so that any weird loader
# behavior that arbitrary Python modules have can be serviced by those loaders. In some dev/test
# scenarios (eg a venv under a collection) our path_hook signs us up to load non-Ansible things, and
# it's too late by the time we've reached this point, but also too expensive for the path_hook to figure
# out what we *shouldn't* be loading with the limited info it has. So we'll just delegate to the
# normal path-based loader as best we can to service it. This also allows us to take advantage of Python's
# built-in FS caching and byte-compilation for most things.
# create or consult our cached file finder for this path
# FUTURE: log at a high logging level? This is normal for things like python36.zip on the path, but
# might not be in some other situation...
# we ignore the passed in path here- use what we got from the path hook init
# this codepath is erroneously used under some cases in py3,
# and the find_module method on FileFinder does not accept the path arg
# see https://github.com/pypa/setuptools/pull/2918
# NB: this currently represents only what's on disk, and does not handle package redirection
# eg ansible_collections for ansible_collections.somens, '' for toplevel
# eg somens for ansible_collections.somens
# allow subclasses to validate args and sniff split values before we start digging around
# allow subclasses to customize candidate path filtering
# allow subclasses to customize finding paths
# filter candidate paths for existence (NB: silently ignoring package init code and same-named modules)
# allow subclasses to customize state validation/manipulation before we return the loader instance
# handle all-or-nothing sys.modules creation/use-existing/delete-on-exception-if-created behavior
# always override the values passed, except name (allow reference aliasing)
# basic module/package location support
# NB: this does not support distributed packages!
# if the submodule is a package, assemble valid submodule paths, but stop looking for a module
# is there a package init?
# short-circuit redirect; avoid reinitializing existing modules
# execute the module's code in its namespace
# things like NS packages that can't have code on disk will return None
# short-circuit redirect; we've already imported the redirected module, so just alias it and return it
# we're actually loading a module/package
# sane default for non-packages
# eg, I am a package
# empty is legal
# per PEP366
# FIXME: what do we want encoding/newline requirements to be?
# TODO: ensure we're being asked for a path below something we own
# TODO: try to handle redirects internally?
# relative to current package, search package paths if possible (this may not be necessary)
# candidate_paths = [os.path.join(ssp, path) for ssp in self._subpackage_search_paths]
# HACK: if caller asks for __init__.py and the parent dir exists, return empty string (this keeps consistency
# with "collection subpackages don't require __init__.py" working everywhere with get_data)
# this may or may not be an actual filename, but it's the value we'll use for __file__
# for things like synthetic modules that really have no source on disk, don't return a code object at all
# vs things like an empty package init (which has an empty string source on disk)
# Implements Ansible's custom namespace package support.
# The ansible_collections package and one level down (collections namespaces) are Python namespace packages
# that search across all configured collection roots. The collection package (two levels down) is the first one found
# on the configured collection root path, and Python namespace package aggregation is not allowed at or below
# the collection. Implements implicit package (package dir) support for both Py2/3. Package init code is ignored
# by this loader.
# special-case the `ansible` namespace, since `ansible.builtin` is magical
# handles locating the actual collection package and associated metadata
# we don't want to allow this one to have on-disk search capability
# only search within the first collection we found
# TODO: load collection metadata, cache in __loader__ state
# ansible.builtin is a synthetic collection, get its routing config from the Ansible distro
# TODO: rewrite import keys and all redirect targets that start with .. (current namespace) and . (current collection)
# OR we could do it all on the fly?
# if not meta_dict:
# ns_name = '.'.join(self._split_name[0:2])
# collection_name = '.'.join(self._split_name[0:3])
# #
# for routing_type, routing_type_dict in iteritems(meta_dict.get('plugin_routing', {})):
# loads everything under a collection, including handling redirections defined by the collection
# HACK: stash this in a better place
# check for explicit redirection, as well as ancestor package-level redirection (only load the actual code once!)
# NB: package level redirection requires hooking all future imports beneath the redirected source package
# in order to ensure sanity on future relative imports. We always import everything under its "real" name,
# then add a sys.modules entry with the redirected name using the same module instance. If we naively imported
# the source for each redirection, most submodules would import OK, but we'd have N runtime copies of the module
# (one for each name), and relative imports that ascend above the redirected package would break (since they'd
# see the redirected ancestor package contents instead of the package where they actually live).
# FIXME: wrap this so we can be explicit about a failed redirection
# if the import target looks like a package, store its name so we can rewrite future descendent loads
# if we redirected, don't do any further custom package logic
# we're not doing a redirect- try to find what we need to actually load a module/package
# this will raise ImportError if we can't find the requested module/package at all
# no place to look, just ImportError
# still here? we found something to load...
# always needs to be a list
# This loader only answers for intercepted Ansible Python modules. Normal imports will fail here and be picked up later
# by our path_hook importer (which proxies the built-in import mechanisms, allowing normal caching etc to occur)
# should never see this
# Replace the module with the redirect
# since we're delegating to other loaders, this should only be called for internal redirects where we answered
# find_module with this loader, in which case we'll just directly import the redirection target, insert it into
# sys.modules under the name it was requested by, and return the original module.
# FIXME: smuggle redirection context, provide warning/error that we tried and failed to redirect
# FUTURE: introspect plugin loaders to get these dynamically?
# FIXME: tighten this up to match Python identifier reqs, etc
# can have 0-N included subdirs as well
# we assume it's a plugin
# playbooks and roles are their own resource
# assuming the fq_name is of the form (ns).(coll).(optional_subdir_N).(resource_name),
# we split the resource name off the right, split ns and coll off the left, and we're left with any optional
# subdirs that need to be added back below the plugin-specific subdir we'll add. So:
# ns.coll.resource -> ansible_collections.ns.coll.plugins.(plugintype).resource
# ns.coll.subdir1.resource -> ansible_collections.ns.coll.plugins.subdir1.(plugintype).resource
# ns.coll.rolename -> ansible_collections.ns.coll.roles.rolename
# split the left two components of the collection package name off, anything remaining is plugin-type
# specific subdirs to be added back on below the plugin type
# NOTE: keywords and identifiers are different in different Pythons
# get_collection_path
# leaving ex as debug target, even though not used in normal code
# they are handled a bit differently due to 'extension variance' and no collection_list
# looks like a valid qualified collection ref; skip the collection_list
# not a FQ and no collection search list spec'd, nothing to do
# treat as unqualified, loop through the collection search list to try and resolve
# FIXME: error handling/logging; need to catch any import failures and move along
# the package is now loaded, get the collection's package and ask where it lives
# FIXME: pick out typical import errors first, then error logging
# ensure we compare full paths since pkg path will be abspath
# make sure it's followed by at least a namespace and collection name
# we've got a name for it, now see if the path prefix matches what the loader sees
# reassemble the original path prefix up to the collection name; it should match what we just imported. If not,
# this is probably a collection root that's not configured.
# walk the requested module's ancestor packages to see if any have been previously redirected
# rewrite the prefix on fullname so we import the target first, then alias it
# NB: this currently only iterates what's on disk- redirected modules are not considered
# yield (module_loader, name, ispkg) for each module/pkg under path
# TODO: implement ignore/silent catch for unreadable?
# exclude things that obviously aren't Python package dirs
# FIXME: this dir is adjustable in py3.8+, check for it
# TODO: proper string handling?
# FIXME: match builtin ordering for package/dir/file, support compiled?
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
# NB: a copy of this function exists in ../../modules/core/async_wrapper.py. Ensure any
# changes are propagated there.
# Copyright: (c) 2012, Red Hat, Inc
# Written by Seth Vidal <skvidal at fedoraproject.org>
# Contributing Authors:
# removed==absent, installed==present, these are accepted as aliases
# It's possible someone passed a comma separated string since it used
# to be a string type, so we should handle that
# Fail if someone passed a space separated string
# https://github.com/ansible/ansible/issues/46301
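The normalization described above can be sketched with a hypothetical helper (the name `normalize_names` is illustrative): comma separated strings are accepted for backwards compatibility, space separated ones fail.

```python
def normalize_names(names):
    expanded = []
    for name in names:
        # it's possible someone passed a comma separated string since it used
        # to be a string type, so handle that
        expanded.extend(part.strip() for part in name.split(',') if part.strip())
    for name in expanded:
        if ' ' in name:
            # fail if someone passed a space separated string
            raise ValueError('names should be comma separated, not space separated: %r' % name)
    return expanded
```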
# Sanity checking for autoremove
# this should be configurable in the future, once the profile feature is more fully baked
# Copyright (c), Toshio Kuratomi <tkuratomi@ansible.com> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
# Backwards compat for people still calling it from this package
# Copyright: (c) 2015, Brian Coca, <bcoca@ansible.com>
# type: (t.Iterable[t.Any]) -> t.Generator
# Ref: https://stackoverflow.com/a/30232619/595220
# prevent accidental use elsewhere
# Only or final attempt
# Copyright (c) Ansible Inc, 2016
# FIXME: should add logic to prevent matching 'self', though that should be extremely rare
# clone stdin/out/err
# close otherwise
# Make us a daemon
# end if not in child
# get new process session and detach
# avoid possible problems with cwd being removed
# init some vars
# FIXME: pass in as arg?
# start it!
# we don't do any locking as this should be a unique module/process
# make sure we always use byte strings
# execute the command in forked process
# loop reading output till it is done
# even after fds close, we might want to wait for pid to die
# Return a pickled data of parent
# in parent
# Grab response data after child finishes
# This should show whether systemd is the boot init system, in case checking init failed to mark it as systemd
# these mirror systemd's own sd_boot test http://www.freedesktop.org/software/systemd/man/sd_booted.html
# If all else fails, check if init is the systemd command, using comm as cmdline could be symlink
# If comm doesn't exist, old kernel, no systemd
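The detection chain above can be sketched as follows; this is a simplified stand-in (the function name is illustrative), mirroring the sd_booted test and the /proc/1/comm fallback:

```python
import os

def is_systemd_booted():
    # mirror systemd's own sd_booted() test: this dir exists only under systemd
    if os.path.exists('/run/systemd/system'):
        return True
    # if all else fails, check whether PID 1's command name is systemd; comm is
    # used because cmdline could be a symlinked path
    try:
        with open('/proc/1/comm') as f:
            return f.read().strip() == 'systemd'
    except OSError:
        # comm doesn't exist: old kernel, no systemd
        return False
```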
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# here we encode the args, so we have a uniform charset to
# work with, and split on white space
# always clear the line continuation flag
# finally, we decode each param back to the unicode it was in the arg string
# Copyright (c), Toshio Kuratomi <tkuratomi@ansible.com>, 2015
# Old import for GSSAPI authentication, this is not used in urls.py but kept for backwards compatibility.
# Handle before Digest authentication
# If we've already attempted the auth and we've reached this again then there was a failure.
# Get the peer certificate for the channel binding token if possible (HTTPS). A bug on macOS causes the
# authentication to fail when the CBT is present. Just skip that platform.
# TODO: We could add another option that is set to include the port in the SPN if desired in the future.
# The response could contain a token that the client uses to validate the server
# type: types.ModuleType | None  # type: ignore[no-redef]
# This method exists simply to ensure we monkeypatch
# http.client.HTTPConnection.connect to call UnixHTTPConnection.connect
# Disable pylint check for the super() call. It complains about UnixHTTPSConnection
# being a NoneType because of the initial definition above, but it won't actually
# be a NoneType when this code runs
# deprecated: description='deprecated check_hostname' python_version='3.12'
# deprecated: description='urllib http 308 support' python_version='3.11'
# Preserve urllib2 compatibility
# Handle disabled redirects
# Handle non-redirect HTTP status or invalid follow_redirects
# Be lenient with URIs containing a space
# Support redirect with payload and original headers
# Preserve payload and headers
# Do not preserve payload and filter headers
# http://tools.ietf.org/html/rfc7231#section-6.4.4
# If cafile is passed, we are only using that for verification,
# don't add additional ca certs
# TLS 1.3 needs this to be set to True to allow post handshake cert
# authentication. This functionality was added in Python 3.8 and was
# backported to 3.6.7, and 3.7.1 so needs a check for now.
# tries to find a valid CA cert in one of the
# standard locations for the current distribution
# Using a dict, instead of a set for order, the value is meaningless and will be None
# Not directly using a bytearray to avoid duplicates with fast lookup
# build a list of paths to check for .crt/.pem files
# based on the platform type
# fall back to a user-deployed cert in a standard
# location if the OS platform one is not available
# for all of the paths, find any .crt or .pem files
# and compile them into single temp file for use
# in the ssl check to speed up the test
# paths_checked isn't used any more, but is kept just for ease of debugging
# Not HTTPS
# Logic documented in RFC 5929 section 4 https://tools.ietf.org/html/rfc5929#section-4
# If the signature hash algorithm is unknown/unsupported or md5/sha1 we must use SHA256.
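The RFC 5929 section 4 rule above reduces to a small hash selection, sketched here with stdlib `hashlib` (the function name is illustrative):

```python
import hashlib

def channel_binding_hash(signature_hash_name, cert_der):
    # per RFC 5929 section 4: an unknown/unsupported signature hash algorithm,
    # or md5/sha1, means we must hash the certificate with SHA-256 instead
    if (signature_hash_name not in hashlib.algorithms_available
            or signature_hash_name in ('md5', 'sha1')):
        signature_hash_name = 'sha256'
    return hashlib.new(signature_hash_name, cert_der).digest()
```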
# reconstruct url without credentials
# this creates a password manager
# because we have put None at the start it will always
# use this username/password combination for URLs
# for which `theurl` is a super-url
# create the AuthHandler
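The password-manager setup described above maps directly onto the stdlib `urllib.request` API; a minimal sketch (the helper name is illustrative):

```python
import urllib.request

def build_basic_auth_opener(theurl, username, password):
    # this creates a password manager; because we put None as the realm it will
    # always use this username/password combination for URLs for which
    # `theurl` is a super-url
    passman = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    passman.add_password(None, theurl, username, password)
    # create the AuthHandler and build an opener around it
    authhandler = urllib.request.HTTPBasicAuthHandler(passman)
    return urllib.request.build_opener(authhandler)
```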
# add some nicer cookie handling
# add the custom agent header, to help prevent issues
# with sites that block the default urllib agent string
# Cache control
# Either we directly force a cache refresh
# or we do it if the original is more recent than our copy
# user defined headers now, which may override things we've set above
# Content-Length does not match gzip decoded length
# Prevent ``r.read`` from stopping at Content-Length
# Ensure headers are not split over multiple lines
# The HTTP policy also uses CRLF by default
# Message converts to native strings
# Module-related functions
# ensure we use proper tempdir
# Get validate_certs from the module params
# Lowercase keys, to conform to py2 behavior
# Don't be lossy, append header values for duplicate headers
# The same as above, lower case keys to match py2 behavior, and create more consistent results
# parse the cookies into a nice dictionary
# Python sorts cookies in order of most specific (ie. longest) path first. See ``CookieJar._cookie_attrs``
# Cookies with the same path are reversed from response order.
# This code makes no assumptions about that, and accepts the order given by python
# finally update the result with a message about the fetch
# Certain HTTPError objects may not have the ability to call ``.read()`` on Python 3
# This is not handled gracefully in Python 3, and instead an exception is raised from
# tempfile, due to ``urllib.response.addinfourl`` not being initialized
# Try to add exception info to the output but don't fail if we can't
# Lowercase keys, to conform to py2 behavior, so that py3 and py2 are predictable
# Stop on the first invalid extension
# download file
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
# Makes sure that systemd.journal has method sendv()
# Double check that journal has method sendv (some packages don't)
# check if the system is running under systemd
# AttributeError would be caused from use of .booted() if wrong systemd
# Python2 & 3 way to get NoneType
# Make sure the algorithm is actually available for use.
# Not all algorithms listed as available are actually usable.
# For example, md5 is not available in FIPS mode.
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequence types
# Internal global holding passed in params.  This is consulted in case
# multiple AnsibleModules are created.  Otherwise each AnsibleModule would
# attempt to read from stdin.  Other code should not use this directly as it
# is an internal implementation detail
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
# should be available to any module using atomic_move
# Used for parsing symbolic file perms
# Deprecated functions
# End deprecated functions
# Compat shims
# End compat shims
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# sep_search_end: where in the string to end a search for the sep
# Find the potential end of a passwd
# No passwd in the rest of the data
# Search for the beginning of a passwd
# URL-style username+password
# No url style in the data, check for ssh style in the
# rest of the string
# Search for separator
# No separator; choices:
# Searched the whole string so there's no password
# here.  Return the remaining data
# Search for a different beginning of the password field.
# Password was found; remove it.
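The begin/sep/end scan described above can be sketched as follows. This is a simplified, hypothetical reimplementation that only handles the URL-style case; the real heuristic also searches for ssh-style `user:pass@host` strings without a scheme, and (as noted above) has false positives by design.

```python
def heuristic_log_sanitize(data, begin='://', sep=':', end='@'):
    output = []
    prev_end = 0
    while True:
        at = data.find(end, prev_end)              # find the potential end of a passwd
        if at == -1:                               # no passwd in the rest of the data
            output.append(data[prev_end:])
            break
        start = data.rfind(begin, prev_end, at)    # search for the beginning of a passwd
        if start == -1:                            # no url style beginning here
            output.append(data[prev_end:at + 1])
        else:
            sep_at = data.find(sep, start + len(begin), at)  # search for separator
            if sep_at == -1:                       # no separator: no password here
                output.append(data[prev_end:at + 1])
            else:                                  # password was found; remove it
                output.append(data[prev_end:sep_at + 1] + '********')
                output.append(data[at:at + 1])
        prev_end = at + 1
    return ''.join(output)
```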
# AnsibleModule mutates the returned dict, so a copy is needed
# initialize name until we can parse from options
# May be used to set modifications to the environment for any
# run_command invocation
# Save parameter values that should never be logged
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
# This is for backwards compatibility only.
# selinux state caching
# finally, make sure we're in a logical working dir
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
# selinux related options
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
# Determine whether we need a placeholder for selevel/mls
# If selinux fails to find a default, return an array of None
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
# Iterate over the current context instead of the
# argument context, which may have selevel.
# prevent mode from having extra info or being invalid long number
# FIXME: comparison against string above will cause this to be executed
# every time
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
# can't access symlink in sticky directory (stat)
# can't set mode on symbolic links (chmod)
# can't set mode on read-only filesystem
# Can't set mode on broken symbolic links
# Now parse all symbolic modes
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
# And find all the operators
# The user(s) the mode applies to are the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
# Now we have two lists of equal length: one contains the requested
# permissions and one the corresponding operators.
# Check if there are illegal characters in the permissions
# mask out u, g, or o permissions from current_mode and apply new permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
# Permission bits constants documented at:
# https://docs.python.org/3/library/stat.html#stat.S_ISUID
# Insert X_perms into user_perms_to_modes
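The symbolic-mode walk above can be sketched with a much-reduced parser. This hypothetical version only handles `r`/`w`/`x` on `u`/`g`/`o`; the real implementation also supports `X`, `s`, `t`, and the umask semantics for the empty-user case.

```python
import re
import stat

BITS = {
    'u': {'r': stat.S_IRUSR, 'w': stat.S_IWUSR, 'x': stat.S_IXUSR},
    'g': {'r': stat.S_IRGRP, 'w': stat.S_IWGRP, 'x': stat.S_IXGRP},
    'o': {'r': stat.S_IROTH, 'w': stat.S_IWOTH, 'x': stat.S_IXOTH},
}

def apply_symbolic_mode(current_mode, symbolic):
    for mode in symbolic.split(','):
        # each single mode always contains a '+', '-' or '='; split on those to
        # get the user list, then pairs of (operator, permissions)
        users, *rest = re.split(r'([+=-])', mode)
        users = 'ugo' if users in ('', 'a') else users  # empty user or 'a' means 'all'
        for op, perms in zip(rest[::2], rest[1::2]):
            for user in users:
                mask = sum(BITS[user].values())
                requested = sum(BITS[user][p] for p in perms)
                if op == '=':
                    # mask out this user's bits, then apply the new permissions
                    current_mode = (current_mode & ~mask) | requested
                elif op == '+':
                    current_mode |= requested
                else:
                    current_mode &= ~requested
    return current_mode
```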
# set modes owners and context as needed
# secontext not yet supported
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
# fallback to the 'best' locale, per the function
# final fallback is 'C', which may cause unicode issues
# but is preferable to simply failing on unknown locale
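The fallback chain above can be sketched as follows; the candidate list here is an assumption for illustration (the real code computes a "best parsable" locale), but the shape, default locale first and `'C'` as the last resort, matches the comments:

```python
import locale

def best_parsable_locale_sketch():
    # setting the locale to '' uses the environment's default locale; if that
    # is invalid/unavailable, try progressively safer candidates, ending with
    # 'C', which may cause unicode issues but beats failing outright
    for candidate in ('', 'C.UTF-8', 'C'):
        try:
            locale.setlocale(locale.LC_ALL, candidate)
            return candidate or locale.setlocale(locale.LC_ALL)
        except locale.Error:
            continue
    return 'C'
```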
# need to set several since many tools choose to ignore documented precedence and scope
# handle setting internal properties from internal ansible vars
# clean up internal top level params:
# use defaults if not already set
# deprecated: description='no longer used in the codebase' core_version='2.21'
# debug overrides to read args from file or cmdline
# 6655 - allow for accented characters
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
# TODO: surrogateescape is a danger here on Py3
# ensure we clean up secrets!
# If syslog_facility specified, it needs to convert
# fall back to syslog since logging to journal failed
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
# try to proactively capture password/passphrase fields
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
# deprecated: description='deprecate AnsibleModule.jsonify()' core_version='2.23'
# deprecate(
# pylint: disable=ansible-deprecated-unnecessary-collection-name,ansible-deprecated-no-version
# preserve bools/none from no_log
# strip no_log collisions
# graft preserved values back on
# coerce to str instead of raising an error due to an invalid type
# Include a `_messages.Event` in the result.
# The `msg` is included in the chain to ensure it is not lost when looking only at `exception` from the result.
# Include only a formatted traceback string in the result.
# The controller will combine this with `msg` to create an `_messages.ErrorSummary`.
# preserve old behaviour where the third parameter was a hash algorithm object
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
# shutil.copy2(src, dst)
# shutil.copystat(src, dst)
# Set the context
# chown it
# Set the attributes
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied)
# and 26 (text file busy), which happens on vagrant synced folders and other 'exotic' non posix file systems
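The optimistic-rename-with-fallback pattern described above can be sketched like this (a simplified stand-in for the real atomic_move, which also handles permissions, ownership and selinux context):

```python
import errno
import os
import shutil
import tempfile

def atomic_replace(src, dest):
    # optimistically try a rename; it throws an exception if it can't be atomic
    try:
        os.rename(src, dest)
        return
    except OSError as e:
        # only work around cross-device / permission / text-file-busy errors
        if e.errno not in (errno.EXDEV, errno.EPERM, errno.EACCES, errno.ETXTBSY):
            raise
    # stage a temp copy inside the destination directory, then rename that:
    # same filesystem, so the final rename is atomic again
    fd, tmp_dest = tempfile.mkstemp(dir=os.path.dirname(dest) or '.')
    os.close(fd)
    try:
        shutil.copy2(src, tmp_dest)   # copy2 preserves some metadata
        os.rename(tmp_dest, dest)
        os.unlink(src)
    except OSError:
        if os.path.exists(tmp_dest):
            os.unlink(tmp_dest)
        raise
```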
# Use bytes here.  In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
# leaves tmp file behind when sudo and not root
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
# make sure the file has the correct permissions
# based on the current value of umask
# We're okay with trying our best here.  If the user is not
# root (or old Unices) they won't be able to chown.
# rename might not preserve context
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
# used by clean args later on
# stringify args for unsafe/direct shell usage
# not set explicitly, check if set by controller
# ensure args are a list
# expand ``~`` in paths, and all environment vars
# We can set this from both an attribute and per call
# If using test-module.py and explode, the remote lib path will resemble:
# If using ansible or ansible-playbook with a remote system:
# Clean out python paths set by ansiballz
# make sure we're in the right working directory
# Mirror the CPython subprocess logic and preference for the selector to use.
# A timeout of 1 is both a little short and a little long.
# With None we could deadlock, with a lower value we would
# waste cycles. As it is, this is a mild inconvenience if
# we need to exit, and likely doesn't waste too many cycles
# if we're checking for prompts, do it now, but only if stdout
# actually changed since the last loop
# break out if no pipes are left to read or the pipes are completely read
# and the process is terminated
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
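The selector loop sketched in the comments above looks roughly like this. This is a simplified, hypothetical version without the prompt checking or per-iteration stdin handling:

```python
import os
import selectors
import subprocess

def run_and_drain(args):
    cmd = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    selector = selectors.DefaultSelector()
    selector.register(cmd.stdout, selectors.EVENT_READ)
    selector.register(cmd.stderr, selectors.EVENT_READ)
    stdout, stderr = b'', b''
    # loop reading output until no pipes are left to read
    while selector.get_map():
        # a timeout of 1s: with None we could deadlock, with a lower value we
        # would waste cycles
        for key, dummy in selector.select(timeout=1):
            chunk = os.read(key.fileobj.fileno(), 32768)
            if not chunk:
                # this pipe is completely read; stop selecting on it
                selector.unregister(key.fileobj)
            elif key.fileobj is cmd.stdout:
                stdout += chunk
            else:
                stderr += chunk
    selector.close()
    # no pipes are left to read; only now is it safe to wait for the process
    rc = cmd.wait()
    return rc, stdout, stderr
```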
# for backwards compatibility
# Backwards compat
# In 2.0, moved from inside the module to the toplevel
# 1032 == F_GETPIPE_SZ
# not as exact as above, but should be good enough for most platforms that fail the previous call
# use logical default JIC
# size of a packed unsigned long long
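The pipe-size probe described above can be sketched like this (assuming a POSIX platform; the Linux-only fcntl command falls back to `st_blksize`, then to a logical default):

```python
import fcntl
import os

def pipe_buffer_size(fd):
    try:
        # Linux-only fcntl: 1032 == F_GETPIPE_SZ
        return fcntl.fcntl(fd, 1032)
    except OSError:
        pass
    try:
        # not as exact as above, but should be good enough for most platforms
        # that fail the previous call
        return os.fstat(fd).st_blksize
    except OSError:
        # use a logical default just in case
        return 64 * 1024
```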
# set_option(s) has sensitive info, and the details are unlikely to matter anyway
# import from the compat api because 2.0-2.3 had a module_utils.facts.ansible_facts
# and get_all_facts in top level namespace
# these should always be first due to most other facts depending on them
# type: t.List[t.Type[BaseFactCollector]]
# These restrict what is possible in others
# general info, not required but probably useful for other facts
# virtual, this might also limit hardware/networking
# other fact sources
# TODO: make config driven
# assume filter_spec='' or filter_spec=[] is equivalent to filter_spec='*'
# try to match with the ansible_ prefix added when non-empty
# Note: this collects with namespaces, so collected_facts also includes namespaces
# shallow copy of the new facts to pass to each collector in collected_facts so facts
# can reference other facts they depend on.
# NOTE: If we want complicated fact dict merging, this is where it would hook in
# type: t.Set[str]
# NOTE: deprecate/remove once DT lands
# we can return this data, but should not be top level key
# NOTE: this is just a boolean indicator that 'facts were gathered'
# and should be moved to the 'gather_facts' action plugin
# probably revised to handle modules/subsets combos
# Add a collector that knows what gather_subset we used so it can provide a fact
# This method is supposed to return True/False if the package manager is currently installed/usable
# It can also 'prep' the required systems in the process of detecting availability
# If handle_exceptions is false it should raise exceptions related to manager discovery instead of handling them.
# This method should return a list of installed packages, each list item will be passed to get_package_details
# This takes a 'package' item and returns a dictionary with the package information, name and version are minimal requirements
# Take all of the above and return a dictionary of lists of dictionaries (package = list of installed versions)
# type: t.List[str]
# Not an interesting exception to raise, just a speculative probe
# It looks like this package manager is installed
# See if respawning will help
# The module will exit when the respawned copy completes
# handle multiline values, they will not have a starting key
# Add the newline back in so people can split on it to parse
# lines if they need to.
# self.namespace is an object with a 'transform' method that transforms
# the name to indicate the namespace (ie, adds a prefix or suffix).
# pop the item by old_key and replace it using new_key
# TODO/MAYBE: rename to 'collect' and add 'collect_without_namespace'
# collect, then transform the key names if needed
# Retrieve module parameters
# the list of everything that 'all' expands to
# if provided, minimal_gather_subset is always added, even after all negations
# Retrieve all facts elements
# total always starts with the min set, then
# adds all of the additions in gather_subset, then
# excludes all of the excludes, then adds any explicitly
# requested subsets.
# subsets we mention in gather_subset explicitly, except for 'all'/'min'
# include 'devices', 'dmi' etc for '!hardware'
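The set algebra described above can be sketched with a hypothetical helper (the name and parameters are illustrative; alias expansion such as 'hardware' -> 'devices', 'dmi' is left out):

```python
def resolve_gather_subset(gather_subset, all_names, minimal_set):
    additions, exclusions, explicit = set(), set(), set()
    for term in gather_subset:
        if term == 'all':
            additions |= all_names
        elif term.startswith('!'):
            exclusions |= all_names if term == '!all' else {term[1:]}
        else:
            additions.add(term)
            explicit.add(term)   # subsets mentioned explicitly, except 'all'
    # min set + additions - exclusions, then re-add explicit requests and the
    # minimal set, which is always retained even after all negations
    return ((minimal_set | additions) - exclusions) | explicit | minimal_set
```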
# NOTE: this only treats adding an unknown gather subset as an error. Asking to
# start from specific platform, then try generic
# ask the class if it is compatible with the platform info
# use gather_name etc to get the list of collectors
# tweak the modules GATHER_TIMEOUT
# maps alias names like 'hardware' to the list of names that are part of hardware
# like 'devices' and 'dmi'
# all_facts_subsets maps the subset name ('hardware') to the class that provides it.
# TODO: name collisions here? are there facts with the same name as a gather_subset (all, network, hardware, virtual, ohai, facter)
# expand any fact_id/collectorname/gather_subset term ('all', 'env', etc) to the list of names it represents
# timeout function to make sure some fact gathering
# steps do not exceed a time limit
# This is an ansible.module_utils.common.facts.timeout.TimeoutError
# If we were called as @timeout, then the first parameter will be the
# function we are to wrap instead of the number of seconds.  Detect this
# and correct it by setting seconds to our default value and return the
# inner decorator function manually wrapped around the function
# If we were called as @timeout([...]) then python itself will take
# care of wrapping the inner decorator around the function
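The dual-use decorator detection described above can be sketched as follows. This hypothetical version only records the timeout; actually enforcing it (e.g. via `signal.alarm` or a worker thread) is omitted.

```python
import functools

DEFAULT_GATHER_TIMEOUT = 10

def timeout(seconds=None):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # a real implementation would enforce wrapper._timeout here
            return func(*args, **kwargs)
        wrapper._timeout = timeout_value
        return wrapper

    if callable(seconds):
        # called as @timeout: the first parameter is the function to wrap, not
        # the number of seconds, so use the default and wrap it manually
        func, timeout_value = seconds, DEFAULT_GATHER_TIMEOUT
        return decorator(func)

    # called as @timeout(...): python itself wraps the decorator around func
    timeout_value = seconds if seconds is not None else DEFAULT_GATHER_TIMEOUT
    return decorator
```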
# try to not enter kernel 'block' mode, which prevents timeouts
# not required to operate, but would have been nice!
# actually read the data
# ignore errors as some jails/containers might have readable permissions but not allow reads
# Block total/available/used
# Inode total/available/used
# don't add a prefix
# List all `scope host` routes/addresses.
# They belong to routes, but it means the whole prefix is reachable
# locally, regardless of specific IP addresses.
# E.g.: 192.168.0.0/24, any IP address is reachable from this range
# if assigned as scope host.
# Use the commands:
# to find out the default outgoing interface, address, and gateway
# v6 routing may result in
# A valid output starts with the queried address on the first line
# FIXME: maybe split into smaller methods?
# FIXME: this is pretty much a constructor
# Check whether an interface is in promiscuous mode
# The second byte indicates whether the interface is in promiscuous mode.
# 1 = promisc
# 0 = no promisc
# TODO: determine if this needs to be in a nested scope/closure
# pointopoint interfaces do not have a prefix
# NOTE: device is ref to outside scope
# NOTE: interfaces is also ref to outside scope
# add this secondary IP to the main device
# NOTE: default_ipv4 is ref to outside scope
# If this is the default address, update default_ipv4
# NOTE: macaddress is ref from outside scope
# If this is the default address, update default_ipv6
# possibly busybox, fallback to running without the "primary" arg
# https://github.com/ansible/ansible/issues/50871
# replace : by _ in interface name since they are hard to use in template
# i is a dict key (string) not an index int
# FIXME: exit early on falsey ethtool_path and un-indent
# FIXME: exit early on falsey if we can
# FIXME: remove load_on_init when we can
# TODO: more or less abstract/NotImplemented
# MAYBE: we could try to build this based on the arch specific implementation of Network() or its kin
# Network munges cached_facts by side effect, so give it a copy
# media line is different to the default FreeBSD one
# not sure if this is useful - we also drop information
# Mac does not give us this
# MacOSX sets the media to '<unknown type>' for bridge interface
# and parsing splits this into two words; this if/else helps
# OpenBSD 'ifconfig -a' does not have information about aliases
# Return macaddress instead of lladdr
# example of line:
# $ ifconfig
# ne0: flags=8863<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST> mtu 1500
# FIXME: build up an interfaces data structure, then assign into network_facts
# remove '--'
# remove /dev/ from /dev/eth0
# Solaris 'ifconfig -a' will print interfaces twice, once for IPv4 and again for IPv6.
# MTU and FLAGS also may differ between IPv4 and IPv6 on the same interface.
# 'parse_interface_line()' checks for previously seen interfaces before defining
# 'current_if' so that IPv6 facts don't clobber IPv4 facts (or vice versa).
# 'parse_interface_line' and 'parse_inet*_line' leave two dicts in the
# ipv4/ipv6 lists which is ugly and hard to read.
# This quick hack merges the dictionaries. Purely cosmetic.
# will be overwritten later
# Solaris displays single digit octets in MAC addresses e.g. 0:1:2:d:e:f
# Add leading zero to each octet where needed.
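The zero-padding above is a one-liner; a sketch (the function name is illustrative):

```python
def pad_mac_octets(mac):
    # Solaris may print single digit octets, e.g. 0:1:2:d:e:f;
    # add a leading zero to each octet where needed
    return ':'.join('%02x' % int(octet, 16) for octet in mac.split(':'))
```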
# Fibre Channel WWN initiator related facts collection for ansible.
# on solaris 10 or solaris 11 should use `fcinfo hba-port`
# TBD (not implemented): on solaris 9 use `prtconf -pv`
# fcinfo hba-port | grep "Port WWN"
# HBA Port WWN: 10000090fa1658de
# get list of available fibre-channel devices (fcs)
# if device is available (not in defined state), get its WWN
# example output
# lscfg -vpl fcs3 | grep "Network Address"
# go ahead if we have both commands available
# ioscan / get list of available fibre-channel devices (fcd)
# get device information
# lookup the following line
# AIX 'ifconfig -a' does not have three words in the interface line
# only this condition differs from GenericBsdIfconfigNetwork
# don't bother with wpars it does not work
# zero means not in wpar
# device must have mtu attribute in ODM
# AIX 'ifconfig -a' does not inform about MTU, so remove current_if['mtu'] here
# Collect output from route command
# help pick the right interface address on OpenBSD
# help pick the right interface address on NetBSD
# FreeBSD, DragonflyBSD, NetBSD, OpenBSD and macOS all implicitly add '-a'
# when running the command 'ifconfig'.
# Solaris must explicitly run the command 'ifconfig -a'.
# Newer FreeBSD versions
# Mac has options like this...
# FreeBSD has options like this...
# netbsd show aliases like this
# cidr style ip address (eg, 127.0.0.1/24) in inet line
# used in netbsd ifconfig -e output after 7.1
# Don't just assume columns, use "netmask" as the index for the prior column
# deal with hex netmask
# otherwise assume this is a dotted quad
# calculate the network
# broadcast may be given or we need to calculate
# add to our list of addresses
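The netmask normalization and network/broadcast derivation above can be sketched like this (a hypothetical, IPv4-only helper; the real parser pulls these fields out of an ifconfig inet line):

```python
import socket
import struct

def inet_line_facts(address, netmask):
    # deal with a hex netmask; otherwise assume it's already a dotted quad
    if netmask.startswith('0x'):
        netmask = socket.inet_ntoa(struct.pack('!L', int(netmask, 16)))
    addr = struct.unpack('!L', socket.inet_aton(address))[0]
    mask = struct.unpack('!L', socket.inet_aton(netmask))[0]
    # calculate the network; broadcast may be given by ifconfig,
    # or we need to calculate it like this
    network = socket.inet_ntoa(struct.pack('!L', addr & mask))
    broadcast = socket.inet_ntoa(struct.pack('!L', addr | (~mask & 0xFFFFFFFF)))
    return {'address': address, 'netmask': netmask,
            'network': network, 'broadcast': broadcast}
```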
# using cidr style addresses, ala NetBSD ifconfig post 7.1
# we are going to ignore unknown lines here - this may be
# a bad idea - but you can override it in your subclass
# TODO: these are module scope static function candidates
# copy all the interface values across except addresses
# iSCSI initiator related facts collection for Ansible.
# NVMe initiator related facts collection for Ansible.
# TODO: flatten
# Check if we have SSLContext support
# Determine if a system is in 'fips' mode
# NOTE: this is populated even if it is not set
# Copyright: (c) Ansible Project
# not finding the file, exit early
# if just the path needs to exists (ie, it can be empty) we are done
# file exists but is empty and we don't allow_empty
# file exists with some content
# every distribution name mentioned here must have one of
# keep names in sync with Conditionals page of docs
# We can't include this in SEARCH_STRING because a name match on its keys
# causes a fallback to using the first whitespace separated item from the file content
# as the name. For os-release, that is in form 'NAME=Arch'
# can't find that dist file, or it is incorrectly empty
# look for the distribution string in the data and replace according to RELEASE_NAME_MAP
# only the distribution name is set, the version is assumed to be correct from distro.linux_distribution()
# this sets distribution=RedHat if 'Red Hat' shows up in data
# this sets distribution to what's in the data, e.g. CentOS, Scientific, ...
# call a dedicated function for parsing the file content
# TODO: replace with a map or a class
# FIXME: most of these don't actually look at the dist file contents, but random other stuff
# this should never happen, but if it does fail quietly and not with a traceback
# to debug multiple matching release files, one can use:
# self.facts['distribution_debug'].append({path + ' ' + name:
# try to find out which linux distribution this is
# distribution_release can be the empty string
# Try to handle the exceptions now ...
# self.facts['distribution_debug'] = []
# but we allow_empty. For example, ArchLinux with an empty /etc/arch-release and a
# /etc/os-release with a different name
# keep looking
# finally found the right os dist file and were able to parse it
# distribution and file_variety are the same here, but distribution
# will be changed/mapped to a more specific name.
# ie, dist=Fedora, file_variety=RedHat
# FIXME: split distro file parsing into its own module or class
# TODO: remove if tested without this
# example patterns are 13.04, 13.0, 13
# SLES doesn't have funny release names
# no minor number, so it is the first release
# Check VARIANT_ID first for SLES4SAP or SL-Micro
# Fallback for older SLES 15 using baseproduct symlink
# Last resort: try to find release from tzdata as either lsb is missing or this is very old debian
# nothing else to do, Ubuntu gets correct info from python functions
# nothing else to do, SteamOS gets correct info from python functions
# Kali does not provide /etc/lsb-release anymore
# FIXME: pass in ro copy of facts for this kind of thing
# include fix from #15230, #15228
# TODO: verify this is ok for above bugs
# keep keys in sync with Conditionals page of docs
# The platform module provides information about the running
# system/distribution. Use this as a baseline and fix buggy systems
# afterwards
# linux_distribution_facts = LinuxDistribution(module).get_distribution_facts()
# look for an os family alias for the 'distribution', if there isn't one, use 'distribution'
# for solaris 10 uname_r will contain 5.10, for solaris 11 it will have 5.11
# Data and time related facts collection for ansible.
# Store the timestamp once, then get local and UTC versions from that
# epoch returns float or string in some non-linux environments
# epoch_int always returns integer format of epoch
# i86pc is a Solaris and derivatives-ism
# platform.system() can be Linux, Darwin, Java, or Windows
# Attempt to use getconf to figure out architecture
# fall back to bootinfo if needed
# Get systemd version and features
# go over .fact files, run executables, read the rest, skip bad ones with a warning and note
# use filename for key where it will sit under local facts
# run it
# ignores exceptions and returns empty
# ensure we have unicode
# try to read it as json first
# if that fails read it with ConfigParser
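The JSON-then-INI fallback described above can be sketched roughly as follows (the function name and return shape are illustrative, not the actual collector API):

```python
import configparser
import json


def parse_fact_file(text):
    """Sketch: try JSON first; if that fails, read it as an INI file."""
    try:
        return json.loads(text)
    except ValueError:
        # not JSON -- treat it as INI, yielding one dict per section
        cp = configparser.ConfigParser()
        cp.read_string(text)
        return {section: dict(cp.items(section)) for section in cp.sections()}
```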
# Collect facts related to apparmor
# check if my file system is the root one
# I'm not root or no proc, fallback to checking it is inode #2
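A rough sketch of that two-step check (the helper name is made up; the real collector handles more cases, e.g. the `debian_chroot` environment variable):

```python
import os


def in_chroot(proc_root='/proc/1/root'):
    """Compare / against PID 1's root when readable; without root or /proc,
    fall back to the classic "the real root directory is inode #2" rule
    (true for ext-style filesystems, not all of them)."""
    my_root = os.stat('/')
    try:
        pid1_root = os.stat(proc_root)
    except OSError:
        # not root, or no /proc: fall back to the inode-2 heuristic
        return my_root.st_ino != 2
    return (my_root.st_dev, my_root.st_ino) != (pid1_root.st_dev, pid1_root.st_ino)
```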
# (c) 2021 Ansible Project
# (0.58, 0.82, 0.98)
# Collect facts related to the system package manager
# A list of dicts.  If there is a platform with more than one
# package manager, put the preferred one last.  If there is an
# ansible module, use that as the value for the 'name' key.
# NOTE the `path` key for dnf/dnf5 is effectively discarded when matched for Red Hat OS family,
# special logic to infer the default `pkg_mgr` is used in `PkgMgrFactCollector._check_rh_versions()`
# leaving them here so a list of package modules can be constructed by iterating over `name` keys
# the fact ends up being 'pkg_mgr' so stick with that naming/spelling
# Since /usr/bin/dnf and /usr/bin/microdnf can point to different versions of dnf in different distributions
# the only way to infer the default package manager is to look at the binary they are pointing to.
# /usr/bin/microdnf is likely used only in fedora minimal container so /usr/bin/dnf takes precedence
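The binary-resolution idea above can be sketched like this; matching on the basename `dnf5` is an assumption of this sketch, and `/usr/bin/dnf` is listed first so it takes precedence over microdnf:

```python
import os


def infer_default_dnf(bin_paths=('/usr/bin/dnf', '/usr/bin/microdnf')):
    """Resolve each dnf entry point and map its symlink target to a name."""
    for bin_path in bin_paths:
        if os.path.exists(bin_path):
            target = os.path.realpath(bin_path)
            return 'dnf5' if os.path.basename(target) == 'dnf5' else 'dnf'
    return None  # no dnf entry point found
```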
# Check if '/usr/bin/apt' is APT-RPM or an ordinary (dpkg-based) APT.
# There's an rpm package on Debian, so checking whether /usr/bin/rpm exists
# is not enough. Instead, ask RPM whether /usr/bin/apt-get belongs to some
# RPM package.
# No apt-get in RPM database. Looks like Debian/Ubuntu
# with rpm package installed
# Filter out /usr/bin/pkg because on Altlinux it is actually the
# perl-Package (not the Solaris package manager).
# Since pkg5 takes precedence over apt, this workaround
# is required to select the suitable package manager on Altlinux.
# Handle distro family defaults when more than one package manager is
# installed or available to the distro: the ansible_fact entry should be
# the default package manager officially supported by the distro.
# It's possible to install dnf, zypper, rpm, etc inside of
# Debian. Doing so does not mean the system wants to use them.
# Check if /usr/bin/apt-get is ordinary (dpkg-based) APT or APT-RPM
# Collect facts related to selinux
# If the selinux library is missing, only set the status and selinux_python_present, since
# there is no way to tell whether SELinux is enabled or disabled on the system
# without the library.
# Set a boolean for testing whether the Python library is present
# list of directories to check for ssh keys
# used in the order listed here, the first one with keys is used
# a previous keydir was already successful, stop looking
# for keys
# Collect facts related to system service manager and init.
# this should show whether systemd is the boot init system, in case checking init failed to mark it as systemd
# check if /sbin/init is a symlink to systemd
# on SUSE, /sbin/init may be missing if systemd-sysvinit package is not installed.
# TODO: detect more custom init setups like bootscripts, dmd, s6, Epoch, etc
# also other OSs other than linux might need to check across several possible candidates
# Mapping of proc_1 values to more useful names
# try various forms of querying pid 1
# if the command fails, stdout is an empty string, or the output of the command starts with what looks like a PID,
# then the 'ps' command probably didn't work the way we wanted, probably because it's busybox
# The ps command above may return "COMMAND" if the user cannot read /proc, e.g. with grsecurity
# many systems return init, so this cannot be trusted; if it ends in 'sh' it is probably a shell in a container
# if not init/None it should be an identifiable or custom init, so we are done!
# Lookup proc_1 value in map and use proc_1 value itself if no match
# start with the easy ones
# FIXME: find way to query executable, version matching is not ideal
# FIXME: we might want to break out to individual BSDs or 'rc'
# FIXME: mv is_systemd_managed
# if we cannot detect, fallback to generic 'service'
# Collect facts related to LSB (Linux Standard Base)
# try the 'lsb_release' script first
# no lsb_release, try looking in /etc/lsb-release
# Collect facts related to systems 'capabilities' via capsh
# NOTE: -> get_caps_data()/parse_caps_data() for easier mocking -akl
# For more information, check: http://people.redhat.com/~rjones/virt-what/
# We want to maintain compatibility with the old "virtualization_type"
# and "virtualization_role" entries, so we need to track if we found
# them. We won't return them until the end, but if we found them early,
# we should avoid updating them again.
# But as we go along, we also want to track virt tech the new way.
# lxc/docker
# lxc does not always appear in cgroups anymore, but sets the 'container=lxc' environment var; requires root privs
# If docker/containerd has a custom cgroup parent, checking /proc/1/cgroup (above) might fail.
# https://docs.docker.com/engine/reference/commandline/dockerd/#default-cgroup-parent
# Fallback to more rudimentary checks.
# ensure 'container' guest_tech is appropriately set
# assume guest for this block
# FIXME: This does also match hyperv
# unassume guest
# Beware that we can have both kvm and virtualbox running on a single system
# Check whether this is a RHEV hypervisor (is vdsm running?)
# We add both kvm and RHEV to host_tech in this case.
# It's accurate. RHEV uses KVM.
# In older Linux kernel versions, the /sys filesystem is not available
# dmidecode is the safest option to parse virtualization related values
# We still want to continue even if dmidecode is not available
# Strip out commented lines (specific dmidecode output)
# If none of the above matches, return 'NA' for virtualization_type
# and virtualization_role. This allows for proper grouping.
# base classes for virtualization facts
# FIXME: remove load_on_init if we can
# FIXME: just here for existing tests cases till they are updated
# Note the _fact_class impl is actually the FreeBSDVirtual impl
# Set empty values as default
# Check dmesg for whether vmm(4) attached, indicating the host is
# capable of virtualization.
# We do similar to what we do in linux.py -- We want to allow multiple
# virt techs to show up, but maintain compatibility, so we have to track
# when we would have stopped, even though now we go through everything.
# The above logic is tried first for backwards compatibility. If
# something above matches, use it. Otherwise if the result is still
# empty, try machdep.hypervisor.
# We call update here, then re-set virtualization_tech_host/guest
# later.
# Check if it's a zone
# Check if it's a branded zone (i.e. Solaris 8/9 zone)
# If it's a zone check if we can detect if our global zone is itself virtualized.
# Relies on the "guest tools" (e.g. vmware tools) to be installed
# Detect domaining on Sparc hardware
# The output of virtinfo is different whether we are on a machine with logical
# domains ('LDoms') on a T-series or domains ('Domains') on a M-series. Try LDoms first.
# The output contains multiple lines with different keys like this:
# The output may also be not formatted and the returncode is set to 0 regardless of the error condition:
# Copyright (c) 2023 Ansible Project
# Prefer to use cfacter if available
# if facter is installed, and we can use --json because
# ruby-json is ALSO installed, include facter data in the JSON
# for some versions of facter, --puppet returns an error if puppet is not present,
# try again w/o it, other errors should still appear and be sent back
# Note that this mirrors previous facter behavior, where there isn't
# an 'ansible_facter' key in the main fact dict; instead, 'facter_whatever'
# items are added to the main dict.
# TODO: if we fail, should we add an empty facter key or nothing?
# import this as a module to ensure we get the same module instance
# Originally only had these four as top-level facts
# Now we have all of these in a dict structure
# regex used against findmnt output to detect bind mounts
# regex used against mtab content to find entries that are bind mounts
# regex used for replacing octal escape sequences
# Only interested in the first line
# Check for vme cpu flag, Xen paravirt does not expose this.
# model name is for Intel arch, Processor (mind the uppercase P)
# works for some ARM devices, like the Sheevaplug.
# S390x classic cpuinfo
# SPARC
# Skip for platforms without vendor_id/model_name in cpuinfo (e.g. ppc64le)
# The fields for ARM CPUs do not always include 'vendor_id' or 'model name',
# and sometimes include both 'processor' and 'Processor'.
# The fields for Power CPUs include 'processor' and 'cpu'.
# Always use 'processor' count for ARM and Power systems
# getting sockets would require 5.7+ with CONFIG_SCHED_TOPOLOGY
# if the number of processors available to the module's
# thread cannot be determined, the processor count
# reported by /proc will be the default (as previously defined)
# In Python < 3.3, os.sched_getaffinity() is not available
# Use kernel DMI info, if available
# DMI SPEC -- https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_3.2.0.pdf
# Fall back to using dmidecode, if available
# call lsblk and collect all uuids
# --exclude 2 makes lsblk ignore floppy disks, which are slower to answer than typical timeouts
# this uses the linux major device number
# for details see https://www.kernel.org/doc/Documentation/devices.txt
# each line will be in format:
# <devicename><some whitespace><uuid>
# /dev/sda1  32caaec3-ef40-4691-a3b6-438c3f9bc1c0
# fallback for versions of lsblk <= 2.23 that don't have --paths, see _run_lsblk() above
# find bind mounts, in case /etc/mtab is a symlink to /proc/mounts
# fields[0] is the TARGET, fields[1] is the SOURCE
# bind mounts will have a [/directory_name] in the SOURCE column
# Convert to an integer using base 8, then convert to a character
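That base-8 conversion is the standard trick for undoing the octal escaping /proc/mounts applies to special characters (a space appears as `\040`). A minimal sketch, with an illustrative helper name:

```python
import re

OCTAL_ESCAPE = re.compile(r'\\([0-7]{3})')


def decode_octal_escapes(path):
    """Parse each \\NNN sequence's digits with base 8, then turn the
    resulting integer back into a character."""
    return OCTAL_ESCAPE.sub(lambda m: chr(int(m.group(1), 8)), path)
```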
# _udevadm_uuid is a fallback for versions of lsblk <= 2.23 that don't have --paths
# see _run_lsblk() above
# https://github.com/ansible/ansible/issues/36077
# gather system lists
# start threads to query each mount
# Transform octal escape sequences
# only add if not already there, we might have a plain /etc/mtab
# done with spawning new workers, start gc
# wait for workers and get results
# failed, try to find out why, if 'res.successful' we know there are no exceptions
# move results outside and make loop only handle pending
# avoid cpu churn, sleep between retrying for loop with remaining mounts
# we can get NVMe device's serial number from /sys/block/<name>/device/serial
# Historically, `support_discard` simply returned the value of
# `/sys/block/{device}/queue/discard_granularity`. When its value
# is `0`, then the block device doesn't support discards;
# _however_, it being greater than zero doesn't necessarily mean
# that the block device _does_ support discards.
# Another indication that a block device doesn't support discards
# is `/sys/block/{device}/queue/discard_max_hw_bytes` being equal
# to `0` (with the same caveat as above). So if either of those are
# `0`, set `support_discard` to zero, otherwise set it to the value
# of `discard_granularity` for backwards compatibility.
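The rule spelled out above reduces to a few lines; `read_queue_attr` here is a stand-in for reading `/sys/block/<device>/queue/<attr>`, not the real collector's API:

```python
def support_discard(read_queue_attr):
    """Report discard_granularity for backwards compatibility, but force 0
    when either discard_granularity or discard_max_hw_bytes is 0."""
    granularity = read_queue_attr('discard_granularity')
    max_hw_bytes = read_queue_attr('discard_max_hw_bytes')
    if granularity == 0 or max_hw_bytes == 0:
        return 0
    return granularity
```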
# sysfs sectorcount assumes 512 blocksize. Convert using the correct sectorsize
# domains are numbered (0 to ffff), bus (0 to ff), slot (0 to 1f), and function (0 to 7).
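Those ranges can be captured in a single regex; a sketch assuming lowercase hex as sysfs prints it:

```python
import re

# <domain>:<bus>:<slot>.<function>, e.g. 0000:00:1f.2 --
# domain 0..ffff, bus 0..ff, slot 0..1f, function 0..7
PCI_ADDRESS = re.compile(r'^[0-9a-f]{4}:[0-9a-f]{2}:[01][0-9a-f]\.[0-7]$')
```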
# vgs fields: VG #PV #LV #SN Attr VSize VFree
# lvs fields:
# LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
# pvs fields: PV VG #Fmt #Attr PSize PFree
# TODO: mounts isn't exactly hardware
# Intel
# PowerPC
# Free = Total - (Wired + active + inactive)
# Get a generator of tuples from the command output so we can later
# turn it into a dictionary
# Strip extra left spaces from the value
# Most values convert cleanly to integer values but if the field does
# not convert to an integer, just leave it alone.
# On Darwin, the default format is annoying to parse.
# Use -b to get the raw value and decode it.
# We need to get raw bytes, not UTF-8.
# kern.boottime returns seconds and microseconds as two 64-bits
# fields, but we are only interested in the first field.
# Note: This uses the freebsd fact class, there is no dragonfly hardware fact class
# storage devices notoriously prone to hang/block so they are under a timeout
# Get free memory. vmstat output looks like:
# Get swapctl info. swapctl output looks like:
# total: 69268 1K-blocks allocated, 0 used, 69268 available
# And for older OpenBSD:
# total: 69268k bytes allocated = 0k used, 69268k available
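Both `total:` formats shown above carry three sizes in kilobytes, so one parser covers them; a sketch (the function name is illustrative):

```python
import re


def parse_swapctl_total(line):
    """Pull (allocated, used, available) out of either swapctl total line;
    the optional trailing 'k' covers the older OpenBSD format."""
    kilobytes = [int(n) for n in re.findall(r'(\d+)k\b|(?<=\s)(\d+)\b', line)
                 for n in (n,) if n] if False else \
                [int(n) for n in re.findall(r'(\d+)k?\b', line)]
    return tuple(kilobytes[:3])
```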
# On OpenBSD, we need to call it with -n to get this value as an int.
# The following is partly a lie because there is no reliable way to
# determine the number of physical CPUs in the system. We can only
# query the number of logical CPUs, which hides the number of cores.
# On amd64/i386 we could try to inspect the smt/core/package lines in
# dmesg, however even those have proven to be unreliable.
# So take a shortcut and report the logical number of processors in
# 'processor_count' and 'processor_cores' and leave it at that.
# We don't use dmidecode(8) here because:
# - it would add dependency on an external package
# - dmidecode(8) can only be run as root
# So instead we rely on sysctl(8) to provide us the information on a
# best-effort basis. As a bonus we also get facts on non-amd64/i386
# platforms this way.
# On NetBSD, we need to call sysctl with -n to get this value as an int.
# Get swapinfo.  swapinfo output looks like:
# Device          1M-blocks     Used    Avail Capacity
# /dev/ada0p3        314368        0   314368     0%
# On FreeBSD, the default format is annoying to parse.
# TODO: rc, disks, err = self.module.run_command("/sbin/sysctl kern.disks")
# FIXME: why add the fact and then test if it is json?
# FIXME: could pass to run_command(environ_update), but it also tweaks the env
# Use C locale for hardware collection helpers to avoid locale specific number formatting (#24542)
# "brand" works on Solaris 10 & 11. "implementation" for Solaris 9.
# Add clock speed to description for SPARC CPU
# Counting cores on Solaris can be complicated.
# https://blogs.oracle.com/mandalika/entry/solaris_show_me_the_cpu
# Treat 'processor_count' as physical sockets and 'processor_cores' as
# virtual CPUs visible to Solaris. Not a true count of cores for modern SPARC as
# these processors have: sockets -> cores -> threads/virtual CPU.
# For a detailed format description see mnttab(4)
# On Solaris 8 the prtdiag wrapper is absent from /usr/sbin,
# but that's okay, because we know where to find the real thing:
# rc returns 1
# If you know of any other manufacturers whose names appear in
# the first line of prtdiag's output, please add them here:
# Device facts are derived from sderr kstats. This code does not use the
# full output, but rather queries for specific stats.
# Example output:
# sderr:0:sd0,err:Hard Errors     0
# sderr:0:sd0,err:Illegal Request 6
# sderr:0:sd0,err:Media Error     0
# sderr:0:sd0,err:Predictive Failure Analysis     0
# sderr:0:sd0,err:Product VBOX HARDDISK   9
# sderr:0:sd0,err:Revision        1.0
# sderr:0:sd0,err:Serial No       VB0ad2ec4d-074a
# sderr:0:sd0,err:Size    53687091200
# sderr:0:sd0,err:Soft Errors     0
# sderr:0:sd0,err:Transport Errors        0
# sderr:0:sd0,err:Vendor  ATA
# sample kstat output:
# unix:0:system_misc:boot_time    1548249689
# uptime = $current_time - $boot_time
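Applying `uptime = current_time - boot_time` to the `kstat -p` line shown above is a one-liner plus parsing; a sketch with an illustrative helper name:

```python
import time


def uptime_seconds(kstat_line, now=None):
    """The kstat -p output is key, whitespace, epoch seconds; subtract the
    boot time from the current time."""
    boot_time = int(kstat_line.split()[-1])
    if now is None:
        now = int(time.time())
    return now - boot_time
```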
# TODO: very inefficient calls to machinfo,
# should just make one and then deal with finding the data (see facts/sysctl)
# but not going to change unless there is hp/ux for testing
# Working with machinfo mess
# machinfo returns 'cores' strings on releases B.11.31 > 1204
# If hyperthreading is active divide cores by 2
# For systems where memory details aren't sent to syslog or the log has rotated, use parsed
# adb output. Unfortunately /dev/kmem doesn't have world-read, so this only works as root.
# FIXME: not clear how to detect multi-sockets
# On AIX, there are no options to get the uptime directly in seconds.
# Your options are to parse the output of "who", "uptime", or "ps".
# Only "ps" always provides a field with seconds.
# Parse out
# Calculate uptime in seconds
# AIX does not have mtab, but the mount command is the only source of this info
# (short of using API calls to get the same info)
# normal mount
# nfs or cifs based mount
# in case of nfs, if no mount options are provided on the command line,
# add an empty string into fields...
# Copyright: 2017, Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause )
# prevent unhashable types from bombing, but keep the rest of the existing fallback/error behavior
# Copyright (c) 2023 Ansible
# pylint: disable=wildcard-import,unused-wildcard-import
# catch *all* exceptions to prevent type annotation support module bugs causing runtime failures
# (eg, https://github.com/ansible/ansible/issues/77857)
# type: ignore[assignment,no-redef]
# this import and patch occur after typing_extensions/typing imports since the presence of those modules affects dataclasses behavior
# Vendored copy of distutils/version.py from CPython 3.9.5
# Implements multiple version numbering conventions for the
# Python Module Distribution Utilities.
# PSF License (see licenses/PSF-license.txt or https://opensource.org/licenses/Python-2.0)
# Interface for version-number classes -- must be implemented
# by the following classes (the concrete ones -- Version should
# be treated as an abstract class).
# numeric versions don't match
# prerelease stuff doesn't matter
# have to compare prerelease
# case 1: neither has prerelease; they're equal
# case 2: self has prerelease, other doesn't; other is greater
# case 3: self doesn't have prerelease, other does: self is greater
# case 4: both have prerelease: must compare them!
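The four cases enumerate cleanly; a minimal sketch of the comparison (not the actual StrictVersion method), assuming the numeric parts already compared equal and a prerelease is either `None` or a tuple like `('a', 1)`:

```python
def compare_prerelease(pre_self, pre_other):
    """Return -1, 0 or 1 per the four prerelease cases."""
    if pre_self is None and pre_other is None:
        return 0    # case 1: neither has a prerelease; they're equal
    if pre_self is not None and pre_other is None:
        return -1   # case 2: self has a prerelease, other doesn't; other is greater
    if pre_self is None:
        return 1    # case 3: self doesn't have a prerelease, other does; self is greater
    # case 4: both have prereleases: compare the tuples
    return (pre_self > pre_other) - (pre_self < pre_other)
```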
# end class StrictVersion
# The rules according to Greg Stein:
# 1) a version number has 1 or more numbers separated by a period or by
#    sequences of letters.  If only periods, then these are compared
#    left-to-right to determine an ordering.
# 2) sequences of letters are part of the tuple for comparison and are
#    compared lexically
# 3) recognize the numeric components may have leading zeroes
# The LooseVersion class below implements these rules: a version number
# string is split up into a tuple of integer and string components, and
# comparison is a simple tuple comparison.  This means that version
# numbers behave in a predictable and obvious way, but a way that might
# not necessarily be how people *want* version numbers to behave.  There
# wouldn't be a problem if people could stick to purely numeric version
# numbers: just split on period and compare the numbers as tuples.
# However, people insist on putting letters into their version numbers;
# the most common purpose seems to be:
#   - indicating a "pre-release" version
#     ('alpha', 'beta', 'a', 'b', 'pl', 'src'),
#   - indicating a "post-release" patch ('p', 'pl', 'patch')
# but of course this can't cover all version number schemes, and there's
# no way to know what a programmer means without asking them.
# The problem is what to do with letters (and other non-numeric
# characters) in a version number.  The current implementation does the
# obvious and predictable thing: keep them as strings and compare
# lexically within a tuple comparison.  This has the desired effect if
# an appended letter sequence implies something "post-release":
# eg. "0.99" < "0.99pl14" < "1.0", and "5.001" < "5.001m" < "5.002".
# However, if letters in a version number imply a pre-release version,
# the "obvious" thing isn't correct.  Eg. you would expect that
# "1.5.1" < "1.5.2a2" < "1.5.2", but under the tuple/lexical comparison
# implemented here, this just isn't so.
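Both the desired effect and the pitfall are easy to demonstrate with a simplified sketch of the split (runs of digits become ints, runs of letters stay strings, comparison is plain list comparison):

```python
import re


def loose_parts(vstring):
    """Simplified LooseVersion-style tokenization for tuple comparison."""
    return [int(part) if part.isdigit() else part
            for part in re.findall(r'\d+|[a-z]+', vstring)]


# desired effect: an appended letter sequence reads as "post-release"
assert loose_parts('0.99') < loose_parts('0.99pl14') < loose_parts('1.0')
# the pitfall: 'a2' is meant as a *pre*-release, yet sorts after 1.5.2
assert loose_parts('1.5.2') < loose_parts('1.5.2a2')
```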
# Two possible solutions come to mind.  The first is to tie the
# comparison algorithm to a particular set of semantic rules, as has
# been done in the StrictVersion class above.  This works great as long
# as everyone can go along with bondage and discipline.  Hopefully a
# (large) subset of Python module programmers will agree that the
# particular flavour of bondage and discipline provided by StrictVersion
# provides enough benefit to be worth using, and will submit their
# version numbering scheme to its domination.  The free-thinking
# anarchists in the lot will never give in, though, and something needs
# to be done to accommodate them.
# Perhaps a "moderately strict" version class could be implemented that
# lets almost anything slide (syntactically), and makes some heuristic
# assumptions about non-digits in version number strings.  This could
# sink into special-case-hell, though; if I was as talented and
# idiosyncratic as Larry Wall, I'd go ahead and implement a class that
# somehow knows that "1.2.1" < "1.2.2a2" < "1.2.2" < "1.2.2pl3", and is
# just as happy dealing with things like "2g6" and "1.13++".  I don't
# think I'm smart enough to do it right though.
# In any case, I've coded the test suite for this module (see
# ../test/test_version.py) specifically to fail on things like comparing
# "1.2a2" and "1.2".  That's not because the *code* is doing anything
# wrong, it's because the simple, obvious design doesn't match my
# complicated, hairy expectations for real-world version numbers.  It
# would be a snap to fix the test suite to say, "Yep, LooseVersion does
# the Right Thing" (ie. the code matches the conception).  But I'd rather
# have a conception that matches common notions about version numbers.
# I've given up on thinking I can reconstruct the version string
# from the parsed tuple -- so I just store the string here for
# use by __str__
# end class LooseVersion
# FIXME: swap restype to errcheck
# NB: matchpathcon is deprecated and should be rewritten on selabel_lookup (but will be a PITA)
# all ctypes pointers share the same base type
# just patch simple directly callable functions directly onto the module
# NB: this validation code must run after all the wrappers have been declared
# begin wrapper function impls
# end wrapper function impls
# Blowfish has been moved, but the deprecated import is used by paramiko versions older than 2.9.5.
# See: https://github.com/paramiko/paramiko/pull/2039
# TripleDES has been moved, but the deprecated import is used by paramiko versions older than 3.3.2 and 3.4.1.
# See: https://github.com/paramiko/paramiko/pull/2421
# (c) 2018 Toshio Kuratomi <tkuratomi@ansible.com>
# The following additional changes have been made:
# * Remove optparse since it is not needed for our use.
# * A format string including {} has been changed to {0} (py2.6 compat)
# * Port two calls from subprocess.check_output to subprocess.Popen().communicate() (py2.6 compat)
# There could be a 'distro' package/module that isn't what we expect, on the
# PYTHONPATH. Rather than erroring out in this case, just fall back to ours.
# We require more functions than distro.id(), but this is probably a decent
# test that we have something we can reasonably use.
# Our bundled copy
# A local copy of the license can be found in licenses/Apache-License.txt
# Modifications to this code have been made by Ansible Project
# Copyright (c) 2024 Ansible Project
# deprecated: description='typing.Self exists in Python 3.11+' python_version='3.10'
# DTFIX-FUTURE: subclasses need to be able to opt-in to blocking nested contexts of the same type (basically optional per-callstack singleton behavior)
# DTFIX-FUTURE: this class should enforce strict nesting of contexts; overlapping context lifetimes lead to incredibly difficult-to-debug problems
# DTFIX-FUTURE: make frozen=True dataclass subclasses work (fix the mutability of the contextvar instance)
# pylint: disable=declare-non-slot  # pylint bug, see https://github.com/pylint-dev/pylint/issues/9950
# DTFIX-FUTURE: actively block multiple entry
# Using slots for reduced memory usage and improved performance.
# deprecated: description='always use dataclass slots and keyword-only args' python_version='3.9'
# required for deferred dataclass validation
# deprecated: description='types.UnionType is available in Python 3.10' python_version='3.9'
# DTFIX-FUTURE: when cls must have a __post_init__, enforcing it as a no-op would be nice, but is tricky on slotted dataclasses due to double-creation
# ignore annotations which are not fields, indicated by the t.ClassVar annotation
# check value
# DTFIX-FUTURE: support optional literals
# check elements (for containers)
# nothing to validate (empty dataclass)
# DTFIX-FUTURE: support non-str literal types
# avoid duplicate messages where the cause was already concatenated to the exception message
# circular import from messages
# CAUTION: This function is exposed in public API as ansible.module_utils.datatag.deprecator_from_collection_name.
# not ansible-core
# The plugin type isn't a known deprecator type, so we have to assume the caller is intermediate code.
# We have no way of knowing if the intermediate code is deprecating its own feature, or acting on behalf of another plugin.
# Callers in this case need to identify the deprecating plugin name, otherwise only ansible-core will be reported.
# Reporting ansible-core is never wrong, it just may be missing an additional detail (plugin name) in the "on behalf of" case.
# The plugin type is known, but the caller isn't a specific plugin -- instead, it's core plugin infrastructure (the base class).
# AnsiballZ Python package for core modules
# AnsiballZ Python package for non-core library/role modules
# non-plugin core path, safe to use ansible-core for the same reason as the non-deprecator plugin type case above
# We're able to detect the namespace, collection and plugin type -- but we have no way to identify the plugin name currently.
# To keep things simple we'll fall back to just identifying the namespace and collection.
# In the future we could improve the detection and/or make it easier for a caller to identify the plugin name.
# Callers in this case need to identify the deprecator to avoid ambiguity, since it could be the same collection or another collection.
# DTFIX-FUTURE: deprecations from __init__ will be incorrectly attributed to a plugin of that name
# DOC_FRAGMENTS - no code execution
# FILTER - basename inadequate to identify plugin
# only for collections
# TEST - basename inadequate to identify plugin
# implies DEPRECATED
# DTFIX-FUTURE: rewrite target-side tracebacks to point at controller-side paths?
# deprecated: description='use the single-arg version of format_traceback' python_version='3.9'
# Suboptimal error handling, but since import order can matter, and this is a critical error path, better to fail silently
# than to mask the triggering error by issuing a new error/warning here.
# if things failed early enough that we can't figure this out, assume we want a traceback for troubleshooting
# deprecated: description='move BaseExceptionGroup support here from ControllerEventFactory' python_version='3.10'
# name is already fully qualified
# the value of is_controller can change after import; always pick it up from the module
# NOTE: The preprocess_unsafe and vault_to_text arguments are features of LegacyControllerJSONEncoder.
# transformations to "final" JSON representations can only use:
# str, float, int, bool, None, dict, list
# NOT SUPPORTED: tuple, set -- the representation of these in JSON varies by profile (can raise an error, may be converted to list, etc.)
# This means that any special handling required on JSON types that are not wrapped/tagged must be done in a pre-pass before serialization.
# The final type map cannot contain any JSON types other than tuple or set.
# DTFIX-FUTURE: once we have separate control/data channels for module-to-controller (and back), warn about this conversion
# NOTE: Since JSON requires string keys, there is no support for preserving tags on dictionary keys during serialization.
# DTFIX-FUTURE: optimize this to use all known str-derived types in type map / allowed types
# DTFIX-FUTURE: optimized exact-type table lookup first
# Preserve the built-in JSON encoder support for subclasses of scalar types.
# Preserve the built-in JSON encoder support for subclasses of dict and list.
# Additionally, add universal support for mappings and sequences/sets by converting them to dict and list, respectively.
# no current need to preserve tags on controller-only types or custom behavior for anything in `allowed_serializable_types`
# always recognize tagged types
# custom types that do not extend JSON-native types
# JSON-native scalars lacking custom handling
# This is our last chance to intercept the values in containers, so they must be wrapped here.
# Only containers natively understood by the built-in JSONEncoder are recognized, since any other container types must be present in serialize_map.
# JSONEncoder converts tuple to a list, so just make it a list now
# Any value here is a type not explicitly handled by this encoder.
# The profile default handler is responsible for generating an error or converting the value to a supported type.
# DTFIX-FUTURE: add strict UTF8 string encoding checking to serialization profiles (to match the checks performed during deserialization)
# DTFIX3: the surrogateescape note above isn't quite right, for encoding use surrogatepass, which does work
# DTFIX-FUTURE: this config setting should probably be deprecated
# bypass AnsiballZ import scanning
# covers all profile-based deserializers, not just modules
# dict is handled by the JSON deserializer
# legacy behavior from jsonify and container_to_text
# legacy _json_encode_fallback behavior
# JSONEncoder built-in behavior
# legacy parameters.py does this before serialization
# legacy _json_encode_fallback behavior *and* legacy parameters.py does this before serialization
# The bytes type is not supported, use str instead (future module profiles may support a bytes wrapper distinct from `bytes`).
# DTFIX5: support serialization of every type that is supported in the Ansible variable type system
# bytes intentionally omitted as they are not a supported variable type, they were not originally supported by the old AnsibleJSONEncoder
# DTFIX-FUTURE: consider replacing this with a socket import shim that installs the patch
# shared empty frozenset for default values
# Technical Notes
# Tagged values compare (and thus hash) the same as their base types, so a value that differs only by its tags will appear identical to non-tag-aware code.
# This will affect storage and update of tagged values in dictionary keys, sets, etc. While tagged values can be used as keys in hashable collections,
# updating a key usually requires removal and re-addition.
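That dict-key behavior follows from CPython keeping the first-inserted key object on update; a small demonstration with an illustrative stand-in class (not the real tagging implementation):

```python
class TaggedStr(str):
    """Equal to (and hashed like) its base value, but carrying metadata."""
    tags = ('example',)


d = {'name': 1}
d[TaggedStr('name')] = 2              # updates the value...
assert d == {'name': 2}
assert type(next(iter(d))) is str     # ...but keeps the original untagged key

del d['name']                         # removal and re-addition is what
d[TaggedStr('name')] = 2              # actually stores the tagged key
assert type(next(iter(d))) is TaggedStr
```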
# if no tags were removed, return the original instance
# DTFIX-FUTURE: provide a knob to optionally report the real type for debugging purposes
# if no tags to apply, just return what we got
# NB: this only works because the untaggable types are singletons (and thus direct type comparison works)
# include the existing tags first so new tags of the same type will overwrite
# this is needed to call __init__subclass__ on mixins for derived types
# DTFIX-FUTURE: is there a better way to exclude non-abstract types which are base classes?
# common usage assumes `data` is an intermediate dict provided by a deserializer
# omit None values when None is the field default
# DTFIX-FUTURE: this implementation means we can never change the default on fields which have None for their default
# DTFIX-FUTURE: optimize this to avoid the dataclasses fields metadata and get_origin stuff at runtime
# NOTE: only supports bare tuples, not optional or inside a union
# cannot use super() without arguments when using slots
# code gen a real __post_init__ method
# NOTE: This method is called twice when the datatag type is a dataclass.
# DTFIX-FUTURE: "freeze" this after module init has completed to discourage custom external tag subclasses
# When the datatag type is a dataclass, the first instance will be the non-dataclass type.
# It must be removed from the known tag types before adding the dataclass version.
# DTFIX-FUTURE: we really should have a way to use AnsibleError with obj in module_utils when it's controller-side
# used by the datatag Ansible/Jinja test plugin to find tags by name
# Include the key and value types in the type hints on Python 3.9 and later.
# Earlier versions do not support subscriptable dict.
# deprecated: description='always use subscriptable dict' python_version='3.8'
# try/except is a cheap way to determine if this is a tagged object without using isinstance
# handling Exception accounts for types that may raise something other than AttributeError
# handle cases where the instance always returns something, such as Marker or MagicMock
# by default, untag will revert to the native type when no tags remain
# by default, tagged types are assumed to subclass the type they augment
# NOTE: When not subclassing a native type, the derived type must set cls._native_type itself and cls._empty_tags_as_native to False.
# Subclasses of tagged types will already have a native type set and won't need to detect it.
# Special types which do not subclass a native type can also have their native type already set.
# Automatic item source selection is only implemented for types that don't set _native_type.
# Direct subclasses of native types won't have cls._native_type set, so detect the native type.
# Detect the item source if not already set.
# Use a collection specific factory for types with item sources.
# pylint: disable=abstract-class-instantiated
# There's no way to indicate cls is callable with a single arg without defining a useless __init__.
# use the underlying iterator to avoid access/iteration side effects (e.g. templating/wrapping on Lazy subclasses)
# type: ignore[call-arg,misc]
# this is used when the value is a generator
# nonempty __slots__ not supported for subtype of 'bytes'
# nonempty __slots__ not supported for subtype of 'int'
# NB: Tags are intentionally not preserved for operator methods that return a new instance. In-place operators ignore tags from the `other` instance.
# Propagation of tags in these cases is left to the caller, based on needs specific to their use case.
# trigger the bug by exposing typing.ClassVar via a module reference that is not `typing`
# this is the broken case requiring patching: ClassVar dot-referenced from a module that is not `typing` is treated as an instance field
# DTFIX-FUTURE: file/link CPython bug report, deprecate this patch if/when it's fixed in CPython
# this is the patched line; removed `is a_module`
# using a protocol lets us be more resilient to module unload weirdness
# __call__ requires an instance (otherwise it'll be __new__)
# NB: all new local module attrs are _ prefixed to ensure an identical public attribute surface area to the module we're proxying
# shadow the real Thread attr with our _DaemonThread
# clone the base class `_adjust_thread_count` method with a copy of its globals dict
# patch the method closure's `threading` module import to use our daemon-only thread factory instead
# don't expose this as a class attribute
# importing _ansiballz instead of _extensions avoids an unnecessary import when extensions are not in use
# not BaseException, since modules are expected to raise SystemExit
# Run the module. By importing it as '__main__', it executes as a script.
# An Ansible module must print its own results and exit. If execution reaches this point, that did not happen.
# A convenient place to put a breakpoint
# Enable code coverage analysis of the module.
# This feature is for internal testing and may change without notice.
# Verify coverage is available without importing it.
# This will detect when a module would fail with coverage enabled with minimal overhead.
# when suspend is True, execution pauses here -- it's also a convenient place to put a breakpoint
# This code is strewn with things that are not defined on Python 3 (unicode,
# long, etc.) but they are all shielded by version checks. This is also an
# upstream vendored file that we're not going to modify on our own
# pylint: disable=undefined-variable
# The following makes it easier for us to script updates of the bundled code. It is not part of
# upstream six
# Copyright (c) 2016 Red Hat Inc
# General networking tools that may be used by all modules
# https://tools.ietf.org/rfc/rfc2374.txt
# Split by :: to identify omitted zeros
# Get the first four groups, or as many as are found + ::
# Concatenate network address parts
# Ensure network address ends with ::
# Get the first three groups, or as many as are found + ::
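The group-extraction steps above can be sketched like this (a simplified illustration; the real helpers handle more edge cases):

```python
def first_groups(address, count):
    """Return the first `count` colon-separated groups (or as many as exist),
    ensuring the result ends with '::' to stand in for the omitted zeros."""
    # Split by '::' to identify omitted zeros
    left = address.split('::')[0]
    groups = [g for g in left.split(':') if g]
    # Concatenate the network address parts, ending with '::'
    return ':'.join(groups[:count]) + '::'


assert first_groups('2001:db8:0:1::1', 4) == '2001:db8:0:1::'
assert first_groups('fe80::1', 3) == 'fe80::'
```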
# skip passing collection_name; get_best_deprecator already accounted for it when present
# DTFIX7: add future deprecation comment
# deprecated: description='deprecate legacy encoder' core_version='2.23'
# if not name.startswith('_'):  # avoid duplicate deprecation warning for imports from ajson
# deprecated: description='deprecate legacy decoder' core_version='2.23'
# if adding boolean attribute, also add to PASS_BOOL
# some of this dupes defaults from controller config
# keep in sync with copy in lib/ansible/module_utils/csharp/Ansible.Basic.cs
# Use one of our builtin validators.
# Default type for parameters
# Use the custom callable for validation.
# alias:canon
# not alias specific but this is a good place to check this
# Check sub-argument spec
# Find the value for the no_log'd param
# Get no_log values from suboptions
# Sub parameters can be a dict or list of dicts. Ensure parameters are always a list.
# Validate dict fields in case they came in as strings
# This must come before int because bools are also ints
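The ordering constraint above follows from `bool` being a subclass of `int` in Python:

```python
# bool is a subclass of int, so an int check matches True/False too.
assert isinstance(True, int)


def classify(value):
    if isinstance(value, bool):  # must come before the int check
        return 'bool'
    if isinstance(value, int):
        return 'int'
    return 'other'


assert classify(True) == 'bool'
assert classify(1) == 'int'
```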
# Need native str type
# TODO: Change the default value from None to Sentinel to differentiate between
# This prevents setting defaults on required items on the 1st run;
# otherwise it will set things without a default to None on the 2nd.
# Make sure any default value for no_log fields are masked.
# Need a mutable value
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
# Get the name of the parent key if this is a nested option
# Allow one or more when type='list' param with choices
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
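One way to sketch that reverse mapping (the literal tuples below are assumptions based on YAML 1.1 booleans; the real check may differ):

```python
YAML_TRUE = ('y', 'yes', 'on', 'true')    # assumed YAML 1.1 truthy literals
YAML_FALSE = ('n', 'no', 'off', 'false')  # assumed YAML 1.1 falsy literals


def unconvert_bool(value, choices):
    """If PyYAML coerced a string choice to bool, map it back when unambiguous."""
    if not isinstance(value, bool):
        return value
    candidates = [c for c in (YAML_TRUE if value else YAML_FALSE) if c in choices]
    # only convert back when exactly one choice could have produced this bool
    return candidates[0] if len(candidates) == 1 else value


assert unconvert_bool(True, ['yes', 'install']) == 'yes'
assert unconvert_bool(True, ['yes', 'on']) is True  # ambiguous: author's problem
```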
# Extract from a set
# Keep track of context for warning messages
# Make sure we can iterate over the elements
# Set prefix for warning messages
# Handle nested specs
# Sanitize the old key. We take advantage of the sanitizing code in
# _remove_values_conditions() rather than recreating it here.
# Copyright (c) 2018, Ansible Project
# read by user, group, others
# write by user, group, others
# execute by user, group, others
# read, write by user, group, others
# read by user, group, others and write only by user
# read, execute by user, group, others and write only by user
# file mode permission bits
# execute permission bits
# default file permission bits
# This function's signature needs to be repeated
# as the first line of its docstring.
# This method is reused by the basic module,
# the repetition helps the basic module's html documentation come out right.
# http://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#confval-autodoc_docstring_signature
# These are all bitfields so first bitwise-or all the permissions we're
# looking for, then bitwise-and with the file's mode to determine if any
# execute bits are set.
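The bitfield check reads like this with the `stat` constants:

```python
import stat

# OR together all the execute bits we're looking for, then AND with the
# file's mode; a nonzero result means at least one execute bit is set.
EXEC_PERM_BITS = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH


def any_exec_bit_set(mode):
    return bool(mode & EXEC_PERM_BITS)


assert any_exec_bit_set(0o755)
assert not any_exec_bit_set(0o644)
```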
# DTFIX-FUTURE: pass an actual deprecator instead of one derived from collection_name
# Construct possible paths with precedence
# passed in paths
# system configured paths
# existing /sbin dirs, if not there already
# Search for binary
# first found wins
# Copyright: (c) 2018, Sviatoslav Sydorenko <ssydoren@redhat.com>
# Cope with pluralized abbreviations such as TargetGroupARNs
# that would otherwise be rendered target_group_ar_ns
# Handle when there was nothing before the plural_pattern
# Remainder of solution seems to be https://stackoverflow.com/a/1176023
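Those three comments describe a conversion along these lines (a sketch; the real implementation may differ in details):

```python
import re


def camel_to_snake(name):
    # Cope with pluralized abbreviations such as TargetGroupARNs: lower a
    # trailing all-caps run plus its plural 's' as one unit.
    s1 = re.sub(r'[A-Z]{3,}s$', lambda m: '_' + m.group(0).lower(), name)
    # Handle when there was nothing before the plural pattern
    if s1.startswith('_') and not name.startswith('_'):
        s1 = s1[1:]
    # Remainder follows https://stackoverflow.com/a/1176023
    s2 = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', s1)
    return re.sub(r'([a-z0-9])([A-Z]+)', r'\1_\2', s2).lower()


assert camel_to_snake('TargetGroupARNs') == 'target_group_arns'
assert camel_to_snake('HostName') == 'host_name'
```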
# Copyright (c), Sviatoslav Sydorenko <ssydoren@redhat.com> 2018
# Although this was originally intended for internal use only, it has wide adoption in collections.
# This is due in part to sanity tests previously recommending its use over `collections` imports.
# (c) 2020 Matt Martz <matt@sivel.net>
# DTFIX-FUTURE: refactor this to share the implementation with the controller version
# type: ignore[attr-defined]  # pylint: disable=unused-import
# type: ignore[assignment]  # pylint: disable=unused-import
# CentOS maintainers believe only the major version is appropriate
# but Ansible users desire minor version information, e.g., 7.5.
# https://github.com/ansible/ansible/issues/50141#issuecomment-449452781
# Debian does not include minor version in /etc/os-release.
# Bug report filed upstream requesting this be added to /etc/os-release
# https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=931197
# Until this gets merged and we update our bundled copy of distro:
# https://github.com/nir0s/distro/pull/230
# Fixes Fedora 28+ not having a code name and Ubuntu Xenial Xerus needing to be "xenial"
# get the most specific superclass for this platform
# FUTURE: we need a safe way to log that a respawn has occurred for forensic/debug purposes
# do not allow method calls to modules
# already templated to a datavaluestructure, perhaps?
# do not allow imports
# Support strings (single-item lists)
# If is_one_of is True, at least one requirement should be
# present; otherwise all requirements should be present.
# FIXME: The param and prefix parameters here are coming from AnsibleModule._check_type_string()
# approximate pre-2.19 templating None->empty str equivalency here for backward compatibility
# DTFIX-FUTURE: deprecate legacy comma split functionality, eventually replace with `_check_type_list_strict`
# FUTURE: this impl should replace `check_type_list`
# no "=" to split on: "k1=v1, k2"
# deprecated: description='deprecate jsonarg type support' core_version='2.23'
# Copyright (c), Ansible Project
# default POSIX locale; it's ASCII but always there
# not using required=true as that forces fail_json
# new POSIX standard or English, because those are the messages the core team expects
# yes, the last 2 are the same but some systems are weird
# No unit given, returning raw number
# default value
# handling bits case
# check unit value if more than one character (KB, MB)
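A simplified sketch of that parsing (the factor table and regex are assumptions; the real helper also distinguishes bits from bytes via the 'b'/'B' suffix):

```python
import re

# assumed binary factors; bits handling is omitted in this sketch
SIZE_FACTORS = {'': 1, 'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40}


def human_to_bytes(value, default_unit=''):
    m = re.match(r'^\s*(\d+(?:\.\d+)?)\s*([A-Za-z]?)', str(value))
    if m is None:
        raise ValueError('failed to interpret %r' % (value,))
    number, unit = float(m.group(1)), (m.group(2).upper() or default_unit)
    if unit not in SIZE_FACTORS:
        raise ValueError('unknown unit %r' % (unit,))
    return int(round(number * SIZE_FACTORS[unit]))


assert human_to_bytes('1K') == 1024
assert human_to_bytes('512') == 512  # no unit given: raw number
assert human_to_bytes('1.5M') == 1572864
```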
# We're given a text string
# If it has surrogates, we know because it will decode
# Try this first as it's the fastest
# We should only reach this if encoding was non-utf8, original_errors was
# surrogate_then_replace, and errors was surrogateescape
# Slow but works
# Note: We do these last even though we have to call to_bytes again on the
# value because we're optimizing the common case
# Giving up
# python2.4 doesn't have b''
# Note: We don't need special handling for surrogate_then_replace
# because all bytes will either be made into surrogates or are valid
# to decode.
# Note: We do these last even though we have to call to_text again on the
# from ansible.module_utils.common.warnings import deprecate
# deprecated: description='deprecate jsonify()' core_version='2.23'
# DTFIX-FUTURE: deprecate
# Warning, can traceback
# Copyright (C) 2022 Max Bachmann
# todo cache map
# in FuzzyWuzzy this returns 0. For the sake of compatibility, return 0 here as well
# see https://github.com/rapidfuzz/RapidFuzz/issues/110
# one sentence is part of the other one
# todo is length sum without joining faster?
# string length sect+ab <-> sect and sect+ba <-> sect
# exit early since the other ratios are 0
# levenshtein distance sect+ab <-> sect and sect+ba <-> sect
# since only sect is similar in them the distance can be calculated based on
# the length difference
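The shortcut those comments describe: comparing `sect + ab` against `sect`, only `sect` can match, so the indel distance is exactly `len(ab)` and the similarity follows from the lengths alone. A simplified sketch (ignoring the joining separators the real code accounts for):

```python
def norm_sim_from_lengths(sect_len, extra_len):
    """Normalized indel similarity of sect+extra vs sect, from lengths alone."""
    total = 2 * sect_len + extra_len    # combined length of both strings
    if total == 0:
        return 1.0
    # distance is exactly extra_len, since only sect can match
    return (total - extra_len) / total


assert norm_sim_from_lengths(4, 2) == 0.8
assert norm_sim_from_lengths(3, 0) == 1.0
```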
# todo write combined implementation
# exit early when there is a common word in both sequences
# do not calculate the same partial_ratio twice
# preprocess the query
# Apply score multiplier
# Round the result if dtype is integral
# Store the score in the results matrix
# Copyright (C) 2025 Max Bachmann
# This file is generated by tools/generate_python.py
# pyright: ignore[reportMissingImports]
# Copyright (C) 2023 Max Bachmann
# noqa: PLW0603
# used to detect the function hasn't been wrapped afterwards
# DamerauLevenshtein
# Hamming
# Indel
# Jaro
# JaroWinkler
# LCSseq
# Levenshtein
# OSA
# Postfix
# Prefix
# last occurrence of s1_i
# save H_k-1,j-2
# save H_i-2,l-1
# noqa: E741
# since jaro uses a sliding window some parts of T/P might never be in
# range and can be removed ahead of time
# short circuit if score_cutoff can not be reached
# todo use bitparallel implementation
# looking only within search range, count & flag matched pairs
# count transpositions
# calculate the equivalent of popcount(~S) in C. This breaks for len(s1) == 0
# deletion
# insertion
# match
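The deletion/insertion/match cases named above are the three arms of the classic dynamic-programming recurrence:

```python
def levenshtein(s1, s2):
    """Row-by-row Levenshtein distance (textbook O(n*m) formulation)."""
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        cur = [i]
        for j, c2 in enumerate(s2, 1):
            cur.append(min(
                prev[j] + 1,               # deletion
                cur[j - 1] + 1,            # insertion
                prev[j - 1] + (c1 != c2),  # match / substitution
            ))
        prev = cur
    return prev[-1]


assert levenshtein('kitten', 'sitting') == 3
```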
# keep operations are not relevant in editops
# validate order of editops
# merge similar adjacent blocks
# check if edit operations span the complete string
# offset to correct removed edit operation
# element of subsequence not part of the sequence
# add remaining elements
# matches between last and current editop
# matches after the last editop
# Step 1: Computing D0
# Step 2: Computing HP and HN
# Step 3: Computing the value D[m,j]
# Step 4: Computing Vp and VN
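Steps 1-4 above follow the bit-parallel Levenshtein scheme (Myers/Hyyrö). A sketch for patterns up to the machine word size, with explicit masking added because Python integers are unbounded:

```python
def levenshtein_bitparallel(s1, s2):
    m = len(s1)
    if m == 0:
        return len(s2)
    # per-character bit masks of s1 positions
    PM = {}
    for i, ch in enumerate(s1):
        PM[ch] = PM.get(ch, 0) | (1 << i)
    full = (1 << m) - 1
    mask = 1 << (m - 1)
    VP, VN, dist = full, 0, m
    for ch in s2:
        X = PM.get(ch, 0)
        # Step 1: Computing D0
        D0 = (((X & VP) + VP) ^ VP) | X | VN
        # Step 2: Computing HP and HN
        HP = VN | ~(D0 | VP)
        HN = D0 & VP
        # Step 3: Computing the value D[m,j]
        if HP & mask:
            dist += 1
        elif HN & mask:
            dist -= 1
        # Step 4: Computing VP and VN (masked to m bits)
        HP = (HP << 1) | 1
        HN = HN << 1
        VP = (HN | ~(D0 | HP)) & full
        VN = (HP & D0) & full
    return dist


assert levenshtein_bitparallel('kitten', 'sitting') == 3
```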
# replace (Matches are not recorded)
# sidestep input validation
# Test out the package by importing it, then running functions from it.
# Place all generated files in ``tmp_path``.
# NOTE: This is the canonical location for commonly-used vendored modules,
# which is the only spot that performs this try/except to allow repackaged
# Invoke to function (e.g. distro packages which unvendor the vendored bits and
# thus must import our 'vendored' stuff from the overall environment.)
# All other uses of Lexicon, etc should do 'from .util import lexicon' etc.
# Saves us from having to update the same logic in a dozen places.
# TODO: would this make more sense to put _into_ invoke.vendor? That way, the
# import lines which now read 'from .util import <third party stuff>' would be
# more obvious. Requires packagers to leave invoke/vendor/__init__.py alone tho
# type: ignore[no-redef] # noqa
# Allow from-the-start debugging (vs toggled during load of tasks module) via
# shell env var.
# Add top level logger functions to global namespace. Meh.
# First group/sort by non-leaf path components. This keeps everything
# grouped in its hierarchy, and incidentally puts top-level tasks
# (whose non-leaf path set is the empty list) first, where we want them
# Then we sort lexicographically by the actual task name
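As a sort key, the two passes described above combine into one tuple (a sketch, assuming dotted task names):

```python
def sort_key(name):
    """Group by non-leaf path components (empty for top-level tasks, which
    therefore sort first), then order lexicographically by the leaf name."""
    parts = name.split('.')
    return (parts[:-1], parts[-1])


names = ['deploy.staging', 'build', 'deploy.prod', 'clean']
assert sorted(names, key=sort_key) == ['build', 'clean', 'deploy.prod', 'deploy.staging']
```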
# TODO: Make part of public API sometime
# If there *is* an .isatty, ask it.
# If there wasn't, see if it has a fileno, and if so, ask os.isatty
# If we got here, none of the above worked, so it's reasonable to assume
# the darn thing isn't a real TTY.
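That fallback chain is short enough to show in full:

```python
import io
import os


def isatty(stream):
    # If there *is* an .isatty, ask it.
    fn = getattr(stream, 'isatty', None)
    if callable(fn):
        return fn()
    # If there wasn't, see if it has a fileno, and if so, ask os.isatty.
    if hasattr(stream, 'fileno'):
        return os.isatty(stream.fileno())
    # Neither worked: reasonable to assume it's not a real TTY.
    return False


assert isatty(io.StringIO()) is False
assert isatty(object()) is False
```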
# No record of why, but Fabric used daemon threads ever since the
# switch from select.select, so let's keep doing that.
# Track exceptions raised in run()
# TODO: legacy cruft that needs to be removed
# Allow subclasses implemented using the "override run()'s body"
# approach to work, by using _run() instead of run(). If that
# doesn't appear to be the case, then assume we're being used
# directly and just use super() ourselves.
# XXX https://github.com/python/mypy/issues/1424
# TODO: this could be:
# - io worker with no 'result' (always local)
# - tunnel worker, also with no 'result' (also always local)
# - threaded concurrent run(), sudo(), put(), etc, with a
# result (not necessarily local; might want to be a subproc or
# whatever eventually)
# TODO: so how best to conditionally add a "capture result
# value of some kind"?
# - update so all use cases use subclassing, add functionality
# alongside self.exception() that is for the result of _run()
# - split out class that does not care about result of _run()
# and let it continue acting like a normal thread (meh)
# - assume the run/sudo/etc case will use a queue inside its
# worker body, orthogonal to how exception handling works
# Store for actual reraising later
# And log now, in case we never get to later (e.g. if executing
# program is hung waiting for us to do something)
# Name is either target function's dunder-name, or just "_run" if
# we were run subclass-wise.
# NOTE: it seems highly unlikely that a thread could still be
# is_alive() but also have encountered an exception. But hey. Why not
# be thorough?
# TODO: beef this up more
# NOTE: ExceptionWrapper defined here, not in exceptions.py, to avoid circular
# dependency issues (e.g. Failure subclasses need to use some bits from this
# module...)
#: A namedtuple wrapping a thread-borne exception & that thread's arguments.
#: Mostly used as an intermediate between `.ExceptionHandlingThread` (which
#: preserves initial exceptions) and `.ThreadException` (which holds 1..N such
#: exceptions, as typically multiple threads are involved.)
# Import some platform-specific things at top level so they can be mocked for
# tests.
#: The `.Context` given to the same-named argument of `__init__`.
#: A `threading.Event` signaling program completion.
#: Typically set after `wait` returns. Some IO mechanisms rely on this
#: to know when to exit an infinite read loop.
# I wish Sphinx would organize all class/instance attrs in the same
# place. If I don't do this here, it goes 'class vars -> __init__
# docstring -> instance vars' :( TODO: consider just merging class and
# __init__ docstrings, though that's annoying too.
#: How many bytes (at maximum) to read per iteration of stream reads.
# Ditto re: declaring this in 2 places for doc reasons.
#: How many seconds to sleep on each iteration of the stdin read loop
#: and other otherwise-fast loops.
#: Whether pty fallback warning has been emitted.
#: A list of `.StreamWatcher` instances for use by `respond`. Is filled
#: in at runtime by `run`.
# Optional timeout timer placeholder
# Async flags (initialized for 'finally' referencing in case something
# goes REAL bad during options parsing)
# Normalize kwargs w/ config; sets self.opts, self.streams
# Environment setup
# Arrive at final encoding if neither config nor kwargs had one
# Echo running command (wants to be early to be included in dry-run)
# Prepare common result args.
# TODO: I hate this. Needs a deeper separate think about tweaking
# Runner.generate_result in a way that isn't literally just this same
# two-step process, and which also works w/ downstream.
# Prepare all the bits n bobs.
# If dry-run, stop here.
# Start executing the actual command (runs in background)
# If disowned, we just stop here - no threads, no timer, no error
# checking, nada.
# Stand up & kick off IO, timer threads
# Wrap up or promise that we will, depending
# Wait for subprocess to run, forwarding signals as we get them.
# done waiting!
# Don't locally stop on ^C, only forward it:
# - if remote end really stops, we'll naturally stop after
# - if remote end does not stop (eg REPL, editor) we don't want
# to stop prematurely
# TODO: honor other signals sent to our own process and
# transmit them to the subprocess before handling 'normally'.
# Make sure we tie off our worker threads, even if something exploded.
# Any exceptions that raised during self.wait() above will appear after
# this block.
# Inform stdin-mirroring worker to stop its eternal looping
# Join threads, storing inner exceptions, & set a timeout if
# necessary. (Segregate WatcherErrors as they are "anticipated
# errors" that want to show up at the end during creation of
# Failure objects.)
# If any exceptions appeared inside the threads, raise them now as an
# aggregate exception object.
# NOTE: this is kept outside the 'finally' so that main-thread
# exceptions are raised before worker-thread exceptions; they're more
# likely to be Big Serious Problems.
# Collate stdout/err, calculate exited, and get final result obj
# Any presence of WatcherError from the threads indicates a watcher was
# upset and aborted execution; make a generic Failure out of it and
# raise that.
# TODO: ambiguity exists if we somehow get WatcherError in *both*
# threads...as unlikely as that would normally be.
# If a timeout was requested and the subprocess did time out, shout.
# Pull in command execution timeout, which stores config elsewhere,
# but only use it if it's actually set (backwards compat)
# Handle invalid kwarg keys (anything left in kwargs).
# Act like a normal function would, i.e. TypeError
# Update disowned, async flags
# If hide was True, turn off echoing
# Conversely, ensure echoing is always on when dry-running
# Always hide if async
# Then normalize 'hide' from one of the various valid input values,
# into a stream-names tuple. Also account for the streams.
# Derive stream objects
# If in_stream hasn't been overridden, and we're async, we don't
# want to read from sys.stdin (otherwise the default) - so set
# False instead.
# Determine pty or no
# Set data
# At this point, we had enough success that we want to be returning or
# raising detailed info about our execution; so we generate a Result.
# "Universal newlines" - replace all standard forms of
# newline with \n. This is not technically Windows related
# (\r as newline is an old Mac convention) but we only apply
# the translation for Windows as that's the only platform
# it is likely to matter for these days.
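The translation itself is a pair of replacements, applied only on Windows as described:

```python
import sys


def normalize_newlines(text, platform=None):
    """Collapse \\r\\n and bare \\r to \\n, but only on Windows (sketch)."""
    platform = platform or sys.platform
    if platform == 'win32':
        # \r\n first, so a lone \r pass doesn't split Windows newlines in two
        return text.replace('\r\n', '\n').replace('\r', '\n')
    return text


assert normalize_newlines('a\r\nb\rc', platform='win32') == 'a\nb\nc'
```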
# Get return/exit code, unless there were WatcherErrors to handle.
# NOTE: In that case, returncode() may block waiting on the process
# (which may be waiting for user input). Since most WatcherError
# situations lack a useful exit code anyways, skipping this doesn't
# really hurt any.
# TODO: as noted elsewhere, I kinda hate this. Consider changing
# generate_result()'s API in next major rev so we can tidy up.
# Add a timeout to out/err thread joins when it looks like they're not
# dead but their counterpart is dead; this indicates issue #351 (fixed
# by #432) where the subproc may hang because its stdout (or stderr) is
# no longer being consumed by the dead thread (and a pipe is filling
# up.) In that case, the non-dead thread is likely to block forever on
# a `recv` unless we add this timeout.
# Set up IO thread parameters (format - body_func: {kwargs})
# After opt processing above, in_stream will be a real stream obj or
# False, so we can truth-test it. We don't even create a stdin-handling
# thread if it's False, meaning user indicated stdin is nonexistent or
# problematic.
# Kick off IO threads
# NOTE: Typically, reading from any stdout/err (local, remote or
# otherwise) can be thought of as "read until you get nothing back".
# This is preferable over "wait until an out-of-band signal claims the
# process is done running" because sometimes that signal will appear
# before we've actually read all the data in the stream (i.e.: a race
# condition).
# TODO: store un-decoded/raw bytes somewhere as well...
# Echo to local stdout if necessary
# TODO: should we rephrase this as "if you want to hide, give me a
# dummy output stream, e.g. something like /dev/null"? Otherwise, a
# combo of 'hide=stdout' + 'here is an explicit out_stream' means
# out_stream is never written to, and that seems...odd.
# Store in shared buffer so main thread can do things with the
# result after execution completes.
# NOTE: this is threadsafe insofar as no reading occurs until after
# the thread is join()'d.
# Run our specific buffer through the autoresponder framework
# TODO: consider moving the character_buffered contextmanager call in
# here? Downside is it would be flipping those switches for every byte
# read instead of once per session, which could be costly (?).
# Assume EBADF in this situation implies running under nohup or
# similar, where:
# - we cannot reliably detect a bad FD up front
# - trying to read it would explode
# - user almost surely doesn't care about stdin anyways
# and ignore it (but not other OSErrors!)
# Decode if it appears to be binary-type. (From real terminal
# streams, usually yes; from file-like objects, often no.)
# TODO: will decoding 1 byte at a time break multibyte
# character encodings? How to square interactivity with that?
# TODO: reinstate lock/whatever thread logic from fab v1 which prevents
# reading from stdin while other parts of the code are prompting for
# runtime passwords? (search for 'input_enabled')
# TODO: fabric#1339 is strongly related to this, if it's not literally
# exposing some regression in Fabric 1.x itself.
# Mirror what we just read to process' stdin.
# We encode to ensure bytes, but skip the decode step since
# there's presumably no need (nobody's interacting with
# this data programmatically).
# Also echo it back to local stdout (or whatever
# out_stream is set to) when necessary.
# Empty string/char/byte != None. Can't just use 'else' here.
# When reading from file-like objects that aren't "real"
# terminal streams, an empty byte signals EOF.
# Dual all-done signals: program being executed is done
# running, *and* we don't seem to be reading anything out of
# stdin. (NOTE: If we only test the former, we may encounter
# race conditions re: unread stdin.)
# Take a nap so we're not chewing CPU.
# Join buffer contents into a single string; without this,
# StreamWatcher subclasses can't do things like iteratively scan for
# pattern matches.
# NOTE: using string.join should be "efficient enough" for now, re:
# speed and memory use. Should that become false, consider using
# StringIO or cStringIO (tho the latter doesn't do Unicode well?) which
# is apparently even more efficient.
# NOTE: fallback not used: no falling back implemented by default.
# Encode always, then request implementing subclass to perform the
# actual write to subprocess' stdin.
# NOTE: yes, this is a 1-liner. The point is to make it much harder to
# forget to use 'replace' when decoding :)
# TODO: probably wants to be 2 methods, one for local and one for
# subprocess. For now, good enough to assume both are the same.
# Timer expiry implies we did time out. (The timer itself will have
# killed the subprocess, allowing us to even get to this point.)
# Bookkeeping var for pty use case
# TODO: pass in & test in_stream, not sys.stdin
# Obtain useful read-some-bytes function
# Need to handle spurious OSErrors on some Linux platforms.
# Only eat I/O specific OSErrors so we don't hide others
# The typical default
# Some less common platforms phrase it this way
# The bad OSErrors happen after all expected output has
# appeared, so we return a falsey value, which triggers the
# "end of output" logic in code using reader functions.
# NOTE: when using a pty, this will never be called.
# TODO: do we ever get those OSErrors on stderr? Feels like we could?
# NOTE: parent_fd from os.fork() is a read/write pipe attached to our
# forked process' stdout/stdin, respectively.
# Try to write, ignoring broken pipes if encountered (implies child
# process exited before the process piping stdin to us finished;
# there's nothing we can do about that!)
# there is no working scenario to tell the process that stdin
# has closed when using a pty
# Encountered ImportError
# If we're the child process, load up the actual command in a
# shell, just as subprocess does; this replaces our process - whose
# pipes are all hooked up to the PTY - with the "real" one.
# TODO: both pty.spawn() and pexpect.spawn() do a lot of
# setup/teardown involving tty.setraw, getrlimit, signal.
# Ostensibly we'll want some of that eventually, but if
# possible write tests - integration-level if necessary -
# before adding it!
# Set pty window size based on what our own controlling
# terminal's window size appears to be.
# TODO: make subroutine?
# Use execve for bare-minimum "exec w/ variable # args + env"
# behavior. No need for the 'p' (use PATH to find executable)
# for now.
# NOTE: stdlib subprocess (actually its posix flavor, which is
# written in C) uses either execve or execv, depending.
# In odd situations where our subprocess is already dead, don't
# throw this upwards.
# NOTE:
# https://github.com/pexpect/ptyprocess/blob/4058faa05e2940662ab6da1330aa0586c6f9cd9c/ptyprocess/ptyprocess.py#L680-L687
# implies that Linux "requires" use of the blocking, non-WNOHANG
# version of this call. Our testing doesn't verify this, however,
# so...
# NOTE: It does appear to be totally blocking on Windows, so our
# issue #351 may be totally unsolvable there. Unclear.
# No subprocess.returncode available; use WIFEXITED/WIFSIGNALED to
# determine which of WEXITSTATUS / WTERMSIG to use.
# TODO: is it safe to just say "call all WEXITSTATUS/WTERMSIG and
# return whichever one of them is nondefault"? Probably not?
# NOTE: doing this in an arbitrary order should be safe since only
# one of the WIF* methods ought to ever return True.
# Match subprocess.returncode by turning signals into negative
# 'exit code' integers.
# TODO: do we care about WIFSTOPPED? Maybe someday?
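The WIFEXITED/WIFSIGNALED decoding described above can be sketched as follows (the encoded test statuses assume the usual Linux wait-status layout):

```python
import os


def decode_wait_status(status):
    """Mirror subprocess.returncode: exit status, or -signal if signaled."""
    # only one of the WIF* predicates ought to ever return True
    if os.WIFEXITED(status):
        return os.WEXITSTATUS(status)
    if os.WIFSIGNALED(status):
        # negative integer, matching subprocess's convention for signals
        return -os.WTERMSIG(status)
    return None  # stopped/continued: no exit code available yet


assert decode_wait_status(3 << 8) == 3  # exit(3), Linux-style encoding
assert decode_wait_status(9) == -9      # killed by SIGKILL
```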
# If we opened a PTY for child communications, make sure to close() it,
# otherwise long-running Invoke-using processes exhaust their file
# descriptors eventually.
# If something weird happened preventing the close, there's
# nothing to be done about it now...
# TODO: inherit from namedtuple instead? heh (or: use attrs from pypi)
# TODO: more? e.g. len of stdout/err? (how to represent cleanly in a
# 'x=y' format like this? e.g. '4b' is ambiguous as to what it
# represents)
# TODO: preserve alternate line endings? Mehhhh
# NOTE: no trailing \n preservation; easier for below display if
# normalized
# Basically just want exactly this (recently refactored) kwargs dict.
# TODO: consider proxying vs copying, but prob wait for refactor
# Normalize to list-of-stream-names
# Revert any streams that have been overridden from the default value
# TODO: move in here? They're currently platform-agnostic...
# STD_OUTPUT_HANDLE = -11
# Sentinel values to be replaced w/ defaults by caller
# We want two short unsigned integers (rows, cols)
# Note: TIOCGWINSZ struct contains 4 unsigned shorts, 2 unused
# Create an empty (zeroed) buffer for ioctl to map onto. Yay for C!
# Call TIOCGWINSZ to get window size of stdout, returns our filled
# Unpack buffer back into Python data types
# NOTE: this unpack gives us rows x cols, but we return the
# inverse.
# Fallback to emptyish return value in various failure cases:
# * sys.stdout being monkeypatched, such as in testing, and lacking
# * .fileno
# * sys.stdout having a .fileno but not actually being attached to a
# * TTY
# * termios not having a TIOCGWINSZ attribute (happens sometimes...)
# * other situations where ioctl doesn't explode but the result isn't
# TODO: make defaults configurable?
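The ioctl dance described above, as a hedged sketch (POSIX-only; the defaults and error handling are illustrative):

```python
import fcntl
import struct
import sys
import termios


def pty_size(stream=None, default=(80, 24)):
    """Best-effort (cols, rows) for the controlling terminal via TIOCGWINSZ."""
    stream = stream if stream is not None else sys.stdout
    try:
        # zeroed buffer of four unsigned shorts for ioctl to map onto
        buf = struct.pack('HHHH', 0, 0, 0, 0)
        result = fcntl.ioctl(stream.fileno(), termios.TIOCGWINSZ, buf)
        rows, cols, _, _ = struct.unpack('HHHH', result)
        # the struct gives rows x cols; return the inverse (cols, rows)
        return cols, rows
    except Exception:
        # monkeypatched stdout lacking .fileno, a fileno not attached to a
        # TTY, missing TIOCGWINSZ, etc.: fall back to an emptyish default
        return default
```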
# Explicitly not docstringed to remain private, for now. Eh.
# Checks whether tty.setcbreak appears to have already been run against
# ``stream`` (or if it would otherwise just not do anything).
# Used to effect idempotency for character-buffering a stream, which also
# lets us avoid multiple capture-then-restore cycles.
# setcbreak sets ECHO and ICANON to 0/off, CC[VMIN] to 1-ish, and CC[VTIME]
# to 0-ish. If any of that is not true we can reasonably assume it has not
# yet been executed against this stream.
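A sketch of that check, factored to take a `tcgetattr`-style attribute list so it can be exercised without a live terminal (the "cbreak-ish" values mirror the description above):

```python
import termios


def looks_cbreak(attrs):
    """True if this tcgetattr() result already looks like setcbreak ran."""
    lflags, cc = attrs[3], attrs[6]
    echo = bool(lflags & termios.ECHO)
    icanon = bool(lflags & termios.ICANON)
    return (not echo and not icanon
            and cc[termios.VMIN] == 1
            and cc[termios.VTIME] == 0)


cc = [0] * 32
cc[termios.VMIN], cc[termios.VTIME] = 1, 0
# cooked terminal: ECHO/ICANON still on, so setcbreak has not run yet
assert not looks_cbreak([0, 0, 0, termios.ECHO | termios.ICANON, 0, 0, list(cc)])
assert looks_cbreak([0, 0, 0, 0, 0, 0, list(cc)])
```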
# A "real" terminal stdin needs select/kbhit to tell us when it's ready for
# a nonblocking read().
# Otherwise, assume a "safer" file-like object that can be read from in a
# nonblocking fashion (e.g. a StringIO or regular file).
# NOTE: we have to check both possibilities here; situations exist where
# it's not a tty but has a fileno, or vice versa; neither is typically
# going to work re: ioctl().
# TODO: store these kwarg defaults central, refer to those values both here
# and in @task.
# TODO: allow central per-session / per-taskmodule control over some of
# them, e.g. (auto_)positional, auto_shortflags.
# NOTE: we shadow __builtins__.help here on purpose - obfuscating to avoid
# it feels bad, given the builtin will never actually be in play anywhere
# except a debug shell whose frame is exactly inside this class.
# Real callable
# Copy a bunch of special properties from the body for the benefit of
# Sphinx autodoc or other introspectors.
# Default name, alternate names, and whether it should act as the
# default for its parent collection
# Arg/flag/parser hints
# Call chain bidness
# Whether to print return value post-execution
# Functions do not define __eq__ but func_code objects apparently do.
# (If we're wrapping some other callable, they will be responsible for
# defining equality on their end.)
# Presumes name and body will never be changed. Hrm.
# Potentially cleaner to just not use Tasks as hash keys, but let's do
# this for now.
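The equality/hash scheme those comments describe can be sketched with a hypothetical stand-in for the real Task class:

```python
class Task:
    def __init__(self, body, name=None):
        self.body = body  # Real callable
        self.name = name or body.__name__

    def __eq__(self, other):
        if self.name != other.name:
            return False
        # Functions don't define __eq__, but their code objects compare
        # usefully; fall back to the body itself for other callables.
        self_body = getattr(self.body, "__code__", self.body)
        other_body = getattr(other.body, "__code__", other.body)
        return self_body == other_body

    def __hash__(self):
        # Presumes name and body never change post-construction.
        return hash(self.name) + hash(self.body)
```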
# Guard against calling tasks with no context.
# TODO: raise a custom subclass _of_ TypeError instead
# Handle callable-but-not-function objects
# Rebuild signature with first arg dropped, or die usefully(-ish) trying
# TODO: this ought to also check if an extant 1st param _was_ a Context
# arg, and yell similarly if not.
# TODO: see TODO under __call__, this should be same type
# If positionals is None, everything lacking a default
# value will be automatically considered positional.
# Whether it's positional or not
# Whether it is a value-optional flag
# Whether it should be of an iterable (list) kind
# If user gave a non-None default, hopefully they know better
# than us what they want here (and hopefully it offers the list
# protocol...) - otherwise supply useful default
# Whether it should increment its value or not
# Argument name(s) (replace w/ dashed version if underscores present,
# and move the underscored version to be the attr_name instead.)
# For reference in eg help=
# Must know what short names are available
# Handle default value & kind if possible
# TODO: allow setting 'kind' explicitly.
# NOTE: skip setting 'kind' if optional is True + type(default) is
# bool; that results in a nonsensical Argument which gives the
# parser grief in a few ways.
# Help
# Core argspec
# Prime the list of all already-taken names (mostly for help in
# choosing auto shortflags)
# Build arg list (arg_opts will take care of setting up shortnames,
# etc)
# Update taken_names list with new argument's full name list
# (which may include new shortflags) so subsequent Argument
# creation knows what's taken.
# If any values were leftover after consuming a 'help' dict, it implies
# the user messed up & had a typo or similar. Let's explode.
# Now we need to ensure positionals end up in the front of the list, in
# order given in self.positionals, so that when Context consumes them,
# this order is preserved.
# @task -- no options were (probably) given.
# @task(pre, tasks, here)
# update_wrapper(inner, klass)
# TODO: just how useful is this? feels like maybe overkill magic
# NOTE: Not comparing 'called_as'; a named call of a given Task with
# same args/kwargs should be considered same as an unnamed call of the
# same Task with the same args/kwargs (e.g. pre/post task specified w/o
# name). Ditto tasks with multiple aliases.
# Normalize input
# Obtain copy of directly-given tasks since they should sometimes
# behave differently
# Expand pre/post tasks
# TODO: may make sense to bundle expansion & deduping now eh?
# Get some good value for dedupe option, even if config doesn't have
# the tree we expect. (This is a concession to testing.)
# Dedupe across entire run now that we know about all calls in order
# Execute
# TODO: maybe clone initial config here? Probably not necessary,
# especially given Executor is not designed to execute() >1 time at the
# moment...
# Hand in reference to our config, which will preserve user
# modifications across the lifetime of the session.
# But make sure we reset its task-sensitive levels each time
# (collection & shell env)
# TODO: load_collection needs to be skipped if task is anonymous
# (Fabric 2 or other subclassing libs only)
# Get final context from the Call (which will know how to generate
# an appropriate one; e.g. subclasses might use extra data from
# being parameterized), handing in this config for use there.
# TODO: handle the non-dedupe case / the same-task-different-args
# case, wherein one task obj maps to >1 result.
# Normalize to Call (this method is sometimes called with pre/post
# task lists, which may contain 'raw' Task objects)
# TODO: this is where we _used_ to call Executor.config_for(call,
# config)...
# TODO: now we may need to preserve more info like where the call
# came from, etc, but I feel like that shit should go _on the call
# itself_ right???
# TODO: we _probably_ don't even want the config in here anymore,
# we want this to _just_ be about the recursion across pre/post
# tasks or parameterization...?
# Specific kwargs if applicable
# splat-kwargs version of default value (auto_dash_names=True)
# Name if applicable
# Dispatch args/kwargs
# Dispatch kwargs
# Explicitly given name wins over root ns name (if applicable),
# which wins over actual module name.
# See if the module provides a default NS to use in lieu of creating
# our own collection.
# TODO: make this into Collection.clone() or similar?
# Explicitly given config wins over root ns config
# Failing that, make our own collection from the module's tasks.
# Again, explicit name wins over implicit one from module path
# Handle module-as-collection
# Ensure we have a name, or die trying
# Test for conflict
# Insert
# Our top level configuration
# Default task for this collection itself
# Normalize name to the format we're expecting
# Non-default tasks within subcollections -> recurse (sorta)
# Default task for subcollections (via empty-name lookup)
# Regular task lookup
# Short-circuit on anything non-applicable, e.g. empty strings, bools,
# None, etc.
# Don't replace leading or trailing underscores (+ taking dotted
# names into account)
# TODO: not 100% convinced of this / it may be exposing a
# discrepancy between this level & higher levels which tend to
# strip out leading/trailing underscores entirely.
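One way to sketch that transform (a hedged approximation, not the exact implementation) is to dash only the inner underscores of each dotted segment:

```python
import re


def transform(name):
    # Short-circuit on anything non-applicable: empty strings, bools,
    # None, etc.
    if not isinstance(name, str) or not name:
        return name

    def dash(part):
        # Split off leading/trailing underscore runs so they survive.
        lead, core, trail = re.fullmatch(r"(_*)(.*?)(_*)", part).groups()
        return lead + core.replace("_", "-") + trail

    # Dotted names get handled segment by segment so each segment keeps
    # its own leading/trailing underscores.
    return ".".join(dash(part) for part in name.split("."))
```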
# Lexicons exhibit only their real keys in most places, so this will
# only grab those, not aliases.
# Deepcopy the value so we're not just copying a reference
# Also copy all aliases, which are string-to-string key mappings
# Our own tasks get no prefix, just go in as-is: {name: [aliases]}
# Subcollection tasks get both name + aliases prefixed
# Tack on collection name to alias list if this task is the
# collection's default.
# Accumulator
# Obtain allowed env var -> existing value map
# Check for actual env var (honoring prefix) and try to set
# Sub-dict -> recurse
# Handle conflicts
# Merge and continue
# Other -> is leaf, no recursion
# Gets are from self._config because that's what determines valid env
# vars and/or values for typecasting.
# Sets are to self.data since that's what we are presenting to the
# outer config object and debugging.
# PyPy3
# Attributes which get proxied through to inner merged-dict config obj.
# NOTE: due to default Python attribute-lookup semantics, "real"
# attributes will always be yielded on attribute access and this method
# is skipped. That behavior is good for us (it's more intuitive than
# having a config key accidentally shadow a real attribute or method).
# Proxy most special vars to config for dict protocol.
# Otherwise, raise useful AttributeError to follow getattr proto.
# Turn attribute-sets into config updates anytime we don't have a real
# attribute with the given name/key.
# Make sure to trigger our own __setitem__ instead of going direct
# to our internal dict/cache
# Python looks up special methods like __iter__ on the type, not the
# instance, so our __getattr__-based proxying gets skipped there. BOO
# NOTE: Can't proxy __eq__ because the RHS will always be an obj of the
# current class, not the proxied-to class, and that causes
# NotImplemented.
# Try comparing to other objects like ourselves, falling back to a not
# very comparable value (None) so comparison fails.
# But we can compare to vanilla dicts just fine, since our _config is
# itself just a dict.
# Short-circuit if pickling/copying mechanisms are asking if we've got
# __setstate__ etc; they'll ask this w/o calling our __init__ first, so
# we'd be in a RecursionError-causing catch-22 otherwise.
# At this point we should be able to assume a self._config...
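That pickling guard can be shown with a stripped-down proxy class (a sketch, not the real DataProxy); without the dunder short-circuit, `hasattr(obj, "__setstate__")` during copy/unpickle would recurse forever, since those checks run before `__init__` has created `_config`:

```python
class DataProxy:
    def __init__(self, data=None):
        # object.__setattr__ sidesteps any proxied __setattr__ in real code.
        object.__setattr__(self, "_config", data or {})

    def __getattr__(self, key):
        # Short-circuit: pickle/copy probes for __setstate__, __deepcopy__
        # etc. happen without __init__ running first, so answering them
        # via _config would be a RecursionError-causing catch-22.
        if key.startswith("__") and key.endswith("__"):
            raise AttributeError(key)
        # At this point we should be able to assume a self._config...
        try:
            return self._config[key]
        except KeyError:
            raise AttributeError(key)
```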
# New object's keypath is simply the key, prepended with our own
# keypath if we've got one.
# If we have no _root, we must be the root, so it's us. Otherwise,
# pass along our handle on the root.
# Grab the root object responsible for tracking removals; either the
# referenced root (if we're a leaf) or ourselves (if we're not).
# (Intermediate nodes never have anything but __getitem__ called on
# them, otherwise they're by definition being treated as a leaf.)
# Make sure we don't screw up true attribute deletion for the
# situations that actually want it. (Uncommon, but not rare.)
# Must test this up front before (possibly) mutating self._config
# We always have a _config (whether it's a real dict or a cache of
# merged levels) so we can fall back to it for all the corner case
# handling re: args (arity, handling a default, raising KeyError, etc)
# If it looks like no popping occurred (key wasn't there), presumably
# user gave default, so we can short-circuit return here - no need to
# track a deletion that did not happen.
# Here, we can assume at least the 1st posarg (key) existed.
# In all cases, return the popped value.
# Must test up front whether the key existed beforehand
# Run locally
# Key already existed -> nothing was mutated, short-circuit
# Here, we can assume the key did not exist and thus user must have
# supplied a 'default' (if they did not, the real setdefault() above
# would have excepted.)
# TODO: complain if arity>1
# TODO: be stricter about input in this case
# On Windows, which won't have /bin/bash, check for a set COMSPEC env
# var (https://en.wikipedia.org/wiki/COMSPEC) or fallback to an
# unqualified cmd.exe otherwise.
# Else, assume Unix, most distros of which have /bin/bash available.
# TODO: consider an automatic fallback to /bin/sh for systems lacking
# /bin/bash; however users may configure run.shell quite easily, so...
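The platform-dependent default described above amounts to something like:

```python
import os
import sys


def default_shell():
    if sys.platform == "win32":
        # No /bin/bash on Windows: prefer a set COMSPEC env var, else
        # fall back to an unqualified cmd.exe.
        return os.environ.get("COMSPEC", "cmd.exe")
    # Else, assume Unix; most distros have /bin/bash available.
    return "/bin/bash"
```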
# TODO: we document 'debug' but it's not truly implemented outside
# of env var and CLI flag. If we honor it, we have to go around and
# figure out at what points we might want to call
# `util.enable_logging`:
# - just using it as a fallback default for arg parsing isn't much
# use, as at that point the config holds nothing but defaults & CLI
# flag values
# - doing it at file load time might be somewhat useful, though
# where this happens may be subject to change soon
# - doing it at env var load time seems a bit silly given the
# existing support for at-startup testing for INVOKE_DEBUG
# 'debug': False,
# TODO: I feel like we want these to be more consistent re: default
# values stored here vs 'stored' as logic where they are
# referenced, there are probably some bits that are all "if None ->
# default" that could go here. Alternately, make _more_ of these
# default to None?
# This doesn't live inside the 'run' tree; otherwise it'd make it
# somewhat harder to extend/override in Fabric 2 which has a split
# local/remote runner situation.
# Technically an implementation detail - do not expose in public API.
# Stores merged configs and is accessed via DataProxy.
# Config file suffixes to search, in preference order.
# Default configuration values, typically a copy of `global_defaults`.
# Collection-driven config data, gathered from the collection tree
# containing the currently executing task.
# Path prefix searched for the system config file.
# NOTE: There is no default system prefix on Windows.
# Path to loaded system config file, if any.
# Whether the system config file has been loaded or not (or ``None`` if
# no loading has been attempted yet.)
# Data loaded from the system config file.
# Path prefix searched for per-user config files.
# Path to loaded user config file, if any.
# Whether the user config file has been loaded or not (or ``None`` if
# no loading has been attempted yet.)
# Data loaded from the per-user config file.
# As it may want to be set post-init, project conf file related attrs
# get initialized or overwritten via a specific method.
# Environment variable name prefix
# Config data loaded from the shell environment.
# As it may want to be set post-init, runtime conf file related attrs
# get initialized or overwritten via a specific method.
# Overrides - highest normal config level. Typically filled in from
# command-line flags.
# Absolute highest level: user modifications.
# And its sibling: user deletions. (stored as a flat dict of keypath
# keys and dummy values, for constant-time membership testing/removal
# w/ no messy recursion. TODO: maybe redo _everything_ that way? in
# _modifications and other levels, the values would of course be
# valuable and not just None)
# Convenience loading of user and system files, since those require no
# other levels in order to function.
# Always merge, otherwise defaults, etc are not usable until creator or
# a subroutine does so.
# Just a refactor of something done in unlazy init or in clone()
# Path to the user-specified runtime config file.
# Data loaded from the runtime config file.
# Whether the runtime config file has been loaded or not (or ``None``
# if no loading has been attempted yet.)
# Force merge of existing data to ensure we have an up to date picture
# 'Prefix' to match the other sets of attrs
# Ensure the prefix is normalized to a directory-like path string
# Path to loaded per-project config file, if any.
# Whether the project config file has been loaded or not (or ``None``
# if no loading has been attempted yet.)
# Data loaded from the per-project config file.
# Setup
# Short-circuit if loading appears to have occurred already
# Moar setup
# None -> expected absolute path but none set, short circuit
# Short circuit if loading seems unnecessary (eg for project config
# files when not running out of a project)
# Poke 'em
# Store data, the path it was found at, and fact that it was
# found
# Typically means 'no such file', so just note & skip past.
# Still None -> no suffixed paths were found, record this fact
# Merge loaded data in if any was found
# Strip special members, as these are always going to be builtins
# and other special things a user will not want in their config.
# Raise exceptions on module values; they are unpicklable.
# TODO: suck it up and reimplement copy() without pickling? Then
# again, a user trying to stuff a module into their config is
# probably doing something better done in runtime/library level
# code and not in a "config file"...right?
# yup
# None -> no loading occurred yet
# True -> hooray
# False -> did try, did not succeed
# TODO: how to preserve what was tried for each case but only for
# the negative? Just a branch here based on 'name'?
# Construct new object
# Also allow arbitrary constructor kwargs, for subclasses where passing
# (some) data in at init time is desired (vs post-init copying)
# TODO: probably want to pivot the whole class this way eventually...?
# No longer recall exactly why we went with the 'fresh init + attribute
# setting' approach originally...tho there's clearly some impedance
# mismatch going on between "I want stuff to happen in my config's
# instantiation" and "I want cloning to not trigger certain things like
# external data source loading".
# NOTE: this will include lazy=True, see end of method
# Copy/merge/etc all 'private' data sources and attributes
# Non-dict data gets carried over straight (via a copy())
# NOTE: presumably someone could really screw up and change these
# values' types, but at that point it's on them...
# Dict data gets merged (which also involves a copy.copy
# eventually)
# Do what __init__ would've done if not lazy, i.e. load user/system
# conf files.
# Finally, merge() for reals (_load_base_conf_files doesn't do so
# internally, so that data wouldn't otherwise show up.)
# NOTE: must pass in defaults fresh or otherwise global_defaults() gets
# used instead. Except when 'into' is in play, in which case we truly
# want the union of the two.
# The kwargs.
# TODO: consider making this 'hardcoded' on the calling end (ie
# inside clone()) to make sure nobody accidentally nukes it via
# subclassing?
# First, ensure we wipe the keypath from _deletions, in case it was
# previously deleted.
# Now we can add it to the modifications structure.
# TODO: could use defaultdict here, but...meh?
# TODO: generify this and the subsequent 3 lines...
# NOTE: because deletions are processed in merge() last, we do not need
# to remove things from _modifications on removal; but we *do* do the
# inverse - remove from _deletions on modification.
# TODO: may be sane to push this step up to callers?
# If we encounter None, it means something higher up than our
# requested keypath is already marked as deleted; so we don't
# have to do anything or go further.
# Otherwise it's presumably another dict, so keep looping...
# Key not found -> nobody's marked anything along this part of
# the path for deletion, so we'll start building it out.
# Then prep for next iteration
# Exited loop -> data must be the leafmost dict, so we can now set our
# deleted key to None
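The keypath walk those comments narrate, as a sketch (hypothetical helper name; the flat-vs-nested detail simplified):

```python
def track_removal(deletions, keypath):
    """Mark ``keypath`` (a tuple of keys) as deleted in ``deletions``."""
    data = deletions
    for key in keypath[:-1]:
        if key in data:
            if data[key] is None:
                # Something higher up the keypath is already marked as
                # deleted; nothing more to do.
                return
        else:
            # Key not found -> nothing along this part of the path is
            # marked yet, so start building it out.
            data[key] = {}
        # Prep for next iteration
        data = data[key]
    # Exited loop -> data is the leafmost dict; set the deleted key
    data[keypath[-1]] = None
```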
# TODO: for chrissakes just make it return instead of mutating?
# Dict values whose keys also exist in 'base' -> recurse
# (But only if both types are dicts.)
# Fileno-bearing objects are probably 'real' files which do not
# copy well & must be passed by reference. Meh.
# New values get set anew
# Dict values get reconstructed to avoid being references to the
# updates dict, which can lead to nasty state-bleed bugs otherwise
# Non-dict values just get set straight
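Those merge rules translate into roughly the following (a simplified stand-in that, as the TODO laments, mutates `base` in place):

```python
import copy


def merge_dicts(base, updates):
    for key, value in updates.items():
        if key in base and isinstance(base[key], dict) and isinstance(value, dict):
            # Dict values whose keys also exist in 'base' -> recurse
            merge_dicts(base[key], value)
        elif isinstance(value, dict):
            # Reconstruct dicts so 'base' doesn't hold references into
            # 'updates' (avoids nasty state-bleed bugs).
            base[key] = {}
            merge_dicts(base[key], value)
        elif hasattr(value, "fileno"):
            # Fileno-bearing objects are probably 'real' files which do
            # not copy well & must be passed by reference. Meh.
            base[key] = value
        else:
            # Non-dict values just get set straight (via a copy)
            base[key] = copy.copy(value)
    return base
```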
# Not there, nothing to excise
# NOTE: not testing for whether base[key] exists; if something's
# listed in a deletions structure, it must exist in some source
# somewhere, and thus also in the cache being obliterated.
# implicitly None
# TODO: precompile the keys into regex objects
# NOTE: generifies scanning so it can be used to scan for >1 pattern at
# once, e.g. in FailingResponder.
# Only look at stream contents we haven't seen yet, to avoid dupes.
# Search, across lines if necessary
# Update seek index if we've matched
# Iterate over findall() response in case >1 match occurred.
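The seek-index scanning scheme can be sketched as follows (hypothetical class; the real responder machinery also handles response writing and failure sentinels):

```python
import re


class StreamScanner:
    def __init__(self):
        self.index = 0  # How far into the stream we've already scanned

    def pattern_matches(self, stream_contents, pattern):
        # Only look at stream contents we haven't seen yet, to avoid dupes
        new = stream_contents[self.index:]
        # Search, across lines if necessary
        matches = re.findall(pattern, new, re.S)
        # Update seek index if we've matched
        if matches:
            self.index += len(new)
        return matches
```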
# Behave like regular Responder initially
# Also check stream for our failure sentinel
# Error out if we seem to have failed after a previous response.
# Once we see that we had a response, take note
# Again, behave regularly by default.
# Typically either tasks.py or tasks/__init__.py
# Will be 'the dir tasks.py is in', or 'tasks/', in both cases this
# is what wants to be in sys.path for "from . import sibling"
# Will be "the directory above the spot that 'import tasks' found",
# namely the parent of "your task tree", i.e. "where project level
# config files are looked for". So, same as enclosing_dir for
# tasks.py, but one more level up for tasks/__init__.py...
# it's a package, so we have to go up again
# Get the enclosing dir on the path
# Actual import
# so 'from . import xxx' works
# Return the module and the folder it was found in
# TODO: could introduce config obj here for transmission to Collection
# TODO: otherwise Loader has to know about specific bits to transmit, such
# as auto-dashes, and has to grow one of those for every bit Collection
# ever needs to know
# Lazily determine default CWD if configured value is falsey
# walk the path upwards to check for dynamic import
# buffalo buffalo
# Arguments present always, even when wrapped as a different binary
# Arguments pertaining specifically to invocation as 'invoke' itself
# (or as other arbitrary-task-executing programs, like 'fab')
# Other class-level global variables a subclass might override sometime
# maybe?
# TODO 3.0: rename binary to binary_help_name or similar. (Or write
# code to autogenerate it from binary_names.)
# Now that we have parse results handy, we can grab the remaining
# config bits:
# - runtime config, as it is dependent on the runtime flag/env var
# - the overrides config level, as it is composed of runtime flag data
# NOTE: only fill in values that would alter behavior, otherwise we
# want the defaults to come through.
# Handle "fill in config values at start of runtime", which for now is
# just sudo password
# Create an initial config, which will hold defaults & values from
# most config file locations (all but runtime.) Used to inform
# loading & parsing behavior.
# Parse the given ARGV with our CLI parsing machinery, resulting in
# things like self.args (core args/flags), self.collection (the
# loaded namespace, which may be affected by the core flags) and
# self.tasks (the tasks requested for exec and their own
# args/flags)
# Handle collection concerns including project config
# Parse remainder of argv as task-related input
# End of parsing (typically bailout stuff like --list, --help)
# Update the earlier Config with new values from the parse step -
# runtime config file contents and flag-derived overrides (e.g. for
# run()'s echo, warn, etc options.)
# Create an Executor, passing in the data resulting from the prior
# steps, then tell it to execute the tasks.
# Print error messages from parser, runner, etc if necessary;
# prevents messy traceback but still clues interactive user into
# problems.
# Terminate execution unless we were told not to.
# Same behavior as Python itself outside of REPL
# Obtain core args (sets self.core)
# Set interpreter bytecode-writing flag
# Enable debugging from here on out, if debug flag was given.
# (Prior to this point, debugging requires setting INVOKE_DEBUG).
# Short-circuit if --version
# Print (dynamic, no tasks required) completion script if requested
# Load a collection of tasks unless one was already set.
# If no bundled namespace & --help was given, just print it and
# exit. (If we did have a bundled namespace, core --help will be
# handled *after* the collection is loaded & parsing is done.)
# Set these up for potential use later when listing tasks
# TODO: be nice if these came from the config...! Users would love to
# say they default to nested for example. Easy 2.x feature-add.
# TODO: load project conf, if possible, gracefully
# Core (no value given) --help output (only when bundled namespace)
# Print per-task help, if necessary
# TODO: feels real dumb to factor this out of Parser, but...we
# should?
# Print discovered tasks if necessary
# will be True or string
# Not just --list, but --list some-root - do moar work
# Print completion helpers if necessary
# NOTE: can't reuse self.parser as it has likely been mutated
# between when it was set and now.
# Fallback behavior if no tasks were given & no default specified
# (mostly a subroutine for overriding purposes)
# NOTE: when there is a default task, Executor will select it when no
# tasks were found in CLI parsing.
# TODO: why the heck is this not builtin to importlib?
# TODO: worth trying to wrap both of these and raising ImportError
# for cases where module exists but class name does not? More
# "normal" but also its own possible source of bugs/confusion...
# XXX: defaults to empty string if 'argv' is '[]' or 'None'
# TODO 3.0: ugh rename this or core_args, they are too confusing
# NOTE: start, coll_name both fall back to configuration values within
# Loader (which may, however, get them from our config.)
# This is the earliest we can load project config, so we should -
# allows project config to affect the task parsing step!
# TODO: is it worth merging these set- and load- methods? May
# require more tweaking of how things behave in/after __init__.
# Update core context w/ core_via_task args, if and only if the
# via-task version of the arg was truly given a value.
# TODO: push this into an Argument-aware Lexicon subclass and
# .update()?
# Really wish textwrap worked better for this.
# Short circuit if no tasks to show (Collection now implements bool)
# TODO: now that flat/nested are almost 100% unified, maybe rethink
# this a bit?
# Start with just the name and just the aliases, no prefixes or
# dots.
# If displaying a sub-collection (or if we are displaying a given
# namespace/root), tack on some dots to make it clear these names
# require dotted paths to invoke.
# Nested? Indent, and add asterisks to default-tasks.
# Flat? Prefix names and aliases with ancestor names to get full
# dotted path; and give default-tasks their collection name as the
# first alias.
# Make sure leading dots are present for subcollections if
# scoped display
# Generate full name and help columns and add to pairs.
# Determine whether we're at max-depth or not
# NOTE: only adding coll-oriented pair if limiting by depth
# Recurse, if not already at max depth
# Sanity: we can't cleanly honor the --list-depth argument without
# changing the data schema or otherwise acting strangely; and it also
# doesn't make a ton of sense to limit depth when the output is for a
# script to handle. So we just refuse, for now. TODO: find better way
# TODO: consider using something more formal re: the format this emits,
# eg json-schema or whatever. Would simplify the
# relatively-concise-but-only-human docs that currently describe this.
# TODO: do use cases w/ bundled namespace want to display things like
# root and depth too? Leaving off for now...
# TODO: worth stripping this out for nested? since it's signified with
# asterisk there? ugggh
# TODO: trim/prefix dots
# Calculate column sizes: don't wrap flag specs, give what's left over
# to the descriptions.
# Wrap descriptions/help text
# Print flag spec + padding
# Print help text as needed
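The column layout those comments describe might look like this sketch (illustrative helper; widths and padding are assumptions):

```python
import textwrap


def print_columns(pairs, width=79, indent=2):
    """Print (flag spec, help text) pairs in two aligned columns."""
    # Don't wrap flag specs; give what's left over to the descriptions.
    spec_width = max(len(spec) for spec, _ in pairs)
    help_width = width - spec_width - indent - 3
    for spec, help_text in pairs:
        # Wrap descriptions/help text
        lines = textwrap.wrap(help_text, help_width) or [""]
        # Print flag spec + padding + first help line
        print("{}{}   {}".format(" " * indent, spec.ljust(spec_width), lines[0]))
        # Continuation lines align under the help column
        for line in lines[1:]:
            print(" " * (indent + spec_width + 3) + line)
```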
#: The fully merged `.Config` object appropriate for this context.
#: `.Config` settings (see their documentation for details) may be
#: accessed like dictionary keys (``c.config['foo']``) or object
#: attributes (``c.config.foo``).
#: As a convenience shorthand, the `.Context` object proxies to its
#: ``config`` attribute in the same way - e.g. ``c['foo']`` or
#: ``c.foo`` returns the same value as ``c.config['foo']``.
#: A list of commands to run (via "&&") before the main argument to any
#: `run` or `sudo` calls. Note that the primary API for manipulating
#: this list is `prefix`; see its docs for details.
#: A list of directories to 'cd' into before running commands with
#: `run` or `sudo`; intended for management via `cd`, please see its
#: docs for details.
# Allows Context to expose a .config attribute even though DataProxy
# otherwise considers it a config key.
# NOTE: mostly used by client libraries needing to tweak a Context's
# config at execution time; i.e. a Context subclass that bears its own
# unique data may want to be stood up when parameterizing/expanding a
# call list at start of a session, with the final config filled in at
# runtime.
# NOTE: broken out of run() to allow for runner class injection in
# Fabric/etc, which needs to juggle multiple runner class types (local and
# remote).
# NOTE: this is for runner injection; see NOTE above _run().
# TODO: allow subclassing for 'get the password' so users who REALLY
# want lazy runtime prompting can have it easily implemented.
# TODO: want to print a "cleaner" echo with just 'sudo <command>'; but
# hard to do as-is, obtaining config data from outside a Runner one
# holds is currently messy (could fix that), if instead we manually
# inspect the config ourselves that duplicates logic. NOTE: once we
# figure that out, there is an existing, would-fail-if-not-skipped test
# for this behavior in test/context.py.
# TODO: once that is done, though: how to handle "full debug" output
# exactly (display of actual, real full sudo command w/ -S and -p), in
# terms of API/config? Impl is easy, just go back to passing echo
# through to 'run'...
# Ensure we merge any user-specified watchers with our own.
# NOTE: If there are config-driven watchers, we pull those up to the
# kwarg level; that lets us merge cleanly without needing complex
# config-driven "override vs merge" semantics.
# TODO: if/when those semantics are implemented, use them instead.
# NOTE: config value for watchers defaults to an empty list; and we
# want to clone it to avoid actually mutating the config.
# Transmute failures driven by our FailingResponder, into auth
# failures - the command never even ran.
# TODO: wants to be a hook here for users that desire "override a
# bad config value for sudo.password" manual input
# NOTE: as noted in #294 comments, we MAY in future want to update
# this so run() is given ability to raise AuthFailure on its own.
# For now that has been judged unnecessary complexity.
# NOTE: not bothering with 'reason' here, it's pointless.
# Reraise for any other error so it bubbles up normally.
# TODO: wonder if it makes sense to move this part of things inside Runner,
# which would grow a `prefixes` and `cwd` init kwargs or similar. The less
# that's stuffed into Context, probably the better.
# TODO: should this be None? Feels cleaner, though there may be
# benefits to it being an empty string, such as relying on a no-arg
# `cd` typically being shorthand for "go to user's $HOME".
# get the index for the subset of paths starting with the last / or ~
# TODO: see if there's a stronger "escape this path" function somewhere
# we can reuse. e.g., escaping tildes or slashes in filenames.
# Set up like any other Context would, with the config
# Pull out behavioral kwargs
# The rest must be things like run/sudo - mock Context method info
# For each possible value type, normalize to iterable of Result
# objects (possibly repeating).
# Unknown input value: cry
# Save results for use by the method
# Wrap the method in a Mock
# First turn everything into an iterable
# Then turn everything within into a Result
# Finally, turn that iterable into an iteratOR, depending on repeat
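Those three normalization steps can be sketched like so (with a toy Result stand-in; the `repeat` flag decides cycling vs one-shot iteration):

```python
import itertools


class Result:
    def __init__(self, stdout=""):
        self.stdout = stdout


def normalize(value, repeat=True):
    # First turn everything into an iterable
    if isinstance(value, (str, Result)):
        value = [value]
    # Then turn everything within into a Result
    results = [x if isinstance(x, Result) else Result(stdout=x) for x in value]
    # Finally, turn that iterable into an iteratOR, depending on repeat
    return itertools.cycle(results) if repeat else iter(results)
```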
# TODO: _maybe_ make this more metaprogrammy/flexible (using __call__ etc)?
# Pretty worried it'd cause more hard-to-debug issues than it's presently
# worth. Maybe in situations where Context grows a _lot_ of methods (e.g.
# in Fabric 2; though Fabric could do its own sub-subclass in that case...)
# Dicts need to try direct lookup or regex matching
# TODO: could optimize by skipping this if not any regex
# objects in keys()?
# Nope, nothing did match.
# Here, the value was either never a dict or has been extracted
# from one, so we can assume it's an iterable of Result objects due
# to work done by __init__.
# Populate Result's command string with what matched unless
# explicitly given
# raise_from(NotImplementedError(command), None)
# TODO: perform more convenience stuff associating args/kwargs with the
# result? E.g. filling in .command, etc? Possibly useful for debugging
# if one hits unexpected-order problems with what they passed in to
# TODO: this completely nukes the top-level behavior of sudo(), which
# could be good or bad, depending. Most of the time I think it's good.
# No need to supply dummy password config, etc.
# TODO: see the TODO from run() re: injecting arg/kwarg values
# Get value & complain if it's not a dict.
# TODO: should we allow this to set non-dict values too? Seems vaguely
# pointless, at that point, just make a new MockContext eh?
# OK, we're good to modify, so do so.
# TODO: expand?
# TODO: truncate command?
#: A tuple of `ExceptionWrappers <invoke.util.ExceptionWrapper>` containing
#: the initial thread constructor kwargs (because `threading.Thread`
#: subclasses should always be called with kwargs) and the caught exception
#: for that thread as seen by `sys.exc_info` (so: type, value, traceback).
#: .. note::
#:     The ordering of this attribute is not well-defined.
#:     Thread kwargs which appear to be very long (e.g. IO
#:     buffers) will be truncated when printed, to avoid huge
#:     unreadable error display.
# Build useful display
# FIXME: initial should not be None
# FIXME: Why isn't there str.partition for lists? There must be a
# better way to do this. Split argv around the double-dash remainder
# sentinel.
# No remainder == body gets all
# [1:] to strip off remainder itself
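The "str.partition for lists" wished for above reduces to a few lines (illustrative helper name):

```python
def split_on_remainder(argv):
    """Split argv into (body, remainder) around the first '--' sentinel."""
    try:
        ddash = argv.index("--")
    except ValueError:
        # No remainder == body gets all
        return argv, []
    # ddash + 1 strips off the '--' itself
    return argv[:ddash], argv[ddash + 1:]
```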
# Handle non-space-delimited forms, if not currently expecting a
# flag value and still in valid parsing territory (i.e. not in
# "unknown" state which implies store-only)
# NOTE: we do this in a few steps so we can
# split-then-check-validity; necessary for things like when the
# previously seen flag optionally takes a value.
# Equals-sign-delimited flags, eg --foo=bar or -f=bar
# Contiguous boolean short flags, e.g. -qv
# Handle boolean flag block vs short-flag + value. Make
# sure not to test the token as a context flag if we've
# passed into 'storing unknown stuff' territory (e.g. on a
# core-args pass, handling what are going to be task args)
# Here, we've got some possible mutations queued up, and 'token'
# may have been overwritten as well. Whether we apply those and
# continue as-is, or roll it back, depends:
# - If the parser wasn't waiting for a flag value, we're already on
# the right track, so apply mutations and move along to the
# handle() step.
# - If we ARE waiting for a value, and the flag expecting it ALWAYS
# wants a value (it's not optional), we go back to using the
# original token. (TODO: could reorganize this to avoid the
# sub-parsing in this case, but optimizing for human-facing
# execution isn't critical.)
# - Finally, if we are waiting for a value AND it's optional, we
# inspect the first sub-token/mutation to see if it would otherwise
# have been a valid flag, and let that determine what we do (if
# valid, we apply the mutations; if invalid, we reinstate the
# original token.)
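The two non-space-delimited forms mentioned above can be sketched as a small splitter. This is a hypothetical helper, not invoke's actual implementation; `short_bool_flags` is an assumed set of known boolean short-flag letters.

```python
def split_token(token, short_bool_flags):
    """Sketch: split a token into sub-tokens to re-parse, or return
    None when no splitting applies. Purely illustrative."""
    # Equals-sign-delimited flags, e.g. --foo=bar or -f=bar
    if token.startswith("-") and "=" in token:
        flag, _, value = token.partition("=")
        return [flag, value]
    # Contiguous boolean short flags, e.g. -qv -> -q, -v
    if (
        len(token) > 2
        and token.startswith("-")
        and not token.startswith("--")
        and all(ch in short_bool_flags for ch in token[1:])
    ):
        return ["-" + ch for ch in token[1:]]
    return None
```

The real parser additionally validates the sub-tokens before committing to them, which is the rollback logic described above.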
# In case StateMachine does anything in __init__
# Do we have a current flag, and does it expect a value (vs being a
# bool/toggle)?
# OK, this flag is one that takes values.
# Is it a list type (which has only just been switched to)? Then it'll
# always accept more values.
# TODO: how to handle somebody wanting it to be some other iterable
# like tuple or custom class? Or do we just say unsupported?
# Not a list, okay. Does it already have a value?
# If it doesn't have one, we're waiting for one (which tells the parser
# how to proceed and typically to store the next token.)
# TODO: in the negative case here, we should do something else instead:
# - Except, "hey you screwed up, you already gave that flag!"
# - Overwrite, "oh you changed your mind?" - which requires more work
# elsewhere too, unfortunately. (Perhaps additional properties on
# Argument that can be queried, e.g. "arg.is_iterable"?)
# Handle unknown state at the top: we don't care about even
# possibly-valid input if we've encountered unknown input.
# Flag
# Value for current flag
# Positional args (must come above context-name check in case we still
# need a posarg and the user legitimately wants to give it a value that
# just happens to be a valid context name.)
# New context
# Initial-context flag being given as per-task flag (e.g. --help)
# Special-case for core --help flag: context name is used as value.
# All others: just enter the 'switch to flag' parser state
# TODO: handle inverse core flags too? There are none at the
# moment (e.g. --no-dedupe is actually 'no_dedupe', not a
# default-False 'dedupe') and it's up to us whether we actually
# put any in place.
# Unknown
# Start off the unparsed list
# Ensure all of context's positional args have been given.
# Barf if we needed a value and didn't get one
# Handle optional-value flags; at this point they were not given an
# explicit value, but they were seen, ergo they should get treated like
# bools.
# Skip casting so the bool gets preserved
# No flag is currently being examined, or one is but it doesn't take an
# optional value? Ambiguity isn't possible.
# We *are* dealing with an optional-value flag, but it's already
# received a value? There can't be ambiguity here either.
# Otherwise, there *may* be ambiguity if 1 or more of the below tests
# fail.
# Unfilled posargs still exist?
# Value matches another valid task/context name?
# Sanity check for ambiguity w/ prior optional-value flag
# Also tie it off, in case prior had optional value or etc. Seems to be
# harmless for other kinds of flags. (TODO: this is a serious indicator
# that we need to move some of this flag-by-flag bookkeeping into the
# state machine bits, if possible - as-is it was REAL confusing re: why
# this was manually required!)
# Set flag/arg obj
# Update state
# Try fallback to initial/core flag
# If it wasn't in either, raise the original context's
# exception, as that's more useful / correct.
# Bookkeeping for iterable-type flags (where the typical 'value
# non-empty/nondefault -> clearly it got its value already' test is
# insufficient)
# Handle boolean flags (which can immediately be updated)
# TODO: dynamic type for kind
# T = TypeVar('T')
# Special case: list-type args start out as empty list, not None.
# Another: incrementable args start out as their default value.
# TODO: store this default value somewhere other than signature of
# Argument.__init__?
# TODO: should probably be optional instead
# Default to do-nothing/identity function
# If cast, set to self.kind, which should be str/int/etc
# If self.kind is a list, append instead of using cast func.
# If incrementable, just increment.
# TODO: explode nicely if self.value was not an int to start
# with
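The default-value and value-setting rules described above (list args start as an empty list, incrementable args start at their default, and setting a value either casts, appends, or increments) can be sketched with a hypothetical `Arg` class; this is not invoke's real `Argument`.

```python
class Arg:
    """Illustrative sketch of the kind/default/set-value behavior."""
    def __init__(self, kind=str, default=None, incrementable=False):
        self.kind = kind
        self.incrementable = incrementable
        if kind is list:
            self.value = []            # list-type: empty list, not None
        elif incrementable:
            self.value = default or 0  # incrementable: start at default
        else:
            self.value = default

    def set_value(self, raw, cast=True):
        if self.kind is list:
            self.value.append(raw)     # append instead of casting
        elif self.incrementable:
            self.value += 1            # just increment
        else:
            self.value = self.kind(raw) if cast else raw
```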
# TODO: is there no "split into two buckets on predicate" builtin?
# Long-style flags win over short-style ones, so the first item of
# comparison is simply whether the flag is a single character long (with
# non-length-1 flags coming "first" [lower number])
# Next item of comparison is simply the strings themselves,
# case-insensitive. They will compare alphabetically if compared at this
# stage.
# Finally, if the case-insensitive test also matched, compare
# case-sensitive, but inverse (with lowercase letters coming first)
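The three-stage comparison above maps naturally onto a sort key tuple. A sketch (the `swapcase` trick inverts ASCII case ordering so lowercase letters sort first):

```python
def flag_key(name):
    # 1) long flags (len > 1) before short ones
    # 2) case-insensitive alphabetical
    # 3) tie-break case-sensitively, lowercase first
    return (len(name) == 1, name.lower(), name.swapcase())

ordering = sorted(["q", "Q", "verbose", "Version", "v"], key=flag_key)
```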
# Named slightly more verbose so Sphinx references can be unambiguous.
# Got real sick of fully qualified paths.
# No need for Lexicon here
# Uniqueness constraint: no name collisions
# First name used as "main" name for purposes of aliasing
# NOT arg.name
# Note positionals in distinct, ordered list attribute
# Add names & nicknames to flags, args
# Add attr_name to args, but not flags
# Add to inverse_flags if required
# Invert the 'main' flag name here, which will be a dashed version
# of the primary argument name if underscore-to-dash transformation
# occurred.
# TODO: should probably be a method on Lexicon/AliasDict
# Obtain arg obj
# Determine expected value type, if any
# Format & go
# Short flags are -f VAL, long are --foo=VAL
# When optional, also, -f [VAL] and --foo[=VAL]
# no value => boolean
# check for inverse
# Tack together
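The four value-display formats above can be sketched as a small formatter (a hypothetical helper, not the library's actual help code):

```python
def format_flag(name, value_label=None, optional=False):
    # Short flags render as "-f VAL", long flags as "--foo=VAL";
    # optional values get brackets; no value means a boolean flag.
    if value_label is None:
        return name
    if name.startswith("--"):
        template = "{}[={}]" if optional else "{}={}"
    else:
        template = "{} [{}]" if optional else "{} {}"
    return template.format(name, value_label)
```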
# TODO: argument/flag API must change :(
# having to call to_flag on 1st name of an Argument is just dumb.
# To pass in an Argument object to help_for may require moderate
# changes?
# Regular flag names
# Inverse flag names sold separately
# Strip out program name (scripts give us full command line)
# TODO: this may not handle path/to/script though?
# Tokenize (shlex will have to do)
# Handle flags (partial or otherwise)
# Gently parse invocation to obtain 'current' context.
# Use last seen context in case of failure (required for
# otherwise-invalid partial invocations being completed).
# Fall back to core context if no context seen.
# Unknown flags (could be e.g. only partially typed out; could be
# wholly invalid; doesn't matter) complete with flags.
# Long flags - partial or just the dashes - complete w/ long flags
# Just a dash, completes with all flags
# Otherwise, it's something entirely invalid (a shortflag not
# recognized, or a java style flag like -foo) so return nothing
# (the shell will still try completing with files, but that doesn't
# hurt really.)
# Known flags complete w/ nothing or tasks, depending
# Flags expecting values: do nothing, to let default (usually
# file) shell completion occur (which we actively want in this
# case.)
# Not taking values (eg bools): print task names
# If not a flag, is either task name or a flag value, so just complete
# task names.
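The partial-flag completion rules above can be sketched as follows; known flags that expect values (where the shell's default file completion is wanted) are out of scope here, and the function is illustrative rather than the real completion code.

```python
def complete_flag(token, long_flags, short_flags):
    if token.startswith("--"):
        # Long flags - partial or just the dashes - complete w/ long flags
        return sorted(f for f in long_flags if f.startswith(token))
    if token == "-":
        # Just a dash completes with all flags
        return sorted(long_flags | short_flags)
    # Anything else (unrecognized short flag, java-style -foo): nothing
    return []
```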
# Just stick aliases after the thing they're aliased to. Sorting isn't
# so important that it's worth bending over backwards here.
# Grab all .completion files in invoke/completion/. (These used to have no
# suffix, but surprise, that's super fragile.)
# Choose one arbitrary program name for script's own internal invocation
# (also used to construct completion function names when necessary)
# based on simplejson by
# __author__ = 'Bob Ippolito <bob@redivi.com>'
# cached encoder
# could accelerate with writelines in some versions of Python, at
# a debuggability cost
# Modified from original to support Python 2.4, see
# http://code.google.com/p/simplejson/issues/detail?id=53
# sentinel node for doubly linked list
# key --> [key, prev, next]
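The sentinel plus `key --> [key, prev, next]` layout described above is the classic pure-Python OrderedDict recipe. A minimal sketch of the idea (not the actual compat class):

```python
class MiniOrderedDict(dict):
    """Insertion-order tracking via a circular doubly linked list."""
    def __init__(self):
        super().__init__()
        self._root = root = [None, None, None]  # sentinel node
        root[1] = root[2] = root                # prev/next point to self
        self._map = {}                          # key --> [key, prev, next]

    def __setitem__(self, key, value):
        if key not in self:
            root = self._root
            last = root[1]
            node = [key, last, root]
            last[2] = root[1] = self._map[key] = node
        super().__setitem__(key, value)

    def ordered_keys(self):
        node, keys = self._root[2], []
        while node is not self._root:
            keys.append(node[0])
            node = node[2]
        return keys
```

Lookups stay O(1) because the map points straight at each node; insertion order falls out of the linked list.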
#ESCAPE = re.compile(ur'[\x00-\x1f\\"\b\f\n\r\t\u2028\u2029]')
# This is required because u() will mangle the string and ur'' isn't valid
# Python 3 syntax
#ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i))
#return '\\u{0:04x}'.format(n)
# surrogate pair
#return '\\u{0:04x}\\u{1:04x}'.format(s1, s2)
# This is for extremely simple cases and benchmarks.
# This doesn't pass the iterator directly to ''.join() because the
# exceptions aren't as detailed.  The list call should be roughly
# equivalent to the PySequence_Fast that ''.join() would do.
# Check for specials. Note that this type of test is processor
# and/or platform-specific, so do tests which don't depend on
# the internals.
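The internals-free tests for float specials mentioned above rely on two portable facts: NaN is the only value unequal to itself, and the infinities compare equal to `float('inf')`. A sketch in the spirit of simplejson's float serializer:

```python
def floatstr(o, _inf=float("inf")):
    # Platform-independent checks: no struct inspection needed.
    if o != o:
        return "NaN"
    if o == _inf:
        return "Infinity"
    if o == -_inf:
        return "-Infinity"
    return repr(o)
```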
## HACK: hand-optimized bytecode; turn globals into locals
# _skipkeys must be True
# NOTE (3.1.0): HjsonDecodeError may still be imported from this module for
# compatibility, but it was never in the __all__
# The struct module in Python 2.4 would get frexp() out of range here
# when an endian is specified in the format string. Fixed in Python 2.5+
# Use a slice to prevent IndexError from being raised
# Skip whitespace.
# Hjson allows comments
# skip until eol
# callers make sure that string starts with " or '
# Content contains zero or more unescaped string characters
# Terminator is the end of string, a literal control character,
# or a backslash denoting that an escape sequence follows
# If not a unicode escape sequence, must be in the lookup table
# Unicode escape sequence
# Check for surrogate pair on UCS-4 systems
# Note that this will join high/low surrogate pairs
# but will also pass unpaired surrogates through
# Append the unescaped character
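The surrogate-pair join mentioned above combines two UTF-16 escape units into one code point. A sketch of the arithmetic JSON decoders use for `\uD83D\uDE00`-style escapes:

```python
def join_surrogates(hi, lo):
    # Combine a high/low UTF-16 surrogate pair into one code point.
    assert 0xD800 <= hi <= 0xDBFF and 0xDC00 <= lo <= 0xDFFF
    return chr(0x10000 + (((hi - 0xD800) << 10) | (lo - 0xDC00)))
```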
# we are at ''' - get indent
# skip white/to (newline)
# When parsing multiline string values, we must look for ' characters
# remove last EOL
# Trivial empty object
# Look-ahead for trivial empty array
# Ensure that raw_decode bails on negative indexes, the regex
# would otherwise mask this behavior. #98
# strip UTF-8 bom
# If blank or comment only file, return dict
# assume we have a root object without braces
# test if we are dealing with a single JSON value instead (true/false/null/num/"")
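The BOM-stripping step above is a simple prefix check; a sketch using the stdlib constant:

```python
import codecs

def strip_bom(text):
    # Remove a leading UTF-8 byte order mark, if present.
    if text.startswith(codecs.BOM_UTF8):
        return text[len(codecs.BOM_UTF8):]
    return text
```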
# Note that this exception is used from _speedups
# NEEDSESCAPE tests if the string can be written without escapes
# NEEDSQUOTES tests if the string can be written as a quoteless string (like needsEscape but without \\ and \")
# NEEDSESCAPEML tests if the string can be written as a multiline string (like needsEscape but without \n, \r, \\, \", \t)
# Check if we can insert this name without quotes
# return without quotes
# Check if we can insert this string without quotes
# see hjson syntax (must not parse as true, false, null or number)
# If the string contains no control characters, no quote characters, and no
# backslash characters, then we can safely slap some quotes around it.
# Otherwise we first check if the string can be expressed in multiline
# format or we must replace the offending characters with safe escape
# sequences.
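The quoteless-string decision described above can be roughly sketched as follows. The real hjson tests (NEEDSESCAPE and friends) are regex-based and more thorough; this is an approximation for illustration only, with hypothetical names.

```python
import re

LITERAL = re.compile(r"^(true|false|null)$")
NUMBER = re.compile(r"^-?\d+(\.\d+)?([eE][+-]?\d+)?$")

def can_write_quoteless(s):
    if not s or s != s.strip():
        return False                  # leading/trailing space needs quotes
    if LITERAL.match(s) or NUMBER.match(s):
        return False                  # must not parse as true/false/null/number
    if any(ch in s for ch in "\n\r\t\"\\"):
        return False                  # control/quote/backslash chars need quotes
    return s[0] not in "{}[],:#'"     # punctuation that starts other tokens
```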
# gap += indent;
# The string contains only a single line. We still use the multiline
# format as it avoids escaping the \ character (e.g. when used in a
# regex).
# Python 2.5, at least the one that ships on Mac OS X, calculates
# 2 ** 53 as 0! It manages to calculate 1 << 53 correctly.
#'hjson.tests.test_tool', # fails on windows
#self.assertEqual(result, expect,
# Several optimizations were made that skip over calls to
# the whitespace regex, so this test is designed to try and
# exercise the uncommon cases. The array cases are already covered.
# the object_pairs_hook takes priority over the object_hook
# http://code.google.com/p/simplejson/issues/detail?id=85
# https://github.com/simplejson/simplejson/pull/38
# https://github.com/simplejson/simplejson/issues/98
# 2007-10-05
# http://json.org/JSON_checker/test/fail1.json
# '"A JSON payload should be an object or array, not a string."',
# http://json.org/JSON_checker/test/fail2.json
# http://json.org/JSON_checker/test/fail3.json
#'{unquoted_key: "keys must be quoted"}',
# http://json.org/JSON_checker/test/fail4.json
# '["extra comma",]',
# http://json.org/JSON_checker/test/fail5.json
# http://json.org/JSON_checker/test/fail6.json
# http://json.org/JSON_checker/test/fail7.json
# http://json.org/JSON_checker/test/fail8.json
# http://json.org/JSON_checker/test/fail9.json
# '{"Extra comma": true,}',
# http://json.org/JSON_checker/test/fail10.json
# http://json.org/JSON_checker/test/fail11.json
# http://json.org/JSON_checker/test/fail12.json
# http://json.org/JSON_checker/test/fail13.json
# http://json.org/JSON_checker/test/fail14.json
# http://json.org/JSON_checker/test/fail15.json
# http://json.org/JSON_checker/test/fail16.json
# http://json.org/JSON_checker/test/fail17.json
# http://json.org/JSON_checker/test/fail18.json
# '[[[[[[[[[[[[[[[[[[[["Too deep"]]]]]]]]]]]]]]]]]]]]',
# http://json.org/JSON_checker/test/fail19.json
# http://json.org/JSON_checker/test/fail20.json
# http://json.org/JSON_checker/test/fail21.json
# http://json.org/JSON_checker/test/fail22.json
# http://json.org/JSON_checker/test/fail23.json
# http://json.org/JSON_checker/test/fail24.json
#"['single quote']",
# http://json.org/JSON_checker/test/fail25.json
# http://json.org/JSON_checker/test/fail26.json
# http://json.org/JSON_checker/test/fail27.json
# http://json.org/JSON_checker/test/fail28.json
# http://json.org/JSON_checker/test/fail29.json
# http://json.org/JSON_checker/test/fail30.json
# http://json.org/JSON_checker/test/fail31.json
# http://json.org/JSON_checker/test/fail32.json
# http://json.org/JSON_checker/test/fail33.json
# http://code.google.com/p/simplejson/issues/detail?id=3
# misc based on coverage
# http://code.google.com/p/simplejson/issues/detail?id=46
# ('[42', "Expecting ',' delimiter", 3),
# ('["spam"', "Expecting ',' delimiter", 7),
# ('{"spam":42', "Expecting ',' delimiter", 10),
# Python 2.6+
# Python 2.5
# outfile will get overwritten by tool, so the delete
# may not work on some platforms. Do it manually.
# The type might not be the same (int and Decimal) but they
# should still compare equal.
# use_decimal=True is the default
# Simulate a subinterpreter that reloads the Python modules but not
# the C code https://github.com/simplejson/simplejson/issues/34
# Default is True
# Ensure that the "default" does not get called
# Ensure that the "default" gets called
# http://bugs.python.org/issue6105
# [construct from literals, objects, etc.]
# Finally, if args[0] is an integer, store it
# [various methods]
# [various ways to multiply AwesomeInt objects]
# ... finally, if the right-hand operand is not awesome enough,
# try to do a normal integer multiplication
# the C API uses an accumulator that collects after 100,000 appends
# https://github.com/simplejson/simplejson/issues/106
# from http://json.org/JSON_checker/test/pass2.json
# test in/out equivalence and parsing
# NOTE: Python 2.4 textwrap.dedent converts tabs to spaces,
# indent=0 should emit newlines
# indent=None is more compact
# Ensure that separators still works
# Force the new defaults
# Added in 2.1.4
# dump
# dbg
# with open(name + "_dbg1.txt", "w") as tmp: tmp.write(hjson1.encode("utf-8"))
# with open(name + "_dbg2.txt", "w") as tmp: tmp.write(hjson2.encode("utf-8"))
# with codecs.open(name + "_dbg3.txt", 'w', 'utf-8') as tmp: hjson.dump(data, tmp)
# final check fails on py2.6 because of string formatting issues
# ignore/not supported
# The bytes type is intentionally not used in most of these tests
# under Python 3 because the decoder immediately coerces to str before
# calling scanstring. In Python 2 we are testing the code paths
# for both unicode and str.
# The reason this is done is because Python 3 would require
# entirely different code paths for parsing bytes and str.
# It may look strange to join strings together, but Python is drunk.
# https://gist.github.com/etrepum/5538443
# these have different behavior given UTF8 input, because the surrogate
# pair may be joined (in maxunicode > 65535 builds)
#s = '"\\u{0:04x}"'.format(i)
# http://code.google.com/p/simplejson/issues/detail?id=48
# http://timelessrepo.com/json-isnt-a-javascript-subset
# incomplete escape sequence
# invalid escape sequence
# invalid escape sequence for low surrogate
# in the ascii range, ensure that everything is the same
# from http://json.org/JSON_checker/test/pass1.json
# ensure that the marker is cleared
# from http://json.org/JSON_checker/test/pass3.json
# None will use the default
# Python 2.5, at least the one that ships on Mac OS X, calculates
# 2 ** 31 as 0! It manages to calculate 1 << 31 correctly.
# coding=utf-8
# Extra stuff people can import
# FIXME: In python 3.10 we can use subprocess.CompletedProcess[bytes | str] instead
# Handle non-numeric and out-of-bounds values
# Determine the bucket for this value
# Determine the color for this value
# Add this spark to the list
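The bucket-determination step above can be sketched as follows (a hypothetical helper assuming `low < high`; non-numeric and out-of-bounds values are rejected first, matching the order of the comments):

```python
def spark_bucket(value, low, high, num_buckets=8):
    # Handle non-numeric and out-of-bounds values
    if not isinstance(value, (int, float)):
        return None
    if value < low or value > high:
        return None
    # Determine the bucket for this value (assumes low < high)
    span = (high - low) / num_buckets
    return min(int((value - low) / span), num_buckets - 1)
```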
# We type `cli` as Any instead of MILC to avoid a circular import
# Store boolean will call us again with the enable/disable flag arguments
# FIXME: We should not be using self in this way
# FIXME: Replace Callable[..., Any] with better definitions
# Setup a lock for thread safety
# Define some basic info
# Initialize all the things
# Sanity Checking
# On some windows platforms (msys2, possibly others) you have to
# execute the command through a subshell. As well, after execution
# stdin is broken so things like milc.questions no longer work.
# We pass `stdin=subprocess.DEVNULL` by default to prevent that.
# Argument Processing
# Run the command
# FIXME: Find a better type signature
# Handle tab completion
# Record the default for this argument
# Determine if it was passed on the command line
# get the file name from the '=' assignment
# assume the file name is next space-sep arg
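The two argv forms described above (`--config-file=path` and `--config-file path`) can be sketched with a hypothetical lookup helper:

```python
def find_config_file(argv, flag="--config-file"):
    for i, token in enumerate(argv):
        if token.startswith(flag + "="):
            return token.split("=", 1)[1]  # '=' assignment form
        if token == flag and i + 1 < len(argv):
            return argv[i + 1]             # next space-separated arg
    return None
```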
# Iterate over the config file options and write them into config
# Coerce values into useful datatypes
# Find the argument's section
# Determine the arg value and source
# Merge this argument into self.config
# Capture the default value
# Generate a sanitized version of our running configuration
# Write the config file atomically.
# Housekeeping
# Write config to disk
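The atomic write mentioned above is typically done by writing to a temp file in the same directory and renaming over the target; `os.replace` is atomic on both POSIX and Windows. A sketch (not the actual MILC code):

```python
import os
import tempfile

def write_atomically(path, contents):
    # Temp file must live in the same directory so the rename cannot
    # cross filesystems.
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as fh:
            fh.write(contents)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```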
# FIXME(skullydazed): This should be simplified
# If they didn't use the context manager use it ourselves
# MILC is the only thing that should have root log handlers
# FIXME: Grab one of the ascii spinners at random instead of line
# Regex was gratefully borrowed from kfir on stackoverflow:
# https://stackoverflow.com/a/45448194
# Avoid .format() so we don't have to worry about the log content
# Check if we should return an answer without asking
# Format the prompt
# Get input from the user
# Prompt for an answer.
# If the user types in one of the options exactly use that
# Massage the answer into a valid integer
# Validate the answer
# Return the answer they chose.
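The answer-handling steps above (exact option match, integer massaging, validation) can be sketched with a hypothetical selection helper:

```python
def pick(options, answer):
    # If the user types in one of the options exactly, use that
    if answer in options:
        return answer
    # Massage the answer into a valid integer
    try:
        index = int(answer)
    except ValueError:
        return None
    # Validate the answer (1-based numbering)
    if 1 <= index <= len(options):
        return options[index - 1]
    return None
```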
# Separate the key (<section>.<option>) from the value
# Extract the section and option from the key
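The two splits above compose naturally with `str.partition`; a sketch with hypothetical naming:

```python
def parse_config_token(token):
    # Separate the key (<section>.<option>) from the value, then
    # extract the section and option from the key.
    key, _, value = token.partition("=")
    section, _, option = key.partition(".")
    return section, option, (value if value else None)
```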
# Process config_tokens
# Validation
# Do what the user wants
# Write a configuration option
# Display a single key
# Display an entire section
# Ending actions
# Empty __init__.py to allow relative imports to work under mypy.
# pylint: disable=locally-disabled, broad-except
# any directory with at least one file is a target
# also detect targets which are ansible roles, looking for standard entry points
# remove the top-level directory if it was included
# ignore nested directories
# handle setup dependencies
# handle target dependencies
# handle symlink dependencies between targets
# this use case is supported, but discouraged
# intentionally primitive analysis of role meta to avoid a dependency on pyyaml
# script based targets are scanned as they may execute a playbook with role dependencies
# try and decode the file as a utf-8 string, skip if it contains invalid chars (binary file)
# script_path and type
# ansible will consider these empty roles, so ansible-test should as well
# static_aliases
# non-group aliases which need to be extracted before group mangling occurs
# modules
# backwards compatibility for when it was not a cloud plugin
# Collect skip entries before group expansion to avoid registering more specific skip entries as less specific versions.
# Collect file paths before group expansion to avoid including the directories.
# Ignore references to test targets, as those must be defined using `needs/target/*` or other target references.
# network platform
# target type
# targets which are non-posix test against the target, even if they also support posix
# allow users to query for the actual type
# aliases
# configuration
# CAUTION: Avoid third-party imports in this module whenever possible.
# assume running from install
# running from source
# Modes are set to allow all users the same level of access.
# This permits files to be used in tests that change users.
# The only exception is write access to directories for the user creating them.
# This avoids having to modify the directory permissions a second time.
# Linux, macOS
# FreeBSD
# Linux, FreeBSD
# ansible may not be in our sys.path
# avoids a symlink to release.py since ansible placement relative to ansible-test may change during delegation
# pylint: disable=import-error
# Load the vendoring code by file path, since ansible may not be in our sys.path.
# Convert warnings into errors, to avoid problems from surfacing later.
# allow the subprocess access to our stdin
# When not running interactively, send subprocess stdout/stderr through a pipe.
# This isolates the stdout/stderr of the subprocess from the current process, and also hides the current TTY from it, if any.
# This prevents subprocesses from sharing stdout/stderr with the current process or each other.
# Doing so allows subprocesses to safely make changes to their file handles, such as making them non-blocking (ssh does this).
# This also maintains consistency between local testing and CI systems, which typically do not provide a TTY.
# To maintain output ordering, a single pipe is used for both stdout/stderr when not capturing output unless the output stream is ORIGINAL.
# the process we're interrupting may have completed a partial line of output
# macOS High Sierra Compatibility
# http://sealiesoftware.com/blog/archive/2017/6/5/Objective-C_and_fork_in_macOS_1013.html
# Example configuration for macOS:
# export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
# macOS Homebrew Compatibility
# https://cryptography.io/en/latest/installation/#building-cryptography-on-macos
# This may also be required to install pyyaml with libyaml support when installed in non-standard locations.
# Example configuration for brew on macOS:
# export LDFLAGS="-L$(brew --prefix openssl)/lib/     -L$(brew --prefix libyaml)/lib/"
# export  CFLAGS="-I$(brew --prefix openssl)/include/ -I$(brew --prefix libyaml)/include/"
# FreeBSD Compatibility
# This is required to include libyaml support in PyYAML.
# The header /usr/local/include/yaml.h isn't in the default include path for the compiler.
# It is included here so that tests can take advantage of it, rather than only ansible-test during managed pip installs.
# If CFLAGS has been set in the environment that value will take precedence due to being an optional var when calling pass_vars.
# default to stderr until config is initialized to avoid early messages going to stdout
# pylint: disable=locally-disabled, invalid-name
# convert color resets in message to desired color
# use the default empty configuration unless one has been provided
# allow cleanup handlers to run when tests fail
# prevent tests from unintentionally passing when hosts are not found
# force tests to provide inventory
# Don't show warnings that CI is running devel
# give TQM worker processes time to report code coverage results
# without this the last task in a play may write no coverage file, an empty file, or an incomplete file
# enabled even when not using code coverage to surface warnings when worker processes do not exit cleanly
# ansible-test specific environment variables require an 'ANSIBLE_TEST_' prefix to distinguish them from ansible-core env vars defined by config
# used by the coverage injector
# standard path injection is not effective for the persistent connection helper, instead the location must be configured
# it only requires the injector for code coverage
# the correct python interpreter is already selected using the sys.executable used to invoke ansible
# force tests to set ansible_python_interpreter in inventory
# provide private copies of collections for integration tests
# provide private copies of plugins for integration tests
# 'shell' is not configurable
# most plugins follow a standard naming convention
# these plugins do not follow the standard naming convention
# only configure directories which exist
# when running from source there is no need for a temporary directory since we already have known entry point scripts
# when not running from source the installed entry points cannot be relied upon
# doing so would require using the interpreter specified by those entry points, which conflicts with using our interpreter and injector
# instead a temporary directory is created which contains only ansible entry points
# symbolic links cannot be used since the files are likely not executable
# not currently used
# when running from source there is no need for a temporary directory to isolate the ansible package
# when not running from source the installed directory is unsafe to add to PYTHONPATH
# doing so would expose many unwanted packages on sys.path
# instead a temporary directory is created which contains only ansible using a symlink
# run_playbook()
# This import should occur as early as possible.
# It must occur before subprocess has been imported anywhere in the current process.
# save target_names for use once we exit the exception handler
# save delegation args for use once we exit the exception handler
# display goes to stderr, this should be on stdout
# Setting a low soft RLIMIT_NOFILE value will improve the performance of subprocess.Popen on Python 2.x when close_fds=True.
# This will affect all Python subprocesses. It will also affect the current Python process if set before subprocess is imported for the first time.
# File used to track the ansible-test test execution timeout.
# This bin symlink map must exactly match the contents of the bin directory.
# It is necessary for payload creation to reconstruct the bin directory when running ansible-test from an installed version of ansible.
# It is also used to construct the injector directory at runtime.
# It is also used to construct entry points when not running ansible-test from source.
# The current process is running on the controller, so consult the controller directly when it is the target.
# The target is not the controller, so consult the remote config for that target.
# The target is a type which either cannot be macOS or for which the OS is unknown.
# There is currently no means for the user to override this for user provided hosts.
# When using sudo on macOS we may encounter permission denied errors when dropping privileges due to inability to access the current working directory.
# To compensate for this we'll perform a `cd /` before running any commands after `sudo` succeeds.
# The `testhost` group is needed to support the `binary_modules_winrm` integration test.
# The test should be updated to remove the need for this.
# The `net` group was added to support platform agnostic testing. It may no longer be needed.
# see: https://github.com/ansible/ansible/pull/34661
# see: https://github.com/ansible/ansible/pull/34707
# information about support containers provisioned by the current ansible-test instance
# SSH is required for publishing ports, as well as modifying the hosts file.
# Initializing the SSH key here makes sure it is available for use after delegation.
# publishing ports is not needed when test hosts are on the docker network
# publishing ports is pointless if already running in a docker container
# the -t option is required to cause systemd in the container to log output to the console
# Only when the network is not the default bridge network.
# podman doesn't remove containers after create if run fails
# Sort networks and use the first available.
# This assumes all containers will have access to the same networks.
# Make sure any additional containers we launch use the same network as the current container we're running in.
# This is needed when ansible-test is running in a container that is not connected to Docker's default network.
# The default docker behavior puts containers on the same network.
# The default podman behavior puts containers on isolated networks which don't allow communication between containers or network disconnect.
# Starting with podman version 2.1.0 rootless containers are able to join networks.
# Starting with podman version 2.2.0 containers can be disconnected from networks.
# To maintain feature parity with docker, detect and use the default "podman" network when running under podman.
# if forwards is set
# else
# primary name + any aliases -- these go into the hosts file and reference the appropriate ip for the origin/control/managed host
# ports available (set if forwards is not set)
# port redirections to create through host_ip -- if not set, no port redirections will be used
# no published access without published ports (ports are only published if needed)
# docker containers, and rootful podman containers should have a container IP address
# published ports for rootless podman containers should be accessible from the host's IP
# no container access without an IP address
# origin does not have network access to the containers
# SSH forwarding required
# hack to avoid exposing the controller container to the controller
# hack to avoid exposing the controller and target containers to the target
# forwarding not in use
# containers are only needed for commands that have targets (hosts or pythons)
# no containers are being used, return an empty database
# provide enough mock data to keep --explain working
# inspect the support container to locate the published ports
# forwards require port redirection through localhost
# Using gzip to compress the archive allows this to work on all POSIX systems we support.
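A gzip-compressed tar built in memory illustrates the portability point above (gzip support is ubiquitous on POSIX systems, unlike e.g. xz). This sketch assumes an in-memory name-to-bytes mapping; it is not the actual payload code.

```python
import io
import tarfile

def create_archive(files):
    # files: dict mapping archive member name -> bytes content
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()
```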
# change detection not enabled, do not filter targets
# change detection not enabled
# act as though change detection not enabled, do not filter targets
# This only works with the default PyCharm debugger.
# Using it with PyCharm's "Python Debug Server" results in hangs in Ansible workers.
# Further investigation is required to understand the cause.
# Although `pydevd_pycharm` can be used to invoke `settrace`, it cannot be used to run the debugger on the command line.
# after delegation
# On-demand debugging only enables debugging if we're running under a debugger, otherwise it's a no-op.
# Assume the user wants all debugging features enabled, since on-demand debugging with no features is pointless.
# detect debug type based on env var
# get_cli_options is the new public API introduced after debugpy 1.8.15.
# We should remove the debugpy.server cli fallback once the new version is
# released.
# address can be None if the debugger is not configured through the CLI as
# we expected.
# status is returned for all submodules in the current git repository relative to the current directory
# when the current directory is not the root of the git repository this can yield relative paths which are not below the current directory
# this can occur when multiple collections are in a git repo and some collections are submodules when others are not
# specifying "." as the path to enumerate would limit results to the current directory, but can cause the git command to fail with the error:
# this can occur when the current directory contains no files tracked by git
# instead we'll filter out the relative paths, since we're only interested in those at or below the current directory
# Configuration specific to modules/module_utils.
# Python versions supported by the controller, combined with Python versions supported by modules/module_utils.
# Mainly used for display purposes and to limit the Python versions used for sanity tests.
# pylint: disable=broad-exception-caught
# all currently implemented methods are idempotent, so retries are unconditionally supported
# Populated by content_config.get_content_config on the origin.
# Serialized and passed to delegated instances to avoid parsing a second time.
# Set by check_controller_python once HostState has been created by prepare_profiles.
# This is here for convenience, to avoid needing to pass HostState to some functions which already have access to EnvironmentConfig.
# allow shell to be used without a valid layout as long as no delegation is required
# delegation should only be interactive when stdin is a TTY and no command was given
# make sure the python interpreter has been initialized before serializing host state
# type: t.Optional[ContainerDatabase]
# Run unit tests unprivileged to prevent stray writes to the source tree.
# Also disconnect from the network once requirements have been installed.
# When delegating, preserve the original separate stdout/stderr streams, but only when the following conditions are met:
# 1) Display output is being sent to stderr. This indicates the output on stdout must be kept separate from stderr.
# 2) The delegation is non-interactive. Interactive mode, which generally uses a TTY, is not compatible with intercepting stdout/stderr.
# The downside to having separate streams is that individual lines of output from each are more likely to appear out-of-order.
# when the controller is delegated, report failures after delegation fails
# make sure directory exists for collections which have no tests
# download errors are fatal if tests succeeded
# surface download failures as a warning here to avoid masking test failures
# Expose the ansible and ansible_test library directories to the Python environment.
# This is only required when delegation is used on the origin host.
# When delegating to a host other than the origin, the locale must be explicitly set.
# Setting of the locale for the origin host is handled by common_environment().
# Not all connections support setting the locale, and for those that do, it isn't guaranteed to work.
# This is needed to make sure the delegated environment is configured for UTF-8 before running Python.
# Propagate the TERM environment variable to the remote host when using the shell command.
# make sure old paths which were renamed or deleted are registered in changes
# old path was replaced with another file
# failed tests involving deleted files should be using line 0 since there is no content remaining
# most containers need this, so the default is required, leaving it to be opt-out for containers which don't need it
# verify properties can be correctly parsed to enums
# The cast is needed because mypy gets confused here and forgets that completion values are TCompletionConfig.
# See: https://man7.org/linux/man-pages/man5/proc.5.html
# See: https://github.com/torvalds/linux/blob/aea23e7c464bfdec04b52cf61edb62030e9e0d0a/fs/proc_namespace.c#L135
# See: https://github.com/torvalds/linux/blob/aea23e7c464bfdec04b52cf61edb62030e9e0d0a/fs/proc_namespace.c#L150
# Max number of open files in a docker container.
# Passed with --ulimit option to the docker run command.
# The value of /proc/*/loginuid when it is not set.
# It is a reserved UID, which is the maximum 32-bit unsigned integer value.
# See: https://access.redhat.com/solutions/25404
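The unset-loginuid convention above can be sketched as a tiny parser. The function name is illustrative, not the actual ansible-test code; the sentinel value is the documented maximum 32-bit unsigned integer.

```python
# Hypothetical sketch of interpreting /proc/*/loginuid as described above.
LOGINUID_NOT_SET = 4294967295  # 2**32 - 1, the reserved "unset" UID

def parse_loginuid(text):
    """Return the loginuid as an int, or None when it was never set."""
    value = int(text.strip())
    return None if value == LOGINUID_NOT_SET else value
```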
# This can occur when a remote docker instance is in use and the instance is not responding, such as when the system is still starting up.
# In that case an error such as the following may be returned:
# error during connect: Get "http://{hostname}:2375/v1.24/info": dial tcp {ip_address}:2375: connect: no route to host
# Some Podman versions always report server version info (verified with 1.8.0 and 1.9.3).
# Others do not unless Podman remote is being used.
# To provide consistency, use the client version if the server version isn't provided.
# See: https://github.com/containers/podman/issues/2671#issuecomment-804382934
# Docker added support for the `--cgroupns` option in version 20.10.
# Both the client and server must support the option to use it.
# See: https://docs.docker.com/engine/release-notes/#20100
# When the container host reports cgroup v1 it is running either cgroup v1 legacy mode or cgroup v2 hybrid mode.
# When the container host reports cgroup v2 it is running under cgroup v2 unified mode.
# See: https://github.com/containers/podman/blob/8356621249e36ed62fc7f35f12d17db9027ff076/libpod/info_linux.go#L52-L56
# See: https://github.com/moby/moby/blob/d082bbcc0557ec667faca81b8b33bec380b75dac/daemon/info_unix.go#L24-L27
# podman
# docker
# Docker 20.10 (API version 1.41) added support for cgroup v2.
# Unfortunately the client or server is too old to report the cgroup version.
# If the server is old, we can infer the cgroup version.
# Otherwise, we'll need to fall back to detection.
# See: https://docs.docker.com/engine/api/version-history/#v141-api-changes
# old docker server with only cgroup v1 support
# Tell the user what versions they have and recommend they upgrade the client.
# Downgrading the server should also work, but we won't mention that.
# Unfortunately cgroup v2 was detected on the Docker server.
# A newer client is needed to support the `--cgroupns` option for use with cgroup v2.
# docker server is using cgroup v1 (or cgroup v2 hybrid)
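The inference above can be sketched as follows. The function name and simplified version handling are assumptions for illustration, not the actual ansible-test implementation.

```python
# Illustrative sketch: infer the cgroup version from the Docker API version
# when the client/server are too old to report it directly.
def infer_docker_cgroup_version(reported_version, server_api_version):
    """Return 1, 2, or None when runtime detection is still required."""
    if reported_version in (1, 2):
        return reported_version  # the host reported it directly
    if server_api_version < (1, 41):
        return 1  # old server: predates cgroup v2 support (added in API 1.41)
    return None  # modern server, but no report: fall back to detection
```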
# Podman will use the highest possible limits, up to its default of 1M.
# See: https://github.com/containers/podman/blob/009afb50b308548eb129bc68e654db6c6ad82e7a/pkg/specgen/generate/oci.go#L39-L58
# Docker limits are less predictable. They could be the system limit or the user's soft limit.
# If Docker is running as root it should be able to use the system limit.
# When Docker reports a limit below the preferred value and the system limit, attempt to use the preferred value, up to the system limit.
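The limit selection described above amounts to a small comparison. This is a minimal sketch under the stated assumptions; the name and signature are illustrative, not the real ansible-test code, which must also probe the reported and system limits.

```python
# Hypothetical sketch of choosing the nofile ulimit to request from the engine.
def select_nofile_limit(reported, preferred, system_limit):
    """Pick the open-files limit, honoring the preferred value when possible."""
    if reported >= preferred:
        return reported  # the engine already allows at least what we want
    # the reported limit is too low: raise toward the preferred value,
    # but never past the system limit
    return min(preferred, system_limit)
```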
# Check the audit error code from attempting to query the container host's audit status.
# The following error codes are known to occur:
# EPERM - Operation not permitted
# This occurs when the root user runs a container but lacks the AUDIT_WRITE capability.
# This will cause patched versions of OpenSSH to disconnect after a login succeeds.
# See: https://src.fedoraproject.org/rpms/openssh/blob/f36/f/openssh-7.6p1-audit.patch
# EBADF - Bad file number
# This occurs when the host doesn't support the audit system (the open_audit call fails).
# This allows SSH logins to succeed despite the failure.
# See: https://github.com/Distrotech/libaudit/blob/4fc64f79c2a7f36e3ab7b943ce33ab5b013a7782/lib/netlink.c#L204-L209
# ECONNREFUSED - Connection refused
# This occurs when a non-root user runs a container without the AUDIT_WRITE capability.
# When sending an audit message, libaudit ignores this error condition.
# See: https://github.com/Distrotech/libaudit/blob/4fc64f79c2a7f36e3ab7b943ce33ab5b013a7782/lib/deprecated.c#L48-L52
# The errno (audit_status) is intentionally not exposed here, as it can vary across systems and architectures.
# Instead, the symbolic name (audit_code) is used, which is resolved inside the container which generated the error.
# See: https://man7.org/linux/man-pages/man3/errno.3.html
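Resolving the symbolic name inside the container, as described above, can be done with the standard `errno` table. The helper name is illustrative; the real probe is more involved.

```python
# Sketch: map a raw audit errno (possibly negative, netlink-style) to its
# symbolic name, so the value is portable across systems and architectures.
import errno

def audit_code_from_errno(err):
    """Return the symbolic errno name, e.g. EPERM, EBADF or ECONNREFUSED."""
    return errno.errorcode.get(abs(err), 'UNKNOWN')
```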
# fmt: skip
# avoid detecting podman as docker
# A trailing indicates the default
# URL value resolution precedence:
# - command line value
# - environment variable CONTAINER_HOST
# - containers.conf
# - unix://run/podman/podman.sock
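The precedence list above can be expressed as a small resolver. This is a sketch only; the function name, parameters, and defaults handling are assumptions, not the actual ansible-test implementation.

```python
# Hypothetical sketch of the Podman connection URL precedence described above.
import os

def resolve_podman_url(cli_value=None, env=None, conf_value=None):
    """Resolve the Podman connection URL using the documented precedence."""
    env = os.environ if env is None else env
    if cli_value:
        return cli_value  # command line value wins
    if env.get('CONTAINER_HOST'):
        return env['CONTAINER_HOST']  # environment variable
    if conf_value:
        return conf_value  # containers.conf
    return 'unix://run/podman/podman.sock'  # built-in default
```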
# NOTE: This method of detecting the container engine and container ID relies on implementation details of each container engine.
# Podman generates /etc/hostname in the makePlatformBindMounts function.
# That function ends up using ContainerRunDirectory to generate a path like: {prefix}/{container_id}/userdata/hostname
# NOTE: The {prefix} portion of the path can vary, so should not be relied upon.
# See: https://github.com/containers/podman/blob/480c7fbf5361f3bd8c1ed81fe4b9910c5c73b186/libpod/container_internal_linux.go#L660-L664
# See: https://github.com/containers/podman/blob/480c7fbf5361f3bd8c1ed81fe4b9910c5c73b186/vendor/github.com/containers/storage/store.go#L3133
# This behavior has existed for ~5 years and was present in Podman version 0.2.
# See: https://github.com/containers/podman/pull/248
# Docker generates /etc/hostname in the BuildHostnameFile function.
# That function ends up using the containerRoot function to generate a path like: {prefix}/{container_id}/hostname
# See: https://github.com/moby/moby/blob/cd8a090e6755bee0bdd54ac8a894b15881787097/container/container_unix.go#L58
# See: https://github.com/moby/moby/blob/92e954a2f05998dc05773b6c64bbe23b188cb3a0/daemon/container.go#L86
# This behavior has existed for at least ~7 years and was present in Docker version 1.0.1.
# See: https://github.com/moby/moby/blob/v1.0.1/daemon/container.go#L351
# See: https://github.com/moby/moby/blob/v1.0.1/daemon/daemon.go#L133
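The detection above can be sketched by matching the source path of the `/etc/hostname` bind mount. The regexes are assumptions based on the path layouts noted in the comments (and the `{prefix}` portion is deliberately not matched, since it can vary).

```python
# Hedged sketch: identify the container engine and container ID from the
# /etc/hostname mount source, per the Podman/Docker path layouts above.
import re

def detect_engine(hostname_mount_source):
    """Return (engine, container_id), or (None, None) if unrecognized."""
    podman = re.search(r'/([a-f0-9]{64})/userdata/hostname$', hostname_mount_source)
    if podman:
        return 'podman', podman.group(1)
    docker = re.search(r'/([a-f0-9]{64})/hostname$', hostname_mount_source)
    if docker:
        return 'docker', docker.group(1)
    return None, None
```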
# Stop the container with SIGKILL immediately, then remove the container.
# Docker supports `--timeout` for stop. The `--time` option was deprecated in v28.0.
# Podman supports `--time` for stop. The `--timeout` option was deprecated in 1.9.0.
# Both Docker and Podman support the `-t` option for stop.
# Both Podman and Docker report an error if the container does not exist.
# The error messages contain the same "no such container" string, differing only in capitalization.
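Because the two engines differ only in capitalization here, a case-insensitive check covers both. The helper name is illustrative:

```python
# Sketch: treat "no such container" errors from either engine as non-fatal.
def is_missing_container_error(stderr):
    """True when the engine reported that the container does not exist."""
    return 'no such container' in stderr.lower()
```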
# primary properties
# nested properties
# functions
# podman remote
# no changes were made to the file
# changes were made to the same file and line
# changes were made to the same file and the line number is unknown
# changes were made to the same file and the line number is different
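The four cases above can be sketched as one predicate. The encoding of the cases (None vs. empty set vs. populated set) is an assumption for illustration, not the actual ansible-test data model.

```python
# Hypothetical sketch of filtering failures by change detection results.
def failure_matches_changes(changed_lines, failure_line):
    """Decide whether a failure on `failure_line` intersects the changes.

    changed_lines: None when the file was unchanged, an empty set when the
    file changed but line numbers are unknown, else the set of changed lines.
    """
    if changed_lines is None:
        return False  # no changes were made to the file
    if not changed_lines:
        return True  # same file, line number unknown: keep the failure
    return failure_line in changed_lines  # keep only matching lines
```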
# Include a leading newline to improve readability on Shippable "Tests" tab.
# Without this, the first line becomes indented.
# only sanity tests have docs links
# Hack to remove ANSI color reset code from SubprocessError messages.
# a virtualenv without a marker is assumed to have been partially created
# get_virtual_python()
# touch the marker to keep track of when the virtualenv was last used
# the requested python version could not be found
# creating a virtual environment using 'venv' when running in a virtual environment created by 'virtualenv' results
# in a copy of the original virtual environment instead of creation of a new one
# avoid this issue by only using "real" python interpreters to invoke 'venv'
# something went wrong, most likely the package maintainer for the Python installation removed ensurepip
# which will prevent creation of a virtual environment without installation of other OS packages
# IMPORTANT: Keep this in sync with the ansible-test.txt requirements file.
# previous versions of ansible-test used "local-{python_version}"
# unit tests, sanity tests and other special cases (localhost only)
# config is in a temporary directory
# results are in the source tree
# cause the 'coverage' module to be found, but not imported or enabled
# Enable code coverage collection on local Python programs (this does not include Ansible modules).
# Used by the injectors to support code coverage.
# Used by the pytest unit test plugin to support code coverage.
# The COVERAGE_FILE variable is also used directly by the 'coverage' module.
# {base}/ansible_collections/{ns}/{col}/*
# */ansible_collections/{ns}/{col}/* (required to pick up AnsiballZ coverage)
# {base}/ansible_collections/{ns}/{col}/tests/output/*
# Verify all supported Python versions have a coverage version.
# Verify all controller Python versions are mapped to the latest coverage version.
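The two verifications above can be sketched as a single check. The function name and the versions used in the example are illustrative, not values from ansible-test.

```python
# Hypothetical sketch of validating the python-to-coverage version mapping.
def check_coverage_version_map(supported, controller, mapping, latest):
    """Return a list of problems found in the mapping (empty when valid)."""
    problems = []
    for version in supported:
        if version not in mapping:
            problems.append('missing coverage version for python ' + version)
    for version in controller:
        if mapping.get(version) != latest:
            problems.append('python ' + version + ' not mapped to coverage ' + latest)
    return problems
```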
# A symlink is faster than the execv wrapper, but isn't guaranteed to provide the correct result.
# There are several scenarios known not to work with symlinks:
# - A virtual environment where the target is a symlink to another directory.
# - A pyenv environment where the target is a shell script that changes behavior based on the program name.
# To avoid issues for these and other scenarios, only an exec wrapper is used.
# sys.executable is used for the shebang to guarantee it is a binary instead of a script
# injected_interpreter could be a script from the system or our own wrapper created for the --venv option
# make sure scripts (including injector.py) find the correct Python interpreter
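An exec wrapper of the kind described above can be sketched as follows. This is a simplified stand-in, not the actual injector: the shebang interpreter plays the role of `sys.executable` (guaranteed to be a binary), while the wrapped interpreter may be a script or venv launcher.

```python
# Hedged sketch: write a tiny Python exec wrapper instead of a symlink.
import os
import stat

def write_exec_wrapper(path, shebang_interpreter, injected_interpreter):
    """Create an executable wrapper that exec's the injected interpreter."""
    code = (
        '#!' + shebang_interpreter + '\n'
        'import os, sys\n'
        'os.execv({0!r}, [{0!r}] + sys.argv[1:])\n'.format(injected_interpreter)
    )
    with open(path, 'w') as file:
        file.write(code)
    # add execute permission for everyone, preserving existing mode bits
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```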
# results are cached only if pyyaml is required or present
# it is assumed that tests will not uninstall/re-install pyyaml -- if they do, those changes will go undetected
# format used to maintain backwards compatibility with previous versions of ansible-test
# minutes
# only tests are subject to the timeout
# turn the label into something suitable for use as a filename
# legacy format
# AWS Lambda on Python 2.7 returns a list of tuples
# AWS Lambda on Python 3.7 returns a list of strings
# RSA is used to maintain compatibility with paramiko and EC2
# newer ssh-keygen PEM output (such as on RHEL 8.1) is not recognized by paramiko
# NOTE: Switching the inventory generation to write JSON would be nice, but is currently not possible due to the use of hard-coded inventory filenames.
# this profile is a controller whenever the `controller` arg was not provided
# only keep targets if this profile is a controller
# args will be populated after the instances are restored
# pylint: disable=unnecessary-pass  # when suspend is True, execution pauses here -- it's also a convenient place to put a breakpoint
# instance has not been provisioned
# SSH to the controller is not required unless remote debugging is enabled
# Podman 4.4.0 updated containers/common to 0.51.0, which removed the SYS_CHROOT capability from the default list.
# This capability is needed by services such as sshd, so is unconditionally added here.
# See: https://github.com/containers/podman/releases/tag/v4.4.0
# See: https://github.com/containers/common/releases/tag/v0.51.0
# See: https://github.com/containers/common/pull/1240
# Without AUDIT_WRITE the following errors may appear in the system logs of a container after attempting to log in using SSH:
# This occurs when running containers as root when the container host provides audit support, but the user lacks the AUDIT_WRITE capability.
# The AUDIT_WRITE capability is provided by docker by default, but not podman.
# See: https://github.com/moby/moby/pull/7179
# OpenSSH Portable requires AUDIT_WRITE when logging in with a TTY if the Linux audit feature was compiled in.
# Containers with the feature enabled will require the AUDIT_WRITE capability when EPERM is returned while accessing the audit system.
# See: https://github.com/openssh/openssh-portable/blob/2dc328023f60212cd29504fc05d849133ae47355/audit-linux.c#L90
# See: https://github.com/openssh/openssh-portable/blob/715c892f0a5295b391ae92c26ef4d6a86ea96e8e/loginrec.c#L476-L478
# Some containers will be running a patched version of OpenSSH which blocks logins when EPERM is received while using the audit system.
# These containers will require the AUDIT_WRITE capability when EPERM is returned while accessing the audit system.
# Since only some containers carry the patch or enable the Linux audit feature in OpenSSH, this capability is enabled on a per-container basis.
# No warning is provided when adding this capability, since there's not really anything the user can do about it.
# Without AUDIT_CONTROL the following errors may appear in the system logs of a container after attempting to log in using SSH:
# Containers configured to use the pam_loginuid module will encounter this error. If the module is required, logins will fail.
# Since most containers will have this configuration, the code to handle this issue is applied to all containers.
# This occurs when the loginuid is set on the container host and doesn't match the user on the container host which is running the container.
# Container hosts which do not use systemd are likely to leave the loginuid unset and thus be unaffected.
# The most common source of a mismatch is the use of sudo to run ansible-test, which changes the uid but cannot change the loginuid.
# This condition typically occurs only under podman, since the loginuid is inherited from the current user.
# See: https://github.com/containers/podman/issues/13012#issuecomment-1034049725
# This condition is detected by querying the loginuid of a container running on the container host.
# When it occurs, a warning is displayed and the AUDIT_CONTROL capability is added to containers to work around the issue.
# The warning serves as notice to the user that their usage of ansible-test is responsible for the additional capability requirement.
# Containers which do not require cgroup do not use systemd.
# Disabling systemd support in Podman will allow these containers to work on hosts without systemd.
# Without this, running a container on a host without systemd results in errors such as (from crun):
# A similar error occurs when using runc:
# A private cgroup namespace limits what is visible in /proc/*/cgroup.
# Mounting a tmpfs overrides the cgroup mount(s) that would otherwise be provided by Podman.
# This helps provide a consistent container environment across various container host configurations.
# Podman hosts providing cgroup v1 will automatically bind mount the systemd hierarchy read-write in the container.
# They will also create a dedicated cgroup v1 systemd hierarchy for the container.
# On hosts with systemd this path is: /sys/fs/cgroup/systemd/libpod_parent/libpod-{container_id}/
# On hosts without systemd this path is: /sys/fs/cgroup/systemd/{container_id}/
# Force Podman to enable systemd support since a command may be used later (to support pre-init diagnostics).
# The host namespace must be used to permit the container to access the cgroup v1 systemd hierarchy created by Podman.
# Mask the host cgroup tmpfs mount to avoid exposing the host cgroup v1 hierarchies (or cgroup v2 hybrid) to the container.
# Podman will provide a cgroup v1 systemd hierarchy on top of this.
# The mount point can be writable or not.
# The reason for the variation is not known.
# The filesystem type can be tmpfs or devtmpfs.
# Podman hosts providing cgroup v2 will give each container a read-write cgroup mount.
# A private cgroup namespace is used to avoid exposing the host cgroup to the container.
# Containers which require cgroup v1 need explicit volume mounts on container hosts not providing that version.
# We must put the container PID 1 into the cgroup v1 systemd hierarchy we create.
# Force Podman to enable systemd support since a command is being provided.
# A private cgroup namespace is required. Using the host cgroup namespace results in errors such as the following (from crun):
# Unlike Docker, Podman ignores a /sys/fs/cgroup tmpfs mount, instead exposing a cgroup v2 mount.
# The exposed volume will be read-write, but the container will have its own private namespace.
# Provide a read-only cgroup v1 systemd hierarchy under which the dedicated ansible-test cgroup will be mounted read-write.
# Without this systemd will fail while attempting to mount the cgroup v1 systemd hierarchy.
# Podman doesn't support using a tmpfs for this. Attempting to do so results in an error (from crun):
# Provide the container access to the cgroup v1 systemd hierarchy created by ansible-test.
# Use the `--cgroupns` option if it is supported.
# Older servers which do not support the option use the host cgroup namespace.
# Older clients which do not support the option cause newer servers to use the host cgroup namespace (cgroup v1 only).
# See: https://github.com/moby/moby/blob/master/api/server/router/container/container_routes.go#L512-L517
# If the host cgroup namespace is used, cgroup information will be visible, but the cgroup mounts will be unavailable due to the tmpfs below.
# Mounting a tmpfs overrides the cgroup mount(s) that would otherwise be provided by Docker.
# Docker hosts providing cgroup v1 will automatically bind mount the systemd hierarchy read-only in the container.
# The cgroup v1 systemd hierarchy path is: /sys/fs/cgroup/systemd/{container_id}/
# The host cgroup namespace must be used.
# Otherwise, /proc/1/cgroup will report "/" for the cgroup path, which is incorrect.
# See: https://github.com/systemd/systemd/issues/19245#issuecomment-815954506
# It is set here to avoid relying on the current Docker configuration.
# A cgroup v1 systemd hierarchy needs to be mounted read-write over the read-only one provided by Docker.
# Alternatives were tested, but were unusable due to various issues:
# Docker hosts providing cgroup v2 will give each container a read-only cgroup mount.
# It must be remounted read-write before systemd starts.
# This must be done in a privileged container, otherwise a "permission denied" error can occur.
# This matches the behavior in Podman 1.7.0 and later, which select cgroupns 'host' mode for cgroup v1 and 'private' mode for cgroup v2.
# See: https://github.com/containers/podman/pull/4374
# See: https://github.com/containers/podman/blob/main/RELEASE_NOTES.md#170
# A private cgroup namespace is used since no access to the host cgroup namespace is required.
# This matches the configuration used for running cgroup v1 containers under Podman.
# Provide a read-write tmpfs filesystem to support additional cgroup mount points.
# Without this Docker will provide a read-only cgroup2 mount instead.
# Provide a read-write tmpfs filesystem to simulate a systemd cgroup v1 hierarchy.
# cgroup probe failed
# Privileged mode is required to create the cgroup directories on some hosts, such as Fedora 36 and RHEL 9.0.
# The mkdir command will fail with "Permission denied" otherwise.
# cgroup create permission denied
# Privileged mode is required to remove the cgroup directories on some hosts, such as Fedora 36 and RHEL 9.0.
# The BusyBox find utility will report "Permission denied" otherwise, although it still exits with a status code of 0.
# Stop early for containers which require cgroup v2 when the container host does not provide it.
# None of the containers included with ansible-test currently use this configuration.
# Support for v2-only was added in preparation for the eventual removal of cgroup v1 support from systemd after EOY 2023.
# See: https://github.com/systemd/systemd/pull/24086
# Containers which use old versions of systemd (earlier than version 226) require cgroup v1 support.
# If the host is a cgroup v2 (unified) host, changes must be made to how the container is run.
# See: https://github.com/systemd/systemd/blob/main/NEWS
# NOTE: The container host must have the cgroup v1 mount already present.
# The following commands can be used to make the mount available:
# See: https://github.com/containers/crun/blob/main/crun.1.md#runocisystemdforce_cgroup_v1path
# cgroup probe reported invalid state
# when the controller is not delegated, report failures immediately
# CentOS 6 uses OpenSSH 5.3, making it incompatible with the default configuration of OpenSSH 8.8 and later clients.
# Since only CentOS 6 is affected, and it is only supported by ansible-core 2.12, support for RSA SHA-1 is simply hard-coded here.
# A substring is used to allow custom containers to work, not just the one provided with ansible-test.
# Containers with cgroup support are assumed to be running systemd.
# These temporary mount points need to be created at run time when using Docker.
# They are automatically provided by Podman, but will be overridden by VOLUME instructions for the container, if they exist.
# If supporting containers with VOLUME instructions is not desired, these options could be limited to use with Docker.
# See: https://github.com/containers/podman/pull/1318
# Previously they were handled by the VOLUME instruction during container image creation.
# However, that approach creates anonymous volumes when running the container, which are then left behind after the container is deleted.
# These options eliminate the need for the VOLUME instruction, and override it if they are present.
# The mount options used are those typically found on Linux systems.
# Of special note is the "exec" option for "/tmp", which is required by ansible-test for path injection of executables using temporary directories.
# some systemd containers require a separate tmpfs here, such as Ubuntu 20.04 and Ubuntu 22.04
# VyOS 1.1.8 uses OpenSSH 5.5, making it incompatible with RSA SHA-256/512 used by Paramiko 2.9 and later.
# IOS CSR 1000V uses an ancient SSH server, making it incompatible with RSA SHA-256/512 used by Paramiko 2.9 and later.
# That means all network platforms currently offered by ansible-core-ci require support for RSA SHA-1, so it is simply hard-coded here.
# NOTE: This option only exists in ansible-core 2.14 and later. For older ansible-core versions, use of Paramiko 2.8.x or earlier is required.
# skip extended checks unless we're running integration tests
# VyOS 1.1.8 uses OpenSSH 5.5, making it incompatible with the default configuration of OpenSSH 8.8 and later clients.
# IOS CSR 1000V uses an ancient SSH server, making it incompatible with the default configuration of OpenSSH 8.8 and later clients.
# a target uses a single python version, but a controller may include additional versions for targets running on the controller
# No "Permission denied" check is performed here.
# Unlike containers, with remote instances, user configuration isn't guaranteed to have been completed before SSH connections are attempted.
# ansible_port is intentionally not set using connection.port -- connection-specific variables can set this instead
# required for scenarios which change the connection plugin to SSH
# required for scenarios which change the connection plugin to require a password
# Begin the search for the source provider at the layout provider root.
# This intentionally ignores version control within subdirectories of the layout root, a condition which was previously an error.
# Doing so allows support for older git versions for which it is difficult to distinguish between a super project and a sub project.
# It also provides a better user experience, since the solution for the user would effectively be the same -- to remove the nested version control.
# FUTURE: If the host is origin, the python path could be validated here.
# pylint: disable=unexpected-keyword-arg  # see: https://github.com/PyCQA/pylint/issues/7434
# The user did not specify a target Python and supported Pythons are unknown, so use the controller Python specified by the user instead.
# We truly want to catch anything that the worker thread might do including call sys.exit.
# Therefore, we catch *everything* (including old-style class exceptions)
# pylint: disable=locally-disabled, bare-except
# type: ignore[return-value]  # requires https://www.python.org/dev/peps/pep-0612/ support
# Pip Abstraction
# Entry Points
# false positive: pylint: disable=no-member
# bootstrap the managed virtual environment, which will have been created without any installed packages
# sanity tests which install no packages skip this step
# most infrastructure packages can be removed from sanity test virtual environments after they've been created
# removing them reduces the size of environments cached in containers
# Integration tests can involve two hosts (controller and target).
# The connection type can be used to disambiguate between the two.
# The interpreter path is not included below.
# It can be seen by running ansible-test with increased verbosity (showing all commands executed).
# Collect
# Support for prefixed files was added to ansible-test in ansible-core 2.12 when split controller/target testing was implemented.
# Previous versions of ansible-test only recognize non-prefixed files.
# If a prefixed file exists (even if empty), it takes precedence over the non-prefixed file.
# listing content constraints first gives them priority over constraints provided by ansible-test
# Support
# NOTE: This same information is needed for building the base-test-container image.
# user has overridden the proxy endpoint, there is nothing to provision
# preserved for future use, no versions currently require this
# proxy not configured
# improve performance by disabling uid/gid lookups
# type: ignore[attr-defined]  # undocumented attribute
# Exclude vendored files from the payload.
# They may not be compatible with the delegated environment.
# If any execute bit is set, treat the file as executable.
# This ensures that sanity tests which check execute bits behave correctly.
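The rule above reduces to a simple mode check. The helper name and the two target modes are illustrative:

```python
# Sketch: if any execute bit is set on the source file, store it as executable.
import stat

def payload_mode(st_mode):
    """Map a source file mode to the mode recorded in the payload."""
    if st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
        return 0o755  # any execute bit set: treat the file as executable
    return 0o644
```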
# reconstruct the bin directory which is not available when running from an ansible install
# exclude unnecessary files when not testing ansible itself
# exclude built-in ansible modules when they are not needed
# include files from the current collection (layout.collection.directory will be added later)
# include files from each collection in the same collection root as the content being tested
# when testing ansible itself the ansible source is the content
# there are no extra files when testing ansible itself
# execute callbacks only on the content paths
# this is done before placing them in the appropriate subdirectory (see below)
# place ansible source files under the 'ansible' directory on the delegated host
# place collection files under the 'ansible_collections/{namespace}/{collection}' directory on the delegated host
# extra files already have the correct destination path
# maintain predictable file order
# Newer OpenSSH clients connecting to older SSH servers must explicitly enable ssh-rsa support.
# OpenSSH 8.8, released on 2021-09-26, deprecated using RSA with the SHA-1 hash algorithm (ssh-rsa).
# OpenSSH 7.2, released on 2016-02-29, added support for using RSA with SHA-256/512 hash algorithms.
# See: https://www.openssh.com/txt/release-8.8
# append the algorithm to the default list, requires OpenSSH 7.0 or later
# Host key signature algorithms that the client wants to use.
# Available options can be found with `ssh -Q HostKeyAlgorithms` or `ssh -Q key` on older clients.
# This option was updated in OpenSSH 7.0, released on 2015-08-11, to support the "+" prefix.
# See: https://www.openssh.com/txt/release-7.0
# Signature algorithms that will be used for public key authentication.
# Available options can be found with `ssh -Q PubkeyAcceptedAlgorithms` or `ssh -Q key` on older clients.
# This option was added in OpenSSH 7.0, released on 2015-08-11.
# This option is an alias for PubkeyAcceptedAlgorithms, which was added in OpenSSH 8.5.
# See: https://www.openssh.com/txt/release-8.5
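The option handling above can be sketched as follows. This is a simplified illustration, not the actual ansible-test code: it uses the pre-8.5 option name `PubkeyAcceptedKeyTypes`, which OpenSSH 8.5 and later treat as an alias for `PubkeyAcceptedAlgorithms`.

```python
# Hedged sketch: build -o options enabling ssh-rsa against legacy servers.
def legacy_rsa_ssh_options(client_version):
    """Return ssh -o arguments enabling RSA SHA-1 (ssh-rsa), when possible."""
    if client_version < (7, 0):
        return []  # the '+' append prefix requires OpenSSH 7.0 or later
    return [
        '-o', 'HostKeyAlgorithms=+ssh-rsa',
        # pre-8.5 name; an alias for PubkeyAcceptedAlgorithms on 8.5+
        '-o', 'PubkeyAcceptedKeyTypes=+ssh-rsa',
    ]
```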
# explain mode
# prevent reading from stdin
# file from which the identity for public key authentication is read
# do not execute a remote command
# port to connect to on the remote host
# user to log in as on the remote machine
# info level required to get messages on stderr indicating the ports assigned to each forward
# if the user has ControlPath set up for every host, it will prevent creation of forwards
# avoid changing the test environment
# controller-only collections run modules/module_utils unit tests as controller-only tests
# normal collections run modules/module_utils unit tests isolated from controller code due to differences in python version requirements
# requested python versions that are remote-only and not supported by this collection
# all selected unit tests are controller tests
# requested python versions that are remote-only
# all selected unit tests are remote tests
# requested python versions that are not supported by remote tests for this collection
# units
# When using pytest-mock, make sure that features introduced in Python 3.8 are available to older Python versions.
# This is done by enabling the mock_use_standalone_module feature, which forces use of mock even when unittest.mock is available.
# Later Python versions have not introduced additional unittest.mock features, so use of mock is not needed as of Python 3.8.
# If future Python versions introduce new unittest.mock features, they will not be available to older Python versions.
# Having the cutoff at Python 3.8 also eases packaging of ansible-core since no supported controller version requires the use of mock.
# NOTE: This only affects use of pytest-mock.
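The behavior described above corresponds to a pytest configuration along these lines (a hedged sketch; the option is provided by pytest-mock):

```ini
# pytest.ini (sketch): force pytest-mock to use the standalone `mock` package
# instead of unittest.mock, so Python 3.8+ mock features work on older Pythons.
[pytest]
mock_use_standalone_module = true
```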
# added in pytest 4.5.0
# avoid permission errors when running from an installed version and using pytest >= 8
# fmt:skip
# pytest exits with status code 5 when all tests are skipped, which isn't an error for our use case
# built-in runtime configuration for the collection loader
# current collection loader required by all python versions supported by the controller
# legacy collection loader required by all python versions not supported by the controller
# only non-collection ansible module tests should have access to ansible built-in modules
# default to --show if no options were given
# NOTE: must match ansible.constants.DOCUMENTABLE_PLUGINS, but with 'module' replaced by 'modules'!
# Plugin types that can have multiple plugins per file, and where filenames do not always correspond to plugin names
# sanity
# make sure content config has been parsed prior to delegation
# version was not requested, skip it silently
# only multi-version sanity tests use target versions, the rest use the controller version
# Deferred checking of Python availability. Done here since it is now known to be required for running the test.
# Earlier checking could cause a spurious warning to be generated for a collection which does not support the Python version.
# If the user specified a Python version, an error will be generated before reaching this point when the Python interpreter is not found.
# multi-version sanity tests handle their own requirements (if any) and use the target python
# single version sanity tests use the controller python
# version neutral sanity tests handle their own requirements (if any)
# include Ansible specific code-smell tests which are not configured to be skipped
# unused errors
# tests which do not accept a target list, or which use all targets, always return all possible errors, so all ignores can be checked
# remove all symlinks unless supported by the test
# exclude symlinked directories unless supported by the test
# include directories containing any of the included files
# remove all directory symlinks unless supported by the test
# drop Test suffix
# use dashes instead of capitalization
# Optional error codes represent errors which spontaneously occur without changes to the content under test, such as those based on the current date.
# Because these errors can be unpredictable they behave differently than normal error codes:
# args is not used here, but derived classes may make use of it
# python_version is not used here, but derived classes may make use of it
# include modules/module_utils within integration test library directories
# special handling for content in ansible-core
# utility code that runs in target environments and requires support for remote-only Python versions
# integration test support modules/module_utils continue to require support for remote-only Python versions
# collection loader requires support for remote-only Python versions
# force all code-smell sanity tests to run with Python UTF-8 Mode enabled
# when a remote-only python version is not supported there are no paths to test
# when a remote-only python version is supported, tests must be applied only to targets that support remote-only Python versions
# SanityCodeSmellTest
# create_sanity_virtualenv()
# use a single virtualenv name for tests which have no requirements
# The path to the virtual environment must be kept short to avoid the 127 character shebang length limit on Linux.
# If the limit is exceeded, generated entry point scripts from pip installed packages will fail with syntax errors.
# The pre-build instructions for pip installs must be omitted, so they do not affect the hash.
# This allows the pre-build commands to be added without breaking sanity venv caching.
# It is safe to omit these from the hash since they only affect packages used during builds, not what is installed in the venv.
# CAUTION: This code must be kept in sync with the code which processes pre-build instructions in:
# parse errors
# file not found errors
# conflicting ignores and skips
# List plugins
# ignore removed module/plugin warnings
# replace unicode smart quotes and ellipsis with ascii versions
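The smart-quote replacement described above can be sketched with a translation table (a minimal sketch; the mapping and function name are assumptions, not the actual implementation):

```python
# Map common Unicode punctuation to ASCII equivalents (illustrative subset).
SMART_PUNCTUATION = {
    '\u2018': "'",    # left single quotation mark
    '\u2019': "'",    # right single quotation mark
    '\u201c': '"',    # left double quotation mark
    '\u201d': '"',    # right double quotation mark
    '\u2026': '...',  # horizontal ellipsis
}

_TABLE = str.maketrans(SMART_PUNCTUATION)

def to_ascii_punctuation(text: str) -> str:
    """Replace Unicode smart quotes and ellipsis with ASCII versions."""
    return text.translate(_TABLE)
```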
# all of ansible-core must pass the import test, not just plugins/modules
# modules/module_utils will be tested using the module context
# everything else will be tested using the plugin context
# only plugins/modules must pass the import test for collections
# Plugins are not supported on remote-only Python versions.
# However, the collection loader is used by the import sanity test and unit tests on remote-only Python versions.
# To support this, it is tested as a plugin, but using a venv which installs no requirements.
# Filtering of paths relevant to the Python version tested has already been performed by filter_remote_targets.
# add the importer to the path so it can be accessed through the coverage injector
# json output is missing file paths in older versions of shellcheck, so we'll use xml instead
# plugin: deprecated (ansible-test)
# plugin: pylint.extensions.mccabe
# expose plugin paths for use in custom plugins
# Set PYLINTHOME to prevent pylint from checking for an obsolete directory, which can result in a test failure due to stderr output.
# See: https://github.com/PyCQA/pylint/blob/e6c6bf5dfd61511d64779f54264b27a368c43100/pylint/constants.py#L148
# Remove plugins that cannot be associated to a single file (test and filter plugins).
# deprecated: description='extractall fallback without filter' python_version='3.11'
# this will be changed at some point in the future
# HACK: ignore special groups 6 and 7
# special test target which uses group 6 -- nothing else should be in that group
# special test targets which use group 7 -- nothing else should be in that group
# coverage
# This is most likely due to using an unsupported version of coverage.
# Input data was previously aggregated and thus uses the standard ansible-test output format for PowerShell coverage.
# This format differs from the more verbose format of raw coverage data from the remote Windows hosts.
# PowerShell unpacks arrays if there's only a single entry, so this is a defensive check for that
# Rewrite the module_utils path from the remote host to match the controller. Ansible 2.6 and earlier.
# Rewrite the path of code running from an integration test temporary directory.
# Rewrite the module_utils path from the remote host to match the controller. Ansible 2.7 and later.
# Rewrite the module path from the remote host to match the controller. Ansible 2.6 and earlier.
# Rewrite the module path from the remote host to match the controller. Ansible 2.7 and later.
# AnsiballZ versions using zipimporter will match the `.zip` portion of the regex.
# AnsiballZ versions not using zipimporter will match the `_[^/]+` portion of the regex.
# Rewrite the path of code running on a remote host or in a docker container as root.
# make sure path is absolute (will be relative if previously exported)
# the collection loader uses implicit namespace packages, so __init__.py does not need to exist on disk
# coverage is still reported for these non-existent files, but warnings are not needed
# coverage erase
# coverage combine
# exported paths must be relative since absolute paths may differ between systems
# always write files to make sure stale files do not exist
# only report files which are non-empty to prevent coverage from reporting errors
# excludes symlinks of regular files to avoid reporting on the same file multiple times
# in the future it would be nice to merge any coverage for symlinks into the real files
# only available to coverage combine
# coverage xml
# coverage html
# coverage.py does not support non-Python files so we just skip the local html report.
# coverage report
# avoid mixing log messages with file output when using `/dev/stdout` for the output file on commands
# this may be worth considering as the default behavior in the future, instead of being dependent on the command or options used
# putting this in a function keeps both pylint and mypy happy
# coverage analyze targets expand
# coverage analyze targets filter
# coverage analyze targets generate
# coverage analyze targets combine
# coverage analyze targets missing
# post-delegation, path is relative to the content root
# When testing a collection, the temporary directory must reside within the collection.
# This is necessary to enable support for the default collection for non-collection content (playbooks and roles).
# type: ignore[arg-type]  # incorrect type stub omits bytes path support
# integration
# create a fresh test directory for each test target
# type: IntegrationEnvironment
# support use of adhoc ansible commands in collections without specifying the fully qualified collection name
# type: t.Optional[str]
# special behavior when the --changed-all-target target is selected based on changes
# act as though the --changed-all-target target was in the include list
# act as though the --changed-all-target target was in the exclude list
# requirements are installed using a callback since the windows-integration and network-integration host status checks depend on them
# integration, windows-integration, network-integration
# temporary solution to keep DCI tests working
# Common temporary directory used on all POSIX hosts that will be created world writeable.
# Enable code coverage collection on Ansible modules (both local and remote).
# Used by the AnsiballZ wrapper generator in lib/ansible/executor/module_common.py to support code coverage.
# Include the command, target and platform marker so the remote host can create a filename with that info.
# The generated AnsiballZ wrapper is responsible for adding '=python-{X.Y}=coverage.{hostname}.{pid}.{id}'
# Common temporary directory used on all Windows hosts that will be created writable by everyone.
# The remote is responsible for adding '={language-version}=coverage.{hostname}.{pid}.{id}'
# values which are not host specific
# Skip only targets which skip all hosts.
# Targets that skip only some hosts will be handled during inventory generation.
# legacy syntax, use above format
# cloud filter already performed prior to delegation
# cloud configuration already established prior to delegation
# platform was initialized, but not used -- such as being skipped due to all tests being disabled
# We may be in a container, so we cannot assume VMWARE_TEST_PLATFORM is reachable;
# we use a try/except instead.
# static
# Default image to run the nios simulator.
# The simulator must be pinned to a specific version
# to guarantee CI passes with the version used.
# Its source resides at:
# https://github.com/ansible/nios-test-container
# control port for flask app in container
# Pebble ACME CA
# (c) 2018, Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
# The image must be pinned to a specific version to guarantee CI passes with the version used.
# These paths are unique to the container image which has an nginx location for /pulp/content to route
# requests to the content backend
# This should probably be false; see https://issues.redhat.com/browse/AAH-2328
# Copyright: (c) 2018, Google Inc.
# check required variables
# Read the password from the container environment.
# This allows the tests to work when reusing an existing container.
# The password is marked as sensitive, since it may differ from the one we generated.
# backwards compatibility for tests intended to work with or without HTTP Tester
# apply work-around for OverlayFS issue
# https://github.com/docker/for-linux/issues/72#issuecomment-319904698
# sometimes the file exists but is not yet valid JSON
# shell
# run the shell locally unless a target was requested
# a target was requested, connect to it over SSH
# HACK: ensure the debugger port visible in the shell is the forwarded port, not the original
# Running a command is assumed to be non-interactive. Only a shell (no command) is interactive.
# If we want to support interactive commands in the future, we'll need an `--interactive` command line option.
# Command stderr output is allowed to mix with our own output, which is all sent to stderr.
# shell required for non-ssh connection
# make sure the python interpreter has been initialized before opening a shell
# 255 indicates SSH itself failed, rather than a command run on the remote host.
# In this case, report a host connection error so additional troubleshooting output is provided.
# keep backspace working
# configure the controller environment
# child of mount.path
# not really a CI provider, so use an empty string for the code
# not yet implemented for local
# tracked files (including unchanged)
# untracked files (except ignored)
# tracked changes (including deletions) committed since the branch was forked
# tracked changes (including deletions) which are staged
# tracked changes (including deletions) which are not staged
# diff of all tracked files from fork point to working copy
# temporary backward compatibility for legacy API keys
# There are several likely causes of this:
# - First run on a new branch.
# - Too many pull requests passed since the last merge run passed.
# the temporary file cannot be deleted because we do not know when the agent has processed it
# placing the file in the agent's temp directory allows it to be picked up when the job is running in a container
# make the agent aware of the public key by declaring it as an attachment
# ex: https://dev.azure.com/{org}/
# ex: GitHub
# HEAD is a merge commit of the PR branch into the target branch
# HEAD^1 is HEAD of the target branch (first parent of merge commit)
# HEAD^2 is HEAD of the PR branch (second parent of merge commit)
# see: https://git-scm.com/docs/gitrevisions
# <commit>...<commit>
# This form is to view the changes on the branch containing and up to the second <commit>, starting at a common ancestor of both <commit>.
# see: https://git-scm.com/docs/git-diff
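The parent selectors and triple-dot range described above can be demonstrated end to end (a hedged demo using a throwaway repository; branch and file names are illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q
base_branch=$(git symbolic-ref --short HEAD)
echo base > base.txt && git add base.txt && git commit -qm base
git checkout -qb pr
echo change > pr.txt && git add pr.txt && git commit -qm pr
git checkout -q "$base_branch"
git merge -q --no-ff --no-edit pr
# HEAD is now the merge commit: HEAD^1 = target branch tip, HEAD^2 = PR branch tip.
# The triple-dot form diffs from the common ancestor up to HEAD^2 (the PR changes).
changed=$(git diff --name-only 'HEAD^1...HEAD^2')
echo "$changed"
```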
# max 5000
# assumes under normal circumstances that later queued jobs are for later commits
# may miss some non-PR reasons, the alternative is to filter the list after receiving it
# most likely due to a private project, which returns an HTTP 203 response with HTML
# virtual packages depend on the modules they contain instead of the reverse
# process imports in reverse so the deepest imports come first
# recurse over module_utils imports while excluding self
# add recursive imports to all path entries which import this module_util
# for purposes of mapping module_utils to paths, treat imports of virtual utils the same as the parent package
# ignore empty __init__.py files
# Python code must be read as bytes to avoid a SyntaxError when the source uses comments to declare the file encoding.
# See: https://www.python.org/dev/peps/pep-0263
# Specifically: If a Unicode string with a coding declaration is passed to compile(), a SyntaxError will be raised.
# Treat this error as a warning so tests can be executed as best as possible.
# The compile test will detect and report this syntax error.
# Ensure the correct relative module is calculated for both not_init.py and __init__.py:
# a/b/not_init.py -> a.b.not_init  # used as-is
# a/b/__init__.py -> a.b           # needs "__init__" part appended to ensure relative imports work
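The mapping above can be sketched as a small helper (the helper name is hypothetical; only the two path shapes shown above are handled):

```python
import os

def path_to_module(path: str) -> str:
    """Convert a relative .py path to a dotted module name.

    a/b/not_init.py -> a.b.not_init   (used as-is)
    a/b/__init__.py -> a.b            (package name; "__init__" is dropped)
    """
    name = path[:-len('.py')].replace(os.path.sep, '.')
    if name.endswith('.__init__'):
        name = name[:-len('.__init__')]
    return name
```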
# implicitly import parent package
# Various parts of the Ansible source tree execute within different modules.
# To support import analysis, each file which uses relative imports must reside under a path defined here.
# The mapping is a tuple consisting of a path pattern to match and a replacement path.
# During analysis, any relative imports not covered here will result in warnings, which can be fixed by adding the appropriate entry.
# This assumes that all files within the collection are executed by Ansible as part of the collection.
# While that will usually be true, there are exceptions which will result in this resolution being incorrect.
# import ansible.module_utils.MODULE[.MODULE]
# import ansible_collections.{ns}.{col}.plugins.module_utils.module_utils.MODULE[.MODULE]
# from ansible.module_utils import MODULE[, MODULE]
# from ansible.module_utils.MODULE[.MODULE] import MODULE[, MODULE]
# from ansible_collections.{ns}.{col}.plugins.module_utils import MODULE[, MODULE]
# from ansible_collections.{ns}.{col}.plugins.module_utils.MODULE[.MODULE] import MODULE[, MODULE]
# duplicate imports are ignored
# invalid imports in tests are ignored
# This error should be detected by unit or integration tests.
# don't count changed paths as additional paths
# not categorized, run all tests
# path triggers no integration tests
# identify targeted integration tests (those which only target a single integration command)
# minimize excessive output from potentially thousands of files which do not trigger tests
# changes require testing all targets, do not filter targets
# populated on first use to reduce overhead when not needed
# run all tests when no result given
# run sanity on path unless result specified otherwise
# test infrastructure, run all tests
# already expanded using get_dependent_paths
# test infrastructure, run all sanity checks
# changes to files which are not unit tests should trigger tests from the nearest parent directory
# entire integration test commands depend on these connection plugins
# other connection plugins have isolated integration and unit tests
# broad impact, run all tests
# These inventory plugins are enabled by default (see INVENTORY_ENABLED).
# Without dedicated integration tests for these we must rely on the incidental coverage from other tests.
# Early classification that needs to occur before common classification belongs here.
# Classification common to both ansible and collections.
# Classification here is specific to ansible, and runs after common classification.
# these tests are not invoked from ansible-test
# force all tests due to risk of breaking changes in new test environment
# run ansible-test self tests
# test infrastructure, run all unit tests
# unknown, will result in fall-back to run all tests
# We don't support relative paths for builtin utils, there's no point.
# type: ignore[attr-defined]  # intentionally using an attribute that does not exist
# for internal use only by ansible-test
# type: ignore[assignment]  # real type private
# Docker support isn't related to ansible-core-ci.
# However, ansible-core-ci support is a reasonable indicator that the user may need the `--dev-*` options.
# These `--dev-*` options are experimental features that may change or be removed without regard for backward compatibility.
# Additionally, they're features that are not likely to be used by most users.
# To avoid confusion, they're hidden from `--help` and tab completion by default, except for ansible-core-ci users.
# lower cost than RHEL and macOS
# windows-integration
# network-integration
# Target sanity tests either have no Python requirements or manage their own virtual environments.
# Thus, there is no point in setting up virtual environments ahead of time for them.
# local/unspecified
# There are several changes in behavior from the legacy implementation when using no delegation (or the `--local` option).
# These changes are due to ansible-test now maintaining consistency between its own Python and that of controller Python subprocesses.
# 1) The `--python-interpreter` option (if different from sys.executable) now affects controller subprocesses and triggers re-execution of ansible-test.
# 2) The `--python` option now triggers re-execution of ansible-test if it differs from sys.version_info.
# place this section before the sections created by the parsers below
# type: PosixConfig
# type: WindowsRemoteConfig
# type: NetworkRemoteConfig
# FUTURE: For OriginConfig or ControllerConfig->OriginConfig the path could be validated with an absolute path parser (file or directory).
# type: ParserBoundary
# kept for backwards compatibility, but no point in advertising since it's the default
# env
# since testcases are module specific, don't autocomplete if more than one
# module is specified
# prevent argcomplete from including unrelated arguments in the completion results
# If one of the completion handlers registered their results, only allow those exact results to be returned.
# This prevents argcomplete from adding results from other completers when they are known to be invalid.
# FUTURE: It may be possible to enhance error handling by surfacing this error message during downstream completion.
# ignore parse errors during completion to avoid breaking downstream completion
# Displaying a warning before the file listing informs the user it is invalid. Bash will redraw the prompt after the list.
# If the file listing is not shown, a warning could be helpful, but would introduce noise on the terminal since the prompt is not redrawn.
# When the current prefix provides no matches, but matches a single file on disk, Bash will perform an incorrect completion.
# Returning multiple invalid matches instead of no matches will prevent Bash from using its own completion logic in this case.
# abuse list mode to enable preservation of the literal results
# argcomplete 3+
# see: https://github.com/kislyuk/argcomplete/commit/bd781cb08512b94966312377186ebc5550f46ae0
# argcomplete <3
# Word breaks have already been handled when generating completions, don't mangle them further.
# This is needed in many cases when returning completion lists which lack the existing completion prefix.
# NOTE: When choosing delimiters, take into account Bash and argcomplete behavior.
# Recommended characters for assignment and/or continuation: `/` `:` `=`
# The recommended assignment_character list is due to how argcomplete handles continuation characters.
# see: https://github.com/kislyuk/argcomplete/blob/5a20d6165fbb4d4d58559378919b05964870cc16/argcomplete/__init__.py#L557-L558
# This class was originally frozen. However, that causes issues when running under Python 3.11.
# See: https://github.com/python/cpython/issues/99856
# include the existing prefix to avoid rewriting the word undergoing completion
# choice is not delimited
# value matched
# choice is delimited
# value and delimiter matched
# NOTE: the minimum is currently fixed at 1
# complete relative names, but avoid suggesting them unless the current name is relative
# unfortunately this will be sorted in reverse of what bash presents ("../ ./" instead of "./ ../")
# type: c.Iterator[os.DirEntry]
# allow absolute paths
# suggest relative paths
# disable automatic detection
# older versions of git require submodule commands to be executed from the top level of the working tree
# git version 2.18.1 (centos8) does not have this restriction
# git version 1.8.3.1 (centos7) does
# fall back to using the top level directory of the working tree only when needed
# this avoids penalizing newer git versions with a potentially slower analysis due to additional submodules
# git reports submodule directories as regular files
# directory symlinks are reported by git as regular files but they need to be treated as directories
# NOTE: directory symlinks are ignored as there should be no directory symlinks for an install
# include directory symlinks since they will not be traversed and would otherwise go undetected
# contains both file paths and symlinked directory paths (ending with os.path.sep)
# contains only file paths
# The following are plugin directories not directly supported by ansible-core, but used in collections
# (https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst#modules--plugins)
# these apply to all test commands
# these apply to specific test commands
# test/units/ will be covered by the warnings for test/ vs tests/
# unit tests only run from one directory so no message is needed
# Empty __init__.py to keep pylint happy.
# type: () -> None
# pylint: disable=global-statement
# Support for pre-build is currently limited to requirements embedded in ansible-test and those used by ansible-core.
# Requirements from ansible-core can be found in the 'test' and 'requirements' directories.
# This feature will probably be extended to support collections after further testing.
# Requirements from collections can be found in the 'tests' directory.
# type: (str) -> None
# type: (str) -> list[PreBuild]
# CAUTION: This code must be kept in sync with the sanity test hashing code in:
# type: list[PreBuild]
# type: () -> t.Dict[str, str]
# When ansible-test installs requirements outside a virtual environment, it does so under one of two conditions:
# 1) The environment is an ephemeral one provisioned by ansible-test.
# 2) The user has provided the `--requirements` option to force installation of requirements.
# It seems reasonable to bypass PEP 668 checks in both of these cases.
# Doing so with an environment variable allows it to work under any version of pip which supports it, without breaking older versions.
# NOTE: pip version 23.0 enforces PEP 668 but does not support the override, in which case upgrading pip is required.
# type: () -> t.List[str]
# type: () -> t.IO[bytes]
# type: ignore[attr-defined]  # pylint: disable=consider-using-with
# type: (str, str) -> None
# type: (t.List[str], int, str, str) -> None
# type: (str, int) -> None
# type: (t.List[str], t.Optional[str], bool, t.Optional[t.Dict[str, str]]) -> None
# type: (str, str, bool) -> None
# type: (str, str) -> t.IO[bytes]
# pylint: disable=consider-using-with,unspecified-encoding
# type: (t.Optional[str | bytes], str) -> t.Optional[bytes]
# type: (t.Optional[str | bytes], str) -> t.Optional[t.Text]
# type: (str | bytes, str) -> bytes
# type: (str | bytes, str) -> t.Text
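The type comments above describe a family of bytes/text converters. A minimal sketch of the two non-optional variants (function names are assumed from the signatures, not confirmed):

```python
from __future__ import annotations

def to_bytes(value: str | bytes, encoding: str = 'utf-8') -> bytes:
    """Return the value as bytes, encoding str input with the given encoding."""
    if isinstance(value, bytes):
        return value
    return value.encode(encoding)

def to_text(value: str | bytes, encoding: str = 'utf-8') -> str:
    """Return the value as text, decoding bytes input with the given encoding."""
    if isinstance(value, str):
        return value
    return value.decode(encoding)
```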
# base-64 encoded JSON payload which will be populated before this script is executed
# custom Fedora patch [1]
# pip 21.1
# pypi-test-container
# [1] https://src.fedoraproject.org/rpms/python-pip/blob/f34/f/emit-a-warning-when-running-with-root-privileges.patch
# Filtering logging output globally avoids having to intercept stdout/stderr.
# It also avoids problems with loss of color output and mixing up the order of stdout/stderr messages.
# virtualenv <20
# venv and virtualenv >= 20
# preload an empty ansible._vendor module to prevent use of any embedded modules during the import test
# noinspection PyCompatibility
# type: (str, str | None) -> types.ModuleType
# allow importing code from collections when testing a collection
# unique ISO date marker matching the one present in yaml_to_json.py
# do not support collection loading when not testing a collection
# do not unload ansible code for collection plugin (not module) tests
# doing so could result in the collection loader being initialized multiple times
# remove all modules under the ansible package, except the preloaded vendor module
# pre-load an empty ansible package to prevent unwanted code in __init__.py from loading
# this more accurately reflects the environment that AnsiballZ runs modules under
# it also avoids issues with imports in the ansible package that are not allowed
# type: (RestrictedModuleLoader, str, list[str], types.ModuleType | None) -> ModuleSpec | None | ImportError
# loader is expected to be Optional[importlib.abc.Loader], but RestrictedModuleLoader does not inherit from importlib.abc.Loader
# type: (RestrictedModuleLoader, str, list[str]) -> RestrictedModuleLoader | None
# ignore modules that are already being loaded
# for non-modules, everything in the ansible namespace is allowed
# intercept loading so we can modify the result
# module_utils and module under test are always allowed
# restrict access to ansible files that exist
# ansible file does not exist, do not restrict access
# restrict access to collections when we are not testing a collection
# restrict access to collection files that exist
# collection file does not exist, do not restrict access
# not a namespace we care about
# type: (RestrictedModuleLoader, ModuleSpec) -> None
# type: (RestrictedModuleLoader, types.ModuleType) -> None | ImportError
# type: ignore[attr-defined] # pylint: disable=protected-access
# type: (RestrictedModuleLoader, str) -> types.ModuleType | ImportError
# stop Ansible module execution during AnsibleModule instantiation
# no-op for _load_params since it may be called before instantiating AnsibleModule
# type: (RestrictedModuleLoader, str) -> types.ModuleType
# cannot be tested because it has already been loaded
# async_wrapper is a non-standard Ansible module (does not use AnsibleModule) so we cannot test the main function
# module instantiated AnsibleModule without raising an exception
# intentionally catch all exceptions, including calls to sys.exit
# avoid line wraps in messages
# save the line number for the file under test
# save the first path and line number in the traceback which is in our source tree
# SyntaxError has better information than the traceback
# pylint: disable=locally-disabled, no-member
# syntax error was reported in the file under test
# syntax error was reported in our source tree
# remove the filename and line number from the message
# either it was extracted above, or it's not really useful information
# empty parts in the namespace are treated as wildcards
# to simplify the comparison, use those empty parts to indicate the positions in the name to be empty as well
# example: name=ansible, allowed_name=ansible.module_utils
# example: name=ansible.module_utils.system.ping, allowed_name=ansible.module_utils
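The wildcard comparison described above can be sketched as follows (a hedged sketch; the function name is assumed, and empty namespace parts match any name part):

```python
def is_name_in_namespace(name: str, namespace: str) -> bool:
    """Check whether a dotted name falls within a dotted namespace.

    Empty parts in the namespace act as wildcards, matching any part of
    the name at the same position. A name shorter than the namespace
    cannot match (e.g. name=ansible vs namespace=ansible.module_utils).
    """
    name_parts = name.split('.')
    namespace_parts = namespace.split('.')
    if len(name_parts) < len(namespace_parts):
        return False
    for name_part, namespace_part in zip(name_parts, namespace_parts):
        if namespace_part and name_part != namespace_part:
            return False
    return True
```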
# additions are checked by our custom PEP 302 loader, so we don't need to check them again here
# when testing collections the relative paths (and names) being tested are within the collection under test
# when testing ansible all files being imported reside under the lib directory
# use buffered IO to simulate StringIO; allows Ansible's stream patching to behave without warnings
# since we're using buffered IO, flush before checking for data
# noinspection PyTypeChecker
# only unload our own code since we know it's native Python
# clear all warnings registries to make all warnings available
# set by ansible-test to a single directory, rather than a list of directories as supported by Ansible itself
# set by ansible-test to the minimum python version supported on the controller
# this monkeypatch to _pytest.pathlib.resolve_package_path fixes PEP420 resolution for collections in pytest >= 6.0.0
# NB: this code should never run under py2
# this monkeypatch to py.path.local.LocalPath.pypkgpath fixes PEP420 resolution for collections in pytest < 6.0.0
# This is based on `_AnsibleCollectionPkgLoaderBase.exec_module` from `ansible/utils/collection_loader/_collection_finder.py`.
# This logic is loosely based on `AssertionRewritingHook._should_rewrite` from pytest.
# See: https://github.com/pytest-dev/pytest/blob/779a87aada33af444f14841a04344016a087669e/src/_pytest/assertion/rewrite.py#L209
# pylint: disable=exec-used
# allow unit tests to import code from collections
# looks like pytest <= 6.0.0, use the old hack against py.path
# force collections unit tests to be loaded with the ansible_collections namespace
# original idea from https://stackoverflow.com/questions/50174130/how-do-i-pytest-a-project-using-pep-420-namespace-packages/50175552#50175552
# skip coverage instances which have no collector, or the collector is not the active collector
# this avoids issues with coverage 7.4.0+ when tests create subprocesses which inherit our overridden os._exit method
# MIT License (see licenses/MIT-license.txt or https://opensource.org/licenses/MIT)
# Based on code originally from:
# https://github.com/pytest-dev/pytest-forked
# https://github.com/pytest-dev/py
# TIP: Disable pytest-xdist when debugging internal errors in this plugin.
# type: (Item, Item | None) -> object | None
# This is needed because pytest-xdist creates an OS thread (using execnet).
# See: https://github.com/pytest-dev/execnet/blob/d6aa1a56773c2e887515d63e50b1d08338cb78a7/execnet/gateway_base.py#L51
# type: (Item, Item | None) -> list[TestReport]
# type: (Item, Item | None, str) -> None
# type: (Item, int, str) -> list[TestReport]
# type: (int) -> int
# This function was added in Python 3.9.
# See: https://docs.python.org/3/library/os.html#os.waitstatus_to_exitcode
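The note above refers to os.waitstatus_to_exitcode(), added in Python 3.9. On older Pythons its behaviour can be approximated with the POSIX wait-status macros; a minimal POSIX-only sketch (not the stdlib implementation):

```python
import os

def waitstatus_to_exitcode(status):
    # POSIX-only sketch of os.waitstatus_to_exitcode() (added in Python 3.9):
    # a normal exit maps to its exit code, a death by signal maps to -signum.
    if os.WIFEXITED(status):
        return os.WEXITSTATUS(status)
    if os.WIFSIGNALED(status):
        return -os.WTERMSIG(status)
    raise ValueError("invalid wait status: %r" % (status,))
```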
# NOTE: This file resides in the _util/target directory to ensure compatibility with all supported Python versions.
# running from source, use that version of ansible-test instead of any version that may already be installed
# auto-shebang
# prevent simple misuse of python.py with -c which does not work with coverage
# type: (str, bool) -> str
# unique ISO date marker matching the one present in importer.py
# See semantic versioning specification (https://semver.org/)
# equivalent to r'(?:[0-9]+|' + ALPHANUMERIC_IDENTIFIER + r')'
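The equivalence stated above can be checked directly. The fragment names below follow the grammar at semver.org and are illustrative, not necessarily the names used in the original file:

```python
import re

# Per the semver grammar, an alphanumeric identifier must contain a non-digit.
ALPHANUMERIC_IDENTIFIER = r'(?:[0-9]*[A-Za-z-][0-9A-Za-z-]*)'
# A build identifier is any run of [0-9A-Za-z-]; splitting it into "all digits"
# vs. "contains a non-digit" gives the equivalent form mentioned in the comment.
BUILD_IDENTIFIER = r'[0-9A-Za-z-]+'
EQUIVALENT_FORM = r'(?:[0-9]+|' + ALPHANUMERIC_IDENTIFIER + r')'
```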
# Copyright (C) 2015 Matt Martz <matt@sivel.net>
# Copyright (C) 2015 Rackspace US, Inc.
# Copyright (C) 2016 Matt Martz <matt@sivel.net>
# Copyright (C) 2016 Rackspace US, Inc.
# Make sure, due to creative calling, that we didn't end up with
# ``self`` in ``args``
# Used to clean up imports later
# Clean up imports to prevent issues with mutable data being used in modules
# It's faster if we limit to items in ansible.module_utils
# But if this causes problems later, we should remove it
# the validate-modules code expects the options spec to be under the argument_spec key, not options as set in PS
# we want to catch all exceptions here, including sys.exit
# Convert positional arguments to kwargs to make sure that all parameters are actually checked
# for ping kwargs == {'argument_spec':{'data':{'type':'str','default':'pong'}}, 'supports_check_mode':True}
# If add_file_common_args is truish, add options from FILE_COMMON_ARGUMENTS when not present.
# This is the only modification to argument_spec done by AnsibleModule itself, and which is
# not caught by setup_env's AnsibleModule replacement
# Copyright: (c) 2015, Matt Martz <matt@sivel.net>
# Copyright: (c) 2015, Rackspace US, Inc.
# Valid DOCUMENTATION.author lines
# Based on Ansibulbot's extract_github_id()
# We do not accept floats for versions in collections
# Roles can also be referenced by semantic markup
# If they are not strings, schema validation will have already complained.
# If it is not a string, schema validation will have already complained
# - or we have a float and we are in ansible/ansible, in which case we're
# also happy.
# Must have been manual intervention, since version_added_collection is only
# added automatically when version_added is present
# Check whether elements is there iff type == 'list'
# FIXME: adjust error code?
# Check whether choices have the correct type
# choices for a list type means that every list element must be one of these choices
# choices are still a list (the keys) but the dict form serves to document each choice.
# Check whether default is only present if required=False, and whether default has correct type
# Note: Types are strings, not literal bools, such as True or False
# in case of type='list' elements define type of individual item in list
# This definition makes sure everything has the correct types/values
# TODO: phase out either plural or singular, 'alt' is exclusive group
# vod stands for 'version or date'; this is the name of the exclusive group
# This definition makes sure that everything we require is there
# Recursive suboptions
# This generates a list of dicts with keys from string_types and suboption_schema as the value
# for example in Python 3: {str: suboption_schema}
# This generates a list of dicts with keys from string_types and option_schema as the value
# for example in Python 3: {str: option_schema}
# type is only required for modules right now
# This generates a list of dicts with keys from string_types and return_contains_schema as the value
# for example in Python 3: {str: return_contains_schema}
# 'returned' is required on top-level
# let schema checks handle
# The plugin loader has a hard-coded exception: when the builtin connection 'paramiko' is
# referenced, it loads 'paramiko_ssh' instead. That's why in this plugin, the name must be
# 'paramiko' and not 'paramiko_ssh'.
# author is optional for plugins (for now)
# Optional
# Things to add soon
####################
# 1) Recursively validate `type: complex` fields
# Possible Future Enhancements
##############################
# 1) Don't allow empty options for choices, aliases, etc
# 2) If type: bool ensure choices isn't set - perhaps use Exclusive
# 3) both version_added should be quoted floats
# Because there is no ast.TryExcept in the Python 3 ast module
# REPLACER_WINDOWS from ansible.executor.module_common is a byte
# string, but we need unicode for Python 3
# If this is a count, type, algorithm, timeout, filename, or name, it is probably not a secret
# 'key' also matches 'publickey', which is generally not secret
# At least one of d1 and d2 cannot be parsed. Simply compare values.
# win_dsc is a dynamic arg spec, the docs won't ever match
# allow string constant expressions (these are docstrings)
# allow __future__ imports (the specific allowed imports are checked by other sanity tests)
# shebang optional, but if present must match
# shebang required
# Optimize out the happy path
# TODO: add column
# TODO: Add line/col
# loop all (for/else + error)
# get module list for each
# check "shape" of each module name
# this will bomb on dictionary format - "don't do that"
# also accept the legacy #POWERSHELL_COMMON replacer signal
# We have three ways of marking deprecated/removed files.  Have to check each one
# individually and then make sure they all agree
# doc legally might not exist
# We are testing a collection
# meta/runtime.yml says this is deprecated
# This module has an alias, which we can tell as it's a symlink
# Rather than checking for `module: $filename` we need to check against the true filename
# This is the normal case
# Check for mismatched deprecation
# DOCUMENTATION.deprecated and meta/runtime.yml disagree
# Both DOCUMENTATION.deprecated and meta/runtime.yml agree that the module is deprecated.
# Make sure they give the same version or date.
# The versions and dates in the module documentation are auto-tagged, so remove the tag
# to make comparison possible and to avoid confusing the user.
# In the future we should error if ANSIBLE_METADATA exists in a collection
# Make sure we operate on strings
# Errors are already covered during schema validation, we only check for option and
# return value references
# already reported during schema validation, except:
# This is already reported by schema checking
# Cannot merge fragments
# Use this to access type checkers later
# Could this be a place where secrets are leaked?
# If it is type: path we know it's not a secret key as it's a file path.
# If it is type: bool it is more likely a flag indicating that something is secret, than an actual secret.
# This should only happen when removed_at_date is not in ISO format. Since schema
# validation already reported this as an error, don't report it a second time.
# This should only happen when deprecated_alias['date'] is not in ISO format. Since
# schema validation already reported this as an error, don't report it a second
# time.
# Record provider options from network modules, for later comparison
# Provider args are being removed from network module top level
# don't validate docs<->arg_spec checks below
# Undocumented arguments will be handled later (search for undocumented-parameter)
# hidden parameter, for example _raw_params
# So they are likely not documented on purpose
# Reporting of this syntax error will be handled by schema validation.
# The option already existed. Make sure version_added didn't change.
# already reported during schema validation
# See if current version => deprecated.removed_in, i.e., should be docs only
# handle deprecation by date
# This happens if the date cannot be parsed. This is already checked by the schema.
# use a bogus "high" line number if no callable exists
# We can only validate PowerShell arg spec if it is using the new Ansible.Basic.AnsibleModule util
# Load meta/runtime.yml if it exists, as it may contain deprecation information
# Calculate the module's name so that relative imports work correctly
# collection is a relative path, example: ansible_collections/my_namespace/my_collection
# filename is a relative path, example: plugins/modules/my_module.py
# filename is a relative path, example: lib/ansible/modules/system/ping.py
# TODO: Better line/column detection
# From Python 3.7 on, there is datetime.date.fromisoformat(). For older versions,
# we have to parse dates manually.
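The fallback described above can be sketched as follows; the helper name is an assumption:

```python
import datetime
import re

def parse_iso_date(value):
    # Prefer datetime.date.fromisoformat(), available from Python 3.7 on.
    try:
        return datetime.date.fromisoformat(value)
    except AttributeError:
        # Older Pythons: parse 'YYYY-MM-DD' manually.
        match = re.fullmatch(r'([0-9]{4})-([0-9]{2})-([0-9]{2})', value)
        if not match:
            raise ValueError('invalid ISO date: %r' % value)
        return datetime.date(*(int(part) for part in match.groups()))
```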
# type: (t.List[str]) -> None
# type: (YamlLintConfig, str, str) -> None
# type: (t.Any, str, int, str) -> t.Dict[str, t.Any]
# type: (str, str) -> t.Dict[str, t.Any]
# type: (str, str) -> t.Optional[ast.Module]
# (c) 2018, Matt Martz <matt@sivel.net>
# kept for backwards compatibility with inline ignores, remove after 2.14 is EOL
# work around Python finalizer issue that sometimes spews this error message to stdout
# see https://github.com/pylint-dev/pylint/issues/9138
# minimum supported Python version provided by ansible-test
# type: t.Optional[t.Tuple[str, ...]]
# type: (str, t.Optional[str]) -> bool
# Additional imports that we may want to start checking:
# boto=UnwantedEntry('boto3', modules_only=True),
# requests=UnwantedEntry('ansible.module_utils.urls', modules_only=True),
# urllib=UnwantedEntry('ansible.module_utils.urls', modules_only=True),
# see https://docs.python.org/2/library/urllib2.html
# see https://docs.python.org/3/library/collections.abc.html
# see https://docs.python.org/3/library/tempfile.html#tempfile.mktemp
# os.chmod resolves as posix.chmod
# type: (astroid.node_classes.Import) -> None
# type: (astroid.node_classes.ImportFrom) -> None
# type: (astroid.node_classes.Attribute) -> None
# this is faster than using type inference and will catch the most common cases
# type: (astroid.node_classes.Call) -> None
# type: (astroid.node_classes.Import, str) -> None
# type: (astroid.node_classes.ImportFrom, str, t.List[str]) -> None
# type: (t.Union[astroid.node_classes.Import, astroid.node_classes.ImportFrom], str) -> None
# only on Display.deprecated, warnings.deprecate and deprecate_value
# only on Display.deprecated and warnings.deprecate
# only on Display.deprecated
# only on deprecate_value
# ignore pre-release for version comparison to catch issues before the final release is cut
# in ansible-core, never provide the deprecator -- if it really is needed, disable the sanity test inline for that line of code
# deprecator cannot be detected, caller must provide deprecator
# deprecation: description='deprecate collection_name/deprecator now that detection is widely available' core_version='2.23'
# When this deprecation triggers, change the return type here to False.
# At that point, callers should be able to omit the collection_name/deprecator in all but a few cases (inline ignores can be used for those cases)
# avoid unnecessary import overhead
# assume collection maintainers know what they're doing if all args are dynamic
# collection_name may be needed for backward compat with 2.18 and earlier, since it is only detected in 2.19 and later
# Unlike collection_name, which is needed for backward compat, deprecator is generally not needed by collections.
# For the very rare cases where this is needed by collections, an inline pylint ignore can be used to silence it.
# datetime.date objects come from YAML dates, these are ok
# make sure we have a string
# Make sure date is correct
# For a tombstone, the removal date must be in the past
# For a deprecation, the removal date must be in the future. Only test this if
# check_deprecation_date is truish, to avoid checks suddenly starting to fail.
# We're storing Ansible's version as a LooseVersion
# For a tombstone, the removal version must not be in the future
# For a deprecation, the removal version must be in the future
# We do not care why it fails, in case we cannot get the version
# just return None to indicate "we don't know".
# Updates to schema MUST also be reflected in the documentation
# ~https://docs.ansible.com/ansible-core/devel/dev_guide/developing_collections.html
# plugin_routing schema
# The first schema validates the input, and the second makes sure no extra keys are specified
# Adjusted schema for modules only
# Adjusted schema for module_utils
# import_redirection schema
# import_redirect doesn't currently support deprecation
# action_groups schema
# top level schema
# All of these are optional
# requires_ansible: In the future we should validate this with SpecifierSet
# Ensure schema is valid
# No way to get line/column numbers
# This is currently disabled, because if it is enabled this test can start failing
# at a random date. For this to be properly activated, we (a) need to be able to return
# codes for this test, and (b) make this error optional.
# basic.py is allowed to import get_exception for backwards compatibility but should not call it anywhere
# see https://unicode.org/faq/utf_bom.html#bom1
# ansible-test entry point must be executable and have a shebang
# cli entry points must be executable and have a shebang
# examples trigger some false positives due to location
# non-standard module library directories
# config must be detected independent of the file list since the file list only contains files under test (changed)
# The sphinx module is a soft dependency for rstcheck, which is used by the changelog linter.
# If sphinx is found it will be loaded by rstcheck, which can affect the results of the test.
# To maintain consistency across environments, loading of sphinx is blocked, since any version (or no version) of sphinx may be present.
# ignore the return code, rely on the output instead
# Need to avoid combining AliasDict's initial attribute write on
# self.aliases, with AttributeDict's __setattr__. Doing so results in
# an infinite loop. Instead, just skip straight to dict() for both
# explicitly (i.e. we override AliasDict.__init__ instead of extending
# it).
# NOTE: could tickle AttributeDict.__init__ instead, in case it ever
# grows one.
# Intercept deepcopy/etc driven access to self.aliases when not
# actually set. (Only a problem for us, due to abovementioned combo of
# Alias and Attribute Dicts, so not solvable in a parent alone.)
# self.aliases keys are aliases, not realkeys. Easy test to see if we
# should flip around to the POV of a realkey when given an alias.
# Ensure the real key shows up in output.
# 'key' is now a realkey, whose aliases are all keys whose value is
# itself. Filter out the original name given.
# Attribute existence test required to not blow up when deepcopy'd
# Single-string targets
# Multi-string targets
# to conform with __getattr__ spec
# Ensure that Python is compiled with OpenSSL 1.1.1+
# If the 'ssl' module isn't available at all that's
# fine, we only care if the module is available.
# noqa: 401
# We can only import Protocol if TYPE_CHECKING because it's a development
# dependency, and is not available at runtime.
# Key type
# Value type
# Default type
# Full runtime checking of the contents of a Mapping is expensive, so for the
# purposes of typechecking, we assume that any Mapping is the right shape.
# Similarly to Mapping, full runtime checking of the contents of an Iterable is
# expensive, so for the purposes of typechecking, we assume that any Iterable
# is the right shape.
# If the key exists, we'll overwrite it, which won't change the
# size of the pool. Because accessing a key should move it to
# the end of the eviction line, we pop it out first.
# When the key does not exist, we insert the value first so that
# evicting works in all cases, including when self._maxsize is 0
# If we didn't evict an existing value, and we've hit our maximum
# size, then we have to evict the least recently used item from
# the beginning of the container.
# After releasing the lock on the pool, dispose of any evicted value.
# 'dict' is insert-ordered
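The eviction rules described above can be condensed into a small sketch. This is an illustrative LRU container assuming CPython 3.7+'s insert-ordered dict, not the real implementation:

```python
import threading

class RecentlyUsedContainer:
    # Illustrative sketch of the eviction rules described above.
    def __init__(self, maxsize, dispose_func=None):
        self._maxsize = maxsize
        self._container = {}  # plain dict works: 'dict' is insert-ordered
        self._lock = threading.RLock()
        self._dispose_func = dispose_func

    def __setitem__(self, key, value):
        evicted_value = None
        with self._lock:
            # Pop an existing key first so re-insertion moves it to the end
            # of the eviction line without changing the pool size.
            existed = self._container.pop(key, None) is not None
            # Insert before evicting so eviction also works when maxsize == 0.
            self._container[key] = value
            if not existed and len(self._container) > self._maxsize:
                # Evict the least recently used item from the beginning.
                oldest_key = next(iter(self._container))
                evicted_value = self._container.pop(oldest_key)
        # Dispose of any evicted value only after releasing the lock.
        if evicted_value is not None and self._dispose_func is not None:
            self._dispose_func(evicted_value)
```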
# avoid a bytes/str comparison by decoding before httplib
# if there are values here, then there is at least the initial
# key/value pair
# THIS IS NOT A TYPESAFE BRANCH
# In this branch, the object has a `keys` attr but is not a Mapping or any of
# the other types indicated in the method signature. We do some stuff with
# it as though it partially implements the Mapping interface, but we're not
# doing that stuff safely AT ALL.
# _DT is unbound; empty list is instance of List[str]
# _DT is bound; default is instance of _DT
# _DT may or may not be bound; vals[1:] is instance of List[str], which
# meets our external interface requirement of `Union[List[str], _DT]`.
# Supports extending a header dict in-place using operator |=
# combining items with add instead of __setitem__
# Supports merging header dicts using operator |
# Supports merging header dicts using operator | when other is on left side
# Python 3.14+
# type: ignore[import-not-found] # noqa: F401
# Python 3.13 and earlier require the 'zstandard' module.
# The package 'zstandard' added the 'eof' property starting
# in v0.18.0 which we require to ensure a complete and
# valid zstd stream was fed into the ZstdDecoder.
# See: https://github.com/urllib3/urllib3/pull/2624
# note: this is a no-op
# According to RFC 9110 section 8.4.1.3, recipients should
# consider x-gzip equivalent to gzip
# Override the request_url if retries has a redirect location.
# Compatibility methods for `io` module
# Compatibility methods for http.client.HTTPResponse
# Compatibility method for http.cookiejar
# Used to return the correct amount of bytes for partial read()s
# if content_length is None
# All data has been read, but `self._fp.read1` in
# CPython 3.12 and older doesn't always close
# `http.client.HTTPResponse`, so we close it here.
# See https://github.com/python/cpython/issues/113199
# Negative numbers and `None` should be treated the same.
# do not waste memory on buffer when not decoding
# TODO make sure to initially read enough data to get past the headers
# For example, the GZ file header takes 10 bytes, we don't want to read
# it one byte at a time
# try and respond without going to the network
# FIXME, this method's type doesn't say returning None is possible
# Truncated at start of next chunk
# type: ignore[union-attr] # Toss the CRLF at the end of the chunk.
# Negative numbers and `None` should be treated the same,
# but httplib handles only `None` correctly.
# percent encode \n \r "
# Default value for `blocksize` - a new parameter introduced to
# http.client.HTTPConnection & http.client.HTTPSConnection in Python 3.7
# Default key_blocksize to _DEFAULT_BLOCKSIZE if missing from the context
# When Retry is initialized, raise_on_redirect is based
# on a redirect boolean value.
# But requests made via a pool manager always set
# redirect to False, and raise_on_redirect always ends
# up being False consequently.
# Here we fix the issue by setting raise_on_redirect to
# a value needed by the pool manager without considering
# the redirect boolean.
# Default blocksize to _DEFAULT_BLOCKSIZE if missing or explicitly
# set to 'None' in the request_context.
# TODO: Remove this in favor of a better
# HTTP request/response lifecycle tracking.
# Instance doesn't store _DEFAULT_TIMEOUT, must be resolved.
# We know *at least* botocore is depending on the order of the
# first 3 parameters so to be safe we only mark the later ones
# as keyword-only to ensure we have space to extend.
# Certificate verification methods
# Trusted CAs
# TLS version
# Client certificates
# The original error is also available as __cause__.
# This property uses 'normalize_host()' (not '_normalize_host()')
# to avoid removing square braces around IPv6 addresses.
# This value is sent to `HTTPConnection.set_tunnel()` if called
# because square braces are required for HTTP CONNECT tunneling.
# This should never happen if you got the conn from self._get_conn
# See the above comment about EAGAIN in Python 3.
# _validate_conn() starts the connection to an HTTPS proxy
# so we need to wrap errors with 'ProxyError' here too.
# If the connection didn't successfully connect to its proxy
# then there
# macOS/Linux
# EPROTOTYPE and ECONNRESET are needed on macOS
# Condition changed later to emit ECONNRESET instead of only EPROTOTYPE.
# Set properties that are used by the pooling layer.
# Is this a closed/new connection that requires CONNECT tunnelling?
# Make the request on the HTTPConnection object
# type: ignore[comparison-overlap]
# TODO revise this, see https://github.com/urllib3/urllib3/issues/2791
#: Whether this proxy connection verified the proxy host's certificate.
# If no proxy is currently connected to the value will be ``None``.
# Taken from python/cpython#100986 which was backported in 3.11.9 and 3.12.3.
# When using connection_from_host, host will come without brackets.
# `_tunnel` copied from 3.11.13 backporting
# https://github.com/python/cpython/commit/0d4026432591d43185568dd31cef6a034c4b9261
# and https://github.com/python/cpython/commit/6fbc61070fda2ffb8889e77e3b24bca4249ab4d1
# type: ignore[str-format]
# Making a single send() call instead of one per line encourages
# the host OS to use a more optimal packet size instead of
# potentially emitting a series of small packets.
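A minimal sketch of the single-send approach described above (the helper name and header formatting are assumptions):

```python
def send_headers(sock, request_line, headers):
    # Build the entire header block and hand it to the OS in one call,
    # instead of one small send() per line, so the host OS can choose a
    # more optimal packet size.
    lines = [request_line]
    lines.extend('%s: %s' % (name, value) for name, value in headers.items())
    sock.sendall(('\r\n'.join(lines) + '\r\n\r\n').encode('latin-1'))
```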
# for sites which EOF without sending a trailer
# `_tunnel` copied from 3.12.11 backporting
# https://github.com/python/cpython/commit/23aef575c7629abcd4aaf028ebd226fb41a4b3c8
# If we're tunneling it means we're connected to our proxy.
# If there's a proxy to be connected to we are fully connected.
# This is set twice (once above and here) due to forwarding proxies
# not using tunnelling.
# Reset all stateful properties so connection
# can be re-used without leaking prior configs.
# `request` method's signature intentionally violates LSP.
# urllib3's API is different from `http.client.HTTPConnection` and the subclassing is only incidental.
# Store these values to be fed into the HTTPResponse
# object later. TODO: Remove this in favor of a real
# HTTP lifecycle mechanism.
# We have to store these before we call .request()
# because sometimes we can still salvage a response
# off the wire even if we aren't able to completely
# send the request body.
# Transform the body into an iterable of sendall()-able chunks
# and detect if an explicit Content-Length is doable.
# When chunked is explicitly set to 'True' we respect that.
# Detect whether a framing mechanism is already in use. If so
# we respect that value, otherwise we pick chunked vs content-length
# depending on the type of 'body'.
# Otherwise we go off the recommendation of 'body_to_chunks()'.
# Now that framing headers are out of the way we send all the other headers.
# If we're given a body we start sending that in chunks.
# Sending empty chunks isn't allowed for TE: chunked
# as it indicates the end of the body.
# Regardless of whether we have a body or not, if we're in
# chunked mode we want to send an explicit empty chunk.
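The chunked-framing rules above are easiest to see in a tiny encoder (illustrative, not the library's code):

```python
def encode_chunk(data):
    # Chunked framing per HTTP/1.1: '<hex size>\r\n<data>\r\n'. The
    # zero-length chunk (b'0\r\n\r\n') marks the end of the body, which is
    # why empty chunks must never be sent mid-stream.
    return b'%x\r\n' % len(data) + data + b'\r\n'
```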
# Raise the same error as http.client.HTTPConnection
# Reset this attribute for being used again.
# Since the connection's timeout value may have been updated
# we need to set the timeout on the socket.
# This is needed here to avoid circular import errors
# Save a reference to the shutdown function before ownership is passed
# to httplib_response
# TODO should we implement it everywhere?
# Get the response from http.client.HTTPConnection
# cert_reqs depends on ssl_context so calculate last.
# Today we don't need to be doing this step before the /actual/ socket
# connection, however in the future we'll need to decide whether to
# create a new socket or re-use an existing "shared" socket as a part
# of the HTTP/2 handshake dance.
# Check if the target origin supports HTTP/2.
# If the value comes back as 'None' it means that the current thread
# is probing for HTTP/2 support. Otherwise, we're waiting for another
# probe to complete, or we get a value right away.
# If HTTP/2 isn't going to be offered it doesn't matter if
# the target supports HTTP/2. Don't want to make a probe.
# Do we need to establish a tunnel?
# We're tunneling to an HTTPS origin so need to do TLS-in-TLS.
# _connect_tls_proxy will verify and assign proxy_is_verified
# Remove trailing '.' from fqdn hostnames to allow certificate validation
# If an error occurs during connection/handshake we may need to release
# our lock so another connection can probe the origin.
# If this connection doesn't know if the origin supports HTTP/2
# we report back to the HTTP/2 probe our result.
# Forwarding proxies can never have a verified target since
# the proxy is the one doing the verification. Should instead
# use a CONNECT tunnel in order to verify the target.
# See: https://github.com/urllib3/urllib3/issues/3267.
# Set `self.proxy_is_verified` unless it's already set while
# establishing a tunnel.
# `_connect_tls_proxy` is called when self._tunnel_host is truthy.
# Features that aren't implemented for proxies yet:
# In some cases, we want to verify hostnames ourselves
# `ssl` can't verify fingerprints or alternate hostnames
# assert_hostname can be set to False to disable hostname checking
# We still support OpenSSL 1.0.2, which prevents us from verifying
# hostnames easily: https://github.com/pyca/pyopenssl/pull/933
# Try to load OS default certs if none are given. We need to do the hasattr() check
# for custom pyOpenSSL SSLContext objects because they don't support
# load_default_certs().
# Ensure that IPv6 addresses are in the proper format and don't have a
# scope ID. Python's SSL module fails to recognize scoped IPv6 addresses
# and interprets them as DNS hostnames.
# Need to signal to our match_hostname whether to use 'commonName' or not.
# If we're using our own constructed SSLContext we explicitly set 'False'
# because PyPy hard-codes 'True' from SSLContext.hostname_checks_common_name.
# Look for the phrase 'wrong version number', if found
# then we should warn the user that we're very sure that
# this proxy is HTTP-only and they have a configuration issue.
# type: ignore[misc, assignment] # noqa: F811
# First check if h2 version is valid
# Import here to avoid circular dependencies.
# TODO: Offer 'http/1.1' as well, but for testing purposes this is handy.
# By the end of this block we know that
# _cache_[values,locks] is available.
# If it's a known value we return right away.
# If the value is unknown, we acquire the lock to signal
# to the requesting thread that the probe is in progress
# or that the current thread needs to return their findings.
# If, by the time we get the lock, the value has been
# updated, we want to return the updated value.
# In case an exception like KeyboardInterrupt is raised here.
# KeyError shouldn't be possible.
# Uses an RLock, so can be locked again from same thread.
# Defensive: not expected in normal usage
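A minimal single-process sketch of the probe/lock handshake described above; the class and method names are assumptions, not the library's API:

```python
import threading

class HTTP2ProbeCache:
    # Sketch: the first caller for an origin acquires that origin's RLock and
    # gets None, meaning "you must probe and report back"; later callers see
    # the cached result, or block until the probing thread reports.
    def __init__(self):
        self._cache_lock = threading.Lock()
        self._values = {}
        self._locks = {}

    def acquire_and_get(self, origin):
        with self._cache_lock:
            # By the end of this block, both caches have an entry for origin.
            lock = self._locks.setdefault(origin, threading.RLock())
            self._values.setdefault(origin, None)
        lock.acquire()  # RLock, so the probing thread can lock again safely
        value = self._values[origin]
        if value is not None:
            lock.release()  # known value: nothing left to report
        return value

    def set_and_release(self, origin, supports_http2):
        # Called by the probing thread to publish its findings.
        self._values[origin] = supports_http2
        self._locks[origin].release()
```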
# type: ignore[import-untyped]
# TODO SKIPPABLE_HEADERS from urllib3 are ignored.
# A lot of upstream code uses capitalized headers.
# Reset headers for the next request.
# file-like objects
# str -> bytes
# TODO: Arbitrary read value.
# TODO this is often present from upstream.
# raise NotImplementedError("`chunked` isn't supported with HTTP/2")
# Reset all our HTTP/2 connection state.
# TODO: This is a woefully incomplete response object, but works for non-streaming.
# TODO: support decoding
# Following CPython, we map HTTP versions to major * 10 + minor integers
# No reason phrase in HTTP/2
# The SSLvX values are the most likely to be missing in the future
# but we check them all just to be sure.
# type: ignore[dict-item]
# If server_hostname is an IP, don't use it for SNI, per RFC6066 Section 3
# Adding `from e` messes with coverage somehow, so it's omitted.
# See #2386.
# override connection classes to use emscripten specific classes
# n.b. mypy complains about the overriding of classes below
# if it isn't ignored
# set by pool class
# ignored because browser decodes always
# body has been preloaded as a string by XmlHttpRequest
# wrap body in IOStream
# don't cache partial content
# read all we can (and cache it)
# definitely finished reading, close response stream
# chunked is handled by browser
# anymore so close it now
# release the connection back to the pool
# If we have read everything from the response stream,
# this is compatible with _base_connection
# for compatibility with RawIOBase
# wait for the worker to send something
# decode the error string
# EOF, free the buffers and return zero
# and free the request
# copy from int32array to python bytes
# make web-worker and data buffer on startup
# Defensive: never happens in ci
# start the request off in the worker
# got response
# header length is in second int of intBuffer
# decode the rest to a JSON string
# this does a copy (the slice) because decode can't work on shared array
# for some silly reason
# get it as an object
# JavaScript AbortController for timeouts
# check if we are in a worker or not
# Defensive: this is always False in browsers that we test in
# timeout isn't available on the main thread - show a warning in console
# if it is set
# general http error
# Node.js returns the whole response (unlike opaqueredirect in browsers),
# so urllib3 can set `redirect: manual` to control redirects itself.
# https://stackoverflow.com/a/78524615
# Call JavaScript fetch (async api, returns a promise)
# Now suspend WebAssembly until we resolve that promise
# or time out.
# get via inputstream
# get a reader from the fetch response
# get directly via arraybuffer
# n.b. this is another async JavaScript call.
# run_sync here uses WebAssembly JavaScript Promise Integration to
# suspend python until the JavaScript promise resolves.
# According to the Node.js documentation, the release name is always "node".
# no fetcher, return None to signify that
# use http.client.HTTPException for consistency with non-emscripten
# ignore these things because we don't
# have control over that stuff
# no scheme / host / port included, make a full url
# all this is basically ignored, as browser handles https
# The browser will automatically verify all requests.
# We have no control over that setting.
# verify that this class implements BaseHTTP(s) connection correctly
# func is sslobj.do_handshake or sslobj.unwrap
# func is sslobj.write, arg1 is data
# func is sslobj.read, arg1 is len, arg2 is buffer
# type: ignore[no-any-return, attr-defined]
# type: str  # type: ignore[attr-defined]
# When sending a request with these methods we aren't expecting
# a body so don't need to set an explicit 'Content-Length: 0'
# The reason we do this in the negative instead of tracking methods
# which 'should' have a body is because unknown methods should be
# treated as if they were 'POST' which *does* expect a body.
# No body, we need to make a recommendation on 'Content-Length'
# based on whether that request method is expected to have
# a body or not.
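A hedged sketch of the recommendation logic described above; the method set and function name are assumptions based on the comments:

```python
# Methods that never expect a request body; anything not listed here is
# treated like 'POST', which *does* expect one. (Illustrative sketch.)
_METHODS_NOT_EXPECTING_BODY = frozenset(
    {'GET', 'HEAD', 'DELETE', 'TRACE', 'OPTIONS', 'CONNECT'}
)

def recommended_content_length(body, method):
    if body is None:
        # With no body, recommend 'Content-Length: 0' only for methods
        # expected to carry one; bodiless methods can omit the header.
        return None if method.upper() in _METHODS_NOT_EXPECTING_BODY else 0
    if isinstance(body, (bytes, bytearray)):
        return len(body)
    return None  # unknown length (file-like objects, iterables, etc.)
```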
# Bytes or strings become bytes
# File-like object, TODO: use seek() and tell() for length?
# Otherwise we need to start checking via duck-typing.
# Check if the body implements the buffer API.
# Check if the body is an iterable
# Since it implements the buffer API, it can be passed directly to socket.sendall()
# type: ignore[no-untyped-def]
# Will return a single character bytestring
# It is modified to remove commonName support.
# The ipaddress module shipped with Python < 3.9 does not support
# scoped IPv6 addresses, so we unconditionally strip the Zone IDs for
# now. Once we drop support for Python 3.8 we can remove this branch.
# Not an IP address (common case)
# We only check 'commonName' if it's enabled and we're not verifying
# an IP address. IP addresses aren't valid within 'commonName'.
# Defensive: for Python < 3.9.3
#: Default maximum backoff time.
# Backward compatibility; assigned outside of the class.
# This value should never be passed to socket.settimeout() so for safety we use a -1.
# socket.settimeout() raises a ValueError for negative values.
# https://foss.heptapod.net/pypy/pypy/-/issues/3129
# As of May 2023, all released versions of LibreSSL fail to reject certificates with
# only common names, see https://github.com/urllib3/urllib3/pull/3024
# Before fixing OpenSSL issue #14579, the SSL_new() API was not copying hostflags
# like X509_CHECK_FLAG_NEVER_CHECK_SUBJECT, which tripped up CPython.
# https://github.com/openssl/openssl/issues/14579
# This was released in OpenSSL 1.1.1l+ (>=0x101010cf)
# Mapping from 'ssl.PROTOCOL_TLSX' to 'TLSVersion.X'
# Do we have ssl at all?
# Needed for Python 3.9 which does not define this
# Setting SSLContext.hostname_checks_common_name = False didn't work before CPython
# 3.9.3, and 3.10 (but OK on PyPy) or OpenSSL 1.1.1l+
# Need to be careful here in case old TLS versions get
# removed in future 'ssl' module implementations.
# This means 'ssl_version' was specified as an exact value.
# Disallow setting 'ssl_version' and 'ssl_minimum|maximum_version'
# to avoid conflicts.
# 'ssl_version' is deprecated and will be removed in the future.
# Use 'ssl_minimum_version' and 'ssl_maximum_version' instead.
# This warning message is pushing users to use 'ssl_minimum_version'
# instead of both min/max. Best practice is to only set the minimum version and
# keep the maximum version at its default value: 'TLSVersion.MAXIMUM_SUPPORTED'
# PROTOCOL_TLS is deprecated in Python 3.10 so we always use PROTOCOL_TLS_CLIENT
# Python <3.10 defaults to 'MINIMUM_SUPPORTED' so explicitly set TLSv1.2 here
# Unless we're given ciphers defer to either system ciphers in
# the case of OpenSSL 1.1.1+ or use our own secure default ciphers.
# In Python 3.13+ ssl.create_default_context() sets VERIFY_X509_PARTIAL_CHAIN
# and VERIFY_X509_STRICT so we do the same
# The attribute is None for OpenSSL <= 1.1.0 or does not exist when using
# an SSLContext created by pyOpenSSL.
# check_hostname=True, verify_mode=NONE/OPTIONAL.
# We always set 'check_hostname=False' for pyOpenSSL so we rely on our own
# 'ssl.match_hostname()' implementation.
# Defensive: for CPython < 3.9.3; for PyPy < 7.3.8
# Note: This branch of code and all the variables in it are only used in tests.
# We should consider deprecating and removing this code.
# try to load OS default certs; works well on Windows.
# use stat to do exists + can write to check without race condition
# noqa: PTH116
# swallow does not exist or other errors
# if os.stat returns but the modification time is zero, that's an invalid os.stat - ignore it
# pragma: win32 cover
# On Windows, this is PermissionError
# pragma: win32 no cover # noqa: RET506
# On linux / macOS, this is IsADirectoryError
#: version of the project as a string
# pragma: win32 no cover # noqa: PLR5501
#: Alias for the lock, which should be used for the current platform.
#: a flag to indicate if the fcntl API is available
# pragma: win32 no cover
# This lock is not owned by this UID
# NotImplementedError
# Do not remove the lockfile:
# open for read and write
# create file if not exists
# truncate file if not empty
# has no access to this lock
# close file first
# file is already locked
# Probably another instance of the application has acquired the file lock.
# noqa: N818
# Properly pickle the exception
# pragma: no cover (py311+)
# pragma: no cover (<py311)
# This is a helper class returned by :meth:`BaseFileLock.acquire`; it wraps the lock to make sure __enter__
# is not called twice when entering the with statement. If we simply returned *self*, the lock would be acquired
# again in BaseFileLock's *__enter__* method, but not released again automatically. issue #37 (memory leak)
# The context is held in a separate class to allow optional use of thread local storage via the
# ThreadLocalFileContext class.
#: The path to the lock file.
#: The default timeout value.
#: The mode for the lock files
#: Whether the lock should be blocking or not
#: The file descriptor for the *_lock_file* as it is returned by the os.open() function, not None when lock held
#: The lock counter is used for implementing the nested locking mechanism.
# It is increased when the lock is acquired, and the lock is only released when this value reaches 0
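The nested-locking counter described above can be illustrated in isolation (hypothetical class, stripped of the real library's file handling and thread safety):

```python
class CountedLock:
    """Illustrative nested-lock counter: acquire() increments the counter,
    release() decrements it, and the underlying resource is given up only
    when the counter reaches zero."""

    def __init__(self):
        self.count = 0
        self.held = False

    def acquire(self):
        self.count += 1       # increment right at the beginning
        self.held = True
        return self

    def release(self):
        if self.count > 0:
            self.count -= 1
        if self.count == 0:
            self.held = False  # only now release the OS-level lock
```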
# noqa: PLR0913
# capture remaining kwargs for subclasses  # noqa: ANN401
# parameters do not match; raise error
# Workaround to make `__init__`'s params optional in subclasses
# E.g. virtualenv changes the signature of the `__init__` method in the `BaseFileLock` class descendant
# (https://github.com/tox-dev/filelock/pull/340)
# Create the context. Note that external code should not work with the context directly and should instead use
# properties of this class.
# Use the default timeout, if no timeout is provided.
# Increment the number right at the beginning. We can still undo it, if something fails.
# noqa: TRY301
# Something did go wrong, so decrement the counter.
#: Whether run in executor
#: The executor
#: The loop
# noqa: D107
# noqa: D105
# type: ignore[override] # noqa: PLR0913
# noqa: N805
# type: ignore[override]  # noqa: FBT001, FBT002
# first check for existence and read-only mode, as the open will mask this case as EEXIST
# open for writing only
# together with the flags above, raises EEXIST if the file specified by filename exists
# truncate the file to zero byte
# re-raise unless expected exception
# lock already exists
# noqa: S101
# the lock file is definitely not None
# the file is already deleted and that's what we want
# Copyright 2009-2017 Wander Lairson Costa
# Copyright 2009-2021 PyUSB contributors
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# descriptor type
# endpoint direction
# endpoint type
# control request type
# control request recipient
# control request direction
# speed type
# Return an array with `length` zeros or raise a `TypeError`.
# The array is retrieved by asking for string descriptor zero, which is
# never the index of a real string. The returned descriptor has bLength
# and bDescriptorType bytes followed by pairs of bytes representing
# little-endian LANGIDs. That is, buf[0] contains the length of the
# returned array, buf[2] is the least-significant byte of the first LANGID
# (if any), buf[3] is the most-significant byte, and in general the LSBs of
# all the LANGIDs are given by buf[2:buf[0]:2] and MSBs by buf[3:buf[0]:2].
# If the length of buf came back odd, something is wrong.
# maximum even length
# should be even, ignore any trailing byte (see #154)
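The buffer layout described above (bLength in buf[0], little-endian LANGID pairs from buf[2]) can be decoded like this sketch (hypothetical helper name):

```python
def parse_langids(buf):
    """Decode the LANGID array from string descriptor zero.

    buf[0] is the descriptor length; LANGIDs are 16-bit little-endian
    values starting at buf[2]. An odd length would leave a dangling
    trailing byte, so we round down to the maximum even length.
    """
    length = buf[0] & ~1          # maximum even length; drop trailing byte
    lsbs = buf[2:length:2]        # least-significant bytes
    msbs = buf[3:length:2]        # most-significant bytes
    return [lo | (hi << 8) for lo, hi in zip(lsbs, msbs)]
```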
# Use Semantic Versioning, http://semver.org/
# We set the log level to avoid delegation to the
# parent log handler (if there is one).
# Thanks to Chris Clark for pointing this out.
# We import all 'legacy' module symbols to provide compatibility
# with applications that use 0.x versions.
# mA
# standard feature selectors from USB 2.0/3.0
# unconfigured state
# cache the index instead of the object to avoid cyclic references
# of the device and Configuration (Device tracks the _ResourceManager,
# which tracks the Configuration, which tracks the Device)
# we need the endpoint address, but the "endpoint" parameter
# can be either an Endpoint object or the endpoint address itself
# Find the interface and endpoint objects which the endpoint address belongs to
# Ignore errors when releasing the interfaces
# When the device is disconnected, the call may fail
# bit 7 is high, bit 4..0 are 0
# here we assume it is an integer
# Thanks to Johannes Stezenbach for pointing out that we need to
# claim the recipient interface
# TODO: check if 'f' is a method or a free function
# decorator for methods calls tracing
# this check is just an optimization to avoid unnecessary string formatting
# All the hacks necessary to assure compatibility across all
# supported versions come here.
# Please note that there is one version check for each
# hack we need to do; this makes maintenance easier... ^^
# we support Python >= 3.9
# race-free?
# this is the "public" finalize method
# else object disappeared
# Note:   Do not pass a (hard) reference to instance to the
# Note 2: When using weakrefs and not calling finalize() in
# Note 3: the _finalize_called attribute is (probably) useless
# Workaround for CPython 3.3 issue#16283 / pyusb #14
# On Apple Silicon, also check in `/opt/homebrew/lib`: homebrew patches
# the Python interpreters it distributes to check that directory, but
# other/stock interpreters don't know about it
# usb.h
# libusb-win32 makes all structures packed, while
# default libusb only does for some structures
# _PackPolicy defines the structure packing according
# to the platform.
# usb_dev_handle *usb_open(struct usb_device *dev);
# int usb_close(usb_dev_handle *dev);
# int usb_get_string(usb_dev_handle *dev,
# int usb_get_string_simple(usb_dev_handle *dev,
# int usb_get_descriptor_by_endpoint(usb_dev_handle *udev,
# int usb_get_descriptor(usb_dev_handle *udev,
# int usb_bulk_write(usb_dev_handle *dev,
# int usb_bulk_read(usb_dev_handle *dev,
# int usb_interrupt_write(usb_dev_handle *dev,
# int usb_interrupt_read(usb_dev_handle *dev,
# int usb_control_msg(usb_dev_handle *dev,
# int usb_set_configuration(usb_dev_handle *dev, int configuration);
# int usb_claim_interface(usb_dev_handle *dev, int interface);
# int usb_release_interface(usb_dev_handle *dev, int interface);
# int usb_set_altinterface(usb_dev_handle *dev, int alternate);
# int usb_resetep(usb_dev_handle *dev, unsigned int ep);
# int usb_clear_halt(usb_dev_handle *dev, unsigned int ep);
# int usb_reset(usb_dev_handle *dev);
# char *usb_strerror(void);
# void usb_set_debug(int level);
# struct usb_device *usb_device(usb_dev_handle *dev);
# struct usb_bus *usb_get_busses(void);
# linux only
# int usb_get_driver_np(usb_dev_handle *dev,
# int usb_detach_kernel_driver_np(usb_dev_handle *dev, int interface);
# libusb-win32 only
# int usb_isochronous_setup_async(usb_dev_handle *dev,
# int usb_bulk_setup_async(usb_dev_handle *dev,
# int usb_interrupt_setup_async(usb_dev_handle *dev,
# int usb_submit_async(void *context, char *bytes, int size)
# int usb_reap_async(void *context, int timeout)
# int usb_reap_async_nocancel(void *context, int timeout)
# int usb_cancel_async(void *context)
# int usb_free_async(void **context)
# No error means that we need to get the error
# message from the return code
# Thanks to Nicholas Wheeler for pointing out the problem...
# Also see issue #2860940
# implementation of libusb 0.1.x backend
# based on the implementation of libusb_kernel_driver_active()
# (see op_kernel_driver_active() in libusb/os/linux_usbfs.c)
# and the fact that usb_get_driver_np() is a wrapper for
# IOCTL_USBFS_GETDRIVER
# 'usbfs' is not considered a [foreign] kernel driver because
# it is what we use to access the device from userspace
# ENODATA means that no kernel driver is attached
# on mac os/darwin we assume all users are running libusb-compat,
# which, in turn, uses libusb_kernel_driver_active()
# this is similar to the Linux implementation, but the generic
# driver is called 'ugen' and usb_get_driver_np() simply returns an
# empty string if no driver is attached (see comments on PR #366)
# 'ugen' is not considered a [foreign] kernel driver because
# exception already logged (if any)
# FIXME: cygwin name is "openusb"?
# int32_t openusb_init(uint32_t flags , openusb_handle_t *handle);
# void openusb_fini(openusb_handle_t handle );
# uint32_t openusb_get_busid_list(openusb_handle_t handle,
# void openusb_free_busid_list(openusb_busid_t * busids);
# uint32_t openusb_get_devids_by_bus(openusb_handle_t handle,
# void openusb_free_devid_list(openusb_devid_t * devids);
# int32_t openusb_open_device(openusb_handle_t handle,
# int32_t openusb_close_device(openusb_dev_handle_t dev);
# int32_t openusb_set_configuration(openusb_dev_handle_t dev,
# int32_t openusb_get_configuration(openusb_dev_handle_t dev,
# int32_t openusb_claim_interface(openusb_dev_handle_t dev,
# int32_t openusb_release_interface(openusb_dev_handle_t dev,
# int32_t openusb_set_altsetting(openusb_dev_handle_t dev,
# int32_t openusb_reset(openusb_dev_handle_t dev);
# int32_t openusb_parse_device_desc(openusb_handle_t handle,
# int32_t openusb_parse_config_desc(openusb_handle_t handle,
# int32_t openusb_parse_interface_desc(openusb_handle_t handle,
# int32_t openusb_parse_endpoint_desc(openusb_handle_t handle,
# const char *openusb_strerror(int32_t error );
# int32_t openusb_ctrl_xfer(openusb_dev_handle_t dev,
# int32_t openusb_intr_xfer(openusb_dev_handle_t dev,
# int32_t openusb_bulk_xfer(openusb_dev_handle_t dev,
# int32_t openusb_isoc_xfer(openusb_dev_handle_t dev,
# TODO: implement isochronous
# libusb.h
# transfer_type codes
# Control endpoint
# Isochronous endpoint
# Bulk endpoint
# Interrupt endpoint
# return codes
# map return codes to strings
# map return code to errno values
# Transfer status codes:
# Note that this does not indicate
# that the entire amount of requested data was transferred.
# map transfer codes to errno codes
# Isochronous packet descriptor.
# enum libusb_transfer_status
# Windows backend uses stdcall calling convention
# On FreeBSD 8/9, libusb 1.0 and libusb 0.1 are in the same shared
# object libusb.so, so even if we found the libusb library name, we must
# ensure it is the 1.0 version. We just try to get a symbol from the 1.0 version
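The disambiguation described above amounts to probing for a symbol that exists only in the 1.0 API. A sketch (hypothetical helper; with ctypes, accessing a missing symbol on a loaded library raises AttributeError):

```python
def looks_like_libusb10(lib) -> bool:
    """Return True if *lib* exposes a libusb 1.0-only symbol.

    Both libusb 0.1 and 1.0 may live in a shared object named libusb.so,
    so the library name alone is not enough; probe for libusb_init,
    which only the 1.0 API provides.
    """
    try:
        lib.libusb_init  # raises AttributeError if the symbol is absent
    except AttributeError:
        return False
    return True
```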
# void libusb_set_debug (libusb_context *ctx, int level)
# int libusb_init (libusb_context **context)
# void libusb_exit (struct libusb_context *ctx)
# ssize_t libusb_get_device_list (libusb_context *ctx,
# libusb_device *libusb_get_parent (libusb_device *dev)
# void libusb_free_device_list (libusb_device **list,
# libusb_device *libusb_ref_device (libusb_device *dev)
# void libusb_unref_device(libusb_device *dev)
# int libusb_open(libusb_device *dev, libusb_device_handle **handle)
# void libusb_close(libusb_device_handle *dev_handle)
# int libusb_set_configuration(libusb_device_handle *dev,
# int libusb_get_configuration(libusb_device_handle *dev,
# int libusb_claim_interface(libusb_device_handle *dev,
# int libusb_release_interface(libusb_device_handle *dev,
# int libusb_set_interface_alt_setting(libusb_device_handle *dev,
# int libusb_reset_device (libusb_device_handle *dev)
# int libusb_kernel_driver_active(libusb_device_handle *dev,
# int libusb_detach_kernel_driver(libusb_device_handle *dev,
# int libusb_attach_kernel_driver(libusb_device_handle *dev,
# int libusb_get_device_descriptor(
# int libusb_get_config_descriptor(
# void  libusb_free_config_descriptor(
# int libusb_get_string_descriptor_ascii(libusb_device_handle *dev,
# int libusb_control_transfer(libusb_device_handle *dev_handle,
# int libusb_bulk_transfer(
# int libusb_interrupt_transfer(
# libusb_transfer* libusb_alloc_transfer(int iso_packets);
# void libusb_free_transfer(struct libusb_transfer *transfer)
# int libusb_submit_transfer(struct libusb_transfer *transfer);
# const char *libusb_strerror(enum libusb_error errcode)
# int libusb_clear_halt(libusb_device_handle *dev, unsigned char endpoint)
# void libusb_set_iso_packet_lengths(
# int libusb_get_max_iso_packet_size(libusb_device* dev,
# void libusb_fill_iso_transfer(
# uint8_t libusb_get_bus_number(libusb_device *dev)
# uint8_t libusb_get_device_address(libusb_device *dev)
# uint8_t libusb_get_device_speed(libusb_device *dev)
# uint8_t libusb_get_port_number(libusb_device *dev)
# int libusb_get_port_numbers(libusb_device *dev,
# int libusb_handle_events(libusb_context *ctx);
# check a libusb function call
# wrap a device
# wrap a descriptor and keep a reference to another object
# Thanks to Thomas Reitmayr.
# wrap a configuration descriptor
# iterator for libusb devices
# When the device is disconnected, this list may
# return with length 0
# implementation of libusb 1.0 backend
# Only available in newer versions of libusb
# USB 3.0 maximum depth is 7
# do not assume LIBUSB_ERROR_TIMEOUT means no I/O.
# noqa: F401,F403
# in case we are installed
# in case we are running from source
# in case we are in an .egg
# }
# When using multi_line input mode the buffer is not handled on Enter (a new line is
# inserted instead), so we force the handling if we're not in a completion or
# history search, and one of several conditions is True
# map Pygments tokens (ptk 1.0) to class names (ptk 2.0).
# best guess
# reverse dict for cli_helpers, because they still expect Pygments tokens.
# prompt-toolkit used pygments tokens for styling before, switched to style
# names in 2.0. Convert old token types to new style names, for backwards compatibility.
# treat as pygments token (1.0)
# we don't want to support tokens anymore
# treat as prompt style name (2.0). See default style names here:
# https://github.com/prompt-toolkit/python-prompt-toolkit/blob/master/src/prompt_toolkit/styles/defaults.py
# TODO: cli helpers will have to switch to ptk.Style
# do nothing
# Create a new pgexecute method to populate the completions.
# If callbacks is a single function then push it into a list.
# Break out of the while loop if the for loop finishes naturally
# without hitting the break statement.
# Start over the refresh from the beginning if the for loop hit the
# break statement.
# Load history into pgcompleter so it can learn user preferences
# close connection established with pgexecute.copy()
# avoid config merges when possible. For writing, we need an unmerged config instance.
# see https://github.com/dbcli/pgcli/issues/1240 and https://github.com/DiffSK/configobj/issues/171
# explain query results should always contain 1 row each
# A complete command is an sql statement that ends with a semicolon, unless
# there's an open quote surrounding it, as is common when writing a
# CREATE FUNCTION command
# Special Command
# Ended with \e which should launch the editor
# A complete SQL command
# Exit doesn't need semi-colon
# Quit doesn't need semi-colon
# To all the vim fans out there
# Just a plain enter without any text
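The cases above (semicolon-terminated SQL, open quotes, `\e`, exit/quit, plain enter) can be sketched roughly like this. This is a deliberately naive version: pgcli's real check uses sqlparse to detect open quotes and comments properly:

```python
def is_complete(sql: str) -> bool:
    """Rough completeness test for an interactive SQL buffer (sketch)."""
    text = sql.strip()
    if not text:                                # plain enter, no text
        return True
    if text in ("exit", "quit", r"\q", ":q"):   # no semicolon needed
        return True
    if text.endswith(r"\e"):                    # launch the editor
        return True
    # naive open-quote check: an odd number of single quotes means a
    # trailing semicolon is inside a string literal
    if text.count("'") % 2 == 1:
        return False
    return text.endswith(";")
```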
# we added this function to strip leading comments
# because sqlparse didn't handle them well. It won't be needed if sqlparse
# handles parsing of this situation better
# Regular expression pattern to match comments
# Find and remove all comments from the beginning
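The comment-stripping step could be sketched as follows (hypothetical helper; the regex pattern is an assumption, not pgcli's exact one). Returning the removed comments separately lets them be re-added for log files:

```python
import re

def strip_leading_comments(sql: str):
    """Remove `--` and `/* */` comments from the beginning of *sql*.

    Returns (comments, remainder) so the comments can be re-attached
    later, e.g. when logging the executed statements.
    """
    pattern = r"\s*(?:--[^\n]*\n|/\*.*?\*/\s*)"
    comments = []
    m = re.match(pattern, sql, re.DOTALL)
    while m:
        comments.append(m.group(0))
        sql = sql[m.end():]
        m = re.match(pattern, sql, re.DOTALL)
    return "".join(comments), sql
```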
# pg3: I don't know what this is
# def mogrify(self, query, params):
# The boolean argument to the current_schemas function indicates whether
# implicit schemas, e.g. pg_catalog, should be included
# When we connect using a DSN, we don't really know what db,
# user, etc. we connected to. Let's read it.
# Note: moved this after setting autocommit because of #664.
# _set_wait_callback(self.is_virtual_database())
# Remove spaces and EOL
# Empty string
# sql parse doesn't split on a comment first + special
# so we're going to do it
# could skip if statement doesn't match ^-- or ^/*
# now re-add the beginning comments if there are any, so that they show up in
# log files etc when running these commands
# run each sql query
# Remove spaces, eol and semi-colons.
# \G is treated specially since we have to set the expanded output.
# First try to run each query as special
# edge case when connection is already closed, but we
# don't need cursor for special_cmd.arg_type == NO_QUERY.
# See https://github.com/dbcli/pgcli/issues/1014.
# this would close connection. We should reconnect.
# e.g. execute_from_file already appends these
# Not a special command, so execute as normal sql
# see https://github.com/psycopg/psycopg/issues/303
# special case "show help" in pgbouncer
# cur.description will be None for operations that do not return
# rows.
# 2: relkind, v or m (materialized)
# 4: reloptions, null
# 5: checkoption: local or cascaded
# sql = cur.mogrify(self.tables_query, kinds)
# _logger.debug("Tables Query. sql: %r", sql)
# sql = cur.mogrify(columns_query, kinds)
# _logger.debug("Columns Query. sql: %r", sql)
# Ref: https://stackoverflow.com/questions/30425105/filter-special-chars-such-as-color-codes-from-shell-output
# Query tuples are used for maintaining history
# The entire text of the command
# True if all subqueries were successful
# Time elapsed executing the query and formatting results
# Time elapsed executing the query
# True if any subquery executed create/alter/drop
# True if any subquery changed the database
# True if any subquery changed the search path
# True if any subquery executed insert/update/delete
# True if the query is a special command
# Set the default set of recommended options for less, if they are not already set.
# They are ignored if the pager is different from less.
# Load config.
# at this point, config should be written to pgclirc_file if it did not exist. Read it.
# make sure to use self.config_writer, not self.config
# if not specified, set to DEFAULT_MAX_FIELD_WIDTH
# if specified but empty, set to None to disable truncation
# ellipsis will take at least 3 symbols, so this can't be less than 3 if specified and > 0
# Initialize completer
# ensure writeable
# formatter setup
# Get all the parameters in pattern, handling double quotes if any.
# Now removing quotes.
# Disable logging if value is NONE by switching to a no-op handler.
# Set log level to a high value so it doesn't even waste cycles getting called.
# Connect to the database.
# If password prompt is not forced but no password is provided, try
# getting it from environment variable.
# Prompt for a password immediately if requested via the -W flag. This
# avoids wasting time trying to connect to the database and catching a
# no-password exception.
# If we successfully parsed a password from a URI, there's no need to
# prompt for it, even with the -W flag
# Prompt for a password after 1st attempt to connect
# fails. Don't prompt if the -w flag is supplied
# We add the protocol as urlparse doesn't find it by itself
# Hack: sshtunnel adds a console handler to the logger, so we revert handlers.
# We can remove this when https://github.com/pahaz/sshtunnel/pull/250 is merged.
# Attempt to connect to the database.
# Note that passwd may be empty on the first attempt. If connection
# fails because of a missing or incorrect password, but we're allowed to
# prompt for a password (no -w flag), prompt for a passwd and try again.
# Connecting to a database could fail.
# \ev or \ef
# Something went wrong. Raise an exception and bail.
# Restart connection to the database
# extra newline
# Log to file in addition to normal output
# timestamp log
# Only add humanized time display if > 1 second
# Check if we need to update completions, in order of most
# to least drastic changes
# Print newline if user aborts with `^C`, otherwise
# pgcli's prompt will be printed on the same line
# (just after the confirmation prompt).
# do not quit
# quit only if query is successful
# Allow PGCompleter to learn user's preferred keywords, etc.
# Initialize default metaquery in case execution fails
# If we run \watch without a command, apply it to the last query run.
# If there's a command to \watch, run it in a loop.
# Otherwise, execute it as a regular command.
# Highlight matching brackets while editing.
# Render \t as 4 spaces instead of "^I"
# N.b. pgcli's multi-line mode controls submit-on-Enter (which
# overrides the default behaviour of prompt_toolkit) and is
# distinct from prompt_toolkit's multiline mode here, which
# controls layout/display of the prompt/buffer
# set query to formatter in order to parse table name
# CREATE, ALTER, DROP, etc
# INSERT, DELETE, etc
# Run the query.
# Keep track of whether any of the queries are mutating or changing
# the database
# Don't get stuck in a retry loop
# After refreshing, redraw the CLI to clear the statusbar
# "Refreshing completions..." indicator
# Just swap over the entire prioritizer
# Swap over the entire prioritizer, but clear name priorities,
# leaving learned keyword priorities alone
# Leave the new prioritizer as is
# should be before replacing \\d
# The last 4 lines are reserved for the pgcli menu and padding
# Default host is '' so psycopg can default to either localhost or unix socket
# Migrate the config file from old location.
# Choose which ever one has a valid value.
# work as psql: when database is given as option and argument use the argument as user
# because the options --ping, --list and -l are not supposed to take a db name
# e.args[0] is the pre-formatted message which includes a list
# of conflicting sources
# redshift does not return rowcount as part of status.
# See https://github.com/dbcli/pgcli/issues/1320
# The default CSV dialect is "excel", which does not handle newline values correctly.
# Nevertheless, we want to keep using "excel" on Windows since it uses '\r\n'
# as the line terminator
# https://github.com/dbcli/pgcli/issues/1102
# Only print the title if it's not None.
# Only print the status if it's not None
# try ~/.pg_service.conf (if that exists)
# nothing to do
# REPLACE_SINGLE is available in prompt_toolkit >= 3.0.6
# first, load the sql magic if it isn't already loaded
# register our own magic
# "get" was renamed to "set" in ipython-sql:
# https://github.com/catherinedevlin/ipython-sql/commit/f4283c65aaf68f961e84019e8b939e4a3c501d43
# a new positional argument was added to Connection.set in version 0.4.0 of ipython-sql
# A corresponding pgcli object already exists
# I can't figure out how to get the underlying psycopg2 connection
# from the sqlalchemy connection, so just grab the url and make a
# new connection
# For convenience, print the connection alias
# keyring will be loaded later
# Try best to load keyring (issue #1041).
# ImportError for Python 2, ModuleNotFoundError for Python 3
# Find password from store
# Used to strip trailing '::some_type' from default-value expressions
# keywords_tree: A dict mapping keywords to well known following keywords.
# e.g. 'CREATE': ['TABLE', 'USER', ...],
# schemata is a list of schema names
# dbmetadata.values() are the 'tables' and 'functions' dicts
# casing should be a dict {lowercasename:PreferredCasingName}
# dbmetadata['tables']['schema_name']['table_name'] should be an
# OrderedDict {column_name:ColumnMetaData}.
# func_data is a list of function metadata namedtuples
# dbmetadata['schema_name']['functions']['function_name'] should return
# the function metadata namedtuple for the corresponding function
# We keep a cache of {function_usage:{function_metadata: function_arg_list_string}}
# This is used when suggesting functions, to avoid the latency that would result
# from recalculating the arg lists each time we suggest functions (in large DBs)
# fk_data is a list of ForeignKey namedtuples, with fields
# parentschema, childschema, parenttable, childtable,
# parentcolumns, childcolumns
# These are added as a list of ForeignKey namedtuples to the
# ColumnMetadata namedtuple for both the child and parent
# dbmetadata['datatypes'][schema_name][type_name] should store type
# metadata, such as composite type field names. Currently, we're not
# storing any metadata beyond typename, so just store None
# During completer initialization, only load keyword preferences,
# not names
# text starts with double quote; user is manually escaping a name
# Match on everything that follows the double-quote. Note that
# text_len is calculated before removing the quote, so the
# Completion.position value is correct
# Construct a `_match` function for either fuzzy or non-fuzzy matching
# The match function returns a 2-tuple used for sorting the matches,
# or None if the item doesn't match
# Note: higher priority values mean more important, so use negative
# signs to flip the direction of the tuple
# Exact match of first word in suggestion
# This is to get exact alias matches to the top
# E.g. for input `e`, 'Entries E' should be on top
# (before e.g. `EndUsers EU`)
# Use negative infinity to force keywords to sort after all
# fuzzy matches
# Nones need to be removed to avoid max() crashing in Python 3
# Truncate meta-text to 50 characters, if necessary
# Lexical order of items in the collection, used for
# tiebreaking items with the same match group length and start
# position. Since we use *higher* priority to mean "more
# important," we use -ord(c) to prioritize "aa" > "ab" and end
# with 1 to prioritize shorter strings (ie "user" > "users").
# We first do a case-insensitive sort and then a
# case-sensitive one as a tie breaker.
# We also use the unescape_name to make sure quoted names have
# the same priority as unquoted names.
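The `-ord(c)` tiebreak described above can be shown in isolation (hypothetical helper name). With higher meaning more important, negating each code point makes "aa" outrank "ab", and the trailing 1 makes a string outrank any of its extensions:

```python
def lexical_priority(name: str):
    """Sort-key tuple where lexicographically earlier and shorter
    strings get *higher* priority."""
    return tuple(-ord(c) for c in name) + (1,)
```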
# If smart_completion is off then match any word that starts with
# 'word_before_cursor'.
# Map suggestion type to method
# e.g. 'table' -> self.get_table_matches
# Sort matches so highest priorities are first
# require_last_table is used for 'tbl1 JOIN tbl2 USING (...' which should
# suggest only columns that appear in the last table and one more
# User typed x.*; replicate "x." for all columns except the
# first, which gets the original (as we only replace the "*")
# Set up some data structures for efficient access
# Iterate over FKs in existing tables to find potential joins
# Schema-qualify if (1) new table in same schema as old, and old
# is schema-qualified, or (2) new in other schema, except public
# The user typed an incorrect table qualifier
# Turns [(a, b), (a, c)] into {a: [b, c]}
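That transformation is a standard grouping idiom; a minimal sketch:

```python
from collections import defaultdict

def pairs_to_multidict(pairs):
    """Turns [(a, b), (a, c)] into {a: [b, c]}."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return dict(grouped)
```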
# Tables that are closer to the cursor get higher prio
# Map (schema, table, col) to tables
# For each fk from the left table, generate a join condition if
# the other table is also in the scope
# For name matching, use a {(colname, coltype): TableReference} dict
# Find all name-match join conditions
# Only suggest functions allowed in FROM clause
# Function overloading means we may have multiple functions of the same
# name at this point, so keep unique names only
# also suggest hardcoded functions using startswith matching
# Unless we're sure the user really wants them, hide schema names
# starting with pg_, which are mostly temporary schemas
# Remove trailing ::(schema.)type
# Unless we're sure the user really wants them, don't suggest the
# pg_catalog tables that are implicitly on the search path
# Get well known following keywords for the last token. If any, narrow
# candidates to this list.
# suggest custom datatypes
# Also suggest hardcoded types
# Local tables should shadow database tables
# Return column names from a set-returning function
# Get an array of FunctionMetadata objects
# func is a FunctionMetadata object
# Because of multiple dispatch, we can have multiple functions
# with the same name, which is why `for meta in metas` is necessary
# in the comprehensions below
# Surround the keyword with word boundaries and replace interior whitespace
# with whitespace wildcards
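Surrounding a keyword with word boundaries and replacing interior whitespace with whitespace wildcards looks roughly like this (the helper name `keyword_regex` is an assumption):

```python
import re

def keyword_regex(keyword: str):
    # \b...\b prevents matching inside longer words; \s+ lets any run
    # of whitespace separate the words of a multi-word keyword.
    pattern = r"\b" + r"\s+".join(map(re.escape, keyword.split())) + r"\b"
    return re.compile(pattern, re.IGNORECASE)
```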
# Count keywords. Can't rely on sqlparse for this, because it's
# database agnostic
# FromClauseItem is a table/view/function used in the FROM clause
# `table_refs` contains the list of tables/... already in the statement,
# used to ensure that the alias we suggest is unique
# JoinConditions are suggested after ON, e.g. 'foo.barid = bar.barid'
# Joins are suggested after JOIN, e.g. 'foo ON foo.barid = bar.barid'
# For convenience, don't require the `usage` argument in Function constructor
# If we've partially typed a word then word_before_cursor won't be an
# empty string. In that case we want to remove the partially typed
# string before sending it to the sqlparser. Otherwise the last token
# will always be the partially typed string which renders the smart
# completion useless because it will always return the list of
# keywords as completion.
# If schema name is unquoted, lower-case it
# This is a temporary hack; the exception handling
# here should be removed once sqlparse has been fixed
# Check for special commands and handle those separately
# Be careful here because trivial whitespace is parsed as a
# statement, but the statement won't have a first token
# Multiple statements being edited -- isolate the current one by
# cumulatively summing statement lengths to find the one that bounds
# the current position
# A single statement
# The empty string
# Trying to complete the special command itself
# Try to distinguish "\d name" from "\d schema.name"
# Note that this will fail to obtain a schema name if wildcards are
# used, e.g. "\d schema???.name"
# \d can describe tables or views
# If 'token' is a Comparison type such as
# 'select * FROM abc a JOIN def d ON a.id = d.'. Then calling
# token.value on the comparison type will only return the lhs of the
# comparison. In this case a.id. So we need to do token.tokens to get
# both sides of the comparison and pick the last token out of that
# sqlparse groups all tokens from the where clause into a single token
# list. This means that token.value may be something like
# 'where foo > 5 and '. We need to look "inside" token.tokens to handle
# suggestions in complicated where clauses correctly
# If the previous token is an identifier, we can suggest datatypes if
# we're in a parenthesized column/field list, e.g.:
# If we're not in a parenthesized list, the most likely scenario is the
# user is about to specify an alias, e.g.:
# Suggest datatypes
# Four possibilities:
# Check for a subquery expression (cases 3 & 4)
# e.g. "SELECT foo FROM bar WHERE foo = ANY("
# Get the token before the parens
# tbl1 INNER JOIN tbl2 USING (col1, col2)
# suggest columns that are present in more than one table
# If the lparen is preceded by a space chances are we're about to
# do a sub-select.
# We're probably in a function argument list
# Don't suggest anything for aliases
# Suggest tables from either the currently-selected schema or the
# public schema if no schema has been specified
# Suggest schemas
# stmt.get_previous_token will fail for e.g. `SELECT 1 FROM functions WHERE function:`
# Suggest functions from either the currently-selected schema or the
# E.g. 'ALTER TABLE <tablename>'
# E.g. 'ALTER TABLE foo ALTER COLUMN bar'
# "ON parent.<suggestion>"
# parent can be either a schema name or table alias
# ON <suggestion>
# Use table alias if there is one, otherwise the table name
# "\c <db", "use <db>", "DROP DATABASE <db>",
# "CREATE DATABASE <newdb> WITH TEMPLATE <db>"
# DROP SCHEMA schema_name, SET SCHEMA schema name
# Note that tables are a form of composite type in postgresql, so
# they're suggested here as well
# token is a keyword we haven't implemented any special handling for
# go backwards in the query until we find one we do recognize
# TableExpression is a namedtuple representing a CTE, used internally
# name: cte alias assigned in the query
# columns: list of column names
# start: index into the original string of the left parens starting the CTE
# stop: index into the original string of the right parens ending the CTE
# Currently editing a cte - treat its body as the current full_text
# Append this cte to the list of available table metadata
# Editing past the last cte (ie the main body of the query)
# Make sure the first meaningful token is "WITH" which is necessary to
# define CTEs
# Get the next (meaningful) token, which should be the first CTE
# Multiple ctes
# A single CTE
# Collapse everything after the ctes into a remainder query
# Find the start position of the opening parens enclosing the cte body
# includes parens
# Find the first DML token to check if it's a SELECT or INSERT/UPDATE/DELETE
# Jump ahead to the RETURNING clause where the list of column names is
# Must be invalid CTE
# The next token should be either a column name, or a list of column names
# NB: IdentifierList.get_identifiers() can return non-identifiers!
# This matches only alphanumerics and underscores.
# This matches everything except spaces, parens, colon, and comma
# This matches everything except spaces, parens, colon, comma, and period
# This matches everything except a space.
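The four character classes described above can be reconstructed as follows (variable names are hypothetical; only the character classes are taken from the comments):

```python
import re

word_only   = re.compile(r"^\w+$")         # alphanumerics and underscores
no_reserved = re.compile(r"^[^\s(),:]+$")  # no spaces, parens, colon, comma
no_dots     = re.compile(r"^[^\s(),:.]+$") # additionally excludes the period
no_space    = re.compile(r"^\S+$")         # anything except a space
```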
# Find the location of token t in the original parsed statement
# We can't use parsed.token_index(t) because t may be a child token
# inside a TokenList, in which case token_index throws an error
# Minimal example:
# Combine the string values of all tokens in the original list
# up to and including the target keyword token t, to produce a
# query string with everything after the keyword token removed
# Postgresql dollar quote signs look like `$$` or `$tag$`
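A dollar-quote delimiter of the `$$` / `$tag$` form can be matched like this (a sketch; the exact pattern used by the parser may differ):

```python
import re

# Matches `$$` and `$tag$` style PostgreSQL dollar-quote delimiters
dollar_quote = re.compile(r"\$\w*\$")
```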
# parsed can contain one or more semi-colon separated commands
# Look for unmatched single quotes, or unmatched dollar sign quotes
# An unmatched double quote, e.g. '"foo', 'foo."', or 'foo."bar'
# Close the double quote, then reparse
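"Close the double quote, then reparse" can be sketched as a parity check on the quote character (helper name is hypothetical; this ignores quotes escaped inside strings):

```python
def close_unmatched_double_quote(sql: str) -> str:
    # An odd number of double quotes means one is unmatched; append a
    # closing quote so the parser sees balanced identifiers.
    if sql.count('"') % 2:
        sql += '"'
    return sql
```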
# This code is borrowed from sqlparse example script.
# <url>
# An incomplete nested select won't be recognized correctly as a
# sub-select. eg: 'SELECT * FROM (SELECT id FROM user'. This causes
# the second FROM to trigger this elif condition resulting in a
# `return`. So we need to ignore the keyword if it is FROM.
# Also 'SELECT * FROM abc JOIN def' will trigger this elif
# condition. So we need to ignore the keyword JOIN and its variants
# INNER JOIN, FULL OUTER JOIN, etc.
# 'SELECT a, FROM abc' will detect FROM as part of the column list.
# So this check here is necessary.
# We need to do some massaging of the names because postgres is case-
# insensitive and '"Foo"' is not the same table as 'Foo' (while 'foo' is)
# Sometimes Keywords (such as FROM) are classified as
# identifiers which don't have the get_real_name() method.
# extract_tables is inspired from examples in the sqlparse lib.
# INSERT statements must stop looking for tables at the sign of first
# Punctuation. eg: INSERT INTO abc (col1, col2) VALUES (1, 2)
# abc is the table name, but if we don't stop at the first lparen, then
# we'll identify abc, col1 and col2 as table names.
# Kludge: sqlparse mistakenly identifies insert statements as
# function calls due to the parenthesized column list, e.g. interprets
# "insert into foo (bar, baz)" as a function call to foo with arguments
# (bar, baz). So don't allow any identifiers in insert statements
# to have is_function=True
# In the case 'sche.<cursor>', we get an empty TableReference; remove that
# Skip space after comma separating default expressions
# End quote
# Begin quote
# End of expression
# Be flexible in not requiring arg_types -- use None as a placeholder
# for each arg. (Used for compatibility with old versions of postgresql
# where such info is hard to get.)
# IN, INOUT, VARIADIC
# For functions without output parameters, the function name
# is used as the name of the output column.
# E.g. 'SELECT unnest FROM unnest(...);'
# OUT, INOUT, TABLE
# Where `literal_type` is one of 'keywords', 'functions', 'datatypes',
# returns a tuple of literal values of that type.
# type: ignore[reportPrivateUsage]
#: A JSON Schema which is a JSON object
#: A JSON Schema of any kind
#: A Resource whose contents are JSON Schemas
#: A JSON Schema Registry
#: The empty JSON Schema Registry
#: JSON Schema draft 2020-12
#: JSON Schema draft 2019-09
#: JSON Schema draft 7
#: JSON Schema draft 6
#: JSON Schema draft 4
#: JSON Schema draft 3
#: A URI which identifies a `Resource`.
#: The type of documents within a registry.
#: A serialized document (e.g. a JSON string)
# Because here we allow subclassing below.
# type: ignore[reportUnknownMemberType]
# type: ignore[reportUnknownArgumentType]
#: A short human-readable name for the specification, used for debugging.
#: Find the ID of a given document.
#: Retrieve the subresources of the given document (without traversing into
#: the subresources themselves).
#: While resolving a JSON pointer, conditionally enter a subresource
#: (if e.g. we have just entered a keyword whose value is a subresource)
#: Retrieve the anchors contained in the given document.
#: An opaque specification where resources have no subresources
#: nor internal identifiers.
#: Attempt to discern which specification applies to the given contents.
#: May be called either as an instance method or as a class method, with
#: slightly different behavior in the following case:
#: Recall that not all contents contain enough internal information about
#: which specification it is written for -- the JSON Schema ``{}``,
#: for instance, is valid under many different dialects and may be
#: interpreted as any one of them.
#: When this method is used as an instance method (i.e. called on a
#: specific specification), that specification is used as the default
#: if the given contents are unidentifiable.
#: On the other hand when called as a class method, an error is raised.
#: To reiterate, ``DRAFT202012.detect({})`` will return ``DRAFT202012``
#: whereas the class method ``Specification.detect({})`` will raise an
#: error.
#: (Note that of course ``DRAFT202012.detect(...)`` may return some other
#: specification when given a schema which *does* identify as being for
#: another version).
#: Raises:
#:     `CannotDetermineSpecification`
#:         if the given contents don't have any discernible
#:         information which could be used to guess which
#:         specification they identify as
# type: ignore[reportGeneralTypeIssues]
# Empty fragment URIs are equivalent to URIs without the fragment.
# TODO: Is this true for non JSON Schema resources? Probably not.
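The empty-fragment equivalence amounts to a small normalization (a sketch; the helper name is an assumption):

```python
def drop_empty_fragment(uri: str) -> str:
    # "https://example.com/schema#" and "https://example.com/schema"
    # identify the same resource; normalize to the fragmentless form.
    return uri[:-1] if uri.endswith("#") else uri
```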
# noqa: TRY003
#: An anchor or resource.
# And a second time we get the same value.
# This still succeeds, but evicts the first URI
# And now this fails (as we popped the value out of `mapping`)
# FIXME: The tests below aren't really representable in the current
# FIXME: The tests below should move to the referencing suite but I haven't yet
# hid_version API was added in
# https://github.com/libusb/hidapi/commit/8f72236099290345928e646d2f2c48f0187ac4af
# so if it is missing we are dealing with hidapi 0.8.0 or older
# Pass the id of the report to be read.
#: The :class:`ConfigObj` instance.
# ConfigObj does not set the encoding on the configspec.
# "minimal" is the same as "plain", but without headers
# not a date
# SPDX-License-Identifier: LGPL-2.1-or-later
# This file is part of libnvme.
# Copyright (c) 2022 Dell Inc.
# Authors: Martin Belanger <Martin.Belanger@dell.com>
# Version 4.3.1
# Register host_iter in _nvme:
# Register subsystem_iter in _nvme:
# Register ctrl_iter in _nvme:
# Register ns_iter in _nvme:
# Register root in _nvme:
# Keep a reference to parent to ensure garbage collection happens in the right order
# Register host in _nvme:
# Register subsystem in _nvme:
# Keep a reference to parent to ensure ctrl obj gets GCed before host
# Register ctrl in _nvme:
# Register ns in _nvme:
# Register aa_log_record in _LibAppArmor:
# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# According to "man 2 kill" PID 0 has a special meaning:
# it refers to <<every process in the process group of the
# calling process>> so we don't want to go any further.
# If we get here it means this UNIX platform *does* have
# a process with id 0.
# EPERM clearly means there's a process to deny access to
# According to "man 2 kill" possible error values are
# (EINVAL, EPERM, ESRCH)
# noqa: B008
# see "man waitpid"
# Sleep for some time and return a new increased interval.
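"Sleep for some time and return a new increased interval" is a simple backoff helper; a sketch (the step and cap values here are assumptions, not the library's actual constants):

```python
import time

def sleep_and_increase(interval, step=0.01, cap=0.04):
    # Sleep, then return a slightly larger polling interval, capped so
    # the loop never waits too long between checks.
    time.sleep(interval)
    return min(interval + step, cap)
```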
# See: https://linux.die.net/man/2/waitpid
# This has two meanings:
# - PID is not a child of os.getpid() in which case
# - PID never existed in the first place
# In both cases we'll eventually return None as we
# can't determine its exit status code.
# WNOHANG flag was used and PID is still running.
# Process terminated normally by calling exit(3) or _exit(2),
# or by returning from main(). The return value is the
# positive integer passed to *exit().
# Process exited due to a signal. Return the negative value
# of that signal.
# elif os.WIFSTOPPED(status):
# elif os.WIFCONTINUED(status):
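The status decoding described above (positive exit code for normal termination, negative signal number otherwise) can be sketched with the standard POSIX macros exposed by `os`:

```python
import os

def decode_wait_status(status):
    # POSIX-only sketch; see "man waitpid".  Returns the positive
    # integer passed to exit() for normal termination, the negative
    # signal number if the process died from a signal, None otherwise
    # (e.g. stopped/continued).
    if os.WIFEXITED(status):
        return os.WEXITSTATUS(status)
    if os.WIFSIGNALED(status):
        return -os.WTERMSIG(status)
    return None
```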
# Total space which is only available to root (unless changed
# at system level).
# Remaining free space usable by root.
# Remaining free space usable by user.
# Total space being used in general.
# see: https://github.com/giampaolo/psutil/pull/2152
# Total space which is available to user (same as 'total' but
# for the user).
# User usage percent compared to the total amount of space
# the user can use. This number would be higher if compared
# to root's because the user has less space (usually -5%).
# NB: the percentage is -5% than what shown by df due to
# reserved blocks that we are currently not considering:
# https://github.com/giampaolo/psutil/issues/829#issuecomment-223750462
# deprecated alias
# This is public API and it will be retrieved from _pslinux.py
# via sys.modules.
# This is public writable API which is read from _pslinux.py and
# _pssunos.py via sys.modules.
# exceptions
# "CONN_IDLE", "CONN_BOUND",
# "RLIM_INFINITY", "RLIMIT_AS", "RLIMIT_CORE", "RLIMIT_CPU", "RLIMIT_DATA",
# "RLIMIT_FSIZE", "RLIMIT_LOCKS", "RLIMIT_MEMLOCK", "RLIMIT_NOFILE",
# "RLIMIT_NPROC", "RLIMIT_RSS", "RLIMIT_STACK", "RLIMIT_MSGQUEUE",
# "RLIMIT_NICE", "RLIMIT_RTPRIO", "RLIMIT_RTTIME", "RLIMIT_SIGPENDING",
# classes
# proc
# memory
# cpu
# "cpu_freq", "getloadavg"
# network
# disk
# "sensors_temperatures", "sensors_battery", "sensors_fans"     # sensors
# others
# Populate global namespace with RLIM* constants.
# Sanity check in case the user messed up with psutil installation
# or did something weird with sys.path. In this case we might end
# up importing a python module using a C extension module which
# was compiled for a different version of psutil.
# We want to prevent that by failing sooner rather than later.
# See: https://github.com/giampaolo/psutil/issues/564
# =====================================================================
# --- Utils
# Faster version (Windows and Linux).
# --- Process class
# used for caching on Windows only (on POSIX ppid may change)
# platform-specific modules define an _psplatform.Process
# implementation class
# This should happen on Windows only, since we use the fast
# create time method. AFAIK, on all other platforms we are
# able to get create time for all PIDs.
# Zombies can still be queried by this class (although
# not always) and pids() return them so just go on.
# Use create_time() fast method in order to speedup
# `process_iter()`. This means we'll get AccessDenied for
# most ADMIN processes, but that's fine since it means
# we'll also get AccessDenied on kill().
# https://github.com/giampaolo/psutil/issues/2366#issuecomment-2381646555
# Use 'monotonic' process starttime since boot to form unique
# process identity, since it is stable over changes to the system clock.
# Test for equality with another Process object based
# on PID and creation time.
# Zombie processes on Open/NetBSD/illumos/Solaris have a
# creation time of 0.0.  This covers the case when a process
# started normally (so it has a ctime), then it turned into a
# zombie. It's important to do this because is_running()
# depends on __eq__.
# We may directly raise NSP in here already if PID is just
# not running, but I prefer NSP to be raised naturally by
# the actual Process API call. This way unit tests will tell
# us if the API is broken (aka don't raise NSP when it
# should). We also remain consistent with all other "get"
# APIs which don't use _raise_if_pid_reused().
# --- utility methods
# NOOP: this covers the use case where the user enters the
# context twice:
# >>> with p.oneshot():
# ...    with p.oneshot():
# Also, since as_dict() internally uses oneshot()
# I expect that the code below will be a pretty common
# "mistake" that the user will make, so let's guard
# against that:
# ...    p.as_dict()
# cached in case cpu_percent() is used
# cached in case memory_percent() is used
# cached in case parent() is used
# cached in case username() is used
# specific implementation cache
# in case of not implemented functionality (may happen
# on old or exotic systems) we want to crash only if
# the user explicitly asked for that particular attr
# Get a fresh (non-cached) ctime in case the system clock
# was updated. TODO: use a monotonic ctime on platforms
# where it's supported.
# ...else ppid has been reused by another process
# Checking if PID is alive is not enough as the PID might
# have been reused by another process. Process identity /
# uniqueness over time is guaranteed by (PID + creation
# time) and that is verified in __eq__.
# We should never get here as it's already handled in
# Process.__init__; here just for extra safety.
# --- actual API
# On POSIX we don't want to cache the ppid as it may unexpectedly
# change to 1 (init) in case this process turns into a zombie:
# https://github.com/giampaolo/psutil/issues/321
# http://stackoverflow.com/questions/356722/
# XXX should we check creation time here rather than in
# Process.parent()?
# Process name is only cached on Windows as on POSIX it may
# change, see:
# https://github.com/giampaolo/psutil/issues/692
# On UNIX the name gets truncated to the first 15 characters.
# If it matches the first part of the cmdline we return that
# one instead because it's usually more explicative.
# Examples are "gnome-keyring-d" vs. "gnome-keyring-daemon".
# Just pass and return the truncated name: it's better
# than nothing. Note: there are actual cases where a
# zombie process can return a name() but not a
# cmdline(), see:
# https://github.com/giampaolo/psutil/issues/2239
# try to guess exe from cmdline[0] in absence of a native
# exe representation
# the possible exe
# Attempt to guess only in case of an absolute path.
# It is not safe otherwise as the process might have
# changed cwd.
# underlying implementation can legitimately return an
# empty string; if that's the case we don't want to
# raise AD while guessing from the cmdline
# might happen if python was installed from sources
# the uid can't be resolved by the system
# Linux, BSD, AIX and Windows only
# Linux and Windows
# Linux / FreeBSD only
# Windows, Linux and FreeBSD only
# Linux, FreeBSD, SunOS
# All platforms has it, but maybe not in the future.
# Get a fresh (non-cached) ctime in case the system clock was
# updated. TODO: use a monotonic ctime on platforms where it's
# supported.
# if child happens to be older than its parent
# (self) it means child's PID has been reused
# Construct a {pid: [child pids]} dict
# Recursively traverse that dict, starting from self.pid,
# such that we only call Process() on actual children
# Since pids can be reused while the ppid_map is
# constructed, there may be rare instances where
# there's a cycle in the recorded process "tree".
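The `{pid: [child pids]}` construction plus cycle-guarded traversal can be sketched as follows (function name and the `{pid: ppid}` input shape are assumptions):

```python
def children_of(pid, ppid_map):
    # ppid_map: {pid: ppid}.  Build the reverse {ppid: [child pids]}
    # mapping, then walk it from `pid`, guarding against cycles that
    # PID reuse can introduce into the recorded "tree".
    reverse = {}
    for child, parent in ppid_map.items():
        reverse.setdefault(parent, []).append(child)
    result, stack, seen = [], [pid], {pid}
    while stack:
        current = stack.pop()
        for child in reverse.get(current, []):
            if child not in seen:  # cycle guard
                seen.add(child)
                result.append(child)
                stack.append(child)
    return result
```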
# reset values for next call in case of interval == None
# This is the utilization split evenly between all CPUs.
# E.g. a busy loop process on a 2-CPU-cores system at this
# point is reported as 50% instead of 100%.
# interval was too low
# Note 1:
# in order to emulate "top" we multiply the value for the num
# of CPU cores. This way the busy process will be reported as
# having 100% (or more) usage.
# Note 2:
# taskmgr.exe on Windows differs in that it will show 50%
# Note 3:
# a percentage > 100 is legitimate as it can result from a
# process with multiple threads running on different CPU
# cores (top does the same), see:
# http://stackoverflow.com/questions/1032357
# https://github.com/giampaolo/psutil/issues/474
# use cached value if available
# we should never get here
# --- signals
# see "man 2 kill"
# We do this because os.kill() lies in case of
# zombie processes.
# The valid attr names which can be processed by Process.as_dict().
# --- Popen class
# Explicitly avoid to raise NoSuchProcess in case the process
# spawned by subprocess.Popen terminates too quickly, see:
# https://github.com/giampaolo/psutil/issues/193
# Flushing a BufferedWriter may raise an error.
# --- system processes related functions
# On POSIX we use os.kill() to determine PID existence.
# According to "man 2 kill" PID 0 has a special meaning
# though: it refers to <<every process in the process
# group of the calling process>> and that is not what we want
# to do here.
# new process
# noqa: PLW0108
# Set new Process instance attribute.
# Make sure that every complete iteration (all processes)
# will last max 1 sec.
# We do this because we don't want to wait too long on a
# single process: in case it terminates too late other
# processes may disappear in the meantime and their PID
# reused.
# noqa: PLR6104
# Last attempt over processes survived so far.
# timeout == 0 won't make this function wait any further.
# --- CPU related functions
# Don't want to crash at import time.
# On Linux guest times are already accounted in "user" or
# "nice" times, so we subtract them from total.
# Htop does the same. References:
# https://github.com/giampaolo/psutil/pull/940
# http://unix.stackexchange.com/questions/178045
# https://github.com/torvalds/linux/blob/
# Linux 2.6.24+
# Linux 3.2.0+
# Linux: "iowait" is time during which the CPU does not do anything
# (waits for IO to complete). On Linux IO wait is *not* accounted
# in "idle" time so we subtract it. Htop does the same.
# CPU times are always supposed to increase over time
# or at least remain the same and that's because time
# cannot go backwards.
# Surprisingly sometimes this might not be the case (at
# least on Windows and Linux), see:
# https://github.com/giampaolo/psutil/issues/392
# https://github.com/giampaolo/psutil/issues/645
# https://github.com/giampaolo/psutil/issues/1210
# Trim negative deltas to zero to ignore decreasing fields.
# top does the same. Reference:
# https://gitlab.com/procps-ng/procps/blob/v3.3.12/top/top.c#L5063
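Trimming negative deltas to zero is a one-liner per field; a sketch over two snapshots of CPU time counters:

```python
def positive_deltas(t1, t2):
    # Counters should only increase, but on some platforms a field can
    # go backwards between two reads; clamp negative deltas to zero,
    # as top does.
    return [max(0.0, after - before) for before, after in zip(t1, t2)]
```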
# system-wide usage
# per-cpu usage
# Use a separate dict for cpu_times_percent(), so it's independent from
# cpu_percent() and they can both be used within the same program.
# "scale" is the value to multiply each delta with to get percentages.
# We use "max" to avoid division by zero (if all_delta is 0, then all
# fields are 0 so percentages will be 0 too. all_delta cannot be a
# fraction because cpu times are integers)
# make sure we don't return negative values or values over 100%
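The scaling-and-clamping logic described above can be sketched per field (function name is hypothetical):

```python
def calc_percent(field_delta, all_delta):
    # max(1, ...) avoids division by zero: if all_delta is 0 then every
    # field delta is 0 as well, so the result is 0 regardless (cpu
    # times are integers, so all_delta cannot be a fraction).
    scale = 100.0 / max(1, all_delta)
    percent = field_delta * scale
    # don't return negative values or values over 100%
    return min(max(percent, 0.0), 100.0)
```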
# On Linux if /proc/cpuinfo is used min/max are set
# to None.
# Perform this hasattr check once on import time to either use the
# platform based code or proxy straight from the os module.
# --- system memory related functions
# cached for later use in Process.memory_percent()
# --- disks/partitions related functions
# --- network related functions
# sort by family
# Linux defines AF_LINK as an alias for AF_PACKET.
# We re-set the family here so that repr(family)
# will show AF_LINK rather than AF_PACKET
# The underlying C function may return an incomplete MAC
# address in which case we fill it with null bytes, see:
# https://github.com/giampaolo/psutil/issues/786
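Padding an incomplete MAC address with null bytes can be sketched like this (helper name and the fixed six-group assumption are mine; the separator varies by platform):

```python
def pad_mac(addr, sep=":"):
    # Fill an incomplete MAC address with null ("00") groups so it
    # always has six octets.
    groups = addr.split(sep)
    groups += ["00"] * (6 - len(groups))
    return sep.join(groups)
```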
# On Windows broadcast is None, so we determine it via
# ipaddress module.
# --- sensors
# Linux, Windows, FreeBSD, macOS
# --- other system related functions
# --- Windows services
# noqa: ICN001
# --- globals
# According to /usr/include/sys/proc.h SZOMB is unused.
# test_zombie_process() shows that SDEAD is the right
# equivalent. Also it appears there's no equivalent of
# psutil.STATUS_DEAD. SDEAD really means STATUS_ZOMBIE.
# cext.SZOMB: _common.STATUS_ZOMBIE,
# From http://www.eecs.harvard.edu/~margo/cs161/videos/proc.h.txt
# OpenBSD has SRUN and SONPROC: SRUN indicates that a process
# is runnable but *not* yet running, i.e. is on a run queue.
# SONPROC indicates that the process is actually executing on
# a CPU, i.e. it is no longer on a run queue.
# As such we'll map SRUN to STATUS_WAKING and SONPROC to
# STATUS_RUNNING
# --- named tuples
# psutil.virtual_memory()
# psutil.cpu_times()
# psutil.Process.memory_info()
# psutil.Process.memory_full_info()
# psutil.Process.cpu_times()
# psutil.Process.memory_maps(grouped=True)
# psutil.Process.memory_maps(grouped=False)
# psutil.disk_io_counters()
# --- memory
# On NetBSD buffers and shared mem are determined via /proc.
# The C ext sets them to 0.
# Before avail was calculated as (inactive + cached + free),
# same as zabbix, but it turned out it could exceed total (see
# #2233), so zabbix seems to be wrong. Htop calculates it
# differently, and the used value seems more realistic, so let's
# match htop.
# https://github.com/htop-dev/htop/blob/e7f447b/netbsd/NetBSDProcessList.c#L162
# https://github.com/zabbix/zabbix/blob/af5e0f8/src/libs/zbxsysinfo/netbsd/memory.c#L135
# matches freebsd-memory CLI:
# * https://people.freebsd.org/~rse/dist/freebsd-memory
# * https://www.cyberciti.biz/files/scripts/freebsd-memory.pl.txt
# matches zabbix:
# * https://github.com/zabbix/zabbix/blob/af5e0f8/src/libs/zbxsysinfo/freebsd/memory.c#L143
# --- CPU
# OpenBSD and NetBSD do not implement this.
# From the C module we'll get an XML string similar to this:
# http://manpages.ubuntu.com/manpages/precise/man4/smp.4freebsd.html
# We may get None in case "sysctl kern.sched.topology_spec"
# is not supported on this BSD version, in which case we'll mimic
# os.cpu_count() and return None.
# get rid of padding chars appended at the end of the string
# needed otherwise it will memleak
# If logical CPUs == 1 it's obvious we have only 1 core.
# Note: the C ext is returning some metrics we are not exposing:
# traps.
# Note about intrs: the C extension returns 0. intrs
# can be determined via /proc/stat; it has the same value as
# soft_intrs though, so the kernel is faking it (?).
# Note about syscalls: the C extension always sets it to 0 (?).
# traps, faults and forks.
# --- disks
# --- network
# https://github.com/giampaolo/psutil/issues/1279
# See: https://github.com/giampaolo/psutil/issues/1074
# reboot or shutdown
# --- processes
# On OpenBSD the kernel does not return PID 0 (neither does
# ps) but it's actually queryable (Process(0) will succeed).
# We do this because _psposix.pid_exists() lies in case of
# OpenBSD seems to be the only BSD platform where
# _psposix.pid_exists() returns True for thread IDs (tids),
# so we can't use it.
# ENOENT (no such file or directory) gets raised on open().
# ESRCH (no such process) can get raised on read() if
# process is gone in meantime.
# For those C function who do not raise NSP, possibly returning
# incorrect or incomplete result.
# else NSP
# /proc/0 dir exists but /proc/0/exe doesn't
# OpenBSD: exe cannot be determined; references:
# https://chromium.googlesource.com/chromium/src/base/+/
# We try our best guess by using which against the first
# cmdline arg (may return None).
# ...else it crashes
# XXX - most of the times the underlying sysctl() call on
# NetBSD and OpenBSD returns a truncated string. Also
# /proc/pid/cmdline behaves the same so it looks like this
# is a kernel bug.
# XXX: this happens with unicode tests. It means the C
# routine is unable to decode invalid unicode chars.
# NetBSD: ctime subject to system clock updates.
# FreeBSD / NetBSD
# Note: on OpenBSD this (/dev/mem) requires root access.
# XXX is '?' legit? (we're not supposed to return it anyway)
# sometimes we get an empty string, in which case we turn
# it into None
# ...else it would raise EINVAL
# --- FreeBSD only APIs
# Pre-emptively check if CPUs are valid because the C
# function has a weird behavior in case of invalid CPUs,
# see: https://github.com/giampaolo/psutil/issues/586
# 'man cpuset_setaffinity' about EDEADLK:
# <<the call would leave a thread without a valid CPU to run
# on because the set does not overlap with the thread's
# anonymous mask>>
# same as run
# sunos specific
# psutil.cpu_times(percpu=True)
# we could have done this with kstat, but IMHO this is good enough
# note: there's no difference on Solaris
# we are supposed to get total/free by doing so:
# http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/
# ...nevertheless I can't manage to obtain the same numbers as 'swap'
# cmdline utility, so let's parse its output (sigh!)
# mimic os.cpu_count() behavior
# TODO - the filtering logic should be better checked so that
# it tries to reflect 'df' as much as possible
# Differently from, say, Linux, we don't have a list of
# common fs types so the best we can do, AFAIK, is to
# filter by filesystem having a total size > 0.
# https://github.com/giampaolo/psutil/issues/1674
# TODO: refactor and use _common.conn_to_ntuple.
# --- other system functions
# note: the underlying C function includes entries about
# system boot, run level and others.  We might want
# to use them in the future.
# note: max len == 15
# continue and guess the exe name from the cmdline
# Will be guessed later from cmdline but we want to explicitly
# invoke cmdline here in order to get an AccessDenied
# exception if the user has not enough privileges.
# Note #1: getpriority(3) doesn't work for realtime processes.
# Psinfo is what ps uses, see:
# https://github.com/giampaolo/psutil/issues/1194
# Special case PIDs: internally setpriority(3) return ESRCH
# (no such process), no matter what.
# The process actually exists though, as it has a name,
# creation time, etc.
# We may get here if we attempt to query a 64bit process
# with a 32bit python.
# Error originates from read() and also tools like "cat"
# fail in the same way (!).
# Since there simply is no way to determine CPU times we
# return 0.0 as a fallback. See:
# https://github.com/giampaolo/psutil/issues/857
# /proc/PID/path/cwd may not be resolved by readlink() even if
# it exists (ls shows it). If that's the case and the process
# is still alive return None (we can return None also on BSD).
# Reference: https://groups.google.com/g/comp.unix.solaris/c/tcqvhTNFCAs
# raise NSP or AD
# ENOENT == thread gone in meantime
# TODO: rewrite this in C (...but the damn netstat source code
# does not include this part! Argh!!)
# The underlying C implementation retrieves all OS connections
# and filters them by PID.  At this point we can't tell whether
# an empty list means there were no connections for process or
# process is no longer active so we force NSP in case the PID
# is no longer there.
# will raise NSP if process is gone
# UNIX sockets
# sometimes the link may not be resolved by
# readlink() even if it exists (ls shows it).
# If that's the case we just return the
# unresolved link path.
# This seems an inconsistency with /proc similar
# to: http://goo.gl/55XgO
# We may get here if:
# 1) we are on an old Windows version
# 2) psutil was installed via pip + wheel
# See: https://github.com/giampaolo/psutil/issues/811
# process priority constants, import from __init__.py:
# http://msdn.microsoft.com/en-us/library/ms686219(v=vs.85).aspx
# Process priority
# IO priority
# psutil.Process.io_counters()
# --- utils
# system memory (commit total/limit) is the sum of physical and swap
# thus physical memory values need to be subtracted to get swap values
# commit total is incremented immediately (decrementing free_system)
# while the corresponding free physical value is not decremented until
# pages are accessed, so we can't use free system memory for swap.
# instead, we calculate page file usage based on performance counter
# --- disk
# XXX: do we want to use "strict"? Probably yes, in order
# to fail immediately. After all we are accepting input here...
# Internally, GetSystemTimes() is used, and it doesn't return
# interrupt and dpc times. cext.per_cpu_times() does, so we
# rely on it to get those only.
# Drop to 2 decimal points which is what Linux does
# For constants meaning see:
# https://msdn.microsoft.com/en-us/library/windows/desktop/
# This dirty hack is to adjust the precision of the returned
# value which may have a 1 second fluctuation, see:
# https://github.com/giampaolo/psutil/issues/1007
# noqa: PLW1641
# Test for equality with another WindowsService object based
# on name.
# XXX - update _self.display_name?
# config query
# status query
# utils
# XXX: the necessary C bindings for start() and stop() are
# implemented but for now I prefer not to expose them.
# I may change my mind in the future. Reasons:
# - they require Administrator privileges
# - can't implement a timeout for stop() (unless by using a thread,
# - would require adding ServiceAlreadyStarted and
# - we might also want to have modify(), which would basically mean
# - psutil is typically about "read only" monitoring stuff;
# def start(self, timeout=None):
# def stop(self):
# used internally by Process.children()
# retries for roughly 1 second
# --- oneshot() stuff
# This is how PIDs 0 and 4 are always represented in taskmgr
# and process-hacker.
# 24 = ERROR_TOO_MANY_OPEN_FILES. Not sure why this happens
# (perhaps PyPy's JIT delaying garbage collection of files?).
# May be "Registry", "MemCompression", ...
# PEB method detects cmdline changes but requires more
# privileges: https://github.com/giampaolo/psutil/pull/1398
# TODO: the C ext can probably be refactored in order
# to get this from cext.proc_info()
# on Windows RSS == WorkingSetSize and VMS == PagefileUsage.
# Underlying C function returns fields of PROCESS_MEMORY_COUNTERS
# struct.
# wset
# pagefile
# XXX - can't use wrap_exceptions decorator as we're
# returning a generator; probably needs refactoring.
# WaitForSingleObject() expects time in milliseconds.
# Exit code is supposed to come from GetExitCodeProcess().
# May also be None if OpenProcess() failed with
# ERROR_INVALID_PARAMETER, meaning PID is already gone.
# WaitForSingleObject() returned WAIT_TIMEOUT. Just raise.
# WaitForSingleObject() returned WAIT_ABANDONED, see:
# https://github.com/giampaolo/psutil/issues/1224
# We'll just rely on the internal polling and return None
# when the PID disappears. Subprocess module does the same
# (return None):
# https://github.com/python/cpython/blob/
# At this point WaitForSingleObject() returned WAIT_OBJECT_0,
# meaning the process is gone. Stupidly there are cases where
# its PID may still stick around so we do a further internal
# polling.
# incremental delay
# Note: proc_times() not put under oneshot() 'cause create_time()
# is already cached by the main Process class.
# Children user/system times are not retrievable (set to 0).
# return a normalized pathname since the native C function appends
# "\\" at the end of the path
# Filenames come in native format like:
# "\Device\HarddiskVolume1\Windows\system32\file.txt"
# Convert the first part in the corresponding drive letter
# (e.g. "C:\") by using Windows's QueryDosDevice()
# SetProcessAffinityMask() states that ERROR_INVALID_PARAMETER
# is returned for an invalid CPU but this seems not to be true,
# therefore we check CPU validity beforehand.
# only voluntary ctx switches are supported
# OS constants
# connection constants
# net constants
# noqa: F822
# process status constants
# other constants
# named tuples
# utility functions
# shell utils
# ===================================================================
# --- OS constants
# --- API constants
# Process.status()
# Linux, macOS, FreeBSD
# Process.net_connections() and psutil.net_connections()
# net_if_stats()
# sensors_battery()
# --- others
# --- namedtuples
# --- for system functions
# psutil.swap_memory()
# psutil.disk_usage()
# psutil.disk_partitions()
# psutil.net_io_counters()
# psutil.users()
# psutil.net_connections()
# psutil.net_if_addrs()
# psutil.net_if_stats()
# psutil.cpu_stats()
# psutil.cpu_freq()
# psutil.sensors_temperatures()
# psutil.sensors_battery()
# psutil.sensors_fans()
# --- for Process methods
# psutil.Process.open_files()
# psutil.Process.threads()
# psutil.Process.uids()
# psutil.Process.gids()
# psutil.Process.ionice()
# psutil.Process.ctx_switches()
# psutil.Process.net_connections()
# psutil.net_connections() and psutil.Process.net_connections()
# --- Process.net_connections() 'kind' parameter mapping
# --- Exceptions
# invoked on `raise Error`
# invoked on `repr(Error)`
# case 1: we previously entered oneshot() ctx
# case 2: we never entered oneshot() ctx
# case 3: we entered oneshot() ctx but there's no cache
# for this entry yet
# multi-threading race condition, see:
# https://github.com/giampaolo/psutil/issues/1948
# The block is usually raw data from the target process.  It might contain
# trailing garbage and lines that do not look like assignments.
# localize global variable to speed up access.
# nul byte at the beginning or double nul byte means finish
# there might not be an equals sign
# Windows expects environment variables to be uppercase only
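The nul-byte and equals-sign rules above amount to a small parser. A minimal sketch (the function name and signature are illustrative, not the actual implementation):

```python
import os

def parse_environ_block(data):
    # Parse a raw, nul-separated "VAR=value" environment block.
    ret = {}
    pos = 0
    while True:
        next_pos = data.find("\0", pos)
        # nul byte at the beginning or double nul byte means finish
        if next_pos <= pos:
            break
        equal_pos = data.find("=", pos, next_pos)
        # there might not be an equals sign; skip such garbage entries
        if equal_pos > pos:
            key = data[pos:equal_pos]
            value = data[equal_pos + 1:next_pos]
            # Windows expects environment variables to be uppercase only
            if os.name == "nt":
                key = key.upper()
            ret[key] = value
        pos = next_pos + 1
    return ret
```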
# ignore whatever C returned to us
# This was the first call.
# The input dict has a new key (e.g. a new disk or NIC)
# which didn't exist in the previous call.
# it wrapped!
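The wrap detection above can be compensated for with a one-liner; this sketch assumes a 32-bit counter cap, which is an assumption rather than something stated here:

```python
def wrap_aware(new, old, cap=2**32):
    # A monotonically increasing counter that went backwards most
    # likely wrapped around `cap`; compensate so deltas stay positive.
    return new + cap if new < old else new  # it wrapped!
```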
# The read buffer size for open() builtin. This (also) dictates how
# much data we read(2) when iterating over file lines as in:
# Default per-line buffer size for binary files is 1K; for text files
# it is 8K. We use a bigger buffer (32K) in order to have more consistent
# results when reading /proc pseudo files on Linux, see:
# https://github.com/giampaolo/psutil/issues/2050
# https://github.com/giampaolo/psutil/issues/708
# https://github.com/giampaolo/psutil/issues/675
# https://github.com/giampaolo/psutil/pull/733
# noqa: SIM115
# Dictates per-line read(2) buffer size. Default is 8k. See:
# https://github.com/giampaolo/psutil/issues/2050#issuecomment-1013387546
# --- shell utils
# ...because str(exc) may contain info about the file name
# This is how Zabbix calculates avail and used mem:
# https://github.com/zabbix/zabbix/blob/master/src/libs/zbxsysinfo/osx/memory.c
# Also see: https://github.com/giampaolo/psutil/issues/1277
# This is NOT how Zabbix calculates free mem but it matches the
# "free" cmdline utility.
# not always available on ARM64
# no power source - return None according to interface
# Note: on macOS this will fail with AccessDenied unless
# the process is owned by root.
# On certain macOS versions the pids() C function doesn't return
# PID 0, but "ps" does and the process is queryable via sysctl():
# https://travis-ci.org/giampaolo/psutil/jobs/309619941
# Note: should work with all PIDs without permission issues.
# Note: should work for PIDs owned by user only.
# children user / system times are not retrievable (set to 0)
# The involuntary value seems not to be available;
# getrusage() numbers seem to confirm this theory.
# We set it to 0.
# Copyright (c) 2009, Giampaolo Rodola'
# Copyright (c) 2017, Arnon Yaari
# TODO what status is this?
# try to get speed and duplex
# TODO: rewrite this in C (entstat forks, so use truss -f to follow.
# looks like it is using an undocumented ioctl?)
# note: max 16 characters
# there is no way to get the executable path on AIX other than to guess,
# and guessing is more complex than what's in the wrapping class
# relative or absolute path
# if cwd has changed, we're out of luck - this may be wrong!
# not found, move to search in PATH using basename only
# search for exe name in PATH
# The underlying C implementation retrieves all OS threads
# convert from 64-bit dev_t to 32-bit dev_t and then map the device
# try to match the rdev of /dev/pts/* files against ttydev
# TODO rewrite without using procfiles (stat /proc/pid/fd/* and then
# find matching name of the inode)
# no /proc/0/fd
# if process is terminated, proc_io_counters returns OSError
# instead of NSP
# io prio constants
# connection status constants
# Number of clock ticks per second
# "man iostat" states that sectors are equivalent with blocks and have
# a size of 512 bytes. Despite this value can be queried at runtime
# via /sys/block/{DISK}/queue/hw_sector_size and results may vary
# between 1k, 2k, or 4k... 512 appears to be a magic constant used
# throughout Linux source code:
# * https://stackoverflow.com/a/38136179/376587
# * https://lists.gt.net/linux/kernel/2241060
# * https://github.com/giampaolo/psutil/issues/1305
# * https://github.com/torvalds/linux/blob/
# * https://lkml.org/lkml/2015/8/17/234
# ioprio_* constants http://linux.die.net/man/2/ioprio_get
# https://github.com/torvalds/linux/blame/master/fs/proc/array.c
# ...and (TASK_* constants):
# https://github.com/torvalds/linux/blob/master/include/linux/sched.h
# https://github.com/torvalds/linux/blob/master/include/net/tcp_states.h
# psutil.Process().open_files()
# psutil.Process().memory_info()
# psutil.Process().memory_full_info()
# psutil.Process().memory_maps(grouped=True)
# psutil.Process().memory_maps(grouped=False)
# readlink() might return paths containing null bytes ('\x00')
# resulting in "TypeError: must be encoded string without NULL
# bytes, not str" errors when the string is passed to other
# fs-related functions (os.*, open(), ...).
# Apparently everything after '\x00' is garbage (we can have
# ' (deleted)', 'new' and possibly others), see:
# https://github.com/giampaolo/psutil/issues/717
# Certain paths have ' (deleted)' appended. Usually this is
# bogus as the file actually exists. Even if it doesn't we
# don't care.
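The two cleanup rules above (drop everything from the first nul byte, and strip a trailing " (deleted)" marker only when the file really is gone) can be sketched as follows; the helper name is hypothetical:

```python
import os

def clean_readlink_result(path):
    # Everything from the first '\x00' on is garbage
    # (' (deleted)', 'new' and possibly others).
    path = path.split("\x00")[0]
    # The ' (deleted)' suffix is usually bogus (the file exists);
    # drop it only when the path really is missing.
    if path.endswith(" (deleted)") and not os.path.exists(path):
        path = path[:-len(" (deleted)")]
    return path
```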
# possible values: r, w, a, r+, a+
# Re-adapted from iostat source code, see:
# https://github.com/sysstat/sysstat/blob/
# Some devices may have a slash in their name (e.g. cciss/c0d0...).
# Linux >= 2.6.11
# Linux >= 2.6.24
# Linux >= 3.2.0
# --- system memory
# Note about "fallback" value. According to:
# https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/
# ...long ago "available" memory was calculated as (free + cached),
# We use fallback when one of these is missing from /proc/meminfo:
# "Active(file)": introduced in 2.6.28 / Dec 2008
# "Inactive(file)": introduced in 2.6.28 / Dec 2008
# "SReclaimable": introduced in 2.6.19 / Nov 2006
# /proc/zoneinfo: introduced in 2.6.13 / Aug 2005
# kernel 2.6.13
# /proc doc states that the available fields in /proc/meminfo vary
# by architecture and compile options, but these 3 values are also
# returned by sysinfo(2); as such we assume they are always there.
# https://github.com/giampaolo/psutil/issues/1010
# "free" cmdline utility sums reclaimable to cached.
# Older versions of procps used to add slab memory instead.
# This got changed in:
# https://gitlab.com/procps-ng/procps/commit/
# since kernel 2.6.19
# since kernel 2.6.32
# kernels 2.4
# - starting from 4.4.0 we match free's "available" column.
# - free and htop available memory differs as per:
# - MemAvailable has been introduced in kernel 3.14
# Yes, it can happen (probably a kernel bug):
# https://github.com/giampaolo/psutil/issues/1915
# In this case "free" CLI tool makes an estimate. We do the same,
# and it matches "free" CLI tool.
# If avail is greater than total or our calculation overflows,
# that's symptomatic of running within an LXC container where such
# values will be dramatically distorted over those of the host.
# https://gitlab.com/procps-ng/procps/blob/
# Warn about missing metrics which are set to 0.
# We prefer /proc/meminfo over sysinfo() syscall so that
# psutil.PROCFS_PATH can be used in order to allow retrieval
# for linux containers, see:
# https://github.com/giampaolo/psutil/issues/1015
# get pgin/pgouts
# see https://github.com/giampaolo/psutil/issues/722
# values are expressed in 4 kilo bytes, we want
# bytes instead
# we might get here when dealing with exotic Linux
# flavors, see:
# https://github.com/giampaolo/psutil/issues/313
# get rid of the first line which refers to system wide CPU stats
# as a second fallback we try to parse /proc/cpuinfo
# unknown format (e.g. armel/sparc architectures), see:
# https://github.com/giampaolo/psutil/issues/200
# try to parse /proc/stat as a last resort
# mimic os.cpu_count()
# Method #1
# These 2 files are the same but */core_cpus_list is newer while
# */thread_siblings_list is deprecated and may disappear in the future.
# https://www.kernel.org/doc/Documentation/admin-guide/cputopology.rst
# https://github.com/giampaolo/psutil/pull/1727#issuecomment-707624964
# https://lkml.org/lkml/2019/2/26/41
# Method #2
# new section
# ongoing section
# take cached value from cpuinfo if available, see:
# https://github.com/giampaolo/psutil/issues/1851
# Likely an old RedHat, see:
# https://github.com/giampaolo/psutil/issues/1071
# if cpu core is offline, set to all zeroes
# The string represents the basename of the corresponding
# /proc/net/{proto_name} file.
# ENOENT == file which is gone in the meantime;
# os.stat(f"/proc/{self.pid}") will be done later
# to force NSP (if it's the case)
# not a link
# file name too long
# the process is using a socket
# os.listdir() is gonna raise a lot of access denied
# exceptions in case of unprivileged user; that's fine
# as we'll just end up returning a connection with PID
# and fd set to None anyway.
# Both netstat -an and lsof do the same so it's
# unlikely we can do any better.
# ENOENT just means a PID disappeared on us.
# this usually refers to a local socket in listen mode with
# no end-points connected
# see: https://github.com/giampaolo/psutil/issues/201
# IPv6
# see: https://github.com/giampaolo/psutil/issues/623
# IPv6 not supported
# skip the first line
# # We assume inet sockets are unique, so we error
# # out if there are multiple references to the
# # same inode. We won't do this for UNIX sockets.
# if len(inodes[inode]) > 1 and family != socket.AF_UNIX:
# see: https://github.com/giampaolo/psutil/issues/766
# noqa: B904
# noqa: SIM108
# With UNIX sockets we can have a single inode
# referencing many file descriptors.
# XXX: determining the remote endpoint of a
# UNIX socket on Linux is not possible, see:
# https://serverfault.com/questions/252723/
# no connections for this process
# in
# unused
# out
# OK, this is a bit confusing. The format of /proc/diskstats can
# have 3 variations.
# On Linux 2.4 each line has always 15 fields, e.g.:
# "3     0   8 hda 8 8 8 8 8 8 8 8 8 8 8"
# On Linux 2.6+ each line *usually* has 14 fields, and the disk
# name is in another position, like this:
# "3    0   hda 8 8 8 8 8 8 8 8 8 8 8"
# ...unless (Linux 2.6) the line refers to a partition instead
# of a disk, in which case the line has fewer fields (7):
# "3    1   hda1 8 8 8 8"
# 4.18+ has 4 fields added:
# "3    0   hda 8 8 8 8 8 8 8 8 8 8 8 0 0 0 0"
# 5.5 has 2 more fields.
# https://www.kernel.org/doc/Documentation/iostats.txt
# https://www.kernel.org/doc/Documentation/ABI/testing/procfs-diskstats
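The three layouts above can be told apart by field count alone. A sketch of that dispatch (the function name is illustrative, and 16/17-field lines are assumed not to occur):

```python
def split_diskstats_line(line):
    # Decide which /proc/diskstats flavor a line is by counting
    # fields; returns (device name, list of int counters).
    fields = line.split()
    if len(fields) == 15:
        # Linux 2.4: name is the 4th column
        name, values = fields[3], fields[4:]
    elif len(fields) == 14 or len(fields) >= 18:
        # Linux 2.6+ disk line (14 fields; 18 on 4.18+, 20 on 5.5+)
        name, values = fields[2], fields[3:]
    elif len(fields) == 7:
        # Linux 2.6+ partition line with fewer fields
        name, values = fields[2], fields[3:]
    else:
        raise ValueError(f"can't interpret line {line!r}")
    return name, [int(v) for v in values]
```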
# Linux 2.4
# Linux 2.6+, line referring to a disk
# Linux 2.6+, line referring to a partition
# perdisk=False means we want to calculate totals so we skip
# partitions (e.g. 'sda1', 'nvme0n1p1') and only include
# base disk devices (e.g. 'sda', 'nvme0n1'). Base disks
# include a total of all their partitions + some extra size
# of their own:
# https://github.com/giampaolo/psutil/pull/1313
# just for extra safety
# race condition
# We use exists() because the "/dev/*" part of the path is hard
# coded, so we want to be sure.
# ignore all lines starting with "nodev" except "nodev zfs"
# See: https://github.com/giampaolo/psutil/issues/1307
# CentOS has an intermediate /device directory:
# https://github.com/giampaolo/psutil/issues/971
# https://github.com/nicolargo/glances/issues/1060
# Only add the coretemp hwmon entries if they're not already in
# /sys/class/hwmon/
# https://github.com/giampaolo/psutil/issues/1708
# https://github.com/giampaolo/psutil/pull/1648
# A lot of things can go wrong here, so let's just skip the
# whole entry. Sure thing is Linux's /sys/class/hwmon really
# is a stinky broken mess.
# https://github.com/giampaolo/psutil/issues/1009
# https://github.com/giampaolo/psutil/issues/1101
# https://github.com/giampaolo/psutil/issues/1129
# https://github.com/giampaolo/psutil/issues/1245
# https://github.com/giampaolo/psutil/issues/1323
# Indication that no sensors were detected in /sys/class/hwmon/
# Get the first available battery. Usually this is "BAT0", except
# some rare exceptions:
# https://github.com/giampaolo/psutil/issues/1238
# Base metrics.
# Percent. If we have energy_full the percentage will be more
# accurate compared to reading /capacity file (float vs. int).
# Is AC power cable plugged in?
# Note: AC0 is not always available and sometimes (e.g. CentOS7)
# it's called "AC".
# Seconds left.
# Note to self: we may also calculate the charging ETA as per:
# https://github.com/thialfihar/dotfiles/blob/
# Linux apparently does not distinguish between PIDs and TIDs
# (thread IDs).
# listdir("/proc") won't show any TID (only PIDs) but
# os.stat("/proc/{tid}") will succeed if {tid} exists.
# os.kill() can also be passed a TID. This is quite confusing.
# In here we want to enforce this distinction and support PIDs
# only, see:
# https://github.com/giampaolo/psutil/issues/687
# Note: already checked that this is faster than using a
# regular expr. Also (a lot) faster than doing
# 'return pid in pids()'
# If tgid and pid are the same then we're
# dealing with a process PID.
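The Tgid-based distinction above can be sketched by reading /proc/{pid}/status; the function name is hypothetical and the sketch assumes a Linux-style procfs:

```python
def pid_exists_strict(pid, procfs_path="/proc"):
    # Accept `pid` only if it names a process, not a thread: a
    # thread's Tgid points at its owning process, while a process
    # has Tgid == PID.
    try:
        with open(f"{procfs_path}/{pid}/status") as f:
            for line in f:
                if line.startswith("Tgid:"):
                    return int(line.split()[1]) == pid
    except (FileNotFoundError, ProcessLookupError):
        # no such /proc entry: PID (or TID) is gone
        return False
    return False
```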
# /proc/PID directory may still exist, but the files within
# it may not, indicating the process is gone, see:
# https://github.com/giampaolo/psutil/issues/2418
# Note: most of the times Linux is able to return info about the
# process even if it's a zombie, and /proc/{pid} will exist.
# There are some exceptions though, like exe(), cmdline() and
# memory_maps(). In these cases /proc/{pid}/{file} exists but
# it's empty. Instead of returning a "null" value we'll raise an
# exception:
# * https://github.com/giampaolo/psutil/issues/503
# * ENOENT may occur also if the path actually exists if PID is
# * https://github.com/giampaolo/psutil/issues/2514
# Process name is between parentheses. It can contain spaces and
# other parentheses. This is taken into account by looking for
# the first occurrence of "(" and the last occurrence of ")".
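The first-"("/last-")" rule above can be sketched in a couple of lines; the helper name is illustrative:

```python
def parse_stat_name(stat_line):
    # The process name in /proc/[pid]/stat sits between parentheses
    # and may itself contain spaces and parentheses, so slice between
    # the first "(" and the *last* ")".
    rpar = stat_line.rfind(")")
    name = stat_line[stat_line.find("(") + 1:rpar]
    # the remaining space-separated fields follow the closing paren
    fields = stat_line[rpar + 2:].split(" ")
    return name, fields
```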
# aka 'delayacct_blkio_ticks'
# https://github.com/giampaolo/psutil/issues/2455
# XXX - gets changed later and probably needs refactoring
# may happen in case of zombie process
# 'man proc' states that args are separated by null bytes '\0'
# and last char is supposed to be a null byte. Nevertheless
# some processes may change their cmdline after being started
# (via setproctitle() or similar), they are usually not
# compliant with this rule and use spaces instead. Google
# Chrome process is an example. See:
# https://github.com/giampaolo/psutil/issues/1179
# Sometimes last char is a null byte '\0' but the args are
# separated by spaces, see: https://github.com/giampaolo/psutil/
# issues/1179#issuecomment-552984549
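The separator ambiguity above suggests a fallback strategy; a simplified sketch (name and exact mixed-separator handling are assumptions, not the actual implementation):

```python
def parse_cmdline(data):
    # Args are normally nul-separated with a trailing nul, but
    # processes that rewrote their cmdline (setproctitle() and
    # friends) often use spaces instead; pick the separator
    # accordingly.
    if data.endswith("\x00"):
        data = data[:-1]
    sep = "\x00" if "\x00" in data else " "
    return data.split(sep)
```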
# May not be available on old kernels.
# https://github.com/giampaolo/psutil/issues/1004
# read syscalls
# write syscalls
# read bytes
# write bytes
# read chars
# write chars
# The 'starttime' field in /proc/[pid]/stat is expressed in
# jiffies (clock ticks), a relative value which represents the
# number of clock ticks that passed between system boot and
# process creation. It never changes and is unaffected by
# system clock updates.
# Add the boot time, returning time expressed in seconds since
# the epoch. This is subject to system clock updates.
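The conversion above is a single division and addition; a minimal sketch assuming a POSIX system where `os.sysconf("SC_CLK_TCK")` is available:

```python
import os

def create_time(starttime_ticks, boot_time):
    # starttime counts clock ticks since boot; dividing by the tick
    # rate and adding the boot epoch yields seconds since the epoch.
    clock_ticks = os.sysconf("SC_CLK_TCK")
    return boot_time + (starttime_ticks / clock_ticks)
```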
# | FIELD  | DESCRIPTION                         | AKA  | TOP  |
# | rss    | resident set size                   |      | RES  |
# | vms    | total program size                  | size | VIRT |
# | shared | shared pages (from shared mappings) |      | SHR  |
# | text   | text ('code')                       | trs  | CODE |
# | lib    | library (unused in Linux 2.6)       | lrs  |      |
# | data   | data + stack                        | drs  | DATA |
# | dirty  | dirty pages (unused in Linux 2.6)   | dt   |      |
# /proc/pid/smaps_rollup was added to Linux in 2017. Faster
# than /proc/pid/smaps. It reports higher PSS than */smaps
# (from 1k up to 200k higher; tested against all processes).
# IMPORTANT: /proc/pid/smaps_rollup is weird, because it
# raises ESRCH / ENOENT for many PIDs, even if they're alive
# (also as root). In that case we'll use /proc/pid/smaps as
# fallback, which is slower but has a +50% success rate
# compared to /proc/pid/smaps_rollup.
# Private_Clean, Private_Dirty, Private_Hugetlb
# Gets Private_Clean, Private_Dirty, Private_Hugetlb.
# /proc/pid/smaps does not exist on kernels < 2.6.14 or if
# CONFIG_MMU kernel configuration option is not enabled.
# Note: using 3 regexes is faster than reading the file
# line by line.
# You might be tempted to calculate USS by subtracting
# the "shared" value from the "resident" value in
# /proc/<pid>/statm. But at least on Linux, statm's "shared"
# value actually counts pages backed by files, which has
# little to do with whether the pages are actually shared.
# /proc/self/smaps on the other hand appears to give us the
# correct information.
# Note: the smaps file can be empty for certain processes.
# The code below will not crash though and will result in 0.
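The Private_* summation described above can be sketched with a single multiline regex; the names here are illustrative and smaps values are assumed to be reported in kB:

```python
import re

PRIVATE_RE = re.compile(
    r"^Private_(?:Clean|Dirty|Hugetlb):\s+(\d+) kB", re.MULTILINE
)

def uss_from_smaps(smaps_text):
    # USS = sum of Private_Clean + Private_Dirty + Private_Hugetlb
    # over all mappings, in bytes; statm's "shared" column counts
    # file-backed pages and would mislead here. An empty smaps file
    # yields 0, matching the note above.
    return sum(int(kb) for kb in PRIVATE_RE.findall(smaps_text)) * 1024
```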
# faster
# new block section
# see issue #369
# Note: smaps file can be empty for certain processes or for
# zombies.
# Using a re is faster than iterating over file line by line.
# no such file or directory or no such process;
# it means thread disappeared on us
# ignore the first two values ("pid (exe)")
# with open_text(f"{self._procfs_path}/{self.pid}/stat") as f:
# Use C implementation
# starting from CentOS 6.
# See: https://github.com/giampaolo/psutil/issues/956
# only starting from kernel 2.6.13
# If pid is 0 prlimit() applies to the calling process and
# we don't want that. We should never get here though as
# PID 0 is not supported on Linux.
# get
# set
# I saw this happening on Travis:
# https://travis-ci.org/giampaolo/psutil/jobs/51368273
# ENOENT == file which is gone in the meantime
# If the path is not absolute there's no way to tell
# whether it's a regular file or not, so we skip it.
# A regular file is always supposed to have an
# absolute path though.
# Get file position and flags.
# fd gone in the meantime; process may
# still be alive
# Guard: we need at least two states to remain so that we can
# both backtrack and push a new state
# Python label.
# Non-ASCII white space.
# ASCII-only case insensitivity.
# All encoding names are valid labels too:
# UTF-8 with BOM
# UTF-16-BE with BOM
# UTF-16-LE with BOM
# Some names in Encoding are not valid Python aliases. Remap these.
# This turns out to be faster than unicode.translate()
# Only strip ASCII whitespace: U+0009, U+000A, U+000C, U+000D, and U+0020.
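The ASCII-only stripping and case folding above can be sketched as follows; the helper name is illustrative, and non-ASCII characters are deliberately left untouched:

```python
ASCII_WS = "\t\n\f\r "  # U+0009, U+000A, U+000C, U+000D, U+0020

def normalize_label(label):
    # Strip only ASCII whitespace and lowercase only ASCII letters;
    # non-ASCII whitespace and case folding must not apply.
    label = label.strip(ASCII_WS)
    return "".join(
        chr(ord(c) + 0x20) if "A" <= c <= "Z" else c for c in label
    )
```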
# Any python_name value that gets to here should be valid.
#: The UTF-8 encoding. Should be used for new content and formats.
# Fail early if `encoding` is an invalid label.
# Input exhausted without determining the encoding
#: The actual :class:`Encoding` that is being used,
#: or :obj:`None` if that is not determined yet.
#: (I.e. if there is not enough input yet to determine
#: if there is a BOM.)
# Not known yet.
# Not enough data yet.
# No BOM
# for c in range(256): print('    %r' % chr(c if c < 128 else c + 0xF700))
# XXX Do not edit!
# This file is automatically generated by mklabels.py
# Copyright (c) 2008-2011 Volvox Development Team
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# Author: Konstantin Lepa <konstantin.lepa@gmail.com>
# Actually black but kept for backwards compatibility
# First check overrides:
# "User-level configuration files and per-instance command-line arguments should
# override $NO_COLOR. A user should be able to export $NO_COLOR in their shell
# configuration file as a default, but configure a specific program in its
# configuration file to specifically enable color."
# https://no-color.org
# Then check env vars:
# Then check system:
# Define _CustomList and _CustomDict as a workaround for:
# https://github.com/python/mypy/issues/11427
# According to this issue, the typeshed contains a "lie"
# (it adds MutableSequence to the ancestry of list and MutableMapping to
# the ancestry of dict) which completely messes with the type inference for
# Table, InlineTable, Array and Container.
# Importing from builtins is preferred over simple assignment, see issues:
# https://github.com/python/mypy/issues/8715
# https://github.com/python/mypy/issues/10068
# Single Line Basic
# Multi Line Basic
# Single Line Literal
# Multi Line Literal
# https://toml.io/en/v1.0.0#string
# Whitespace before a value.
# Whitespace after a value, but before a comment.
# Comment, starting with # character, or empty string if no comment.
# Trailing newline.
# int methods
# float methods
# when comma is met and no value is provided, add a dummy Null
# Comments are the last item in a group.
# insert position of the self._value list
# The last item is pure whitespace ("\n "), insert before it
# Prefer to copy the indentation from the item after
# Copy the comma from the last item if 1) it contains a value and
# 2) the array is multiline
# Add comma to the last item to separate it from the following items.
# apply default indent if it isn't the first item or the array is multiline.
# Remove the indentation of the first item if not newline
# Removed group had both commas. Add one to the next group.
# Insert the comma after the newline
# Removed group had no commas. Remove the next comma found.
# Restore the removed group's newline onto the next group
# if the next group does not have a newline.
# i.e. the two were on the same line
# remove the comma of the last item
# If the table has children and all children are tables, then it is a super table.
# Line feed
# Input to parse
# Take all keyvals outside of tables/AoT's.
# Break out if a table is found
# Otherwise, take and append one KV
# We actually have a table
# This is just the first table in an AoT. Parse the rest of the array
# along with it.
# Found a newline; Return all whitespace found up to this point.
# Found a comment, parse it
# Found a table, delegate to the calling function.
# Beginning of a KV pair.
# Return to beginning of whitespace so it gets included
# as indentation for the KV about to be parsed.
# Skip #
# The comment itself
# Leading indent
# Key
# Value
# Skip any leading whitespace
# Extract the leading whitespace
# Empty key
# Bare key with spaces in it
# Number
# Integer, Float, Date, Time or DateTime
# datetime
# only keep parsing for bool if the characters match the style
# try consuming rest of chars in style
# Consume opening bracket, EOF here is an issue (middle of array)
# consume whitespace
# consume comment
# consume indent
# consume value
# consume comma
# If the previous item is Whitespace, add to it
# consume closing bracket
# consume closing bracket, EOF here doesn't matter
# consume opening bracket, EOF here is an issue (middle of array)
# consume leading whitespace
# None: empty inline table
# False: previous key-value pair was not followed by a comma
# Either the previous key-value pair was not followed by a comma
# or the table has an unexpected leading comma.
# True: previous key-value pair was followed by a comma
# consume trailing whitespace
# consume trailing comma
# consume closing bracket, EOF here is an issue (middle of inline table)
# Leading zeros are not allowed
# Underscores should be surrounded by digits
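The two rules above (no leading zeros, underscores only between digits) can be checked with one regex; this is a hypothetical validator for decimal integers only, not the actual parser:

```python
import re

# optional sign, then either a lone 0 or a nonzero digit followed by
# digits with optional single underscores between them
TOML_DEC_INT_RE = re.compile(r"[+-]?(0|[1-9](_?[0-9])*)$")

def is_valid_toml_int(s):
    return TOML_DEC_INT_RE.match(s) is not None
```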
# When the last non-whitespace character on a line is
# a \, it will be trimmed along with all whitespace
# (including newlines) up to the next non-whitespace
# character or closing delimiter.
# """\
# consume the whitespace, EOF here is an issue
# (middle of string)
# the escape followed by whitespace must have a newline
# before any other chars
# consume this char, EOF here is an issue (middle of string)
# this needs to be a unicode
# consume the U char and the unicode value
# only keep parsing for string if the current character matches the delim
# consume the opening/first delim, EOF here is an issue
# (middle of string or middle of delim)
# consume the closing/second delim, we do not care if EOF occurs as
# that would simply imply an empty single line string
# consume the third delim, EOF here is an issue (middle of string)
# convert delim to multi delim
# to extract the original string with whitespace and all
# A newline immediately following the opening delimiter will be trimmed.
# consume the newline, EOF here is an issue (middle of string)
# whether the previous key was ESCAPE
# try to process current as a closing delim
# Consume the delimiters to see if we are at the end of the string
# Not a triple quote, leave in result as-is.
# Adding back the characters we already consumed
# We are at the end of the string
# consume the closing delim, we do not care if EOF occurs as
# that would simply imply the end of self._src
# attempt to parse the current char as an escaped value, an exception
# is raised if this fails
# no longer escaped
# the next char is being escaped
# this is either a literal string where we keep everything as is,
# or this is not a special escaped char in a basic string
# Skip opening bracket
# Skip closing bracket
# TODO: Verify close bracket
# Missing super table
# i.e. a table initialized like this: [foo.bar]
# without initializing [foo]
# So we have to create the parent tables
# Picking up any sibling
# we always want to restore after exiting this scope
# AoT
# Dropping prefix
# Entering this context manager - save the state
# Exiting this context manager - restore the prior state
# Collection of TOMLChars
# initialize both idx and current
# reset marker
# failed to consume minimum number of characters
# Null elements are inserted after deletion
# New AoT element found later on
# Adding it to the current AoT
# Tried to define a table after an AoT with the same name.
# We need to merge both super tables
# Create a new element to replace the old one
# Tried to define an AoT after a table with the same name.
# If there is already at least one table in the current container
# and the given item is not a table, we need to find the last
# item that is not a table and insert after it
# If no such item exists, insert at the top of the table
# Insert after the max index if there are many.
# Increment indices after the current index
# The item we are getting is an out of order table
# so we need a proxy to retrieve the proper objects
# from the parent container
# Dotted key inside table
# Dictionary methods
# Inherit the sep of the old key
# new tables should appear after all non-table values
# Copying trivia
# Insert a cosmetic new line for tables if:
# - it does not have it yet OR is not followed by one
# - it is not the last item, or
# - The table being replaced has a newline
# Append all items to a temp container to see if there is any error
# Overwrite the first table and remove others
# Remove the entry from the map and set value again.
# if the value is a plain value
# find the first table that allows plain values
# data should be a `Container` (and therefore implement `as_string`)
# for all type safe invocations of this function
# Items
# check for consistent line endings
# apply linesep
# Date
# Separator
# Time
# Timezone
# fast-forward escape sequence
# Loading global configuration
# Loading local configuration
# Load local sources
# Only add PyPI if no primary repository is configured
# A project should not depend on itself.
# TODO: consider [project.dependencies] and [project.optional-dependencies]
# cheating in annotation: str will be converted to Priority in __post_init__
# Ensuring the file is only readable and writable
# by the current user
# keys should be string
# items are allowed to be a string
# list items should only contain strings
# This should be directly handled by ThreadPoolExecutor
# however, on some systems the number of CPUs cannot be determined
# (it raises a NotImplementedError), so, in this case, we assume
# that the system only has one CPU.
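The CPU-count fallback described above can be sketched as follows (the helper name is illustrative):

```python
import multiprocessing


def default_worker_count() -> int:
    """Fall back to a single worker when the CPU count cannot be determined."""
    try:
        return multiprocessing.cpu_count()
    except NotImplementedError:
        # Some systems cannot report their number of CPUs.
        return 1
```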
# Looking in the environment if the setting
# is set via a POETRY_* environment variable
# repositories setting is special for now
# merge installer build config settings from the environment
# this is a configuration table; it is likely that we missed env vars.
# In order to capture them, recurse (e.g. virtualenvs.options)
# The key doesn't exist in the config but might be resolved later,
# so we keep it as a format variable.
# Load global config
# Load global auth config
# Contrary to the base library we don't raise an error here since it can
# break pkgutil-style and pkg_resource-style namespace packages.
# Due to the parallel installation it can happen
# that two threads try to create the directory.
# Content validation is temporarily disabled because of
# pypa/installer's out of memory issues with big wheels. See
# https://github.com/python-poetry/poetry/issues/7983
# Additional metadata that is generated by the installation tool.
# Check if refresh
# Force update if there is no lock file present
# Checking extras
# Always re-solve directory dependencies, otherwise we can't determine
# if anything has changed (and the lock file contains an invalid version).
# If no packages have been whitelisted (The ones we want to update),
# we whitelist every package in the lock file.
# If we are only in lock mode, no need to go any further
# We resolve again by only using the lock file
# Everything is resolved at this point, so we no longer need
# to load deferred dependencies (i.e. VCS, URL and path dependencies)
# If no package synchronisation has been requested, we need
# to calculate the uninstall operations
# Validate the dependencies
# Execute operations
# Only write the lock file when the installation is successful
# Cache whether decorated output is supported.
# https://github.com/python-poetry/cleo/issues/423
# sdist build config settings
# pip has to be installed/updated first without parallelism
# because we still need it for uninstalls
# We group operations by priority
# Some operations are unsafe, we must execute them serially in a group
# https://github.com/python-poetry/poetry/issues/3086
# https://github.com/python-poetry/poetry/issues/2658
# We need to explicitly check source type here, see:
# https://github.com/python-poetry/poetry-core/pull/98
# Skipped operations are safe to execute in parallel
# Serially execute git operations that get cloned to the same directory,
# to prevent multiple parallel git operations in the same repo.
# For each git repository, execute all operations serially
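The grouping strategy described above, parallel across repositories but serial within each one, might look like this sketch (all names are illustrative, not Poetry's actual API):

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor


def execute_grouped(operations, group_key, execute, max_workers=4):
    """Run groups of operations in parallel, but each group's operations serially."""
    groups = defaultdict(list)
    for op in operations:
        groups[group_key(op)].append(op)

    def run_serially(ops):
        # Operations sharing a group (e.g. the same git clone directory)
        # must never run concurrently with each other.
        return [execute(op) for op in ops]

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_serially, ops) for ops in groups.values()]
        return [result for future in futures for result in future.result()]
```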
# If we have a result of -2 it means a KeyboardInterrupt
# in any Python subprocess, so we raise a KeyboardInterrupt
# error to be picked up by the error handler.
# TODO: Revisit once upstream fix is available https://github.com/python-poetry/cleo/issues/454
# we disable trace here explicitly to work around incorrect context detection by crashtest
# Uninstall first
# TODO: Make an uninstaller and find a way to rollback in case
# the new package can't be installed
# If we have a VCS package, remove its source directory
# Only cache git archives when we know precise reference hash,
# otherwise we might get stale archives
# Now we just need to install from the source directory
# always reset source_url in case of an error for correct output
# Mark directories with cached git packages, to distinguish from
# "normal" cache
# Store yanked warnings in a list and print them after installing, so they
# cannot be overlooked. Printing them in the relevant section would risk
# the warning being overwritten, leaving it only briefly visible.
# Get original package for the link provided
# Get a potentially higher-prioritized cached archive; otherwise fall back
# to the original archive.
# Since we previously downloaded an archive, we now should have
# something cached that we can use here. The only case in which
# archive is None is if the original archive is not valid for the
# current environment.
# Use the original archive to provide the correct hash.
# these are used only for providing insightful errors to the user
# exact package name must reject wheel, even if `only-binary` includes it
# `:all:` reject wheel only if `only-binary` does not include it
# exact package name must reject sdist, even if `no-binary` includes it
# `:all:` reject sdist only if `no-binary` does not include it
# Get the best link
# TODO: Binary preference
# this cannot presently be replaced with importlib.metadata.version because,
# when poetry-core builds itself, it is not available as an installed distribution.
# PEP 517 requires can be path if not PEP 508
# skip since we could not determine requirement
# Checking validity
# Load package
# If name or version were missing in package mode, we would have already
# raised an error, so we can safely assume they might only be missing
# in non-package mode and use some dummy values in this case.
# This is handled in validate().
# Table values for the license key in the [project] table,
# including the text and file table subkeys, are now deprecated.
# If the new license-files key is present, build tools MUST raise an
# error if the license key is defined and has a value other
# than a single top-level string.
# https://peps.python.org/pep-0639/#deprecate-license-key-table-subkeys
# Tools MUST NOT use the contents of the license.text [project] key
# (or equivalent tool-specific format), [...] to fill [...] the Core
# Metadata License-Expression field without informing the user and
# requiring unambiguous, affirmative user action to select and confirm
# the desired license expression value before proceeding.
# https://peps.python.org/pep-0639/#converting-legacy-metadata
# -> We just set the old license field in this case
# If the specified license file is present in the source tree,
# build tools SHOULD use it to fill the License-File field
# in the core metadata, and MUST include the specified file
# as if it were specified in a license-file field.
# If the file does not exist at the specified path,
# tools MUST raise an informative error as previously specified.
# explicitly not a tuple to allow default handling
# to find additional license files later
# important: distinction between empty array and None:
# - empty array: explicitly no license files
# - None (not set): default handling allowed
# Build tools MUST treat each value as a glob pattern,
# and MUST raise an error if the pattern contains invalid glob syntax.
# https://peps.python.org/pep-0639/#add-license-files-key
# Path delimiters MUST be the forward slash character (/).
# Parent directory indicators (..) MUST NOT be used.
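A minimal validator for the two path rules quoted above could look like this (a hypothetical helper, not the build tool's actual implementation):

```python
def check_license_file_pattern(pattern: str) -> None:
    """Enforce the PEP 639 path rules: '/' delimiters only, no '..' components."""
    if "\\" in pattern:
        raise ValueError("path delimiters must be the forward slash character (/)")
    if ".." in pattern.split("/"):
        raise ValueError("parent directory indicators (..) must not be used")
```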
# ignore extras in [tool.poetry] if dependencies or optional-dependencies
# are declared in [project]
# Checking for dependency
# create groups from the dependency-groups section considering
# additional information from the corresponding tool.poetry.group section
# create groups from the tool.poetry.group section
# with no corresponding entry in dependency-groups
# and add dependency information for existing groups
# `name` isn't normalized,
# but `.dependency_group()` handles that.
# VCS dependency
# Normally not valid, but required for enriching [project] dependencies
# Validate against schemas
# With PEP 621 [tool.poetry] is not mandatory anymore. We still create and
# validate it so that default values (e.g. for package-mode) are set.
# Check for required fields if package mode.
# In non-package mode, there are no required fields.
# Validate [project] section
# Validate relation between [project] and [tool.poetry]
# group, path to group, ancestors
# name, deprecated (if not dynamic), new name (or None if same as old)
# version can be dynamically set via `build --local-version` or plugins
# multiple readmes are not supported in [project.readme]
# classifiers are enriched dynamically per default
# scripts are special because entry-points are deprecated
# but files are not because there is no equivalent in [project]
# dependencies are special because we consider
# [project.dependencies] as abstract dependencies for building
# and [tool.poetry.dependencies] as the concrete dependencies for locking
# requires-python in [project] and python in [tool.poetry.dependencies] are
# special because we consider requires-python as abstract python version
# for building and python as concrete python version for locking
# Checking for scripts with extras
# Checking types of all readme files (must match)
# Not OSI Approved
# OSI Approved
# Add a Proprietary license for non-standard licenses
# Parser: PEP 508 Constraints
# Parser: PEP 508 Environment Markers
# "extra" is special because it can have multiple values at the same time.
# "extra == 'a'" will be true if "a" is one of the active extras.
# "extra != 'a'" will be true if "a" is not one of the active extras.
# Further, extra names are normalized for comparison.
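The "extra" semantics above can be illustrated with a small sketch; normalization follows the PEP 503 rule, and both function names are illustrative:

```python
import re


def normalize_name(name: str) -> str:
    """PEP 503-style normalization applied to extra names before comparison."""
    return re.sub(r"[-_.]+", "-", name).lower()


def evaluate_extra(op: str, value: str, active_extras: set) -> bool:
    """'extra == "a"' is true if "a" is active; 'extra != "a"' if it is not."""
    active = {normalize_name(extra) for extra in active_extras}
    if op == "==":
        return normalize_name(value) in active
    if op == "!=":
        return normalize_name(value) not in active
    raise ValueError(f"unsupported operator: {op}")
```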
# The type of constraint returned by the parser matches our constraint: either
# both are BaseConstraint or both are VersionConstraint. But it's hard for mypy
# to know that.
# Something like `"tegra" in platform_release`
# or `"arm" not in platform_version`.
# fix precision of python_full_version marker
# if we have a in/not in operator we split the constraint
# into a union/multi-constraint of single constraint
# This one is more tricky to handle
# since it's technically a multi marker
# so the inverse will be a union of inverse
# The constraint must be a version range, otherwise
# it's an internal error
# We should never go there
# In __init__ we've made sure that we have a UnionConstraint that
# contains only elements of type Constraint (instead of BaseConstraint)
# but mypy can't see that.
# If we have a MarkerUnion then we can look for the simplifications
# implemented in intersect_simplify().
# If we have a SingleMarker then with any luck after intersection
# it'll become another SingleMarker.
# flatten again because intersect_simplify may return a multi
# Convert 'python_version >= "3.8" and sys_platform == "linux" or python_version > "3.6"'
# to 'python_version > "3.6"'
# Do not use sets to create MultiMarkers for deterministic order!
# Convert 'python_version >= "3.8" and python_version < "3.10"
# or python_version >= "3.10" and python_version < "3.12"'
# to 'python_version >= "3.8" and python_version < "3.12"'
# The marker is not relevant since it must be excluded
# If we have a MultiMarker then we can look for the simplifications
# implemented in union_simplify().
# If we have a SingleMarker then with any luck after union it'll
# become another SingleMarker.
# Especially, for `python_version` markers a multi marker is also
# an improvement. E.g. the union of 'python_version == "3.6"' and
# 'python_version == "3.7" or python_version == "3.8"' is
# 'python_version >= "3.6" and python_version < "3.9"'.
# flatten again because union_simplify may return a union
# Convert '(python_version >= "3.6" or sys_platform == "linux") and python_version > "3.8"'
# to 'python_version > "3.8"'
# Do not use sets to create MarkerUnions for deterministic order!
# Convert '(python_version == "3.6" or python_version >= "3.8")
# and (python_version >= "3.6" and python_version < "3.8"
# or python_version == "3.9")'
# to 'python_version == "3.6" or python_version == "3.9"'
# All markers were the excluded marker.
# groups is a disjunction of conjunctions
# eg [[A, B], [C, D]] represents "(A and B) or (C and D)"
# Combine the groups.
# This function calls itself recursively. In the inner calls we don't perform any
# simplification, instead doing it all only when we have the complete marker.
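Evaluating such a disjunction of conjunctions reduces to any/all; a sketch over plain booleans rather than real marker objects:

```python
def evaluate_dnf(groups, env) -> bool:
    """groups like [[A, B], [C, D]] represent "(A and B) or (C and D)".

    An empty conjunction is vacuously true, mirroring the rule that an
    empty group leaves the corresponding variable arbitrary.
    """
    return any(all(env[name] for name in conjunction) for conjunction in groups)
```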
# Sometimes normalization makes it more complicated instead of simpler
# -> choose the candidate with the least complexity
# Markers with the same name have the same constraint type,
# Convert 'python_version >= "3.8" and python_version < "3.9"'
# to 'python_version == "3.8"'.
# Detect 'python_version > "3.8" and python_version < "3.9"' as empty.
# Convert 'python_version <= "3.8" or python_version >= "3.9"' to "any".
# Convert 'python_version == "3.8" or python_version >= "3.9"'
# to 'python_version >= "3.8"'.
# Convert 'python_version == "3.8" or python_version == "3.9"'
# to 'python_version >= "3.8" and python_version < "3.10"'.
# Although both markers have the same complexity, the latter behaves
# better if it is merged with 'python_version == "3.10"' in a next step
# for example.
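The merge of consecutive equality markers into a single range can be sketched over (major, minor) tuples (the helper is illustrative):

```python
def union_of_equals(versions):
    """Merge consecutive minors: [(3, 8), (3, 9)] -> ((3, 8), (3, 10)).

    Returns an inclusive lower bound and an exclusive upper bound.
    """
    versions = sorted(versions)
    for a, b in zip(versions, versions[1:]):
        if not (a[0] == b[0] and a[1] + 1 == b[1]):
            raise ValueError("versions are not consecutive")
    last = versions[-1]
    return versions[0], (last[0], last[1] + 1)
```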
# prefer original marker to avoid unnecessary changes
# We have to fix markers like 'python_full_version == "3.6"'
# to receive 'python_full_version == "3.6.0"'.
# It seems a bit hacky to convert to string and back to marker,
# but it's probably much simpler than to consider the different constraint
# classes (mostly VersionRangeConstraint, but VersionUnion for "!=") and
# since this conversion is only required for python_full_version markers
# it may be sufficient to handle it here.
# drop trailing ".0"
# Transform 3.6 or 3
# 3.6
# Checking lower bound
# Release phase IDs according to PEP440
# some projects use non-semver versioning schemes, eg: 1.2.3.4
# we use the phase "z" to ensure we always sort this after other phases
# we use the phase "" to ensure we always sort this before other phases
# we do this here to handle both None and tomlkit string values
# We convert strings that are integers so that they can be compared
# if epoch is non-zero we should include it
# setup defaults with current values, excluding compare keys and text
# keys to replace
# Even if there is no [tool.poetry] section, a project can still be a
# valid Poetry project if there is a name and a version in [project]
# and there are no dynamic fields.
# TODO: Convert to dataclass once python 2.7, 3.5 is dropped
# avoid circular dependency when loading DirectoryDependency
# https://url.spec.whatwg.org/#forbidden-host-code-point
# Finding git via where.exe
# compatibility for python <3.11
# mypy reports an error if ignore_cleanup_errors is
# specified literally in the call
# extras or conditional dependencies
# make sure this is a Path object, not str
# both os.unlink and shutil.rmtree can throw exceptions on Windows
# if the files are in use when called
# Only hits this on success
# Increase the timeout and try again
# Final attempt, pass any Exceptions up to caller.
# For now, we require all dependencies to build either a wheel or an sdist.
# noqa: RUF012
# Version 2.1
# Version 2.4
# Version 1.2
# Requires python
# Build tools MUST raise an error if any individual user-specified
# pattern does not match at least one file.
# default handling
# Symlinks & ?
# If we have a build script, use it
# Undocumented setup() feature:
# the empty string matches all package names
# Relative to the top-level package
# This is just a shortcut. It will be ignored later anyway.
# Sort values in pkg_data
# add any additional files
# add legal files
# add script files
# Include project files
# add readme files if specified
# Since we have a build script but no setup.py generation is required,
# we assume that the build script will build and copy the files
# That way they will be picked up when adding files to the wheel.
# We need to place ourselves in the temporary
# directory in order to build the package
# For an editable install, the extension modules will be built
# in-place - so there's no need to copy them into the zip
# The result of building the extensions
# does not exist; this may be due to conditional
# builds, so we assume that it's okay
# Roughly equivalent to the naming convention used by distutils, see:
# distutils.command.build.build.finalize_options
# poetry-core is not run in the build environment
# -> this is probably not a PEP 517 build but a poetry build
# Either the purelib or platlib path will have been used when building
# Walk the files and compress them,
# sorting everything so the order is stable.
# Write a record of the files in the wheel
# RECORD itself is recorded with no hash or size
# We always want to have /-separated paths in the zip file and in RECORD
# Normalize permission bits to either 755 (executable) or 644
# Unix attributes
# MS-DOS directory flag
# The default is a fixed timestamp rather than the current time, so
# that building a wheel twice on the same computer can automatically
# give you the exact same result.
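A reproducible archive along these lines can be sketched with the standard zipfile module; the fixed date and helper names here are arbitrary illustrations, not the actual constants used:

```python
import io
import zipfile

# Any fixed, post-1980 timestamp makes repeated builds byte-identical.
FIXED_DATE = (2016, 1, 1, 0, 0, 0)


def add_file(zf, arcname, data, executable=False):
    """Write one entry with a fixed timestamp and normalized permissions."""
    info = zipfile.ZipInfo(arcname, date_time=FIXED_DATE)
    mode = 0o755 if executable else 0o644
    info.external_attr = mode << 16  # Unix attributes live in the high 16 bits
    zf.writestr(info, data)


def build_archive() -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        add_file(zf, "pkg/__init__.py", b"print('hi')\n")
    return buf.getvalue()
```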
# Checking VCS
# add build script if it is specified and explicitly required
# Indentation is not only for readability, but required
# so that the line break is not treated as end of field.
# The exact indentation does not matter,
# but it is essential to also indent empty lines.
# scripts can be generated by build_script; in that case they do not exist here
# It must exist either as a .py file or a directory, but not both
# Searching for a src module
# returns `True` if this is a PEP 561 stub-only package,
# see [PEP 561](https://www.python.org/dev/peps/pep-0561/#stub-only-packages)
# Packages no longer need an __init__.py in python3, but there must
# at least be one .py file for it to be considered a package
# Probably glob
# If it's a directory, we include everything inside it
# Set 644 permissions, leaving higher bits of st_mode unchanged
# Executable: 644 -> 755
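That normalization can be sketched as follows (a hypothetical helper; the real code may combine the bit operations differently):

```python
def normalize_mode(st_mode: int) -> int:
    """Force permissions to 644, or 755 when any execute bit is set,
    leaving the higher (file-type) bits of st_mode unchanged."""
    higher_bits = st_mode & ~0o777
    permissions = 0o755 if st_mode & 0o111 else 0o644
    return higher_bits | permissions
```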
# That is a bit inaccurate because
# 1) The exclusive ordered comparison >V MUST NOT allow a post-release
# of the given version unless V itself is a post-release, and
# 2) The exclusive ordered comparison >V MUST NOT match
# a local version of the specified version
# https://peps.python.org/pep-0440/#exclusive-ordered-comparison
# However, there is no specific min greater than the greatest post release
# or greatest local version identifier. These cases have to be handled by
# the callers of allowed_min.
# this is an equality range
# The exclusive ordered comparison <V MUST NOT allow a pre-release
# of the specified version unless the specified version is itself a pre-release.
# note that we also allow technically incorrect version patterns with an asterisk (e.g. 3.5.*),
# as this is supported by pip and appears in metadata within python packages
# pattern for non Python versions such as OS versions in `platform_release`
# allow trailing commas for robustness (even though it may not be
# standard-compliant it seems to occur in some packages)
# Tilde range
# PEP 440 Tilde range (~=)
# Caret range
# X Range
# Basic comparator
# These below should be reserved for comparing non python packages such as OS
# versions using `platform_release`
# Rationale:
# 1. If no version can satisfy the constraint,
# 2. The opposite of an empty constraint, which is *, has no upper bound
# allow weak equality to allow `3.0.0+local.1` for `3.0.0`
# Only allow Versions and VersionRanges here so we can more easily reason
# about everything in flattened. _EmptyVersions and VersionUnions are
# filtered out above.
# Merge this constraint with the previous one, but only if they touch.
# when excluded version is local, special handling is required
# to ensure that a constraint (!=2.0+deadbeef) will allow the
# provided version (2.0)
# The exclusive ordered comparison >V MUST NOT allow a post-release
# of the given version unless V itself is a post release.
# e.g. "2.0.post1" does not match ">2"
# The exclusive ordered comparison >V MUST NOT match
# a local version of the specified version.
# e.g. "2.0+local.version" does not match ">2"
# allow weak equality to allow `3.0.0+local.1` for `<=3.0.0`
# Although `>=1.2.3+local` does not allow the exact version `1.2.3`, both of
# those versions do allow `1.2.3+local`.
# A range and a Version just yields the version if it's in the range.
# `>=1.2.3+local` intersects `1.2.3` to return `>=1.2.3+local,<1.2.4`.
# If the range is just a single version.
# Because we already verified that the lower range isn't strictly
# lower, there must be some overlap.
# If we got here, there is an actual range.
# If the two ranges don't overlap, we won't be able to create a single
# VersionRange for both of them.
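The overlap check can be sketched with closed integer intervals, a simplification of real version ranges that ignores bound inclusivity:

```python
def try_merge(a, b):
    """Merge two closed intervals (lo, hi) into one if they overlap or touch."""
    if a[0] > b[0]:
        a, b = b, a
    if b[0] > a[1]:
        # The intervals are disjoint; no single range covers both.
        return None
    return (a[0], max(a[1], b[1]))
```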
# Skip any ranges that are strictly lower than [current].
# If we reach a range strictly higher than [current], no more ranges
# will be relevant so we can bail early.
# If [range] split [current] in half, we only need to continue
# checking future ranges against the latter half.
# - "1.*" equals ">=1.0.dev0, <2" (equivalent to ">=1.0.dev0, <2.0.dev0")
# - "1.0.*" equals ">=1.0.dev0, <1.1"
# - "1.2.*" equals ">=1.2.dev0, <1.3"
# remove trailing zeros from second
# fill up first with zeros
# all exceeding parts of first must be zero
# remove trailing zeros from max
# to preserve order (functionally not necessary)
# Do the check after calling the super constructor,
# i.e. after the operator has been normalized.
# string comparator
# same value but different operator, e.g. '== "linux"' and '!= "linux"'
# Since the extra marker can have multiple values at the same time,
# "==extra1, ==extra2" is not empty!
# same value but different operator
# (A or B) and C => (A and C) or (B and C)
# just a special case of UnionConstraint
# (A or B) and (A or B or C) => A or B
# (A or B) and (C or D) => (A and C) or (A and D) or (B and C) or (B and D)
# (A or B) and (C and D) => (A and C and D) or (B and C and D)
# (A or B) and (A and D) => A and D
# (A or B) or C => A or B or C
# (A or B) or (C or D) => A or B or C or D
# (A or B) or (A and D) => A or B
# (A or B) or (not A and D) => A or B or D
# (A or B) or (C and D) => nothing to do
# Attributes must be immutable for clone() to be safe!
# (For performance reasons, clone only creates a copy instead of a deep copy).
# cache validation result to avoid unnecessary file system access
# "_develop" is only required for enriching [project] dependencies
# If we have extras, the dependency is optional
# Recalculate python versions.
# extras activated in a dependency is the same as features
# This branch is a short-circuit logic for special cases and
# avoids having to split and parse constraint again. This has
# no functional difference with the logic in the else branch.
# we re-check for any marker here since the without extra marker might
# return an any marker again
# Python marker
# Removing comments
# noqa: PTH100
# handle RFC 8089 references
# this is a local path not using the file URI scheme
# "constraint" is implicitly given for direct origin dependencies and might not
# be set yet ("*"). Thus, it shouldn't be used to determine if two direct origin
# dependencies are equal.
# Calling is_direct_origin() for one dependency is sufficient because
# super().__eq__() returns False for different origins.
# don't include _constraint in hash because it is mutable!
# adding version since this information is especially useful in debug output
# a base path was specified, so we should respect that
# we check if it is a file (if it exists) or rely on suffix to guess
# override version to make it settable
# The parent Package class's __hash__ incorporates the version because
# a Package's version is immutable. But a ProjectPackage's version is
# mutable. So call Package's parent hash function.
# meaning of different values:
# - tuple: project.license-files -> NO default handling
# - None: nothing specified -> default handling
# - Path: deprecated project.license.file -> file + default handling
# if source reference is a sha1 hash -- truncate
# Automatically set python classifiers
# we sort python versions using an int tuple of (major, minor)
# to ensure we sort 3.10 after 3.9
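The int-tuple key avoids the lexicographic trap where "3.10" sorts before "3.9":

```python
def python_version_key(version: str):
    major, minor = version.split(".")
    return int(major), int(minor)


versions = ["3.9", "3.10", "3.8", "3.11"]

# Plain string sorting is lexicographic and misplaces "3.10":
assert sorted(versions)[0] == "3.10"
# The numeric key yields the intended order:
assert sorted(versions, key=python_version_key) == ["3.8", "3.9", "3.10", "3.11"]
```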
# Automatically set license classifiers
# License classifiers have been deprecated in PEP 639.
# We only use them for licenses from the deprecated [project.license] table
# (via self.license) and not if self.license_expression is set.
# Sort classifiers and insert python classifiers at the right location. We do
# it like this so that 3.10 is sorted after 3.9.
# Dynamically add the dependency group
# noqa: SIM103
# The dependency doesn't care about the source, so this package
# certainly satisfies it.
# The dependency specifies a source_name but not a type: it wants either
# pypi or a legacy repository.
# - If this package has no source type then it's from pypi, so it matches.
# - Else this package is a match if and only if it is from the desired repository.
# The dependency specifies a source: this package matches if and only if it is
# from that source.
# legacy mode
# cache this function to avoid multiple IO reads and parsing
# both packages are of source type None
# no need to check further
# We check the resolved reference first:
# if they match we assume equality regardless
# of their source reference.
# This is important when comparing a resolved branch VCS
# dependency to a direct commit reference VCS dependency
# special handling for packages with references
# case: one reference is defined and is non-empty, but other is not
# case: both references defined, but one is not equal to or a short
# representation of the other
# complete_name includes features
# Don't include _source_reference and _source_resolved_reference in hash
# because two specs can be equal even if these attributes are not equal.
# (They must still meet certain conditions. See is_same_source_as().)
# Even though we've `from __future__ import annotations`, mypy doesn't seem to like
# this as `dict[str, ...]`
# python_full_version is equivalent to python_version
# for Poetry so we merge them
# remove duplicates
# `python_version` is a special case: to keep the constructed marker equivalent
# to the constraint we need to be careful with the precision.
# PEP 440 tells us that when we come to make the comparison the release
# segment will be zero padded: eg "<= 3.10" is equivalent to "<= 3.10.0".
# But "python_version <= 3.10" is _not_ equivalent to "python_version <= 3.10.0"
# - see normalize_python_version_markers.
# A similar issue arises for a constraint like "> 3.6".
# groups are in disjunctive normal form (DNF),
# an empty group means that python_version does not appear in this group,
# which means that python_version is arbitrary for this group
# NOSONAR
# Expand python version
# Make adjustments on encountering versions with less than full
# precision.
# Per PEP-508:
# python_version <-> '.'.join(platform.python_version_tuple()[:2])
# So for two digits of precision we make the following adjustments:
# - `python_version > "x.y"` requires version >= x.(y+1).anything
# - `python_version <= "x.y"` requires version < x.(y+1).anything
# Treatment when we see a single digit of precision is less clear: is
# that even a legitimate marker?
# Experiment suggests that pip behaviour is essentially to make a
# lexicographical comparison, for example `python_version > "3"` is
# satisfied by version 3.anything, whereas `python_version <= "3"` is
# satisfied only by version 2.anything.
# We achieve the above by fiddling with the operator and version in the
# marker.
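Those adjustments can be sketched as follows (an illustrative helper; only the two-digit '>' and '<=' cases change):

```python
def adjust_python_version(op: str, version: str):
    """python_version > "x.y" -> >= x.(y+1); python_version <= "x.y" -> < x.(y+1).

    Other operators and precisions are passed through unchanged.
    """
    parts = version.split(".")
    if len(parts) == 2 and op in (">", "<="):
        bumped = f"{parts[0]}.{int(parts[1]) + 1}"
        return (">=", bumped) if op == ">" else ("<", bumped)
    return op, version
```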
# Cache commands
# Debug commands
# Env commands
# Python commands,
# Self commands
# Source commands
# these are special messages to override the default message when a command is not found,
# in cases where a previously existing command has been moved to a plugin or
# removed outright for various reasons
# Set our own CLI styles
# Dark variants
# we do this here and not inside the _configure_io implementation in order
# to ensure users are not exposed to a stack trace when providing invalid values
# to the --directory or --project options; configuring the options here allows cleo
# to trap and display the error cleanly unless the user runs in verbose or debug mode
# we use ensure_path for the directories to make sure these are valid paths
# this will raise an exception if the path is invalid
# is truthy
# this is required to ensure stdin is transferred
# this is required as cleo internally checks for `io.input._interactive`
# when configuring io, and cleo's test applications overrides this attribute
# explicitly causing test setups to fail
# this means the user has done the right thing and used "poetry run -- echo hello"
# in this case there is not much we need to do, we can skip the rest
# find the correct command index; in some cases this might not be the first occurrence
# eg: poetry -C run run echo
# try parsing the tokens so far
# parsing failed, try finding the next "run" token
# looks like we reached the end of the road, let cleo deal with it
# fetch tokens after the "run" command
# we create a new input for parsing the subcommand, pretending
# it is a poetry command
# we want to bind the definition here so that cleo knows what should be
# parsed, and how
# the first argument here is the subcommand
# recreate the original input reordering in the following order
# reset the input to our constructed form
# only log third-party packages when very verbose
# The builders loggers are special and we can actually
# start at the INFO level.
# If the command already has an installer
# we skip this step
# If we reach this point, the script is not installed
# Allow "Private ::" classifiers as recommended on PyPI and the packaging guide
# to allow users to avoid accidentally publishing private packages to PyPI.
# https://pypi.org/classifiers/
# scan dependencies and group dependencies settings in pyproject.toml
# Load poetry config and display errors, if any
# Validate trove classifiers
# Validate readme (files must exist)
# TODO: consider [project.readme] as well
# Verify that lock file is consistent
# tomlkit types are awkward to work with, treat content as a mostly untyped dictionary
# Run-Time Deps incl. extras
# Dependency Groups
# Validate version constraint
# create a second constraint for tool.poetry.dependencies with keys
# that cannot be stored in the project section
# add marker related keys to avoid ambiguity
# _in_extras must be set after converting the dependency to PEP 508
# and adding it to the project section to avoid a redundant extra marker
# Refresh the locker
# Cosmetic new line
# show the value if no value is provided
# handle repositories
# handle auth
# Only username, so we prompt for password
# handle certs
# handle build config settings
# Show tree view if requested
# The default case if there's no reverse dependencies is to query
# the subtree for pkg but if any rev-deps exist we'll query for each
# of them in turn
# if no rev-deps exist we'll make this clear as it can otherwise
# look very odd for packages that also have no or few direct
# dependencies
# Computing widths
# Not installed (in non-decorated mode)
# subtract 6 for ' from '
# find the latest version allowed in this pool
# It needs an immediate semver-compliant upgrade
# it needs an upgrade but has potential BC breaks so is not urgent
# Set in poetry.console.application.Application.configure_installer
# Set in poetry.console.application.Application.configure_env
# Default to an empty value to signal no package was selected
# package selected by user, set constraint name to package name
# no constraint yet, determine the best version automatically
# determine the best version automatically
# check that the specified version/constraint exists
# before we proceed
# TODO: find similar
# remove from all groups
# We need to account for the old `dev-dependencies` section
# Prior to https://github.com/python-poetry/poetry-core/pull/629
# the existence of a module/package was checked when creating the
# EditableBuilder. Afterwards, the existence is checked after
# executing the build script (if there is one),
# i.e. during EditableBuilder.build().
# This is likely due to the fact that the project is an application
# not following the structure expected by Poetry.
# No need for an editable install in this case.
# we do not use resolve here due to compatibility issues
# for path.resolve(strict=False)
# Directory is not empty. Aborting.
# TODO: this should move into poetry-core
# Normalize after validating so that original names are printed
# in case of an error.
# Force update
# Building package first, if told
# ensure the new source is valid (e.g. no invalid name, etc.)
# TODO: refactor env.py to allow removal with one loop
# Since we remove all the virtualenvs, we can also remove the entry
# in the envs file. (Strictly speaking, we should do this explicitly,
# in case it points to a virtualenv that had been removed manually before.)
# We separate this out to avoid unwanted side effect during testing while
# maintaining dynamic use in help text.
# This is not ideal, but is the simplest solution for now.
# We override the base class's handle() method to ensure that poetry and env
# are reset to work within the system project instead of current context.
# Further, during execution, the working directory is temporarily changed
# to parent directory of Poetry system pyproject.toml file.
# This method **should not** be overridden in child classes as it may have
# unexpected consequences.
# Using current pool for determine_requirements()
# Silencing output
# Calculate number of entries
# prefix all lines from third-party packages for easier debugging
# Find the most specific prefix in sys.path.
# We have to search the entire sys.path because a subsequent path might be
# a sub path of the first match and thereby a better match.
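The longest-prefix search described above can be sketched as follows (function name and inputs are hypothetical, not Poetry's actual code):

```python
def most_specific_prefix(path, prefixes):
    # Scan every prefix: a later entry may be a sub-path of an earlier
    # match, so we cannot stop at the first hit.
    best = None
    for prefix in prefixes:
        if path.startswith(prefix.rstrip("/") + "/"):
            if best is None or len(prefix) > len(best):
                best = prefix
    return best

print(most_specific_prefix(
    "/usr/lib/site-packages/foo/__init__.py",
    ["/usr/lib", "/usr/lib/site-packages"],
))  # the longer, more specific prefix wins
```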
# this is unexpected, but let's play it safe
# main package name
# no pyproject.toml -> no project plugins
# parse project plugins
# Just remove the cache for two reasons:
# 1. The path of the cache has already been added to sys.path.
# 2. Updating packages in the cache does not work out of the box.
# In sum, we keep it simple by always starting from an empty cache
# if something has changed.
# determine plugins relevant for Poetry's environment
# check if required plugins are already available
# Do not add the package to installed_packages so that
# the solver does not consider it.
# all required plugins are installed and satisfy the requirements
# install missing plugins
# consider all packages in Poetry's environment pinned
# add missing plugin dependencies
# force new package to be installed in the project cache instead of Poetry's env
# Build installed repository from locked packages so that plugins
# that may be overwritten are not included.
# only write .gitignore if path did not exist before
# identify release
# file content
# additional meta-data
# Metadata 1.2
# Metadata 2.1
# TODO: Provides extra
# based on https://github.com/pypa/twine/blob/a6dd69c79f7b5abfb79022092a5d3776a499e31b/twine/commands/upload.py#L32
# pypiserver (https://pypi.org/project/pypiserver)
# PyPI / TestPyPI / GCP Artifact Registry
# Nexus Repository OSS (https://www.sonatype.com/nexus-repository-oss)
# Artifactory (https://jfrog.com/artifactory/)
# Gitlab Enterprise Edition (https://about.gitlab.com)
# Retrieving config information
# Check if we have a token first
# FIPS mode disables MD5
# FIPS mode disables blake2
# foo ^1.5.0 is a subset of foo ^1.0.0
# foo ^2.0.0 is disjoint with foo ^1.0.0
# not foo ^1.0.0 is disjoint with foo ^1.5.0
# not foo ^1.5.0 overlaps foo ^1.0.0
# not foo ^2.0.0 is a superset of foo ^1.5.0
# foo ^2.0.0 is a subset of not foo ^1.0.0
# foo ^1.5.0 is disjoint with not foo ^1.0.0
# if transitive markers are not equal we have to handle it
# as overlapping so that markers are merged later
# foo ^1.0.0 overlaps not foo ^1.5.0
# not foo ^1.0.0 is a subset of not foo ^1.5.0
# not foo ^2.0.0 overlaps not foo ^1.0.0
# not foo ^1.5.0 is a superset of not foo ^1.0.0
# foo ^1.0.0 ∩ not foo ^1.5.0 → foo >=1.0.0 <1.5.0
# foo ^1.0.0 ∩ foo >=1.5.0 <3.0.0 → foo ^1.5.0
# not foo ^1.0.0 ∩ not foo >=1.5.0 <3.0.0 → not foo >=1.0.0 <3.0.0
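These relations can be sketched with half-open version intervals (a toy model using version tuples, not poetry-core's real constraint classes):

```python
def caret(v):
    # ^X.Y.Z allows [X.Y.Z, (X+1).0.0) as a half-open interval
    return (v, (v[0] + 1, 0, 0))

def intersect(a, b):
    # intersection of two half-open intervals, or None if disjoint
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

# foo ^1.0.0 ∩ foo >=1.5.0 <3.0.0  →  foo ^1.5.0
assert intersect(caret((1, 0, 0)), ((1, 5, 0), (3, 0, 0))) == caret((1, 5, 0))
# foo ^2.0.0 is disjoint with foo ^1.0.0
assert intersect(caret((2, 0, 0)), caret((1, 0, 0))) is None
```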
# we do this here to indicate direct origin dependencies are
# compatible with NVR dependencies
# when creating a new term prefer direct-reference dependencies
# The assignments that have been made so far, in the order they were
# assigned.
# The decisions made for each package.
# The intersection of all positive Assignments for each package, minus any
# negative Assignments that refer to that package.
# This is derived from self._assignments.
# The union of all negative Assignments for each package.
# If a package has any positive Assignments, it doesn't appear in this
# map.
# The number of distinct solutions that have been attempted so far.
# Whether the solver is currently backtracking.
# When we make a new decision after backtracking, count an additional
# attempted solution. If we backtrack multiple times in a row, though, we
# only want to count one, since we haven't actually started attempting a
# new solution.
# Re-compute _positive and _negative for the packages that were removed.
# As soon as we have enough assignments to satisfy term, return them.
# Add suggested solution
# self._cache maps a package name to a stack of cached package lists,
# ordered by the decision level which added them to the cache. This is
# done so that when backtracking we can maintain cache entries from
# previous decision levels, while clearing cache entries from only the
# rolled back levels.
# In order to maintain the integrity of the cache, `clear_level()`
# needs to be called in descending order as decision levels are
# backtracked so that the correct items can be popped from the stack.
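The per-decision-level cache stack can be sketched like this (a minimal hypothetical model, not the actual provider cache class):

```python
class LevelCache:
    # Each package name maps to a stack of (decision_level, packages)
    # entries, so backtracking can pop only the rolled-back levels while
    # keeping cache entries from earlier levels intact.
    def __init__(self):
        self._cache = {}

    def put(self, name, level, packages):
        self._cache.setdefault(name, []).append((level, packages))

    def get(self, name):
        stack = self._cache.get(name)
        return stack[-1][1] if stack else None

    def clear_level(self, level):
        # must be called in descending order while backtracking
        for stack in self._cache.values():
            while stack and stack[-1][0] >= level:
                stack.pop()

cache = LevelCache()
cache.put("foo", 1, ["foo-1.0"])
cache.put("foo", 2, ["foo-1.0", "foo-1.1"])
cache.clear_level(2)
print(cache.get("foo"))  # ['foo-1.0']
```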
# provider.search_for() normally does not include pre-release packages
# (unless requested), but will include them if there are no other
# eligible package versions for a version constraint.
# Therefore, if the eligible versions have been filtered down to
# nothing, we need to call provider.search_for() again as it may return
# additional results this time.
# We could always use dependency.without_features() here,
# but for performance reasons we only do it if necessary.
# Use the cached dependency so that a possible explicit source is set.
# Iterate in reverse because conflict resolution tends to produce more
# general incompatibilities as time goes on. If we look at those first,
# we can derive stronger assignments sooner and more eagerly find
# conflicts.
# If the incompatibility is satisfied by the solution, we use
# _resolve_conflict() to determine the root cause of the conflict as
# a new incompatibility.
# It also backjumps to a point in the solution
# where that incompatibility will allow us to derive new assignments
# that avoid the conflict.
# Back jumping erases all the assignments we did at the previous
# decision level, so we clear [changed] and refill it with the
# newly-propagated assignment.
# The first entry in incompatibility.terms that's not yet satisfied by
# _solution, if one exists. If we find more than one, _solution is
# inconclusive for incompatibility and we can't deduce anything.
# If term is already contradicted by _solution, then
# incompatibility is contradicted as well and there's nothing new we
# can deduce from it.
# If more than one term is inconclusive, we can't deduce anything about
# incompatibility.
# If exactly one term in incompatibility is inconclusive, then it's
# almost satisfied and [term] is the unsatisfied term. We can add the
# inverse of the term to _solution.
# If *all* terms in incompatibility are satisfied by _solution, then
# incompatibility is satisfied and we have a conflict.
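The propagation rule above can be sketched with boolean terms (a drastically simplified model; real terms carry version constraints, not booleans):

```python
def propagate_incompatibility(terms, solution):
    # terms: list of (package, positive) pairs; solution: package -> bool.
    # Returns "conflict", a newly derived (package, value) pair, or None.
    unsatisfied = None
    for package, positive in terms:
        assigned = solution.get(package)
        if assigned is None:
            if unsatisfied is not None:
                return None  # more than one inconclusive term: deduce nothing
            unsatisfied = (package, positive)
        elif assigned != positive:
            return None  # a term is contradicted, so the incompatibility is too
    if unsatisfied is None:
        return "conflict"  # every term is satisfied
    # almost satisfied: add the inverse of the unsatisfied term
    package, positive = unsatisfied
    solution[package] = not positive
    return (package, not positive)
```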
# The term in incompatibility.terms that was most recently satisfied by
# _solution.
# The earliest assignment in _solution such that incompatibility is
# satisfied by _solution up to and including this assignment.
# The difference between most_recent_satisfier and most_recent_term;
# that is, the versions that are allowed by most_recent_satisfier and not
# by most_recent_term. This is None if most_recent_satisfier totally
# satisfies most_recent_term.
# The decision level of the earliest assignment in _solution *before*
# most_recent_satisfier such that incompatibility is satisfied by
# _solution up to and including this assignment plus
# most_recent_satisfier.
# Decision level 1 is the level where the root package was selected. It's
# safe to go back to decision level 0, but stopping at 1 tends to produce
# better error messages, because references to the root package end up
# closer to the final conclusion that no solution exists.
# If most_recent_satisfier doesn't satisfy most_recent_term on its
# own, then the next-most-recent satisfier may be the one that
# satisfies the remainder.
# If most_recent_satisfier is the only satisfier left at its decision
# level, or if it has no cause (indicating that it's a decision rather
# than a derivation), then incompatibility is the root cause. We then
# backjump to previous_satisfier_level, where incompatibility is
# guaranteed to allow _propagate to produce more assignments.
# using assert to suppress mypy [union-attr]
# Create a new incompatibility by combining incompatibility with the
# incompatibility that caused most_recent_satisfier to be assigned. Doing
# this iteratively constructs an incompatibility that's guaranteed to be
# true (that is, we know for sure no solution will satisfy the
# incompatibility) while also approximating the intuitive notion of the
# "root cause" of the conflict.
# The most_recent_satisfier may not satisfy most_recent_term on its own
# if there are a collection of constraints on most_recent_term that
# only satisfy it together. For example, if most_recent_term is
# `foo ^1.0.0` and _solution contains `[foo >=1.0.0,
# foo <2.0.0]`, then most_recent_satisfier will be `foo <2.0.0` even
# though it doesn't totally satisfy `foo ^1.0.0`.
# In this case, we add `not (most_recent_satisfier \ most_recent_term)` to
# the incompatibility as well. See the `algorithm documentation`_ for
# details.
# .. _algorithm documentation:
# https://github.com/dart-lang/pub/tree/master/doc/solver.md#conflict-resolution
# Direct origin dependencies must be handled first: we don't want to resolve
# a regular dependency for some package only to find later that we had a
# direct-origin dependency.
# We have to get the package from the pool,
# otherwise `requires` will be empty.
# We might need `package.source_reference` as fallback
# for transitive dependencies without a source
# if there is a top-level dependency
# for the same package with an explicit source.
# If there are no versions that satisfy the constraint,
# add an incompatibility that indicates that.
# If an incompatibility is already satisfied, then selecting version
# would cause a conflict.
# We'll continue adding its dependencies, then go back to
# unit propagation which will guide us to choose a better version.
# Remove the root package from generated incompatibilities, since it will
# always be satisfied. This makes error reporting clearer, and may also
# make solving more efficient.
# Short-circuit in the common case of a two-term incompatibility with
# two different packages (for example, a dependency).
# Coalesce multiple terms about the same package if possible.
# If we have two terms that refer to the same package but have a
# null intersection, they're mutually exclusive, making this
# incompatibility irrelevant, since we already know that mutually
# exclusive version ranges are incompatible. We should never derive
# an irrelevant incompatibility.
# simplify markers by removing redundant information
# ignore the warning as provider does not do interpolation
# Merging feature packages with base packages
# Prevent adding base package as a dependency to itself
# Avoid duplication.
# Combine the nodes by name
# The root package, which has no parents, is defined as having depth -1
# So that the root package's top-level dependencies have depth 0.
# TransitivePackageInfo.markers is updated later,
# because the nodes of all packages have to be aggregated first.
# group packages by depth
# calculate markers from lowest to highest depth
# (start with depth 0 because the root package has depth -1)
# there is a cycle -> we need one more iteration
# `override_marker` is often a SingleMarker or a MultiMarker,
# `marker` often is a MultiMarker that contains `override_marker`.
# We can "remove" `override_marker` from `marker`
# because we will do an intersection later anyway.
# By removing it now, it is more likely that we hit
# the performance shortcut instead of the fallback algorithm.
# performance shortcut:
# if markers are the same for all overrides,
# we can use less expensive marker operations
# fallback / general algorithm with performance issues
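The shortcut can be sketched with markers modeled as plain strings (a hypothetical helper; the real code operates on marker objects with proper union semantics):

```python
def merge_override_markers(markers):
    first = markers[0]
    if all(m == first for m in markers[1:]):
        # shortcut: identical markers across all overrides,
        # no expensive union computation is needed
        return first
    # general fallback: an explicit union (expensive to simplify)
    return " or ".join(f"({m})" for m in markers)
```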
# Extras that were not requested are not relevant.
# We have to perform an update if the version or another
# attribute of the package has changed (source type, url, ref, ...).
# This has to be done because installed packages cannot
# have type "legacy". If a package with type "legacy"
# is installed, the installed package has no source_type.
# Thus, if installed_package has no source_type and
# the result_package has source_type "legacy" (negation of
# the following condition), update must not be performed.
# This quirk has the side effect that when switching
# from PyPI to legacy (or vice versa),
# no update is performed.
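The quirk can be condensed into a predicate (hypothetical helper, not Poetry's actual API):

```python
def source_type_changed(installed_source_type, locked_source_type):
    # Installed packages can never have source_type "legacy", so an
    # installed package with no source_type is treated as matching a
    # locked package whose source_type is "legacy". Side effect: switching
    # between PyPI and a legacy source triggers no update.
    if installed_source_type is None and locked_source_type == "legacy":
        return False
    return installed_source_type != locked_source_type
```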
# We preserve pip when it is not managed by Poetry; this avoids
# externally managed virtual environments causing unnecessary removals.
# add version info because issue might be a version conflict
# with a version constraint
# We use the stable version here to improve support of environments of Python pre-release
# versions, e.g. Python 3.14rc2. Without using the stable version here, a dependency with
# a marker like `python_version >= "3.14"` would not be installed.
# For now, the dependency's name must match the actual package's name
# If we've previously found a direct-origin package that meets this dependency,
# use it.
# We rely on the VersionSolver resolving direct-origin dependencies first.
# empty constraint is used in overrides to mark that the package has
# already been handled and is not required for the attached markers
# The difference is only relevant if it intersects
# the root package python constraint
# Find all the optional dependencies that are wanted - taking care to allow
# for self-referential extras.
# If some extras/features were required, we need to add a special dependency
# representing the base package to the current package.
# When adding dependency foo[extra] -> foo, preserve foo's source, if it's
# specified. This prevents us from trying to get foo from PyPI
# when user explicitly set repo for foo[extra].
# For normal dependency resolution, we have to make sure that root extras
# are represented in the markers. This is required to identify mutually
# exclusive markers in cases like 'extra == "foo"' and 'extra != "foo"'.
# However, for installation with re-resolving (installer.re-resolve=true,
# which results in self._env being not None), this spoils the result
# because we have to keep extras so that they are uninstalled
# when calculating the operations of the transaction.
# The clone is required for installation with re-resolving
# without an existing lock file because the root package is used
# once for solving and a second time for re-resolving for installation.
# Retrieving constraints for deferred dependencies
# If lock file contains exactly the same URL and reference
# (commit hash) of dependency as is requested,
# do not analyze it again: nothing could have changed.
# Searching for duplicate dependencies
# If the duplicate dependencies have the same constraint,
# the requirements will be merged.
# For instance:
# will become:
# If the duplicate dependencies have different constraints
# we have to split the dependency graph.
# An example of this is:
# For dependency resolution, markers of duplicate dependencies must be
# mutually exclusive. However, we have to take care about duplicates
# with differing extras.
# There are duplicates with different extras.
# Since all markers are mutually exclusive,
# we can trigger overrides.
# Too complicated to handle with overrides,
# fallback to basic handling without overrides.
# Sort out irrelevant requirements
# At this point, we raise an exception that will
# tell the solver to make new resolutions with specific overrides.
# For instance, if the foo (1.2.3) package has the following dependencies:
# then the solver will need to make two new resolutions
# with the following overrides:
# Modifying dependencies as needed
# The dependency is not needed, since the markers specified
# for the current package selection are not compatible with
# the markers for the current dependency, so we skip it
# This dependency is not needed under current python constraint.
# At this point all duplicates have been eliminated via overrides
# so that explicit sources are unambiguous.
# Clear _explicit_sources because it might be filled
# from a previous override.
# In order to reduce the number of intersections,
# we merge duplicate dependencies by constraint.
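The merge step can be sketched by grouping dependencies on (name, constraint) and unioning their markers (a toy model with string markers, not the real dependency objects):

```python
from collections import defaultdict

def merge_duplicates_by_constraint(deps):
    # deps: (name, constraint, marker) triples
    grouped = defaultdict(list)
    for name, constraint, marker in deps:
        grouped[(name, constraint)].append(marker)
    return [
        (name, constraint,
         markers[0] if len(markers) == 1
         else " or ".join(f"({m})" for m in markers))
        for (name, constraint), markers in grouped.items()
    ]
```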
# intersection of markers
# For performance optimization, we don't just intersect all markers at once,
# but intersect them one after the other to get empty markers early.
# Further, we intersect the inverted markers at last because
# they are more likely to overlap than the non-inverted ones.
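The pairwise intersection with early exit can be sketched with markers modeled as sets of environments; ordering the inverted markers last is the caller's job in this toy model:

```python
def intersect_markers(marker_sets):
    # Intersect pairwise instead of all at once so an empty result is
    # detected early and the remaining (possibly expensive) intersections
    # can be skipped.
    result = None
    for s in marker_sets:
        result = set(s) if result is None else result & s
        if not result:
            break  # empty early
    return result
```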
# intersection of constraints
# if direct origin or specific source:
# conflict if specific source already set and not the same
# conflict in overlapping area
# This is an edge case where the dependency is not required
# for the resulting marker. However, we have to consider it anyway
# the last dependency would be missed without this,
# because the intersection with both foo dependencies is empty.
# Set constraint to empty to mark dependency as "not required".
# build new dependency with intersected constraint and marker
# (and correct source)
# In order to reduce the number of overrides we merge duplicate
# dependencies by constraint again. After overlapping markers were
# resolved, there might be new dependencies with the same constraint.
# Select highest version of the two
# If a project is created in the root directory (this is reasonable inside a
# docker container, eg <https://github.com/python-poetry/poetry/issues/5103>)
# then parts will be empty.
# package include and package name are the same,
# packages table is redundant here.
# This should not happen because author has been validated before.
# Add build system
# The cache must be updated
# we identify the candidate pth files to check; this is done to handle cases
# where the pth file for foo-bar might have been installed as either foo-bar.pth
# or foo_bar.pth (expected) in either pure or platform lib directories.
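Generating both candidate names can be sketched as (hypothetical helper name):

```python
def candidate_pth_names(distribution_name):
    # The installer may have written either the raw name or the
    # underscore-normalized name, in pure-lib or platform-lib directories.
    return {
        f"{distribution_name}.pth",
        f"{distribution_name.replace('-', '_')}.pth",
    }

print(sorted(candidate_pth_names("foo-bar")))  # ['foo-bar.pth', 'foo_bar.pth']
```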
# A VCS dependency should have been installed
# in the src directory.
# We first check for a direct_url.json file to determine
# the type of package.
# TODO: handle multiple source directories?
# File or URL distribution
# File distribution
# URL distribution
# Directory distribution
# VCS distribution
# PEP 592: yanked files are always ignored, unless they are the only
# file that matches a version specifier that "pins" to an exact version
# in contrast to None!
# The order of the members below dictates the actual priority. The first member has
# top priority.
# in cases like PyPI search might not be available, we fallback to explicit searches
# to allow for a nicer ux rather than finding nothing at all
# see: https://discuss.python.org/t/fastly-interfering-with-pypi-search/73597/6
# No dependencies set (along with other information)
# This might be due to actually no dependencies
# or badly set metadata when uploading.
# So, we need to make sure there are actually no
# dependencies by introspecting packages.
# Cache control redirect loop.
# We try to remove the cache and try again
# We are tracking if a domain supports range requests or not to avoid
# unnecessary requests.
# ATTENTION: A domain might support range requests only for some files, so the
# meaning is as follows:
# - Domain not in dict: We don't know anything.
# - True: The domain supports range requests for at least some files.
# - False: The domain does not support range requests for the files we tried.
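The tri-state bookkeeping can be sketched as a small class (hypothetical names, not Poetry's actual attributes):

```python
class RangeRequestSupport:
    # Tri-state map: domain absent = unknown, True = supported for at
    # least some files, False = unsupported for every file tried so far.
    def __init__(self):
        self._supported = {}

    def should_try(self, domain):
        # try range requests unless the domain is known not to support them
        return self._supported.get(domain, True)

    def mark_supported(self, domain):
        self._supported[domain] = True

    def mark_unsupported(self, domain):
        # never downgrade True to False: support can be per-file
        if self._supported.get(domain) is not True:
            self._supported[domain] = False
```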
# If "lazy-wheel" is enabled and the domain supports range requests
# or we don't know yet, we try range requests.
# Do not set to False if we already know that the domain supports
# range requests for some URLs!
# The domain did not support range requests for the first URL(s) we tried,
# but supports it for some URLs (especially the current URL),
# so we abort the download, update _supports_range_requests to try
# range requests for all files and use it for the current URL.
# Sort links by distribution type
# drop yanked files unless the entire release is yanked
# Prefer to read data from wheels: this is faster and more reliable
# We ought just to be able to look at any of the available wheels to read
# metadata, they all should give the same answer.
# In practice this hasn't always been true.
# Most of the code in here is to deal with cases such as isort 4.3.4 which
# published separate python3 and python2 wheels with quite different
# dependencies.  We try to detect such cases and combine the data from the
# two wheels into what ought to have been published in the first place...
# Universal wheel
# Any Python
# Normalizing requires_dist
# Prefer non platform specific wheels
# Handle ValueError here as well since under FIPS environments
# this is what is raised (e.g., for MD5)
# The following code was originally written for PDM project
# https://github.com/pdm-project/pdm/blob/1f4f48a35cdded064def85df117bebf713f7c17a/src/pdm/models/search.py
# and later changed to fit Poetry needs
# release is not yanked if at least one file is not yanked
# if all files are yanked (or there are no files) the release is yanked
# see https://peps.python.org/pep-0714/#clients
# and https://peps.python.org/pep-0691/#project-detail
# and https://peps.python.org/pep-0658/#specification
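The yanked-release rule reduces to an `all()` over the files (a minimal sketch with files modeled as dicts):

```python
def release_is_yanked(files):
    # A release counts as yanked only if every file is yanked
    # (vacuously true when there are no files at all).
    return all(f.get("yanked", False) for f in files)

print(release_is_yanked([{"yanked": True}, {}]))  # False: one file survives
print(release_is_yanked([{"yanked": True}]))      # True
print(release_is_yanked([]))                      # True: no files
```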
# A relative URL by definition starts with ../ or ./
# Common error messages
# this is a tag, incorrectly specified as a revision, tags take priority
# this is most likely a ref spec or a branch incorrectly specified
# this is a tag incorrectly specified as a branch
# revision is a short sha, resolve to full sha
# no heads with such SHA, let's check all objects
# we do this conditionally, as otherwise dulwich might complain if these
# parameters are passed in for an ssh url
# branch / ref does not exist
# ensure local HEAD matches remote
# set ref to current HEAD
# this implies the ref we need does not exist or is invalid
# the local copy is at a bad state, lets remove it
# force clean the local copy if it exists, do not reuse
# check if the current local copy matches the requested ref spec
# we use peeled sha here to ensure tags are resolved consistently
# something is wrong with the current checkout, clean it
# head is not a sha, this will cause issues later, lets reset
# if a revision is used, short-circuit when the remote fetch head matches
# we do this here to handle http authenticated repositories as dulwich
# does not currently support using credentials from git-credential helpers.
# upstream issue: https://github.com/jelmer/dulwich/issues/873
# this is a little inefficient, however preferred as this is transparent
# without additional configuration or changes for existing projects that
# use http basic auth credentials.
# fallback to legacy git client
# some private sources expect tokens to be provided as passwords with empty usernames
# we use a fixed literal to ensure that this can be stored in keyring (jaraco/keyring#687)
# Note: If this is changed, users with passwords stored with empty usernames will have to
# re-add the config.
# we do default to empty username string here since credentials support empty usernames
# unfortunately there is no clean way of checking if keyring is unlocked
# we use `or None` here to ensure that empty strings are passed as None
# Timeout for HTTP requests using the requests library.
# Server response codes to retry requests on.
# compatibility for python <3.10
# If there's no instance, return the descriptor itself
# we double-check the lock has not been created by another thread
# Use a thread-safe lock to ensure the property is computed only once
# We disable version check here as we are already pinning to version available in
# either the virtual environment or the virtualenv package embedded wheel. Version
# checks are a wasteful network call that adds a lot of wait time when installing a
# lot of packages.
# We build Poetry dependencies from the requirements
# we ignore dependencies that are not valid for this environment
# this ensures that we do not end up with unnecessary constraint
# errors when solving build system requirements; this is assumed
# safe as this environment is ephemeral
# we recreate the project's Poetry instance in order to retrieve the correct repository pool
# when a pool is not provided
# the context manager is not being called within a Poetry project context
# fallback to a default pool using only PyPI as source
# fallback to using only PyPI
# we repeat the build system requirements to prevent the poetry installer from removing them
# try with the repository name via the password manager
# fallback to url and netloc based keyring entries
# this should never really be hit under any sane circumstance
# cache repository credentials by repository url to avoid multiple keyring
# backend queries when packages are being downloaded from the same source
# no credentials were provided in the url, try finding the
# best repository configuration
# prefer the more specific path
# lookup for packages by name, faster than looping over packages repeatedly
# Depth-first search, with our entry points being the packages directly required by
# extras.
# We expect to find all packages, but can just carry on if we don't.
# Used by FileCache for items that do not expire.
# Check again if the archive exists (under the lock) to avoid
# duplicate downloads because it may have already been downloaded
# by another thread in the meantime
# implication "not strict -> env must not be None"
# implication "strict -> filename must not be None"
# in strict mode return the original cached archive instead of the
# prioritized archive type.
# Correct type signature when used as `shutil.rmtree(..., onexc=_on_rm_error)`.
# Correct type signature when used as `shutil.rmtree(..., onerror=_on_rm_error)`.
# if less than 1MB, we simply show that we're downloading
# but skip the updating
# only retry if server supports byte ranges
# and we have fetched at least one chunk
# otherwise, we should just fail
# Downgrade to short path name if it has high-bit chars. See
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
# These versions of python shipped with a broken tarfile data_filter, per
# https://github.com/python/cpython/issues/107845.
# this is of the form package@<semver>, do not attempt to parse it
# TODO: cache PEP 517 build environment corresponding to each project venv
# If base is None, it probably means this is
# a virtualenv created from VIRTUAL_ENV.
# In this case we need to get sys.base_prefix
# from inside the virtualenv.
# Why using sysconfig.get_platform() and not ...
# ... platform.machine()
# ... platform.architecture()
# Relevant for the following use cases, for example:
# - using a 32 Bit Python on a 64 Bit Windows
# - using an emulated aarch Python on an x86_64 Linux
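The distinction is easy to see directly in the interpreter:

```python
import platform
import sysconfig

# sysconfig.get_platform() reflects what the *interpreter* was built for
# (e.g. "win32" for a 32-bit Python on 64-bit Windows), while
# platform.machine() reports the underlying OS/hardware.
print(sysconfig.get_platform())
print(platform.machine())
```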
# Lists and tuples are the same in JSON and loaded as list.
# type: ignore[typeddict-item]
# A virtualenv is considered sane if "python" exists.
# If we are required to create the virtual environment in the project directory,
# create or recreate it if needed
# We need to check if the patch version is correct
# We need to recreate
# Create if needed
# Activate
# Check if we are inside a virtualenv or not
# Conda sets CONDA_PREFIX in its envs, see
# https://github.com/conda/conda/issues/2764
# It's probably not a good idea to pollute Conda's global "base" env, since
# most users have it activated all the time.
# Checking if a local virtualenv exists
# we only need to discover python version in this case
# Validate env name if provided env is a full path to python
# Exact virtualenv name
# Get all the poetry envs, even for other projects
# Executable in PATH or full executable path
# Already inside a virtualenv.
# The currently activated or chosen Python version
# is not compatible with the Python constraint specified
# for the project.
# If an executable has been specified, we stop there
# and notify the user of the incompatibility.
# Otherwise, we try to find a compatible Python version.
# venv detection:
# stdlib venv may symlink sys.executable, so we can't use realpath.
# but others can symlink *to* the venv Python,
# so we can't just use sys.executable.
# So we just check every item in the symlink tree (generally <= 3)
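Walking the symlink tree can be sketched as (hypothetical helper; the real check compares each hop against the venv prefix):

```python
import os

def symlink_tree(executable, limit=8):
    # Collect every item in the symlink chain: a venv may symlink
    # sys.executable, or something else may symlink *to* the venv Python,
    # so each hop must be checked rather than just realpath() or the
    # original path.
    chain = [executable]
    while os.path.islink(chain[-1]) and len(chain) < limit:
        target = os.readlink(chain[-1])
        chain.append(os.path.join(os.path.dirname(chain[-1]), target))
    return chain
```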
# Running properly in the virtualenv, don't need to do anything
# Exclude the venv folder from macOS Time Machine backups
# TODO: Add backup-ignore markers for other platforms too
# Continue only if e.errno == 16
# ERRNO 16: Device or resource busy
# Delete all files and folders but the toplevel one. This is because sometimes
# the venv folder is mounted by the OS, such as in a docker volume. In such
# cases, an attempt to delete the folder itself will result in an `OSError`.
# See https://github.com/python-poetry/poetry/pull/2064
# type: ignore[literal-required]
# we do not use as_posix() here due to issues with windows pathlib2
# clear cached properties using the env paths
# there is at least one path that is not writable
# all paths are writable, return early
# at least one candidate is not writable, we cannot do much here
# We copy pip's logic here for the `include` path
# run as module so that pip can update itself on Windows
# when pip is required we need to ensure that we fall back to
# embedded pip when pip is not available in the environment
# Options Used:
# TODO: Consider replacing (-I) with (-EP) once support for managing Python <3.11 environments dropped.
# This is useful to prevent the user site from being disabled overzealously.
# On Windows, some executables can be in the base path
# This is especially true when installing Python with
# the official installer, where python.exe will be at
# the root of the env path.
# Workaround for https://github.com/python/cpython/issues/99968
# register default providers
# we overload __init__ to ensure we do not break any downstream plugins
# that use this
# fallback to findpython, restrict to finding only executables
# named "python" as the intention here is just that, nothing more
# Ignore broken installations.
# this can be broken into download and install_file if required, to make
# use of Poetry's own mechanics for download and unpack
# Attention:
# There are two versions of pbs builds,
# - one like a normal Python installation and
# - one with an additional level of folders where the expected files are
# If both versions exist, the first one is preferred.
# However, sometimes (especially for free-threaded Python),
# only the second version exists!
# On Windows Python executables are top level.
# (Only in virtualenvs, they are in the Scripts directory.)
# A python-build-standalone PyPy has no Scripts directory!
# The version could not be determined, so we raise an error since it is
# mandatory.
# If this is a local poetry project, we can extract "richer" requirement
# information, eg: development requirements etc.
# Attempt to parse the PEP-508 requirement string
# Invalid marker; we strip the markers, hoping for the best
# Unable to parse requirement so we skip it
# this dependency is required by an extra package
# this is the first time we encounter this extra for this package
# If the distribution lists requirements, we use those.
# If the distribution does not list requirements, but the metadata is new enough
# to specify that this is because there definitely are none: then we return an
# empty list.
# If there is a requires.txt, we use that.
# If the METADATA version is greater than the highest supported version,
# pkginfo prints a warning and tries to parse the fields from the highest
# known version. Assuming that METADATA versions adhere to semver,
# this should be safe for minor updates.
# we successfully retrieved dependencies from sdist metadata
# Still no dependencies found
# So, we unpack and introspect
# a little bit of guess work to determine the directory we care about
# now this is an unpacked directory we know how to deal with
# Sometimes pathlib will fail on recursive symbolic links, so we need to work
# around it and use the glob module instead. Note that this does not happen with
# pathlib2 so it's safe to use it for Python < 3.4.
# handle PKG-INFO in unpacked sdist root
# Note: we ignore any setup.py file at this step
# TODO: add support for handling non-poetry PEP-517 builds
# we discovered PkgInfo but no requirements were listed
# if we get here then it is neither an sdist instance nor a file
# so, we assume this is a directory
# if we reach here, everything has failed and all hope is lost
# After context manager exit, wheel.name will point to a deleted file path.
# Add `delete_backing_file=False` to disable this for debugging.
# We assume that these errors have occurred because the wheel contents
# themselves are invalid, not because we've messed up our bookkeeping
# and produced an invalid file.
# this is expected when the code handles issues with lazy wheel metadata retrieval correctly
# Catch all exception to handle any issues that may have occurred during
# attempts to use Lazy Wheel.
# Explicit impl needed to satisfy mypy.
# "no-cache" is the correct value for "up to date every time", so this will also
# ensure we get the most recent value from the server:
# https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching#provide_up-to-date_content_every_time
# Enable us to seek and write anywhere in the backing file up to this
# known length.
# NB: This is currently dead code, as _fetch_content_length() is overridden
# Reducing by 1 to get an inclusive end range.
# Cache this on the type to avoid trying and failing our initial lazy wheel request
# multiple times in the same invocation against an index without this support.
# prefetch metadata to reduce the number of range requests
# Need to explicitly truncate here in order to perform the write and seek
# operations below when we write the chunk of file contents to disk.
# If we could not download any file contents yet (e.g. if negative byte
# ranges were not supported, or the requested range was larger than the file
# size), then download all of this at once, hopefully pulling in the entire
# central directory.
# If we *could* download some file contents, then write them to the end of
# the file and set up our bisect boundaries by hand.
# Default initial chunk size is currently 1MB, but streaming content
# here allows it to be set arbitrarily large.
# We now need to update our bookkeeping to cover the interval we just
# wrote to file so we know not to do it in later read()s.
# MergeIntervals uses inclusive boundaries i.e. start <= x <= end.
# NB: We expect LazyRemoteResource to reset `self._merge_intervals`
# just before it calls the current method, so our assertion here
# checks that indeed no prior overlapping intervals have
# been covered.
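The inclusive-boundary bookkeeping described above can be sketched with a minimal interval merger. `merge_inclusive` is a hypothetical helper for illustration, not Poetry's actual `MergeIntervals` class; it only shows the inclusive `start <= x <= end` convention, where touching intervals merge.

```python
def merge_inclusive(intervals, start, end):
    """Insert [start, end] (inclusive bounds) and merge overlapping or
    adjacent intervals. Hypothetical sketch of the bookkeeping above."""
    merged = []
    for s, e in sorted(intervals + [(start, end)]):
        # Inclusive bounds: [0, 3] and [4, 7] are adjacent, so they merge.
        if merged and s <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged
```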
# Perform a negative range index, which is not supported by some servers.
# According to
# https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests,
# a 200 OK implies that range requests are not supported,
# regardless of the requested size.
# However, some servers that support negative range requests also return a
# 200 OK if the requested range from the end was larger than the file size.
# Initial range request for just the end of the file.
# Our initial request using a negative byte range was not supported.
# This indicates that the requested range from the end was larger than the
# actual file size: https://www.rfc-editor.org/rfc/rfc9110#status.416.
# In this case, we don't have any file content yet, but we do know the
# size the file will be, so we can return that and exit here.
# pypi notably does not support negative byte ranges: see
# https://github.com/pypi/warehouse/issues/12823.
# Avoid trying a negative byte range request against this domain for the
# rest of the resolve.
# Apply a HEAD request to get the real size, and nothing else for now.
# Some servers that do not support negative offsets,
# handle a negative offset like "-10" as "0-10"...
# ... or behave even more strangely, see
# https://github.com/python-poetry/poetry/issues/9056#issuecomment-1973273721
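The status-code handling sketched in these comments can be summarized in one function. `classify_tail_response` is a hypothetical helper (not the library's actual code) that interprets the response to an initial `Range: bytes=-N` request; it assumes the server sets `Content-Range`/`Content-Length` as described in RFC 9110.

```python
def classify_tail_response(status, headers):
    """Interpret a response to `Range: bytes=-N`.

    Returns (total_size, have_tail_bytes). Hypothetical sketch of the
    decision logic described above.
    """
    if status == 206:
        # Partial Content: negative ranges work. Content-Range carries the
        # total size, e.g. "bytes 900-999/1000".
        total = int(headers["Content-Range"].rsplit("/", 1)[1])
        return total, True
    if status == 416:
        # Range Not Satisfiable: the suffix exceeded the file size. We got
        # no bytes, but per RFC 9110 the size is still reported: "bytes */1000".
        total = int(headers["Content-Range"].rsplit("/", 1)[1])
        return total, False
    if status == 200:
        # 200 OK implies range requests are unsupported (although some
        # servers also answer 200 when the suffix exceeded the file size).
        return int(headers["Content-Length"]), False
    raise ValueError(f"unexpected status {status}")
```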
# This may perform further requests if __init__() did not pull in the entire
# central directory at the end of the file (although _initial_chunk_length()
# should be set large enough to avoid this).
# The last .dist-info/ entry may be before the end of the file if the
# wheel's entries are sorted lexicographically (which is unusual).
# If it is the last entry of the zip, then give us everything
# until the start of the central directory.
# might be extended by plugins
# remove any pre-existing pth files for this package
# write PEP 610 metadata
# Retrieving hashes
# if lock file exists: compare with existing lock data
# incompatible, invalid or no lock file
# The following code is roughly equivalent to
# • lockfile = TOMLFile(self.lock)
# • lockfile.read()
# • lockfile.write(data)
# However, lockfile.read() takes more than half a second even
# for a modestly sized project like Poetry itself and the only reason
# for reading the lockfile is to determine the line endings. Thus,
# we do that part for ourselves here, which only takes about 10 ms.
# get original line endings
# enforce original line endings
# Special handling for legacy keys is just for backwards compatibility,
# and thereby not required if there is relevant content in [project].
# For backwards compatibility, we have to put the relevant content
# of the [tool.poetry] section at top level!
# if the lockfile doesn't contain a metadata section at all,
# it probably needs to be rebuilt completely
# Storing of package files and hashes has been through a few generations in
# the lockfile, we can read them all:
# - latest and preferred is that this is read per package, from
# - oldest is that hashes were stored in metadata.hashes without filenames
# - in between those two, hashes were stored alongside filenames in
# Strictly speaking, this is not correct, but we have no chance
# to always determine which are the correct files because the
# lockfile doesn't keep track which files belong to which package.
# handle lock files with invalid PEP 508
# root dir should be the source of the package relative to the lock
# All the constraints should have the same type,
# but we want to simplify them if it's possible
# The lock file should only store paths relative to the root project
# max depth in the dependency tree
# group -> marker
# happy Easter ;)
# Since alphabetically "dev0" < "final" < "post1" < "post2", we don't
# have to do anything special with releaselevel for now.
# We request forward-ref annotations to not break in the presence of
# forward references.
# inspect failed
# Thread-local global to track attrs instances which are already being repr'd.
# This is needed because there is no other (thread-safe) way to pass info
# about the instances that are already being repr'd through the call stack
# in order to ensure we don't perform infinite recursion.
# For instance, if an instance contains a dict which contains that instance,
# we need to know that we're already repr'ing the outside instance from within
# the dict's repr() call.
# This lives here rather than in _make.py so that the functions in _make.py
# don't have a direct reference to the thread-local in their globals dict.
# If they have such a reference, it breaks cloudpickle.
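The thread-local recursion guard these comments describe can be illustrated with a small self-referential class. This is a sketch of the general technique, not attrs' actual generated `__repr__`; `Node` and `_repr_working` are made-up names.

```python
import threading

# Thread-local set of ids of instances currently being repr'd, so a cyclic
# structure prints "..." instead of recursing forever.
_repr_working = threading.local()

class Node:
    def __init__(self):
        self.children = []

    def __repr__(self):
        working = getattr(_repr_working, "ids", None)
        if working is None:
            working = _repr_working.ids = set()
        if id(self) in working:
            # Already printing this instance higher up the call stack.
            return "..."
        working.add(id(self))
        try:
            return f"Node({self.children!r})"
        finally:
            working.discard(id(self))
```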
# Workaround for TypeError: cf.__new__() missing 1 required
# positional argument (which appears for a namedtuple)
# Attrs class.
# Very long. :/
# No attrs, maybe it's a specialized generic (A[str])?
# Stick it on here for speed next time.
# Since calling get_type_hints is expensive we cache whether we've
# done it already.
# Since fields have been frozen we must work around it.
# We store the class we resolved so that subclasses know they haven't
# been resolved.
# Return the class so you can use it as a decorator too.
# e.g. `1 in "abc"`
# suppress error to invert validity
# noqa: BLE001, PERF203, S112
# This can be removed once we drop 3.8 and use attrs.Converter instead.
# Sentinel for disabling class-wide *on_setattr* hooks for certain attributes.
# Sphinx's autodata stopped working, so the docstring is inlined in the API
# docs.
# Add operations.
# Add same type requirement.
# Add total ordering if at least one operation was defined.
# functools.total_ordering requires __eq__ to be defined,
# so raise early error here to keep a nice stack.
# By default, mutable classes convert & validate on setattr.
# However, if we subclass a frozen class, we inherit the immutability
# and disable on_setattr.
# maybe_cls's type depends on the usage of the decorator.  It's a class
# if it's used as `@attrs` but `None` if used as `@attrs()`.
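The dual-use decorator pattern mentioned here is easy to demonstrate. `my_attrs` below is a toy stand-in, not attrs' real decorator: when used bare, Python passes the class directly; when called with arguments, the class arrives in a second step.

```python
def my_attrs(maybe_cls=None, *, frozen=False):
    """Decorator usable both bare (@my_attrs) and called (@my_attrs(...))."""
    def wrap(cls):
        cls._frozen = frozen
        return cls
    if maybe_cls is not None:
        # Bare usage: maybe_cls is the class itself.
        return wrap(maybe_cls)
    # Called usage: return the real decorator; the class comes next.
    return wrap
```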
# We need to import _compat itself in addition to the _compat members to avoid
# having the thread-local in the globals here.
# This is used at least twice, so cache it here.
# we don't use a double-underscore prefix because that triggers
# name mangling when trying to create a slot for the field
# (when slots=True)
# Unique object for unequivocal getattr() defaults.
# Apply syntactic sugar by auto-wrapping.
# In order for debuggers like PDB to be able to step through the code,
# we add a fake linecache entry.
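The fake linecache entry works by registering the generated source under a synthetic filename before compiling it. `make_method` is a hypothetical helper sketching the technique; the `(size, mtime, lines, fullname)` tuple shape is the `linecache.cache` convention, with `mtime=None` marking a synthetic entry.

```python
import linecache

def make_method(src, name, filename="<generated>"):
    """Compile *src* and register a fake linecache entry for *filename*,
    so debuggers can display the generated source while stepping."""
    linecache.cache[filename] = (len(src), None, src.splitlines(True), filename)
    ns = {}
    exec(compile(src, filename, "exec"), ns)
    return ns[name]
```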
# Tuple class for extracted attributes from a class definition.
# `base_attrs` is a subset of `attrs`.
# Annotation can be quoted.
# A dictionary of base attrs to their classes.
# Traverse the MRO and collect attributes.
# noqa: PLW2901
# For each name, only keep the freshest definition i.e. the furthest at the
# back.  base_attr_map is fine because it gets overwritten with every new
# instance.
# Check attr order after executing the field_transformer.
# Mandatory vs non-mandatory attr order only matters when they are part of
# the __init__ signature and when they aren't kw_only (which are moved to
# the end and can be mandatory or non-mandatory in any order, as they will
# be specified as keyword args anyway). Check the order of those attrs:
# Resolve default field alias after executing field_transformer.
# This allows field_transformer to differentiate between explicit vs
# default aliases and supply their own defaults.
# Evolve is very slow, so we hold our nose and do it dirty.
# Create AttrsClass *after* applying the field_transformer since it may
# add or remove attributes!
# Wrapped to get `__class__` into closure cell for super()
# (It will be replaced with the newly constructed class after construction).
# To deal with private attributes.
# Check if the pre init method has more arguments than just `self`
# We want to pass arguments if pre init expects arguments
# If class-level on_setattr is set to convert + validate, but
# there's no field to convert or validate, pretend like there's
# no on_setattr.
# tuples of script, globs, hook
# We want to only do this check once; in 99.9% of cases these
# The method gets only called if it's not inherited from a base class.
# _has_own_attribute does NOT work properly for classmethods.
# Clean class of attribute definitions (`attr.ib()`s).
# An AttributeError can happen if a base class defines a
# class variable and we want to set an attribute with the
# same name by using only a type annotation.
# Attach our dunder methods.
# If we've inherited an attrs __setattr__ and don't write our own,
# reset it to object's.
# 3.14.0rc2+
# If our class doesn't have its own implementation of __setattr__
# (either from the user or by us), check the bases, if one of them has
# an attrs-made __setattr__, that needs to be reset. We don't walk the
# MRO because we only care about our immediate base classes.
# XXX: This can be confused by subclassing a slotted attrs class with
# XXX: a non-attrs class and subclass the resulting class with an attrs
# XXX: class.  See `test_slotted_confused` for details.  For now that's
# XXX: OK with us.
# Traverse the MRO to collect existing slots
# and check for an existing __weakref__.
# Collect methods with a `__class__` reference that are shadowed in the new class.
# To know to update them.
# Add cached properties to names for slotting.
# Clear out function from class to avoid clashing.
# We only add the names of attributes that aren't inherited.
# Setting __slots__ to inherited attributes wastes memory.
# There are slots for attributes from current class
# that are defined in parent classes.
# As their descriptors may be overridden by a child class,
# we collect them here and update the class dict
# Create new class based on old class and our methods.
# The following is a fix for
# <https://github.com/python-attrs/attrs/issues/102>.
# If a method mentions `__class__` or uses the no-arg super(), the
# compiler will bake a reference to the class in the method itself
# as `method.__closure__`.  Since we replace the class with a
# clone, we rewrite these references so it keeps working.
# Class- and staticmethods hide their functions inside.
# These might need to be rewritten as well.
# Workaround for property `super()` shortcut (PY3-only).
# There is no universal way for other descriptors.
# Catch None or the empty list.
# noqa: PERF203
# ValueError: Cell is empty
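The closure-cell rewriting described above can be sketched in a few lines. `rewrite_class_cells` is a hypothetical helper (attrs' real code lives in `_ClassBuilder`); it relies on closure cells being writable via `cell_contents`, which is the case on Python 3.7+.

```python
def rewrite_class_cells(old_cls, new_cls):
    """Point any baked-in `__class__` closure cells at the replacement class.

    Hypothetical sketch of the fix for methods that use no-arg super() or
    mention __class__, whose closure still references the original class.
    """
    for item in new_cls.__dict__.values():
        # Class- and staticmethods hide their function inside __func__.
        func = getattr(item, "__func__", item)
        closure = getattr(func, "__closure__", None) or ()
        for cell in closure:
            try:
                if cell.cell_contents is old_cls:
                    cell.cell_contents = new_cls  # writable on Python 3.7+
            except ValueError:
                pass  # cell is empty
```

Note this only covers plain functions and class/staticmethods; as the comments say, there is no universal way to reach into arbitrary descriptors.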
# __weakref__ is not writable.
# Backward compatibility with attrs instances pickled with
# attrs versions before v22.2.0 which stored tuples.
# The hash code cache is not included when the object is
# serialized, but it still needs to be initialized to None to
# indicate that the first call to __hash__ should be a cache
# miss.
# We need to write a __setattr__ but there already is one!
# docstring comes from _add_method_dunders
# cmp takes precedence due to bw-compatibility.
# If left None, equality is set to the specified default and ordering
# mirrors equality.
# Logically, flag is None and auto_detect is True here.
# If eq is custom generated, we need to include the functions in globs
# close __setattr__
# Add the key function to the global namespace
# of the evaluated function.
# Figure out which attributes to include, and which function to use to
# format them. The a.repr value can be either bool or a custom
# callable.
# Even though this is global state, stick it on here to speed
# it up. We rely on `cls` being cached for this to be
# efficient.
# This makes typing.get_type_hints(CLS.__init__) resolve string types.
# Save the lookup overhead in __init__ if we need to circumvent
# setattr hooks.
# Dict frozen classes assign directly to __dict__.
# But only if the attribute doesn't come from an ancestor slot
# Note _inst_dict will be used again below if cache_hash is True
# Not frozen -- we can just assign directly.
# Circumvent the __setattr__ descriptor to save one lookup per
# assignment. Note _setattr will be used again below if
# does_cache_hash is True.
# Parameters in the definition of __init__
# Parameters in the call to __attrs_pre_init__
# Used for both 'args' and 'pre_init_args' above
# This is a dictionary of names to validator and converter callables.
# Injecting this into __init__ globals lets us avoid lookups.
# a.alias is set to maybe-mangled attr_name in _ClassBuilder if not
# explicitly provided
# Use the type from the converter if present.
# we can skip this if there are no validators.
# Because this is set only after __attrs_post_init__ is called, a crash
# will result if post-init tries to access the hash code.  This seemed
# preferable to setting this beforehand, in which case alteration to field
# values during post-init combined with post-init accessing the hash code
# would result in silent bugs.
# For exceptions we rely on BaseException.__init__ for proper
# initialization.
# leading comma & kw_only args
# We need to remove the defaults from the kw_only_args.
# If pre init method has arguments, pass the values given to __init__.
# Python <3.12 doesn't allow backslashes in f-strings.
# These slots must NOT be reordered because we use them later for
# instantiation.
# noqa: RUF023
# XXX: unused, remove along with other cmp code.
# Cache this descriptor here to speed things up later.
# Despite the big red warning, people *do* instantiate `Attribute`
# themselves.
# Shallow copy
# The 'kw_only' argument is the class-level setting, and is used if the
# attribute itself does not explicitly set 'kw_only'.
# 'type' holds the annotated value. Deal with conflicts:
# Don't use attrs.evolve since fields(Attribute) doesn't work
# Don't use _add_pickle since fields(Attribute) doesn't work
# noqa: RUF023 -- order matters for __init__
# Class identifiers are converted into the normal form NFKC while parsing
# For pickling to work, the __module__ variable needs to be set to the
# frame where the class is created.  Bypass this step in environments where
# sys._getframe is not defined (Jython for example) or sys._getframe is not
# defined for arguments greater than 0 (IronPython).
# We do it here for proper warnings with meaningful stacklevel.
# Only add type annotations now or "_attrs()" will complain:
# These are required within this module, so we define them here and merely
# import into .validators / .converters.
# If the converter list is empty, pipe_converter is the identity.
# Get parameter type from first converter.
# Get return type from last converter.
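The identity behavior for an empty converter pipeline can be shown with a small sketch. `pipe` is a simplified stand-in for the real converter composition, just to illustrate the empty-list special case.

```python
def pipe(*converters):
    """Compose converters left to right; an empty pipeline is the identity."""
    if not converters:
        return lambda value: value
    def pipe_converter(value):
        for conv in converters:
            value = conv(value)
        return value
    return pipe_converter
```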
# Dev builds may produce version like `3.11.0+` and packaging.version
# will reject it. Here we just remove the part after `+`
# since it isn't critical for version comparison.
# Deduplicate with the python executable path
# General:
# Tool Specific:
# Windows only:
# MacOS only:
# -------------------------------------------------------------------------
# Copyright (c) Steve Dower
# Distributed under the terms of the MIT License
# mypy: disable-error-code="attr-defined"
# type:ignore[no-redef]
# These tags are treated specially when the Company is 'PythonCore'
# borrowed from
# http://code.activestate.com/recipes/576949-find-all-subclasses-of-a-given-class/
# fails only when cls is type
# TODO(coherent-oss/granary#4): Migrate to PEP 695 by 2027-10.
# type: ignore[assignment] # Corrected in the next line.
# type: ignore[arg-type,return-value]
# remove any base classes
# change the file to be readable, writable, executable: 0777
# retry
# Liblouis Python ctypes bindings
# Copyright (C) 2009, 2010 James Teh <jamie@jantrid.net>
# This file is part of liblouis.
# liblouis is free software: you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published
# by the Free Software Foundation, either version 2.1 of the License, or
# (at your option) any later version.
# liblouis is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public
# License along with liblouis. If not, see <http://www.gnu.org/licenses/>.
# Native win32
# Unix/Cygwin
# { Module Configuration
#: Specifies the charSize (in bytes) used by liblouis.
#: This is fetched once using L{liblouis.lou_charSize}.
#: Call it directly, since L{charSize} is not yet defined.
#: @type: int
#: Specifies the number by which the input length should be multiplied
#: to calculate the maximum output length.
# This default will handle the case where every input character is
# undefined in the translation table.
#: Specifies the encoding to use when encode/decode file/dir name
#: @type: str
#: Specifies the encoding to use when converting from byte strings to unicode strings.
# Some general utility functions
# { Typeforms
# { Translation modes
# alias for backward compatibility
# { logLevels
# Just some common tests.
# @Generated by find_versions.py. DO NOT EDIT.
# Python >= 3.3
# This is copied from Python 3.4.1
# Child.
# Parent.
# Disconnect from controlling tty, if any. Raises OSError with errno ENXIO
# if there was no controlling tty to begin with, such as when
# executed by a cron(1) job.
# Verify we are disconnected from the controlling tty by attempting to open
# it again. We expect OSError with errno ENXIO to be raised.
# Verify we can open child pty.
# Verify we now have a controlling tty.
# Solaris uses internal __fork_pty(). All others use pty.fork().
# inherit EOF and INTR definitions from controlling process.
# no fd, raise ValueError to fallback on CEOF, CINTR
# unless the controlling process is also not a terminal,
# such as cron(1), or when stdin and stdout are both closed.
# Fall back to using CEOF and CINTR.
# setecho and setwinsize are pulled out here because on some platforms, we need
# to do this from the child before we exec()
# I tried TCSADRAIN and TCSAFLUSH, but these were inconsistent and
# blocked on some platforms. TCSADRAIN would probably be ideal.
# Some very old platforms have a bug that causes the value for
# termios.TIOCSWINSZ to be truncated. There was a hack here to work
# around this, but it caused problems with newer platforms so has been
# removed. For details see https://github.com/pexpect/pexpect/issues/39
# Note, assume ws_xpixel and ws_ypixel are zero.
# Ensure _EOF and _INTR are calculated
# Shallow copy of argv so we can modify it
# [issue #119] To prevent the case where exec fails and the user is
# stuck interacting with a python child process instead of whatever
# was expected, we implement the solution from
# http://stackoverflow.com/a/3703179 to pass the exception to the
# parent process
# [issue #119] 1. Before forking, open a pipe in the parent process.
# Use internal fork_pty, for Solaris
# Some platforms must call setwinsize() and setecho() from the
# child process, and others from the master process. We do both,
# allowing IOError for either.
# set window size
# disable echo if spawn argument echo was unset
# [issue #119] 3. The child closes the reading end and sets the
# close-on-exec flag for the writing end.
# Do not allow child to inherit open file descriptors from parent,
# with the exception of the exec_err_pipe_write of the pipe
# and pass_fds.
# Impose ceiling on max_fd: AIX bugfix for users with unlimited
# nofiles where resource.RLIMIT_NOFILE is 2^63-1 and os.closerange()
# occasionally raises out of range error
# [issue #119] 5. If exec fails, the child writes the error
# code back to the parent using the pipe, then exits.
# Set some informational attributes
# [issue #119] 2. After forking, the parent closes the writing end
# of the pipe and reads from the reading end.
# [issue #119] 6. The parent reads eof (a zero-length read) if the
# child successfully performed exec, since close-on-exec made
# successful exec close the writing end of the pipe. Or, if exec
# failed, the parent reads the error code and can proceed
# accordingly. Either way, the parent blocks until the child calls
# exec.
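The six-step pipe protocol above can be condensed into a runnable sketch. `spawn` is a hypothetical simplification of the pattern (from http://stackoverflow.com/a/3703179), not pexpect's actual implementation; it omits the pty, fd cleanup, and signal handling.

```python
import fcntl
import os

def spawn(argv):
    """Fork/exec, reporting exec() failure back through a CLOEXEC pipe.

    Returns (pid, exec_errno); exec_errno is None when exec succeeded.
    """
    # 1. Before forking, open a pipe in the parent process.
    read_end, write_end = os.pipe()
    pid = os.fork()
    if pid == 0:  # child
        # 3. Close the reading end and set close-on-exec on the writing end.
        os.close(read_end)
        fcntl.fcntl(write_end, fcntl.F_SETFD, fcntl.FD_CLOEXEC)
        try:
            os.execvp(argv[0], argv)
        except OSError as exc:
            # 5. exec failed: write the error code back, then exit.
            os.write(write_end, str(exc.errno).encode())
            os._exit(127)
    # 2. The parent closes the writing end and reads from the reading end.
    os.close(write_end)
    data = os.read(read_end, 16)
    os.close(read_end)
    # 6. EOF (a zero-length read) means close-on-exec fired: exec succeeded.
    return pid, (int(data) if data else None)
```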
# It is possible for __del__ methods to execute during the
# teardown of the Python VM itself. Thus self.close() may
# trigger an exception because os.close may be None.
# Which exception? Shouldn't we catch it explicitly?
# Closes the file descriptor
# Give kernel time to update process status.
#self.pid = None
# BSD-style EOF (also appears to work on recent Solaris (OpenIndiana))
# You can't call wait() on a child process in the stopped state.
# This is for Linux, which requires the blocking form
# of waitpid to get the status of a defunct process.
# This is super-lame. The flag_eof would have been set
# in read_nonblocking(), so this should be safe.
# No child processes
# I have to do this twice for Solaris.
# I can't even believe that I figured this out...
# If waitpid() returns 0 it means that no child process
# wishes to report, and the value of status is undefined.
### os.WNOHANG) # Solaris!
# This should never happen...
# If pid is still 0 after two calls to waitpid() then the process
# really is alive. This seems to work on all platforms, except for
# Irix which seems to require a blocking call on waitpid or select,
# so I let read_nonblocking take care of this situation
# (unfortunately, this requires waiting through the timeout).
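The non-blocking `waitpid` logic discussed above (0 means "no child wishes to report, status is undefined"; ECHILD means the child was already reaped) can be sketched as a minimal liveness check. `isalive` is a hypothetical simplification, not pexpect's method, and it skips the Solaris double-call workaround.

```python
import errno
import os

def isalive(pid):
    """Non-blocking check: is this child still running? Reaps it if it exited."""
    try:
        waited, status = os.waitpid(pid, os.WNOHANG)
    except OSError as exc:
        if exc.errno == errno.ECHILD:
            # Already reaped elsewhere: no such child process.
            return False
        raise
    # waitpid() returns 0 when no child wishes to report status yet,
    # i.e. the process is still alive; `status` is undefined in that case.
    return waited == 0
```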
# note: unlike poetry install, the default excludes non-optional groups
# deprecated: groups are always excluded by default
# Removing the existing export command to avoid an error
# until Poetry removes the export command
# and uses this plugin instead.
# If you're checking this code out to get inspiration
# for your own plugins: DON'T DO THIS!
# Apply the project python marker to all requirements.
# Build a set of all packages required by our selected extras
# If a package is optional and we haven't opted in to it, do not select
# a package is locked as optional, but is not activated via extras
# group package entries by name; this is required because requirements might use
# different constraints.
# Put higher versions first so that we prefer them.
# create dependency from locked package to retain dependency metadata
# if this is not done, we can end up with incorrect nested dependencies
# So as to give ourselves enough flexibility in choosing a solution,
# we need to split the world up into the python version ranges that
# this package might care about.
# We create a marker for all of the possible regions, and add a
# requirement for each separately.
# If we've previously chosen a version of this package that is compatible with
# the current requirement, we are forced to stick with it.  (Else we end up with
# different versions of the same package at the same time.)
# If we have more than one overlapping candidate, we've run into trouble.
# Get the packages that are consistent with this dependency.
# If we have an overlapping candidate, we must use it.
# TODO: Support this case:
# https://github.com/python-poetry/poetry-plugin-export/issues/183
# If we have extra indexes, we add them to the beginning of the output
# Iterate over repositories so that we get the repository with the highest
# priority first so that --index-url comes before --extra-index-url
# Copyright (C) 2003, 2004, 2005, 2006 Red Hat Inc. <http://www.redhat.com/>
# Copyright (C) 2003 David Zeuthen
# Copyright (C) 2004 Rob Taylor
# Copyright (C) 2005, 2006 Collabora Ltd. <http://www.collabora.co.uk/>
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation
# files (the "Software"), to deal in the Software without
# restriction, including without limitation the rights to use, copy,
# modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# from _dbus
# from proxies
# from _dbus_bindings
# from exceptions
# submodules
# OLPC Sugar compatibility
# Python 2 / Python 3 compatibility helpers.
# Copyright 2011 Barry Warsaw
# Copyright 2021 Collabora Ltd.
# non-deprecated case
# will be validated by SignalMessage ctor in a moment
# end emit_signal
# this is a bit odd, but we create instances of the subtypes
# so we can return the shared instances if someone tries to
# construct one of them (otherwise we'd eg try and return an
# instance of Bus from __new__ in SessionBus). why are there
# three ways to construct this class? we just don't know.
# FIXME: Drop the subclasses here? I can't think why we'd ever want
# polymorphism
# Copyright (C) 2003-2006 Red Hat Inc. <http://www.redhat.com/>
# Copyright (C) 2005-2006 Collabora Ltd. <http://www.collabora.co.uk/>
# Python 2 (and 3.x < 3.3, but we don't support those)
# if necessary, get default bus (deprecated)
# see if this name is already defined, return it if so
# FIXME: accessing internals of Bus
# otherwise register the name
# TODO: more intelligent tracking of bus name states?
# queueing can happen by default, maybe we should
# track this better or let the user know if they're
# queued or not?
# if this is a shared bus which is being used by someone
# else in this process, this can happen legitimately
# and create the object
# cache instance (weak ref only)
# FIXME: accessing Bus internals again
# do nothing because this is called whether or not the bus name
# object was retrieved from the cache or created new
# we can delete the low-level name here because these objects
# are guaranteed to exist only once for each bus name
# split up the cases when we do and don't have an interface because the
# latter is much simpler
# search through the class hierarchy in python MRO order
# if we haven't got a candidate class yet, and we find a class with a
# suitably named member, save this as a candidate class
# however if it is annotated for a different interface
# than we are looking for, it cannot be a candidate
# if we have a candidate class, carry on checking this and all
# superclasses for a method annotated as a dbus method
# on the correct interface
# the candidate class has a dbus method on the correct interface,
# or overrides a method that is, success!
# simpler version of above
# We don't actually want the traceback anyway
# The exception was actually thrown, so we can get a traceback
# We don't have any traceback for it, e.g.
# see also https://bugs.freedesktop.org/show_bug.cgi?id=12403
# these attributes are shared between all instances of the Interface
# object, so this has to be a dictionary that maps class names to
# the per-class introspection/interface data
# merge all the name -> method tables for all the interfaces
# implemented by our base classes into our own
# add in all the name -> method entries for our own methods/signals
# methods are different to signals, so we have two functions... :)
# convert signature into a tuple so length refers to number of
# types, not number of characters. the length is checked by
# the decorator to make sure it matches the length of args.
# magic iterator which returns as many v's as we need
# it's tempting to default to Signature('v'), but
# for methods that return nothing, providing incorrect
# introspection data is worse than providing none at all
# types, not number of characters
# Define Interface as an instance of the metaclass InterfaceType, in a way
# that is compatible across both Python 2 and Python 3.
#: A unique object used as the value of Object._object_path and
#: Object._connection if it's actually in more than one place
#: If True, this object can be made available at more than one object path.
#: If True but `SUPPORTS_MULTIPLE_CONNECTIONS` is False, the object may
#: handle more than one object path, but they must all be on the same
#: connection.
#: If True, this object can be made available on more than one connection.
#: If True but `SUPPORTS_MULTIPLE_OBJECT_PATHS` is False, the object must
#: have the same object path on all its connections.
# someone's using the old API; don't gratuitously break them
# someone's using the old API but naming arguments, probably
#: Either an object path, None or _MANY
#: Either a dbus.connection.Connection, None or _MANY
#: A list of tuples (Connection, object path, False) where the False
#: is for future expansion (to support fallback paths)
#: Lock protecting `_locations`, `_connection` and `_object_path`
#: True if this is a fallback object handling a whole subtree.
# there's not really enough information to do anything useful here
# lookup candidate method and parent method
# set up method call parameters
# set up async callback functions
# include the sender etc. if desired
# pathological case: if we're exported in two places,
# one of which is a subtree of the other, then pick the
# subtree by preference (i.e. minimize the length of
# rel_path)
# we already have rel_path == path at the beginning
# yes we're in this exported subtree
# call method
# we're done - the method has got callback functions to reply with
# otherwise we send the return values in a reply. if we have a
# signature, use it to turn the return value into a tuple as
# appropriate
# if we have zero or one return values we want to make a tuple
# for the _method_reply_return function, otherwise we need
# to check we're passing it a sequence
# multi-value signature, multi-value return... proceed
# unchanged
# no signature, so just turn the return into a tuple and send it as normal
# If the return is a tuple that is not a Struct, we use it
# as-is on the assumption that there are multiple return
# values - this is the usual Python idiom. (fd.o #10174)
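The reply-marshalling rule sketched in the comments above can be illustrated like this (a simplified sketch, not dbus-python's actual code; `normalize_return` and its `signature_len` parameter are hypothetical names):

```python
def normalize_return(value, signature_len):
    """Sketch of the reply rule above: with a signature of length 0 or 1
    the single return value is wrapped in a tuple; a multi-value
    signature expects a sequence and passes it through unchanged."""
    if signature_len <= 1:
        # zero or one return values: wrap in a tuple
        return () if value is None else (value,)
    if not isinstance(value, (tuple, list)):
        raise TypeError("expected a sequence for a multi-value signature")
    # multi-value signature, multi-value return: proceed unchanged
    return tuple(value)
```
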
# send error reply
# Copyright (C) 2003-2007 Red Hat Inc. <http://www.redhat.com/>
# Copyright (C) 2005-2007 Collabora Ltd. <http://www.collabora.co.uk/>
# the test suite relies on the existence of this property
# defer the async call til introspection finishes
# we're being synchronous, so block
# trust that the proxy, and the properties it had, are OK
# fail early if the method name is bad
# fail early if the interface name is bad
# we don't get the signals unless the Bus has a main loop
# XXX: using Bus internals
# the attribute is still called _named_service for the moment,
# for the benefit of telepathy-python
# PendingCall object for Introspect call
# queue of async calls waiting on the Introspect to return
# dictionary mapping method names to their input signatures
# must be a recursive lock because block() is called while locked,
# and calls the callback which re-takes the lock
# XXX: We don't currently support this because it's the signal receiver
# that's responsible for tracking name owner changes, but it
# seems a natural thing to add in future.
#unique_bus_name = property(lambda self: something, None, None,
# FIXME: potential to flood the bus
# We should make sure mainloops all have idle handlers
# and do one message per idle
# else someone still has a _DeferredMethod from before we
# finished introspection: no need to do anything special any more
# someone still has a _DeferredMethod from before we
# finished introspection
# this can be done without taking the lock - the worst that can
# happen is that we accidentally return a _DeferredMethod just after
# finishing introspection, in which case _introspect_add_to_queue and
# _introspect_block will do the right thing anyway
# Copyright (C) 2007 Collabora Ltd. <http://www.collabora.co.uk/>
# The odd syntax used here is required so that the code is compatible with
# both Python 2 and Python 3.  It essentially creates a new class called
# ExportedGObject with a metaclass of ExportedGObjectType and an __init__()
# Because GObject and `dbus.service.Object` both have custom metaclasses, the
# naive approach using simple multiple inheritance won't work. This class has
# `ExportedGObjectType` as its metaclass, which is sufficient to make it work
# Copyright (C) 2008 Openismus GmbH <http://openismus.com/>
# Copyright (C) 2008 Collabora Ltd. <http://www.collabora.co.uk/>
# This method name is hard-coded in _dbus_bindings._Server.
# This is not public API.
# _bus_names is used by dbus.service.BusName!
# The signals lock is no longer held here (it was in <= 0.81.0)
# else it doesn't exist: try to start it
# already unique
# we don't get the signals otherwise
# XXX: it might be nice to signal IN_QUEUE, EXISTS by exception,
# but this would not be backwards-compatible
# FIXME: add an async success/error handler capability?
# (and the same for remove_...)
# Copyright (C) 2006 Collabora Ltd. <http://www.collabora.co.uk/>
# Copyright (C) 2004 Anders Carlsson
# Copyright (C) 2004, 2005, 2006 Red Hat Inc. <http://www.redhat.com/>
# We can't just use Exception.__unicode__ because it chains up weirdly.
# https://code.launchpad.net/~mvo/ubuntu/quantal/dbus-python/lp846044/+merge/129214
# if the connection is actually a bus, it's responsible for changing
# this later
# these haven't been checked yet by the match tree
# extracting args with byte_arrays is less work
# these have likely already been checked by the match tree
# minor optimization: if we already extracted the args with the
# right calling convention to do the args match, don't bother
# doing so again
# basicConfig is a no-op if logging is already configured
# do nothing if the connection has already vanished
# this if-block is needed because shared bus connections can be
# __init__'ed more than once
# Now called without the signals lock held (it was held in <= 0.81.0)
# no need to validate other args - MethodCallMessage ctor will do
# Add the arguments to the function
# we don't care what happens, so just send it
# (and we can let the recipient optimize by not replying to us)
# make a blocking call
# Copyright 2006-2021 Collabora Ltd.
# Imported into this module
# Submodules
# Copyright (C) 2004-2006 Red Hat Inc. <http://www.redhat.com/>
# Copyright (C) 2003-2007  Robey Pointer <robeypointer@gmail.com>
# This file is part of paramiko.
# Paramiko is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
# Paramiko is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE.  See the GNU Lesser General Public License for more
# details.
# You should have received a copy of the GNU Lesser General Public License
# along with Paramiko; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA.
# never convert this to ``s +=`` because this is a string, not a number
# noinspection PyAugmentAssignment
# after much testing, this algorithm was deemed to be the fastest
# strip off leading zeros, FFs
# degenerate case, n was either 0 or -1
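The stripping described above (drop redundant leading 0x00/0xFF bytes, handle the degenerate 0 and -1 cases) amounts to a minimal big-endian two's-complement encoding. A sketch of the technique; paramiko's real implementation (`util.deflate_long`) differs in detail:

```python
def deflate_long(n: int) -> bytes:
    """Minimal two's-complement big-endian encoding of an int.
    For n >= 0 the leading 0x00 bytes are stripped (keeping room for
    the sign bit); for n < 0 the leading 0xFF bytes are stripped.
    The degenerate cases 0 and -1 fall out as b'\\x00' and b'\\xff'."""
    if n >= 0:
        nbytes = n.bit_length() // 8 + 1
    else:
        nbytes = (n + 1).bit_length() // 8 + 1
    return n.to_bytes(nbytes, "big", signed=True)
```
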
# it's crazy how small Python can make this function.
# make only one filter object, so it doesn't get applied more than once
# Attempt to run through our version of b(), which does the Right Thing
# for unicode strings vs bytestrings, and raises TypeError if it's not
# one of those types.
# If it wasn't a string/byte/buffer-ish object, try calling an
# asbytes() method, which many of our internal classes implement.
# Finally, just do nothing & assume this object is sufficiently
# byte-y or buffer-y that everything will work out (or that callers
# are capable of handling whatever it is.)
# TODO: clean this up / force callers to assume bytes OR unicode
# identifier > 30
# now fetch length
# more complicated...
# FIXME: theoretically should handle indefinite-length (0x80)
# can't fit
# now switch on id
# sequence
# 1: boolean (00 false, otherwise true)
# no need to support ident > 31 here
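The length-fetching the comments above walk through follows the standard BER definite-length rules: a short form for lengths under 128, and a long form whose first octet counts the length octets that follow. A sketch (hypothetical helper; as the FIXME notes, indefinite-length 0x80 is not handled):

```python
def ber_length(data: bytes, idx: int):
    """Decode a BER length field starting at data[idx].
    Returns (length, index_of_next_byte)."""
    first = data[idx]
    idx += 1
    if first & 0x80 == 0:
        return first, idx                 # short form: length fits in 7 bits
    nbytes = first & 0x7F                 # long form: count of length octets
    length = int.from_bytes(data[idx:idx + nbytes], "big")
    return length, idx + nbytes
```
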
# Copyright (C) 2003-2011  Robey Pointer <robeypointer@gmail.com>
# TODO: I guess a real plugin system might be nice for future expansion...
# Copyright (C) 2005 John Arbash-Meinel <john@arbash-meinel.com>
# Modified up by: Todd Whiteman <ToddW@ActiveState.com>
# Note: The WM_COPYDATA value is pulled from win32con, as a workaround
# so we do not have to import this huge library just for this one variable.
# Raise a failure to connect exception, pageant isn't running anymore!
# create a name for the mmap
# Create an array buffer containing the mapped filename
# Create a string to use for the SendMessage function call
# Use the default level of zlib compression
# for debugging
# TODO: rewrite SFTP file/server modules' overly-flexible "make a request with
# xyz components" so we don't need this very silly method of signaling whether
# a given Python integer should be 32- or 64-bit.
# NOTE: this only became an issue when dropping Python 2 support; prior to
# doing so, we had to support actual-longs, which served as that signal. This
# is simply recreating that structure in a more tightly scoped fashion.
# ...internals...
# winscp will freak out if the server sends version info before the
# client finishes sending INIT.
# advertise that we support "check-file"
# sometimes sftp is used directly over a socket instead of
# through a paramiko channel.  in this case, check periodically
# if the socket is closed.  (for some reason, recv() won't ever
# return or raise an exception, but calling select on a closed
# socket will.)
# most sftp servers won't accept packets larger than about 32k, so
# anything with the high byte set (> 16MB) is just garbage.
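The garbage filter described above can be written as a one-line sanity check on the decoded packet length (illustrative only; the function name is hypothetical):

```python
def plausible_sftp_length(header: bytes) -> bool:
    """SFTP packets over ~32 KB are rare in practice, so any length
    with the high byte nonzero (>= 16 MB) is rejected as garbage."""
    length = int.from_bytes(header[:4], "big")
    return 0 < length < (1 << 24)
```
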
# Defined in RFC 5656 6.2
# Defined in RFC 5656 6.2.1
# TODO 4.0: remove; it does nothing since porting to cryptography.io
# Must set ecdsa_curve first; subroutines called herein may need to
# spit out our get_name(), which relies on this.
# But this also means we need to hand it a real key/curve
# identifier, so strip out any cert business. (NOTE: could push
# that into _ECDSACurveSet.get_by_key_format_identifier(), but it
# feels more correct to do it here?)
# TODO 4.0: deprecate/remove
# PKey._read_private_key_openssh() should check or return
# keytype - parsing could fail for any reason due to wrong type
# Copyright (C) 2021 Lew Gordon <lew.gordon@genesys.com>
# Copyright (C) 2022 Patrick Spendrin <ps_ml@gmx.de>
# use os.listdir() instead of os.path.exists(), because os.path.exists()
# uses CreateFileW() API and the pipe cannot be reopen unless the server
# calls DisconnectNamedPipe().
# retry when errno 22 which means that the server has not
# called DisconnectNamedPipe() yet.
# compute exchange hash
# construct reply
# compute exchange hash and verify signature
# NOTE: this does NOT change when using rsa2 signatures; it's
# purely about key loading, not exchange or verification
# NOTE: see #853 to explain some legacy behavior.
# TODO 4.0: replace with a nice clean fingerprint display or something
# HASHES being just a map from long identifier to either SHA1 or
# SHA256 - cert'ness is not truly relevant.
# And here again, cert'ness is irrelevant, so it is stripped out.
# NOTE: pad received signature with leading zeros, key.verify()
# expects a signature of key size (e.g. PuTTY doesn't pad)
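The zero-padding mentioned above (needed because e.g. PuTTY sends unpadded signatures, while `verify()` expects one of exactly key size) is a simple left-pad; a sketch with illustrative names:

```python
def pad_signature(sig: bytes, key_size_bytes: int) -> bytes:
    """Left-pad a raw signature with zero bytes up to the key size.
    Signatures already at (or beyond) key size are returned unchanged."""
    if len(sig) < key_size_bytes:
        sig = b"\x00" * (key_size_bytes - len(sig)) + sig
    return sig
```
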
# used for unit tests:
# need to save sockets in _rsock/_wsock so they don't get closed
# Copyright (C) 2013-2014 science + computing ag
# Author: Sebastian Deiss <sebastian.deiss@t-online.de>
#: A boolean constraint that indicates if GSS-API / SSPI is available.
#: A tuple of the exception types used by the underlying GSSAPI implementation.
#: :var str _API: Constraint for the used API
# old, unmaintained python-gssapi package
# keep this for compatibility
# client mode
# server mode
# Internals
# for key exchange with gssapi-keyex
# hostname and username are not required for GSSAPI, but for SSPI
# Verifies data and its signature.  If verification fails, an
# sspi.error will be raised.
# so here's the plan:
# we fetch as many random bits as we'd need to fit N-1, and if the
# generated number is >= N, we try again.  in the worst case (N-1 is a
# power of 2), we have slightly better than 50% odds of getting one that
# fits, so i can't guarantee that this loop will ever finish, but the odds
# of it looping forever should be infinitesimal.
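The plan above is classic rejection sampling: draw as many random bits as N-1 needs, retry on a draw >= N. Each attempt succeeds with probability better than 1/2, so the expected number of loops is under 2. A minimal sketch (`random_below` is a hypothetical name, not paramiko's API):

```python
import os

def random_below(n: int) -> int:
    """Uniform integer in [0, n) via rejection sampling on urandom."""
    nbits = (n - 1).bit_length()
    nbytes = (nbits + 7) // 8
    mask = (1 << nbits) - 1          # keep exactly as many bits as n-1 needs
    while True:
        candidate = int.from_bytes(os.urandom(nbytes), "big") & mask
        if candidate < n:            # reject draws that overshoot
            return candidate
```
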
# pack is a hash of: bits -> [ (generator, modulus) ... ]
# weed out primes that aren't at least:
# type 2 (meets basic structural requirements)
# test 4 (more than just a small-prime sieve)
# tries < 100 if test & 4 (at least 100 tries of miller-rabin)
# there's a bug in the ssh "moduli" file (yeah, i know: shock! dismay!
# call cnn!) where it understates the bit lengths of these primes by 1.
# this is okay.
# find nearest bitsize >= preferred
# if that failed, find greatest bitsize >= min
# their entire (min, max) range has no intersection with our range.
# if their range is below ours, pick the smallest.  otherwise pick
# the largest.  it'll be out of their range requirement either way,
# but we'll be sending them the closest one we have.
# now pick a random modulus of this bitsize
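The bitsize-selection strategy laid out above (nearest size >= preferred, else the largest size in their [min, max] range, else the closest end of our own range) can be sketched as follows. This is an illustration under assumed names, not paramiko's `ModulusPack` code:

```python
def pick_bitsize(available, min_bits, preferred, max_bits):
    """Choose a modulus bitsize from `available` (our pack) given the
    peer's (min, preferred, max) request."""
    sizes = sorted(available)
    # find nearest bitsize >= preferred (still within their max)
    candidates = [b for b in sizes if preferred <= b <= max_bits]
    if candidates:
        return candidates[0]
    # if that failed, find greatest bitsize within (min, max)
    candidates = [b for b in sizes if min_bits <= b <= max_bits]
    if candidates:
        return candidates[-1]
    # no intersection with their range: if their range is below ours,
    # pick our smallest; otherwise pick our largest
    return sizes[0] if max_bits < sizes[0] else sizes[-1]
```
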
# TODO 4.0: just merge into __bytes__ (everywhere)
# Limit padding to 1 MB
# TODO 4.0: depending on where this is used internally or downstream, force
# users to specify get_binary instead and delete this.
# TODO 4.0: also consider having this take over the get_string name, and
# remove this name instead.
# TODO: see the TODO for get_string/get_text/et al, this should change
# to match.
# TODO: this would never have worked for unicode strings under Python 3,
# guessing nobody/nothing ever used it for that purpose?
# TripleDES is moving from `cryptography.hazmat.primitives.ciphers.algorithms`
# in cryptography>=43.0.0 to `cryptography.hazmat.decrepit.ciphers.algorithms`
# It will be removed from `cryptography.hazmat.primitives.ciphers.algorithms`
# in cryptography==48.0.0.
# Source References:
# - https://github.com/pyca/cryptography/commit/722a6393e61b3ac
# - https://github.com/pyca/cryptography/pull/11407/files
# for thread cleanup
# These tuples of algorithm identifiers are in preference order; do not
# reorder without reason!
# NOTE: if you need to modify these, we suggest leveraging the
# `disabled_algorithms` constructor argument (also available in SSHClient)
# instead of monkeypatching or subclassing.
# ~= HostKeyAlgorithms in OpenSSH land
# ~= PubKeyAcceptedAlgorithms
# TODO: at some point we will want to drop this as it's no longer
# considered secure due to using SHA-1 for signatures. OpenSSH 8.8 no
# longer supports it. Question becomes at what point do we want to
# prevent users with older setups from using this?
# zlib@openssh.com is just zlib, but only turned on after a successful
# authentication.  openssh servers may only offer this type because
# they've had troubles with security holes in zlib in the past.
# TODO: these two overrides on sock's type should go away sometime, too
# many ways to do it!
# convert "host:port" into (host, port)
# connect to the given (host, port)
# addr = sockaddr
# okay, normal socket-ish flow here...
# we set the timeout so we can check self.active periodically to
# see if we should bail. socket.timeout exception is never propagated.
# negotiated crypto parameters
# GSS-API / SSPI Key Exchange
# This will be set to True if GSS-API Key Exchange was performed
# state used during negotiation
# synchronization (always higher level than write_lock)
# tracking open channels
# (id -> Event)
# (id -> True)
# response Message from an arbitrary global request
# user-defined event callbacks
# how long (seconds) to wait for the SSH banner
# how long (seconds) to wait for the handshake to finish after SSH
# banner sent.
# how long (seconds) to wait for the auth response.
# how long (seconds) to wait for opening a channel
# server mode:
# Handler table, now set at init time for easier per-instance
# manipulation and subclass twiddling.
# Interleave cert variants here; resistant to various background
# overwriting of _preferred_keys, and necessary as hostkeys can't use
# the logic pubkey auth does re: injecting/checking for certs at
# runtime
# No GSSAPI in play == nothing to do
# Obtain the correct host first - did user request a GSS-specific name
# to use that is distinct from the actual SSH target hostname?
# Finally, canonicalize via DNS if DNS is trusted.
# And set attribute for reference later.
# async, return immediately and let the app poll for completion
# synchronous, wait for a result
# Handle SHA-2 extensions for RSA by ensuring that lookups into
# self.server_key_dict will yield this key for any of the algorithm
# places to look for the openssh "moduli" file
# none succeeded
# src_addr, src_port = src_addr_port
# dest_addr, dest_port = dest_addr_port
# TODO: a more robust implementation would be to ask each key class
# for its nameS plural, and just use that.
# TODO: that could be used in a bunch of other spots too
# check host key if we were given one
# If GSS-API Key Exchange was performed, we are not required to check
# the host key.
# we should never try to send the password unless we're on a secure
# caller wants to wait for event themselves
# if password auth isn't allowed, but keyboard-interactive *is*,
# try to fudge it
# for some reason, at least on os x, a 2nd request will
# be made with zero fields requested.  maybe it's just
# to try to fake out automated scripting of the exact
# type we're doing here.  *shrug* :)
# attempt failed; just raise the original exception
# we should never try to authenticate unless we're on a secure link
# Keep trying to join() our main thread, quickly, until:
# * We join()ed successfully (self.is_alive() == False)
# * Or it looks like we've hit issue #520 (socket.recv hitting some
# race condition preventing it from timing out correctly), wherein
# our socket and packetizer are both closed (but where we'd
# otherwise be sitting forever on that recv()).
# internals...
# TODO 4.0: make a public alias for this because multiple other classes
# already explicitly rely on it...or just rewrite logging :D
# Fallback to SHA1 for kex engines that fail to specify a hash
# algorithm, or for e.g. transport tests that don't run kexinit.
# AEAD types (eg GCM) use their algorithm class /as/ the encryption
# engine (they expose the same encrypt/decrypt API as a CipherContext)
# All others go through the Cipher class.
# TODO: why is this getting tickled in aesgcm mode???
# only called if a channel has turned on x11 forwarding
# by default, use the same mechanism as accept()
# WELP. We must be dealing with someone trying to do non-auth things
# without being authed. Tell them off, based on message class.
# Global requests have no details, just failure.
# Channel opens let us reject w/ a specific type + message.
# NOTE: Post-open channel messages do not need checking; the above will
# reject attempts to open channels, meaning that even if a malicious
# user tries to send a MSG_CHANNEL_REQUEST, it will simply fall under
# the logic that handles unknown channel IDs (as the channel list will
# be empty.)
# (use the exposed "run" method, because if we specify a thread target
# of a private method, threading.Thread will keep a reference to it
# indefinitely, creating a GC cycle and not letting Transport ever be
# GC'd. it's a bug in Thread.)
# Hold reference to 'sys' so we can test sys.modules to detect
# interpreter shutdown.
# active=True occurs before the thread is launched, to avoid a race
# The above is actually very much part of the handshake, but
# sometimes the banner can be read but the machine is not
# responding, for example when the remote ssh daemon is loaded
# in to memory but we can not read from the disk/spawn a new
# shell.
# Make sure we can specify a timeout for the initial handshake.
# Reuse the banner timeout for now.
# These message IDs indicate key exchange & will differ
# depending on exact exchange algorithm
# Respond with "I don't implement this particular
# message type" message (unless the message type was
# itself literally MSG_UNIMPLEMENTED, in which case, we
# just shut up to avoid causing a useless loop).
# empty tuple, e.g. socket.timeout
# Don't raise spurious 'NoneType has no attribute X' errors when we
# wake up during interpreter shutdown. Or rather -- raise
# everything *if* sys.modules (used as a convenient sentinel)
# appears to still exist.
# Log useful, non-duplicative line re: an agreed-upon algorithm.
# Old code implied algorithms could be asymmetrical (different for
# inbound vs outbound) so we preserve that possibility.
# protocol stages
# throws SSHException on anything unusual
# remote side wants to renegotiate
# this is slow, but we only have to do it once
# give them 15 seconds for the first line, then just 2 seconds
# each additional line.  (some sites have very high latency.)
# save this server version string for later
# pull off any attached comment
# NOTE: comment used to be stored in a variable and then...never used.
# since 2003. ca 877cd974b8182d26fa76d566072917ea67b64e67
# parse out version string and make sure it matches
# can't do group-exchange if we don't have a pack of potential
# primes
# TODO: ensure tests will catch if somebody streamlines
# this by mistake - case is the admittedly silly one where
# the only calls to add_server_key() contain keys which
# were filtered out of the below via disabled_algorithms.
# If this is streamlined, we would then be allowing the
# disabled algorithm(s) for hostkey use
# TODO: honestly this prob just wants to get thrown out
# when we make kex configuration more straightforward
# Signal support for MSG_EXT_INFO so server will send it to us.
# NOTE: doing this here handily means we don't even consider this
# value when agreeing on real kex algo to use (which is a common
# pitfall when adding this apparently).
# Similar to ext-info, but used in both server modes, so done outside
# of above if/else.
# save a copy for later (needed to compute a hash)
# cookie, discarded
# TODO: shouldn't these two lines say "cipher" to match usual
# terminology (including elsewhere in paramiko!)?
# Record, and strip out, ext-info and/or strict-kex non-algorithms
# NOTE: this is what we are expecting from the /remote/ end.
# Set strict mode if agreed.
# CVE mitigation: expect zeroed-out seqno anytime we are performing kex
# init phase, if strict mode was negotiated.
# as a server, we pick the first item in the client's list that we
# support.
# as a client, we pick the first item in our list that the server
# supports.
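The two comments above describe the standard SSH first-match rule: the client's preference order always wins. A sketch (names are illustrative, not paramiko's internal API):

```python
def agree(server_mode: bool, ours: list, theirs: list) -> str:
    """First-match algorithm agreement."""
    if server_mode:
        # as a server, pick the first item in the client's list we support
        preferred, supported = theirs, ours
    else:
        # as a client, pick the first item in our list the server supports
        preferred, supported = ours, theirs
    for algo in preferred:
        if algo in supported:
            return algo
    raise ValueError("no agreeable algorithm")
```
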
# TODO: do an auth-overhaul style aggregate exception here?
# TODO: would let us streamline log output & show all failures up
# front
# save for computing hash later...
# now wait!  openssh has a bug (and others might too) where there are
# actually some extra bytes (one NUL byte in openssh's case) added to
# the end of the packet but not parsed.  turns out we need to throw
# away those bytes because they aren't part of the hash.
# Non-AEAD/GCM type ciphers' IV size is their block size.
# initial mac keys are done in the hash's natural size (not the
# potentially truncated transmission size)
# Reset inbound sequence number if strict mode.
# Reset outbound sequence number if strict mode.
# If client indicated extension support, send that packet immediately
# we always expect to receive NEWKEYS now
# delayed initiation of compression
# Packet is a count followed by that many key-string to possibly-bytes
# pairs.
# NOTE: this should work ok in cases where a server sends /two/ such
# messages; the RFC explicitly states a 2nd one should overwrite the
# 1st.
# can also free a bunch of stuff here
# create auth handler for server mode
# this was the first key exchange
# (also signal to packetizer as it sometimes wants to know this
# status as well, eg when seqnos rollover)
# send an event?
# it's now okay to send data again (if this was a re-key)
# ignored language
# handle direct-tcpip requests coming from the client
# always_display
# language
# TODO 4.0: drop this, we barely use it ourselves, it badly replicates the
# Transport-internal algorithm management, AND does so in a way which doesn't
# honor newer things like disabled_algorithms!
# (id -> Channel)
# NOTE: this purposefully duplicates some of the parent class in order to
# modernize, refactor, etc. The intent is that eventually we will collapse
# this one onto the parent in a backwards incompatible release.
# Short-circuit for any service name not ssh-userauth.
# NOTE: it's technically possible for 'service name' in
# SERVICE_REQUEST/ACCEPT messages to be "ssh-connection" --
# but I don't see evidence of Paramiko ever initiating or expecting to
# receive one of these. We /do/ see the 'service name' field in
# MSG_USERAUTH_REQUEST/ACCEPT/FAILURE set to this string, but that is a
# different set of handlers, so...!
# TODO 4.0: consider erroring here (with an ability to opt out?)
# instead as it probably means something went Very Wrong.
# Record that we saw a service-userauth acceptance, meaning we are free
# to submit auth requests.
# Make sure we're not trying to auth on a not-yet-open or
# already-closed transport session; that's our responsibility, not that
# of AuthHandler.
# TODO: better error message? this can happen in many places, eg
# user error (authing before connecting) or developer error (some
# improperly handled pre/mid auth shutdown didn't become fatal
# enough). The latter is much more common & should ideally be fixed
# by terminating things harder?
# Also make sure we've actually been told we are allowed to auth.
# Or request to do so, otherwise.
# Now we wait to hear back; the user is expecting a blocking-style auth
# request so there's no point giving control back anywhere.
# TODO: feels like we're missing an AuthHandler Event like
# 'self.auth_event' which is set when AuthHandler shuts down in
# ways good AND bad. Transport only seems to have completion_event
# which is unclear re: intent, eg it's set by newkeys which always
# happens on connection, so it'll always be set by the time we get
# here.
# NOTE: this copies the timing of event.wait() in
# AuthHandler.wait_for_response, re: 1/10 of a second. Could
# presumably be smaller, but seems unlikely this period is going to
# be "too long" for any code doing ssh networking...
# NOTE: using new sibling subclass instead of classic AuthHandler
# TODO 4.0: merge to parent, preserving (most of) docstring
# attempt to fudge failed; just raise the original exception
# NOTE: legacy impl omitted equiv of ensure_session since it just wraps
# another call to an auth method. however we reinstate it for
# consistency reasons.
# Copyright (C) 2006-2007  Robey Pointer <robeypointer@gmail.com>
# Make sure the event starts in `set` state if we appear to already
# be closed; otherwise, if we start in `clear` state & are closed,
# nothing will ever call `.feed` and the event (& OS pipe, if we're
# wrapping one - see `Channel.fileno`) will permanently stay in
# `clear`, causing deadlock if e.g. `select`ed upon.
# should we block?
# loop here in case we get woken up but a different thread has
# grabbed everything in the buffer.
# something's in the buffer and we have the lock!
# Copyright (C) 2012  Yipit, Inc <coders@yipit.com>
# Try-and-ignore import so platforms w/o subprocess (eg Google App Engine) can
# still import paramiko.
# There was a problem with the child process. It probably
# died and we can't proceed. The best option here is to
# raise an exception informing the user that the informed
# ProxyCommand is not working.
# Don't raise socket.timeout, return partial result instead
# socket.timeout is a subclass of IOError
# Concession to Python 3 socket-like API
# pos - position within the file, according to the user
# realpos - position according the OS
# (these may be different because we buffer for line reading)
# size only matters for seekable files
# go for broke
# it's almost silly how complex this function is.
# edge case: the newline may be '\r\n' and we may have read
# only the first '\r' last time.
# check size before looking for a linefeed, in case we already have
# enough.
# truncate line
# find the newline
# we couldn't find a newline in the truncated string, return it
# if the string was truncated, _rbuffer needs to have the string after
# the newline character plus the truncated part of the line we stored
# earlier in _rbuffer
# we could read the line up to a '\r' and there could still be a
# '\n' following that we read next time.  note that and eat it.
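The edge case above (a `'\r\n'` terminator straddling two reads) is the tricky part of this line splitting: after emitting a line that ended in a bare `'\r'`, a leading `'\n'` in the next chunk must be eaten. A sketch of one step of that logic, with hypothetical names, not `SFTPFile`'s real code:

```python
def pop_line(buffer: bytes, saw_cr: bool):
    """Pull one line off `buffer`. Returns (line_or_None, rest, saw_cr);
    saw_cr=True means the line ended in a bare '\\r' at end of buffer,
    so a leading '\\n' next time is the second half of a split '\\r\\n'."""
    if saw_cr and buffer.startswith(b"\n"):
        buffer = buffer[1:]              # eat the straddling '\n'
    for i, byte in enumerate(buffer):
        if byte in (0x0A, 0x0D):         # '\n' or '\r'
            line, rest = buffer[:i], buffer[i + 1:]
            if byte == 0x0D:
                if rest.startswith(b"\n"):
                    rest = rest[1:]      # '\r\n' fully inside this chunk
                elif not rest:
                    return line, rest, True   # note the bare trailing '\r'
            return line, rest, False
    return None, buffer, False           # no complete line yet
```
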
# Accept text and encode as utf-8 for compatibility only.
# only scan the new data for linefeed, to avoid wasting time.
# even if we're line buffering, if the buffer has grown past the
# buffer size, force a flush.
# ...overrides...
# set bufsize in any event, because it's used for readline().
# do no buffering by default, because otherwise writes will get
# buffered in a way that will probably confuse people.
# apparently, line buffering only affects writes.  reads are only
# buffered if you call readline (directly or indirectly: iterating
# over a file will indirectly call readline).
# unbuffered
# built-in file objects have this attribute to store which kinds of
# line terminations they've seen:
# <http://www.python.org/doc/current/lib/built-in-funcs.html>
# the underlying stream may be something that does partial writes (like
# a socket).
# silliness about tracking what kinds of newlines we've seen.
# i don't understand why it can be None, a string, or a tuple, instead
# of just always being a tuple, but we'll emulate that behavior anyway.
# This file is part of Paramiko.
# lock for request_number
# request # -> SFTPFile
# override default logger
# NOTE: these bits MUST continue using %-style format junk because
# logging.Logger.log() explicitly requires it. Grump.
# escape '%' in msg (they could come from file or directory names)
# before logging
# done with handle
# Send out a bunch of readdir requests so that we can read the
# responses later on Section 6.7 of the SSH file transfer RFC
# explains this
# http://filezilla-project.org/specs/draft-ietf-secsh-filexfer-02.txt
# For each of our sent requests
# Read and parse the corresponding packets
# If we're at the end of our queued requests, then fire off
# some more requests
# Exit the loop when we've reached the end of the directory
# handle
# If we've hit the end of our queued requests, reset nums.
# Python continues to vacillate about "open" vs "file"...
# TODO: make class initialize with self._cwd set to self.normalize('.')
# this method may be called from other threads (prefetch)
# For all other types, rely on as_string() to either coerce
# to bytes before writing or raise a suitable exception.
# might be response for a file that was closed before
# responses came back
# just doing a single check
# synchronous
# cannot rewrite this to deal with E721, either as a None check
# or as an isinstance() check against NoneType
# clever idea from john a. meinel: map the error codes to errno
# absolute path
# Formerly of py3compat.py. May be fully delete'able with a deeper look?
# In case we're handed a string instead of an int.
# for debugging:
# authentication request return codes:
# channel request failed reasons:
# Common IO/select/etc sleep period, in seconds
# lower bound on the max packet size we'll accept from the remote host
# Minimum packet size is 32768 bytes according to
# http://www.ietf.org/rfc/rfc4254.txt
# However, according to http://www.ietf.org/rfc/rfc4253.txt it is perfectly
# legal to accept a much smaller size, as the OpenSSH client does (16384).
# Max windows size according to http://www.ietf.org/rfc/rfc4254.txt
# We may eventually want this to be usable for other key types, as
# OpenSSH moves to it, but for now this is just for Ed25519 keys.
# This format is described here:
# https://github.com/openssh/openssh-portable/blob/master/PROTOCOL.key
# The description isn't totally complete, and I had to refer to the
# source for a full implementation.
# kdfname of "none" must have an empty kdfoptions, the ciphername
# must be "none"
# We can't control how many rounds are on disk, so no sense
# warning about it.
# A copy of the public key, again, ignore.
# The second half of the key data is yet another copy of the public
# key...
# Verify that all the public keys are the same...
# Comment, ignore.
# TODO 4.0: remove
# Some sftp servers will choke if you send read/write requests larger than
# this size.
# We allow double-close without signaling an error, because real
# Python file objects do.  However, we must protect against actually
# sending multiple CMD_CLOSE packets, because after we close our
# handle, the same handle may be re-allocated by the server, and we
# may end up mysteriously closing some random other file.  (This is
# especially important because we unconditionally call close() from
# __del__.)
# GC'd file handle could be called from an arbitrary thread
# -- don't wait for a response
# may have outlived the Transport connection
# prefetch request ends before this one begins
# inclusive
# well, we have part of the request.  see if another chunk has the rest
# it's not here
# while not closed, and haven't fetched past the current position,
# and haven't reached EOF...
# may write less than requested if it would exceed max packet size
# convert_status already called
# ext
# alg
# queue up async reads for the rest of the file
# don't fetch data that's already in the prefetch buffer
# break up anything larger than the max read size
# now we can just devolve to a bunch of read()s :)
# do these read requests in a temporary thread because there may be
# a lot of them, so it may block.
# Limit the number of concurrent requests in a busy-loop
# save exception and re-raise it on next file operation
# spin if in race with _prefetch_thread
# draft-ietf-secsh-transport-09.txt, page 17
# compute f = g^x mod p, but don't send it yet
# compute e = g^x mod p (where g=2), and send it
# generate an "x" (1 < x < q), where q is (p-1)/2.
# p is a 128-byte (1024-bit) number, where the first 64 bits are 1.
# therefore q can be approximated as 2^1023.  we drop the subset of
# potential x where the first 63 bits are 1, because some of those
# will be larger than q (but this is a tiny tiny subset of
# potential x).
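The x-generation scheme the comments above describe can be sketched as follows. This is an illustrative, minimal version (the function name and bit-twiddling details are assumptions, not Paramiko's actual implementation): draw random 1024-bit candidates and reject the tiny subset whose first 63 bits are all ones, since those may exceed q ~= 2^1023.

```python
import os

def generate_x(nbits=1024):
    # Generate a candidate x with 1 < x < q, where q = (p - 1) / 2.
    # Since p's first 64 bits are all ones, q is roughly 2^1023, so we
    # reject candidates whose first 63 bits are all ones (a tiny subset).
    all_ones = (1 << 63) - 1
    while True:
        x = int.from_bytes(os.urandom(nbits // 8), "big")
        if x > 1 and (x >> (nbits - 63)) != all_ones:
            return x
```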
# okay, build up the hash H of
# (V_C || V_S || I_C || I_S || K_S || e || f || K)
# sign it
# send reply
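The exchange-hash construction above can be sketched in a few lines. This is a hedged illustration only: helper names are made up, SHA-1 is shown because the sha1-suffixed group-exchange methods use it (other kex variants use SHA-256/512), and the `string`/`mpint` encodings follow the SSH wire format.

```python
import struct
from hashlib import sha1

def _string(b):
    # SSH "string": uint32 big-endian length prefix + raw bytes
    return struct.pack(">I", len(b)) + b

def _mpint(n):
    # SSH "mpint": length-prefixed big-endian integer; the extra bit of
    # headroom in the byte count keeps the sign bit clear for positive n
    raw = n.to_bytes((n.bit_length() + 8) // 8, "big")
    return _string(raw)

def exchange_hash(V_C, V_S, I_C, I_S, K_S, e, f, K):
    # H = hash(V_C || V_S || I_C || I_S || K_S || e || f || K)
    data = b"".join([
        _string(V_C), _string(V_S), _string(I_C), _string(I_S),
        _string(K_S), _mpint(e), _mpint(f), _mpint(K),
    ])
    return sha1(data).digest()
```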
# Copyright (C) 2012  Olle Lundberg <geek@nerd.sh>
# TODO: do a full scan of ssh.c & friends to make sure we're fully
# compatible across the board, e.g. OpenSSH 8.1 added %n to ProxyCommand.
# Doesn't seem worth making this 'special' for now, it will fit well
# enough (no actual match-exec config key to be confused with).
# Start out w/ implicit/anonymous global host-like block to hold
# anything not contained by an explicit one.
# Strip any leading or trailing whitespace from the line.
# Refer to https://github.com/paramiko/paramiko/issues/499
# Skip blanks, comments
# Parse line into key, value
# Host keyword triggers switch to new block/context
# TODO 4.0: make these real objects or at least name this
# "hosts" to acknowledge it's an iterable. (Doing so prior
# to 3.0, despite it being a private API, feels bad -
# surely such an old codebase has folks actually relying on
# these keys.)
# Special-case for noop ProxyCommands
# Store 'none' as None - not as a string implying that the
# proxycommand is the literal shell command "none"!
# All other keywords get stored, directly or via append
# identityfile, localforward, remoteforward keys are special
# cases, since they are allowed to be specified multiple times
# and they should be tried in order of specification.
# Store last 'open' block and we're done
# First pass
# Inject HostName if it was not set (this used to be done incidentally
# during tokenization, for some reason).
# Handle canonicalization
# NOTE: OpenSSH manpage does not explicitly state this, but its
# implementation for CanonicalDomains is 'split on any whitespace'.
# Overwrite HostName again here (this is also what OpenSSH does)
# Init
# Iterate all stanzas, applying any that match, in turn (so that things
# like Match can reference currently understood state)
# Create a copy of the original value,
# else it will reference the original list
# in self._config and update that value too
# when the extend() is being called.
# Expand variables in resulting values
# (besides 'Match exec' which was already handled above)
# TODO: would we want to dig deeper into other results? e.g. to
# find something that satisfies PermittedCNAMEs when that is
# implemented?
# TODO: what does ssh use here and is there a reason to use
# that instead of gethostbyname?
# TODO: follow CNAME (implied by found != candidate?) if
# CanonicalizePermittedCNAMEs allows it
# If we got here, it means canonicalization failed.
# When CanonicalizeFallbackLocal is undefined or 'yes', we just spit
# back the original hostname.
# And here, we failed AND fallback was set to a non-yes value, so we
# need to get mad.
# Convenience auto-splitter if not already a list
# Short-circuit if target matches a negated pattern
# Flag a match, but continue (in case of later negation) if regular
# match occurs
# Obtain latest host/user value every loop, so later Match may
# reference values assigned within a prior Match.
# Canonical is a hard pass/fail based on whether this is a
# canonicalized re-lookup.
# The parse step ensures we only see this by itself or after
# canonical, so it's also an easy hard pass. (No negation here as
# that would be uh, pretty weird?)
# From here, we are testing various non-hard criteria,
# short-circuiting only on fail
# This is the laziest spot in which we can get mad about an
# inability to import Invoke.
# Like OpenSSH, we 'redirect' stdout but let stderr bubble up
# Tackle any 'passed, but was negated' results from above
# Made it all the way here? Everything matched!
# Did anything match? (To be treated as bool, usually.)
# Short-circuit if no tokenization possible
# Obtain potentially configured hostname, for use with %h.
# Special-case where we are tokenizing the hostname itself, to avoid
# replacing %h with a %h-bearing value, etc.
# Ditto the rest of the source values
# The actual tokens!
# TODO: %%???
# TODO: %i?
# also this is pseudo buggy when not in Match exec mode so document
# that. also WHY is that the case?? don't we do all of this late?
# TODO: %T? don't believe this is possible however
# Do the thing with the stuff
# TODO: log? eg that value -> tokenized
# Handle per-keyword negation
# all/canonical have no params (everything else does)
# Perform some (easier to do now than in the middle) validation that is
# better handled here than at lookup time.
# If the SSH config contains AddressFamily, use that when
# determining the local host's FQDN. Using socket.getfqdn() from
# the standard library is the most general solution, but can
# result in noticeable delays on some platforms when IPv6 is
# misconfigured or not available, as it calls getaddrinfo with no
# address family specified, so both IPv4 and IPv6 are checked.
# Handle specific option
# Handle 'any' / unspecified / lookup failure
# Cache
# ...Channel requests...
# request a bit range: we accept (min_bits) to (max_bits), but prefer
# (preferred_bits).  according to the spec, we shouldn't pull the
# minimum up above 1024.
# only used for unit tests: we shouldn't ever send this
# generate an "x" (1 < x < (p-1)/2).
# smoosh the user's preferred size into our own limits
# fix min/max if they're inconsistent.  technically, we could just pout
# and hang up, but there's no harm in giving them the benefit of the
# doubt and just picking a bitsize for them.
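The "smoosh the user's preferred size into our own limits" logic above, including the benefit-of-the-doubt fix for an inconsistent min/max, can be sketched like this (names and the 1024/8192 bounds are taken from the surrounding comments; the function itself is illustrative, not Paramiko's code):

```python
def clamp_bits(min_bits, preferred_bits, max_bits,
               our_min=1024, our_max=8192):
    # Fix an inconsistent min/max rather than hanging up on the client.
    if max_bits < min_bits:
        min_bits, max_bits = max_bits, min_bits
    # Per the spec, don't pull the minimum up above 1024.
    min_bits = max(min_bits, our_min)
    max_bits = min(max_bits, our_max)
    # Clamp the preferred size into the (possibly narrowed) range.
    preferred = max(min_bits, min(preferred_bits, max_bits))
    return min_bits, preferred, max_bits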
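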
# now save a copy
# generate prime
# same as above, but without min_bits or max_bits (used by older
# clients like putty)
# reject if p's bit length < 1024 or > 8192
# now compute e = g^x mod p
# (V_C || V_S || I_C || I_S || K_S || min || n || max || p || g || e || f || K)  # noqa
# emulate a dict of { hostname: { keytype: PKey } }
# replace
# add a new one
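The "emulate a dict of { hostname: { keytype: PKey } }" idea, with the replace-or-add behaviour noted just above, can be sketched with a toy class (the class and method names are illustrative only, not the real HostKeys API):

```python
class HostKeyMap:
    """Two-level lookup: hostname -> {keytype: key}."""

    def __init__(self):
        self._entries = []  # (hostname, keytype, key) triples

    def add(self, hostname, keytype, key):
        for i, (h, t, _) in enumerate(self._entries):
            if h == hostname and t == keytype:
                # replace the existing entry for this host/type pair
                self._entries[i] = (hostname, keytype, key)
                return
        # add a new one
        self._entries.append((hostname, keytype, key))

    def lookup(self, hostname):
        found = {t: k for h, t, k in self._entries if h == hostname}
        return found or None
```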
# don't use this please.
# Bad number of fields
# Decide what kind of key we're looking at and create an object
# to hold it accordingly.
# TODO: this grew organically and doesn't seem /wrong/ per se (file
# read -> unicode str -> bytes for base64 decode -> decoded bytes);
# but in Python 3 forever land, can we simply use
# `base64.b64decode(str-from-file)` here?
# TODO 4.0: consider changing HostKeys API so this just raises
# naturally and the exception is muted higher up in the stack?
# on windows, normalize backslashes to sftp/posix format
# try the user's .ssh key file, and mask exceptions
# update local host keys from file (in case other SSH clients
# have written to the known_hosts file meanwhile)
# some OS like AIX don't indicate SOCK_STREAM support, so just
# guess. :(  We only do this if we did not get a single result marked
# as socktype == SOCK_STREAM.
# Try multiple possible address families (e.g. IPv4 vs IPv6)
# Break out of the loop on success
# As mentioned in socket docs it is better
# to close sockets explicitly
# Raise anything that isn't a straight up connection error
# (such as a resolution error)
# Capture anything else so we know how the run looks once
# iteration is complete. Retain info about which attempt
# this was.
# Make sure we explode usefully if no address family attempts
# succeeded. We've no way of knowing which error is the "right"
# one, so we construct a hybrid exception containing all the real
# ones, of a subclass that client code should still be watching for
# (socket.error)
# t.hostname may be None, but GSS-API requires a target name.
# Therefore use hostname as fallback.
# If GSS-API Key Exchange is performed we are not required to check the
# host key, because the host is authenticated via GSS-API / SSPI as
# well as our client.
# will raise exception if the key is rejected
# New auth flow!
# Old auth flow!
# Assume privkey, not cert, by default
# Blindly try the key path; if no private key, nothing will work.
# TODO: change this to 'Loading' instead of 'Trying' sometime; probably
# when #387 is released, since this is a critical log message users are
# likely testing/filtering for (bah.)
# Attempt to load cert if it exists.
# If GSS-API support and GSS-API Key Exchange was performed, we attempt
# authentication with gssapi-keyex.
# Try GSS-API authentication (gssapi-with-mic) only if GSS-API Key
# Exchange is not performed, because if we use GSS-API for the key
# exchange, there is already a fully established GSS-API context, so
# why should we do that again?
# TODO 4.0: leverage PKey.from_path() if we don't end up just
# killing SSHClient entirely
# for 2-factor auth a successfully auth'd key password
# will return an allowed 2fac auth method
# ~/ssh/ is for windows
# TODO: only do this append if below did not run
# for 2-factor auth a successfully auth'd key will result
# in ['password']
# if we got an auth-failed exception earlier, re-raise it
# private key, client public and server public keys
# SEC1: V2.0  2.3.3 Elliptic-Curve-Point-to-Octet-String Conversion
# for server mode:
# for GSSAPI
# Use certificate contents, if available, plain pubkey otherwise
# this is horrible.  Python Exception isn't yet descended from
# object, so type(e) won't work. :(
# TODO 4.0: lol. just lmao.
# accepted
# dunno this one
# For use in server mode.
# Fallback: first one in our (possibly tweaked by caller) list
# Short-circuit for non-RSA keys
# NOTE re #2017: When the key is an RSA cert and the remote server is
# OpenSSH 7.7 or earlier, always use ssh-rsa-cert-v01@openssh.com.
# Those versions of the server won't support rsa-sha2 family sig algos
# for certs specifically, and in tandem with various server bugs
# regarding server-sig-algs, it's impossible to fit this into the rest
# of the logic here.
# Normal attempts to handshake follow from here.
# Only consider RSA algos from our list, lest we agree on another!
# Short-circuit negatively if user disabled all RSA algos (heh)
# Check for server-sig-algs if supported & sent
# Prefer to match against server-sig-algs
# Only use algos from our list that the server likes, in our own
# preference order. (NOTE: purposefully using same style as in
# Transport...expect to refactor later)
# TODO: MAY want to use IncompatiblePeer again here but that's
# technically for initial key exchange, not pubkey auth.
# Fallback to something based purely on the key & our configuration
# send the supported GSSAPI OIDs to the server
# Read the mechanism selected by the server. We send just
# the Kerberos V5 OID, so the server can only respond with
# this OID.
# After this step the GSSAPI should not return any
# token. If it does, we keep sending the token to
# the server until no more token is returned.
# send the MIC to the server
# RFC 4462 says we are not required to implement GSS-API
# error messages.
# See RFC 4462 Section 3.8 in
# http://www.ietf.org/rfc/rfc4462.txt
# Lang tag - discarded
# okay, send result
# make interactive query instead of response
# er, uh... what?
# ignore
# check if GSS-API authentication is enabled
# some clients/servers expect non-utf-8 passwords!
# in this case, just return the raw byte string.
# always treated as failure, since we don't support changing
# passwords, but collect the list of valid auth types from
# the callback anyway
# NOTE: server never wants to guess a client's algo, they're
# telling us directly. No need for _finalize_pubkey_algorithm
# anywhere in this flow.
# first check if this key is okay... if not, we can skip the verify
# key is okay, verify it
# client wants to know if this key is acceptable, before it
# signs anything...  send special "ok" message
# Read the number of OID mechanisms supported by the client.
# OpenSSH sends just one OID. It's the Kerberos V5 OID and that's
# the only OID we support.
# We can't accept more than one OID, so if the SSH client sends
# more than one, disconnect.
# if we don't support the mechanism, disconnect.
# send the Kerberos V5 GSSAPI OID to the client
# RFC 4462 says we are not required to implement GSS-API error
# messages. See section 3.8 in http://www.ietf.org/rfc/rfc4462.txt
# If there is no valid context, we reject the authentication
# TODO 4.0: we aren't giving callers access to authlist _unless_ it's
# partial authentication, so eg authtype=none can't work unless we
# tweak this.
# who cares.
# lang
# TODO 4.0: MAY make sense to make these tables into actual
# classes/instances that can be fed a mode bool or whatever. Or,
# alternately (both?) make the message types small classes or enums that
# embed this info within themselves (which could also then tidy up the
# current 'integer -> human readable short string' stuff in common.py).
# TODO: if we do that, also expose 'em publicly.
# Messages which should be handled _by_ servers (sent by clients)
# TODO 4.0: MSG_SERVICE_REQUEST ought to eventually move into
# Transport's server mode like the client side did, just for
# Messages which should be handled _by_ clients (sent by servers)
# NOTE: prior to the fix for #1283, this was a static dict instead of a
# property. Should be backwards compatible in most/all cases.
# use the client token as input to establish a secure
# context.
# TODO: Implement client credential saving.
# The OpenSSH server is able to create a TGT with the delegated
# client credentials, but this is not supported by GSS-API.
# TODO: determine if we can cut this up like we did for the primary
# AuthHandler class.
# Store a few things for reference in handlers, including auth failure
# handler (which needs to know if we were using a bad method, etc)
# Generic userauth request fields
# Caller usually has more to say, such as injecting password, key etc
# TODO 4.0: seems odd to have the client handle the lock and not
# Transport; that _may_ have been an artifact of allowing user
# threading event injection? Regardless, we don't want to move _this_
# locking into Transport._send_message now, because lots of other
# untouched code also uses that method and we might end up
# double-locking (?) but 4.0 would be a good time to revisit.
# We have cut out the higher level event args, but self.auth_event is
# still required for self.wait_for_response to function correctly (it's
# the mechanism used by the auth success/failure handlers, the abort
# handler, and a few other spots like in gssapi).
# TODO: interestingly, wait_for_response itself doesn't actually
# enforce that its event argument and self.auth_event are the same...
# This field doesn't appear to be named, but is False when querying
# for permission (ie knowing whether to even prompt a user for
# passphrase, etc) or True when just going for it. Paramiko has
# never bothered with the former type of message, apparently.
# Unnamed field that equates to "I am changing my password", which
# Paramiko clientside never supported and serverside only sort of
# supported.
# Unlike most siblings, this auth method _does_ require other
# superclass handlers (eg userauth info request) to understand
# what's going on, so we still set some self attributes.
# Empty string for deprecated language tag field, per RFC 4256:
# https://www.rfc-editor.org/rfc/rfc4256#section-3.1
# NOTE: not strictly 'auth only' related, but allows users to opt-in.
# At the moment, this is only used for unpadding private keys on disk. This
# really ought to be made constant time (possibly by upstreaming this logic
# into pyca/cryptography).
# no padding, last byte part comment (printable ascii)
# known encryption types for private key files:
# TODO: make sure sphinx is reading Path right in param list...
# Lazy import to avoid circular import issues
# Normalize to string, as cert suffix isn't quite an extension, so
# pathlib isn't useful for this.
# Sort out cert vs key, i.e. it is 'legal' to hand this kind of API
# /either/ the key /or/ the cert, when there is a key/cert pair.
# Like OpenSSH, try modern/OpenSSH-specific key load first
# Then fall back to assuming legacy PEM type
# TODO Python 3.10: match statement? (NOTE: we cannot use a dict
# because the results from the loader are literal backend, eg openssl,
# private classes, so isinstance tests work but exact 'x class is y'
# tests will not work)
# TODO: leverage already-parsed/math'd obj to avoid duplicate cpu
# cycles? seemingly requires most of our key subclasses to be rewritten
# to be cryptography-object-forward. this is still likely faster than
# the old SSHClient code that just tried instantiating every class!
# load_certificate can take Message, path-str, or value-str
# TODO: needs to passthru things like passphrase
# TODO 4.0: make this and subclasses consistent, some of our own
# classmethods even assume kwargs we don't define!
# TODO 4.0: prob also raise NotImplementedError instead of pass'ing; the
# contract is pretty obviously that you need to handle msg/data/filename
# appropriately. (If 'pass' is a concession to testing, see about doing the
# work to fix the tests instead)
# TODO: arguably this might want to be __str__ instead? ehh
# TODO: ditto the interplay between showing class name (currently we just
# say PKey writ large) and algorithm (usually == class name, but not
# always, also sometimes shows certificate-ness)
# TODO: if we do change it, we also want to tweak eg AgentKey, as it
# currently displays agent-ness with a suffix
# Works for AgentKey, may work for others?
# Nuke the leading 'ssh-'
# TODO in Python 3.9: use .removeprefix()
# Trim any cert suffix (but leave the -cert, as OpenSSH does)
# Nuke any eg ECDSA suffix, OpenSSH does basically this too.
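The name-trimming steps above (drop a leading "ssh-", trim the cert suffix but leave "-cert" as OpenSSH does, and drop an ECDSA curve suffix) can be sketched as a small helper. The function name and the exact suffix strings are illustrative assumptions:

```python
def display_name(key_type):
    # Nuke the leading 'ssh-' (Python 3.9+: key_type.removeprefix("ssh-"))
    if key_type.startswith("ssh-"):
        key_type = key_type[len("ssh-"):]
    # Trim the cert suffix, but leave the "-cert" part
    suffix = "-cert-v01@openssh.com"
    if key_type.endswith(suffix):
        key_type = key_type[: -len(suffix)] + "-cert"
    # Nuke any ECDSA curve suffix
    for curve in ("-nistp256", "-nistp384", "-nistp521"):
        key_type = key_type.replace(curve, "")
    return key_type
```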
# TODO 4.0: raise NotImplementedError, 0 is unlikely to ever be
# _correct_ and nothing in the critical path seems to use this.
# yes, OpenSSH does this too!
# TODO 4.0: NotImplementedError (plus everywhere else in here)
# find the BEGIN tag
# find the END tag
# parse any headers first
# if we trudged to the end of the file, just try to cope.
# unencrypted: done
# encrypted keyfile: will need a password
# if no password was passed in,
# raise an exception pointing out that we need one
# read data struct
# For now, just support 1 key.
# Encrypted private key.
# If no password was passed in, raise an exception pointing
# out that we need one
# Unpack salt and rounds from kdfoptions
# run bcrypt kdf to derive key and iv/nonce (32 + 16 bytes)
# decrypt private key blob
# Unencrypted private key
# Unpack private key and verify checkints
# string
# long integer
# 32-bit unsigned int
# remainder as string
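The field types annotated above (string, long integer, 32-bit unsigned int) follow the SSH wire format: a `string` is a uint32 length prefix plus raw bytes, and an `mpint` is a length-prefixed big-endian integer. A minimal set of reader helpers, with invented names, might look like:

```python
import struct

def read_string(data, off):
    # uint32 big-endian length prefix, then that many bytes
    (n,) = struct.unpack_from(">I", data, off)
    return data[off + 4 : off + 4 + n], off + 4 + n

def read_uint32(data, off):
    # 32-bit unsigned int
    return struct.unpack_from(">I", data, off)[0], off + 4

def read_mpint(data, off):
    # long integer: a string whose bytes are a big-endian value
    raw, off = read_string(data, off)
    return int.from_bytes(raw, "big"), off
```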
# PKey-consuming code frequently wants to save-and-skip-over issues
# with loading keys, and uses SSHException as the (really friggin
# awful) signal for this. So for now...we do this.
# Ensure that we create new key files directly with a user-only mode,
# instead of opening, writing, then chmodding, which leaves us open to
# CVE-2022-24302.
# NOTE: O_TRUNC is a noop on new files, and O_CREAT is a noop
# on existing files, so using all 3 in both cases is fine.
# Ditto the use of the 'mode' argument; it should be safe to
# give even for existing files (though it will not act like a
# chmod in that case).
# Yea, you still gotta inform the FLO that it is in "write" mode.
# Normalization; most classes have a single key type and give a string,
# but eg ECDSA is a 1:N mapping.
# Can't do much with no message, that should've been handled elsewhere
# First field is always key type, in either kind of object. (make sure
# we rewind before grabbing it - sometimes caller had to do their own
# introspection first!)
# Regular public key - nothing special to do besides the implicit
# type check.
# OpenSSH-compatible certificate - store full copy as .public_blob
# (so signing works correctly) and then fast-forward past the
# nonce.
# This seems the cleanest way to 'clone' an already-being-read
# message; they're *IO objects at heart and their .getvalue()
# always returns the full value regardless of pointer position.
# Read out nonce as it comes before the public numbers - our caller
# is likely going to use the (only borrowed by us, not owned)
# 'msg' object for loading those numbers right after this.
# TODO: usefully interpret it & other non-public-number fields
# (requires going back into per-type subclasses.)
# General construct for an OpenSSH style Public Key blob
# readable from a one-line file of the format:
# Of little value in the case of standard public keys
# {ssh-rsa, ssh-ecdsa, ssh-ed25519}, but should
# provide rudimentary support for {*-cert.v01}
# Verify that the blob message first (string) field matches the
# key_type
# All good? All good.
# Just piggyback on Message/BytesIO, since both of these should be one.
# TODO: are there any good libs for this? maybe some helper from
# structlog?
# Password auth is marginally more 'username-caring' than pkeys, so may
# as well log that info here.
# Lazily get the password, in case it's prompting a user
# TODO: be nice to log source _of_ the password?
# TODO 4.0: twiddle this, or PKey, or both, so they're more obviously distinct.
# TODO 4.0: the obvious is to make this more wordy (PrivateKeyAuth), the
# minimalist approach might be to rename PKey to just Key (esp given all the
# subclasses are WhateverKey and not WhateverPKey)
# No decryption (presumably) necessary!
# NOTE: most of interesting repr-bits for private keys is in PKey.
# TODO: tacking on agent-ness like this is a bit awkward, but, eh?
# Superclass wants .pkey, other two are mostly for display/debugging.
# TODO re sources: is there anything in an OpenSSH config file that doesn't fit
# into what Paramiko already had kwargs for?
# TODO: tempting to make this an OrderedDict, except the keys essentially want
# to be rich objects (AuthSources) which do not make for useful user indexing?
# TODO: members being vanilla tuples is pretty old-school/expedient; they
# "really" want to be something that's type friendlier (unless the tuple's 2nd
# member being a Union of two types is "fine"?), which I assume means yet more
# classes, eg an abstract SourceResult with concrete AuthSuccess and
# AuthFailure children?
# TODO: arguably we want __init__ typechecking of the members (or to leverage
# mypy by classifying this literally as list-of-AuthSource?)
# NOTE: meaningfully distinct from __repr__, which still wants to use
# superclass' implementation.
# TODO: go hog wild, use rich.Table? how is that on degraded term's?
# TODO: test this lol
# TODO 4.0: descend from SSHException or even just Exception
# TODO: arguably we could fit in a "send none auth, record allowed auth
# types sent back" thing here as OpenSSH-client does, but that likely
# wants to live in fabric.OpenSSHAuthStrategy as not all target servers
# will implement it!
# TODO: needs better "server told us too many attempts" checking!
# NOTE: this really wants to _only_ wrap the authenticate()!
# TODO: 'except PartialAuthentication' is needed for 2FA and
# similar, as per old SSHClient.connect - it is the only way
# AuthHandler supplies access to the 'name-list' field from
# MSG_USERAUTH_FAILURE, at present.
# TODO: look at what this could possibly raise, we don't really
# want Exception here, right? just SSHException subclasses? or
# do we truly want to capture anything at all with assumption
# it's easy enough for users to look afterwards?
# NOTE: showing type, not message, for tersity & also most of
# the time it's basically just "Authentication failed."
# Gotta die here if nothing worked, otherwise Transport's main loop
# just kinda hangs out until something times out!
# Success: give back what was done, in case they care.
# TODO: is there anything OpenSSH client does which _can't_ cleanly map to
# iterating a generator?
# Initialize GSS-API Key Exchange
# ## This must be TRUE, if there is a GSS-API token in this message.
# we don't care about the language!
# we don't care about the language (lang_tag)!
# READ the secsh RFC's before raising these values.  if anything,
# they should probably be lower.
# Allow receiving this many packets after a re-key request before
# terminating
# Allow receiving this many bytes after a re-key request before terminating
# used for noticing when to re-key:
# current inbound/outbound ciphering:
# AEAD (eg aes128-gcm/aes256-gcm) cipher use
# lock around outbound writes (packet computation)
# keepalives:
# wait until the reset happens in both directions before clearing
# rekey flag
# handle over-reading from reading the banner line
# on Linux, sometimes instead of socket.timeout, we get
# EAGAIN.  this is a bug in recent (> 2.6.9) kernels but
# we need to work around it.
# so it doesn't get swallowed by the below catchall
# could be: (32, 'Broken pipe')
# We shouldn't retry the write, but we didn't
# manage to send anything over the socket. This might be an
# indication that we have lost contact with the remote
# side, but are yet to receive an EOFError or other socket
# errors. Let's give it some iteration to try and catch up.
# Per https://www.rfc-editor.org/rfc/rfc5647.html#section-7.1 ,
# we increment the last 8 bytes of the 12-byte IV...
# ...then re-concatenate it with the static first 4 bytes
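The IV handling above follows RFC 5647 section 7.1: the 12-byte AES-GCM IV is a 4-byte fixed field plus an 8-byte invocation counter that is incremented per packet. A small sketch (function name assumed for illustration):

```python
def next_gcm_iv(iv):
    # Increment the last 8 bytes (the invocation counter) of the
    # 12-byte IV, then re-concatenate with the static first 4 bytes.
    assert len(iv) == 12
    fixed, counter = iv[:4], int.from_bytes(iv[4:], "big")
    counter = (counter + 1) % (1 << 64)  # wrap the 64-bit counter
    return fixed + counter.to_bytes(8, "big")
```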
# encrypt this sucka
# packet length is not encrypted in EtM
# Packet-length field is used as the 'associated data'
# under AES-GCM, so like EtM, it's not encrypted. See
# https://www.rfc-editor.org/rfc/rfc5647#section-7.3
# Append an MAC when needed (eg, not under AES-GCM)
# only ask once for rekeying
# Grab unencrypted (considered 'additional data' under GCM) packet
# length.
# When ETM or AEAD (GCM) are in use, we've already read the packet size
# & decrypted everything, so just set the packet back to the header we
# obtained.
# Otherwise, use the older non-ETM logic
# leftover contains decrypted bytes from the first block (after the
# length field)
# check for rekey
# we've asked to rekey -- give them some packets to comply before
# dropping the connection
# ...protected...
# wait till we're encrypting, and not in the middle of rekeying
# pad up at least 4 bytes, to nearest block-size (usually 8)
# do not include payload length in computations for padding in EtM mode
# (payload length won't be encrypted)
# cute trick i caught openssh doing: if we're not encrypting or
# SDCTR mode (RFC4344),
# don't waste random bytes for the padding
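The padding rules above (at least 4 bytes, rounding the packet to the cipher block size; the 4-byte length field excluded in EtM mode; zero padding when not encrypting, per the OpenSSH trick) can be sketched like this. The function and parameter names are illustrative assumptions:

```python
import os

def pad_payload(payload, block_size=8, etm=False, encrypting=True):
    # The 1-byte padlen field always counts; the 4-byte length field
    # counts only when it is encrypted (i.e. not in EtM mode).
    prefix = 1 if etm else 5
    padlen = block_size - ((prefix + len(payload)) % block_size)
    if padlen < 4:
        padlen += block_size  # pad up at least 4 bytes
    # don't waste random bytes for the padding when not encrypting
    padding = os.urandom(padlen) if encrypting else b"\x00" * padlen
    return payload + padding, padlen
```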
# outside code should check for this flag
# only for handles to folders:
# in append mode, don't care about seeking
# Copyright (C) 2003-2007  John Rochester <john@jrochester.org>
# NOTE: RFC mildly confusing; while these flags are OR'd together, OpenSSH at
# least really treats them like "AND"s, in the sense that if it finds the
# SHA256 flag set it won't continue looking at the SHA512 one; it
# short-circuits right away.
# Thus, we never want to eg submit 6 to say "either's good".
# TODO 4.0: rename all these - including making some of their methods public?
# Found that r should be either
# a socket from the socket library or None
# The address should be an IP address as a string? or None
# XXX Not sure what to do here ... raise or pass ?
# probably a dangling env var: the ssh agent is gone
# no agent support
# Log, but don't explode, since inner_key is a best-effort thing.
# Prefer inner_key.asbytes, since that will differ for eg RSA-CERT
# Have to work around PKey's default get_bits being crap
# nothing to proxy to
# NOTE: this used to be just self.blob, which is not entirely right for
# RSA-CERT 'keys' - those end up always degrading to ssh-rsa type
# signatures, for reasons probably internal to OpenSSH's agent code,
# even if everything else wants SHA2 (including our flag map).
# Copyright (C) 2003-2006 Robey Pointer <robeypointer@gmail.com>
# throw away any fractional seconds
# compute display date
# shouldn't really happen
# (15,552,000s = 6 months)
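The display-date computation above matches the `ls -l` convention: a recent mtime is shown as "Mon dd HH:MM", while one more than six months (15,552,000 seconds) away shows the year instead. A hedged sketch, with an assumed function name:

```python
import time

SIX_MONTHS = 15552000  # 180 * 24 * 60 * 60 seconds

def ls_date(mtime, now=None):
    now = time.time() if now is None else now
    t = time.localtime(mtime)
    if abs(now - mtime) > SIX_MONTHS:
        # old (or far-future) file: show the year, not the time
        return time.strftime("%b %d  %Y", t)
    return time.strftime("%b %d %H:%M", t)
```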
# not all servers support uid/gid
# TODO: not sure this actually worked as expected beforehand, leaving
# it untouched for the time being, re: .format() upgrade, until someone
# has time to doublecheck
# Copyright (C) 2013  Torsten Landschoff <torsten@debian.org>
# http://tools.ietf.org/html/rfc3526#section-3
# Copyright (C) 2019 Edgar Sousa <https://github.com/edgsousa>
# http://tools.ietf.org/html/rfc3526#section-5
# known hash algorithms for the "check-file" extension
# map of handle-string to SFTPHandle for files & folders:
# send some kind of failure message, at least
# close any file handles that were left open
# (so we can return them to the OS quickly)
# no such file
# mode operations are meaningless on win32
# NOTE: this is a very silly tiny class used for SFTPFile mostly
# must be error code
# some clients expect a "language" tag at the end
# (but don't mind it being blank)
# got an actual list of filenames in the folder
# must be an error code
# this extension actually comes from v6 protocol, but since it's an
# extension, i feel like we can reasonably support it backported.
# it's very useful for verifying uploaded files or checking for
# rsync-like differences between local and remote files.
# don't try to read more than about 64KB at a time
# the sftp 2 draft is incorrect here!
# path always follows target_path
######################
# jaraco.windows.error
# first some flags used by FormatMessageW
# Let FormatMessageW allocate the buffer (we'll free it below)
# Also, let it know we want a system error message.
# note the following will cause an infinite loop if GetLastError
###########################
# jaraco.windows.api.memory
#####################
# jaraco.windows.mmap
# A little safety.
#############################
# jaraco.windows.api.security
# from WinNT.h
# from NTSecAPI.h
#########################
# jaraco.windows.security
# by attaching the actual security descriptor, it will be garbage-
# collected with the security attributes
# TODO 4.0: remove explanation kwarg
# TODO 4.0: remove this supercall unless it's actually required for
# pickling (after fixing pickling)
# TODO 4.0: stop inheriting from SSHException, move to auth.py
# TODO 4.0: consider making this annotate w/ 1..N 'missing' algorithms,
# either just the first one that would halt kex, or even updating the
# Transport logic so we record /all/ that /could/ halt kex.
# TODO: update docstrings where this may end up raised so they are more
# specific.
# stand-in for errno
#: Channel ID
#: Remote channel ID
#: `.Transport` managing this channel
#: Whether the connection is presently active
#: Whether the connection has been closed
# in many cases, the channel will not still be open here.
# that's fine.
# copy old stderr buffer into primary buffer
# ...socket API...
# only close the pipe when the user explicitly closes the channel.
# otherwise they will get unpleasant surprises.  (and do it before
# checking self.closed, since the remote host may have already
# closed the connection.)
# no need to hold the channel lock when sending this
# create the pipe and feed in any existing data
# feign "read" shutdown
# Concession to Python 3's socket API, which has a private ._closed
# attribute instead of a semipublic .closed attribute.
# ...calls from Transport
# threshold of bytes we receive before we bother to send
# a window update
# passed from _feed_extended
# this doesn't seem useful, but it is the documented behavior
# of Socket
# eof or similar
# Note: We release self.lock before calling _send_user_message.
# Otherwise, we can deadlock during re-keying.
# you are holding the lock.
# Notify any waiters that we are closed
# can't unlink from the Transport yet -- the remote side may still
# try to send meta-data (exit-status, etc)
# server connection could die before we become active:
# still signal the close!
# you are already holding the lock
# filled the buffer
# we have some window to squeeze into
# Copyright 2013 Donald Stufft and individual contributors
# Must be kept in sync with `pyproject.toml`
# Decode the key
# If we were given the message and signature separately, validate
# Decode the signed message
# Decode the seed
# Verify that our seed is the proper size
# TODO: when the minimum supported version of Python is 3.8, we can import
# Protocol from typing, and replace Encoder with a Protocol instead.
# Functions that use encoders are passed a subclass of _Encoder, not an instance
# (because the methods are all static). Let's gloss over that detail by defining
# an alias for Type[_Encoder].
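The comment above anticipates replacing the `_Encoder` base class with `typing.Protocol` once Python 3.8 is the minimum. A minimal sketch of what that could look like, assuming the encode/decode static-method shape described above (the `Encoder` and `HexEncoder` names here are illustrative, not PyNaCl's actual classes):

```python
import base64
from typing import Protocol


class Encoder(Protocol):
    """Structural type for encoder classes whose encode/decode are
    static methods called on the class itself, not on an instance."""

    @staticmethod
    def encode(data: bytes) -> bytes: ...

    @staticmethod
    def decode(data: bytes) -> bytes: ...


class HexEncoder:
    # Satisfies the Encoder protocol structurally; no inheritance needed,
    # which is exactly the point of using Protocol here.
    @staticmethod
    def encode(data: bytes) -> bytes:
        return base64.b16encode(data)

    @staticmethod
    def decode(data: bytes) -> bytes:
        return base64.b16decode(data)
```

With a Protocol, functions can accept `Type[Encoder]` and mypy checks the class's static methods structurally, so the `Type[_Encoder]` alias becomes unnecessary.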
# Decode our ciphertext
# If we were given the nonce and ciphertext combined, split them.
# Decode the secret_key
# verify the given secret key type and size are correct
# decode the seed
# Verify the given seed type and size are correct
# generate a raw key pair from the given seed
# construct an instance from the raw secret key
# Create an empty box
# Assign our decoded value to the shared key of the box
# Copyright 2016-2019 Donald Stufft and individual contributors
# We create a clone of various builtin Exception types which additionally
# inherit from CryptoError. Below, we refer to the parent types via the
# `builtins` namespace, so mypy can distinguish between (e.g.)
# `nacl.exceptions.RuntimeError` and `builtins.RuntimeError`.
# Copyright 2017 Donald Stufft and individual contributors
# zero-length malloc results are implementation-dependent
# Copyright 2013-2019 Donald Stufft and individual contributors
# Initialize Sodium
# crypto_hash_BYTES = lib.crypto_hash_bytes()
# Copyright 2013-2018 Donald Stufft and individual contributors
# Copyright 2016 Donald Stufft and individual contributors
# Copyright 2018 Donald Stufft and individual contributors
# both _salt and _personal must be zero-padded to the correct length
# all went well, therefore:
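The zero-padding of `_salt` and `_personal` mentioned above can be sketched with a small helper. This is an illustrative sketch, not PyNaCl's code; `padded` is a hypothetical name, and the size constants are taken from the standard-library `hashlib.blake2b`, which defines the same 16-byte salt and personalization lengths.

```python
import hashlib

SALT_BYTES = hashlib.blake2b.SALT_SIZE      # 16
PERSON_BYTES = hashlib.blake2b.PERSON_SIZE  # 16


def padded(value: bytes, size: int) -> bytes:
    """Zero-pad `value` on the right to exactly `size` bytes."""
    if len(value) > size:
        raise ValueError("value too long")
    return value.ljust(size, b"\x00")
```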
# Copyright 2013-2017 Donald Stufft and individual contributors
# crypto_sign_SEEDBYTES = lib.crypto_sign_seedbytes()
# Cast safety: we `ensure` above that `state.tagbuf is not None`.
# since version 1.0.15 of libsodium
# Cast safety: n_log2 is a positive integer, and so 2 ** n_log2 is also
# a positive integer. Mypy+typeshed can't deduce this, because there's no
# way for them to know that n_log2: int is positive.
# A set of classifier names
# A mapping from the deprecated classifier name to a list of zero or more valid
# classifiers that should replace it
# All classifiers, including deprecated classifiers
#-----------------------------------------------------------------
# pycparser: __init__.py
# This package file exports some convenience functions for
# interacting with pycparser
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
# Note the use of universal_newlines to treat all newlines
# as \n for Python's purpose
#------------------------------------------------------------------------------
# pycparser: ast_transforms.py
# Some utilities used by the parser to create a friendlier AST.
# The new Compound child for the Switch, which will collect children in the
# correct order
# The last Case/Default node
# Goes over the children of the Compound below the Switch, adding them
# either directly below new_compound or below the last Case as appropriate
# (for `switch(cond) {}`, block_items would have been None)
# If it's a Case/Default:
# 1. Add it to the Compound and mark as "last case"
# 2. If its immediate child is also a Case or Default, promote it
# Other statements are added as children to the last case, if it exists
# There can be multiple levels of _Atomic in a decl; fix them until a
# fixed point is reached.
# Make sure to add an _Atomic qual on the topmost decl if needed. Also
# restore the declname on the innermost TypeDecl (it gets placed in the
# wrong place during construction).
# If we've reached a node without a `type` field, it means we won't
# find what we're looking for at this point; give up the search
# and return the original decl unmodified.
# pycparser: c_generator.py
# C code generator from pycparser AST nodes.
# Statements start with indentation of self.indent_level spaces, using
# the _make_indent method.
# Always parenthesize the argument of sizeof since it can be
# a name.
# Precedence map of binary operators:
# Should be in sync with c_parser.CParser.precedence
# Higher numbers are stronger binding
# weakest binding
# strongest binding
# Note: all binary operators are left-to-right associative
# If `n.left.op` has a stronger or equally binding precedence in
# comparison to `n.op`, no parentheses are needed for the left:
# e.g., `(a*b) + c` is equivalent to `a*b + c`, as well as
# If the left operator is weaker binding than the current, then
# parentheses are necessary:
# e.g., `(a+b) * c` is NOT equivalent to `a+b * c`.
# If `n.right.op` has a stronger (but not equal) binding precedence,
# parentheses can be omitted on the right:
# e.g., `a + (b*c)` is equivalent to `a + b*c`.
# If the right operator is weaker or equally binding, then parentheses
# are necessary:
# e.g., `a * (b+c)` is NOT equivalent to `a * b+c` and
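The left/right asymmetry described above (left child safe when it binds at least as strongly; right child needs strictly stronger binding because the operators are left-associative) can be sketched with a small precedence table. The table below is an illustrative subset, not the generator's actual `precedence_map`:

```python
# precedence: higher number = stronger binding (illustrative subset)
PRECEDENCE = {'||': 1, '&&': 2, '+': 10, '-': 10, '*': 20, '/': 20}


def left_needs_parens(parent_op, left_op):
    # the left child is safe when it binds at least as strongly as the
    # parent: (a*b) + c  ==  a*b + c
    return PRECEDENCE[left_op] < PRECEDENCE[parent_op]


def right_needs_parens(parent_op, right_op):
    # the right child must bind strictly more strongly; with equal
    # precedence, left-associativity means a - (b - c) != a - b - c
    return PRECEDENCE[right_op] <= PRECEDENCE[parent_op]
```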
# no_type is used when a Decl is part of a DeclList, where the type is
# explicitly only for the first declaration in a list.
# None means no members
# Empty sequence means an empty list of members
# `[:-2] + '\n'` removes the final `,` from the enumerator list
# These can also appear in an expression context so no semicolon
# is added to them automatically
# No extra indentation required before the opening brace of a
# compound - because it consists of multiple lines it has to
# compute its own indentation.
#~ print(n, modifiers)
# Resolve modifiers.
# Wrap in parens to distinguish pointer to array and pointer to
# function syntax.
# pycparser: c_lexer.py
# CLexer class: lexer for the C language
# Keeps track of the last token returned from self.token()
# Allow either "# line" or "# <num>" to support GCC's
# cpp output
######################--   PRIVATE   --######################
## Internal auxiliary methods
## Reserved keywords
## All the tokens recognized by the lexer
# Identifiers
# Type identifiers (identifiers previously defined as
# types with typedef)
# String literals
# Operators
# Assignment
# Increment/decrement
# Structure dereference (->)
# Conditional operator (?)
# Delimiters
# ( )
# [ ]
# { }
# . ,
# ; :
# Ellipsis (...)
# pre-processor
# '#'
# 'pragma'
## Regexes for use in tokens
# valid C identifiers (K&R2: A.2.3), plus '$' (supported by some compilers)
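As a concrete sketch of the identifier rule described above (a letter, underscore, or `$` followed by letters, digits, underscores, or `$`), assuming the K&R2 A.2.3 character classes plus the `$` extension:

```python
import re

# a C identifier: leading letter/underscore/'$', then any mix of
# letters, digits, underscores, and '$'
identifier_re = re.compile(r'[a-zA-Z_$][0-9a-zA-Z_$]*')


def is_identifier(s: str) -> bool:
    return identifier_re.fullmatch(s) is not None
```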
# integer constants (K&R2: A.2.5.1)
# comments are not supported
# character constants (K&R2: A.2.5.2)
# Note: a-zA-Z and '.-~^_!=&;,' are allowed as escape chars to support #line
# directives with Windows paths as filenames (..\..\dir\file)
# For the same reason, decimal_escape allows all digit sequences. We want to
# parse all correct code, even if it means to sometimes parse incorrect
# code.
# The original regexes were taken verbatim from the C syntax definition,
# and were later modified to avoid worst-case exponential running time.
# The following modifications were made to avoid the ambiguity that allowed backtracking:
# (https://github.com/eliben/pycparser/issues/61)
# - \x was removed from simple_escape, unless it was not followed by a hex digit, to avoid ambiguity with hex_escape.
# - hex_escape allows one or more hex characters, but requires that the next character (if any) is not hex
# - decimal_escape allows one or more decimal characters, but requires that the next character (if any) is not a decimal digit
# - bad_escape does not allow any decimals (8-9), to avoid conflicting with the permissive decimal_escape.
# Without this change, python's `re` module would recursively try parsing each ambiguous escape sequence in multiple ways.
# e.g. `\123` could be parsed as `\1`+`23`, `\12`+`3`, and `\123`.
# This complicated regex with lookahead might be slow for strings, so because all of the valid escapes (including \x) allowed
# 0 or more non-escaped characters after the first character, simple_escape+decimal_escape+hex_escape got simplified to
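The negative-lookahead trick described above can be demonstrated in miniature. This is an illustrative reconstruction in the spirit of the comments, not the lexer's exact regexes: each escape run is forced to be maximal, so `\123` can only ever match one way instead of as `\1`+`23`, `\12`+`3`, etc.

```python
import re

# one or more hex digits after 'x'; the lookahead insists the next
# character (if any) is not a further hex digit, removing the ambiguity
hex_escape = r'(x[0-9a-fA-F]+)(?![0-9a-fA-F])'
# decimal digit runs get the same treatment
decimal_escape = r'([0-9]+)(?![0-9])'

escape_re = re.compile(r'\\(' + hex_escape + '|' + decimal_escape + r')')
```

Because each alternative consumes its whole digit run or fails outright, the regex engine never backtracks through the exponentially many ways of splitting the run.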
# string literals (K&R2: A.2.6)
# floating constants (K&R2: A.2.5.3)
## Lexer states: used for preprocessor \n-terminated directives
# ppline: preprocessor line directives
# pppragma: pragma
## Rules for the ppline state
# Ignore: GCC's cpp sometimes inserts a numeric flag
# after the file name
## Rules for the pppragma state
## Rules for the normal state
# Newlines
# Assignment operators
# ->
# ?
# Scope delimiters
# To see why on_lbrace_func is needed, consider:
# Outside the function, TT is a typedef, but inside (starting and ending
# with the braces) it's a parameter.  The trouble begins with yacc's
# lookahead token.  If we open a new scope in brace_open, then TT has
# already been read and incorrectly interpreted as TYPEID.  So, we need
# to open and close scopes from within the lexer.
# Similar for the TT immediately outside the end of the function.
# The following floating and integer constants are defined as
# functions to impose a strict order (otherwise, decimal
# is placed before the others because its regex is longer,
# and this is bad)
# Must come before bad_char_const, to prevent it from
# catching valid char constants as invalid
# unmatched string literals are caught by the preprocessor
# plyparser.py
# PLYParser class and other utilities for simplifying programming
# parsers with PLY
# Remove the template method
# Create parameterized rules from this method; only run this if
# the method has a docstring. This is to address an issue when
# pycparser's users are installed in -OO mode which strips
# docstrings away.
# See: https://github.com/eliben/pycparser/pull/198/ and
# for discussion.
# Use the template method's body for each new method
# Substitute in the params for the grammar rule and function name
# Attach the new method to the class
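The template expansion described above (clone a template method once per parameter, substituting into the docstring, since PLY reads grammar rules from docstrings) can be sketched like this. The function name, placeholder `<p>`, and the trivial rule body are illustrative, not pycparser's actual helper:

```python
def parameterize_rule(cls, base_name, template_doc, params):
    """Attach one method per parameter to `cls`, each carrying the
    template docstring with the placeholder substituted -- PLY picks
    up grammar rules from p_* method docstrings."""
    for param in params:
        # bind `param` at definition time via a default argument
        def rule(self, p, _param=param):
            p[0] = _param
        rule.__doc__ = template_doc.replace('<p>', param)
        setattr(cls, 'p_%s_%s' % (base_name, param), rule)
```

Note the `_param=param` default: without it, every generated method would close over the loop variable and see only the last parameter.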
# yacctab.py
# This file is automatically generated. Do not edit.
# pycparser: c_parser.py
# CParser class: Parser and AST builder for the C language
# Stack of scopes for keeping track of symbols. _scope_stack[-1] is
# the current (topmost) scope. Each scope is a dictionary that
# specifies whether a name is a type. If _scope_stack[n][name] is
# True, 'name' is currently a type in the scope. If it's False,
# 'name' is used in the scope but not as a type (for instance, if we
# saw: int name;
# If 'name' is not a key in _scope_stack[n] then 'name' was not defined
# in this scope at all.
# Keeps track of the last token given to yacc (the lookahead token)
# If name is an identifier in this scope it shadows typedefs in
# higher scopes.
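The innermost-first lookup with shadowing described above can be sketched directly from the data structure in the comments (a stack of dicts mapping name -> is-it-a-type); this mirrors the described behavior rather than quoting pycparser's method:

```python
def is_type_in_scope(scope_stack, name):
    """True iff `name` is currently a typedef name.  Scopes are searched
    innermost-first, so an identifier in an inner scope shadows a
    typedef of the same name from an outer scope."""
    for scope in reversed(scope_stack):
        if name in scope:
            return scope[name]
    # name never defined in any scope
    return False
```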
# To understand what's going on here, read sections A.8.5 and
# A.8.6 of K&R2 very carefully.
# A C type consists of a basic type declaration, with a list
# of modifiers. For example:
# int *c[5];
# The basic declaration here is 'int c', and the pointer and
# the array are the modifiers.
# Basic declarations are represented by TypeDecl (from module c_ast) and the
# modifiers are FuncDecl, PtrDecl and ArrayDecl.
# The standard states that whenever a new modifier is parsed, it should be
# added to the end of the list of modifiers. For example:
# K&R2 A.8.6.2: Array Declarators
# In a declaration T D where D has the form
# and the type of the identifier in the declaration T D1 is
# "type-modifier T", the type of the
# identifier of D is "type-modifier array of T"
# This is what this method does. The declarator it receives
# can be a list of declarators ending with TypeDecl. It
# tacks the modifier to the end of this list, just before
# the TypeDecl.
# Additionally, the modifier may be a list itself. This is
# useful for pointers, that can come as a chain from the rule
# p_pointer. In this case, the whole modifier list is spliced
# into the new location.
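The tail-splicing described above can be sketched with stripped-down node classes. This is an illustrative model of the algorithm (reach the tail of the modifier chain, then splice it in just before the innermost `TypeDecl`), not pycparser's actual `_type_modify_decl`:

```python
class TypeDecl:
    def __init__(self):
        self.type = None  # the basic type, filled in later


class Modifier:  # stands in for ArrayDecl / PtrDecl / FuncDecl
    def __init__(self, kind, type=None):
        self.kind = kind
        self.type = type  # next link in the modifier chain


def type_modify_decl(decl, modifier):
    """Tack `modifier` (possibly itself a chain) onto the tail of the
    declarator chain, just before the innermost TypeDecl."""
    modifier_head = modifier
    modifier_tail = modifier
    while modifier_tail.type is not None:  # reach the chain's tail
        modifier_tail = modifier_tail.type

    if isinstance(decl, TypeDecl):
        # basic type: the modifier chain now wraps it directly
        modifier_tail.type = decl
        return modifier_head

    # otherwise walk to the link just before the TypeDecl and splice
    decl_tail = decl
    while not isinstance(decl_tail.type, TypeDecl):
        decl_tail = decl_tail.type
    modifier_tail.type = decl_tail.type
    decl_tail.type = modifier_head
    return decl
```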
#~ print '****'
#~ decl.show(offset=3)
#~ modifier.show(offset=3)
# The modifier may be a nested list. Reach its tail.
# If the decl is a basic type, just tack the modifier onto it.
# Otherwise, the decl is a list of modifiers. Reach
# its tail and splice the modifier onto the tail,
# pointing to the underlying basic type.
# Due to the order in which declarators are constructed,
# they have to be fixed in order to look like a normal AST.
# When a declaration arrives from syntax construction, it has
# these problems:
# * The innermost TypeDecl has no type (because the basic
#   type is only known at the uppermost declaration level)
# * The declaration has no variable name, since that is saved
#   in the innermost TypeDecl
# * The typename of the declaration is a list of type
#   specifiers encountered before the declaration
# This method fixes these problems.
# Reach the underlying basic type
# The typename is a list of types. If any type in this
# list isn't an IdentifierType, it must be the only
# type in the list (it's illegal to declare "int enum ..")
# If all the types are basic, they're collected in the
# IdentifierType holder.
# Functions default to returning int
# At this point, we know that typename is a list of IdentifierType
# nodes. Concatenate all the names into a single list.
# Bit-fields are allowed to be unnamed.
# When redeclaring typedef names as identifiers in inner scopes, a
# problem can occur where the identifier gets grouped into
# spec['type'], leaving decl as None.  This can only occur for the
# first declarator.
# Make this look as if it came from "direct_declarator:ID"
# Remove the "new" type's name from the end of spec['type']
# A similar problem can occur where the declaration ends up looking
# like an abstract declarator.  Give it a name if this is the case.
# Add the type name defined by typedef to a
# symbol table (for usage in the lexer)
## Precedence and associativity of operators
# If this changes, c_generator.CGenerator.precedence_map needs to change as
# well
## Grammar productions
## Implementation of the BNF defined in K&R2 A.13
# Wrapper around a translation unit, to allow for empty input.
# Not strictly part of the C99 Grammar, but useful in practice.
# Note: external_declaration is already a list
# Declarations always come as lists (because they can be
# several in one line), so we wrap the function definition
# into a list as well, to make the return value of
# external_declaration homogeneous.
# This encompasses two types of C99-compatible pragmas:
# - The #pragma directive:
# - The _Pragma unary operator:
# In function definitions, the declarator can be followed by
# a declaration list, for old "K&R style" function definitions.
# no declaration specifiers - 'int' becomes the default type
# Note, according to C18 A.2.2 6.7.10 static_assert-declaration _Static_assert
# is a declaration, not a statement. We additionally recognise it as a statement
# to fix parsing of _Static_assert inside the functions.
# A pragma is generally considered a decorator rather than an actual
# statement. Still, for the purposes of analyzing an abstract syntax tree of
# C code, pragmas should not be ignored and were previously treated as a
# statement. This presents a problem for constructs that take a statement
# such as labeled_statements, selection_statements, and
# iteration_statements, causing a misleading structure in the AST. For
# example, consider the following C code.
# This code will compile and execute "sum += 1;" as the body of the for
# loop. Previous implementations of PyCParser would render the AST for this
# block of code as follows:
# This AST misleadingly takes the Pragma as the body of the loop and the
# assignment then becomes a sibling of the loop.
# To solve edge cases like these, the pragmacomp_or_statement rule groups
# a pragma and its following statement (which would otherwise be orphaned)
# using a compound block, effectively turning the above code into:
# In C, declarations can come several in a line:
# However, for the AST, we will split them to separate Decl
# nodes.
# This rule splits its declarations and always returns a list
# of Decl nodes, even if it's one element long.
# p[2] (init_declarator_list_opt) is either a list or None
# By the standard, you must have at least one declarator unless
# declaring a structure tag, a union tag, or the members of an
# enumeration.
# However, this case can also occur on redeclared identifiers in
# an inner scope.  The trouble is that the redeclared type's name
# gets grouped into declaration_specifiers; _build_declarations
# compensates for this.
# The declaration has been split to a decl_body sub-rule and
# SEMI, because having them in a single rule created a problem
# for defining typedefs.
# If a typedef line was directly followed by a line using the
# type defined with the typedef, the type would not be
# recognized. This is because to reduce the declaration rule,
# the parser's lookahead asked for the token after SEMI, which
# was the type from the next line, and the lexer had no chance
# to see the updated type symbol table.
# Splitting solves this problem, because after seeing SEMI,
# the parser reduces decl_body, which actually adds the new
# type into the table to be seen by the lexer before the next
# line is reached.
# Since each declaration is a list of declarations, this
# rule will combine all the declarations and return a single list of Decl nodes.
# To know when declaration-specifiers end and declarators begin,
# we require declaration-specifiers to have at least one
# type-specifier, and disallow typedef-names after we've seen any
# type-specifier. These are both required by the spec.
# Without this, `typedef _Atomic(T) U` will parse incorrectly because the
# _Atomic qualifier will match, instead of the specifier.
# See section 6.7.2.4 of the C11 standard.
# Returns a {decl=<declarator> : init=<initializer>} dictionary
# If there's no initializer, uses None
# Require at least one type specifier in a specifier-qualifier-list
# TYPEID is allowed here (and in other struct/enum related tag names), because
# struct/enum tags reside in their own namespace and can be named the same as types
# None means no list of members
# Combine all declarations into a single list
# Anonymous struct/union, gcc extension, C1x feature.
# Although the standard only allows structs/unions here, I see no
# reason to disallow other types since some compilers have typedefs
# here, and pycparser isn't about rejecting all invalid code.
# Structure/union members can have the same names as typedefs.
# The trouble is that the member's name gets grouped into
# specifier_qualifier_list; _build_declarations compensates.
# struct_declarator passes up a dict with the keys: decl (for
# the underlying declarator) and bitsize (for the bitsize)
# Accept dimension qualifiers
# Per C99 6.7.5.3 p7
# Using slice notation for PLY objects doesn't work in Python 3 for the
# version of PLY embedded with pycparser; see PLY Google Code issue 30.
# Work around that here by listing the two elements separately.
# Special for VLAs
# To see why _get_yacc_lookahead_token is needed, consider:
# Outside the function, TT is a typedef, but inside (starting and
# ending with the braces) it's a parameter.  The trouble begins with
# yacc's lookahead token.  We don't know if we're declaring or
# defining a function until we see LBRACE, but if we wait for yacc to
# trigger a rule on that token, then TT will have already been read
# and incorrectly interpreted as TYPEID.  We need to add the
# parameters to the scope the moment the lexer sees LBRACE.
# Pointer decls nest from inside out. This is important when different
# levels have different qualifiers. For example:
# Means "pointer to const pointer to char"
# While:
# Means "const pointer to pointer to char"
# So when we construct PtrDecl nestings, the leftmost pointer goes in
# as the most nested type.
# single parameter
# From ISO/IEC 9899:TC2, 6.7.5.3.11:
# "If, in a parameter declaration, an identifier can be treated either
# Inside a parameter declaration, once we've reduced declaration specifiers,
# if we shift in an LPAREN and see a TYPEID, it could be either an abstract
# declarator or a declarator nested inside parens. This rule tells us to
# always treat it as an abstract declarator. Therefore, we only accept
# `id_declarator`s and `typeid_noparen_declarator`s.
# Parameters can have the same names as typedefs.  The trouble is that
# the parameter's name gets grouped into declaration_specifiers, making
# it look like an old-style declaration; compensate.
# This truly is an old-style parameter declaration
# single initializer
# Designators are represented as a list of nodes, in the order in which
# they're written in the code.
# Creating and using direct_abstract_declarator_opt here
# instead of listing both direct_abstract_declarator and the
# lack of it in the beginning of _1 and _2 caused two
# shift/reduce errors.
# declaration is a list, statement isn't. To make it consistent, block_item
# will always be a list
# Since we made block_item a list, this just combines lists
# Empty block items (plain ';') produce [None], so ignore them
# K&R2 defines these as many separate rules, to encode
# precedence and associativity. Why work hard? I'll just use
# the built-in precedence/associativity specification feature
# of PLY. (see precedence declaration above)
# single expr
# The "unified" string and wstring literal rules are for supporting
# concatenation of adjacent string literals.
# I.e. "hello " "world" is seen by the C compiler as a single string literal
# with the value "hello world"
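The concatenation of adjacent literals described above amounts to dropping the closing quote of the first literal and the opening quote of the second; a minimal sketch (helper name invented for illustration):

```python
def concat_string_literals(lit1, lit2):
    """Merge two adjacent string literals: drop the closing quote of
    the first and the opening quote of the second."""
    return lit1[:-1] + lit2[1:]
```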
# single literal
# If error recovery is added here in the future, make sure
# _get_yacc_lookahead_token still works!
# lextab.py. This file automatically created by PLY (version 3.10). Don't edit!
# ** ATTENTION **
# This code was automatically generated from the file:
# _c_ast.cfg
# Do not modify it directly. Modify the configuration file and
# run the generator again.
# ** ** *** ** **
# pycparser: c_ast.py
# AST Node classes.
# pycparser: _build_tables.py
# A dummy for generating the lexing/parsing tables and
# compiling them into .pyc for faster execution in optimized mode.
# Also generates AST code from the configuration file.
# Should be called from the pycparser directory.
# Insert '.' and '..' as first entries to the search path for modules.
# Restricted environments like embeddable python do not include the
# current working directory on startup.
# Generate c_ast.py
# Generates the tables
# Load to compile into .pyc
# _ast_gen.py
# Generates the AST Node classes from a specification given in
# a configuration file
# The design of this module was inspired by astgen.py from the
# Python 2.5 code-base.
# Empty generator
# -----------------------------------------------------------------------------
# ply: yacc.py
# Copyright (C) 2001-2017
# David M. Beazley (Dabeaz LLC)
# * Redistributions of source code must retain the above copyright notice,
# * Neither the name of the David Beazley or Dabeaz LLC may be used to
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# This implements an LR parser that is constructed from grammar rules defined
# as Python functions. The grammar is specified by supplying the BNF inside
# Python documentation strings.  The inspiration for this technique was borrowed
# from John Aycock's Spark parsing system.  PLY might be viewed as cross between
# Spark and the GNU bison utility.
# The current implementation is only somewhat object-oriented. The
# LR parser itself is defined in terms of an object (which allows multiple
# parsers to co-exist).  However, most of the variables used during table
# construction are defined in terms of global variables.  Users shouldn't
# notice unless they are trying to define multiple parsers at the same
# time using threads (in which case they should have their head examined).
# This implementation supports both SLR and LALR(1) parsing.  LALR(1)
# support was originally implemented by Elias Ioup (ezioup@alumni.uchicago.edu),
# using the algorithm found in Aho, Sethi, and Ullman "Compilers: Principles,
# Techniques, and Tools" (The Dragon Book).  LALR(1) has since been replaced
# by the more efficient DeRemer and Pennello algorithm.
# :::::::: WARNING :::::::
# Construction of LR parsing tables is fairly complicated and expensive.
# To make this module run fast, a *LOT* of work has been put into
# optimization---often at the expense of readability and what one might
# consider to be good Python "coding style."   Modify the code at your
# own risk!
# ----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Change these to modify the default behavior of yacc (if you wish)
# Debugging mode.  If set, yacc generates a
# a 'parser.out' file in the current directory
# Default name of the debugging file
# Default name of the table module
# Default LR table generation method
# Number of symbols that must be shifted to leave recovery mode
# Set to True if developing yacc.  This turns off optimized
# implementations of certain functions.
# Size limit of results when running in debug mode.
# Protocol to use when writing pickle files
# String type-checking compatibility
# This object is a stand-in for a logging object created by the
# logging module.   PLY will use this by default to create things
# such as the parser.out file.  If a user wants more detailed
# information, they can create their own logging object and pass
# it into PLY.
# Null logger is used when no output is generated. Does nothing.
# Exception raised for yacc-related errors
# Format the result message that the parser produces when running in debug mode.
# Format stack entries when the parser is running in debug mode
# Panic mode error recovery support.   This feature is being reworked--much of the
# code here is to offer a deprecation/backwards compatible transition
# Utility function to call the p_error() function with some deprecation hacks
# The following classes are used for the LR parser itself.  These are not
# used during table construction and are independent of the actual LR
# table generation algorithm
# This class is used to hold non-terminal grammar symbols during parsing.
# It normally has the following attributes set:
# This class is a wrapper around the objects actually passed to each
# grammar rule.   Index lookup and assignment actually assign the
# .value attribute of the underlying YaccSymbol object.
# The lineno() method returns the line number of a given
# item (or 0 if not defined).   The linespan() method returns
# a tuple of (startline,endline) representing the range of lines
# for a symbol.  The lexspan() method returns a tuple (lexpos,endlexpos)
# representing the range of positional information for a symbol.
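The wrapper behavior described above (indexing reads and writes the `.value` of the underlying `YaccSymbol` objects) can be sketched in a few lines. This is a minimal stand-in, not PLY's full `YaccProduction` (which also supports slices, negative indices, and span queries):

```python
class SymbolSlice:
    """Index lookup and assignment go through the .value attribute of
    the wrapped symbol objects, as grammar rules expect."""

    def __init__(self, symbols):
        self.slice = symbols

    def __getitem__(self, n):
        return self.slice[n].value

    def __setitem__(self, n, value):
        self.slice[n].value = value

    def lineno(self, n):
        # 0 when the symbol carries no line information
        return getattr(self.slice[n], 'lineno', 0)
```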
# The LR Parsing engine.
# Defaulted state support.
# This method identifies parser states where there is only one possible reduction action.
# For such states, the parser can choose to make a rule reduction without consuming
# the next look-ahead token.  This delayed invocation of the tokenizer can be useful in
# certain kinds of advanced parsing situations where the lexer and parser interact with
# each other or change states (i.e., manipulation of scope, lexer states, etc.).
# See:  https://www.gnu.org/software/bison/manual/html_node/Default-Reductions.html#Default-Reductions
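The defaulted-state detection described above can be sketched against PLY's action-table encoding, where a reduce by rule N is stored as the negative integer -N and shifts are positive. This is an illustrative model of the idea, not PLY's exact `set_defaulted_states` code:

```python
def compute_defaulted_states(lr_action):
    """Find states whose every action is the same reduction.  In such a
    state the parser can reduce immediately, without asking the lexer
    for the next lookahead token."""
    defaulted = {}
    for state, actions in lr_action.items():
        rules = set(actions.values())
        if len(rules) == 1:
            (rule,) = rules
            if rule < 0:  # a reduce action (shifts are positive)
                defaulted[state] = rule
    return defaulted
```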
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# parsedebug().
# This is the debugging enabled version of parse().  All changes made to the
# parsing engine should be made here.   Optimized versions of this function
# are automatically created by the ply/ygen.py script.  This script cuts out
# sections enclosed in markers such as this:
#--! parsedebug-start
# Current lookahead symbol
# Stack of lookahead symbols
# Local reference to action table (to avoid lookup on self.)
# Local reference to goto table (to avoid lookup on self.)
# Local reference to production list (to avoid lookup on self.)
# Local reference to defaulted states
# Production object passed to grammar rules
# Used during error recovery
#--! DEBUG
# If no lexer was given, we will try to use the lex module
# Set up the lexer and parser objects on pslice
# If input was supplied, pass to lexer
# Tokenize function
# Set the parser() token method (sometimes used in error recovery)
# Set up the state and symbol stacks
# Stack of parsing states
# Stack of grammar symbols
# Put in the production
# Err token
# The start state is assumed to be (0,$end)
# Get the next symbol on the input.  If a lookahead symbol
# is already set, we just use that. Otherwise, we'll pull
# the next token off of the lookaheadstack or from the lexer
# Get the next token
# Check the action table
# shift a symbol on the stack
# Decrease error count on successful shift
# reduce a symbol on the stack, emit a production
# Get production function
# Production name
#--! TRACKING
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# The code enclosed in this section is duplicated
# below as a performance optimization.  Make sure
# changes get made in both locations.
# Call the grammar rule with our special slice object
# If an error was set. Enter error recovery state
# Save the current lookahead token
# Put the production slice back on the stack
# Pop back one state (before the reduce)
# above as a performance optimization.  Make sure
# We have some kind of parsing error here.  To handle
# this, we are going to push the current token onto
# the tokenstack and replace it with an 'error' token.
# If there are any synchronization rules, they may
# catch it.
# In addition to pushing the error token, we call
# the user defined p_error() function if this is the
# first syntax error.  This function is only called if
# errorcount == 0.
# End of file!
# User must have done some kind of panic
# mode recovery on their own.  The
# returned token is the next lookahead
# case 1:  the statestack only has 1 entry on it.  If we're in this state, the
# entire parse has been rolled back and we're completely hosed.   The token is
# discarded and we just keep going.
# Nuke the pushback stack
# case 2: the statestack has a couple of entries on it, but we're
# at the end of the file. nuke the top entry and generate an error token
# Start nuking entries on the stack
# Whoa. We're really hosed here. Bail out
# Hmmm. Error is on top of stack, we'll just nuke input
# symbol and continue
# Create the error symbol for the first time and make it the new lookahead symbol
# Call an error function here
#--! parsedebug-end
# parseopt().
# Optimized version of parse() method.  DO NOT EDIT THIS CODE DIRECTLY!
# This code is automatically generated by the ply/ygen.py script. Make
# changes to the parsedebug() method instead.
#--! parseopt-start
#--! parseopt-end
# parseopt_notrack().
# Optimized version of parseopt() with line number tracking removed.
# DO NOT EDIT THIS CODE DIRECTLY. This code is automatically generated
# by the ply/ygen.py script. Make changes to the parsedebug() method instead.
#--! parseopt-notrack-start
#--! parseopt-notrack-end
# The following functions, classes, and variables are used to represent and
# manipulate the rules that make up a grammar.
# regex matching identifiers
# class Production:
# This class stores the raw information about a single production or grammar rule.
# A grammar rule refers to a specification such as this:
# Here are the basic attributes defined on all productions
# The following attributes are defined or optional.
# Internal settings used during table construction
# Length of the production
# Create a list of unique production symbols used in the production
# List of all LR items for the production
# Create a string representation
# Return the nth lr_item from the production (or None if at the end)
# Precompute the list of productions immediately following.
# Bind the production function name to a callable
# This class serves as a minimal standin for Production objects when
# reading table data from files.   It only contains information
# actually used by the LR parsing engine, plus some additional
# debugging information.
# class LRItem
# This class represents a specific stage of parsing a production rule.  For
# In the above, the "." represents the current location of the parse.  Here
# basic attributes:
# rightmost_terminal()
# Return the rightmost terminal from a list of symbols.  Used in add_production()
# The following class represents the contents of the specified grammar along
# with various computed properties such as first sets, follow sets, LR items, etc.
# This data is used for critical parts of the table generation process later.
# A list of all of the productions.  The first
# entry is always reserved for the purpose of
# building an augmented grammar
# A dictionary mapping the names of nonterminals to a list of all
# productions of that nonterminal.
# A dictionary that is only used to detect duplicate
# productions.
# A dictionary mapping the names of terminal symbols to a
# list of the rules where they are used.
# A dictionary mapping names of nonterminals to a list
# of rule numbers where they are used.
# A dictionary of precomputed FIRST(x) symbols
# A dictionary of precomputed FOLLOW(x) symbols
# Precedence rules for each terminal. Contains tuples of the
# form ('right',level) or ('nonassoc', level) or ('left',level)
# Precedence rules that were actually used by the grammar.
# This is only used to provide error checking and to generate
# a warning about unused precedence rules.
# Starting symbol for the grammar
# set_precedence()
# Sets the precedence for a given terminal. assoc is the associativity such as
# 'left','right', or 'nonassoc'.  level is a numeric level.
# add_production()
# Given an action function, this function assembles a production rule and
# computes its precedence level.
# The production rule is supplied as a list of symbols.   For example,
# a rule such as 'expr : expr PLUS term' has a production name of 'expr' and
# symbols ['expr','PLUS','term'].
# Precedence is determined by the precedence of the right-most non-terminal
# or the precedence of a terminal specified by %prec.
# A variety of error checks are performed to make sure production symbols
# are valid and that %prec is used correctly.
# Look for literal tokens
# Determine the precedence level
# Drop %prec from the rule
# If no %prec, precedence is determined by the rightmost terminal symbol
# See if the rule is already in the rulemap
# From this point on, everything is valid.  Create a new Production instance
# Add the production number to Terminals and Nonterminals
# Create a production and add it to the list of productions
# Add to the global productions list
# set_start()
# Sets the starting symbol and creates the augmented grammar.  Production
# rule 0 is S' -> start where start is the start symbol.
# find_unreachable()
# Find all of the nonterminal symbols that can't be reached from the starting
# symbol.  Returns a list of nonterminals that can't be reached.
# Mark all symbols that are reachable from a symbol s
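# A runnable sketch of the reachability marking (illustrative only: the name
# find_unreachable and the shape of `productions` -- nonterminal name mapped
# to a list of right-hand-side symbol tuples -- are assumptions):

```python
def find_unreachable(productions, start):
    """Return nonterminals that cannot be reached from the start symbol.

    `productions` maps each nonterminal to a list of RHS symbol tuples.
    """
    reachable = set()

    def mark(s):
        # Mark s and everything derivable from s as reachable.
        if s in reachable:
            return
        reachable.add(s)
        for rhs in productions.get(s, ()):
            for sym in rhs:
                mark(sym)

    mark(start)
    return [n for n in productions if n not in reachable]
```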
# infinite_cycles()
# This function looks at the various parsing rules and tries to detect
# infinite recursion cycles (grammar rules where there is no possible way
# to derive a string of only terminals).
# Terminals:
# Nonterminals:
# Initialize to false:
# Then propagate termination until no change:
# Nonterminal n terminates iff any of its productions terminates.
# Production p terminates iff all of its rhs symbols terminate.
# The symbol s does not terminate,
# so production p does not terminate.
# didn't break from the loop,
# so every symbol s terminates
# so production p terminates.
# symbol n terminates!
# Don't need to consider any more productions for this n.
# s is used-but-not-defined, and we've already warned of that,
# so it would be overkill to say that it's also non-terminating.
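# The termination fixed point described above can be sketched as follows
# (a minimal illustration; compute_terminates is an assumed name and real
# PLY works on Production objects rather than plain tuples):

```python
def compute_terminates(productions, terminals):
    """Fixed-point test that every nonterminal can derive a terminal string.

    `productions` maps each nonterminal to a list of right-hand sides
    (tuples of symbols); `terminals` is the set of terminal symbols.
    """
    # Terminals terminate trivially; nonterminals start out as False.
    terminates = {t: True for t in terminals}
    for n in productions:
        terminates[n] = False
    # Propagate termination until no change.
    changed = True
    while changed:
        changed = False
        for n, rhss in productions.items():
            if terminates[n]:
                continue
            for rhs in rhss:
                # A production terminates iff all of its RHS symbols do;
                # used-but-undefined symbols count as non-terminating.
                if all(terminates.get(s, False) for s in rhs):
                    terminates[n] = True
                    changed = True
                    break
    return terminates
```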
# undefined_symbols()
# Find all symbols that were used in the grammar but not defined as tokens or
# grammar rules.  Returns a list of tuples (sym, prod) where sym is the symbol
# and prod is the production where the symbol was used.
# unused_terminals()
# Find all terminals that were defined, but not used by the grammar.  Returns
# a list of all symbols.
# unused_rules()
# Find all grammar rules that were defined but not used (maybe not reachable).
# Returns a list of productions.
# unused_precedence()
# Returns a list of tuples (term,precedence) corresponding to precedence
# rules that were never used by the grammar.  term is the name of the terminal
# on which precedence was applied and precedence is a string such as 'left' or
# 'right' corresponding to the type of precedence.
# _first()
# Compute the value of FIRST1(beta) where beta is a tuple of symbols.
# During execution of compute_first1, the result may be incomplete.
# Afterward (e.g., when called from compute_follow()), it will be complete.
# We are computing First(x1,x2,x3,...,xn)
# Add all the non-<empty> symbols of First[x] to the result.
# We have to consider the next x in beta,
# i.e. stay in the loop.
# We don't have to consider any further symbols in beta.
# There was no 'break' from the loop,
# so x_produces_empty was true for all x in beta,
# so beta produces empty as well.
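# A runnable sketch of this loop (illustrative only: the helper name
# first_of_string and the per-symbol `first` dict are assumptions, with
# '<empty>' marking epsilon):

```python
def first_of_string(beta, first):
    """Compute FIRST(x1 x2 ... xn) from per-symbol FIRST sets.

    `first` maps each symbol to its FIRST list; '<empty>' marks epsilon.
    """
    result = []
    for x in beta:
        x_produces_empty = False
        # Add all the non-<empty> symbols of First[x] to the result.
        for f in first[x]:
            if f == '<empty>':
                x_produces_empty = True
            elif f not in result:
                result.append(f)
        if not x_produces_empty:
            # We don't have to consider any further symbols in beta.
            break
    else:
        # No break: every x in beta can produce empty, so beta does too.
        result.append('<empty>')
    return result
```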
# compute_first()
# Compute the value of FIRST1(X) for all symbols
# Initialize to the empty set:
# Then propagate symbols until no change:
# compute_follow()
# Computes all of the follow sets for every non-terminal symbol.  The
# follow set is the set of all symbols that might follow a given
# non-terminal.  See the Dragon book, 2nd Ed. p. 189.
# If already computed, return the result
# If first sets not computed yet, do that first.
# Add '$end' to the follow list of the start symbol
# Here is the production set
# Okay. We got a non-terminal in a production
# Add elements of follow(a) to follow(b)
# build_lritems()
# This function walks the list of productions and builds a complete set of the
# LR items.  The LR items are stored in two ways:  First, they are uniquely
# numbered and placed in the list _lritems.  Second, a linked list of LR items
# is built for each production.  For example:
# Creates the list
# Precompute the list of productions immediately following
# This basic class represents a basic table of LR parsing information.
# Methods for generating the tables are not defined here.  They are defined
# in the derived class LRGeneratedTable.
# Bind all production function names to callable objects in pdict
# The following classes and functions are used to generate LR parsing tables on
# a grammar.
# digraph()
# traverse()
# The following two functions are used to compute set valued functions
# of the form:
# This is used to compute the values of Read() sets as well as FOLLOW sets
# in LALR(1) generation.
# Inputs:  X    - An input set
# F(x) = F'(x) U U{F(y) | x R y}
# Get y's related to x
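# The digraph/traverse pair can be sketched as below: a compact restatement
# of the DeRemer-Pennello relation-closure algorithm, not PLY's exact code.
# R(x) yields the y related to x and FP(x) returns the initial set F'(x).

```python
def digraph(X, R, FP):
    """Compute F(x) = F'(x) U union{F(y) | x R y} for all x in X.

    Nodes in the same relation cycle end up sharing a single result set.
    """
    N = {x: 0 for x in X}
    stack = []
    F = {}

    def traverse(x):
        stack.append(x)
        d = len(stack)
        N[x] = d
        F[x] = FP(x)
        for y in R(x):
            if N[y] == 0:
                traverse(y)
            # Propagate both the low-link number and the set value.
            N[x] = min(N[x], N[y])
            F[x] = F[x] | F[y]
        if N[x] == d:
            # x is the root of a strongly connected component.
            while True:
                z = stack.pop()
                N[z] = float('inf')
                F[z] = F[x]
                if z == x:
                    break

    for x in X:
        if N[x] == 0:
            traverse(x)
    return F
```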
# This class implements the LR table generation algorithm.  There are no
# public methods except for write()
# Set up the logger
# Internal attributes
# Action table
# Goto table
# Copy of grammar Production array
# Cache of computed gotos
# Cache of closures
# Internal counter used to detect cycles
# Diagnostic information filled in by the table generator
# List of conflicts
# Build the tables
# Compute the LR(0) closure operation on I, where I is a set of LR(0) items.
# Add everything in I to J
# Add B --> .G to J
# Compute the LR(0) goto function goto(I,X) where I is a set
# of LR(0) items and X is a grammar symbol.   This function is written
# in a way that guarantees uniqueness of the generated goto sets
# (i.e. the same goto set will never be returned as two different Python
# objects).  With uniqueness, we can later do fast set comparisons using
# id(obj) instead of element-wise comparison.
# First we look for a previously cached entry
# Now we generate the goto set in a way that guarantees uniqueness
# of the result
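# The uniqueness guarantee can be illustrated with a small wrapper
# (`interned` is a hypothetical name sketching the caching idea, not PLY's
# actual lr0_goto code):

```python
def interned(compute):
    """Wrap a set-producing function so equal results are the same object.

    With one canonical object per distinct goto set, later comparisons can
    use id(obj) instead of element-wise equality.
    """
    cache = {}

    def wrapper(I, X):
        g = compute(I, X)
        # Return the previously stored object for an identical set, if any.
        return cache.setdefault(frozenset(g), g)

    return wrapper
```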
# Compute the LR(0) sets of item function
# Loop over the items in C and each grammar symbols
# Collect all of the symbols that could possibly be in the goto(I,X) sets
# LALR(1) parsing is almost exactly the same as SLR except that instead of
# relying upon Follow() sets when performing reductions, a more selective
# lookahead set that incorporates the state of the LR(0) machine is utilized.
# Thus, we mainly just have to focus on calculating the lookahead sets.
# The method used here is due to DeRemer and Pennello (1982).
# DeRemer, F. L., and T. J. Pennello: "Efficient Computation of LALR(1)
# Further details can also be found in:
# compute_nullable_nonterminals()
# Creates a dictionary containing all of the non-terminals that might produce
# an empty production.
# find_nonterminal_trans(C)
# Given a set of LR(0) items, this function finds all of the non-terminal
# transitions.    These are transitions in which a dot appears immediately before
# a non-terminal.   Returns a list of tuples of the form (state,N) where state
# is the state number and N is the nonterminal symbol.
# The input C is the set of LR(0) items.
# dr_relation()
# Computes the DR(p,A) relationships for non-terminal transitions.  The input
# is a tuple (state,N) where state is a number and N is a nonterminal symbol.
# Returns a list of terminals.
# This extra bit is to handle the start state
# reads_relation()
# Computes the READS() relation (p,A) READS (t,C).
# Look for empty transitions
# compute_lookback_includes()
# Determines the lookback and includes relations
# LOOKBACK:
# This relation is determined by running the LR(0) state machine forward.
# For example, starting with a production "N : . A B C", we run it forward
# to obtain "N : A B C ."   We then build a relationship between this final
# state and the starting state.   These relationships are stored in a dictionary
# lookdict.
# INCLUDES:
# Computes the INCLUDE() relation (p,A) INCLUDES (p',B).
# This relation is used to determine non-terminal transitions that occur
# inside of other non-terminal transition states.   (p,A) INCLUDES (p', B)
# if the following holds:
# L is essentially a prefix (which may be empty), T is a suffix that must be
# able to derive an empty string.  State p' must lead to state p with the string L.
# Dictionary of lookback relations
# Dictionary of include relations
# Make a dictionary of non-terminal transitions
# Loop over all transitions and compute lookbacks and includes
# Okay, we have a name match.  We now follow the production all the way
# through the state machine until we get the . on the right hand side
# Check to see if this symbol and state are a non-terminal transition
# Yes.  Okay, there is some chance that this is an includes relation
# the only way to know for certain is whether the rest of the
# production derives empty
# No, forget it
# Appears to be a relation between (j,t) and (state,N)
# Go to next set
# Go to next state
# When we get here, j is the final state; now we have to locate the production
# This loop is comparing a production ". A B C" with "A B C ."
# compute_read_sets()
# Given a set of LR(0) items, this function computes the read sets.
# Inputs:  C        =  Set of LR(0) items
# Returns a set containing the read sets
# compute_follow_sets()
# Given a set of LR(0) items, a set of non-terminal transitions, a readset,
# and an include set, this function computes the follow sets
# Follow(p,A) = Read(p,A) U U {Follow(p',B) | (p,A) INCLUDES (p',B)}
# Inputs:
# Returns a set containing the follow sets
# add_lookaheads()
# Attaches the lookahead symbols to grammar rules.
# Inputs:    lookbacks         -  Set of lookback relations
# This function directly attaches the lookaheads to productions contained
# in the lookbacks set
# Loop over productions in lookback
# add_lalr_lookaheads()
# This function does all of the work of adding lookahead information for use
# with LALR parsing
# Determine all of the nullable nonterminals
# Find all non-terminal transitions
# Compute read sets
# Compute lookback/includes relations
# Compute LALR FOLLOW sets
# Add all of the lookaheads
# lr_parse_table()
# This function constructs the parse tables for SLR or LALR
# Goto array
# Action array
# Logger for output
# Action production array (temporary)
# Step 1: Construct C = { I0, I1, ... IN}, collection of LR(0) items
# This determines the number of states
# Build the parser table, state by state
# Loop over each production in I
# List of actions
# Start symbol. Accept!
# We are at the end of a production.  Reduce!
# Whoa. Have a shift/reduce or reduce/reduce conflict
# Need to decide on shift or reduce here
# By default we favor shifting. Need to add
# some precedence rules here.
# Shift precedence comes from the token
# Reduce precedence comes from rule being reduced (p)
# We really need to reduce here.
# Hmmm. Guess we'll keep the shift
# Reduce/reduce conflict.   In this case, we favor the rule
# that was defined first in the grammar file
# Get symbol right after the "."
# We are in a shift state
# Whoa, we have a shift/reduce or shift/shift conflict
# Do a precedence check.
# Reduce precedence comes from the rule that could have been reduced
# We decide to shift here... highest precedence to shift
# Hmmm. Guess we'll keep the reduce
# Print the actions associated with each terminal
# Print the actions that were not used. (debugging)
# Construct the goto table for this state
# write()
# This function writes the LR parsing tables to a file
# Change smaller to 0 to go back to original tables
# Factor out names to try and make smaller
# Write production table
# pickle_table()
# This function pickles the LR parsing tables to a supplied file object
# The following functions and classes are used to implement the PLY
# introspection features followed by the yacc() function itself.
# get_caller_module_dict()
# This function returns a dictionary containing all of the symbols defined within
# a caller further down the call stack.  This is used to get the environment
# associated with the yacc() call if none was provided.
# parse_grammar()
# This takes a raw grammar rule string and parses it into production data
# Split the doc string into lines
# This is a continuation of a previous rule
# ParserReflect()
# This class represents information extracted for building a parser including
# start symbol, error function, tokens, precedence list, action functions,
# etc.
# Get all of the basic information
# Validate all of the information
# Compute a signature over the grammar
# validate_modules()
# This method checks to see if there are duplicated p_rulename() functions
# in the parser module file.  Without this function, it is really easy for
# users to make mistakes by cutting and pasting code fragments (and it's a real
# bugger to try and figure out why the resulting parser doesn't work).  Therefore,
# we just do a little regular expression pattern matching of def statements
# to try and detect duplicates.
# Match def p_funcname(
# Get the start symbol
# Validate the start symbol
# Look for error handler
# Validate the error function
# Get the tokens map
# Validate the tokens
# Get the precedence map (if any)
# Validate and parse the precedence map
# Get all p_functions from the grammar
# Sort all of the actions by line number; make sure to stringify
# modules to make them sortable, since `line` may not uniquely sort all
# p functions
# Validate all of the p_functions
# Check for non-empty symbols
# Looks like a valid grammar rule
# Mark the file in which defined.
# Secondary validation step that looks for p_ definitions that are not functions
# or functions that look like they might be grammar rules.
# yacc(module)
# Build a parser
# Reference to the parsing method of the last built parser
# If pickling is enabled, table files are not created
# Get the module dictionary used for the parser
# If no __file__ attribute is available, try to obtain it from the __module__ instead
# If no output directory is set, the location of the output files
# is determined according to the following rules:
# Determine whether the module is part of a package.
# If so, fix the tabmodule setting so that tables load correctly
# Set start symbol if it's specified directly using an argument
# Collect parser information from the dictionary
# Check signature against table files (if any)
# Read the tables
# Validate the parser information
# Create a grammar object
# Set precedence level for terminals
# Add productions to the grammar
# Set the grammar start symbols
# Verify the grammar structure
# Print out all productions to the debug log
# Find unused non-terminals
# Run the LRGeneratedTable on the grammar
# Report shift/reduce and reduce/reduce conflicts
# Write out conflicts to the output file
# Write the table file if requested
# Write a pickled version of the tables
# Build the parser
# PLY package
# Author: David Beazley (dave@dabeaz.com)
# cpp.py
# Author:  David Beazley (http://www.dabeaz.com)
# Copyright (C) 2017
# All rights reserved
# This module implements an ANSI-C style lexical preprocessor for PLY.
# Some Python 3 compatibility shims
# Default preprocessor lexer definitions.   These tokens are enough to get
# a basic preprocessor working.   Other modules may import these if they want
# Whitespace
# Identifier
# Integer literal
# Floating literal
# String literal
# Character constant 'c' or L'c'
# replace with one space or a number of '\n'
# Line comment
# replace with '\n'
# trigraph()
# Given an input string, this function replaces all trigraph sequences.
# The following mapping is used:
# ------------------------------------------------------------------
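# The mapping can be expressed as a one-pass regex substitution; a sketch of
# the replacement idea, covering the nine ANSI C trigraphs:

```python
import re

# The nine ANSI C trigraphs and their single-character replacements.
_trigraph_pat = re.compile(r"\?\?[=/'()!<>\-]")
_trigraph_rep = {
    '=': '#', '/': '\\', "'": '^', '(': '[', ')': ']',
    '!': '|', '<': '{', '>': '}', '-': '~',
}

def trigraph(text):
    """Replace every trigraph sequence (e.g. ??= -> #) in the input."""
    # The third character of the match selects the replacement.
    return _trigraph_pat.sub(lambda m: _trigraph_rep[m.group()[2]], text)
```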
# Macro object
# This object holds information about preprocessor macros
# When a macro is created, the macro replacement token sequence is
# pre-scanned and used to create patch lists that are later used
# during macro expansion
# Preprocessor object
# Object representing a preprocessor.  Contains macro definitions,
# include directories, and other information
# Probe the lexer for selected tokens
# tokenize()
# Utility function. Given a string of text, tokenize into a list of tokens
# error()
# Report a preprocessor error/warning of some kind
# lexprobe()
# This method probes the preprocessor lexer object to discover
# the token types of symbols that are important to the preprocessor.
# If this works right, the preprocessor will simply "work"
# with any suitable lexer regardless of how tokens have been named.
# Determine the token type for identifiers
# Determine the token type for integers
# Determine the token type for strings enclosed in double quotes
# Determine the token type for whitespace--if any
# Determine the token type for newlines
# Check for other characters used by the preprocessor
# add_path()
# Adds a search path to the preprocessor.
# group_lines()
# Given an input string, this function splits it into lines.  Trailing whitespace
# is removed.   Any line ending with \ is grouped with the next line.  This
# function forms the lowest level of the preprocessor---grouping text into
# a line-by-line format.
# tokenstrip()
# Remove leading/trailing whitespace tokens from a token list
# collect_args()
# Collects comma separated arguments from a list of tokens.   The arguments
# must be enclosed in parentheses.  Returns a tuple (tokencount,args,positions)
# where tokencount is the number of tokens consumed, args is a list of arguments,
# and positions is a list of integers containing the starting index of each
# argument.  Each argument is represented by a list of tokens.
# When collecting arguments, leading and trailing whitespace is removed
# from each argument.
# This function properly handles nested parentheses and commas---these do not
# define new arguments.
# Search for the opening '('.
# Missing end argument
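# A simplified sketch of the argument-collection idea over plain string
# tokens (collect_args_flat is a hypothetical name; real tokens are lexer
# token objects and whitespace stripping is omitted here):

```python
def collect_args_flat(tokens):
    """Split a token list starting at '(' into comma-separated arguments.

    Nested parentheses do not start new arguments.  Returns (count, args)
    where count is the number of tokens consumed.
    """
    assert tokens[0] == '('
    args, current, depth = [], [], 1
    for i, tok in enumerate(tokens[1:], 1):
        if tok == '(':
            depth += 1
        elif tok == ')':
            depth -= 1
            if depth == 0:
                # Closing ')' of the argument list: finish the last argument.
                args.append(current)
                return (i + 1, args)
        elif tok == ',' and depth == 1:
            # Top-level comma separates arguments.
            args.append(current)
            current = []
            continue
        current.append(tok)
    raise SyntaxError('missing closing )')
```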
# macro_prescan()
# Examine the macro value (token sequence) and identify patch points
# This is used to speed up macro expansion later on---we'll know
# right away where to apply patches to the value to form the expansion
# Standard macro arguments
# String conversion expansion
# Variadic macro comma patch
# Conversion of argument to a string
# Concatenation
# Standard expansion
# macro_expand_args()
# Given a Macro and list of arguments (each a token list), this method
# returns an expanded version of a macro.  The return value is a token sequence
# representing the replacement macro tokens
# Make a copy of the macro token sequence
# Make string expansion patches.  These do not alter the length of the replacement sequence
# Make the variadic macro comma patch.  If the variadic macro argument is empty, we get rid of the comma
# Make all other patches.   The order of these matters.  It is assumed that the patch list
# has been sorted in reverse order of patch location since replacements will cause the
# size of the replacement sequence to expand from the patch point.
# Concatenation.   Argument is left unexpanded
# Normal expansion.  Argument is macro expanded first
# Get rid of removed comma if necessary
# expand_macros()
# Given a list of tokens, this function performs macro expansion.
# The expanded argument is a dictionary that contains macros already
# expanded.  This is used to prevent infinite recursion.
# Yes, we found a macro match
# A simple macro
# A macro with arguments
# Get macro replacement text
# evalexpr()
# Evaluate an expression token sequence for the purposes of evaluating
# integral expressions.
# tokens = tokenize(line)
# Search for defined macros
# Strip off any trailing suffixes
# parsegen()
# Parse an input string.
# Replace trigraph sequences
# Preprocessor directive
# insert necessary whitespace instead of eaten tokens
# We only pay attention if outer "if" allows this
# If already True, we flip enable to False
# If False, but not triggered yet, we'll check expression
# Unknown preprocessor directive
# Normal text
# include()
# Implementation of file-inclusion
# Try to extract the filename and then process an include file
# Include <...>
# define()
# Define a new macro
# A normal macro
# If, for some reason, "." is part of the identifier, strip off the name for the purposes
# of macro expansion
# undef()
# Undefine a macro
# parse()
# Parse input text.
# token()
# Method to return individual tokens
# Run a preprocessor
# ply: lex.py
# This tuple contains known string types
# Python 2.6
# Python 3.0
# This regular expression is used to match valid token names
# Exception thrown when invalid token encountered and no default error
# handler is defined.
# Token class.  This class is used to represent the tokens produced.
# logging module.
# The following Lexer class implements the lexer runtime.   There are only
# a few public methods and attributes:
# Master regular expression. This is a list of
# tuples (re, findex) where re is a compiled
# regular expression and findex is a list
# mapping regex group numbers to rules
# Current regular expression strings
# Dictionary mapping lexer states to master regexes
# Dictionary mapping lexer states to regex strings
# Dictionary mapping lexer states to symbol names
# Current lexer state
# Stack of lexer states
# State information
# Dictionary of ignored characters for each state
# Dictionary of error functions for each state
# Dictionary of eof functions for each state
# Optional re compile flags
# Actual input data (as a string)
# Current position in input text
# Length of the input text
# Error rule (if any)
# EOF rule (if any)
# List of valid tokens
# Ignored characters
# Literal characters that can be passed through
# Module
# Current line number
# Optimized mode
# If the object parameter has been supplied, it means we are attaching the
# lexer to a new object.  In this case, we have to rebind all methods in
# the lexstatere and lexstateerrorf tables.
# writetab() - Write lexer information to a table file
# Rewrite the lexstatere table, replacing function objects with function names
# readtab() - Read lexer information from a tab file
# input() - Push a new string into the lexer
# Pull off the first character to see if s looks like a string
# begin() - Changes the lexing state
# push_state() - Changes the lexing state and saves old on stack
# pop_state() - Restores the previous state
# current_state() - Returns the current lexing state
# skip() - Skip ahead n characters
# opttoken() - Return the next token from the Lexer
# Note: This function has been carefully implemented to be as fast
# as possible.  Don't make changes unless you really know what
# you are doing
# Make local copies of frequently referenced attributes
# This code provides a short-circuit path for whitespace, tabs, and other ignored characters
# Look for a regular expression match
# Create a token for return
# If no token type was set, it's an ignored token
# If token is processed by a function, call it
# Set additional attributes useful in token rules
# Every function must return a token; if it returns nothing, we just move to the next token
# This is here in case user has updated lexpos.
# This is here in case there was a state change
# Verify type of the token.  If not in the token map, raise an error
# No match, see if in literals
# No match. Call t_error() if defined.
# Error method didn't change text position at all. This is an error.
# Iterator interface
# The functions and classes below are used to collect lexing information
# and build a Lexer object from it.
# _get_regex(func)
# Returns the regular expression assigned to a function either as a doc string
# or as a .regex attribute attached by the @TOKEN decorator.
# _funcs_to_names()
# Given a list of regular expression functions, this converts it to a list
# suitable for output to a table file
# _names_to_funcs()
# Given a list of regular expression function names, this converts it back to
# functions.
# _form_master_re()
# This function takes a list of all of the regex components and attempts to
# form the master regular expression.  Given limitations in the Python re
# module, it may be necessary to break the master regex into separate expressions.
# Build the index to function map for the matching engine
# def _statetoken(s,names)
# Given a declaration name s of the form "t_" and a dictionary whose keys are
# state names, this function returns a tuple (states,tokenname) where states
# is a tuple of state names and tokenname is the name of the token.  For example,
# calling this with s = "t_foo_bar_SPAM" might return (('foo','bar'),'SPAM')
# LexerReflect()
# This class represents information needed to build a lexer as extracted from a
# user's input file.
# Get the literals specifier
# Validate literals
# Build statemap
# Get all of the symbols with a t_ prefix and sort them into various
# categories (functions, strings, error functions, and ignore characters)
# Now build up a list of functions and a list of strings
# Mapping of symbols to token names
# Symbols defined as functions
# Symbols defined as strings
# Ignore strings by state
# Error functions by state
# EOF functions by state
# Sort the functions by line number
# Sort the strings by regular expression length
# Validate all of the t_rules collected
# Validate all rules defined by functions
# Validate all rules defined by strings
# validate_module()
# This checks to see if there are duplicated t_rulename() functions or strings
# in the parser input file.  This is done using a simple regular expression
# match on each line in the source code of the given module.
# lex(module)
# Build all of the regular expression rules from definitions in the supplied module
# Get the module dictionary used for the lexer
# Dump some basic debugging information
# Build a dictionary of valid token names
# Get literals specification
# Get the stateinfo dictionary
# Build the master regular expressions
# Add rules defined by functions first
# Now add all of the simple rules
# For inclusive states, we need to add the regular expressions from the INITIAL state
# Set up ignore variables
# Set up error functions
# Set up eof functions
# Check state information for ignore and error rules
# Create global versions of the token() and input() functions
# If in optimize mode, we write the lextab
# runmain()
# This runs the lexer as a main program
# @TOKEN(regex)
# This decorator function can be used to set the regular expression on a function
# when its docstring might need to be set in an alternative way
# Alternative spelling of the TOKEN decorator
# ctokens.py
# Token specifications for symbols in ANSI C and C++.  This file is
# meant to be used as a library in other tokenizers.
# Reserved words
# Literals (identifier, integer constant, float constant, string constant, char const)
# Operators (+,-,*,/,%,|,&,~,^,<<,>>, ||, &&, !, <, <=, >, >=, ==, !=)
# Assignment (=, *=, /=, %=, +=, -=, <<=, >>=, &=, ^=, |=)
# Increment/decrement (++,--)
# Ternary operator (?)
# Delimiters ( ) [ ] { } , . ; :
# Delimiters
# Comment (C-Style)
# Comment (C++-Style)
# ply: ygen.py
# This is a support program that auto-generates different versions of the YACC parsing
# function with different features removed for the purposes of performance.
# Users should edit the method LRParser.parsedebug() in yacc.py.   The source code
# for that method is then used to create the other methods.   See the comments in
# yacc.py for further details.
# Get the original source
# Filter the DEBUG sections out
# Filter the TRACKING sections out
# Replace the parser source sections with updated versions
#TODO: Change use_default to False when upgrading to version 3.
# pylint: disable=redefined-builtin,dangerous-default-value,exec-used
# Do not pass local state so it can recursively call itself.
# Schema from draft-06 can be just a boolean value.
# When two blocks have the same condition (such as value has to be dict),
# do the check only once and keep it under one block.
# pylint: disable=dangerous-default-value,too-many-arguments
# pylint: disable=too-many-instance-attributes,too-many-public-methods
# spaces
# Any extra library should be here to be imported only once.
# Lines are imports to be printed in the file and objects
# key-value pair to pass to compile function directly.
# map schema URIs to validation function names for functions
# that are not yet generated, but need to be generated
# validation function names that are already done
# add main function to `self._needed_validation_functions`
# Generate parts that are referenced and not yet generated
# While generating a validation function, it may be necessary to generate
# a new one, which is then added back to `_needed_validation_functions`.
# Hence a while loop is used instead of a for loop.
# needed because ref overrides any sibling keywords
# call validation function
# Add name_prefix to the name when it is being outputted.
# Unfortunately, using `pprint.pformat` causes errors,
# especially with big regexes
# Finds any un-escaped $ (including inside []-sets)
# pylint: disable=line-too-long
# I thought about using the ipaddress module instead of regexps, for example, but the
# performance difference is large. With the module I measured over 100 ms,
# vs. 9 ms with a regex! Other modules are also inefficient or not available in the standard
# library. Some regexps are not 100% precise, but they are good enough, fast, and dependency-free.
# Check dependencies before properties generates default values.
# When we know it's passing (at least once), we do not need to do another expensive try-except.
# When we know it's failing (one of means exactly once), we do not need to do another expensive try-except.
# Checking custom formats - user is allowed to override default formats.
# Format regex is used only in meta schemas.
# For proper multiplication check of floats we need to use decimals,
# because for example 19.01 / 0.01 = 1901.0000000000002.
# For example, 1e308 / 0.123456789
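The rounding issue those comments describe can be reproduced directly; Decimal arithmetic over the textual values sidesteps it:

```python
from decimal import Decimal

# Binary floats make the multiple-of check unreliable:
float_quotient = 19.01 / 0.01    # 1901.0000000000002, not 1901.0

# Decimals built from the string representations divide exactly:
decimal_remainder = Decimal("19.01") % Decimal("0.01")
```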
#'regex': r'',
# pylint: disable=duplicate-code
# Well known name
# Unique name
# Quoting rules: single quotes ('') needed if the value contains a comma.
# A literal ' can only be represented outside single quotes, by
# backslash-escaping it. No escaping inside the quotes.
# The simplest way to handle this is to use '' around every value, and
# use '\'' (end quote, escaped ', restart quote) for literal ' .
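The quoting scheme those comments describe ('' around every value, '\'' for a literal quote) can be sketched as follows; the function name is illustrative:

```python
def quote_value(value: str) -> str:
    # Wrap everything in single quotes; a literal ' becomes '\''
    # (end quote, backslash-escaped quote, restart quote).
    return "'" + value.replace("'", "'\\''") + "'"
```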
# Append data, ignoring any truncated integers at the end.
# If there may be file descriptors, we try to read 1 message at a time.
# The reference implementation of D-Bus defaults to allowing 16 FDs per
# message, and the Linux kernel currently allows 253 FDs per sendmsg()
# call. So hopefully allowing 256 FDs per recvmsg() will always suffice.
# D-Bus booleans take 4 bytes
# unsigned 8 bit
# signed 16 bit
# unsigned 16 bit
# bool (32-bit)
# signed 32-bit
# unsigned 32-bit
# signed 64-bit
# unsigned 64-bit
# file descriptor (uint32 index in a separate list)
# String
# Object path
# print('Array start', pos)
# Array of bytes
# print('Array elem', pos)
# Convert list of 2-tuples to dict
# Fail fast if we know in advance that the data is too big:
# print('Array ser: pad1={!r}, len_data={!r}, pad2={!r}, buf={!r}'.format(
# print('variant', pos)
# print('variant sig:', repr(sig), pos)
# print('variant done', (sig, val), pos)
# Based on http://norvig.com/lispy.html
# The first 16 bytes tell us the message size
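A sketch of reading that 16-byte fixed header, per the D-Bus wire format (the function name is illustrative, not Jeepney's API):

```python
import struct

def total_message_size(fixed: bytes) -> int:
    # Fixed header: endianness flag ('l' = little, 'B' = big), message
    # type, flags, protocol version, body length, serial, and the length
    # of the header-fields array.
    endian = "<" if fixed[:1] == b"l" else ">"
    _, _, _, _, body_len, _, fields_len = struct.unpack(endian + "BBBBIII", fixed)
    # The header (16 fixed bytes + fields array) is padded to an 8-byte boundary.
    padded_fields = (fields_len + 7) & ~7
    return 16 + padded_fields + body_len
```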
# Final chunk
# TODO: fallbacks to X, filesystem
# Jeepney already includes bindings for these common interfaces
# Many D-Bus services have a main object at a predictable name, e.g.
# org.freedesktop.Notifications -> /org/freedesktop/Notifications
# print(xml)
# If no --output, guess a (hopefully) reasonable name.
# e.g. path is '/'
# States from the D-Bus spec (plus 'Success'). Not all used in Jeepney.
# We only support EXTERNAL authentication, but if we allow others,
# 'REJECTED <mechs>' would tell us to try another one.
# The protocol allows us to continue if FD passing is rejected,
# but Jeepney assumes that if you enable FD support you need it,
# so we fail rather than fall back
# We only expect one line before we reply
# Avoid consuming lots of memory if the server is not sending what we
# expect. There doesn't appear to be a specified maximum line length,
# but 8192 bytes leaves a sizeable margin over all the examples in the
# spec (all < 100 bytes per line).
# Old name (behaviour on errors has changed, but should work for standard case)
# Wait for data or a signal on the stop pipe
# Clear any data on the stop pipe
# Code to run in receiver thread ------------------------------------
# Send errors to any tasks still waiting for a message.
# The function below is copied from trio, which is under the MIT license:
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
# EBADF on Unix, ENOTSOCK on Windows
# type: Optional[memoryview]
# _send_data is copied & modified from trio's SocketStream.send_all() .
# See above for the MIT license.
# A previous message was partly sent - finish sending it now.
# Sending cancelled mid-message. Keep track of the remaining data
# so it can be sent before the next message, otherwise the next
# message won't be recognised.
# Once data is read, it must be given to the parser with no
# checkpoints (where the task could be cancelled).
# not self.enable_fds
# Our closing is currently sync, but AsyncResource objects must have aclose
# Authentication
# BSD: send credentials message to authenticate (kernel fills in data)
# Linux: no ancillary data needed, bus checks with SO_PEERCRED
# Say *Hello* to the message bus - this must be the first message, and the
# reply gives us our unique name.
# Task management -------------------------------------------
# It doesn't matter if we receive a partial message - the connection
# should ensure that whatever is received is fed to the parser.
# Ensure trio checkpoint
# Code to run in receiver task ------------------------------------
# Closing a memory channel can't block, but it only has an
# async close method, so we need to shield it from cancellation.
# Authentication flow
# Authentication finished
# Throw exception if receive task failed
# If sendmsg succeeds, I think ancillary data has been sent atomically?
# So now we just need to send any leftover normal data.
# Message routing machinery
# Say Hello, get our unique name
# Not the reply
# To impose the overall auth timeout, we'll update the timeout on the socket
# before each send/receive. This is ugly, but we can't use the socket for
# anything else until this has succeeded, so this should be safe.
# Put the socket back in blocking mode
# Windows doesn't let us delete or overwrite files that are being run,
# but it does let us rename/move them. To get around this issue, we can move
# the file to a temporary folder (to be deleted at a later time).
# So, if safe_rm is True, we ignore any errors and move the file to the trash with the code below
# move it to be deleted later if it still exists
# Make sure to use the real root path. This matters especially on Windows when using the packaged app
# (Microsoft Store) version of Python, which uses path redirection for sandboxing.
# See https://github.com/pypa/pipx/issues/1164
# Remove PYTHONPATH because some platforms (macOS with Homebrew) add pipx
# Remove __PYVENV_LAUNCHER__ because it can cause the wrong python binary
# Make sure that Python writes output in UTF-8
# Make sure we install package to venv, not userbase dir
# windows cannot take Path objects, only strings
# TODO: Switch to using `-P` / PYTHONSAFEPATH instead of running in
# separate directory in Python 3.11
# for any useful information in stdout, `pip install` must be run without
# In order of most useful to least useful
# Save STDOUT and STDERR to file in pipx/logs/
# make sure we show cursor again before handing over control
# All emojis that pipx might possibly use
# pipx shell exit codes
# Valid package specifiers for pipx:
# If package_spec is valid pypi name, pip will always treat it as a
# not a valid PEP508 package specification
# valid PEP508 package specification
# It might be a local archive
# If this looks like a URL, treat it as such.
# Treat the input as a local path if it does not look like a PEP 508
# specifier nor a URL. In this case we want to split out the extra part.
# It is a valid local path without "./"
# Use valid_local_path
# option == "--constraint=some_path"
# package name supplied by user might not match package found in URL,
# also if package name ends with archive extension, it might be a local archive file,
# only handles what json.JSONEncoder doesn't understand by default
# Only change this if file format changes
# V0.1 -> original version
# V0.2 -> Improve handling of suffixes
# V0.3 -> Add man pages fields
# V0.4 -> Add source interpreter
# V0.5 -> Add pinned
# We init this instance with reasonable fallback defaults for all
# always True for main_package
# handle older suffixed packages gracefully
# Reset self if problem reading
# Overrides for testing
# Much of the code in this module is adapted with extreme gratitude from
# https://github.com/tusharsadhwani/yen/blob/main/src/yen/github.py
# musl doesn't exist
# python_version can be a bare version number like "3.9" or a "binary name" like python3.10
# we'll convert it to a bare version number
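The conversion those lines mention might look like this (the helper name is an assumption, not pipx's actual function):

```python
import re

def bare_version(python_version: str) -> str:
    # Accept a bare version like "3.9" or a binary name like
    # "python3.10" and return just the version number.
    match = re.fullmatch(r"(?:python)?(\d+(?:\.\d+)*)", python_version)
    if match is None:
        raise ValueError(f"not a recognised Python version: {python_version!r}")
    return match.group(1)
```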
# download the python build gz
# unpack the python build
# the python installation we want is nested in the tarball
# under a directory named 'python'. We move it to the install
# Python standalone builds are typically ~32 MB in size. To avoid
# ballooning memory usage, we read the file in chunks
# Calculate checksum
# Validate checksum
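A minimal sketch of that chunked read-and-checksum pattern (names are illustrative, not pipx's API):

```python
import hashlib

def copy_with_checksum(src, dest, chunk_size=64 * 1024):
    # Read in bounded chunks so a ~32 MB archive never sits fully in
    # memory, feeding the hash as we go.
    digest = hashlib.sha256()
    while chunk := src.read(chunk_size):
        dest.write(chunk)
        digest.update(chunk)
    return digest.hexdigest()
```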
# update index after 30 days
# update index
# raise
# linux suffixes are nested under glibc or musl builds
# fallback to musl if libc version is not found
# Suffixes are in order of preference.
# Don't override already found versions, they are in order of preference
# sort by semver
# type: ignore[import-not-found,no-redef]
# Add an empty extra to enable evaluation of non-extra markers
# "entry_points" entry in setup.py are found here
# WINDOWS adds .exe to entry_point name
# search installed files
# "scripts" entry in setup.py is found here (test w/ awscli)
# vast speedup by ignoring all paths not above distribution root dir
# not sure what is found here
# Initialize: we have already visited root
# avoid infinite recursion, avoid duplicates in info
# recursively search for more
# In Windows, editable packages have additional files starting with the
# Add "*-script.py", "*.exe.manifest" only to app_paths to make
# Collect the generator created from metadata.distributions()
# (see `itertools.chain.from_iterable`) into a tuple because we
# need to iterate over it multiple times in `_dfs_package_apps`.
# Tuple is chosen over a list because the program only iterates over
# the distributions and never modifies them.
# always use shared libs when creating a new venv
# This is OK, because if no metadata, we are pipx < v0.15.0.0 and
# write path pointing to the shared libs site-packages directory
# example pipx_pth location:
# example shared_libs.site_packages location:
# https://docs.python.org/3/library/site.html
# A path configuration file is a file whose name has the form 'name.pth'.
# its contents are additional items (one per line) to be added to sys.path
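The .pth mechanism those comments rely on can be demonstrated with the stdlib alone (file and directory names here are illustrative):

```python
import site
import sys
import tempfile
from pathlib import Path

# Any 'name.pth' file in a site directory adds its lines to sys.path
# when that directory is processed (here, explicitly via addsitedir).
with tempfile.TemporaryDirectory() as tmp:
    extra = Path(tmp) / "shared-site-packages"
    extra.mkdir()
    (Path(tmp) / "pipx_shared.pth").write_text(f"{extra}\n")
    site.addsitedir(tmp)
    found = str(extra) in sys.path
```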
# TODO: setuptools and wheel? Original code didn't bother
# but shared libs code does.
# package_name in package specifier can mismatch URL due to user error
# check syntax and clean up spec and pip_args
# do not use -q with `pip install` so subprocess_post_check_pip_errors
# no logging because any errors will be specially logged by
# Verify package installed ok
# Note: We want to install everything at once, as that lets
# pip resolve conflicts correctly.
# Try to infer app name from dist's metadata if given
# local path
# No [pipx.run] entry point; default to run console script.
# Evaluate and execute the entry point.
# TODO: After dropping support for Python < 3.9, use
# "entry_point.module" and "entry_point.attr" instead.
# match the indent of argparse options
# Stripping the single quote that can be parsed from several shells
# make sure --editable is last because it needs to be right before
# We should never reach here because run() is NoReturn.
# modify usage text to show required app argument
# add a double-dash to usage text to show requirement before app
# don't use utils.mkdir, to prevent emission of log message
# Determine logging level, a value between 0 and 50
# "incremental" is False so previous pytest tests don't accumulate handlers
# we manually discard a first -- because using nargs=argparse.REMAINDER
# since we would like app to be required but not in a separate argparse
# No animation, just a single print of message
# for Windows pre-ANSI-terminal-support (before Windows 10 TH2 (v1511))
# https://stackoverflow.com/a/10455937
# hello mypy
# Colorama is Windows only package
# Running from source. Add pipx's source code to the system
# path to allow direct invocation, such as:
# Python command could be `python3` or `python3.x` without micro version component
# The following code was copied from https://github.com/uranusjr/pipx-standalone
# which uses the same technique to build a completely standalone pipx
# distribution.
# If we are running under the Windows embeddable distribution,
# venv isn't available (and we probably don't want to use the
# embeddable distribution as our applications' base Python anyway)
# so we try to locate the system Python and use that instead.
# If the path contains "WindowsApps", it's the store python
# Special treatment to detect Windows Store stub.
# https://twitter.com/zooba/status/1212454929379581952
# Cover the 9009 return code pre-emptively.
# A real Python should print version, Windows Store stub won't.
# This executable seems to work.
# ignore installed packages to ensure no unexpected patches from the OS vendor
# are used
# Don't try to upgrade multiple times per run
# If the app is a script, return its content.
# Return None if it should be treated as a package name.
# Look for a local file first.
# Check for a URL
# Otherwise, it's a package
# Note that the environment name is based on the identified
# requirements, and *not* on the script name. This is deliberate, as
# it ensures that two scripts with the same requirements can use the
# same environment, which means fewer environments need to be
# managed. The requirements are normalised (in
# _get_requirements_from_script), so that irrelevant differences in
# whitespace, and similar, don't prevent environment sharing.
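A sketch of that idea, with an assumed hashing scheme (not pipx's exact one): normalise and sort the requirements, then hash them, so any two scripts with the same dependency set share one environment name.

```python
import hashlib

def environment_name(requirements: list[str]) -> str:
    # Normalise, sort, and hash so irrelevant differences in
    # whitespace or ordering don't prevent environment sharing.
    canonical = "\n".join(sorted(req.strip().lower() for req in requirements))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:15]
```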
# For any package, we need to just use the name
# Raw URLs to scripts are supported, too, so continue if
# we can't parse this as a package
# If there's a single app inside the package, run that by default
# Let future _remove_all_expired_venvs know to remove this
# 15 chosen arbitrarily
# This regex comes from the inline script metadata spec
# Windows is currently getting un-normalized line endings, so normalize
# Validate the requirement
# Use the normalised form of the requirement
# NOTE: using this method to detect pip user-installed pipx will return
# https://docs.python.org/3/install/index.html#inst-alt-install-user
# Technically, even on Unix this depends on the filesystem
# The following is to satisfy mypy that python_version is str and not
# package_binary_names is used only if local_bin_path cannot use symlinks.
# It is necessary for non-symlink systems to return valid app_paths.
# sometimes symlinks can resolve to a file of a different name
# (in the case of ansible for example) so checking the resolved paths
# is not a reliable way to determine if the symlink exists.
# We always use the stricter check on non-Windows systems. On
# Windows, we use a less strict check if we don't have a symlink.
# shortcut if valid PyPI name
# NOTE: if pypi name and installed package name differ, this means pipx
# Valid metadata for venv
# No metadata from pipx_metadata.json, but valid python interpreter.
# In pre-metadata-pipx venv.root.name is name of main package
# In pre-metadata-pipx there is no suffix
# We make the conservative assumptions: no injected packages,
# not include_dependencies.  Other PackageInfo fields are irrelevant
# No metadata and no valid python interpreter.
# We'll take our best guess on what to uninstall here based on symlink
# location for symlink-capable systems.
# The heuristic here is any symlink in ~/.local/bin pointing to
# .local/share/pipx/venvs/VENV_NAME/{bin,Scripts} should be uninstalled.
# For non-symlink systems we give up and return an empty set.
# package_spec is anything pip-installable, including package_name, vcs spec,
# Any failure to install will raise PipxError, otherwise success
# Combined collection of package specifications
# Remove duplicates and order deterministically
# Inject packages
# Based on https://github.com/pypa/pip/blob/main/src/pip/_internal/req/req_file.py
# Strip comments and filter empty lines
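A sketch of that filtering step, modelled loosely on pip's requirements-file parser (the names are assumptions):

```python
import re

# Matches a '#' at line start or preceded by whitespace, to end of line.
_COMMENT_RE = re.compile(r"(^|\s+)#.*$")

def filter_requirement_lines(lines):
    for line in lines:
        stripped = _COMMENT_RE.sub("", line).strip()
        if stripped:
            yield stripped
```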
# Reset venv_dir to None ready to install the next package in the list
# Enable installing shared library `pip` with `pipx`
# Install the main package
# Install the injected packages
# in case legacy original dir name
# install main package first
# now install injected packages
# This should never happen, but package_or_url is type
# iterate on all packages and reinstall them
# for the first one, we also trigger
# a reinstall of shared libs beforehand
# Upgrade shared libraries (pip, setuptools and wheel)
# Any error in upgrade will raise PipxError (e.g. from venv.upgrade_package())
# Any failure to upgrade will raise PipxError, otherwise success
# noqa:F401
# copy the current environment variables and add the values from
# `env`
# execvpe on Windows returns control immediately
# rather than once the command has finished.
# A type alias for a string path to be used for the paths in this file.
# These paths may flow to `open()` and `shutil.move()`; `shutil.move()`
# only accepts string paths, not byte paths or file descriptors. See
# https://github.com/python/typeshed/pull/6832.
# Should work without __file__, e.g. in REPL or IPython notebook.
# will work for .py files
# Locate the .env file
# Load the .env file
# TODO: signals support
# The command was registered in a different name in the command loader
# If we are piped to another process, it may close early and send a
# SIGPIPE: https://docs.python.org/3/library/signal.html#note-on-sigpipe
# TODO: Custom error exit codes
# Errors must be ignored, full binding/validation
# happens later when the command is known.
# Makes ArgvInput.first_argument() able to
# distinguish an option from an argument.
# If the command is namespaced we rearrange
# the input to parse it as a single argument
# Bind before the console.command event,
# so the listeners have access to the arguments and options
# Ignore invalid option/arguments for now,
# to allow the listeners to customize the definition
# os.get_terminal_size() is unsupported # noqa: ERA001
# Get Levenshtein distance between the input and each command name
# Only keep results with a distance below the threshold
# Display results with shortest distance first
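The three steps above can be sketched as follows; the threshold value is an assumption, not the library's actual constant:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def suggest(name: str, commands: list[str], threshold: int = 3) -> list[str]:
    # Keep only close matches, nearest first.
    scored = sorted((levenshtein(name, c), c) for c in commands)
    return [c for distance, c in scored if distance <= threshold]
```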
# Strip the application name
# If it's a long option, consider that
# everything after "--" is the option name.
# Otherwise, use the last character
# (if it's a short option set, only the last one
# can take a value with space separator).
# noop
# Options with values:
# For long options, test for '--option=' at beginning
# For short options, test for '-o' at beginning
# An option with a value and no space
# If the input is expecting another argument, add it
# If the last argument is a list, append the token to it
# Unexpected argument
# If the option accepts a value, either required or optional,
# we check if there is one
# Skip spaces
# Skip first delimiter
# Skip last delimiter
# Check if this is a valid argument index
# abs(x + (x < 0)) to normalize negative indices
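The expression in that comment behaves like this (the function name is illustrative):

```python
def normalized_index(x: int) -> int:
    # For x >= 0 the value is unchanged; for x < 0, -1 maps to 0,
    # -2 maps to 1, and so on, because (x < 0) evaluates to 1.
    return abs(x + (x < 0))
```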
# Follow https://no-color.org/
# noqa: SIM112
# Checking for Windows version
# If we have a compatible version
# activate color support
# Activate colors if possible
# Multiply lines by 2 to cater for each new line added between content
# Move cursor up n lines
# Erase to end of screen
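The two escape sequences those comments refer to, assuming a VT100-style terminal:

```python
def move_up_and_clear(lines: int) -> str:
    # CSI n A moves the cursor up n lines; CSI 0 J erases from the
    # cursor to the end of the screen.
    return f"\x1b[{lines}A\x1b[0J"
```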
# Collapse all spaces.
# Check for a separated comma values
# There is a title
# add the width of the following columns (number of colspans).
# Remove any line breaks and replace them with a single newline
# Create a two dimensional dict (rowspan x colspan)
# we need to know if unmerged_row will be merged or inserted into rows
# insert cell into row at cell_key position
# insert empty value at column position
# exclude grouped columns.
# Add highlighted text style
# Disable icanon (so we can read each keypress) and
# echo (we'll do echoing here instead)
# Read a keypress
# Backspace character
# Move cursor backwards
# Pop the last character off the end of our string
# Did we read an escape sequence
# A = Up Arrow. B = Down Arrow
# Echo out remaining chars for current match
# If typed characters match the beginning
# chunk of value (e.g. [AcmeDe]moBundle)
# Erase characters from cursor to end of line
# Save cursor position
# Write highlighted text
# Restore cursor position
# Encoding line
# End of source
# New line
# The token spans multiple lines
# If simple rendering wouldn't show anything useful, abandon it.
# make mypy happy
# Options
# If we have an IO, ensure we write to the error output
# Disable overwrite when output does not support ANSI codes.
# Set a reasonable redraw frequency so output isn't flooded
# Draw regardless of other limits
# Throttling
# Draw each step period, but not too late
# try to use the _nomax variant if available
# get the string length of each sub-line in a multiline format
# break reference cycle
# https://docs.python.org/3/library/inspect.html#the-interpreter-stack
# Global options
# Commands + options
# newline
# We either have a command like `poetry add` or a nested (namespaced)
# command like `poetry cache clear`.
# Complete the namespace first
# Now complete the command
# add the text up to the next tag
# </>
# calculate max width based on available commands per namespace
# The COMMAND event allows to attach listeners before any command
# is executed. It also allows the modification of the command and IO
# before it's handed to the command.
# The SIGNAL event allows some actions to be performed after
# the command execution is interrupted.
# The TERMINATE event allows listeners to be attached after the command
# is executed by the console.
# The ERROR event occurs when an uncaught exception is raised.
# This event gives the ability to deal with the exception or to modify
# the raised exception.
# Copyright 2019 The GNOME Music developers
# GNOME Music is free software; you can redistribute it and/or modify
# GNOME Music is distributed in the hope that it will be useful,
# You should have received a copy of the GNU General Public License along
# with GNOME Music; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# The GNOME Music authors hereby grant permission for non-GPL compatible
# GStreamer plugins to be used and distributed together with GStreamer
# and GNOME Music.  This permission is above and beyond the permissions
# granted by the GPL license by which GNOME Music is covered.  If you
# modify this code, you may extend this exception to your version of the
# code, but you are not obligated to do so.  If you do not wish to do so,
# delete this exception statement from your version.
# Copyright © 2018 The GNOME Music developers
# Copyright 2020 The GNOME Music developers
# directory does not exist yet
# File already exists.
# Copyright (c) 2013 Arnel A. Borja <kyoushuu@yahoo.com>
# Copyright (c) 2013 Vadim Rutkovsky <vrutkovs@redhat.com>
# Copyright (c) 2013 Lubosz Sarnecki <lubosz@gmail.com>
# Copyright (c) 2013 Guillaume Quintard <guillaume.quintard@gmail.com>
# Copyright (c) 2013 Felipe Borges <felipe10borges@gmail.com>
# Copyright (c) 2013 Eslam Mostafa <cseslam@gmail.com>
# Copyright (c) 2013 Shivani Poddar <shivani.poddar92@gmail.com>
# Copyright (c) 2013 Sai Suman Prayaga <suman.sai14@gmail.com>
# Copyright (c) 2013 Seif Lotfy <seif@lotfy.com>
# Order is important: CoreGrilo initializes the Grilo sources,
# which in turn uses CoreModel extensively.
# Translators: "shuffle" causes tracks to play in random order.
# The type checking is necessary to avoid false positives
# See: https://github.com/python/mypy/issues/1021
# Song is being processed or has already been processed.
# Nothing to do.
# In the case of gapless playback, both 'about-to-finish'
# and 'eos' can occur during the same stream. 'about-to-finish'
# already sets self._playlist to the next song, so doing it
# again on eos would skip a song.
# TODO: Improve playlist handling so this hack is no longer
# needed.
# After 'eos' in the gapless case, the pipeline needs to be
# hard reset.
# This is a special case for a song that is very short and the
# first song in the playlist. It can trigger gapless, but
# has_previous will return False.
# TODO: used by MPRIS
# FIXME: Just a proxy
# Copyright 2018 The GNOME Music developers
# Copyright (c) 2022 Bar Harel
# Licensed under the MIT license as detailed in LICENSE.txt
# Deque, Optional are required for supporting python versions 3.8, 3.9
# pragma: no cover # ABC
# No need to touch wakeup, as wakeup holds a strong reference and
# __del__ won't be called.
# Technically this should never happen, where there are waiters
# without a wakeup scheduled. Means there was a bug in the code.
# Error during initialization before _waiters exists.
# pragma: no cover # Technically a bug.
# Alert for the bug.
# Saving next wakeup and not this wakeup to account for fractions
# of rate passed. See leftover_time under _wakeup.
# Short circuit if there are no waiters
# We woke up early. Damn event loop!
# We have a negative leftover. Increase the next sleep!
# More than 1 tick early. Great success.
# Technically the higher the rate, the more likely the event loop
# should be late. If we came early on 2 ticks, that's really bad.
# We woke up too late!
# Missed wakeups can happen in case of heavy CPU-bound activity,
# or high event loop load.
# Check if we overflowed longer than a single call-time.
# Attempt to wake up only the missed wakeups and ones that were
# inserted while we missed the original wakeup.
# Might have been cancelled.
# All of the waiters were cancelled or we missed wakeups and we're out
# of waiters. Free to accept traffic.
# If we still have waiters, we need to schedule the next wakeup.
# If we're out of waiters we still need to wait before
# unlocking in case a new waiter comes in, as we just
# let a call through.
# There are no waiters if level is not == capacity.
# We can decrease without accounting for current level.
# We have no more waiters
# Copyright 2022 The GNOME Music Developers
# All views are created together, so if the album view is
# already initialized, assume the rest are as well.
# Ctrl + F: Open search bar
# Ctrl + Space: Play / Pause
# Ctrl + B: Previous
# Ctrl + N: Next
# Ctrl + R: Toggle repeat
# Ctrl + S: Toggle shuffle
# Alt + 1 : Switch to albums view
# Alt + 2 : Switch to artists view
# Alt + 3 : Switch to playlists view
# Close the search bar after Esc is pressed
# Open the search bar when typing printable chars.
# Copyright 2022 The GNOME Music developers
# Scale down the image according to the biggest axis
# Icon scale
# Copyright 2024 The GNOME Music developers
# SPDX-License-Identifier: GPL-2.0-or-later WITH GStreamer-exception-2008
# FIXME: How to set an IntEnum type?
# FIXME: Circular trigger, can probably be solved more neatly.
# If the search is active, it means that the search view is visible,
# so the player playlist is a list of songs from the search result.
# Otherwise, it's a list of songs from the songs view.
# This indicates that the file has not been created, so
# there is no art in the MediaArt cache.
# Music has two main cycling views (AlbumsView and ArtistsView),
# both have around 200 cycling items each when fully used. For
# the cache to be useful it needs to be larger than the given
# numbers combined.
# List slicing with 0 gives an empty list in
# _cache_cleanup.
# noqa: E226
# This error indicates that the coverart has already
# been linked by another concurrent lookup.
# Copyright 2020 The GNOME Music Developers
# Calling self._application.get_dbus_connection() seems to return None
# here, so get the bus directly from Gio.
# Query callback for _setup_local_miner_fs() to connect to session bus
# Open a local Tracker database.
# Version checks against the local version of Tracker can be done
# here, set `self._tracker_available = TrackerState.OUTDATED` if the
# checks fail.
# aboutwindow.py
# Copyright 2022 Christopher Davis <christopherdavis@gnome.org>
# This program is free software: you can redistribute it and/or modify
# the Free Software Foundation, either version 2 of the License, or
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
# SPDX-License-Identifier: GPL-2.0-or-later
# Translators should localize the following string which
# will be displayed at the bottom of the about box to give
# credit to the translator(s).
# Changing the state to NULL flushes the pipeline.
# Thus, the change message never arrives.
# Setter provided to trigger a property signal.
# For internal use only.
# We get the callback too soon, before the installation has
# actually finished. Do nothing for now.
# If the above failed, fall back to immediately starting the
# codec installation.
# TRANSLATORS: this is a button to launch a codec installer.
# {} will be replaced with the software installer's name, e.g.
# 'Software' in case of gnome-software.
# TRANSLATORS: separator for two codecs
# TRANSLATORS: separator for a list of codecs
# Smart Playlists do not have an id
# In repeat song mode, no metadata has changed if the
# player was already started
# Do not update the properties if the model has completely changed.
# These changes will be applied once a new song starts playing.
# Some clients (for example GSConnect) try to access the volume
# Copyright (c) 2016 Marinus Schraal <mschraal@src.gnome.org>
# FIXME: This and the later occurrence are user-facing strings,
# but they ideally should never be seen. A media should always
# contain a URL or we cannot play it; in that case it should
# FIXME: query_info is not async.
# Activate the Tracker plugin only when TrackerWrapper
# is available by listening to the tracker-available
# property, so skip it here.
# FIXME: No removal support yet.
# Aggregate sources being added, for example when the
# network comes online.
# FIXME: Handle removing sources.
# FIXME: Only removes search sources atm.
# See pygobject#114 for bytes conversion.
# Removal notifications contain two events for the same file.
# One event as a file uri and the other one as a
# nie:InformationElement. Only the nie:InformationElement event
# needs to be kept because it is saved in the hash.
# The Tracker indexed paths may differ from Music's paths.
# In that case Tracker will report it as 'changed', while
# it means 'added' to Music.
# Initialize the playlists subwrapper after the initial
# songs model fill, the playlists expect a filled songs
# hashtable.
# If a search does not change the number of items found,
# SearchView will not update without a signal.
# FIXME: Searches are limited to not bog down the UI with
# widget creation ({List,Flow}Box limitations). The limit is
# arbitrarily set to 50 and set in the Tracker query. It should
# be possible to set it through Grilo options instead. This
# does not work as expected and needs further investigation.
# Even though we check for the album_artist, we fill
# the artist key, since Grilo coverart plugins use
# only that key for retrieval.
# If there is no album and artist do not go through with the
# query, it will not give any results.
# TRANSLATORS: this is a playlist name
# TRANSLATORS: this is a playlist name indicating that the
# files are not tagged enough to be displayed in the albums
# or artists views.
# It is not necessary to bind all the CoreSong properties:
# selected property is linked to a view
# validation is a short-lived playability check for local songs
# There is no need for the "state" property to be bidirectional
# Unlike ListModel, MediaList starts counting from 1
# Set the item to be reordered to position 0 (unused in
# a MediaFileListEntry) and bump or drop the remaining items
# in between. Then set the initial item from 0 to position.
# FIXME: Workaround for adding the right list type to the proxy
# list model.
# Indicates whether the current list has been empty and has had
# no user interaction since.
# Copyright (c) 2016 The GNOME Music Developers
# FIXME: Just a bit of guesswork here.
# FIXME: Add an empty state.
# Copyright © 2018 The GNOME Music Developers
# FIXME: This is a workaround for not being able to pass the player
# object via init when using Gtk.Builder.
# FIXME: This is now duplicated here and in GrlTrackerWrapper.
# TRANSLATORS: This is a label to display a link to open
# a user's music folder. {} will be replaced with the
# translated text 'Music folder'
# Hack to get to AdwClamp, so it can be hidden for the
# initial state.
# In case the duration is no longer changing, make sure it is
# displayed.
# When the coreobject is an artist, the first song of the album
# needs to be loaded. Otherwise, the first album of the artist
# is played.
# Copyright (c) 2021 The GNOME Music Developers
# Copyright 2019 The GNOME Music Developers
# Copyright 2021 The GNOME Music Developers
# Scroll with keys, hence no smoothing.
# TRANSLATORS: These are verbs, to (un)mark something as a
# favorite.
# Copyright 2018 The GNOME Music Developers
# FIXME: This is a workaround for not being able to pass the application
# optional wide-character (CJK) support
# running __init__.py as a script, AppVeyor pytests
# minimum extra space in headers
# Whether or not to preserve leading/trailing whitespace in data.
# default align will be overwritten by "left", "center" or "decimal"
# depending on the formatter
# if True, enable wide-character (CJK) support
# Constant that can be used as part of passed rows to generate a separating line
# It is purposely an unprintable character, very unlikely to be used in a table
# A table structure is supposed to be:
# TableFormat's line* elements can be
# TableFormat's *row elements can be
# padding (an integer) is the amount of white space around data values.
# with_header_hide:
# e.g. printing an empty data frame (github issue #15)
# hard-coded padding _around_ align attribute and value together
# rather than padding parameter which affects only the value
# this table header will be suppressed if there is a header row
# it's a header row, create a new table header
# generate the column specifiers
# use the column widths generated by tabulate for the asciidoc column width specifiers
# generate the list of options (currently only "header")
# generate the list of entries in the table header field
# two arguments are passed if called in the context of aboveline
# print the table header with column widths and optional header tag
# three arguments are passed if called in the context of dataline or headerline
# print the table line and make the aboveline if it is a header
# The table formats for which multiline cells will be folded into subsequent
# table rows. The key is the original format specified at the API. The value is
# the format that will be used to represent the original format.
# TODO: Add multiline support for the remaining table formats:
# Handle ANSI escape sequences for both control sequence introducer (CSI) and
# operating system command (OSC). Both of these begin with 0x1b (or octal 033),
# which will be shown below as ESC.
# CSI ANSI escape codes have the following format, defined in section 5.4 of ECMA-48:
# CSI: ESC followed by the '[' character (0x5b)
# Parameter Bytes: 0..n bytes in the range 0x30-0x3f
# Intermediate Bytes: 0..n bytes in the range 0x20-0x2f
# Final Byte: a single byte in the range 0x40-0x7e
# Also include the terminal hyperlink sequences as described here:
# https://gist.github.com/egmontkob/eb114294efbcd5adb1944c9f3cb5feda
# OSC 8 ; params ; uri ST display_text OSC 8 ;; ST
# Example: \x1b]8;;https://example.com\x5ctext to show\x1b]8;;\x5c
# Where:
# OSC: ESC followed by the ']' character (0x5d)
# params: 0..n optional key value pairs separated by ':' (e.g. foo=bar:baz=qux:abc=123)
# URI: the actual URI with protocol scheme (e.g. https://, file://, ftp://)
# ST: ESC followed by the '\' character (0x5c)
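The CSI and OSC 8 grammars above can be collapsed into one regular expression. A minimal sketch; the pattern and function names here are illustrative assumptions, not tabulate's actual internals:

```python
import re

# CSI: ESC '[' + parameter bytes (0x30-0x3f) + intermediate bytes
# (0x20-0x2f) + one final byte (0x40-0x7e), per ECMA-48 section 5.4.
_CSI = r"\x1b\[[\x30-\x3f]*[\x20-\x2f]*[\x40-\x7e]"
# OSC 8 hyperlink: ESC ']' "8;" params ";" uri, terminated by ST (ESC '\').
_OSC8 = r"\x1b\]8;[^\x1b]*\x1b\\"
ANSI_RE = re.compile(f"{_CSI}|{_OSC8}")


def visible_text(s: str) -> str:
    """Strip CSI sequences and OSC 8 hyperlink wrappers from s."""
    return ANSI_RE.sub("", s)
```

Stripping these sequences first is what lets width calculations count only the characters a terminal will actually display.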
# datetime.datetime, date, and time
# no point
# not a number
# a bytestring
# optional wide-character support
# optional wide-character support if available
# TODO: refactor column alignment in single-line and multiline modes
# enable wide-character width corrections
# wcswidth and _visible_width don't count invisible characters;
# padfn doesn't need to apply another correction
# single-line cell values
# else: not multiline
# val is likely to be a numpy array with many elements
# numpy.ndarray, pandas.core.index.Index, ...
# dict-like and pandas.DataFrame?
# likely a conventional dict
# columns have to be transposed
# values is a property, has .index => it's likely a pandas.DataFrame (pandas 0.11.0)
# values matrix doesn't need to be transposed
# for DataFrames add an index per default
# headers should be strings
# it's a usual iterable of iterables, or a NumPy array, or an iterable of dataclasses
# an empty table (issue #81)
# numpy record array
# namedtuple
# dict-like object
# implements hashed lookup
# storage for set
# Save unique items in input order
# a dict of headers for a list of dicts
# Python Database API cursor object (PEP 0249)
# print tabulate(cursor, headers='keys')
# Python 3.7+'s dataclass
# keys are column indices
# take headers from the first row if necessary
# add or remove an index column
# pad with empty headers for initial columns if necessary
# Cast based on our internal type handling
# Any future custom formatting of types (such as datetimes)
# may need to be more explicit than just `str` of the object
# Expand scalar for all columns
# Ignore col width for any 'trailing' columns
# empty values in the first column of RST tables should be escaped (issue #82)
# "" should be escaped as "\\ " or ".."
# PrettyTable formatting does not use any extra padding.
# Numbers are not parsed and are treated the same as strings for alignment.
# Check if pretty is the format being used and override the defaults so it
# does not impact other formats.
# optimization: look for ANSI control codes once,
# enable smart width functions only if a control code is found
# convert the headers and rows into a single, tab-delimited string ensuring
# that any bytestrings are decoded safely (i.e. errors ignored)
# headers
# rows: chain the rows together into a single iterable after mapping
# the bytestring conversion to each cell value
# format rows and columns, convert numeric values to strings
# old version
# just duplicate the string to use in each column
# if floatfmt is list, tuple etc we have one per column
# if intfmt is list, tuple etc we have one per column
# align columns
# align headers and add headers
# NOTE: rowalign is ignored and exists for API compatibility with _append_multiline_row
# number of lines in the row
# vertically pad cells where some lines are missing
# cells_lines = [
# noqa do it later, in _append_multiline_row
# initial rows with a line below
# the last row without a line below
# test to see if either the 1st column or the 2nd column (account for showindex) has
# the SEPARATING_LINE flag
# a completely empty table
# For python2 compatibility
# Add color codes from earlier in the unwrapped line, and then track any new ones we add.
# A single reset code resets everything
# Always ensure each line is color-terminated if any colors are
# still active, otherwise colors will bleed into other cells on the console
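The color-carrying behaviour described here can be sketched as below; `colorize_lines` and its SGR bookkeeping are illustrative assumptions, not tabulate's actual helper:

```python
import re

SGR_RE = re.compile(r"\x1b\[[0-9;]*m")
RESET = "\x1b[0m"


def colorize_lines(lines):
    """Carry active SGR color codes across wrapped lines: re-open them
    at each line start and terminate each line with a reset so colors
    do not bleed into neighbouring cells."""
    active = []  # codes opened earlier in the unwrapped line, not yet reset
    out = []
    for line in lines:
        prefix = "".join(active)  # colors from earlier in the unwrapped line
        for code in SGR_RE.findall(line):
            if code == RESET:
                active = []  # a single reset code resets everything
            else:
                active.append(code)  # track any new codes we add
        out.append(prefix + line + (RESET if active else ""))
    return out
```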
# Tabulate Custom: Build the string up piece-by-piece in order to
# take each character's width into account
# file generated by setuptools_scm
# Copyright (C) 2011-2012 Johan Dahlin <johan@gnome.org>
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301
# USA
# bisect.py -- Git bisect algorithm implementation
# Copyright (C) 2025 Jelmer Vernooij <jelmer@jelmer.uk>
# Dulwich is dual-licensed under the Apache License, Version 2.0 and the GNU
# General Public License as published by the Free Software Foundation; version 2.0
# or (at your option) any later version. You can redistribute it and/or
# modify it under the terms of either of these two licenses.
# You should have received a copy of the licenses; if not, see
# <http://www.gnu.org/licenses/> for a copy of the GNU General Public License
# and <http://www.apache.org/licenses/LICENSE-2.0> for a copy of the Apache
# License, Version 2.0.
# Create bisect state directory
# Store current branch/commit
# No HEAD exists
# Use the first non-HEAD ref in the chain, or the SHA itself
# The actual branch ref
# Detached HEAD
# No HEAD exists - can't start bisect
# Write BISECT_START
# Write BISECT_TERMS
# Write BISECT_NAMES (paths)
# Initialize BISECT_LOG
# Mark bad commit if provided
# Mark good commits if provided
# Write bad ref
# Update log
# Write good ref
# Read original branch/commit
# Clean up bisect files
# Clean up refs/bisect directory
# Reset to target commit/branch
# It's a branch reference - need to create a symbolic ref
# It's a commit SHA
# Parse and execute commands from log
# Get bad commit
# Get all good commits
# Get skip commits
# Find commits between good and bad
# Bisect complete - the first bad commit is found
# Find midpoint
# Write BISECT_EXPECTED_REV
# Update status in log
# Use git's graph walking to find commits
# This is a simplified version - a full implementation would need
# to handle merge commits properly
# Don't include good commits
# Add parents to queue
# Remove the bad commit itself
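The midpoint selection that drives the bisect loop can be sketched as below; `pick_midpoint` is a simplified stand-in, not dulwich's actual implementation:

```python
def pick_midpoint(candidates):
    """Pick the commit to test next: the middle of the remaining
    candidate list, halving the search space each round (binary
    search over the commit range between good and bad)."""
    if not candidates:
        return None  # bisect complete: the first bad commit is found
    return candidates[len(candidates) // 2]
```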
# reflog.py -- Parsing and writing reflog files
# Copyright (C) 2015 Jelmer Vernooij and others.
# SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
# Use HEAD if no ref specified (e.g., "@{1}")
# String annotation to work around typing module bug in Python 3.9.0/3.9.1
# See: https://github.com/jelmer/dulwich/issues/1948
# Filter entries that should be kept
# Check if entry is reachable
# Apply expiration rules
# Check the appropriate expiration time based on reachability
# Write back the kept entries
# Get the ref name by removing the logs_dir prefix
# Convert path separators to / for refs
# archive.py -- Creating an archive from a tarball
# Copyright (C) 2015 Jonas Haag <jonas@lophus.org>
# Copyright (C) 2015 Jelmer Vernooij <jelmer@jelmer.uk>
# The tarfile.open overloads are complex; cast to Any to avoid issues
# Manually correct the gzip header file modification time so that
# archives created from the same Git tree are always identical.
# The gzip header file modification time is not currently
# accessible from the tarfile API, see:
# https://bugs.python.org/issue31526
# Entry probably refers to a submodule, which we don't yet
# Fallback for objects without chunked attribute
# tarfile only works with ASCII.
# __init__.py -- The git module of dulwich
# Copyright (C) 2007 James Westby <jw+debian@jameswestby.net>
# Copyright (C) 2008 Jelmer Vernooij <jelmer@jelmer.uk>
# if dissolve is not installed, then just provide a basic implementation
# of its replace_me decorator
# lfs.py -- Implementation of the LFS
# Copyright (C) 2020 Jelmer Vernooij
# First pass: compute SHA256 and collect data
# If object already exists, no need to write
# Object doesn't exist, write it
# Handle concurrent writes - if file already exists, just remove temp file
# LFS pointer files have a specific format
# Must start with version
# Size must be non-negative
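The pointer-format checks can be sketched as an illustrative parser; this is not dulwich's API, just the rules listed above applied to the standard git-lfs pointer layout:

```python
def parse_lfs_pointer(data: bytes):
    """Parse a Git LFS pointer file: the first line must be the
    version line, followed by 'oid sha256:<hex>' and a non-negative
    'size <n>'. Returns (oid, size) or raises ValueError."""
    lines = data.decode("utf-8").splitlines()
    if not lines or not lines[0].startswith("version "):
        raise ValueError("pointer must start with a version line")
    fields = dict(line.split(" ", 1) for line in lines[1:] if line)
    oid = fields["oid"].removeprefix("sha256:")
    size = int(fields["size"])
    if size < 0:
        raise ValueError("size must be non-negative")
    return oid, size
```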
# Check if data is already an LFS pointer
# Store the file content in LFS
# Create and return LFS pointer
# Try to parse as LFS pointer
# Not an LFS pointer, return as-is
# Validate the pointer
# Read the actual content from LFS store
# Object not found in LFS store, try to download it
# Download failed, fall back to returning pointer
# Return pointer as-is when object is missing and download failed
# Create LFS client and download
# Store the downloaded content in local LFS store
# Verify the stored OID matches what we expected
# LFSFilterDriver doesn't hold any resources that need cleanup
# LFSFilterDriver is stateless and lightweight, no need to cache
# Use configured user agent verbatim if set
# Default LFS user agent (similar to git-lfs format)
# Must have a scheme
# Only support http, https, and file schemes
# http/https require a hostname
# file:// URLs must have a path (netloc is typically empty)
# Ensure trailing slash for urljoin
# Try to get LFS URL from config first
# Validate explicitly configured URL - raise error if invalid
# Return appropriate client based on scheme
# This shouldn't happen if _is_valid_lfs_url works correctly
# Fall back to deriving from remote URL (same as git-lfs)
# Convert SSH URLs to HTTPS if needed
# Convert git@host:user/repo.git to https://host/user/repo.git
# Remove "git@"
# Ensure URL ends with .git for consistent LFS endpoint
# Standard LFS endpoint is remote_url + "/info/lfs"
# Return None if derived URL is invalid (LFS is optional)
# Derived URLs are always http/https
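The endpoint derivation can be sketched as below; a simplified stand-in for the logic described above (the real code also validates schemes and handles invalid results by returning None):

```python
def derive_lfs_endpoint(remote_url: str) -> str:
    """Derive the default LFS endpoint from a remote URL, mirroring
    git-lfs: convert scp-style SSH remotes to HTTPS, ensure a .git
    suffix, then append /info/lfs."""
    if remote_url.startswith("git@"):
        # git@host:user/repo.git -> https://host/user/repo.git
        host, _, path = remote_url[len("git@"):].partition(":")
        remote_url = f"https://{host}/{path}"
    if not remote_url.endswith(".git"):
        # Ensure .git for a consistent LFS endpoint
        remote_url += ".git"
    return remote_url + "/info/lfs"
```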
# Use urllib3 pool manager with git config applied
# Get download URL via batch API
# Download the object using urllib3 with git config
# Verify size
# Verify SHA256
# Get upload URL via batch API
# If no actions, object already exists
# Upload the object
# Verify if needed
# Convert file:// URL to filesystem path
# url2pathname handles the conversion properly across platforms
# Store the object
# worktree.py -- Working tree operations for Git repositories
# Copyright (C) 2024 Jelmer Vernooij <jelmer@jelmer.uk>
# File no longer exists
# already removed
# no HEAD means no commits in the repo
# if tree_entry didn't exist, this file was being added, so
# remove index entry
# no hook defined, silent fallthrough
# FIXME: Support GIT_COMMITTER_DATE environment variable
# FIXME: Use current user timezone rather than UTC
# FIXME: Support GIT_AUTHOR_DATE environment variable
# No dice
# Store original message (might be callable)
# Will be set later after parents are set
# Check if we should sign the commit
# Check commit.gpgSign configuration when sign is not explicitly set
# Default to not signing if no config
# Get the signing key from config if signing is enabled
# Create a dangling commit
# Handle message after parents are set
# FIXME: Try to read commit message from .git/MERGE_MSG
# no hook defined, message not modified
# Fail if the atomic compare-and-swap failed, leaving the
# commit and all its objects as garbage.
# silent failure
# Trigger auto GC if needed
# type: ignore[misc,unused-ignore]
# type: ignore[arg-type,unused-ignore]
# If core.sparseCheckoutCone is not set, default to False
# Add main worktree
# Get branch info for main worktree
# List additional worktrees
# Will be set below
# Read gitdir to get actual worktree path
# Convert relative path to absolute if needed
# Remove .git suffix
# Worktree directory is missing, skip it
# TODO: Consider adding these as prunable worktrees with a placeholder path
# Check if worktree path exists
# Read HEAD
# Resolve ref to get commit sha
# Check if locked
# Check if path already exists
# Normalize branch name
# Check if branch is already checked out in another worktree
# Determine what to checkout
# Check if branch exists
# Create new branch from HEAD
# Create new branch from specified commit
# Default to current HEAD
# Create the worktree directory
# Initialize the worktree
# Set HEAD appropriately
# Detached HEAD - write SHA directly to HEAD
# Point to branch
# Should be guaranteed by logic above
# Reset index to match HEAD
# Don't allow removing the main worktree
# Find the worktree
# Should be set if worktree_found is True
# Check for local changes if not forcing
# TODO: Check for uncommitted changes in the worktree
# Remove the working directory
# Remove the administrative files
# Skip locked worktrees
# Check if gitdir exists and points to valid location
# Check expiry time if specified
# Don't allow moving the main worktree
# Check if new path already exists
# Move the actual worktree directory
# Update the gitdir file in the worktree
# Update the gitdir pointer in the control directory
# Repair specific worktrees
# Check if this is a linked worktree
# Read the .git file to get the worktree control directory
# Make the path absolute if it's relative
# Update the gitdir file in the worktree control directory
# Update to point to the current location
# Repair from main repository to all linked worktrees
# Read the gitdir file to find where the worktree thinks it is
# Can't repair if we can't read the gitdir file
# Get the worktree directory (remove .git suffix)
# Check if the .git file exists at the old location
# Try to read and update the .git file to ensure it points back correctly
# If it doesn't point to the right place, fix it
# Update the .git file to point to the correct location
# Create temporary directory
# Add worktree
# Clean up worktree registration
# Clean up temporary directory
# hooks.py -- for dealing with git hooks
# Copyright (C) 2012-2013 Jelmer Vernooij and others.
# noqa: ANN401
# no file. silent failure.
# do nothing if the script doesn't exist
# client_refs is a list of (oldsha, newsha, ref)
# sparse_patterns.py -- Sparse checkout pattern handling.
# Copyright (C) 2013 Jelmer Vernooij <jelmer@jelmer.uk>
# For .gitignore logic, match_gitignore_patterns returns True if 'included'
# strip leading '/' and trailing '/'
# Check if this is top-level (no slash) or which top_dir it belongs to
# top-level file
# subdirs are excluded unless they appear in reinclude_dirs
# if we never set exclude_subdirs, we might include everything by default
# or handle partial subdir logic. For now, let's assume everything is included
# 1) Update skip-worktree bits
# Skip conflicted entries
# 2) Reflect changes in the working tree
# Excluded => remove if safe
# Included => materialize if missing
# Apply checkout normalization if normalizer is available
# ignore comments and blank lines
# remove leading '!'
# remove leading '/'
# If pattern ends with '/', we consider it directory-only
# (like "docs/"). Real Git might treat it slightly differently,
# but we'll simplify and mark it as "dir_only" if it ends in "/".
# Start by assuming "excluded": a .gitignore includes everything
# until a pattern matches, but for sparse-checkout we often treat
# unmatched paths as "excluded". We flip if we match an "include"
# pattern.
# If dir_only is True and path_is_dir is False, we skip matching
# root subpath (anchored or unanchored)
# unanchored subpath
# If anchored is True, pattern should match from the start of path_str.
# If not anchored, we can match anywhere.
# We match from the beginning. For example, pattern = "docs"
# path_str = "docs/readme.md" -> start is "docs"
# We'll just do a prefix check or prefix + slash check
# Or you can do a partial fnmatch. We'll do a manual approach:
# Means it was just "/", which can happen if line was "/"
# That might represent top-level only?
# We'll skip for simplicity or treat it as a special case.
# Not anchored: we can do a simple wildcard match or a substring match.
# For simplicity, let's use Python's fnmatch:
# If negation is True, that means 'exclude'. If negation is False, 'include'.
# The last matching pattern overrides, so we continue checking until the end.
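The last-match-wins rule can be sketched with `fnmatch`; this is a simplified model of the matcher described above, not dulwich's exact implementation:

```python
from fnmatch import fnmatch


def sparse_included(path: str, patterns) -> bool:
    """Decide inclusion under a simplified sparse-checkout model.
    Patterns are (pattern, negation) pairs with leading '!' already
    stripped; negation=True means 'exclude'. Unmatched paths start
    as excluded, and the LAST matching pattern overrides."""
    included = False
    for pattern, negation in patterns:
        anchored = pattern.startswith("/")
        pat = pattern.lstrip("/")
        if anchored:
            # match from the start: exact, prefix + slash, or wildcard
            hit = (path == pat or path.startswith(pat + "/")
                   or fnmatch(path, pat))
        else:
            # not anchored: match anywhere along the path
            hit = fnmatch(path, pat) or any(
                fnmatch(part, pat) for part in path.split("/"))
        if hit:
            included = not negation  # last matching pattern wins
    return included
```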
# refs.py -- For dealing with git refs
# Copyright (C) 2008-2013 Jelmer Vernooij <jelmer@jelmer.uk>
# For backwards compatibility
# These could be combined into one big expression, but are listed
# separately to parallel [1].
# Remove the prefix
# Split into remote name and branch name
# Use ZERO_SHA for None values, matching git behavior
# Unable to resolve
# Only update the specific ref requested, not the whole chain
# TODO(dborowitz): replace this with a public function that uses
# set_if_equal.
# Convert path-like objects to strings, then to bytes for Git compatibility
# Iterate through all the refs from the main worktree
# Iterate through all the shared refs from the commondir, excluding per-worktree refs
# Iterate through all the per-worktree refs from the worktree's gitdir
# TODO: invalidate the cache on repacking
# set both to empty because we want _peeled_refs to be
# None if and only if _packed_refs is also None.
# reread cached refs from disk, while holding the lock
# remove any loose refs pointing to this one -- please
# note that this bypasses remove_if_equals as we don't
# want to affect packed refs in here
# No cache: no peeled refs were read, or this ref is loose
# Known not peelable
# Read only the first line
# Read only the first 40 bytes
# don't assume anything specific about the error; in
# particular, invalid or forbidden paths can raise weird
# errors depending on the specific operating system
# make sure none of the ancestor folders is in packed refs
# read again while holding the lock to handle race conditions
# Check if ref already has the desired value while holding the lock
# This avoids fsync when ref is unchanged but still detects lock conflicts
# Ref already has desired value, abort write to avoid fsync
# remove the reference file itself
# may only be packed, or otherwise unstorable
# never write, we just wanted the lock
# outside of the lock, clean-up any parent directory that might now
# be empty. this ensures that re-creating a reference of the same
# name of what was previously a directory works as expected
# this can be caused by the parent directory being
# removed by another process, being not empty, etc.
# in any case, this is non-fatal because we already
# removed the reference, so just ignore it
# Never pack HEAD
# Broken ref, skip it
# TODO: Avoid recursive import :(
# get_refs() includes HEAD as a special case, but we don't want to
# advertise it
# Only add to peeled dict if sha is not None
# set refs/remotes/origin/HEAD
# detach HEAD at specified tag
# set HEAD to specific branch
# Delete the actual ref file while holding the lock
# repo.py -- For dealing with git repositories.
# There are no circular imports here, but we try to defer imports as long
# as possible to reduce start-up time for anything that doesn't need
# these imports.
# TODO(jelmer): Cache?
# type: ignore[attr-defined,unused-ignore]
# Could raise or log `ctypes.WinError()` here
# Could implement other platform specific filesystem hiding here
# Get commit graph once at initialization for performance
# Try to use commit graph for faster parent lookup
# Fallback to reading the commit object
# For now, just mimic the old behaviour
# Only update if graph_walker has shallow attribute
# TODO(dborowitz): find a way to short-circuit that doesn't change
# this interface.
# Do not send a pack in shallow short-circuit path
# Return an actual MissingObjectFinder with empty wants
# If the graph walker is set up with an implementation that can
# ACK/NAK to the wire, it will write data to the client through
# this call as a side-effect.
# Deal with shallow requests separately because the haves do
# not reflect what objects are missing
# TODO: filter the haves commits from iter_shas. the specific
# commits aren't missing.
# TODO: move this method to WorkTree
# Pass all arguments to Walker explicitly to avoid type issues with **kwargs
# Simple validation
# Initialize refs early so they're available for config condition matchers
# Initialize worktrees container
# Track extensions we encounter
# Use reftable if extension is configured
# Update worktrees container after refs change
# Initialize filter context as None, will be created lazily
# Root reached
# TODO(jelmer): Actually probe disk / look at filesystem
# TODO(dborowitz): sanitize filenames, since this is used directly by
# the dumb web serving code.
# Check for manyFiles feature configuration
# When feature.manyFiles is enabled, set index.version=4 and index.skipHash=true
# Default to version 4 for manyFiles
# Check for explicit index settings
# Bare repos must never have index files; non-bare repos may have a
# missing index file, which is treated as empty.
# set detached HEAD
# Update target head
# Add gitdir matchers
# Handle relative patterns (starting with ./)
# Can't handle relative patterns without config directory context
# Normalize repository path
# Expand ~ in pattern and normalize
# Normalize pattern following Git's rules
# Check for Windows absolute path
# Use the existing _match_gitdir_pattern function
# Add onbranch matcher
# Get the current branch using refs
# Get the final resolved ref
# Extract branch name from ref
# Pass condition matchers for includeIf evaluation
# Ensure we use absolute path for the worktree control directory
# Clean up filter context if it was created
# Unset attribute
# Set to value
# Set attribute
# Get fresh configuration and GitAttributes
# Lazily create FilterContext if needed
# Refresh the context with current config to handle config changes
# Return a new FilterBlobNormalizer with the context
# Read system gitattributes (TODO: implement this)
# Read global gitattributes (TODO: implement this)
# Read repository .gitattributes from index/tree
# Try to get from HEAD
# No HEAD, no attributes from tree
# Read .git/info/attributes
# Read .gitattributes from working directory (if it exists)
# Memory repos don't have working trees or gitattributes files
# Return empty GitAttributes
# Handle message (for MemoryRepo, we don't support callable messages)
# log_utils.py -- Logging utilities for Dulwich
# Copyright (C) 2010 Google, Inc.
# stderr
# Check if it's a file descriptor (integer 3-9)
# If it's an absolute path, return it as a string
# For any other value, treat it as disabled
# File descriptor
# File path
# For directories, create a file per process
# Try to configure from GIT_TRACE, fall back to default if it fails
# Git's broken format: split into two bytes that add up to the value
# Single byte case - most common path
# Two byte case - handle missing second byte
# Multi-byte varint case - delegate to proper decoder
# Two-byte case: choose between Git's format and standard LEB128
# Use Git's format if it produces reasonable values
# Fall back to standard LEB128
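The standard-LEB128 fallback mentioned here can be sketched as below; note this is plain LEB128 for illustration, while Git's own reftable varint flavour differs as described above:

```python
def leb128_encode(n: int) -> bytes:
    """Unsigned LEB128: 7 bits per byte, least-significant group
    first, high bit set on every byte except the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit
        else:
            out.append(byte)
            return bytes(out)


def leb128_decode(data: bytes, offset: int = 0):
    """Decode one LEB128 varint; returns (value, bytes consumed)."""
    value = shift = 0
    for consumed, byte in enumerate(data[offset:], start=1):
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, consumed
        shift += 7
    raise ValueError("truncated varint")
```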
# Reftable magic bytes
# Reftable version
# Constants - extracted magic numbers
# 28 - version 2 header size
# SHA1 and size constants
# SHA1 is 20 bytes
# SHA1 as hex string is 40 characters
# Peeled ref is 80 hex characters (2 SHA1s)
# Validation limits
# Block types
# Value types for ref records (matching Git's format)
# one object name
# two object names (ref + peeled)
# symbolic reference
# Convert hex string to binary if needed
# Convert hex strings to binary if needed
# varint(target_len) + target
# Convert to hex string for interface compatibility
# Encode according to Git format:
# varint(prefix_length)
# varint((suffix_length << 3) | value_type)
# suffix
# varint(update_index_delta)
# value?
# Calculate update_index_delta
# Encode value based on type
# Read prefix length
# Read combined suffix_length and value_type
# Read suffix
# Read update_index_delta
# Read value based on type
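The record layout above can be sketched as an encoder; LEB128 varints are used here purely for illustration (Git's reftable varint flavour differs), and the function names are assumptions:

```python
def encode_varint(n: int) -> bytes:
    # LEB128 for illustration only.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)


def encode_ref_record(prev_name: bytes, name: bytes, value_type: int,
                      update_index_delta: int, value: bytes) -> bytes:
    """Encode one ref record with prefix compression against the
    previous refname, following the layout described above:
    varint(prefix_len) + varint((suffix_len << 3) | value_type)
    + suffix + varint(update_index_delta) + value."""
    prefix = 0
    limit = min(len(prev_name), len(name))
    while prefix < limit and prev_name[prefix] == name[prefix]:
        prefix += 1
    suffix = name[prefix:]
    return (encode_varint(prefix)
            + encode_varint((len(suffix) << 3) | value_type)
            + suffix
            + encode_varint(update_index_delta)
            + value)
```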
# Git expects compressed empty data for new repos
# Single ref pattern
# Multiple refs pattern
# first restart offset (always 0)
# Special offset calculation: second_ref_offset + EMBEDDED_FOOTER_MARKER
# padding byte
# restart count (Git always uses 2)
# Add header copy
# Create a minimal header copy
# version 1, block size 4096
# min/max update indices
# Final padding
# Sort refs by name
# Encode refs with prefix compression
# Record offset for restart points
# Git records offsets starting from second ref
# Git uses embedded footer format for ref blocks
# Look for embedded footer marker pattern
# Fallback: use most of the data, leaving room for footer
# Find where ref data ends (before embedded footer)
# Save position to check for progress
# Check if we have enough data for a minimal record
# Need at least prefix_len + suffix_and_type + some data
# Skip metadata records (empty refnames are Git's internal index records)
# Only add non-empty refnames
# If we can't decode a record, we might have hit padding or invalid data
# Stop parsing rather than raising an error
# Ensure we made progress to avoid infinite loops
# Track insertion order for update indices
# Track written data for CRC calculation
# Track if this is a batch operation
# HEAD should always be first (like git does)
# Update existing ref (e.g., if HEAD was auto-created and now explicitly set)
# Git always creates HEAD -> refs/heads/master by default
# Skip recalculation if max_update_index was already set higher than default
# This preserves Git's behavior for symbolic-ref operations
# Also skip for batch operations where min == max by design
# Calculate max_update_index based on actual number of refs
# Write ref blocks (includes embedded footer)
# Git embeds the footer within the ref block for small files,
# so we only need to add final padding and CRC
# Magic bytes
# Version + block size (4 bytes total, big-endian network order)
# Format: uint8(version) + uint24(block_size)
# Min/max update index (timestamps) (big-endian network order)
# Git uses increasing sequence numbers for update indices
# min_update_index
# max_update_index
# Store header for footer
# All refs get same index in batch
# Use provided indices
# Special case for single HEAD symbolic ref
# Sequential indices
# Only write block if we have refs
# Write refs in insertion order to preserve update indices like Git
# Get update indices for all refs
# Add refs to block with their update indices
# Generate block data (may use embedded footer for small blocks)
# Write block type
# Write block length (3 bytes, big-endian network order)
# Write block data
# Git writes exactly 40 bytes after the ref block (which includes embedded footer)
# This is 36 bytes of zeros followed by 4-byte CRC
# Calculate CRC over the last 64 bytes before CRC position
# Collect all data written so far
# CRC is calculated over the 64 bytes before the CRC itself
# Pad with zeros if file is too small
# Read magic bytes
# Read version + block size (4 bytes total, big-endian network order)
# First byte
# Last 3 bytes
# Read min/max update index (big-endian network order)
# Read block type
# Read block length (3 bytes, big-endian network order)
# Convert 3-byte big-endian to int
# Read block data
# TODO: Handle other block types
# Stop if we encounter footer header copy
# Likely parsing footer as block
# Store all refs including deletion records - deletion handling is done at container level
# Only flush if no exception occurred
# Normalize path to string
# Buffer for batching ref updates
# Track chronological update index for each ref
# Track whether we're in batch mode
# Create refs/heads marker file for Git compatibility
# If refs/heads is a directory, remove it
# Create marker file if it doesn't exist
# Read header to get min_update_index
# Read the ref block to get individual update indices
# Parse the block to get refs with their update indices
# Store the update index for each ref
# This preserves the chronological order
# Find the highest update index used so far by reading all tables
# No existing tables, but Git starts with HEAD at index 1
# Our first operations start at index 2
# First, read all tables and sort them by min_update_index
# Sort by min_update_index to ensure chronological order
# Merge results in chronological order
# Apply updates from this table
# Remove ref if it exists
# Add/update ref
# For symbolic refs, also include their targets as implicit refs
# Add the target ref as an implicit ref
# First SHA1 hex chars
# Unknown value type
# Too many levels of indirection
# Return in Git format: "ref: <target>"
# Return the first SHA (not the peeled one)
# Return the peeled SHA (second 40 hex chars)
# Known not to be peeled
# Symbolic ref or other - no peeled info
# For now, implement a simple non-atomic version
# TODO: Implement proper atomic compare-and-swap
# Delete ref
# Update ref
# Ref exists
# Ref doesn't exist, continue
# Check if we're in batch mode - if so, buffer for later
# Buffer the update for later batching using RefUpdate objects
# Write immediately like Git does - one file per update
# Git uses max_update_index = min + 1 for single ref updates
# Don't auto-create HEAD - let it be set explicitly
# Add the requested ref
# First, load the current update indices for all refs
# Read all existing refs to create complete consolidated view
# Process pending updates
# Get next update index - all refs in batch get the SAME index
# Apply updates to get final state
# Write consolidated batch file
# Update tables list with new files (don't compact, keep separate)
# Remove old files and update tables.list
# Remove old .ref files
# Write new tables.list with separate files
# Sort for deterministic order
# Process all RefUpdate objects
# Resolve symref chain like Git does (only for HEAD)
# Regular ref, symref, or deletion
# Follow the chain up to 5 levels deep (Git's limit)
# Circular reference, return current
# Check if current target is also a symref in pending updates
# Not a symref, this is the final target
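A minimal sketch of the symref-chain resolution described above, assuming a plain dict of refs in which symrefs are stored as `"ref: <target>"` strings; the 5-level depth cap matches Git's limit mentioned above:

```python
def resolve_symref_chain(refs, name, max_depth=5):
    """Follow a symbolic-ref chain up to max_depth levels (Git's limit).

    refs maps ref names to either "ref: <target>" strings or SHAs.
    Hypothetical, simplified representation for illustration.
    """
    seen = {name}
    current = refs.get(name)
    for _ in range(max_depth):
        if current is None or not current.startswith("ref: "):
            return current  # not a symref; this is the final target
        target = current[len("ref: "):]
        if target in seen:
            return current  # circular reference, return current
        seen.add(target)
        current = refs.get(target)
    return current
```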
# Process all updates and assign the SAME update index to all refs in batch
# Deletion still gets recorded with the batch index
# All refs in batch get the same update index
# Process HEAD update with same batch index
# All refs in batch have same update index
# Add all refs in sorted order
# Git typically puts HEAD first if present
# Pass the update indices to the writer
# Remove old .ref files (Git's compaction behavior)
# Write new tables.list with just the consolidated file
# Get all .ref files in the directory
# Sort by name (which includes timestamp)
# Write to tables.list
# Copyright (C) 2017 Jelmer Vernooij <jelmer@jelmer.uk>
# Find the last negation pattern
# Check if any exclusion pattern excludes a parent directory
# Handle **/middle/** patterns
# Handle dir/** patterns
# Special case: dir/** allows immediate child file negations
# Nested files with ** negation patterns
# Directory patterns (ending with /) can exclude parent directories
# Check if ** is at end
# ** at end - matches everything
# Check if next segment is also **
# Consecutive ** segments
# Check if this ends with a directory pattern (trailing /)
# Pattern like c/**/**/ - requires at least one intermediate directory
# Pattern like c/**/**/d - allows zero intermediate directories
# ** in middle - handle differently depending on what follows
# ** at start - any prefix
# ** in middle - match zero or more complete directory segments
# Leading /** is same as **
# Leading **/
# Leading / means relative to .gitignore location
# Check for invalid patterns with // - Git treats these as broken patterns
# Pattern with // doesn't match anything in Git
# Negative lookahead - matches nothing
# Don't normalize consecutive ** patterns - Git treats them specially
# c/**/**/ requires at least one intermediate directory
# So we keep the pattern as-is
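A simplified sketch of the `**` translation discussed above, covering only literal segments, `*`, and `**`; the real matcher handles many more cases (anchoring, negation, `//` patterns, trailing slashes, consecutive `**`):

```python
import re

def translate_pattern(pattern: str) -> str:
    """Translate a gitignore-style pattern into a regex (sketch only).

    Handles only literal segments, '*', and '**'.
    """
    parts = pattern.split("/")
    out = []
    for i, seg in enumerate(parts):
        if seg == "**":
            if i == len(parts) - 1:
                out.append("(?:.*)?")  # ** at end matches everything
            else:
                out.append("(?:[^/]+/)*")  # zero or more directory segments
        else:
            # '*' matches within a single path segment only
            out.append(re.escape(seg).replace(r"\*", "[^/]*"))
            if i != len(parts) - 1:
                out.append("/")
    return "^" + "".join(out) + "$"
```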
# Handle patterns with no slashes (match at any level)
# No slash except possibly at end
# Handle leading patterns
# Process the rest of the pattern
# Add slash separator (except for first segment)
# End of pattern
# Add optional trailing slash for files
# Ignore blank lines, they're used for readability.
# Trailing spaces are ignored unless they are quoted with a backslash.
# Handle negation
# Handle escaping of ! and # at start only
# Check if this is a directory-only pattern
# For negation directory patterns (e.g., !dir/), only match directories
# Check if the regex matches
# For exclusion directory patterns, also match files under the directory
# Basic rule: last matching pattern wins
# Apply Git's parent directory exclusion rule for negations
# Only applies to inclusions (negations)
# On Windows, opening a path that contains a symlink can fail with
# errno 22 (Invalid argument) when the symlink points outside the repo
# Paths leading up to the final part are all directories,
# so need a trailing slash.
# Standard behavior - last matching pattern wins
# Only check if we would include due to negation
# Apply special case for issue #1203: directory traversal with ** patterns
# Original logic for traversal check
# Check if subdirectories would be unignored
# Use standard logic for test case - last matching pattern wins
# Keep original result
# walk.py -- General implementation of walking commits and their contents.
# Maximum number of commits to walk past a commit time boundary.
# For merge commits, we need to handle multiple parents differently
# Use a lambda to adapt the signature
# TODO(jelmer): What to do about non-Commit and non-Tag objects?
# TODO: This is inefficient unless the object store does
# some caching (which DiskObjectStore currently does not).
# We could either add caching in this class or pass around
# parsed queue entry objects instead of commits.
# If the next commit is newer than the last one, we
# need to keep walking in case its parents (which we
# may not have seen yet) are excluded. This gives the
# excluded set a chance to "catch up" while the commit
# is still in the Walker's output queue.
# We want to stop walking at min_time, but commits at the
# boundary may be out of order with respect to their parents.
# So we walk _MAX_EXTRA_COMMITS more commits once we hit this
# boundary.
# We're not at a boundary, so reset the counter.
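The boundary logic above can be illustrated with a small helper; `_MAX_EXTRA_COMMITS` is given a hypothetical value here, and the real constant may differ:

```python
_MAX_EXTRA_COMMITS = 5  # hypothetical value standing in for the module constant

def should_stop(commit_time, min_time, extra_commits_seen):
    """Decide whether to stop walking past a commit-time boundary.

    Once commits at or older than min_time appear, walk up to
    _MAX_EXTRA_COMMITS more in case out-of-order parents follow.
    Returns (stop, new_extra_count). Illustration only.
    """
    if commit_time > min_time:
        return False, 0  # not at a boundary, so reset the counter
    extra_commits_seen += 1
    return extra_commits_seen > _MAX_EXTRA_COMMITS, extra_commits_seen
```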
# Note: when adding arguments to this method, please also update
# dulwich.repo.BaseRepo.get_walker
# TODO(jelmer): Really, this should require a single type.
# Print deprecation warning here?
# For merge commits, changes() returns list[list[TreeChange]]
# For merge commits, only include changes with conflicts for
# this path. Since a rename conflict may include different
# old.paths, we have to check all of them.
# Handle both list[TreeChange] and list[list[TreeChange]]
# It's list[list[TreeChange]], flatten it
# It's list[TreeChange]
# file.py -- Safe access to git files
# Defer the tempfile import since it pulls in a lot of other things.
# destination file exists
# Convert PathLike to str/bytes for our internal use
# The file may have been removed already, which is ok.
# Windows versions prior to Vista don't support atomic
# renames
# Implement IO[bytes] methods by delegating to the underlying file
# TODO: Remove type: ignore when Python 3.10 support is dropped (Oct 2026)
# Python 3.9/3.10 have issues with IO[bytes] overload signatures
# type: ignore[override,unused-ignore]
# porcelain.py -- Porcelain-like layer on top of Dulwich
# Module level tuple definition for status output
# TypeVar for preserving BaseRepo subclass types
# Type alias for common repository parameter pattern
# All Buffer implementations (bytes, bytearray, memoryview) support len()
# Git internal format
# RFC 2822
# ISO 8601
# Supported offsets:
# sHHMM, sHH:MM, sHH
# YYYY.MM.DD, MM/DD/YYYY, DD.MM.YYYY contain no timezone information
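The supported offset forms listed above (sHHMM, sHH:MM, sHH) can be parsed with a single regex; `parse_tz_offset` is a hypothetical name for illustration:

```python
import re

def parse_tz_offset(text: str) -> int:
    """Parse an offset of the form sHHMM, sHH:MM or sHH into seconds.

    's' is the sign (+ or -). Sketch of the supported formats only.
    """
    m = re.fullmatch(r"([+-])(\d{2}):?(\d{2})?", text)
    if m is None:
        raise ValueError(f"unsupported offset: {text!r}")
    sign = 1 if m.group(1) == "+" else -1
    hours = int(m.group(2))
    minutes = int(m.group(3) or 0)
    return sign * (hours * 3600 + minutes * 60)
```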
# Resolve might return a relative path on Windows
# https://bugs.python.org/issue38671
# Convert bytes paths to str for Path
# Resolve and abspath seem to behave differently regarding symlinks;
# since we are doing abspath on the file path, we need to do the same
# on the repo path or they might not match
# If path is a symlink that points to a file outside the repo, we
# want the relpath for the link itself, not the resolved target
# Get all refs
# Define callbacks for each logical variable
# Falls back to GIT_EDITOR if not set
# Git's default is "master"
# Dictionary mapping variable names to their getter callbacks
# Build the variables dictionary by calling callbacks
# Handle amend logic
# If message not provided, use the message from the current HEAD
# If author not provided, use the author from the current HEAD
# Use the parent(s) of the current HEAD as our parent(s)
# If -a flag is used, stage all modified tracked files
# Create a wrapper that handles the bytes -> Blob conversion
# Convert bytes paths to strings for add function
# For amend, create dangling commit to avoid adding current HEAD as parent
# Update HEAD to point to the new commit
# TODO(jelmer): Capture logging output and stream to errstream
# For direct repo cloning, use LocalGitClient
# Convert PathLike to str
# Convert bytes to str
# Initialize and update submodules if requested
# .gitmodules file doesn't exist - no submodules to process
# Submodule configuration missing
# Get unstaged changes once for the entire operation
# Check if core.preloadIndex is enabled
# When no paths specified, add all untracked and modified files from repo root
# Handle bytes paths by decoding them
# Make relative paths relative to the repo directory
# Don't resolve symlinks completely - only resolve the parent directory
# to avoid issues when symlinks point outside the repository
# For symlinks, resolve only the parent directory
# For regular files/dirs, resolve normally
# Path is not within the repository
# Handle directories by scanning their contents
# Check if the directory itself is ignored
# When adding a directory, add all untracked files within it
# If we're scanning a subdirectory, adjust the path
# Also add unstaged (modified) files within this directory
# Check if this unstaged file is within the directory we're processing
# File is within this directory, add it
# File is not within this directory, skip it
# FIXME: Support patterns
# TODO: option to remove ignored files also, in line with `git clean -fdx`
# TODO(jelmer): if require_force is set, then make sure that -f, -i or
# -n is specified.
# Reverse file visit order, so that files and subdirectories are
# removed before their containing directory
# target_dir and r.path are both str, so ap must be str
# All subdirectories and files have been removed if untracked,
# so dir contains no tracked files iff it is empty.
# If path is absolute, use it as-is. Otherwise, treat it as relative to repo
# Treat relative paths as relative to the repository root
# Convert to bytes for file operations
# Apply checkin normalization to compare apples to apples
# Handle paths - convert to string if necessary
# Get full paths
# Check if destination is a directory
# Move source into destination directory
# Convert to tree paths for index
# Check if source exists in index
# Check if source exists in filesystem
# Check if destination already exists
# Check if destination is already in index
# Get the index entry for the source
# Create parent directory for destination if needed
# Move the file in the filesystem
# Update the index
# Create a wrapper for ColorizedDiffStream to handle string/bytes conversion
# Convert string to bytes for ColorizedDiffStream
# Use wrapper for ColorizedDiffStream, direct stream for others
# Write diff directly to the ColorizedDiffStream as bytes
# Traditional path: buffer diff and write as decoded text
# Convert paths to bytes if needed
# TODO(jelmer): better default for encoding?
# Normalize paths to bytes
# Check if paths is not empty
# Convert empty list to None
# Resolve commit refs to SHAs if provided
# Already a Commit object
# parse_commit handles both refs and SHAs, and always returns a Commit object
# Compare two commits
# Get trees from commits
# Use tree_changes to get the changes and apply path filtering
# Skip if paths are specified and this change doesn't match
# Show staged changes (index vs commit)
# Compare working tree to a specific commit
# mypy: commit_sha is set when commit is not None
# Compare working tree to index
# TODO(jelmer): Move this logic to dulwich.submodule
# Get list of submodules to update
# Read submodule configuration
# Find the submodule name from .gitmodules
# Get the URL from config
# URL not in config, skip this submodule
# Get or create the submodule repository paths
# Clone or fetch the submodule
# Clone the submodule as bare repository
# Clone to the git directory
# Create the submodule directory if it doesn't exist
# Create .git file in the submodule directory
# Set up working directory configuration
# Checkout the target commit
# Build the index and checkout files
# If it's a commit, get the tree
# Fetch and checkout in existing submodule
# Fetch from remote
# Update to the target commit
# Reset the working directory
# Create the tag object
# Check if we should sign the tag
# Check tag.gpgSign configuration when sign is not explicitly set
# Parse the object to get its SHA
# Parse the target tree
# Only parse as commit if treeish is not a Tree object
# For Tree objects, we can't determine the commit, skip updating HEAD
# Update HEAD to point to the target commit
# Soft reset: only update HEAD, leave index and working tree unchanged
# Mixed reset: update HEAD and index, but leave working tree unchanged
# Open the index
# Clear the current index
# Populate index from the target tree
# Create an IndexEntry from the tree entry
# Use zeros for filesystem-specific fields since we're not touching the working tree
# Size will be 0 since we're not reading from disk
# Write the updated index
# Hard reset: update HEAD, index, and working tree
# Get configuration for working directory update
# Update working tree and index
# For reset --hard, use current index tree as old tree to get proper deletions
# Empty index
# Allow overwriting modified files
# Open the repo
# Check if mirror mode is enabled
# Mirror mode: push all refs and delete non-existent ones
# Push all refs to the same name on remote
# Normalize refspecs to bytes
# Get the client and path
# In mirror mode, delete remote refs that don't exist locally
# TODO: Handle selected_refs == {None: None}
# Wrap to match the expected signature
# Convert AbstractSet to set since generate_pack_data expects set
# type: ignore[arg-type]  # Function matches protocol but mypy can't verify
# Store the old HEAD tree before making changes
# Perform merge
# Skip updating ref since merge already updated HEAD
# Only update HEAD if we didn't perform a merge
# Update working tree to match the new HEAD
# Skip if merge was performed as merge already updates the working tree
# 1. Get status of staged
# 2. Get status of unstaged
# Convert messages to single string per author
# Sort by number of commits (lines in messages)
# Convert paths to strings for os.walk compatibility
# Skip .git and below.
# Normalize paths to str
# List to store untracked directories found during traversal
# If we can't read the directory, assume it has non-ignored files
# Check if directory is ignored
# For "normal" mode, check if the directory is entirely untracked
# Convert directory path to tree path for index lookup
# Check if any file in this directory is tracked
# This directory is entirely untracked
# If excluding ignored, check if directory contains any non-ignored files
# Directory only contains ignored files, skip it
# Check if it should be excluded due to ignore rules
# For "all" mode, use the original behavior
# frompath_str and basepath_str are both str, so ap must be str
# "normal" mode
# Walk directories, handling both files and directories
# This part won't be reached for pruned directories
# Check if this directory is entirely untracked
# Check individual files in directories that contain tracked files
# Yield any untracked directories found during pruning
# Compile the pattern
# Get the tree to search
# Set up ignore filter if requested
# Convert pathspecs to bytes
# Iterate through all files in the tree
# Skip directories
# Check max depth
# Check pathspecs
# Simple prefix matching (could be enhanced with full pathspec support)
# Check ignore patterns
# Get the blob content
# Search for pattern in the blob
# Compares the Index to the HEAD & determines changes
# Iterate through the changes and report add/delete/modify
# TODO: call out to dulwich.diff_tree somehow.
# TODO(jelmer): Support git-daemon-export-ok and --export-all.
# FIXME: Catch exceptions and write a single-line summary to outf.
# Try to expand branch shorthand before parsing
# Check if we should set up tracking
# Default value
# Determine if the objectish refers to a remote-tracking branch
# Try to resolve objectish as a ref
# HEAD might point to a remote-tracking branch
# Set up tracking if appropriate
# Extract remote name and branch from the ref
# Set up tracking
# Check for branch.sort configuration
# Default is refname (alphabetical)
# Parse sort key
# Apply sorting
# Simple alphabetical sort (default)
# Sort by date
# authordate
# Sort branches by date
# Note: Python's sort naturally orders smaller values first (ascending)
# For dates, this means oldest first by default
# Use a stable sort with branch name as secondary key for consistent ordering
# For reverse sort, we want newest dates first but alphabetical names second
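The stable date sort with a name tiebreaker can be sketched as follows, assuming branches arrive as `(name, timestamp)` pairs; the key negates the timestamp instead of using `reverse=True` so names stay alphabetical:

```python
def sort_branches_by_date(branches, reverse=False):
    """Sort (name, timestamp) pairs by date with name as secondary key.

    For reverse sort we want newest dates first but names still in
    alphabetical order. Sketch of the ordering described above.
    """
    if reverse:
        return sorted(branches, key=lambda b: (-b[1], b[0]))
    return sorted(branches, key=lambda b: (b[1], b[0]))
```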
# Unknown sort key, fall back to default
# Unknown sort key
# Check if branch is an ancestor of HEAD (fully merged)
# git for-each-ref uses glob (7) style patterns, but fnmatch
# is greedy and also matches slashes, unlike glob.glob.
# We have to check parts of the pattern individually.
# See https://github.com/python/cpython/issues/72904
# Convert string patterns to bytes
# Filter by branches/tags if specified
# By default, show tags, heads, and remote refs (but not HEAD)
# Add HEAD if requested
# Filter by patterns if specified
# Verify mode requires exact match
# Pattern matching from the end of the full name
# Only complete parts are matched
# E.g., "master" matches "refs/heads/master" but not "refs/heads/mymaster"
# Try to match from the end
# Check if the end of ref matches the pattern
# Sort by ref name
# Build result list
# Dereference tags if requested
# Peel tag objects to get the underlying commit/object
# Object not found, skip dereferencing
# Determine which branches to show
# Specific branches requested
# Try as full ref name first
# Try as branch name
# Try as remote branch
# Default behavior: show local branches
# Show both local and remote branches
# Show only remote branches
# Show only local branches
# Add current branch if requested and not already included
# HEAD doesn't point to a branch or doesn't exist
# Sort branches for consistent output
# Handle --independent flag
# Handle --merge-base flag
# Need at least 2 branches for merge base
# Get current branch for marking
# Collect commit information for each branch
# (sha, message)
# Handle --list flag (show only branch headers)
# Just show the branch headers
# Create spacing for alignment
# Build commit history for visualization
# Collect all commits reachable from any branch
# sha -> (timestamp, parents, message)
# Recurse to parents
# Commit not found, stop traversal
# Collect commits from all branches
# Find common ancestor
# Sort commits (chronological by default, or topological if requested)
# Topological sort is more complex, for now use chronological
# TODO: Implement proper topological ordering
# Reverse chronological order (newest first)
# Determine how many commits to show
# Find index of common ancestor
# Show commits up to ancestor + more
# Determine which branches contain which commits
# Output branch headers
# Output separator
# Output commits
# Build marker string
# This is the tip of the branch
# This commit is in the branch
# This commit is not in the branch
# Limit output to 26 branches (git show-branch limitation)
# Check if path needs quoting (non-ASCII or special characters)
# Apply C-style quoting
# Non-ASCII character, encode as octal escape
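A sketch of the C-style quoting described above: printable ASCII passes through, quotes and backslashes are escaped, and everything else becomes a `\ooo` octal escape inside double quotes. The helper name is hypothetical:

```python
def c_quote_path(path: bytes) -> str:
    """Quote a path C-style for non-ASCII or special bytes (sketch)."""
    # no quoting needed if everything is plain printable ASCII
    if all(0x20 <= b < 0x7F and b not in (0x22, 0x5C) for b in path):
        return path.decode("ascii")
    out = ['"']
    for b in path:
        if b == 0x22:
            out.append('\\"')
        elif b == 0x5C:
            out.append("\\\\")
        elif 0x20 <= b < 0x7F:
            out.append(chr(b))
        else:
            out.append("\\%03o" % b)  # non-ASCII byte: octal escape
    out.append('"')
    return "".join(out)
```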
# Convert path to string for consistent handling
# Normalize to str
# Preserve whether the original path had a trailing slash
# Normalize Windows paths to use forward slashes
# Restore trailing slash if it was in the original
# For directories, check with trailing slash to get correct ignore behavior
# If this is a directory path, ensure we test it correctly
# Return relative path (like git does) when absolute path was provided
# TODO(jelmer): Provide some way so that the actual ref gets
# updated rather than what it points to, so the delete isn't
# necessary.
# Store the original target for later reference checks
# Handle path-specific checkout (like git checkout -- <paths>)
# Convert paths to bytes
# If no target specified, use HEAD
# Get the target commit and tree
# Get blob normalizer for line ending conversion
# Restore specified paths from target tree
# Look up the path in the target tree
# Path doesn't exist in target tree
# Create directories if needed
# Handle path as string
# Write the file content
# Apply checkout filters (smudge)
# Normal checkout (switching branches/commits)
# For Commit/Tag objects, we'll use their SHA
# Parse the target to get the commit
# Guaranteed by earlier check for normal checkout
# Get current HEAD tree for comparison
# No HEAD yet (empty repo)
# Check for uncommitted changes if not forcing
# staged is a dict with 'add', 'delete', 'modify' keys
# unstaged is a list
# Check if any changes would conflict with checkout
# File doesn't exist in target tree - change can be preserved
# File exists in target tree - would overwrite local changes
# Update working tree
# Update HEAD
# Create new branch and switch to it
# Set up tracking if creating from a remote branch
# Set tracking to refs/heads/<branch> on the remote
# Invalid remote ref format, skip tracking setup
# Check if target is a branch name (with or without refs/heads/ prefix)
# Try adding refs/heads/ prefix
# It's a branch - update HEAD symbolically
# It's a tag, other ref, or commit SHA - detached HEAD
# Simply delegate to the new checkout function
# --- 0) Possibly infer 'cone' from config ---
# --- 1) Read or write patterns ---
# --- 2) Determine the set of included paths ---
# --- 3) Apply those results to the index & working tree ---
# root-level files only
# Finally, apply the patterns and update the working tree
# Do not pass base patterns as dirs
# Convert tuple back to bytes format
# TODO(jelmer): check pack files
# TODO(jelmer): check graph
# TODO(jelmer): check refs
# Convert Entry objects to (old_sha, new_sha) tuples
# Start with minimum length
# Check if this prefix is unique
# Not unique, need more characters
# Found unique prefix
# If we get here, return the full ID
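The prefix-shortening loop above amounts to the following sketch; the helper name is hypothetical, and real Git also checks ambiguity against all object kinds:

```python
def shortest_unique_prefix(sha, all_shas, min_length=7):
    """Find the shortest prefix of sha that is unique among all_shas.

    Starts at min_length and grows until no other ID shares the prefix,
    falling back to the full ID.
    """
    for length in range(min_length, len(sha)):
        prefix = sha[:length]
        # check if this prefix is unique
        if not any(s != sha and s.startswith(prefix) for s in all_shas):
            return prefix  # found unique prefix
    return sha  # not unique at any shorter length: return the full ID
```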
# Get the repository
# Get a list of all tags
# Annotated tag case
# Lightweight tag case - obj is already the commit
# Sort tags by datetime (first element of the value list)
# Get the latest commit
# If there are no tags, return the latest commit
# We're now 0 commits from the top
# Walk through all commits
# Check if tag
# Return plain commit if no parent tag can be found
# Get HEAD commit
# Check if fast-forward is possible
# Use the first merge base for fast-forward checks
# Check if we're trying to merge the same commit
# Already up to date
# Check for fast-forward
# Fast-forward merge
# Update the working directory
# Perform recursive merge (handles multiple merge bases automatically)
# Add merged tree to object store
# Update index and working directory
# Don't create a commit if there are conflicts or no_commit is True
# Create merge commit
# Set author/committer
# Set timestamps
# UTC
# Set commit message
# Add commit to object store
# Get all commits to merge
# Check if we're trying to merge the same commit as HEAD
# Skip this commit, it's already merged
# If no commits to merge after filtering, we're already up to date
# If only one commit to merge, use regular merge
# Find the octopus merge base
# Check if this is a fast-forward (HEAD is the merge base)
# For octopus merges, fast-forward doesn't really apply, so we always create a merge commit
# Perform octopus merge
# Don't create a commit if there are conflicts
# Octopus merge refuses to proceed with conflicts
# Don't create a commit if no_commit is True
# Create merge commit with multiple parents
# Generate default message for octopus merge
# Handle both single commit and multiple commits
# Multiple commits - use octopus merge
# Only one commit, use regular merge
# Multiple commits, use octopus merge
# Single commit - use regular merge
# Type narrowing: committish is not a sequence in this branch
# Resolve tree-ish arguments to actual trees
# Perform the merge
# Add the merged tree to the object store
# Resolve upstream
# Try to find tracking branch
# Build the tracking branch ref
# Default to HEAD^ if no tracking branch found
# Resolve head
# Convert strings to bytes
# Resolve refs to commit IDs
# Get limit commit ID if specified
# Find all commits reachable from head but not from upstream
# This is equivalent to: git rev-list ^upstream head
# Get commits from head that are not in upstream
# Apply limit if specified
# Stop when we reach the limit commit
# Compute patch IDs for upstream commits
# Maps patch_id -> commit_id for debugging
# For each head commit, check if equivalent patch exists in upstream
# Show oldest first
# First line only
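Assuming patch IDs have already been computed, the filtering step can be sketched as below; the `(commit_id, patch_id)` pairs are a hypothetical simplification, and the `-`/`+` markers mirror `git cherry` output:

```python
def mark_cherry_picked(head_commits, upstream_patch_ids):
    """Mark head commits whose patch already exists upstream.

    head_commits is a newest-first list of (commit_id, patch_id);
    returns ('-', id) for equivalents and ('+', id) otherwise,
    oldest first. Simplified sketch.
    """
    return [
        ("-" if pid in upstream_patch_ids else "+", cid)
        for cid, pid in reversed(head_commits)  # show oldest first
    ]
```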
# noqa: D417
# Validate that committish is provided when needed
# Handle abort
# Clean up any cherry-pick state
# Reset index to HEAD
# Handle continue
# Check if there's a cherry-pick in progress
# Check for unresolved conflicts
# Create the commit
# Read saved message if any
# Clean up state files
# Normal cherry-pick operation
# Get current HEAD
# Parse the commit to cherry-pick
# committish cannot be None here due to validation above
# Check if commit has parents
# Get parent of cherry-pick commit
# Perform three-way merge
# Reset index to match merged tree
# Update working tree from the new index
# Allow overwriting because we're applying the merge result
# Save state for later continuation
# Save commit message
# Normalize commits to a list
# Convert string refs to bytes
# Process commits in order
# For revert, we want to apply the inverse of the commit
# This means using the commit's tree as "base" and its parent as "theirs"
# For simplicity, we only handle commits with one parent (no merge commits)
# Perform three-way merge:
# - base: the commit we're reverting (what we want to remove)
# - ours: current HEAD (what we have now)
# - theirs: parent of commit being reverted (what we want to go back to)
# theirs
# Update working tree with conflicts
# Create revert commit
# Set message
# Extract original commit subject
# 2 weeks default
# Count loose objects
# Git uses disk usage, not file size. st_blocks is always in
# 512-byte blocks per POSIX standard
# Available on Linux and macOS
# Fallback for Windows
# Object may have been removed between iteration and stat
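The st_blocks-based accounting can be sketched as follows; st_blocks counts 512-byte units where available (Linux, macOS), and the fallback covers platforms without it:

```python
import os

def disk_usage(path: str) -> int:
    """Return disk usage of a file in bytes, Git-style (sketch).

    Uses st_blocks * 512 where available, falling back to the
    apparent file size on Windows.
    """
    st = os.stat(path)
    blocks = getattr(st, "st_blocks", None)
    if blocks is not None:
        return blocks * 512
    return st.st_size  # fallback for Windows
```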
# Count pack information
# Get pack file size
# Get index file size
# Check if todo file exists
# Edit the todo list of an interactive rebase
# Continue interactive rebase
# Continue regular rebase
# Rebase complete
# Still have conflicts
# Start interactive rebase
# Process the todo list
# Regular rebase
# Continue rebase automatically
# Conflicts
# Return the SHAs of the rebased commits
# Ensure path is bytes
# Parse branch/committish
# Determine which refs to process
# Resolve HEAD to actual branch
# resolved is a list of (refname, sha) tuples
# HEAD points directly to a commit
# Convert branch name to full ref if needed
# Convert subdirectory filter to bytes if needed
# Create commit filter
# Tag callback for renaming tags
# Copy tag to new name
# Delete old tag
# Filter refs
# Determine which commits to format
# Get the last n commits from HEAD
# No HEAD or empty repository
# Handle commit range (start, end)
# Extract commit IDs from commit objects if needed
# Walk from end back to start
# Single commit
# Generate patches
# Get the parent
# Generate the diff
# Initial commit - diff against empty tree
# Generate patch with commit metadata
# Get binary stream from TextIO
# Fallback for non-text streams
# Generate filename
# Convert single good commit to sequence
# Parse commits
# Return the next commit to test if we have both good and bad
# Checkout the next commit
# Convert single rev to sequence
# Get old tree before reset
# Update working tree to new HEAD
# No HEAD after reset
# Use iter_reflogs to discover all reflogs
# Read the reflog entries for this ref
# Default expire times if not specified
# Default: expire entries older than 90 days, unreachable older than 30 days
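The default expiry policy above (90 days for reachable entries, 30 days for unreachable ones) can be expressed as a small predicate; the signature and defaults here are illustrative:

```python
import time

def should_expire(entry_time, reachable, now=None,
                  expire=90 * 24 * 3600, expire_unreachable=30 * 24 * 3600):
    """Decide whether a reflog entry should be expired (sketch).

    Reachable entries expire after `expire` seconds, unreachable ones
    after `expire_unreachable` seconds.
    """
    if now is None:
        now = time.time()
    age = now - entry_time
    limit = expire if reachable else expire_unreachable
    return age > limit
```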
# Already checked above
# Build set of reachable objects if we have unreachable expiration time
# Process each ref
# Create reachability checker
# No unreachable expiration, so assume everything is reachable
# Open the reflog file
# For dry run, just read and count what would be expired
# Actually expire entries
# Load existing GitAttributes
# Return current LFS tracked patterns
# Add new patterns
# Ensure pattern is bytes
# Set LFS attributes for the pattern
# Write updated attributes
# Stage the .gitattributes file
# Return updated list
# Remove specified patterns
# Check if pattern is tracked by LFS
# Create LFS store
# Set up Git config for LFS
# Get LFS store
# Read file content
# Clean the content (convert to LFS pointer)
# Smudge the pointer (retrieve actual content)
# Get the commit and tree
# Walk the tree
# Check if it's an LFS pointer
# Initialize LFS if needed
# Get current index
# Determine files to migrate
# Migrate all files above 100MB
# 100MB
# Use include/exclude patterns
# Check include patterns
# Check exclude patterns
# Migrate files
# Convert to LFS pointer
# Write pointer back to file
# Create blob for pointer content and update index
# Write updated index
# Track patterns if include was specified
# Check all files in index
# Get LFS server URL from config
# Try remote URL
# Append /info/lfs to remote URL
# Get authentication
# TODO: Support credential helpers and other auth methods
# Create LFS client and store
# Find all LFS pointers in the refs
# Walk the commit tree
# Check if we already have it
# Object exists, no need to fetch
# Fetch missing objects
# First do a fetch for HEAD
# Then checkout LFS files in working directory
# Replace pointer with actual content
# Object not available
# Find all LFS objects to push
# Push current branch
# Push objects
# Object not in local store
# Check working directory files
# Check if object exists locally
# Object exists locally
# Check if file has been modified
# TODO: Check for not committed and not pushed files
# Resolve committish references to commit IDs
# Find merge base
# Return first result only if all=False
# If ancestor is the merge base of (ancestor, descendant), then it's an ancestor
# Filter to independent commits
# Convert PathLike to str for split_maildir
# Read from stdin
# Convert PathLike to str if needed
# input_path is either str or IO[bytes] here
# objectspec.py -- Object specification
# Copyright (C) 2014 Jelmer Vernooij <jelmer@jelmer.uk>
# Re-raise original KeyError for consistency
# Handle :<path> - lookup path in tree
# Handle @{N} - reflog lookup
# Git uses reverse chronological order
# Handle ^{} - tag dereferencing
# Handle ~ and ^ operators
# Follow first parent N times
# sep == b"^"
# Get N-th parent (or commit itself if N=0)
# Process remaining operators recursively
# No operators, just return the object
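The `~`/`^` handling can be sketched against a plain parent map; `resolve_operators` is a hypothetical name, and real revision parsing handles many more forms (`@{N}`, `^{}`, `:<path>`):

```python
import re

def resolve_operators(parents, sha, spec):
    """Resolve a trailing ~N / ^N operator string against a parent map.

    parents maps a commit ID to its list of parent IDs. '~N' follows
    the first parent N times; '^N' takes the N-th parent, or the
    commit itself for N=0. Simplified sketch.
    """
    for op, num in re.findall(r"([~^])(\d*)", spec):
        n = int(num) if num else 1
        if op == "~":
            for _ in range(n):  # follow first parent N times
                sha = parents[sha][0]
        else:  # op == "^"
            if n == 0:
                continue  # the commit itself
            sha = parents[sha][n - 1]  # N-th parent
    return sha
```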
# If already a Tree, return it directly
# If it's a Commit, return its tree
# For Tag objects or strings, use the existing logic
# treeish is commit sha
# Try parsing as commit (handles short hashes)
# Tag handling - dereference and recurse
# TODO: check force?
# TODO: Support * in refspecs
# Tag points to a missing object
# If already a Commit object, return it directly
# If it's a Tag object, dereference it
# TODO: parse_path_in_tree(), which handles e.g. v1.0:Documentation
# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab
# Copyright (c) 2020 Kevin B. Hendricks, Stratford Ontario Canada
# priority queue using builtin python minheap tools
# the lack of a builtin maxheap is unfortunate, but it is liveable
# with integer timestamps using negation
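The negation trick reads like this in practice: `heapq` is a min-heap, so pushing negated integer timestamps makes the newest commit pop first:

```python
import heapq

heap = []
for ts, sha in [(100, "a"), (300, "c"), (200, "b")]:
    # negate the timestamp so the min-heap behaves as a max-heap
    heapq.heappush(heap, (-ts, sha))

order = []
while heap:
    order.append(heapq.heappop(heap)[1])
# newest timestamp pops first: c, then b, then a
```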
# Flags to Record State
# ancestor of commit 1
# ancestor of commit 2
# Do Not Consider
# potential LCA (Lowest Common Ancestor)
# initialize the working list states with ancestry info
# note: the possibility of c1 being one of the c2s should be handled
# If c1 doesn't exist and we have shallow commits, it might be a missing parent
# For missing commits in shallow repos, use a minimal timestamp
# If c2 doesn't exist and we have shallow commits, it might be a missing parent
# loop while at least one working list commit is still viable (not marked as _DNC)
# adding any parents to the list in a breadth first manner
# Look only at ANCESTRY and _DNC flags so that already
# found _LCAs can still be marked _DNC by lower _LCAs
# potential common ancestor; if not already in candidates, add it
# mark any parents of this node _DNC, as all parents would be
# common ancestors one generation further removed
# If we can't get parents in a shallow repo, skip this node
# This is safer than pretending it has no parents
# if this parent was already visited with no new ancestry/flag information
# do not add it to the working list again
# Parent doesn't exist - if we're in a shallow repo, skip it
# walk final candidates removing any superseded by _DNC by later lower _LCAs
# remove any duplicates and sort it so that earliest is first
# git itself sorts these based on commit times
# must use parents provider to handle grafts and shallow
# Algorithm: Find the common ancestor
# If c1 doesn't exist in the object store, we can't determine fast-forward
# This can happen in shallow clones where c1 is a missing parent
# Check if any shallow commits have c1 as a parent
# We're in a shallow repository and c1 doesn't exist
# We can't determine if fast-forward is possible
# Filter out commits that are ancestors of other commits
# Check if this commit is an ancestor of any other commit
# If merge base of (commit_id, other_id) is commit_id,
# then commit_id is an ancestor of other_id
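The ancestor test described above (a commit is an ancestor of another iff it is the merge base of the pair) can be sketched on a toy parents mapping; this is an illustration of the filtering idea, not dulwich's actual API:

```python
def ancestors(parents, commit):
    """All ancestors of commit (excluding itself), by walking parents."""
    seen = set()
    todo = list(parents.get(commit, []))
    while todo:
        c = todo.pop()
        if c not in seen:
            seen.add(c)
            todo.extend(parents.get(c, []))
    return seen

def filter_independent(parents, commits):
    """Drop any commit that is an ancestor of another commit in the list."""
    return [
        c for c in commits
        if not any(c in ancestors(parents, other)
                   for other in commits if other != c)
    ]

# Toy DAG: a <- b <- c, plus an unrelated root d
dag = {"b": ["a"], "c": ["b"], "d": []}
```

Here `a` is an ancestor of `c`, so only the independent heads `c` and `d` survive the filter.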
# diff.py -- Diff functionality for Dulwich
# Copyright (C) 2025 Dulwich contributors
# No HEAD means no commits yet
# Get tree from index
# Get index for tracking new files
# Process files from the committed tree lazily
# Get the old file from tree
# Use lstat to handle symlinks properly
# File was deleted
# Show as deletion if it was in tree
# Handle different file types
# Directory in working tree where file was expected
# Show as deletion
# If old_blob is None, it's a new directory - skip it
# Symlink in working tree
# New symlink
# Type change: file/submodule -> symlink
# Symlink target changed
# Regular file
# Create a temporary blob for filtering and comparison
# Apply filters if needed (only for regular files, not gitlinks)
# Determine the git mode for the new file
# New file
# Symlink -> file
# Submodule -> file
# Regular file, check for content or mode changes
# Now process any new files from index that weren't in the tree
# New file already deleted, skip
# Handle different file types for new files
# New directory - skip it
# New regular file
# Apply filters if needed
# Process each file in the index
# Handle conflicted entries by using stage 2 ("ours")
# No stage 2 entry, skip
# Get file from regular index entry
# Type check and cast to Blob
# Check if type changed or content changed
# Apply filters if needed (only for regular files)
# Check if this was a type change
# File was deleted - this is normal, not a warning
# Show as deletion since we can't read it
# Rich expects a text stream, so we need to wrap our binary stream
# Add new data to buffer
# Process complete lines
# Colorize based on diff line type
# Fallback to raw output if we can't decode/encode the text
# Write any remaining buffer content
# Flush the text wrapper and underlying stream
# BinaryIO interface methods
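The line-buffered colorizing described above can be sketched without Rich; the ANSI codes and class name here are illustrative assumptions, not dulwich's actual stream:

```python
GREEN, RED, CYAN, RESET = "\x1b[32m", "\x1b[31m", "\x1b[36m", "\x1b[0m"

class ColorizingBuffer:
    """Accumulate bytes; emit complete lines colorized by diff prefix."""
    def __init__(self):
        self._buf = b""
        self.out = []

    def write(self, data):
        # Add new data to the buffer, then process only complete lines;
        # any trailing partial line stays buffered for the next write.
        self._buf += data
        while b"\n" in self._buf:
            line, self._buf = self._buf.split(b"\n", 1)
            text = line.decode("utf-8", "replace")
            if text.startswith("+"):
                self.out.append(GREEN + text + RESET)
            elif text.startswith("-"):
                self.out.append(RED + text + RESET)
            elif text.startswith("@@"):
                self.out.append(CYAN + text + RESET)
            else:
                self.out.append(text)

    def flush(self):
        # Drain any remaining partial line.
        if self._buf:
            self.write(b"\n")
```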
# Start with all refs
# This follows symbolic refs
# Broken ref
# TODO: Add reflog support when reflog functionality is available
# Walk all reachable objects
# Add referenced objects
# Tree
# Parents
# Tree entries
# Tagged object
# Check grace period
# Object not found, skip it
# Calculate size before attempting deletion
# Only count as pruned if we get here (deletion succeeded or dry run)
# Object already gone
# File system errors during deletion
# Count initial state
# Find unreachable objects to exclude from repacking
# Apply grace period check
# Pack refs
# Delete loose unreachable objects
# Repack everything, excluding unreachable objects
# This handles both loose object packing and pack consolidation
# Repack excluding unreachable objects
# Normal repack
# Prune orphaned temporary files
# Count final state
# Check environment variable first
# Check programmatic disable flag
# Check if auto GC is disabled
# Auto GC is disabled
# Check loose object count
# Can't count loose objects on non-disk stores
# Check pack file count
# Check for gc.log file - only for disk-based repos
# For non-disk repos, just run GC without gc.log handling
# Check gc.logExpiry
# Default to 1 day
# Parse time value (simplified - just support days for now)
# gc.log exists and is not expired - skip GC
# TODO: Support gc.autoDetach to run in background
# For now, run in foreground
# Run GC with auto=True flag
# Remove gc.log on successful completion
# Write error to gc.log
# Don't propagate the error - auto GC failures shouldn't break operations
# bundle.py -- Bundle format support
# Copyright (C) 2020 Jelmer Vernooij <jelmer@jelmer.uk>
# Convert the unpacked object to a proper git object
# Extract pack data to separate stream since PackData expects
# the file to start with PACK header at position 0
# Build the references dictionary for the bundle
# Handle peeled refs
# Convert prerequisites to proper format
# SHA1 hex string
# Validate it's actually hex
# Store hex in bundle and for pack generation
# Not a valid hex string, invalid prerequisite
# Binary SHA, convert to hex for both bundle and pack generation
# Invalid length
# Assume it's already a binary SHA
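The prerequisite normalization the comments above describe (40-char hex string vs 20-byte binary digest) might look like this; the helper name is hypothetical:

```python
import binascii

def normalize_prerequisite(sha):
    """Return a 40-byte hex SHA-1, accepting hex bytes or binary digests."""
    if len(sha) == 40:
        try:
            binascii.unhexlify(sha)  # validate it's actually hex
        except binascii.Error:
            raise ValueError("invalid prerequisite: %r" % sha)
        return sha
    if len(sha) == 20:
        # Binary SHA: convert to hex
        return binascii.hexlify(sha)
    raise ValueError("invalid prerequisite length: %d" % len(sha))
```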
# Generate pack data containing all objects needed for the refs
# Store the pack objects directly, we'll write them when saving the bundle
# For now, create a simple wrapper to hold the data
# Materialize the iterator
# Create bundle object
# lru_cache.py -- Simple LRU cache for dulwich
# Copyright (C) 2006, 2008 Canonical Ltd
# Copyright (C) 2022 Jelmer Vernooĳ <jelmer@jelmer.uk>
# TODO: We could compute this 'on-the-fly' like we used to, and remove
# Just make sure to break any refcycles, etc
# The "HEAD" of the lru linked list
# The "TAIL" of the lru linked list
# Inlined from _record_access to decrease the overhead of __getitem__
# We also know more about the structure: if __getitem__ is
# succeeding, then self._most_recently_used must not be None
# Nothing to do, this node is already at the head of the queue
# Remove this node from the old location
# benchmarking shows that the lookup of _null_key in globals is faster
# than the attribute lookup for (node is self._least_recently_used)
# 'node' is the _least_recently_used, because it doesn't have a
# 'next' item. So move the current lru to the previous node.
# Insert this node at the front of the list
# Trigger the cleanup
# Make sure the cache is shrunk to the correct size
# Move 'node' to the front of the queue
# We've taken care of the tail pointer, remove the node, and insert it
# at the front
# REMOVE
# If we have removed all entries, remove the head pointer as well
# Now remove this node from the linked list
# And remove this node's pointers
# Clean up in LRU order
# The new value is 'too big to fit', as it would fill up/overflow
# the cache all by itself
# We won't be replacing the old node, so just remove it
# Time to cleanup
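The dict-plus-doubly-linked-list structure these comments describe can be sketched as a minimal LRU cache; a simplified illustration, not dulwich's LRUCache:

```python
class _Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    """Dict keyed lookup plus a doubly linked list; MRU at the head."""
    def __init__(self, max_cache=3):
        self._cache = {}
        self._max_cache = max_cache
        self._head = None  # most recently used
        self._tail = None  # least recently used

    def __contains__(self, key):
        return key in self._cache

    def __getitem__(self, key):
        node = self._cache[key]
        # Move the node to the front unless it is already the head.
        if node is not self._head:
            self._unlink(node)
            self._push_front(node)
        return node.value

    def __setitem__(self, key, value):
        if key in self._cache:
            node = self._cache[key]
            node.value = value
            self._unlink(node)
        else:
            node = _Node(key, value)
            self._cache[key] = node
        self._push_front(node)
        # Shrink the cache back to size, evicting in LRU order.
        while len(self._cache) > self._max_cache:
            lru = self._tail
            self._unlink(lru)
            del self._cache[lru.key]

    def _push_front(self, node):
        node.prev, node.next = None, self._head
        if self._head is not None:
            self._head.prev = node
        self._head = node
        if self._tail is None:
            self._tail = node

    def _unlink(self, node):
        if node.prev is not None:
            node.prev.next = node.next
        else:
            self._head = node.next
        if node.next is not None:
            node.next.prev = node.prev
        else:
            self._tail = node.prev
        node.prev = node.next = None
```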
# lfs_server.py -- Simple Git LFS server implementation
# Type annotation for the server attribute
# Check if object exists
# upload
# Extract OID from path
# Read content in chunks
# Calculate SHA256
# Verify OID matches
# Check if object already exists
# Store the object only if it doesn't exist
# Optionally validate size
# Could verify size matches stored object
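The chunked upload verification above (read in chunks, hash as you read, compare against the OID) can be sketched with hashlib; the helper is hypothetical:

```python
import hashlib

def receive_lfs_object(stream, expected_oid, chunk_size=8192):
    """Read content in chunks, computing SHA256; verify the OID matches."""
    h = hashlib.sha256()
    chunks = []
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        h.update(chunk)
        chunks.append(chunk)
    if h.hexdigest() != expected_oid:
        raise ValueError("OID mismatch: got %s" % h.hexdigest())
    return b"".join(chunks)
```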
# Try to open the object - if it exists, close it immediately
# Get the actual port if we used 0
# config.py - Reading and writing Git config files
# Copyright (C) 2011-2013 Jelmer Vernooij <jelmer@jelmer.uk>
# Type for file opener callback
# Type for includeIf condition matcher
# Takes the condition value (e.g., "main" for onbranch:main) and returns bool
# Security limits for include files
# 1MB max for included config files
# Maximum recursion depth for includes
# Convert to strings for easier manipulation
# Normalize paths to use forward slashes for consistent matching
# Handle the common cases for gitdir patterns
# Pattern like **/dirname/** should match any path containing dirname
# Remove **/ and /**
# Check if path contains the directory name as a path component
# Pattern like **/filename
# Remove **/
# Pattern like /path/to/dir/** should match /path/to/dir and any subdirectory
# Remove /**
# Handle patterns with ** in the middle
# Path must start with prefix and end with suffix (if any)
# Direct match or simple glob pattern
# Convert glob pattern to regex
# Replace escaped \*\* with .* (match anything)
# Replace escaped \* with [^/]* (match anything except /)
# Anchor the pattern
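The glob-to-regex conversion described above can be sketched as follows; a simplified version of the matching, assuming `**` may cross `/` while `*` may not:

```python
import re

def gitdir_pattern_to_regex(pattern):
    """Convert a gitdir-style glob to an anchored regex (simplified)."""
    escaped = re.escape(pattern)
    # Replace escaped \*\* with .* (match anything, including /)
    escaped = escaped.replace(r"\*\*", ".*")
    # Replace remaining escaped \* with [^/]* (anything except /)
    escaped = escaped.replace(r"\*", "[^/]*")
    # Anchor the pattern
    return re.compile("^" + escaped + "$")
```

The order of the two replacements matters: `\*\*` must be rewritten before the single-star rule sees it.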
# For config sections, only lowercase the section name (first element)
# but preserve the case of subsection names (remaining elements)
# Key type must be ConfigKey
# For get() default parameter
# Return a view of the original keys (not lowercased)
# We need to deduplicate since _real can have duplicates
# Return a view that iterates over the real list to preserve order
# Return iterator over original keys (not lowercased), deduplicated
# This method replaces all existing values for the key
# Backslash at end of string - treat as literal backslash
# Unknown escape sequence - treat backslash as literal and process next char normally
# Reprocess the character after the backslash
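The escape handling described above can be sketched as a simplified unescaper following those rules (trailing backslash kept literal, unknown escapes kept verbatim):

```python
_ESCAPES = {ord("n"): b"\n", ord("t"): b"\t", ord("b"): b"\b",
            ord('"'): b'"', ord("\\"): b"\\"}

def unescape_value(value):
    """Unescape a git-config-style value (simplified sketch)."""
    out = bytearray()
    i = 0
    while i < len(value):
        c = value[i]
        if c == ord("\\"):
            if i + 1 >= len(value):
                out.append(c)  # backslash at end of string: keep literal
                break
            nxt = value[i + 1]
            if nxt in _ESCAPES:
                out += _ESCAPES[nxt]
                i += 2
                continue
            # Unknown escape: keep the backslash, reprocess next char normally
            out.append(c)
            i += 1
            continue
        out.append(c)
        i += 1
    return bytes(out)
```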
# the rest of the line is a comment
# Normalize line to bytearray for simple 2/3 compatibility
# Comment characters outside balanced quotes denote comment start
# Remove only the newline characters, keep the content including the backslash
# Remove \r\n, keep the \
# Remove \n, keep the \
# Count consecutive backslashes at the end
# If we have an odd number of backslashes, the last one is a line continuation
# If we have an even number, they are all escaped and there's no continuation
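The odd/even backslash rule can be sketched directly; the helper name is illustrative:

```python
def ends_with_continuation(line):
    """True iff the line ends with an odd number of backslashes.

    An even count means every backslash is itself escaped, so there
    is no line continuation.
    """
    stripped = line.rstrip(b"\r\n")
    # Count consecutive backslashes at the end
    count = len(stripped) - len(stripped.rstrip(b"\\"))
    return count % 2 == 1
```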
# Parse section header ("[bla]")
# Handle subsections - Git allows more complex syntax for certain sections like includeIf
# Standard quoted subsection
# Special handling for includeIf sections which can have complex conditions
# Git allows these without strict quote validation
# Other sections must have quoted subsections
# Track included files to prevent cycles
# Prevent excessive recursion
# Process include/includeIf directives
# continuation line
# Handle includeIf conditions
# Resolve the include path
# Check for circular includes
# Invalid path - log and skip
# Load and merge the included file
# Use provided file opener or default to GitFile
# Git silently ignores missing or unreadable include files
# Log for debugging purposes
# Track this path to prevent cycles
# Parse the included file
# Merge the included configuration
# Expand ~ to home directory
# If path is relative and we have a config directory, make it relative to that
# Try custom matchers first if provided
# Fall back to built-in matchers
# Unknown condition type - log and ignore (Git behavior)
# Split on the first colon to separate config key from pattern
# Parse the config key to get section and name
# Handle wildcards in section names (e.g., remote.*)
# Match any subsection
# Check all sections that match the pattern
# Direct section lookup
# in windows native shells (powershell/cmd) exe path is
# .../Git/bin/git.exe or .../Git/cmd/git.exe
# in git-bash exe path is .../Git/mingw64/bin/git.exe
# There is no set standard for system config dirs on windows. We try the
# following:
# Try to find Git installation from PATH first
# Only use the first found path
# Fall back to registry if not found in PATH
# Include deprecated PROGRAMDATA location
# Include all Git installations found
# Handle GIT_CONFIG_GLOBAL - overrides user config paths
# Handle GIT_CONFIG_SYSTEM and GIT_CONFIG_NOSYSTEM
# If either path or url is missing, just ignore this
# submodule entry and move on to the next one. This is
# how git itself handles malformed .gitmodules entries.
# dulwich - Simple command-line interface to Dulwich
# Copyright (C) 2008-2011 Jelmer Vernooij <jelmer@jelmer.uk>
# vim: expandtab
# 30 days
# 365 days
# Handle special cases
# Expire all entries - set to future time so everything is "older"
# 100 years in future
# Never expire - set to epoch start so nothing is older
# Try parsing as direct Unix timestamp
# Parse relative time and convert to timestamp
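A hedged sketch of the expiry parsing the comments describe; the accepted spellings here ("all", "never", a bare timestamp, "N.unit.ago") are assumptions for illustration, not the CLI's actual grammar:

```python
import time

_UNITS = {"second": 1, "minute": 60, "hour": 3600,
          "day": 86400, "week": 7 * 86400}

def parse_expiry(spec, now=None):
    """Turn an expiry spec into a cutoff timestamp (hypothetical helper)."""
    now = time.time() if now is None else now
    if spec == "all":
        # Expire everything: cutoff ~100 years in the future
        return now + 100 * 365 * 86400
    if spec == "never":
        # Expire nothing: cutoff at the epoch start
        return 0
    if spec.isdigit():
        return int(spec)  # direct Unix timestamp
    # Relative form like "2.weeks.ago"
    count, unit, ago = spec.split(".")
    assert ago == "ago"
    return now - int(count) * _UNITS[unit.rstrip("s")]
```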
# Determine which editor to use
# Create a temporary file
# Launch the editor
# Read the edited content
# Clean up the temporary file
# add padding
# Convert to bytes and decode to string for the pager
# Pager failed to start, fall back to direct output
# If pager died (user quit), stop writing output
# Pager died (user quit), stop writing output
# No pager available, write directly to stdout
# Additional file-like methods for compatibility
# Expose buffer if it exists
# We only use this with sys.stdout which is TextIO
# For stdout/stderr, we don't close them
# Check global pager disable flag
# Don't page if stdout is not a terminal
# Priority order for pager command (following git's behavior):
# 1. Check pager.<cmd> config (if cmd_name provided)
# 2. Check environment variables: DULWICH_PAGER, GIT_PAGER, PAGER
# 3. Check core.pager config
# 4. Fallback to common pagers
# 1. Check per-command pager config (pager.<cmd>)
# It's a custom pager command
# 2. Check environment variables
# -F: quit if one screen, -R: raw control chars, -X: no init/deinit
# Ultimate fallback
# Use binary buffer for archive output
# Convert '.' to None to add all files
# Show shortened commit hash and line content
# blame is an alias for annotate
# Mutually exclusive group for location vs --all
# Mutually exclusive group for tag handling
# Determine include_tags setting
# Default behavior - don't force tag inclusion
# Fetch from all remotes
# Fetch from specific location
# Handle the -- separator for paths
# Determine diff algorithm
# Determine if we should use color
# auto
# Show diff for working tree or staged changes
# Show diff between working tree and specified commit
# Show diff between two commits
# Flush any remaining output
# Start with the initial message
# Add branch info if repo is provided
# Get the final reference
# Remove 'refs/heads/' prefix
# Launch editor
# Remove comment lines and strip
# Determine which config file to use
# Use global config file
# Use local repository config (default)
# Handle --list
# Handle --unset or --unset-all
# Parse the key (e.g., "user.name" or "remote.origin.url")
# For keys like "remote.origin.url", section is ("remote", "origin")
# Check if the key exists first
# Delete the configuration key using ConfigDict's delete method
# Handle --get-all
# Handle get (no value provided)
# Handle set (key and value provided)
# No action specified
# For amend, create a callable that opens editor with original message pre-populated
# Get the original commit message from current HEAD
# Open editor with original message
# For regular commits, use empty template
# If ref is provided, we're setting; otherwise we're reading
# Set symbolic reference
# Read symbolic reference
# ignored, we never prune
# List all variables
# Query specific variable
# No arguments - print error
# Wrap the ColorizedDiffStream (BinaryIO) back to TextIO
# Handle --exists mode
# Reference exists
# Reference missing
# Error looking up reference
# Regular show-ref mode
# Return error if no matches found (unless quiet)
# Output results
# Show commit contents before verification
# In raw mode, let the exception propagate
# Show tag contents before verification
# Show subcommand (default when no subcommand is specified)
# Expire subcommand
# Delete subcommand
# If no arguments or first arg is not a subcommand, treat as show
# Parse as show command
# show or default
# Show reflogs for all refs
# Format similar to git reflog
# Parse time specifications
# Execute expire
# Print results
# Parse refspec (e.g., "HEAD@{1}" or "refs/heads/master@{2}")
# Execute delete
# Default to mixed behavior
# Use the porcelain.reset function for all modes
# Show symrefs first, like git does
# Show regular refs
# Parse expire grace period
# Try to parse as absolute date
# Progress callback
# bisect start
# bisect bad
# bisect good
# bisect skip
# bisect reset
# bisect log
# bisect replay
# bisect help
# Bisect complete - find the first bad commit
# If multiple commits are provided, pass them as a list
# If only one commit is provided, pass it as a string
# Convert commit_sha to hex string
# Check argument validity
# This shouldn't happen unless there were conflicts
# Determine base tree - if only two parsed_args provided, base is None
# Only two arguments provided
# Three arguments provided
# Output only conflict paths, null-terminated
# Output the merged tree SHA
# Output conflict information
# Parse prune grace period
# Default to 2 weeks
# Report results
# Handle the case where revision might be a pathspec
# If revision looks like a pathspec (contains wildcards or slashes),
# treat it as a pathspec instead
# Display verbose output
# Size in KiB
# Simple output
# Handle abort/continue/skip first
# Check if interactive rebase is in progress
# Edit todo list for interactive rebase
# Normal rebase requires upstream
# Interactive rebase
# Supported Git-compatible options
# Branch/ref to rewrite (defaults to HEAD)
# Track if any filter fails
# Setup environment for filters
# Helper function to run shell commands
# Create filter functions based on arguments
# Export tree to tmpdir
# Run the filter command in the temp directory
# Rebuild tree from modified temp directory
# Use appropriate file mode
# Read back from index
# The filter receives: tree parent1 parent2...
# Skip commit
# Open repo once
# Check for refs/original if not forcing
# Call porcelain.filter_branch with the repo object
# Always keep original with git
# Check if any filter failed
# Git filter-branch shows progress
# Git shows: Ref 'refs/heads/branch' was rewritten
# lfs init
# lfs track
# lfs untrack
# lfs ls-files
# lfs migrate
# lfs pointer
# lfs clean
# lfs smudge
# lfs fetch
# lfs pull
# lfs push
# lfs status
# Parse committish using the new function
# Convert Commit objects to their SHAs
# Determine if input is a Maildir
# Check if it's a Maildir (has cur, tmp, new subdirectories)
# Call porcelain function
# Print information about the split
# Handle both progress(msg) and progress(count, msg) signatures
# Convert bytes to string if needed
# For ranges like A..B, we need to include B if it's a ref
# Split the range to get the end part
# Not empty (not "A..")
# Process the bundle while file is still available via stdin
# Keep the file open during bundle processing
# Process pack data while file is still open
# If committish is provided and not detaching, treat as branch
# If committish is provided and detaching, treat as commit
# Parse only the global options and command, stop at first positional
# We'll handle help ourselves
# Parse known args to separate global options from command args
# Apply global pager settings
# Handle help
# First remaining arg is the command
# TODO(jelmer): Return non-0 on errors
# server.py -- Implementation of the server side git protocols
# Copyright (C) 2008 John Carr <john.carr@unrouted.co.uk>
# Copyright (C) 2011-2012 Jelmer Vernooij <jelmer@jelmer.uk>
# Handle both str and bytes keys for backward compatibility
# Try converting between str and bytes
# Ensure path is a string to avoid TypeError when joining with self.root
# Flags needed for the no-done capability
# A state variable for denoting that the have list is still
# being processed, and the client is not accepting any other
# data (such as side-band, see the progress method here).
# The provided haves are processed, and it is safe to send side-
# band data now.
# proto.write returns Optional[int], but we need to treat it as returning None
# for compatibility with write_pack_from_container
# Bail if we don't have a Repo available; this is ok since
# clients must be able to handle if the server doesn't include
# all relevant tags.
# TODO: fix behavior when missing
# TODO(jelmer): Integrate this with the refs logic in
# Repo.find_missing_objects
# Note that the client is only processing responses related
# to the have lines it sent, and any other data (including side-
# band) will be considered a fatal error.
# Did the process short-circuit (e.g. in a stateless RPC call)? Note
# that the client still expects a 0-object pack in most cases.
# Also, if it also happens that the object_iter is instantiated
# with a graph walker with an implementation that talks over the
# wire (which is this instance of this class) this will actually
# iterate through everything and write things out to the wire.
# Handle shallow clone case where missing_objects can be None
# we are done
# non-commit wants are assumed to be satisfied
# TODO: handle parents with later commit times than children
# Skip refs that are inaccessible
# TODO(jelmer): Integrate with Repo.find_missing_objects refs
# logic.
# i'm done..
# Now the client will send its want commands
# The client may close the socket at this point, expecting a
# flush-pkt from the server. We might be ready to send a packfile
# at this point, so we need to explicitly short-circuit in this
# type: ignore[call-overload, no-any-return]
# consume client's flush-pkt
# Update self.shallow instead of reassigning it since we passed a
# reference to it before this method was called.
# relay the message down to the handler.
# Delegate this to the implementation.
# defer the handling of done
# we are not done, especially when done is required; skip
# the pack for this request and especially do not handle
# the done.
# Okay we are not actually done then since the walker picked
# up no haves.  This is usually triggered when client attempts
# to pull from a source that has no common base_commit.
# See: test_server.MultiAckDetailedGraphWalkerImplTestCase.\
# else we blind ack within next
# in multi-ack mode, a flush-pkt indicates the client wants to
# flush but more have lines are still coming
# blind ack
# don't nak unless no common commits were found, even if not
# everything is satisfied
# Should only be called iff have_ref is common
# In the HTTP version of this request, a flush-pkt always
# signifies an end of request, so we also return
# nothing here as if we are done (but not really, as
# it depends on whether no-done capability was
# specified and that's handled in handle_done which
# may or may not call post_nodone_check depending on
# that).
# Let the walker know that we got a done.
# return the sha and let the caller ACK it with the
# above ack method.
# TODO: more informative error messages than just the exception
# The pack may still have been moved in, but it may contain
# broken objects. We trust a later GC to clean it up.
# The git protocol wants to find a status entry related to the unpack
# process even if no pack data has been sent.
# if ref is None then the client doesn't want to send us anything
# client will now send us a list of (oldsha, newsha, ref)
# backend can now deal with these refs and read a pack using self.read
# when we have read all the pack from the client, send a status report
# if the client asked for it
# Default handler classes for git services.
# diff_tree.py -- Utilities for diffing files and trees.
# TreeChange type constants.
# _NULL_ENTRY removed - using None instead
# This could be fairly easily generalized to >2 trees if we find a use
# If we have path filters, check if we should process this tree
# Special case for root tree
# Check if any of our filter paths could be under this tree
# Exact match - we want this directory itself
# Filter path is under this directory
# This directory is under a filter path
# Skip this tree entirely
# Ensure trees are Tree objects before merging
# Use empty trees for None values
# Only yield entries that match our path filters
# Check if this entry matches any of our filters
# Exact match
# This entry is under a filter directory
# This is a parent directory of a filter path
# Treat entries for trees as missing.
# File type changed: report as delete/add.
# Both were None because at least one was a tree.
# Organize by path.
# Yield only conflicting changes.
# If no change was found relative to one parent, that means the SHA
# must have matched the SHA in that parent, so it is not a
# conflict.
# Cache attrs as locals to avoid expensive lookups in the inner loop.
# Iterate over the smaller of the two dicts, since this is symmetrical.
# Sort by old path then new path. If only one exists, use it for both keys.
# Treat all modifies as potential deletes for rename detection,
# but don't split them (to avoid spurious renames). Setting
# find_copies_harder means we treat unchanged the same as
# modified.
# Keep track of whether the delete was actually marked as a delete.
# If not, it needs to be marked as a copy.
# TODO(dborowitz): Less arbitrary way of dealing with extra copies.
# If the paths match, this must be a split modify, so make sure it
# comes out as a modify.
# If it's in deletes but not marked as a delete, it must have been
# added due to find_copies_harder, and needs to be marked as a
# copy.
# TODO: Optimizations:
# Match C git's behavior of not attempting to find content renames if
# the matrix size exceeds the threshold.
# Git links don't exist in this repo.
# Sort scores from highest to lowest, but keep names in ascending
# order.
# If the candidate was originally a copy, that means it came from a
# modified or unchanged path, so we don't want to prune it.
# Hold on to the pure-python implementations for testing.
# For type checking, use the Python implementations
# At runtime, try to import Rust extensions
# Try to import Rust versions
# Override with Rust versions
# line_ending.py -- Line ending conversion functions
# Copyright (C) 2018-2018 Boris Feld <boris.feld@comet.ml>
# Default filter
# For text attribute: always normalize on checkin
# No config: no conversion
# Get core.eol setting
# Get core.autocrlf setting
# Get core.safecrlf setting
# For text attribute: always normalize to LF on checkin
# Smudge behavior depends on core.eol and core.autocrlf
# Normal autocrlf behavior
# Skip binary files if detection is enabled
# Check if conversion is safe
# LineEndingFilter doesn't hold any resources that need cleanup
# LineEndingFilter is lightweight and should always be recreated
# to ensure it uses the latest configuration
# Single-pass conversion: split on LF and join with CRLF
# This avoids the double replacement issue
# Remove any trailing CR to avoid CRCRLF
# Check if conversion is reversible
# For CRLF->LF conversion, check if converting back would recover original
# This was a CRLF->LF conversion
# warn
# For LF->CRLF conversion, check if converting back would recover original
# This was a LF->CRLF conversion
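The single-pass conversion and the round-trip safety check described above can be sketched as (a simplified version of the filters, not dulwich's exact implementation):

```python
def convert_lf_to_crlf(data):
    """Single-pass LF -> CRLF: split on LF, strip any trailing CR from
    each piece (avoiding CRCRLF), and rejoin with CRLF."""
    return b"\r\n".join(chunk.rstrip(b"\r") for chunk in data.split(b"\n"))

def convert_crlf_to_lf(data):
    return data.replace(b"\r\n", b"\n")

def is_safe_crlf_to_lf(original):
    """A CRLF->LF conversion is safe when converting back would
    recover the original bytes."""
    return convert_lf_to_crlf(convert_crlf_to_lf(original)) == original
```

A file with mixed line endings fails the check, which is exactly the case core.safecrlf is meant to warn about.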
# Git attributes handling is done by the filter infrastructure
# Checking filter should never be `convert_lf_to_crlf`
# Backwards compatibility wrappers
# Convert core_autocrlf to bytes for compatibility
# Set up a filter registry with line ending filters
# Create line ending filter if needed
# Always register a text filter that can be used by gitattributes
# Even if autocrlf is false, gitattributes text=true should work
# Convert dict gitattributes to GitAttributes object for parent class
# Initialize parent class with gitattributes
# The filter infrastructure will handle gitattributes processing
# Store original filters for backward compatibility
# First try to get filter from gitattributes (handled by parent)
# Check if gitattributes explicitly disabled text conversion
# Explicitly marked as binary, no conversion
# If no filter was applied via gitattributes and we have a fallback filter
# (autocrlf is enabled), apply it to all files
# Apply the clean filter with binary detection
# Get safecrlf from config
# Apply the smudge filter with binary detection
# Read the original blob
# If we need to detect if a file is binary and the file is detected as
# binary, do not apply the conversion function and return the original
# chunked text
# Now apply the conversion
# Existing files should only be normalized on checkin if:
# 1. They were previously normalized on checkout (autocrlf=true), OR
# 2. We have a write filter (autocrlf=true or autocrlf=input), OR
# 3. They are new files
# client.py -- Implementation of the client side git protocols
# Default ref prefix, used if none is specified.
# GitHub defaults to just sending HEAD if no ref-prefix is
# specified, so explicitly request all refs to match
# behaviour with v1 when no ref-prefix is specified.
# malformed response, move on to the next one
# Receive refs from server
# To be overridden by subclasses
# Avoid infinite recursion by checking against class variable directly
# Direct attribute access to avoid recursion
# Git-protocol v2
# Just ignore progress data
# TODO(durin42): this doesn't correctly degrade if the server doesn't
# support some capabilities. This should work properly with servers
# that don't support multi_ack.
# will be overridden later
# TODO(jelmer): abstract method for get_location?
# The packfile MUST NOT be sent if the only command used is delete.
# TODO(jelmer): warn about unknown capabilities
# Convert new_refs to match SendPackResult expected type
# Server does not support deletions. Fail later.
# NOOP - Original new refs filtered out by policy
# Convert to Optional type for SendPackResult
# refs may have None values in v2 but not in v1
# delim-pkt
# v1 refs never have None values, but we need Optional type for compatibility
# Filter out None values (shouldn't be any in v1 protocol)
# Handle both old and new style determine_wants
# Old-style determine_wants that doesn't accept depth
# stock `git ls-remote` uses upload-pack
# IPv6 addresses contain colons and need to be wrapped in brackets
# -1 means system default buffering
# 0 means unbuffered
# Git protocol version advertisement is hidden behind two NUL bytes
# for compatibility with older Git server implementations, which
# would crash if something other than a "host=" header was found
# after the first NUL byte.
# TODO(jelmer): Alternative to ascii?
# support .exe, .bat and .cmd
# to avoid overhead
# run through cmd.exe with some overhead
# Ignore the thin_packs argument
# Did the process short-circuit (e.g. in a stateless RPC call)?
# Note that the client still expects a 0-object pack in most cases.
# Convert refs to Optional type for FetchPackResult
# Extract symrefs from the local repository
# Check if this ref is symbolic by reading it directly
# Extract the target from the symref
# Not a symbolic ref or error reading it
# Read bundle metadata without PackData to avoid file handle issues
# Don't read PackData here, we'll do it later
# Will be read on demand
# Skip capabilities (v3 only)
# Skip prerequisites
# Skip references
# Now at pack data
# Get references from bundle
# Determine what we want to fetch
# Add pack data to target repository
# Need to reopen the file for pack data access
# Skip to pack data section
# Read pack data into memory to avoid file positioning issues
# Create PackData from in-memory bytes
# Apply ref filtering if specified
# Write pack data to the callback
# Read pack data and write it to the callback
# Bundle refs are always concrete (never None), but LsRemoteResult expects Optional
# What Git client to use for local access
# plink.exe does not provide a way to pass environment variables
# via the command line. The best we can do is set an environment
# variable and hope that plink will pass it to the server. If this
# does not work then the server should behave as if we had requested
# protocol version 0.
# Can be overridden by users
# Priority: ssh_command parameter, then env vars, then core.sshCommand config
# Check environment variables first
# Fall back to config if no environment variable set
# GIT_SSH_COMMAND takes precedence over GIT_SSH
# Start user agent with "git/", because GitHub requires this. :-( See
# https://github.com/jelmer/dulwich/issues/562 for details.
# TODO(jelmer): Support per-host settings
# Check for timeout configuration
# Check for extra headers in config
# Git allows multiple http.extraHeader entries
# Parse the header (format: "Header-Name: value")
# Add timeout if specified
# Handle cert_reqs - allow override from parameter
# Default to SSL verification
# Check if a proxy bypass is defined with the no_proxy environment variable
# only check if base_url is provided
# implementation based on curl behavior: https://curl.se/libcurl/c/CURLOPT_NOPROXY.html
# get hostname of provided parsed url
# check if hostname is an IP address
# ignore leading dots
# check if no_proxy_value is an IP network
# if hostname is an IP address and no_proxy_value is an IP network, check whether the address is part of the network
# '*' is special case for always bypass proxy
# add a dot to only match complete domains
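The curl-style no_proxy matching described in the comments above ('*' wildcard, IP networks, leading dots ignored, complete-domain matches only) can be sketched as a small standalone helper. `bypass_proxy` is a hypothetical name for illustration, not dulwich's actual API:

```python
import ipaddress
from urllib.parse import urlparse


def bypass_proxy(no_proxy_value: str, base_url: str) -> bool:
    """Return True if base_url should bypass the proxy (curl-style rules)."""
    hostname = urlparse(base_url).hostname
    if hostname is None:
        return False
    try:
        ip = ipaddress.ip_address(hostname)
    except ValueError:
        ip = None  # hostname is a domain name, not an IP address
    for entry in (e.strip() for e in no_proxy_value.split(",")):
        if not entry:
            continue
        if entry == "*":  # '*' is the special case for "always bypass"
            return True
        if ip is not None:
            try:
                # IP address vs. IP network: check membership
                if ip in ipaddress.ip_network(entry, strict=False):
                    return True
            except ValueError:
                pass
        # ignore leading dots, then only match complete domains
        entry = entry.lstrip(".")
        if hostname == entry or hostname.endswith("." + entry):
            return True
    return False
```

Note the explicit `"." + entry` suffix check: `badexample.com` must not match a `no_proxy` entry of `example.com`.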
# Track original URL with credentials (set by from_parsedurl when credentials come from URL)
# Enable protocol v2 only when fetching, not when pushing.
# Git does not yet implement push over protocol v2, and as of
# git version 2.37.3 git-http-backend's behaviour is erratic if
# we try: It responds with a Git-protocol-v1-style ref listing
# which lacks the "001f# service=git-receive-pack" marker.
# Something changed (redirect!), so let's update the base URL
# Github sends "version 2" after sending the service name.
# Try to negotiate protocol version 2 again.
# Convert v1 refs to Optional type
# dumb servers only support protocol v0
# Read all the response data
# Assert that old_refs has no None values
# Determine wants function is aborting the push.
# Filter out None values from refs for determine_wants
# Use dumb HTTP protocol
# Pass http_request function
# Fetch pack data from dumb remote
# Write pack data
# Wrap pack_data to match expected signature
# Write pack data directly using the unpacked objects
# Include credentials in the URL only if they came from a URL (not passed explicitly)
# This preserves credentials that were in the original URL for git config storage
# Construct netloc with credentials
# Reconstruct URL with credentials
# Extract credentials from URL if present
# ParseResult.username and .password are URL-encoded, need to unquote them
# Explicit parameters take precedence over URL credentials
# Remove credentials from URL for base_url
# Pass credentials to constructor if it's a subclass that supports them
# Base class now supports credentials in constructor
# Mark that credentials came from URL (not passed explicitly) if URL had credentials
# No escaping needed: ":" is not allowed in username:
# https://tools.ietf.org/html/rfc2617#section-2
# urllib3.util.url._encode_invalid_chars() converts the path back
# to bytes using the utf-8 codec.
# Check if geturl() is available (urllib3 version >= 1.23)
# get_redirect_location() is available for urllib3 >= 1.1
# file://C:/foo.bar/baz or file://C://foo.bar//baz
# SSH with no user@, zero or one leading slash.
# SSH with user@host:foo.
# First, try to parse it as a URL
# Windows local path - but check if it's a bundle file first
# Check if it's a bundle file before assuming it's a local path
# Otherwise, assume it's a local path.
# If the file doesn't exist, try the next one.
# If one side is unchanged, we can take the other side
# For now, treat any difference as a conflict
# A more sophisticated algorithm would check for non-overlapping changes
# type: ignore[no-untyped-call,unused-ignore]
# Check if this is a real conflict or just different changes
# Try to merge line by line
# Real conflict - add conflict markers
# This shouldn't happen if _can_merge_lines returned True
# Check for merge driver
# Use merge driver if found
# Get content from blobs
# Use merge driver
# Convert success (no conflicts) to had_conflicts (conflicts occurred)
# Fall back to default merge behavior
# Handle deletion cases
# No common ancestor
# Both added different content - conflict
# Get content for each version
# Check if either side deleted
# We deleted, check if they modified
# They didn't modify, accept deletion
# Conflict: we deleted, they modified
# They deleted, check if we modified
# We didn't modify, accept deletion
# Conflict: they deleted, we modified
# Both sides exist, check if merge is needed
# Check for conflicts and generate merged content
# Get all paths from all trees
# Process each path
# Extract mode and sha
# Handle deletions
# Deleted in both
# Handle additions
# Same addition in both
# Added only in theirs
# Added only in ours
# Different additions - conflict
# For now, keep ours
# Check for mode conflicts
# Handle modifications
# Same modification or no change
# Only theirs modified
# Only ours modified
# We deleted
# They modified, we deleted - conflict
# They deleted
# We modified, they deleted - conflict
# Both modified differently
# For trees and submodules, this is a conflict
# Tree conflict
# Try to merge blobs
# Store merged blob
# Build merged tree
# Add the tree to the object store
# Create a virtual commit
# Add the commit to the object store
# No common ancestor - use None as base
# Single merge base - simple three-way merge
# Multiple merge bases - need to create a virtual merge base
# Start by merging the first two bases
# Recursively merge each additional base
# Find merge base of these two bases
# Import here to avoid circular dependency
# We need access to the repo for find_merge_base
# For now, we'll perform a simple three-way merge without recursion
# between the two virtual commits
# A proper implementation would require passing the repo object
# Perform three-way merge of the two bases (using None as their base)
# No common ancestor for virtual merge bases
# Create a virtual commit with this merged tree
# Now use the virtual merge base for the final merge
# Start with the head commit's tree as our current state
# Merge each commit sequentially
# Find the merge base between current state and the commit we're merging
# For octopus merges, we use the octopus base for all commits
# Octopus merge refuses to proceed if there are conflicts
# Create a temporary commit object with the merged tree for the next iteration
# This allows us to continue merging additional commits
# For intermediate merges, we use the same parent as current
# Set minimal required commit fields
# patch.py -- For dealing with packed-style patches.
# Copyright (C) 2009-2013 Jelmer Vernooij <jelmer@jelmer.uk>
# diffstat not available?
# type: ignore[no-any-return,unused-ignore]
# Fallback for non-blob objects
# TODO(jelmer): Support writing unicode, rather than bytes.
# Handle other types by converting to string first
# Normalize the diff for patch-id computation
# Skip diff headers (diff --git, index, ---, +++)
# Normalize @@ headers to a canonical form
# Replace line numbers with canonical form
# Use canonical hunk header without line numbers
# For +/- lines, strip all whitespace
# Keep the +/- prefix but remove all whitespace from the rest
# Remove all whitespace from the content
# Just +/- alone
# Keep context lines and other content as-is
# Join normalized lines and compute SHA1
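The patch-id normalization outlined above (skip per-file headers, canonicalize `@@` hunk headers, strip whitespace from `+`/`-` lines, SHA-1 the result) can be sketched as follows. This is a simplified illustration; the exact rules in dulwich or git may differ in detail:

```python
import hashlib
import re


def patch_id(diff: bytes) -> str:
    """Compute a stable id for a diff, insensitive to line numbers
    and whitespace changes (sketch of the normalization above)."""
    out = []
    for line in diff.split(b"\n"):
        # Skip diff headers (diff --git, index, ---, +++)
        if line.startswith((b"diff --git", b"index ", b"--- ", b"+++ ")):
            continue
        if line.startswith(b"@@"):
            out.append(b"@@")  # canonical hunk header without line numbers
        elif line.startswith((b"+", b"-")):
            # Keep the +/- prefix but remove all whitespace from the rest
            out.append(line[:1] + re.sub(rb"\s+", b"", line[1:]))
        else:
            out.append(line)  # context lines kept as-is
    return hashlib.sha1(b"\n".join(out)).hexdigest()
```

Two diffs that differ only in hunk offsets or internal whitespace then hash to the same id.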
# Get the parent tree (or empty tree for root commit)
# Root commit - compare against empty tree
# Generate diff
# pack.py -- For dealing with packed git objects.
# For some reason the above try/except fails to set has_mmap = False on Plan 9
# Keep pack files under 16Mb in memory, otherwise write them out to disk
# Default pack index version to use when none is specified
# Cached binary SHA.
# Compressed object chunks.
# CRC32.
# Decompressed object chunks.
# Decompressed length of this object.
# Delta base offset or SHA.
# Decompressed and delta-resolved chunks.
# Type of this object.
# Offset in its pack.
# Type of this object in the pack (may be a delta).
# TODO(dborowitz): read_zlib_chunks and unpack_object could very well be
# methods of this object.
# Only provided for backwards compatibility with code that expects either
# chunks or a delta tuple.
# 64KB buffer for better I/O performance
# Attempt to use mmap if possible
# Can't mmap - perhaps a socket or invalid file descriptor
# Default to SHA-1 for backward compatibility
# Default implementation for PackIndex classes that don't override
# Take the size now, so it can be checked each time we map the file to
# ensure that it hasn't changed.
# Quick optimization:
# Not stored in v1 index files
# Read hash algorithm identifier (1 = SHA-1, 2 = SHA-256)
# SHA-1
# SHA-256
# Read length of shortened object names
# Calculate offsets based on variable hash size
# After header (4 + 4 + 4 + 4)
# trailer is a deque to avoid memory allocation on small reads
# maintain a trailer of the last 20 bytes we've read
# hash everything but the trailer
# prepend any unused data to current read buffer
# If the read buffer is full, then the last read() got the whole
# trailer off the wire. If not, it means there is still some of the
# trailer to read. We need to read() all 20 bytes; N come from the
# read buffer and (20 - N) come from the wire.
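The trailer-deque technique described above can be illustrated with a minimal reader that hashes everything except the final 20-byte checksum trailer. This is a simplified sketch, not dulwich's actual class:

```python
import hashlib
from collections import deque
from io import BytesIO


def read_with_trailer(f, trailer_len=20):
    """Hash a stream's body while holding back its trailing checksum.

    The deque always retains the most recent ``trailer_len`` bytes
    un-hashed, since they may be the trailer rather than data.
    Returns (sha1_of_body, trailer_bytes).
    """
    sha = hashlib.sha1()
    trailer = deque()  # maintain a trailer of the last bytes we've read
    while True:
        data = f.read(4096)
        if not data:
            break
        trailer.extend(data)
        excess = len(trailer) - trailer_len
        if excess > 0:
            # hash everything but the (possible) trailer
            sha.update(bytes(trailer.popleft() for _ in range(excess)))
    return sha.digest(), bytes(trailer)
```

For a well-formed pack the returned digest equals the trailer, which is exactly the integrity check the real reader performs.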
# default count of entries if read_objects() is empty
# Use delta_cache_size config if available, otherwise default
# Back up over unused data.
# Not an external ref, but may depend on one. Either it will
# get popped via a _follow_chain call, or we will raise an
# error below.
# Unlike PackData.get_object_at, there is no need to cache offsets as
# this approach by design inflates each object exactly once.
# If git option index.skipHash is set the index will be empty
# BinaryIO abstract methods
# Convert object to list of bytes chunks
# Shouldn't reach here with proper typing
# Pack header
# Pack version
# Number of objects in pack
# Build a list of objects ordered by the magic Linus heuristic
# This helps us find good objects to diff against us
# TODO(jelmer): Use threads
# TODO(jelmer): support deltaifying
# PERFORMANCE/TODO(jelmer): This should be enabled but is *much* too
# slow at the moment.
# Empty iterator if None
# Write the pack
# Fan-out table
# The length of delta compression copy operations in version 2 packs is limited
# to 64K.  To copy more, we use several copy operations.  Version 3 packs allow
# 24-bit lengths in copy operations, but we always make version 2 packs.
# write delta header
# write out delta opcodes
# Git patch opcodes don't care about deletes!
# if opcode == 'replace' or opcode == 'delete':
# If they are equal, unpacker will use data from base_buf
# Write out an opcode that says what range to use
# If we are replacing a range or adding one, then we just
# output it to the stream (prefixed by its size)
# Version 3 packs can contain copy sizes larger than 64K.
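The 64K limit on version-2 copy operations means one logical copy must be emitted as several opcodes. A sketch of the splitting step (hypothetical helper; the real encoder also packs the offset/length bytes into the opcode):

```python
def split_copy_ops(offset: int, length: int, max_len: int = 0x10000):
    """Split one logical copy into (offset, length) chunks that each
    fit a version-2 delta copy operation (at most 64K per op)."""
    ops = []
    while length > 0:
        chunk = min(length, max_len)
        ops.append((offset, chunk))
        offset += chunk
        length -= chunk
    return ops
```

A 96K copy thus becomes one full 64K operation followed by a 32K remainder.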
# Magic!
# TODO: Add SHA256Writer when SHA-256 support is implemented
# Convert entries to list to allow multiple iterations
# Calculate shortest unambiguous prefix length for object names
# For now, use full hash size (this could be optimized)
# Version 3
# Hash algorithm
# Shortened OID length
# Object names table
# CRC32 checksums table
# Offset table
# Large offset table
# TODO: object connectivity checks
# TODO: cache these results
# Walk down the delta chain, building a stack of deltas to reach
# the requested object.
# TODO: clean up asserts and replace with nicer error messages
# object is based on itself
# Now grab the base object (mustn't be a delta) and apply the
# deltas all the way up the stack.
# Convert chunks to bytes for apply_delta if needed
# For tuple type, second element is the actual data
# Apply delta and get result as list
# Update the header with the new number of objects.
# Must flush before reading (http://bugs.python.org/issue3207)
# Rescan the rest of the pack, computing the SHA with the new header.
# Must reposition before writing (http://bugs.python.org/issue3207)
# Complete the pack.
# Convert bytes to list[bytes]
# protocol.py -- Shared parts of the git protocols
# Copyright (C) 2008-2012 Jelmer Vernooij <jelmer@jelmer.uk>
# Git protocol version 0 is the original Git protocol, which lacked a
# version number until Git protocol version 1 was introduced by Brandon
# Williams in 2017.
# Protocol version 1 is simply the original v0 protocol with the addition of
# a single packet line, which precedes the ref advertisement, indicating the
# protocol version being used. This was done in preparation for protocol v2.
# Git protocol version 2 was first introduced by Brandon Williams in 2018 and
# adds many features. See the gitprotocol-v2(5) manual page for details.
# As of 2024, Git only implements version 2 during 'git fetch' and still uses
# version 0 during 'git push'.
# pack data
# progress messages
# fatal error message just before stream aborts
# Magic ref that is used to attach capabilities to when
# there are no refs. Should always be set to ZERO_SHA.
# flush-pkt or delim-pkt
# A pkt-line can be at most 65520 bytes; a sideband payload can
# therefore be at most 65520 - 5 = 65515 bytes.
# Oddly, the length is encoded as ASCII hex while the channel byte is binary.
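That mixed encoding (4 ASCII hex digits for the length, then a single binary channel byte) can be shown with a tiny encoder. `pktline_sideband` is an illustrative name, not dulwich's API:

```python
def pktline_sideband(channel: int, payload: bytes) -> bytes:
    """Encode one side-band packet: 4-byte ASCII hex length,
    one binary channel byte, then the payload."""
    if not 1 <= channel <= 3:
        # 1 = pack data, 2 = progress messages, 3 = fatal error
        raise ValueError("channel must be 1, 2 or 3")
    if len(payload) > 65515:  # 65520 pkt-line max minus length and channel
        raise ValueError("payload too large for one side-band packet")
    body = bytes([channel]) + payload
    return ("%04x" % (len(body) + 4)).encode("ascii") + body
```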
# 64KB buffer for better network I/O performance
# From _fileobj.read in socket.py in the Python 2.6.5 standard library,
# with the following modifications:
# Copyright (c) 2001-2010 Python Software Foundation; All Rights
# Reserved
# Licensed under the Python Software Foundation License.
# TODO: see if buffer is more efficient than cBytesIO.
# Our use of BytesIO rather than lists of string objects returned by
# recv() minimizes memory usage and fragmentation that occurs when
# rbufsize is large compared to the typical return value of recv().
# buffer may have been partially consumed by recv()
# Already have size bytes in our buffer?  Extract and return.
# reset _rbuf.  we consume it via buf.
# recv() will malloc the amount of memory given as its
# parameter even though it often returns much less data
# than that.  The returned data string is short lived
# as we copy it into a BytesIO and free it.  This avoids
# fragmentation issues on many platforms.
# Shortcut.  Avoid buffer data copies when:
# - We have no data in our buffer.
# AND
# - Our call to recv returned exactly the requested amount of data.
# explicit free
# assert buf_len == buf.tell()
# only read from the wire if our read buffer is exhausted
# shortcut: skip the buffer if we read exactly size bytes
# Get repo path as bytes
# Create full path to submodule
# Create submodule directory if it doesn't exist
# Create .git file pointing to the submodule's git directory
# Submodule git directories are typically stored in .git/modules/<name>
# The relative path from the submodule to the parent's .git directory
# depends on the submodule's depth
# objects.py -- Access to base git objects
# Header fields for commits
# Header fields for objects
# (2**63) - 1 - signed long int max
# Signature type constants
# os.path.join accepts bytes or unicode, but all args must be of the same
# type. Make sure that hex which is expected to be bytes, is the same type
# as path.
# path is bytes
# grab the last (up to) two path components
# filename is bytes
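The type-matching requirement for os.path.join noted above (all arguments must be the same type when building the two-level loose-object path) can be sketched like this; `loose_object_path` is a hypothetical helper:

```python
import os


def loose_object_path(object_dir, hexsha):
    """Build the loose-object path <dir>/<first 2 hex chars>/<remaining 38>.

    os.path.join requires all arguments to share one type, so coerce the
    hex SHA to match the directory's type (bytes or str).
    """
    if isinstance(object_dir, bytes) and not isinstance(hexsha, bytes):
        hexsha = hexsha.encode("ascii")
    elif isinstance(object_dir, str) and not isinstance(hexsha, str):
        hexsha = hexsha.decode("ascii")
    return os.path.join(object_dir, hexsha[:2], hexsha[2:])
```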
# Prevent overflow error
# Type guard functions for runtime type narrowing
# Runtime versions without type narrowing
# skip type and size; type must have already been determined, and
# we trust zlib to fail if it's otherwise corrupted
# TODO: if we find that error-checking during object parsing is a
# performance bottleneck, those checks should be moved to the class's
# check() method during optimization so we can still check the object
# when necessary.
# this is a local because as_raw_chunks() overwrites self._sha
# Parse the headers
# Headers can contain newlines. The next line is indented with a space.
# We store the latest key as 'k', and the accumulated value as 'v'.
# Indented continuation of the previous line
# We parsed a new header, return its value
# Empty line indicates end of headers
# We reached end of file before the headers ended. We still need to
# return the previous header, then we need to return a None field for
# the text.
# We didn't reach the end of file while parsing headers. We can return
# the rest of the file as a message.
# There must be a new line after the headers
# Try to find either PGP or SSH signature
# Determine signature type
# Stricter type checks than normal to mirror checks in the Rust version.
# TODO: list comprehension is for efficiency in the common (small)
# case; if memory efficiency in the large case is a concern, use a
# genexp.
# TODO: optionally exclude as in git fsck --strict
# Handle empty path - return the tree itself
# cgit parses the first character as the sign, and the rest as an integer
# noqa: UP031
# TODO(jelmer): Enforce ordering
# checked by _check_has_member above
# TODO: optionally check for duplicate parents
# Hold on to the pure-python implementations for testing
# bitmap.py -- Packfile bitmap support for git
# Bitmap file signature
# Bitmap format version
# Bitmap flags
# Full closure
# Name-hash cache
# Lookup table for random access
# Pseudo-merge bitmaps
# Check for runs of all zeros or all ones
# Count consecutive identical words
# Collect following literal words
# Max literal count in RLW
# Create RLW with correct bit layout:
# [literal_words(31 bits)][running_len(32 bits)][running_bit(1 bit)]
# Collect literal words
# Max literal count
# RLW with no run, just literals
# Read header
# Read all words first
# Process EWAH chunks: RLW followed by literal words
# This is an RLW
# Bit layout: [literal_words(31 bits)][running_len(32 bits)][running_bit(1 bit)]
# Process running bits
# Add all bits in the repeated section
# Process literal words
# Extract set bits from literal word
# Read RLW position (we don't use it currently, but it's part of the format)
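The run-length word (RLW) bit layout given above — [literal_words (31 bits)][running_len (32 bits)][running_bit (1 bit)], LSB first — can be captured in a pair of pack/unpack helpers (illustrative names, assuming that layout):

```python
def pack_rlw(running_bit: int, running_len: int, literal_words: int) -> int:
    """Pack an EWAH run-length word: bit 0 = running bit,
    bits 1-32 = run length, bits 33-63 = literal word count."""
    assert running_bit in (0, 1)
    assert running_len < (1 << 32) and literal_words < (1 << 31)
    return (literal_words << 33) | (running_len << 1) | running_bit


def unpack_rlw(word: int):
    """Inverse of pack_rlw: returns (running_bit, running_len, literal_words)."""
    return (word & 1, (word >> 1) & 0xFFFFFFFF, word >> 33)
```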
# Empty bitmap: bit_count=0, word_count=0, rlw_pos=0
# Create literal words
# Compress using EWAH run-length encoding
# Build EWAH data
# Write compressed words
# Write RLW position (position of last RLW in the compressed words)
# For now, we'll use 0 as we don't track this during encoding
# This could be improved in the future if needed
# Type bitmaps for commits, trees, blobs, tags
# Bitmap entries indexed by commit SHA
# List of entries in order (for XOR offset resolution)
# Optional lookup table for random access
# Optional name-hash cache
# Decompress using XOR if needed
# Find the entry at the XOR offset
# The XOR offset tells us how many entries back to look
# We need to find this entry in the ordered list
# Entry not found in list, return as-is
# XOR offset is how many positions back to look (max 160)
# Get the base bitmap (recursively if it also uses XOR)
# XOR the current bitmap with the base
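The XOR-offset scheme described above (an offset of N means "XOR against the entry N positions back", and that base may itself be XOR-compressed) can be sketched as a recursive resolver over word lists; the entry representation here is hypothetical:

```python
def resolve_xor(entries, index):
    """Resolve the XOR-compressed bitmap at entries[index].

    Each entry is (raw_words, xor_offset); offset 0 means the words
    are stored directly, otherwise the base lies xor_offset entries
    earlier in the ordered list (recursively resolved).
    """
    raw, xor_offset = entries[index]
    if xor_offset == 0:
        return raw
    base = resolve_xor(entries, index - xor_offset)
    # XOR the current bitmap with its (resolved) base
    return [a ^ b for a, b in zip(raw, base)]
```

Since the format caps the offset at 160, the recursion depth stays small.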
# Read entry count
# Read pack checksum
# Read type bitmaps (EWAH bitmaps are self-describing)
# EWAH format:
# 4 bytes: bit count
# 4 bytes: word count
# N x 8 bytes: compressed words
# 4 bytes: RLW position
# Read header to determine size
# Read compressed words
# Read RLW position
# Reconstruct the full EWAH data to pass to _decode
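The self-describing EWAH layout listed above (bit count, word count, N 64-bit words, RLW position) parses with a few struct calls. Big-endian is assumed here, matching git's on-disk bitmap format; `read_ewah` is an illustrative helper:

```python
import struct


def read_ewah(buf: bytes, offset: int = 0):
    """Parse one self-describing EWAH bitmap from buf.

    Layout: 4-byte bit count, 4-byte word count, word_count x 8-byte
    compressed words, 4-byte RLW position.
    Returns (bit_count, words, rlw_pos, bytes_consumed).
    """
    bit_count, word_count = struct.unpack_from(">II", buf, offset)
    words = struct.unpack_from(">%dQ" % word_count, buf, offset + 8)
    rlw_pos, = struct.unpack_from(">I", buf, offset + 8 + 8 * word_count)
    return bit_count, list(words), rlw_pos, 12 + 8 * word_count
```

The returned byte count lets a caller advance to the next bitmap in the file.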
# Read bitmap entries
# Read object position (4 bytes)
# Read XOR offset (1 byte)
# Read flags (1 byte)
# Read self-describing EWAH bitmap
# EWAH format: bit_count (4) + word_count (4) + words (word_count * 8) + rlw_pos (4)
# Reconstruct full EWAH data
# Create bitmap entry
# Resolve object position to SHA if we have a pack index
# Get the SHA at the given position in the sorted index
# Without pack index, use position as temporary key
# Read optional lookup table
# Lookup table contains triplets: (commit_pos, offset, xor_row)
# Number of entries matches the bitmap entry count
# Read commit position (4 bytes)
# Read file offset (8 bytes)
# Read XOR row (4 bytes)
# Read optional name-hash cache
# Name-hash cache contains one 32-bit hash per object in the pack
# The number of hashes depends on the total number of objects
# For now, we'll read what's available
# Write entry count
# Write pack checksum
# Write type bitmaps (self-describing EWAH format, no size prefix needed)
# Write bitmap entries
# Write object position (4 bytes)
# Write XOR offset (1 byte)
# Write flags (1 byte)
# Write compressed bitmap data (self-describing EWAH format, no size prefix)
# Write optional lookup table
# 4 bytes
# 8 bytes
# Write optional name-hash cache
# mailmap.py -- Mailmap reader
# Copyright (C) 2018 Jelmer Vernooij <jelmer@jelmer.uk>
# TODO(jelmer): Integrate this with dulwich.fastexport.split_email and
# dulwich.repo.check_user_identity
# Remove comments
# mbox.py -- For dealing with mbox files
# Convert output_dir to Path for easier manipulation
# Open the mbox file
# For file-like objects, we need to read and parse manually
# Format the output filename with the specified precision
# Write the message to the output file
# Handle mboxrd format - reverse the escaping
# Handle CR/LF if needed
# Strip trailing newlines (mailbox module adds separator newlines)
# Convert paths to Path objects
# Open the Maildir
# Get all messages and sort by their keys to ensure consistent ordering
# Create a temporary file to hold the mbox data
# Check if line matches the pattern ^>+From (one or more > followed by From)
# Remove one leading ">"
# rebase.py -- Git rebase implementation
# Support abbreviations
# Try full command name
# Store as hex string encoded as bytes
# Use short SHA (first 7 chars) like Git does
# Unknown command, skip
# These commands take arguments instead of commit SHA
# Break has no arguments
# Commands that operate on commits
# Store SHA as hex string encoded as bytes
# Parse commit message if present
# Add entries from current position onward
# Extract first line of commit message
# Already bytes
# original_head
# rebasing_branch
# onto
# todo
# done
# Ensure the directory exists
# Store the original HEAD ref (e.g. "refs/heads/feature")
# Store the branch name being rebased
# Store the commit we're rebasing onto
# Track progress
# Store the current commit being rebased (same as C Git)
# Store progress counters
# Current commit number (1-based)
# Total number of commits
# Load rebase state files
# Directory doesn't exist, that's ok
# Copy the lists
# Return copies
# Initialize state
# Load any existing rebase state
# Get the branch commit
# Use current HEAD
# Parse the branch reference
# Get upstream commit
# If already up to date, return empty list
# Get commits between merge base and branch head
# Return in chronological order (oldest first)
# Get the parent of the commit being cherry-picked
# Store merge state for conflict resolution
# Create new commit
# Save original HEAD
# Save which branch we're rebasing (for later update)
# Parse the branch ref
# Assume it's a branch name
# Use current branch
# Determine onto commit
# Parse the onto commit
# Get commits to rebase
# Store rebase state
# Get next commit to rebase
# Determine what to rebase onto
# Cherry-pick the commit
# Success - add to done list
# Continue with next commit if any
# Conflicts - save state and return
# Restore original HEAD
# Clean up rebase state
# Reset instance state
# No commits were rebased
# Update HEAD to point to last rebased commit
# Update the branch we're rebasing
# If HEAD was pointing to this branch, it will follow automatically
# If we don't know which branch, check current HEAD
# Reset instance state but keep _done for caller
# Start rebase
# Continue rebase
# Generate todo list
# Save initial todo to disk
# Let user edit todo if callback provided
# Parse edited todo
# Check if user removed all entries (abort)
# User removed everything, abort
# Save edited todo
# Load current todo
# Edit todo
# Load todo if not provided
# Process each todo entry
# Handle each command type
# Regular cherry-pick
# Cherry-pick then edit message
# Get the last commit and allow editing its message
# Create new commit with edited message
# Replace last commit in done list
# Cherry-pick then pause
# Pause for user to amend
# Combine with previous commit, keeping both messages
# Combine with previous commit, discarding this message
# Skip this commit
# Execute shell command
# Command failed, pause rebase
# Pause rebase
# Unsupported command
# Move to next entry
# Save progress
# Get the commit to squash
# Get the previous commit (target of squash)
# Cherry-pick the changes onto the previous commit
# Perform three-way merge for the tree
# Combine messages if squashing (not fixup)
# Create new combined commit
# Replace the previous commit with the combined one
# Remove the squashed commit from todo
# commit_graph.py -- Git commit graph file format support
# File format constants
# Chunk IDs
# Generation number constants
# Parent encoding constants
# Read table of contents
# +1 for terminating entry
# Read chunks
# Offsets in TOC are absolute from start of file
# Parse chunks
# Parse OID lookup chunk
# Parse commit data chunk
# Tree OID
# Parent positions (2 x 4 bytes)
# Generation number and commit time (2 x 4 bytes)
# Upper 30 bits
# 34 bits total
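The split noted above — a 30-bit generation number in the upper bits of the first word, with the top 2 bits of a 34-bit commit time in its low bits and the remaining 32 bits in the second word — can be made concrete with pack/unpack helpers (a sketch based on the gitformat-commit-graph layout; names are illustrative):

```python
def pack_gen_and_time(generation: int, commit_time: int):
    """Pack a 30-bit generation number and 34-bit commit time into
    the two 32-bit words of the commit-data chunk."""
    assert generation < (1 << 30) and commit_time < (1 << 34)
    word1 = (generation << 2) | (commit_time >> 32)  # upper 30 bits: generation
    word2 = commit_time & 0xFFFFFFFF                 # low 32 bits of the time
    return word1, word2


def unpack_gen_and_time(word1: int, word2: int):
    """Inverse of pack_gen_and_time: returns (generation, commit_time)."""
    return word1 >> 2, ((word1 & 3) << 32) | word2
```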
# Parse parents
# Handle extra edges (3+ parents)
# Convert hex ObjectID to binary if needed for lookup
# Input is hex ObjectID, convert to binary for internal lookup
# Input is already binary
# Sort entries by commit ID for consistent output
# Build OID lookup chunk
# Build commit data chunk
# Create OID to index mapping for parent lookups
# Tree OID (20 bytes)
# More than 2 parents - would need extra edge list chunk
# For now, just store first two parents
# Generation and commit time (2 x 4 bytes)
# Build fanout table
# Fill in gaps - each fanout entry should be cumulative
# Calculate chunk offsets
# signature + version + hash_version + num_chunks + base_graph_count
# 4 entries (3 chunks + terminator) * 12 bytes each
# OID Fanout
# OID Lookup
# Commit Data
# 3 chunks
# 0 base graphs
# Write table of contents
# Write chunks
# Standard location: .git/objects/info/commit-graph
# Chain files in .git/objects/info/commit-graphs/
# Look for graph-{hash}.graph files
# Ensure all commit_ids are in the correct format for object store access
# DiskObjectStore expects hex ObjectIDs (40-byte hex strings)
# Already hex ObjectID
# Binary SHA, convert to hex ObjectID
# Assume it's already correct format
# Build a map of all commits and their metadata
# Commit not found, skip
# Calculate generation numbers using topological sort
# Unknown commit, assume generation 0
# Root commit
# Calculate based on parents
# Calculate generation numbers for all commits
# Build commit graph entries
# commit_id is already hex ObjectID from normalized_commit_ids
# Handle tree ID - might already be hex ObjectID
# Binary, convert to hex
# Handle parent IDs - might already be hex ObjectIDs
# Build the OID to index mapping for lookups
# Generate the commit graph
# Nothing to write
# Ensure the objects/info directory exists
# Write using GitFile for atomic operation
# Normalize commit IDs for object store access and tracking
# Hex ObjectID - use directly for object store access
# Binary SHA, convert to hex ObjectID for object store access
# Add to reachable list (commit_id is already hex ObjectID)
# Add parents to stack
# merge_drivers.py -- Merge driver support for dulwich
# Write temporary files
# Prepare command with placeholders
# Execute merge command
# Read merged content from ours file
# Exit code 0 means clean merge, non-zero means conflicts
# If the command fails completely, return original with conflicts
# Register built-in drivers
# The "text" driver is the default three-way merge
# We don't register it here as it's handled by the default merge code
# First check registered drivers
# Then check factories
# Finally check configuration
# Look for merge.<name>.driver configuration
# Global registry instance
# Update config if provided
# whitespace.py -- Whitespace error detection and fixing
# Default whitespace errors Git checks for
# All available whitespace error types
# Trailing whitespace at end of line
# Space before tab in indentation
# Indent with space when tabs expected (8+ spaces)
# Tab in indentation when spaces expected
# Blank lines at end of file
# Trailing whitespace (same as blank-at-eol)
# Carriage return at end of line
# Special: sets tab width (not an error type)
# Start with defaults if no explicit errors are specified or if negation is used
# Handle aliases
# Check for trailing whitespace (blank-at-eol)
# Find where trailing whitespace starts
# Check for space before tab
# Check in indentation
# Check for indent-with-non-tab (8+ spaces at start)
# Reset on tab
# Non-whitespace character
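The indent-with-non-tab check described above (8+ consecutive spaces in the indentation, counter reset on tab, scan stopping at the first non-whitespace byte) can be sketched as a scan over the line; this follows the comments here rather than claiming git's exact semantics:

```python
def has_indent_with_non_tab(line: bytes, tab_width: int = 8) -> bool:
    """Detect tab_width or more consecutive spaces in a line's indentation."""
    spaces = 0
    for ch in line:
        if ch == ord(" "):
            spaces += 1
            if spaces >= tab_width:
                return True
        elif ch == ord("\t"):
            spaces = 0  # reset on tab
        else:
            break  # non-whitespace character ends the indentation
    return False
```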
# Check for tab-in-indent
# Check for carriage return
# Handle CRLF line endings
# Check each line
# Check for blank lines at end of file
# Skip the last empty line if content ends with newline
# Report the line number of the last non-empty line + 1
# Handle CRLF line endings - we need to track which lines had them
# Group errors by line
# Fix errors
# Fix trailing whitespace
# Remove trailing spaces and tabs
# Fix carriage return - since we already stripped CRs, we just don't restore them
# Restore CRLF for lines that should keep them
# Fix blank lines at end of file
# Remove trailing empty lines
# stash.py
# Copyright (C) 2018 Jelmer Vernooij <jelmer@samba.org>
# Get the stash entry before removing it
# Get the stash commit
# The stash commit has the working tree changes
# Its first parent is the commit the stash was based on
# Its second parent is the index commit
# Get current HEAD to determine if we can apply cleanly
# Check if we're at the same commit where the stash was created
# If not, we need to do a three-way merge
# For now, we'll apply changes directly but this could cause conflicts
# A full implementation would do a three-way merge
# Apply the stash changes to the working tree and index
# Get config for working directory update
# Apply working tree changes
# First, if we have index changes (second parent), restore the index state
# Update index entries from the stashed index tree
# Add to index with stage 0 (normal)
# Get file stats for the entry
# File doesn't exist yet, use dummy stats
# Apply working tree changes from the stash
# Create parent directories if needed
# Write the file
# Submodule - just create directory
# Apply blob normalization for checkout if normalizer is provided
# Update index if the file wasn't already staged
# Update with file stats from disk
# Remove the stash entry
# First, create the index commit.
# Create a dangling commit for the index state
# Note: We pass ref=None which is handled specially in do_commit
# to create a commit without updating any reference
# Don't update any ref
# Then, the working tree one.
# Filter out entries with None values since commit_tree expects non-None values
# TODO(jelmer): Just pass parents into do_commit()?
# Reset working tree and index to HEAD to match git's behavior
# Use update_working_tree to reset from stash tree to HEAD tree
# Get HEAD tree
# Update from stash tree to HEAD tree
# This will remove files that were in stash but not in HEAD,
# and restore files to their HEAD versions
# We need to overwrite modified files
# object_store.py -- Object store for git objects
# use permissions consistent with Git; just readable by everyone
# TODO: should packs also be non-writable on Windows? If so, that
# would require some rather significant adjustments to the test suite
# Grace period for cleaning up temporary pack files (in seconds)
# Matches git's default of 2 weeks
# 2 weeks
# Try to use commit graph first if available
# Fall back to loading the object
# stack of (sha, depth)
# Peel tags if necessary
# Try to use commit graph for parent lookup if available
# Default implementation for stores that don't support packing
# Convert ShaFile to UnpackedObject
# Note that the pack-specific implementation below is more efficient,
# as it reuses deltas
# Default implementation is a NO-OP
# Default implementation raises KeyError
# Subclasses should override to provide actual mtime
# Don't bother writing an empty pack file
# Check if there's a .keep file for this pack
# Only create a new pack if there are objects to pack
# The name of the consolidated pack might match the name of a
# pre-existing pack. Take care not to remove the newly created
# consolidated pack.
# Delete loose objects that were packed
# Delete excluded loose objects
# Maybe something else has added a pack with the object
# in the meantime?
# Commit graph support - lazy loaded
# Default to true
# Read pack configuration options
# Read core.commitGraph setting
# Read core.fsyncObjectFiles setting
# verify that idx exists first (otherwise the pack was not yet
# fully written)
# Open newly appeared pack files
# Remove disappeared pack files
# Check from object dir
# 40 - 2 for the prefix
# Directory may have been removed or is inaccessible
# First check if it's a loose object
# Check if it's in a pack file
# Use the pack file's mtime for packed objects
# TODO: Handle self.pack_dir being bytes
# Move the pack in.
# Windows might have the target pack file lingering. Attempt
# removal, silently passing if the target does not exist.
# Write the index.
# Add the pack to the store and return it.
# Already there, no need to write again
# Look for commit graph in our objects directory
# Get all commit objects from the object store
# Iterate through all objects to find commits
# Use provided refs
# No commits to include
# Get all reachable commits
# Just use the direct ref targets - ensure they're hex ObjectIDs
# Write commit graph directly to our object store path
# Ensure the info directory exists
# GitFile in write mode always returns _GitFile
# Clear cached commit graph so it gets reloaded
# Clean up tmp_pack_* files in the repository directory
# Check if file is old enough (more than grace period)
# Clean up orphaned .pack files without corresponding .idx files
# Remove .pack extension
# Remove .idx extension
# Remove .pack files without corresponding .idx files
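The cleanup described above hinges on an mtime-based grace period so an in-progress pack write isn't deleted out from under another process. A minimal sketch of that check (the helper name `is_stale` is illustrative, not dulwich's API):

```python
import os
import time

DEFAULT_GRACE_PERIOD = 14 * 24 * 60 * 60  # 2 weeks, matching git's default

def is_stale(path, grace_period=DEFAULT_GRACE_PERIOD):
    """Return True if path's mtime is older than grace_period seconds."""
    try:
        mtime = os.path.getmtime(path)
    except FileNotFoundError:
        return False  # another cleaner may have removed it already
    return time.time() - mtime > grace_period
```

A freshly written tmp_pack_* file fails this test and is left alone; only files past the grace period are candidates for removal.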
# Since MemoryObjectStore doesn't support pack files, we need to
# extract individual objects. To handle deltas properly, we write
# to a temporary pack and then use PackInflater to resolve them.
# process Commits and Tags differently
# Note, while haves may list commits/tags not available locally,
# and such SHAs would get filtered out by _split_commits_and_tags,
# wants must list only known SHAs; otherwise
# _split_commits_and_tags fails with KeyError
# all_ancestors is a set of commits that shall not be sent
# (complete repository up to 'haves')
# all_missing - complete set of commits between haves and wants
# common - commits from all_ancestors we hit into while
# traversing parent hierarchy of wants
# Now, fill sha_done with commits and revisions of
# files and directories known to exist both locally
# and on target, so that these commits and files
# won't get selected for fetch
# record tags we have as visited, too
# in fact, what we 'want' is commits, tags, and others
# we've found missing
# stop if we run out of heads to remove
# collect all ancestors
# no more ancestors; stop
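The walk sketched in the comments above — repeatedly removing heads and collecting their ancestors until none remain — is a plain breadth-first traversal. A self-contained sketch, where `get_parents` is a hypothetical callback mapping a commit id to its parent ids:

```python
from collections import deque

def collect_ancestors(heads, get_parents):
    """Breadth-first walk collecting every commit reachable from heads."""
    ancestors = set()
    queue = deque(heads)
    while queue:  # stop when we run out of heads to process
        sha = queue.popleft()
        if sha in ancestors:
            continue  # already visited via another path
        ancestors.add(sha)
        queue.extend(get_parents(sha))  # no parents -> nothing more to enqueue
    return ancestors
```

In the haves/wants computation, the same primitive is run from both sets; commits reachable from the haves are subtracted from those reachable from the wants.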
# TODO(jelmer): Save up the objects and add them using .add_objects
# rather than with individual calls to .add_object.
# Handle both Tree object and SHA
# For new directories, pass an empty Tree object
# Create a copy of todo for each base to avoid modifying
# the set while iterating through it
# Check for any remaining objects not found
# Doesn't exist..
# Try to use commit graph if available
# Try to use commit graph for parent lookup
# Iterate all contained files if path points to a dir, otherwise just get that
# single file
# index.py -- File parser/writer for the git index file
# Type alias for recursive tree structure used in commit_tree
# 2-bit stage (during merge)
# assume-valid
# extended flag (must be zero in version 2)
# used by sparse checkout
# used by "git add -N"
# Index extension signatures
# Sparse directory extension
# Take lower 7 bits
# Set continuation bit
# No continuation bit
# Find the common prefix length
# The number of bytes to remove from the end of previous_path
# to get the common prefix
# The suffix to append
# Encode: varint(remove_len) + suffix + NUL
# Decode the number of bytes to remove from previous path
# Find the NUL terminator for the suffix
# Skip the NUL terminator
# Reconstruct the path
# Decode the varint for remove_len by reading byte by byte
# Read the suffix until NUL terminator
# NUL terminator
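The index v4 path compression the comments describe stores, for each entry, a varint count of bytes to strip from the previous path plus the new suffix and a NUL. A little-endian base-128 sketch matching these comments — git's exact on-disk varint encoding should be checked against the index format spec before relying on this:

```python
def encode_varint(value):
    """Encode a non-negative int, 7 bits per byte, continuation bit on
    every byte except the last."""
    out = bytearray()
    while True:
        byte = value & 0x7F       # take lower 7 bits
        value >>= 7
        if value:
            out.append(byte | 0x80)  # set continuation bit
        else:
            out.append(byte)         # no continuation bit
            return bytes(out)

def compress_path(previous_path, path):
    """Encode path relative to previous_path: varint(remove_len) + suffix + NUL."""
    common = 0
    for a, b in zip(previous_path, path):
        if a != b:
            break
        common += 1  # length of the shared prefix
    remove_len = len(previous_path) - common  # bytes to drop from the end
    suffix = path[common:]
    return encode_varint(remove_len) + suffix + b"\x00"
```

Decoding reverses the steps: read the varint byte by byte, truncate the previous path by that much, then append everything up to the NUL terminator.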
# Unknown extension - just store raw data
# TODO: Implement tree cache parsing
# TODO: Implement tree cache serialization
# TODO: Implement resolve undo parsing
# TODO: Implement resolve undo serialization
# Clear out any existing stage bits, then set them from the Stage.
# Turn on the skip-worktree bit
# Also ensure the main 'extended' bit is set in flags
# Turn off the skip-worktree bit
# Optionally unset the main extended bit if no extended flags remain
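The skip-worktree toggling above is pure bit manipulation on two flag words. A sketch under the assumption that the extended bit in `flags` and the skip-worktree bit in `extended_flags` are both 0x4000 (as in the index format); the constant names are illustrative:

```python
FLAG_EXTENDED = 0x4000           # main 'extended' bit in flags (assumed value)
EXT_FLAG_SKIP_WORKTREE = 0x4000  # skip-worktree bit in extended flags (assumed)

def set_skip_worktree(flags, extended_flags, on):
    """Set or clear skip-worktree, keeping the extended bit consistent."""
    if on:
        extended_flags |= EXT_FLAG_SKIP_WORKTREE  # turn the bit on
        flags |= FLAG_EXTENDED                    # extended bit must be set too
    else:
        extended_flags &= ~EXT_FLAG_SKIP_WORKTREE  # turn the bit off
        if not extended_flags:
            flags &= ~FLAG_EXTENDED  # clear extended bit if nothing remains
    return flags, extended_flags
```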
# Version 4: paths are always compressed (name_len should be 0)
# Versions < 4: regular name reading
# Padding:
# Version 4: use compression but set name_len to actual filename length
# This matches how C Git implements index v4 flags
# Versions < 4: include actual name length
# Version 4: always write compressed path
# Versions < 4: write regular path and padding
# Read extensions
# Check if we're at the end (20 bytes before EOF for SHA checksum)
# Try to read extension signature
# Check if it's a valid extension signature (4 uppercase letters)
# Not an extension, seek back
# Read extension size
# Read extension data
# STEP 1: check if any extended_flags are set
# Force or bump the version to 3 if so
# The rest is unchanged, apart from a final check:
# Double-check no extended flags appear
# Proceed with the existing code to write the header and entries.
# Write extensions
# TODO(jelmer): Store the version returned by read_index
# Filter out extensions with no meaningful data
# Skip extensions that have empty data
# When skipHash is enabled, write the index without computing SHA1
# Write 20 zero bytes instead of SHA1
# Extensions have already been read by read_index_dict_with_version
# Handle ConflictedIndexEntry case
# Find all sparse directory entries
# Expand each sparse directory
# Remove the sparse directory entry
# Get the tree object
# Recursively add all entries from the tree
# Remove the sparse directory extension
# Recursively expand subdirectories
# Create an index entry for this file
# Use the template entry for metadata but with the file's sha and mode
# Size is unknown from tree
# Don't copy skip-worktree flag
# Get the base tree
# For each sparse directory, find its tree SHA and create sparse entry
# Find the tree SHA for this directory
# Directory doesn't exist in tree, skip it
# Remove all entries under this directory
# Create a sparse directory entry
# Use minimal metadata since it's not a real file
# Add sparse directory extension if not present
# Look for this part in the current tree
# Path component is a file, not a directory
# Load the next tree
# TODO(jelmer): Support a include_trees option
# Was removed
# Mention added files
# On Windows, creating symlinks either requires administrator privileges
# or developer mode. Raise a more helpful error when we're unable to
# create symlinks
# https://github.com/jelmer/dulwich/issues/1005
# os.readlink on Python3 on Windows requires a unicode string.
# Write out file
# Decode to Unicode (let UnicodeDecodeError bubble up)
# Remove HFS+ ignorable characters
# Normalize to NFD
# HFS+ ignorable Unicode codepoints (from Git's utf8.c)
# ZERO WIDTH NON-JOINER
# ZERO WIDTH JOINER
# LEFT-TO-RIGHT MARK
# RIGHT-TO-LEFT MARK
# LEFT-TO-RIGHT EMBEDDING
# RIGHT-TO-LEFT EMBEDDING
# POP DIRECTIONAL FORMATTING
# LEFT-TO-RIGHT OVERRIDE
# RIGHT-TO-LEFT OVERRIDE
# INHIBIT SYMMETRIC SWAPPING
# ACTIVATE SYMMETRIC SWAPPING
# INHIBIT ARABIC FORM SHAPING
# ACTIVATE ARABIC FORM SHAPING
# NATIONAL DIGIT SHAPES
# NOMINAL DIGIT SHAPES
# ZERO WIDTH NO-BREAK SPACE
# Malformed UTF-8 - be conservative and reject
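The HFS+ checks above exist because macOS both ignores certain codepoints and NFD-normalizes filenames, so a path like ".g\u200cit" collides with ".git" on disk. A minimal sketch of that comparison form (only a subset of the ignorable set is listed here; the full list mirrors Git's utf8.c):

```python
import unicodedata

# HFS+ ignorable Unicode codepoints (subset for illustration)
HFS_IGNORABLE = {
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u200e",  # LEFT-TO-RIGHT MARK
    "\u200f",  # RIGHT-TO-LEFT MARK
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE
}

def hfs_normalize(path):
    """Approximate HFS+ comparison form: drop ignorable chars, normalize to NFD."""
    stripped = "".join(c for c in path if c not in HFS_IGNORABLE)
    return unicodedata.normalize("NFD", stripped)
```

Checking `hfs_normalize(name)` against invalid names catches disguised ".git" entries that a byte-wise comparison would miss.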
# Check against invalid names
# Also check for 8.3 short name
# TODO(jelmer): Merge new index into working tree
# TODO(jelmer): record and return submodule paths
# Add file to index
# we cannot use tuple slicing to build a new tuple,
# because on Windows that would convert the times to
# longs, which causes errors further along
# default to a stage 0 index entry (normal)
# when reading from the filesystem
# Repo currently expects a "str", so decode if necessary.
# TODO(jelmer): Perhaps move this into Repo() ?
# This is actually a directory
# Submodule
# The file was changed to a directory, so consider it removed.
# Walk up the directory tree to find the first existing parent
# Reached the root or can't go up further
# Check if the existing parent (if any) is a directory
# Now check each parent we need to create isn't blocked by an existing file
# On Windows, remove read-only attribute and retry
# Directory doesn't exist - stop trying
# Directory not empty - stop trying
# Symlink doesn't exist
# Not a symlink
# Check mode first (if honor_filemode is True)
# For regular files, only check the user executable bit, not group/other permissions
# This matches Git's behavior where umask differences don't count as modifications
# Normalize regular file modes to ignore group/other write permissions
# Keep only user rwx and all read+execute
# For Git compatibility, regular files should be either 644 or 755
# Default for regular files
# Determine if it should be executable based on user execute bit
# User execute bit is set
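The mode comparison above collapses every regular-file mode to Git's two canonical values, keyed only on the user execute bit. A sketch of that normalization (the helper name is illustrative):

```python
import stat

def normalize_file_mode(mode):
    """Map a regular-file mode to Git's canonical 0o100644 / 0o100755.

    Group/other permission differences (e.g. from umask) are ignored,
    matching the behavior described above. Non-regular modes (symlinks,
    gitlinks) are returned unchanged and compared exactly.
    """
    if stat.S_ISREG(mode):
        if mode & stat.S_IXUSR:  # user execute bit decides
            return 0o100755
        return 0o100644
    return mode
```

Two on-disk modes then count as "unmodified" whenever their normalized forms are equal.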
# For non-regular files (symlinks, etc.), check mode exactly
# If mode matches (or we don't care), check content via size first
# Size matches, check actual content
# Already a directory, just ensure .git file exists
# Remove whatever is there and create submodule
# Check if we need to update
# File to file - check if update needed
# Symlink to symlink - check if update needed
# Just update index - current_stat should always be valid here since we're not updating
# Remove existing entry if needed
# Remove directory
# Ensure parent directory exists
# Check if it's a submodule directory
# Try to remove empty parent directories
# Build dictionaries of old and new paths with their normalized forms
# Map from old path to change object
# Map from new path to change object
# Get the appropriate normalizer based on config
# Pre-normalize all paths once to avoid repeated normalization
# Treat RENAME as DELETE + ADD for case-only detection
# Find case-only renames and transform changes
# Found a case-only rename
# Create a CHANGE_RENAME to replace the DELETE and ADD/MODIFY pair
# Simple case: DELETE + ADD becomes RENAME
# Complex case: DELETE + MODIFY becomes RENAME
# Use the old file from DELETE and new file from MODIFY
# Mark the old changes for removal
# Return new list with original ADD/DELETE changes replaced by renames
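The pairing step at the heart of the transformation above — matching a DELETE against an ADD whose path differs only by case — can be sketched as follows. The dict shapes and helper name are hypothetical; the real code works on change objects, but the matching logic is the same:

```python
def find_case_only_renames(deleted, added):
    """Pair old/new paths that differ only by case.

    deleted and added map a normalized (casefolded) path to the original
    path from the DELETE or ADD side. Returns (old_path, new_path) pairs.
    """
    renames = []
    for folded, old_path in deleted.items():
        new_path = added.get(folded)
        # Same normalized form but different spelling => case-only rename
        if new_path is not None and new_path != old_path:
            renames.append((old_path, new_path))
    return renames
```

Normalizing each path once up front, as the comments note, keeps this a single dictionary lookup per deleted path.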
# Convert iterator to list since we need multiple passes
# Transform case-only renames on case-insensitive filesystems
# Check for path conflicts where files need to become directories
# This is a file inside a directory
# Check if any parent path exists as a file in the old tree or changes
# See if this parent path is being deleted (was a file, becoming a dir)
# Check if any path that needs to become a directory has been modified
# File doesn't exist, nothing to check
# Find the old entry for this path
# Check for uncommitted modifications before making any changes
# Only check files that are being modified or deleted
# Check if working tree file differs from old tree
# Apply the changes
# Remove file/directory
# Add or modify file
# Conflicted files are always unstaged
# The file was removed, so we assume that counts as
# different from whatever file used to exist.
# For each entry in the index check the sha1 & ensure not staged
# Use parallel processing for better performance on slow filesystems
# If threading is not available, fall back to serial processing
# Collect all entries first
# Use number of CPUs but cap at 8 threads to avoid overhead
# Process entries in parallel
# Submit all tasks
# Yield results as they complete
# Serial processing
# On Windows, we need to handle tree path encoding properly
# Decode from tree encoding, then re-encode for filesystem
# If decoding fails, use the original bytes
# On Windows, we need to ensure tree paths are properly encoded
# Decode from filesystem encoding, then re-encode with tree encoding
# If filesystem decoding fails, use the original bytes
# __init__.py -- Fast export/import functionality
# Copyright (C) 2010-2013 Jelmer Vernooij <jelmer@jelmer.uk>
# TODO(jelmer): Dedupe this and the same functionality in
# format_annotate_line.
# FIXME: Batch creation of objects?
# dumb.py -- Support for dumb HTTP(S) git repositories
# Read all content
# Decompress and parse the object
# Parse header
# Convert type name to type number
# No packs file, repository might only have loose objects
# Extract just the pack name without path
# Find the pack in our list
# Fetch and cache the index
# Convert hex to binary for pack operations
# Check if object is in this pack
# We found the object, now we need to fetch the pack data
# For efficiency, we could fetch just the needed portion, but for
# simplicity we'll fetch the whole pack and cache it
# Download the pack file
# Open the pack and get the object
# Check cache first
# Try packs first
# Try loose object
# Try packs
# We can't efficiently list loose objects over dumb HTTP
# So we only iterate pack objects
# Fetch info/refs
# Keep SHAs as hex
# handle HEAD legacy format containing a commit id instead of a ref name
# For dumb HTTP, we don't have peeled refs readily available
# We would need to fetch and parse tag objects
# For dumb HTTP, we traverse the object graph starting from wants
# Fetch the object
# Parse the object to find references to other objects
# Commit
# Tag
# web.py -- WSGI smart-http server
# Copyright (C) 2012 Jelmer Vernooij <jelmer@jelmer.uk>
# wsgiref.types was added in Python 3.11
# Fallback type definitions for Python < 3.11
# For type checking, use the _typeshed types if available
# Define our own protocol types for type checking
# At runtime, just use type aliases since these are only for type hints
# HTTP error strings
# From BaseHTTPRequestHandler.date_time_string in BaseHTTPServer.py in the
# Python 2.6.5 standard library, following modifications:
# Copyright (c) 2001-2010 Python Software Foundation; All Rights Reserved
# handler_cls could expect bytes or str
# non-smart fallback
# TODO: select_getanyfile() (see http-backend.c)
# TODO: support more methods as necessary
# TODO(jelmer): Find a way to pass in repo, rather than having handler_cls
# reopen.
# environ['QUERY_STRING'] has qs args
# This is not necessary if this app is run from a conforming WSGI
# server. Unfortunately, there's no way to tell that at this point.
# TODO: git may use HTTP/1.1 chunked encoding instead of specifying
# content-length
# An error code has been sent, just exit
# type: ignore  # backpointer for logging
# credentials.py -- support for git credential helpers
# Copyright (C) 2022 Daniele Trifirò <daniele@iterative.ai>
# filter_branch.py - Git filter-branch functionality
# Cache for filtered trees
# Split subdirectory path
# Navigate to subdirectory
# Subdirectory not found, return empty tree
# Return the subdirectory tree
# Check out tree to temp directory
# We need a proper checkout implementation here
# For now, pass tmpdir to filter and let it handle checkout
# Create temporary index file
# Build index from tree
# Run index filter
# Read back the modified index and create new tree
# Object not found
# Not a commit, return as-is
# Process parents first
# Skip None parents
# Apply parent filter
# Apply tree filters
# Subdirectory filter takes precedence
# Then apply tree filter
# Or apply index filter
# Check if we should prune empty commits
# Check if tree is same as parent's tree
# This commit doesn't change anything, skip it
# Apply filters
# Custom filter function takes precedence
# Apply specific filters
# Create new commit if anything changed
# Copy extra fields
# Apply commit filter if provided
# The commit filter can create a completely new commit
# Multiple parents, can't skip
# Store the new commit anyway
# Store the new commit
# No changes, keep original
# Check if already filtered
# Process commits starting from refs
# Get the commit SHA for this ref
# Skip refs that can't be resolved
# Update refs
# Save original ref if requested
# Update ref to new commit
# Not a valid ref, skip updating
# Handle tag filtering
# Process all tags
# Get the tag object or commit it points to
# Remove 'refs/tags/'
# Check if it's an annotated tag
# Get the commit it points to
# Process tag if:
# 1. It points to a rewritten commit, OR
# 2. We want to rename the tag regardless
# For annotated tags pointing to rewritten commits,
# we need to create a new tag object
# Create new tag object pointing to rewritten commit
# Update ref to point to new tag object
# Just rename the tag
# Lightweight tag - points directly to a commit
# Process if commit was rewritten or we want to rename
# Point to rewritten commit
# Just rename
# greenthreads.py -- Utility module for querying an ObjectStore with gevent
# Copyright (C) 2013 eNovance SAS <licensing@enovance.com>
# Author: Fabien Boucher <fabien.boucher@enovance.com>
# notes.py -- Git notes handling
# Count the total number of notes in the tree recursively
# Only recurse 2 levels deep
# Use fanout based on number of notes
# Git typically starts using fanout around 256 notes
# 256^2
# Check for presence of both files and directories
# If we have files at the root level, check if they're full SHA names
# Check if any file names are full 40-char hex strings
# Verify it's a valid hex string
# No fanout
# Check if all directories are 2-character hex names
# Check a sample directory to determine if it's level 1 or 2
# Check if this subtree also has 2-char hex directories
# Assume level 1 if we can't inspect
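The fanout levels discussed above map a note's 40-char hex SHA onto nested 2-character directories. A sketch of that path construction (helper name illustrative):

```python
def note_path(hex_sha, fanout):
    """Split a 40-char hex sha into fanout directories, git-notes style.

    fanout=0 -> "abcd...", fanout=1 -> "ab/cd...", fanout=2 -> "ab/cd/ef..."
    """
    parts = []
    for i in range(fanout):
        parts.append(hex_sha[2 * i : 2 * i + 2])  # one 2-char directory per level
    parts.append(hex_sha[2 * fanout :])           # remainder is the blob name
    return "/".join(parts)
```

Reorganizing the tree for a new fanout then amounts to re-adding every note under its recomputed path.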
# Collect all existing notes
# Create new empty tree
# Re-add all notes with new fanout structure using set_note
# Temporarily set fanout back to avoid recursion
# Use the internal tree update logic without checking fanout again
# Build new tree structure
# Leaf level - add the note blob
# Directory level
# Update this subtree
# If not a directory, we need to replace it
# Create new subtree path
# Update the tree reference
# Not a directory
# Not a regular file
# Create note blob
# Check if we need to reorganize the tree for better fanout
# Get path components
# Build new tree structure without the note
# Leaf level - remove the note
# Return None if tree is now empty
# Empty tree
# Directory
# File
# Reconstruct the full hex SHA from the path
# Get the commit object
# If it's a commit, get the tree from it
# If it's directly a tree (shouldn't happen in normal usage)
# Get current notes tree
# Update notes tree
# Create commit
# Set parent to previous notes commit if exists
# Remove from notes tree
# Set parent to previous notes commit
# errors.py -- errors for dulwich
# Copyright (C) 2009-2012 Jelmer Vernooij <jelmer@jelmer.uk>
# Please do not add more errors here, but instead add them close to the code
# that raises the error.
# annotate.py -- Annotate files with last changed revision
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA  02110-1301, USA.
# Walk over ancestry graph breadth-first
# When checking each revision, find lines that according to difflib.Differ()
# are common between versions.
# Any lines that are not in common were introduced by the newer revision.
# If there were no lines kept from the older version, stop going deeper in the
# graph.
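The "lines in common between versions" step above maps directly onto difflib's matching blocks: any line of the newer revision inside a matching block was carried over; everything else was introduced by that revision. A minimal sketch:

```python
import difflib

def carried_lines(old_lines, new_lines):
    """Indices into new_lines that also appear in old_lines.

    Uses difflib.SequenceMatcher matching blocks; indices NOT returned
    here were introduced by the newer revision.
    """
    sm = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    common = set()
    for _, j, size in sm.get_matching_blocks():
        common.update(range(j, j + size))
    return common
```

If this set is empty, nothing survived from the older version and the walk can stop descending along that edge, as the comments describe.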
# don't care
# filters.py -- Git filter drivers (clean/smudge) implementation
# Copyright (C) 2024 Jelmer Vernooij
# For smudge, apply filters in reverse order
# A composite filter can only be reused if all its components can
# Use bytes
# Check if process started successfully
# Process already terminated
# Create protocol wrapper
# Send handshake using pkt-line format
# flush packet
# Read handshake response
# Verify handshake (be liberal - accept with or without newlines)
# Send capabilities
# Read capability response
# Store supported capabilities
# Be liberal - strip any line endings
# Remove "capability=" prefix
# Send request using pkt-line format
# Send data
# Split data into chunks if needed (max pkt-line payload is 65516 bytes)
# flush packet to end data
# Read response (initial headers)
# flush packet ends headers
# Read result data
# flush packet ends data
# Read final headers per Git filter protocol
# Filters send: headers + flush + content + flush + final_headers + flush
# flush packet ends final headers
# Check final status (if provided, it overrides the initial status)
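All of the handshake traffic above rides on git's pkt-line framing: each packet is a 4-hex-digit length (counting the header itself) followed by the payload, with `0000` as the flush packet. A minimal encoder sketch:

```python
def pkt_line(data):
    """Encode one pkt-line: 4-hex-digit length (header included) + payload.

    None encodes the flush packet b"0000".
    """
    if data is None:
        return b"0000"  # flush packet
    return ("%04x" % (len(data) + 4)).encode("ascii") + data

def pkt_lines(*payloads):
    """Encode several payloads followed by a terminating flush packet."""
    return b"".join(pkt_line(p) for p in payloads) + pkt_line(None)
```

The long-running filter exchange is then just sequences of these packets: handshake lines + flush, data chunks (each at most 65516 payload bytes) + flush, and so on.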
# Clean up broken process
# Try process filter first (much faster)
# Fall back to clean command
# If not required, log warning and return original data on failure
# Fall back to smudge command
# Substitute %f placeholder with file path
# Close stdin first to signal the process to quit cleanly
# Try to terminate gracefully first
# Still running
# Force kill if terminate didn't work
# On Windows, sometimes we need to be more aggressive
# Process already dead
# Only reuse if it's a long-running process filter AND config hasn't changed
# Not a long-running filter, don't cache
# Check if the filter commands in config match our current commands
# Check if we have a cached instance that should be reused
# Check if the cached driver should still be reused
# Driver shouldn't be reused, clean it up and remove from cache
# Get driver from registry
# Only cache drivers that should be reused
# Clean up active drivers
# Also close the registry
# Update the registry's config
# Re-setup line ending filter with new config
# This will update the text filter factory to use new autocrlf settings
# The get_driver method will now handle checking reuse() for cached drivers
# Don't raise exceptions in __del__
# Register built-in filter factories
# Auto-register line ending filter if autocrlf is enabled
# Check if we already have an instance
# Try to create from config first (respect user configuration)
# Try to create from factory as fallback
# Get process command (preferred over clean/smudge for performance)
# Get clean command
# Get smudge command
# Get required flag (defaults to False)
# Get repository working directory (only for Repo, not BaseRepo)
# If we have a Repo (not just BaseRepo), use its LFS store
# Fall back to creating a temporary LFS store
# Parse autocrlf as bytes
# If autocrlf is enabled, register the text filter
# Pre-create the text filter so it's available
# Use filter_context if provided, otherwise fall back to registry
# Get all attributes for this path
# Collect filters to apply
# Check for text attribute first (it should be applied before custom filters)
# Add text filter for line ending conversion
# -text means binary, no conversion - but still check for custom filters
# If no explicit text attribute, check if autocrlf is enabled
# When autocrlf is true/input, files are treated as text by default
# Add text filter for files without explicit attributes
# Check if there's a filter attribute
# Check if filter is required but missing
# Return appropriate filter(s)
# Multiple filters - create a composite
# Track if we created our own context
# Support both old and new API
# We're using an external context
# We created our own context
# Get filter for this path
# Apply clean filter
# Create new blob with filtered data
# Apply smudge filter
# Only close the filter context if we created it ourselves
# attrs.py -- Git attributes for dulwich
# Copyright (C) 2019-2020 Collabora Ltd
# Copyright (C) 2019-2020 Andrej Shadura <andrew.shadura@collabora.co.uk>
# Split only on first = to handle values with = in them
# If pattern doesn't contain /, it can match at any level
# Leading / means root of repository
# Double asterisk
# **/ - match zero or more directories
# ** at end - match everything
# ** in middle
# Single * - match any character except /
# Character class
# Add anchors
# Normalize path
# Try to match
# Always set by _compile()
# Later patterns override earlier ones
# Update attributes
# Unspecified - remove the attribute
# Find existing pattern
# Convert to mutable dict
# Create new pattern
# Update the existing pattern in the list
# Update the attribute
# value is bytes
# __init__.py -- Contrib module for Dulwich
# paramiko_vendor.py -- paramiko implementation of the SSHVendor interface
# Copyright (C) 2013 Aaron O'Mullan <aaron.omullan@friendco.de>
# Channel must block
# Closed socket
# Read more if needed
# http://docs.paramiko.org/en/2.4/api/client.html
# Config file doesn't exist - this is normal, ignore silently
# Config file exists but can't be read - warn user
# Get SSH config for this host
# Use SSH config values if not explicitly provided
# Use the first identity file from SSH config
# Open SSH session
# Run commands
# requests_vendor.py -- requests implementation of the AbstractHttpGitClient interface
# Copyright (C) 2022 Eden Shalit <epopcop@gmail.com>
# Accept compression by default
# Add required fields as stated in AbstractHttpGitClient._http_request
# This diffstat code was extracted and heavily modified from:
# Copyright (c) 2008-2016 anatoly techtonik
# only needs to detect git style diffs as this is for
# use with dulwich
# emulate original full Patch class by just extracting
# filename and minimal chunk added/deleted information to
# properly interface with diffstat routine
# handle end of input
# note: this must all be done using bytes, not str, because on Linux
# filenames may not be encodable even to utf-8
# max changes for any file used for histogram width calc
# stats column width
# %-19s | %-4d %s
# note b'%d' % namelen is not supported until Python 3.5
# To convert an int to a format width specifier for byte
# strings use str(namelen).encode('ascii')
# -- calculating histogram --
# make sure every entry that had actual insertions gets
# at least one +
# make sure every entry that had actual deletions gets
# at least one -
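The two "at least one" rules above keep small changes visible after scaling to the histogram width. A sketch of that scaling (function name illustrative):

```python
def histogram_bar(inserted, deleted, max_change, width):
    """Scale +/- counts to the histogram width, guaranteeing at least one
    '+' for any insertions and one '-' for any deletions."""
    plus = inserted * width // max_change
    minus = deleted * width // max_change
    if inserted and plus == 0:
        plus = 1   # every entry with actual insertions gets at least one +
    if deleted and minus == 0:
        minus = 1  # every entry with actual deletions gets at least one -
    return "+" * plus + "-" * minus
```

Without the floor, a file with 1 insertion in a diff whose largest file changed 1000 lines would render as an empty bar.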
# allow diffstat.py to also be used from the command line
# if no path argument to a diff file is passed in, run
# a self test. The test case includes tricky things like
# a diff of a diff, binary files, renames with further changes,
# added files, and removed files.
# All extracted from Sigil-Ebook/Sigil's github repo with
# full permission to use under this license.
# return 0 on success, otherwise return -1
# release_robot.py
# CONSTANTS
# dulwich repository object
# dictionary of refs and their SHA-1 values
# empty dictionary to hold tags, commits and datetimes
# iterate over refs in repository
# compatible with Python-3
# dulwich object from SHA-1
# don't just check if object is "tag" b/c it could be a "commit"
# instead check if "tags" is in the ref-name
# skip ref if not a tag
# strip the leading text from refs to get "tag name"
# check if tag object is "commit" or "tag" pointing to a "commit"
# a tuple (commit class, commit id)
# commit object
# get tag commit datetime, but dulwich returns seconds since
# beginning of epoch, so use Python time module to convert it to
# timetuple then convert to datetime
# return list of tags sorted by their datetimes from newest to oldest
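The epoch-seconds-to-datetime conversion and newest-first sort described above can be sketched as follows; the input shape (tag name mapped to commit timestamp) is a simplification of the real ref/commit objects:

```python
import datetime
import time

def sort_tags_by_time(tags):
    """Sort tags newest-first by commit time.

    tags maps tag name -> commit timestamp in seconds since the epoch;
    each timestamp goes through time.gmtime to a timetuple, then to a
    datetime, as the comments describe.
    """
    items = [
        (name, datetime.datetime(*time.gmtime(ts)[:6]))
        for name, ts in tags.items()
    ]
    return sorted(items, key=lambda kv: kv[1], reverse=True)
```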
# swift.py -- Repo implementation atop OpenStack SWIFT
# TODO: Refactor to share more code with dulwich/repo.py.
# TODO(fbo): Second attempt to _send() must be notified via real log
# TODO(fbo): More logs for operations
# Blob
# 12KB
# Should do something with redirections (301 in my case)
# Sometimes we get Broken Pipe - dirty workaround
# Second attempt works
# Need to read more from swift
# Seems to have no parents
# Write pack info.
# Copyright (C) 2021 Jelmer Vernooij <jelmer@jelmer.uk>
# TODO(jelmer): For performance, read ranges?
# __init__.py -- The tests for dulwich
# Copyright (C) 2024 Jelmer Vernooĳ <jelmer@jelmer.uk>
# utils.py -- Test utilities for Dulwich.
# ruff: noqa: ANN401
# Plain files are very frequently used in tests, so let the mode be very short.
# Shorthand mode for Files.
# type: ignore[misc,valid-type]
# id property is read-only, so we overwrite sha instead.
# 2010-01-01 00:00:00
# By default, increment the time by a lot. Out-of-order commits should
# be closer together than this because their main cause is clock skew.
# test_object_store.py -- tests for object_store.py
# For type checker purposes - actual implementation supports both styles
# TODO: Argh, no way to construct Git commit objects without
# access to a serialized form.
# For now, just check that close doesn't barf.
# Repack, excluding b2 and b3
# Should have repacked only b1 and b4
# Verify it's loose
# Delete it
# Verify it's gone
# (path, contents)
# this is the same list as used by make_commits_with_contents,
# but ordered to match the actual iter_tree_contents iteration
# order
# No includes
# Explicit include=None
# include=[] is not the same as None
# Note: iter_tree_contents iterates in name order, but we
# listed two separate paths, so they'll keep their order
# as specified
# foo
# dir/subdir
# dir
# dir/baz
# dir/subdir/baz
# Create the following commit tree:
# 1--2
# 1 is shallow along the path from 4, but not along the path from 2.
# This is a stub package designed to roughly emulate the _yaml
# extension module, which previously existed as a standalone module
# and has been moved into the `yaml` package namespace.
# It does not perfectly mimic its old counterpart, but should get
# close enough for anyone who's relying on it even when they shouldn't.
# in some circumstances, the yaml module we imported may be from a different version, so we need
# to tread carefully when poking at it here (it may not have the attributes we expect)
# Don't `del yaml` here because yaml is actually an existing
# namespace member of _yaml.
# If the module is top-level (i.e. not a part of any specific package)
# then the attribute should be set to ''.
# https://docs.python.org/3.8/library/types.html
# re-export all functions & definitions, even private ones, from top-level
# module path, to allow for 'from wcwidth import _private_func'.  Of course,
# user beware that any _private function may disappear or change signature at
# any future version.
# local
# The __all__ attribute defines the items exported from statement,
# 'from wcwidth import *', but also to say, "This is the public API".
# We also used pkg_resources to load unicode version tables from version.json,
# generated by bin/update-tables.py, but some environments are unable to
# import pkg_resources for one reason or another, yikes!
# Source: 9.0.0
# Date: 2025-01-30, 21:48:29 GMT
# Number Sign
# Asterisk
# Digit Zero              ..Digit Nine
# Copyright Sign
# Registered Sign
# Double Exclamation Mark
# Exclamation Question Mark
# Trade Mark Sign
# Information Source
# Left Right Arrow        ..South West Arrow
# Leftwards Arrow With Hoo..Rightwards Arrow With Ho
# Keyboard
# Eject Symbol
# Black Right-pointing Dou..Black Right-pointing Tri
# Stopwatch               ..Timer Clock
# Double Vertical Bar     ..Black Circle For Record
# Circled Latin Capital Letter M
# Black Small Square      ..White Small Square
# Black Right-pointing Triangle
# Black Left-pointing Triangle
# White Medium Square     ..Black Medium Square
# Black Sun With Rays     ..Comet
# Black Telephone
# Ballot Box With Check
# Shamrock
# White Up Pointing Index
# Skull And Crossbones
# Radioactive Sign        ..Biohazard Sign
# Orthodox Cross
# Star And Crescent
# Peace Symbol            ..Yin Yang
# Wheel Of Dharma         ..White Smiling Face
# Female Sign
# Male Sign
# Black Chess Pawn        ..Black Spade Suit
# Black Club Suit
# Black Heart Suit        ..Black Diamond Suit
# Hot Springs
# Black Universal Recycling Symbol
# Permanent Paper Sign
# Hammer And Pick
# Crossed Swords          ..Alembic
# Gear
# Atom Symbol             ..Fleur-de-lis
# Warning Sign
# Male With Stroke And Male And Female Sign
# Coffin                  ..Funeral Urn
# Thunder Cloud And Rain
# Pick
# Helmet With White Cross
# Chains
# Shinto Shrine
# Mountain                ..Umbrella On Ground
# Ferry
# Skier                   ..Person With Ball
# Black Scissors
# Airplane                ..Envelope
# Victory Hand            ..Writing Hand
# Pencil
# Black Nib
# Heavy Check Mark
# Heavy Multiplication X
# Latin Cross
# Star Of David
# Eight Spoked Asterisk   ..Eight Pointed Black Star
# Snowflake
# Sparkle
# Heavy Heart Exclamation ..Heavy Black Heart
# Black Rightwards Arrow
# Arrow Pointing Rightward..Arrow Pointing Rightward
# Leftwards Black Arrow   ..Downwards Black Arrow
# Negative Squared Latin C..Negative Squared Latin C
# Thermometer
# White Sun With Small Clo..Wind Blowing Face
# Hot Pepper
# Fork And Knife With Plate
# Military Medal          ..Reminder Ribbon
# Studio Microphone       ..Control Knobs
# Film Frames             ..Admission Tickets
# Weight Lifter           ..Racing Car
# Snow Capped Mountain    ..Stadium
# Waving White Flag
# Rosette
# Label
# Chipmunk
# Eye
# Film Projector
# Om Symbol               ..Dove Of Peace
# Candle                  ..Mantelpiece Clock
# Hole                    ..Joystick
# Linked Paperclips
# Lower Left Ballpoint Pen..Lower Left Crayon
# Raised Hand With Fingers Splayed
# Desktop Computer
# Printer
# Three Button Mouse      ..Trackball
# Frame With Picture
# Card Index Dividers     ..File Cabinet
# Wastebasket             ..Spiral Calendar Pad
# Compression             ..Rolled-up Newspaper
# Dagger Knife
# Speaking Head In Silhouette
# Left Speech Bubble
# Right Anger Bubble
# Ballot Box With Ballot
# World Map
# Couch And Lamp
# Shopping Bags           ..Bed
# Hammer And Wrench       ..Motor Boat
# Small Airplane
# Satellite
# Passenger Ship
# Source: EastAsianWidth-4.1.0.txt
# Date: 2005-03-17, 15:21:00 PST [KW]
# Hangul Choseong Kiyeok  ..Hangul Choseong Yeorinhi
# Hangul Choseong Filler
# Left-pointing Angle Brac..Right-pointing Angle Bra
# Cjk Radical Repeat      ..Cjk Radical Rap
# Cjk Radical Choke       ..Cjk Radical C-simplified
# Kangxi Radical One      ..Kangxi Radical Flute
# Ideographic Description ..Ideographic Description
# Ideographic Space       ..Hangzhou Numeral Nine
# Wavy Dash               ..Ideographic Variation In
# Hiragana Letter Small A ..Hiragana Letter Small Ke
# Katakana-hiragana Voiced..Katakana Digraph Koto
# Bopomofo Letter B       ..Bopomofo Letter Gn
# Hangul Letter Kiyeok    ..Hangul Letter Araeae
# Ideographic Annotation L..Bopomofo Final Letter H
# Cjk Stroke T            ..Cjk Stroke N
# Katakana Letter Small Ku..Parenthesized Korean Cha
# Parenthesized Ideograph ..Parenthesized Ideograph
# Partnership Sign        ..Circled Katakana Wo
# Square Apaato           ..Cjk Unified Ideograph-4d
# Cjk Unified Ideograph-4e..Cjk Unified Ideograph-9f
# Yi Syllable It          ..Yi Syllable Yyr
# Yi Radical Qot          ..Yi Radical Ke
# Hangul Syllable Ga      ..Hangul Syllable Hih
# Cjk Compatibility Ideogr..Cjk Compatibility Ideogr
# Presentation Form For Ve..Presentation Form For Ve
# Presentation Form For Ve..Small Full Stop
# Small Semicolon         ..Small Equals Sign
# Small Reverse Solidus   ..Small Commercial At
# Fullwidth Exclamation Ma..Fullwidth Right White Pa
# Fullwidth Cent Sign     ..Fullwidth Won Sign
# Cjk Unified Ideograph-20..(nil)
# Cjk Unified Ideograph-30..(nil)
# Source: EastAsianWidth-5.0.0.txt
# Date: 2006-02-15, 14:39:00 PST [KW]
# Source: EastAsianWidth-5.1.0.txt
# Date: 2008-03-20, 17:42:00 PDT [KW]
# Bopomofo Letter B       ..Bopomofo Letter Ih
# Cjk Stroke T            ..Cjk Stroke Q
# Source: EastAsianWidth-5.2.0.txt
# Date: 2009-06-09, 17:47:00 PDT [KW]
# Hangul Choseong Kiyeok  ..Hangul Choseong Filler
# Parenthesized Ideograph ..Circled Ideograph Koto
# Cjk Unified Ideograph-4e..Yi Syllable Yyr
# Hangul Choseong Tikeut-m..Hangul Choseong Ssangyeo
# Cjk Compatibility Ideogr..(nil)
# Square Hiragana Hoka
# Squared Cjk Unified Ideo..Squared Cjk Unified Ideo
# Tortoise Shell Bracketed..Tortoise Shell Bracketed
# Source: EastAsianWidth-6.0.0.txt
# Date: 2010-08-17, 12:17:00 PDT [KW]
# Ideographic Annotation L..Bopomofo Letter Zy
# Katakana Letter Archaic ..Hiragana Letter Archaic
# Square Hiragana Hoka    ..Squared Katakana Sa
# Circled Ideograph Advant..Circled Ideograph Accept
# Source: EastAsianWidth-6.1.0.txt
# Date: 2011-09-19, 18:46:00 GMT [KW]
# Source: EastAsianWidth-6.2.0.txt
# Date: 2012-05-15, 18:30:00 GMT [KW]
# Source: EastAsianWidth-6.3.0.txt
# Date: 2013-02-05, 20:09:00 GMT [KW, LI]
# Source: EastAsianWidth-7.0.0.txt
# Date: 2014-02-28, 23:15:00 GMT [KW, LI]
# Source: EastAsianWidth-8.0.0.txt
# Date: 2015-02-10, 21:00:00 GMT [KW, LI]
# Source: EastAsianWidth-9.0.0.txt
# Date: 2016-05-27, 17:00:00 GMT [KW, LI]
# Watch                   ..Hourglass
# Black Right-pointing Dou..Black Down-pointing Doub
# Alarm Clock
# Hourglass With Flowing Sand
# White Medium Small Squar..Black Medium Small Squar
# Umbrella With Rain Drops..Hot Beverage
# Aries                   ..Pisces
# Wheelchair Symbol
# Anchor
# High Voltage Sign
# Medium White Circle     ..Medium Black Circle
# Soccer Ball             ..Baseball
# Snowman Without Snow    ..Sun Behind Cloud
# Ophiuchus
# No Entry
# Church
# Fountain                ..Flag In Hole
# Sailboat
# Tent
# Fuel Pump
# White Heavy Check Mark
# Raised Fist             ..Raised Hand
# Sparkles
# Cross Mark
# Negative Squared Cross Mark
# Black Question Mark Orna..White Exclamation Mark O
# Heavy Exclamation Mark Symbol
# Heavy Plus Sign         ..Heavy Division Sign
# Curly Loop
# Double Curly Loop
# Black Large Square      ..White Large Square
# White Medium Star
# Heavy Large Circle
# Tangut Iteration Mark
# (nil)
# Tangut Component-001    ..Tangut Component-755
# Mahjong Tile Red Dragon
# Playing Card Black Joker
# Negative Squared Ab
# Squared Cl              ..Squared Vs
# Cyclone                 ..Shooting Star
# Hot Dog                 ..Cactus
# Tulip                   ..Baby Bottle
# Bottle With Popping Cork..Graduation Cap
# Carousel Horse          ..Swimmer
# Cricket Bat And Ball    ..Table Tennis Paddle And
# House Building          ..European Castle
# Waving Black Flag
# Badminton Racquet And Sh..Amphora
# Rat                     ..Paw Prints
# Eyes
# Ear                     ..Videocassette
# Prayer Beads            ..Down-pointing Small Red
# Kaaba                   ..Menorah With Nine Branch
# Clock Face One Oclock   ..Clock Face Twelve-thirty
# Man Dancing
# Reversed Hand With Middl..Raised Hand With Part Be
# Black Heart
# Mount Fuji              ..Person With Folded Hands
# Rocket                  ..Left Luggage
# Sleeping Accommodation
# Place Of Worship        ..Shopping Trolley
# Airplane Departure      ..Airplane Arriving
# Scooter                 ..Canoe
# Zipper-mouth Face       ..Hand With Index And Midd
# Face With Cowboy Hat    ..Sneezing Face
# Pregnant Woman
# Selfie                  ..Handball
# Wilted Flower           ..Martial Arts Uniform
# Croissant               ..Pancakes
# Crab                    ..Squid
# Cheese Wedge
# Source: EastAsianWidth-10.0.0.txt
# Date: 2017-03-08, 02:00:00 GMT [KW, LI]
# Bopomofo Letter B       ..Bopomofo Letter O With D
# Tangut Iteration Mark   ..Nushu Iteration Mark
# Katakana Letter Archaic ..Hentaigana Letter N-mu-m
# Nushu Character-1b170   ..Nushu Character-1b2fb
# Rounded Symbol For Fu   ..Rounded Symbol For Cai
# Scooter                 ..Flying Saucer
# Zipper-mouth Face       ..Handball
# Wilted Flower           ..Curling Stone
# Croissant               ..Canned Food
# Crab                    ..Cricket
# Face With Monocle       ..Socks
# Source: EastAsianWidth-11.0.0.txt
# Date: 2018-05-14, 09:41:59 GMT [KW, LI]
# Bopomofo Letter B       ..Bopomofo Letter Nn
# Scooter                 ..Skateboard
# Wilted Flower           ..Smiling Face With Smilin
# Face With Party Horn And..Freezing Face
# Face With Pleading Eyes
# Lab Coat                ..Swan
# Emoji Component Red Hair..Supervillain
# Cheese Wedge            ..Salt Shaker
# Face With Monocle       ..Nazar Amulet
# Source: EastAsianWidth-12.0.0.txt
# Date: 2019-01-21, 14:12:58 GMT [KW, LI]
# Tangut Iteration Mark   ..Old Chinese Iteration Ma
# Hiragana Letter Small Wi..Hiragana Letter Small Wo
# Katakana Letter Small Wi..Katakana Letter Small N
# Hindu Temple
# Scooter                 ..Auto Rickshaw
# Large Orange Circle     ..Large Brown Square
# White Heart             ..Yawning Face
# Face With Pleading Eyes ..Swan
# Sloth                   ..Oyster
# Guide Dog               ..Ice Cube
# Standing Person         ..Nazar Amulet
# Ballet Shoes            ..Shorts
# Drop Of Blood           ..Stethoscope
# Yo-yo                   ..Parachute
# Ringed Planet           ..Banjo
# Source: EastAsianWidth-12.1.0.txt
# Date: 2019-03-31, 22:01:58 GMT [KW, LI]
# Partnership Sign        ..Cjk Unified Ideograph-4d
# Source: EastAsianWidth-13.0.0.txt
# Date: 2029-01-21, 18:14:00 GMT [KW, LI]
# Ideographic Annotation L..Cjk Stroke Q
# Tangut Component-001    ..Khitan Small Script Char
# Hindu Temple            ..Elevator
# Scooter                 ..Roller Skate
# Pinched Fingers         ..Fencer
# Wrestlers               ..Goal Net
# First Place Medal       ..Disguised Face
# Face With Pleading Eyes ..Bubble Tea
# Ballet Shoes            ..Thong Sandal
# Yo-yo                   ..Nesting Dolls
# Ringed Planet           ..Rock
# Fly                     ..Feather
# Anatomical Heart        ..People Hugging
# Blueberries             ..Teapot
# Source: EastAsianWidth-14.0.0.txt
# Date: 2021-07-06, 09:58:53 GMT [KW, LI]
# Katakana Letter Minnan T..Katakana Letter Minnan T
# Katakana Letter Minnan T..Katakana Letter Minnan N
# Katakana Letter Minnan N..Katakana Letter Minnan N
# Katakana Letter Archaic ..Katakana Letter Archaic
# Playground Slide        ..Ring Buoy
# Heavy Equals Sign
# First Place Medal       ..Nazar Amulet
# Drop Of Blood           ..Crutch
# Ringed Planet           ..Hamsa
# Fly                     ..Nest With Eggs
# Anatomical Heart        ..Person With Crown
# Blueberries             ..Jar
# Melting Face            ..Bubbles
# Hand With Index Finger A..Heart Hands
# Source: EastAsianWidth-15.0.0.txt
# Date: 2022-05-24, 17:40:20 GMT [KW, LI]
# Hiragana Letter Small Ko
# Katakana Letter Small Ko
# Wireless                ..Ring Buoy
# Ballet Shoes            ..Crutch
# Yo-yo                   ..Flute
# Ringed Planet           ..Wing
# Goose                   ..Person With Crown
# Moose                   ..Pea Pod
# Melting Face            ..Shaking Face
# Hand With Index Finger A..Rightwards Pushing Hand
# Source: EastAsianWidth-15.1.0.txt
# Date: 2023-07-28, 23:34:08 GMT
# Ideographic Description ..Hangzhou Numeral Nine
# Ideographic Description ..Parenthesized Korean Cha
# Source: EastAsianWidth-16.0.0.txt
# Date: 2024-04-30, 21:48:20 GMT
# Trigram For Heaven      ..Trigram For Earth
# Monogram For Yang       ..Digram For Greater Yin
# Ideographic Annotation L..(nil)
# Partnership Sign        ..Yi Syllable Yyr
# Monogram For Earth      ..Tetragram For Fostering
# Counting Rod Unit Digit ..Ideographic Tally Mark F
# Yo-yo                   ..(nil)
# Moose                   ..(nil)
# Source: EastAsianWidth-17.0.0.txt
# Date: 2025-07-24, 00:12:54 GMT
# (nil)                   ..Khitan Small Script Char
# Hindu Temple            ..(nil)
# (nil)                   ..Rightwards Pushing Hand
# std imports
# small optimization: early return of 1 for printable ASCII; this provides
# approximately a 40% performance improvement for mostly-ASCII documents,
# with less than 1% impact on others.
# C0/C1 control characters are -1 for compatibility with POSIX-like calls
# Zero width
# 1 or 2 width
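The per-character rules above (fast path for printable ASCII, -1 for C0/C1 control characters, 0 for zero-width combining marks, otherwise 1 or 2 cells) can be sketched with the standard library's `unicodedata`. This is a hedged approximation for illustration only: the real library uses generated, version-specific lookup tables, and `simple_wcwidth` is a hypothetical name.

```python
import unicodedata

def simple_wcwidth(ch):
    """Approximate cell width of one character (sketch, not the real tables)."""
    o = ord(ch)
    if 0x20 <= o < 0x7F:
        return 1                      # fast path: printable ASCII is width 1
    if o < 0x20 or 0x7F <= o < 0xA0:
        return -1                     # C0/C1 controls: -1, like POSIX wcwidth(3)
    if unicodedata.combining(ch):
        return 0                      # combining marks occupy zero cells
    if unicodedata.east_asian_width(ch) in ('W', 'F'):
        return 2                      # Wide/Fullwidth East Asian: two cells
    return 1
```

The fast path mirrors the optimization described above: a single range check answers for the common ASCII case before any table (here, `unicodedata`) lookup happens.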
# the 'n' argument is a holdover from the POSIX function
# Zero Width Joiner, do not measure this or next character
# on variation selector 16 (VS16) following another character,
# conditionally add '1' to the measured width if that character is
# known to be converted from narrow to wide by the VS16 character.
# measure character at current index
# early return -1 on C0 and C1 control characters
# track last character measured to contain a cell, so that
# subsequent VS-16 modifiers may be understood
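The string-measuring loop described above (skip the character after a Zero Width Joiner, treat VS16 specially, return -1 for the whole string on any control character) can be sketched as follows. This is an assumption-laden simplification: the real library widens a preceding narrow character on VS16 only when that character appears in a generated narrow-to-wide table, while this sketch simply ignores VS16, and it uses `unicodedata` in place of the generated width tables.

```python
import unicodedata

ZWJ = '\u200d'    # Zero Width Joiner
VS16 = '\ufe0f'   # Variation Selector-16

def simple_wcswidth(text):
    """Approximate cell width of a string (sketch of the scanning loop)."""
    width = 0
    skip_next = False
    for ch in text:
        if skip_next:
            skip_next = False         # character after ZWJ is not measured
            continue
        if ch == ZWJ:
            skip_next = True          # do not measure this or next character
            continue
        if ch == VS16:
            continue                  # simplified: real code may widen the
                                      # previously measured narrow character
        o = ord(ch)
        if o < 0x20 or 0x7F <= o < 0xA0:
            return -1                 # any control char poisons the result
        if unicodedata.combining(ch):
            continue                  # zero-width: contributes no cells
        width += 2 if unicodedata.east_asian_width(ch) in ('W', 'F') else 1
    return width
```

Tracking the last cell-occupying character (as the comments describe) is what the real VS16 handling needs; the sketch omits that bookkeeping for brevity.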
# Design note: the choice to return the same type that is given certainly
# complicates things for the python 2 str type, but allows us to define an
# API that uses 'string-type' for unicode version level definitions, so all
# of our example code works with all versions of python.
# That, along with the string-to-numeric conversion and the comparisons of
# earliest, latest, matching, or nearest, greatly complicates this function.
# The performance cost is mitigated by memoization.
# default match: when given as 'latest', use the latest unicode version
# specification level supported.
# exact match, downstream has specified an explicit matching version
# matching any value of list_versions().
# The user's version is not supported by ours. We return the newest unicode
# version level that we support below their given value.
# if the submitted value raises ValueError in int(), warn and use latest.
# given version is less than any available version, return earliest
# this probably isn't what you wanted, the oldest wcwidth.c you will
# find in the wild is likely version 5 or 6, which we both support,
# but it's better than not saying anything at all.
# create a list of versions which are less than or equal to the given
# version, and return the tail value, which is the highest level we may
# support, or the latest value we support, when completely unmatched or
# higher than any supported version.
# the loop below always returns from within; it never runs to completion.
# look ahead to next value
# at end of list, return latest version
# Maybe our given version has less parts, as in tuple(8, 0), than the
# next compare version tuple(8, 0, 0). Test for an exact match by
# comparison of only the leading dotted piece(s): (8, 0) == (8, 0).
# Or, if any next value is greater than our given support level
# version, return the current value in index.  Even though it must
# be less than the given value, it's our closest possible match. That
# is, 4.1 is returned for given 4.9.9, where 4.1 and 5.0 are available.
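The nearest-below matching these comments describe — exact match wins, a version older than anything supported maps to the earliest, and anything else maps to the highest supported version at or below it — can be sketched over sorted version tuples. `match_version` is a hypothetical helper, and this sketch omits the dotted-prefix nuance noted above (treating (8, 0) as an exact match for (8, 0, 0)).

```python
def match_version(given, available):
    """Return the supported version tuple nearest at-or-below 'given'.

    'available' is a sorted list of supported version tuples.
    """
    if given in available:
        return given                  # exact match
    candidates = [v for v in available if v <= given]
    if not candidates:
        return available[0]           # older than anything we have: earliest
    return candidates[-1]             # highest supported level below 'given'
```

This reproduces the worked example in the comments: given 4.9.9 with 4.1 and 5.0 available, 4.1 is returned, since it is the closest supported level that does not exceed the request.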
# Source: DerivedGeneralCategory-4.1.0.txt
# Date: 2005-02-26, 02:35:50 GMT [MD]
# Combining Grave Accent  ..Combining Latin Small Le
# Combining Cyrillic Titlo..Combining Cyrillic Psili
# Combining Cyrillic Hundr..Combining Cyrillic Milli
# Hebrew Accent Etnahta   ..Hebrew Point Holam
# Hebrew Point Qubuts     ..Hebrew Point Meteg
# Hebrew Point Rafe
# Hebrew Point Shin Dot   ..Hebrew Point Sin Dot
# Hebrew Mark Upper Dot   ..Hebrew Mark Lower Dot
# Hebrew Point Qamats Qatan
# Arabic Number Sign      ..Arabic Sign Safha
# Arabic Sign Sallallahou ..Arabic Small High Tah
# Arabic Fathatan         ..Arabic Fatha With Two Do
# Arabic Letter Superscript Alef
# Arabic Small High Ligatu..Arabic Small High Madda
# Arabic Small High Yeh   ..Arabic Small High Noon
# Arabic Empty Centre Low ..Arabic Small Low Meem
# Syriac Abbreviation Mark
# Syriac Letter Superscript Alaph
# Syriac Pthaha Above     ..Syriac Barrekh
# Thaana Abafili          ..Thaana Sukun
# Devanagari Sign Candrabi..Devanagari Sign Visarga
# Devanagari Sign Nukta
# Devanagari Vowel Sign Aa..Devanagari Sign Virama
# Devanagari Stress Sign U..Devanagari Acute Accent
# Devanagari Vowel Sign Vo..Devanagari Vowel Sign Vo
# Bengali Sign Candrabindu..Bengali Sign Visarga
# Bengali Sign Nukta
# Bengali Vowel Sign Aa   ..Bengali Vowel Sign Vocal
# Bengali Vowel Sign E    ..Bengali Vowel Sign Ai
# Bengali Vowel Sign O    ..Bengali Sign Virama
# Bengali Au Length Mark
# Bengali Vowel Sign Vocal..Bengali Vowel Sign Vocal
# Gurmukhi Sign Adak Bindi..Gurmukhi Sign Visarga
# Gurmukhi Sign Nukta
# Gurmukhi Vowel Sign Aa  ..Gurmukhi Vowel Sign Uu
# Gurmukhi Vowel Sign Ee  ..Gurmukhi Vowel Sign Ai
# Gurmukhi Vowel Sign Oo  ..Gurmukhi Sign Virama
# Gurmukhi Tippi          ..Gurmukhi Addak
# Gujarati Sign Candrabind..Gujarati Sign Visarga
# Gujarati Sign Nukta
# Gujarati Vowel Sign Aa  ..Gujarati Vowel Sign Cand
# Gujarati Vowel Sign E   ..Gujarati Vowel Sign Cand
# Gujarati Vowel Sign O   ..Gujarati Sign Virama
# Gujarati Vowel Sign Voca..Gujarati Vowel Sign Voca
# Oriya Sign Candrabindu  ..Oriya Sign Visarga
# Oriya Sign Nukta
# Oriya Vowel Sign Aa     ..Oriya Vowel Sign Vocalic
# Oriya Vowel Sign E      ..Oriya Vowel Sign Ai
# Oriya Vowel Sign O      ..Oriya Sign Virama
# Oriya Ai Length Mark    ..Oriya Au Length Mark
# Tamil Sign Anusvara
# Tamil Vowel Sign Aa     ..Tamil Vowel Sign Uu
# Tamil Vowel Sign E      ..Tamil Vowel Sign Ai
# Tamil Vowel Sign O      ..Tamil Sign Virama
# Tamil Au Length Mark
# Telugu Sign Candrabindu ..Telugu Sign Visarga
# Telugu Vowel Sign Aa    ..Telugu Vowel Sign Vocali
# Telugu Vowel Sign E     ..Telugu Vowel Sign Ai
# Telugu Vowel Sign O     ..Telugu Sign Virama
# Telugu Length Mark      ..Telugu Ai Length Mark
# Kannada Sign Anusvara   ..Kannada Sign Visarga
# Kannada Sign Nukta
# Kannada Vowel Sign Aa   ..Kannada Vowel Sign Vocal
# Kannada Vowel Sign E    ..Kannada Vowel Sign Ai
# Kannada Vowel Sign O    ..Kannada Sign Virama
# Kannada Length Mark     ..Kannada Ai Length Mark
# Malayalam Sign Anusvara ..Malayalam Sign Visarga
# Malayalam Vowel Sign Aa ..Malayalam Vowel Sign Voc
# Malayalam Vowel Sign E  ..Malayalam Vowel Sign Ai
# Malayalam Vowel Sign O  ..Malayalam Sign Virama
# Malayalam Au Length Mark
# Sinhala Sign Anusvaraya ..Sinhala Sign Visargaya
# Sinhala Sign Al-lakuna
# Sinhala Vowel Sign Aela-..Sinhala Vowel Sign Ketti
# Sinhala Vowel Sign Diga Paa-pilla
# Sinhala Vowel Sign Gaett..Sinhala Vowel Sign Gayan
# Sinhala Vowel Sign Diga ..Sinhala Vowel Sign Diga
# Thai Character Mai Han-akat
# Thai Character Sara I   ..Thai Character Phinthu
# Thai Character Maitaikhu..Thai Character Yamakkan
# Lao Vowel Sign Mai Kan
# Lao Vowel Sign I        ..Lao Vowel Sign Uu
# Lao Vowel Sign Mai Kon  ..Lao Semivowel Sign Lo
# Lao Tone Mai Ek         ..Lao Niggahita
# Tibetan Astrological Sig..Tibetan Astrological Sig
# Tibetan Mark Ngas Bzung Nyi Zla
# Tibetan Mark Ngas Bzung Sgor Rtags
# Tibetan Mark Tsa -phru
# Tibetan Sign Yar Tshes  ..Tibetan Sign Mar Tshes
# Tibetan Vowel Sign Aa   ..Tibetan Mark Halanta
# Tibetan Sign Lci Rtags  ..Tibetan Sign Yang Rtags
# Tibetan Subjoined Letter..Tibetan Subjoined Letter
# Tibetan Symbol Padma Gdan
# Myanmar Vowel Sign Aa   ..Myanmar Vowel Sign Ai
# Myanmar Sign Anusvara   ..Myanmar Sign Virama
# Myanmar Vowel Sign Vocal..Myanmar Vowel Sign Vocal
# Hangul Jungseong Filler ..Hangul Jongseong Ssangni
# Ethiopic Combining Gemination Mark
# Tagalog Vowel Sign I    ..Tagalog Sign Virama
# Hanunoo Vowel Sign I    ..Hanunoo Sign Pamudpod
# Buhid Vowel Sign I      ..Buhid Vowel Sign U
# Tagbanwa Vowel Sign I   ..Tagbanwa Vowel Sign U
# Khmer Vowel Inherent Aq ..Khmer Sign Bathamasat
# Khmer Sign Atthacan
# Mongolian Free Variation..Mongolian Free Variation
# Mongolian Letter Ali Gali Dagalga
# Limbu Vowel Sign A      ..Limbu Subjoined Letter W
# Limbu Small Letter Ka   ..Limbu Sign Sa-i
# New Tai Lue Vowel Sign V..New Tai Lue Vowel Sign I
# New Tai Lue Tone Mark-1 ..New Tai Lue Tone Mark-2
# Buginese Vowel Sign I   ..Buginese Vowel Sign Ae
# Combining Dotted Grave A..Combining Suspension Mar
# Zero Width Space        ..Right-to-left Mark
# Line Separator          ..Right-to-left Override
# Word Joiner             ..Invisible Separator
# Inhibit Symmetric Swappi..Nominal Digit Shapes
# Combining Left Harpoon A..Combining Long Double So
# Ideographic Level Tone M..Hangul Double Dot Tone M
# Combining Katakana-hirag..Combining Katakana-hirag
# Syloti Nagri Sign Dvisvara
# Syloti Nagri Sign Hasanta
# Syloti Nagri Sign Anusvara
# Syloti Nagri Vowel Sign ..Syloti Nagri Vowel Sign
# Hangul Jungseong O-yeo  ..(nil)
# Hebrew Point Judeo-spanish Varika
# Variation Selector-1    ..Variation Selector-16
# Combining Ligature Left ..Combining Double Tilde R
# Zero Width No-break Space
# Interlinear Annotation A..Interlinear Annotation T
# Kharoshthi Vowel Sign I ..Kharoshthi Vowel Sign Vo
# Kharoshthi Vowel Sign E ..Kharoshthi Vowel Sign O
# Kharoshthi Vowel Length ..Kharoshthi Sign Visarga
# Kharoshthi Sign Bar Abov..Kharoshthi Sign Dot Belo
# Kharoshthi Virama
# Musical Symbol Combining..Musical Symbol Combining
# Combining Greek Musical ..Combining Greek Musical
# Language Tag
# Tag Space               ..Cancel Tag
# Variation Selector-17   ..Variation Selector-256
# Source: DerivedGeneralCategory-5.0.0.txt
# Date: 2006-02-27, 23:41:27 GMT [MD]
# Hebrew Accent Etnahta   ..Hebrew Point Meteg
# Nko Combining Short High..Nko Combining Double Dot
# Kannada Vowel Sign Vocal..Kannada Vowel Sign Vocal
# Balinese Sign Ulu Ricem ..Balinese Sign Bisah
# Balinese Sign Rerekan   ..Balinese Adeg Adeg
# Balinese Musical Symbol ..Balinese Musical Symbol
# Combining Dotted Grave A..Combining Latin Small Le
# Combining Left Arrowhead..Combining Right Arrowhea
# Combining Left Harpoon A..Combining Right Arrow Be
# Source: DerivedGeneralCategory-5.1.0.txt
# Date: 2008-03-20, 17:54:57 GMT [MD]
# Combining Cyrillic Titlo..Combining Cyrillic Milli
# Arabic Sign Sallallahou ..Arabic Small Kasra
# Gurmukhi Sign Udaat
# Gurmukhi Sign Yakash
# Oriya Vowel Sign Vocalic..Oriya Vowel Sign Vocalic
# Telugu Vowel Sign Vocali..Telugu Vowel Sign Vocali
# Malayalam Vowel Sign Voc..Malayalam Vowel Sign Voc
# Myanmar Vowel Sign Tall ..Myanmar Consonant Sign M
# Myanmar Consonant Sign M..Myanmar Consonant Sign M
# Myanmar Vowel Sign Sgaw ..Myanmar Tone Mark Sgaw K
# Myanmar Vowel Sign Weste..Myanmar Sign Western Pwo
# Myanmar Vowel Sign Geba ..Myanmar Vowel Sign Kayah
# Myanmar Consonant Sign S..Myanmar Sign Shan Counci
# Myanmar Sign Rumai Palaung Tone-5
# Sundanese Sign Panyecek ..Sundanese Sign Pangwisad
# Sundanese Consonant Sign..Sundanese Sign Pamaaeh
# Lepcha Subjoined Letter ..Lepcha Sign Nukta
# Word Joiner             ..Invisible Plus
# Combining Left Harpoon A..Combining Asterisk Above
# Combining Cyrillic Lette..Combining Cyrillic Lette
# Combining Cyrillic Vzmet..Combining Cyrillic Thous
# Combining Cyrillic Kavyk..Combining Cyrillic Payer
# Saurashtra Sign Anusvara..Saurashtra Sign Visarga
# Saurashtra Consonant Sig..Saurashtra Sign Virama
# Kayah Li Vowel Ue       ..Kayah Li Tone Calya Plop
# Rejang Vowel Sign I     ..Rejang Virama
# Cham Vowel Sign Aa      ..Cham Consonant Sign Wa
# Cham Consonant Sign Final Ng
# Cham Consonant Sign Fina..Cham Consonant Sign Fina
# Combining Ligature Left ..Combining Conjoining Mac
# Phaistos Disc Sign Combining Oblique Stroke
# Source: DerivedGeneralCategory-5.2.0.txt
# Date: 2009-08-22, 04:58:21 GMT [MD]
# Samaritan Mark In       ..Samaritan Mark Dagesh
# Samaritan Mark Epentheti..Samaritan Vowel Sign A
# Samaritan Vowel Sign Sho..Samaritan Vowel Sign U
# Samaritan Vowel Sign Lon..Samaritan Mark Nequdaa
# Devanagari Sign Inverted..Devanagari Sign Visarga
# Devanagari Vowel Sign Aa..Devanagari Vowel Sign Pr
# Devanagari Stress Sign U..Devanagari Vowel Sign Ca
# Myanmar Sign Khamti Tone..Myanmar Vowel Sign Aiton
# Tai Tham Consonant Sign ..Tai Tham Consonant Sign
# Tai Tham Sign Sakot     ..Tai Tham Sign Khuen-lue
# Tai Tham Combining Cryptogrammic Dot
# Vedic Tone Karshana     ..Vedic Tone Prenkha
# Vedic Sign Yajurvedic Mi..Vedic Sign Visarga Anuda
# Vedic Sign Tiryak
# Vedic Sign Ardhavisarga
# Combining Almost Equal T..Combining Right Arrowhea
# Coptic Combining Ni Abov..Coptic Combining Spiritu
# Bamum Combining Mark Koq..Bamum Combining Mark Tuk
# Combining Devanagari Dig..Combining Devanagari Sig
# Javanese Sign Panyangga ..Javanese Sign Wignyan
# Javanese Sign Cecak Telu..Javanese Pangkon
# Myanmar Sign Pao Karen Tone
# Tai Viet Mai Kang
# Tai Viet Vowel I        ..Tai Viet Vowel U
# Tai Viet Mai Khit       ..Tai Viet Vowel Ia
# Tai Viet Vowel Am       ..Tai Viet Tone Mai Ek
# Tai Viet Tone Mai Tho
# Meetei Mayek Vowel Sign ..Meetei Mayek Vowel Sign
# Meetei Mayek Lum Iyek   ..Meetei Mayek Apun Iyek
# Kaithi Sign Candrabindu ..Kaithi Sign Visarga
# Kaithi Vowel Sign Aa    ..Kaithi Sign Nukta
# Kaithi Number Sign
# Source: DerivedGeneralCategory-6.0.0.txt
# Date: 2010-08-19, 00:48:09 GMT [MD]
# Arabic Fathatan         ..Arabic Wavy Hamza Below
# Arabic Small High Ligatu..Arabic End Of Ayah
# Arabic Small High Rounde..Arabic Small High Madda
# Mandaic Affrication Mark..Mandaic Gemination Mark
# Devanagari Vowel Sign Oe..Devanagari Sign Nukta
# Devanagari Vowel Sign Aa..Devanagari Vowel Sign Aw
# Devanagari Stress Sign U..Devanagari Vowel Sign Uu
# Tibetan Subjoined Sign L..Tibetan Subjoined Letter
# Ethiopic Combining Gemin..Ethiopic Combining Gemin
# Batak Sign Tompi        ..Batak Panongonan
# Combining Double Inverte..Combining Right Arrowhea
# Tifinagh Consonant Joiner
# Brahmi Sign Candrabindu ..Brahmi Sign Visarga
# Brahmi Vowel Sign Aa    ..Brahmi Virama
# Source: DerivedGeneralCategory-6.1.0.txt
# Date: 2011-11-27, 05:10:22 GMT [MD]
# Arabic Number Sign      ..Arabic Sign Samvat
# Arabic Curly Fatha      ..Arabic Damma With Dot
# Sundanese Consonant Sign..Sundanese Consonant Sign
# Vedic Sign Ardhavisarga ..Vedic Tone Candra Above
# Combining Cyrillic Lette..Combining Cyrillic Payer
# Combining Cyrillic Letter Iotified E
# Meetei Mayek Vowel Sign ..Meetei Mayek Virama
# Chakma Sign Candrabindu ..Chakma Sign Visarga
# Chakma Vowel Sign A     ..Chakma Maayyaa
# Sharada Sign Candrabindu..Sharada Sign Visarga
# Sharada Vowel Sign Aa   ..Sharada Sign Virama
# Takri Sign Anusvara     ..Takri Sign Nukta
# Miao Sign Aspiration    ..Miao Vowel Sign Ng
# Miao Tone Right         ..Miao Tone Below
# Source: DerivedGeneralCategory-6.2.0.txt
# Date: 2012-05-20, 00:42:34 GMT [MD]
# Source: DerivedGeneralCategory-6.3.0.txt
# Date: 2013-07-05, 14:08:45 GMT [MD]
# Arabic Letter Mark
# Mongolian Free Variation..Mongolian Vowel Separato
# Left-to-right Isolate   ..Nominal Digit Shapes
# Source: DerivedGeneralCategory-7.0.0.txt
# Date: 2014-02-07, 18:42:12 GMT [MD]
# Arabic Number Sign      ..Arabic Number Mark Above
# Arabic Curly Fatha      ..Devanagari Sign Visarga
# Telugu Sign Combining Ca..Telugu Sign Visarga
# Kannada Sign Candrabindu..Kannada Sign Visarga
# Malayalam Sign Candrabin..Malayalam Sign Visarga
# Combining Doubled Circum..Combining Parentheses Ov
# Vedic Tone Ring Above   ..Vedic Tone Double Ring A
# Combining Dotted Grave A..Combining Up Tack Above
# Myanmar Sign Shan Saw
# Myanmar Sign Pao Karen T..Myanmar Sign Tai Laing T
# Coptic Epact Thousands Mark
# Combining Old Permic Let..Combining Old Permic Let
# Manichaean Abbreviation ..Manichaean Abbreviation
# Brahmi Number Joiner    ..Kaithi Sign Visarga
# Mahajani Sign Nukta
# Khojki Vowel Sign Aa    ..Khojki Sign Shadda
# Khudawadi Sign Anusvara ..Khudawadi Sign Virama
# Grantha Sign Candrabindu..Grantha Sign Visarga
# Grantha Sign Nukta
# Grantha Vowel Sign Aa   ..Grantha Vowel Sign Vocal
# Grantha Vowel Sign Ee   ..Grantha Vowel Sign Ai
# Grantha Vowel Sign Oo   ..Grantha Sign Virama
# Grantha Au Length Mark
# Grantha Vowel Sign Vocal..Grantha Vowel Sign Vocal
# Combining Grantha Digit ..Combining Grantha Digit
# Combining Grantha Letter..Combining Grantha Letter
# Tirhuta Vowel Sign Aa   ..Tirhuta Sign Nukta
# Siddham Vowel Sign Aa   ..Siddham Vowel Sign Vocal
# Siddham Vowel Sign E    ..Siddham Sign Nukta
# Modi Vowel Sign Aa      ..Modi Sign Ardhacandra
# Bassa Vah Combining High..Bassa Vah Combining High
# Pahawh Hmong Mark Cim Tu..Pahawh Hmong Mark Cim Ta
# Duployan Thick Letter Se..Duployan Double Mark
# Shorthand Format Letter ..Shorthand Format Up Step
# Mende Kikakui Combining ..Mende Kikakui Combining
# Source: DerivedGeneralCategory-8.0.0.txt
# Date: 2015-02-13, 13:47:11 GMT [MD]
# Arabic Turned Damma Belo..Devanagari Sign Visarga
# Combining Ligature Left ..Combining Cyrillic Titlo
# Sharada Sign Nukta      ..Sharada Extra Short Vowe
# Grantha Sign Combining A..Grantha Sign Visarga
# Siddham Vowel Sign Alter..Siddham Vowel Sign Alter
# Ahom Consonant Sign Medi..Ahom Sign Killer
# Signwriting Head Rim    ..Signwriting Air Sucking
# Signwriting Mouth Closed..Signwriting Excitement
# Signwriting Upper Body Tilting From Hip Joints
# Signwriting Location Head Neck
# Signwriting Fill Modifie..Signwriting Fill Modifie
# Signwriting Rotation Mod..Signwriting Rotation Mod
# Emoji Modifier Fitzpatri..Emoji Modifier Fitzpatri
# Source: DerivedGeneralCategory-9.0.0.txt
# Date: 2016-06-01, 10:34:26 GMT
# Arabic Small High Word A..Devanagari Sign Visarga
# Mongolian Letter Ali Gal..Mongolian Letter Ali Gal
# Combining Deletion Mark ..Combining Right Arrowhea
# Saurashtra Consonant Sig..Saurashtra Sign Candrabi
# Khojki Sign Sukun
# Newa Vowel Sign Aa      ..Newa Sign Nukta
# Bhaiksuki Vowel Sign Aa ..Bhaiksuki Vowel Sign Voc
# Bhaiksuki Vowel Sign E  ..Bhaiksuki Sign Virama
# Marchen Subjoined Letter..Marchen Subjoined Letter
# Marchen Subjoined Letter..Marchen Sign Candrabindu
# Combining Glagolitic Let..Combining Glagolitic Let
# Adlam Alif Lengthener   ..Adlam Nukta
# Source: DerivedGeneralCategory-10.0.0.txt
# Date: 2017-03-08, 08:41:49 GMT
# Gujarati Sign Sukun     ..Gujarati Sign Two-circle
# Malayalam Sign Combining..Malayalam Sign Visarga
# Malayalam Sign Vertical ..Malayalam Sign Circular
# Vedic Sign Atikrama     ..Vedic Tone Double Ring A
# Combining Dotted Grave A..Combining Wide Inverted
# Zanabazar Square Vowel S..Zanabazar Square Vowel L
# Zanabazar Square Final C..Zanabazar Square Sign Vi
# Zanabazar Square Cluster..Zanabazar Square Cluster
# Zanabazar Square Subjoiner
# Soyombo Vowel Sign I    ..Soyombo Vowel Length Mar
# Soyombo Final Consonant ..Soyombo Subjoiner
# Masaram Gondi Vowel Sign..Masaram Gondi Vowel Sign
# Masaram Gondi Vowel Sign E
# Masaram Gondi Vowel Sign..Masaram Gondi Virama
# Masaram Gondi Ra-kara
# Source: DerivedGeneralCategory-11.0.0.txt
# Date: 2018-02-21, 05:34:04 GMT
# Nko Dantayalan
# Arabic Small Low Waw    ..Devanagari Sign Visarga
# Bengali Sandhi Mark
# Telugu Sign Combining Ca..Telugu Sign Combining An
# Devanagari Vowel Sign Ay
# Hanifi Rohingya Sign Har..Hanifi Rohingya Sign Tas
# Sogdian Combining Dot Be..Sogdian Combining Stroke
# Kaithi Number Sign Above
# Chakma Vowel Sign Aa    ..Chakma Vowel Sign Ei
# Sharada Sandhi Mark     ..Sharada Extra Short Vowe
# Combining Bindu Below   ..Grantha Sign Nukta
# Newa Sandhi Mark
# Dogra Vowel Sign Aa     ..Dogra Sign Nukta
# Gunjala Gondi Vowel Sign..Gunjala Gondi Vowel Sign
# Gunjala Gondi Vowel Sign..Gunjala Gondi Virama
# Makasar Vowel Sign I    ..Makasar Vowel Sign O
# Source: DerivedGeneralCategory-12.0.0.txt
# Date: 2019-01-22, 08:18:28 GMT
# Lao Vowel Sign I        ..Lao Semivowel Sign Lo
# Vedic Tone Candra Above
# Nandinagari Vowel Sign A..Nandinagari Vowel Sign V
# Nandinagari Vowel Sign E..Nandinagari Sign Virama
# Nandinagari Vowel Sign Prishthamatra E
# Egyptian Hieroglyph Vert..Egyptian Hieroglyph End
# Miao Sign Consonant Modifier Bar
# Miao Sign Aspiration    ..Miao Vowel Sign Ui
# Nyiakeng Puachue Hmong T..Nyiakeng Puachue Hmong T
# Wancho Tone Tup         ..Wancho Tone Koini
# Source: DerivedGeneralCategory-12.1.0.txt
# Date: 2019-03-10, 10:53:08 GMT
# Source: DerivedGeneralCategory-13.0.0.txt
# Date: 2019-10-21, 14:30:32 GMT
# Oriya Sign Overline     ..Oriya Au Length Mark
# Sinhala Sign Candrabindu..Sinhala Sign Visargaya
# Combining Doubled Circum..Combining Latin Small Le
# Syloti Nagri Sign Alternate Hasanta
# Yezidi Combining Hamza M..Yezidi Combining Madda M
# Sharada Vowel Sign Prish..Sharada Sign Inverted Ca
# Dives Akuru Vowel Sign A..Dives Akuru Vowel Sign E
# Dives Akuru Vowel Sign A..Dives Akuru Vowel Sign O
# Dives Akuru Sign Anusvar..Dives Akuru Virama
# Dives Akuru Medial Ya
# Dives Akuru Medial Ra   ..Dives Akuru Sign Nukta
# Khitan Small Script Filler
# Vietnamese Alternate Rea..Vietnamese Alternate Rea
# Source: DerivedGeneralCategory-14.0.0.txt
# Date: 2021-07-10, 00:35:08 GMT
# Arabic Pound Mark Above ..Arabic Piastre Mark Abov
# Arabic Small High Word A..Arabic Half Madda Over M
# Arabic Small High Farsi ..Devanagari Sign Visarga
# Telugu Sign Nukta
# Tagalog Vowel Sign I    ..Tagalog Sign Pamudpod
# Combining Dotted Grave A..Combining Right Arrowhea
# Old Uyghur Combining Dot..Old Uyghur Combining Two
# Brahmi Sign Old Tamil Virama
# Brahmi Vowel Sign Old Ta..Brahmi Vowel Sign Old Ta
# Kaithi Vowel Sign Vocalic R
# Znamenny Combining Mark ..Znamenny Combining Mark
# Znamenny Combining Tonal..Znamenny Priznak Modifie
# Toto Sign Rising Tone
# Source: DerivedGeneralCategory-15.0.0.txt
# Date: 2022-04-26, 23:14:35 GMT
# Kannada Sign Combining Anusvara Above Right
# Lao Tone Mai Ek         ..Lao Yamakkan
# Arabic Small Low Word Sa..Arabic Small Low Word Ma
# Khojki Vowel Sign Vocalic R
# Kawi Sign Candrabindu   ..Kawi Sign Anusvara
# Kawi Sign Visarga
# Kawi Vowel Sign Aa      ..Kawi Vowel Sign Vocalic
# Kawi Vowel Sign E       ..Kawi Conjoiner
# Egyptian Hieroglyph Vert..Egyptian Hieroglyph Mirr
# Egyptian Hieroglyph Modi..Egyptian Hieroglyph Modi
# Combining Cyrillic Small Letter Byelorussian-ukr
# Nag Mundari Sign Muhor  ..Nag Mundari Sign Sutuh
# Source: DerivedGeneralCategory-15.1.0.txt
# Date: 2023-07-28, 23:34:02 GMT
# Source: DerivedGeneralCategory-16.0.0.txt
# Date: 2024-04-30, 21:48:17 GMT
# (nil)                   ..Arabic Half Madda Over M
# (nil)                   ..Arabic Small Low Word Ma
# Source: DerivedGeneralCategory-17.0.0.txt
# Date: 2025-07-24, 00:12:50 GMT
# Combining Doubled Circum..(nil)
# math.sumprod is available for Python 3.12+
# Extended precision algorithms from T. J. Dekker,
# "A Floating-Point Technique for Extending the Available Precision"
# https://csclub.uwaterloo.ca/~pbarfuss/dekker1971.pdf
# Formulas: (5.5) (5.6) and (5.8).  Code: mul12()
# Veltkamp constant = 2.0 ** 27 + 1
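The Veltkamp split and Dekker's mul12 described above can be sketched as below. `_split` and `mul12` are illustrative names, not the module's actual API; the code assumes IEEE-754 doubles with round-to-nearest and no overflow during splitting.

```python
_VELTKAMP = 2.0 ** 27 + 1.0  # Veltkamp splitting constant for IEEE-754 doubles

def _split(x):
    # Split x into hi + lo, where hi carries the top half of the significand
    t = _VELTKAMP * x
    hi = t - (t - x)
    return hi, x - hi

def mul12(x, y):
    # Dekker's exact product: returns (p, e) with p + e == x * y exactly,
    # provided no overflow occurs during splitting
    xh, xl = _split(x)
    yh, yl = _split(y)
    p = x * y
    e = ((xh * yh - p) + xh * yl + xl * yh) + xl * yl
    return p, e
```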
# Work around https://bugs.python.org/issue38525
# Normalize the slice's arguments
# If either the start or stop index is negative, we'll need to cache
# the rest of the iterable in order to slice from the right side.
# Otherwise we'll need to find the rightmost index and cache to that
# point.
# This is the "most beautiful of the fast variants" of this function.
# If you think you can improve on it, please ensure that your version
# is both 10x faster and 10x more beautiful.
# Algorithm: https://w.wiki/Qai
# Yield the permutation we have
# Find the largest index i such that A[i] < A[i + 1]
# Find the largest index j greater than i such that A[i] < A[j]
# Swap the value of A[i] with that of A[j], then reverse the
# sequence from A[i + 1] to form the new permutation
# A[i + 1:][::-1]
# Algorithm: modified from the above
# Split A into the first r items and the last r items
# Starting from the right, find the first index of the head with
# value smaller than the maximum value of the tail - call it i.
# Starting from the left, find the first value of the tail
# with a value greater than head[i] and swap.
# If we didn't find one, start from the right and find the first
# index of the head with a value greater than head[i] and swap.
# Reverse head[i + 1:] and swap it with tail[:r - (i + 1)]
# head[i + 1:][::-1]
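The basic stepping procedure above (find the largest i with A[i] < A[i + 1], find the largest j > i with A[i] < A[j], swap, then reverse the tail) can be sketched as a standalone function; `next_permutation` is an illustrative name, not the library's API.

```python
def next_permutation(a):
    # Advance list `a` to its next lexicographic permutation in place.
    # Returns False when `a` is already the last permutation.
    i = len(a) - 2
    # Find the largest index i such that a[i] < a[i + 1]
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        return False
    # Find the largest index j greater than i such that a[i] < a[j]
    j = len(a) - 1
    while a[i] >= a[j]:
        j -= 1
    # Swap a[i] with a[j], then reverse the suffix a[i + 1:]
    a[i], a[j] = a[j], a[i]
    a[i + 1:] = a[i + 1:][::-1]
    return True
```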
# functools.partial(_partial, ... )
# interleave(repeat(e), iterable) -> e, x_0, e, x_1, e, x_2...
# islice(..., 1, None) -> x_0, e, x_1, e, x_2...
# interleave(filler, chunks) -> [e], [x_0, x_1], [e], [x_2, x_3]...
# islice(..., 1, None) -> [x_0, x_1], [e], [x_2, x_3]...
# flatten(...) -> x_0, x_1, e, x_2, x_3...
# Generate first window
# Deal with the first window not being full
# Create the filler for the next windows. The padding ensures
# we have just enough elements to fill the last window.
# Generate the rest of the windows
# The length-1 substrings
# And the rest
# If we've cached some items that match the target value, emit
# the first one and evict it from the cache.
# Otherwise we need to advance the parent iterator to search for
# a matching item, caching the rest.
# sort iterables by length, descending
# the longest iterable is the primary one (Bresenham: the longest
# distance along an axis)
# update errors for each secondary iterable
# those iterables for which the error is negative are yielded
# ("diagonal step" in Bresenham)
# equivalent to `list.pop` but slightly faster
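The Bresenham-style distribution described above can be sketched roughly as follows, assuming sized sequences. This is a simplified illustration, not the library's actual implementation; the exact interleaving positions may differ from the library's output.

```python
def interleave_evenly(iterables):
    # Interleave sequences so shorter ones are spread evenly along the longest
    seqs = sorted(iterables, key=len, reverse=True)
    primary, secondaries = seqs[0], seqs[1:]
    n = len(primary)  # the longest distance along an axis, in Bresenham terms
    iters = [iter(s) for s in secondaries]
    errors = [n // 2] * len(secondaries)
    for item in primary:
        yield item
        for k, s in enumerate(secondaries):
            # update the error term; a negative error is a "diagonal step"
            errors[k] -= len(s)
            if errors[k] < 0:
                yield next(iters[k])
                errors[k] += n
```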
# Add our first node group, treat the iterable as a single node
# Check if beyond max level
# Check if done iterating
# Otherwise try to create child nodes
# Save our current location
# Append the new child node
# Break to process child node
# convert the iterable argument into an iterator so its contents can
# be consumed by islice in case it is a generator
# While elements exist, produce slices of size n
</gr replace is invalid, correcting below>
# Ensure the first batch is at least size n then iterate
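The islice-based chunking described above amounts to something like this sketch (`chunked` is an illustrative name; the last chunk may be shorter than n):

```python
from itertools import islice

def chunked(iterable, n):
    # Convert to an iterator so islice consumes one underlying stream,
    # even when the argument is a generator
    it = iter(iterable)
    # While elements exist, produce lists of size n
    while chunk := list(islice(it, n)):
        yield chunk
```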
# if there is no key function, the key argument to sorted is an
# itemgetter
# if there is a key function, call it with the items at the offsets
# specified by the key function as arguments
# if key_list contains a single item, pass the item at that offset
# as the only argument to the key function
# if key_list contains multiple items, use itemgetter to return a
# tuple of items, which we pass as *args to the key function
# empty iterable, e.g. zip([], [], [])
# spy returns a one-length iterable as head
# If we have an iterable like iter([(1, 2, 3), (4, 5), (6,)]),
# the second unzipped iterable fails at the third tuple since
# it tries to access (6,)[1].
# Same with the third unzipped iterable and the second tuple.
# To support these "improperly zipped" iterables, we suppress
# the IndexError, which just stops the unzipped iterables at
# first length mismatch.
# Allow distance=0 mainly for testing that it reproduces results with map()
# True if both empty
# -self._len < key.start < self._len
# -self._len < key.stop < self._len
# distance > 0 and step > 0: regular euclidean division
# Consume all but the last -start items
# Adjust start to be positive
# Adjust stop to be positive
# Slice the cache
# pop and yield the item.
# We don't want to use an intermediate variable, since
# it would extend the lifetime of the current item
# just pop and discard the item
# Advance to the start position
# When stop is negative, we have to carry -stop items while
# iterating
# When both start and stop are positive we have the normal case
# Consume all but the last items
# If start and stop are both negative they are comparable and
# we can just slice. Otherwise we can adjust start to be negative
# and then slice.
# Advance to the stop position
# stop is positive, so if start is negative they are not comparable
# and we need the rest of the items.
# stop is None and start is positive, so we just need items up to
# the start index.
# Both stop and start are positive, so they are comparable.
# See https://sites.google.com/site/bbayles/index/decorator_factory for
# notes on how this works.
# Save the substitutes iterable, since it's used more than once
# Add padding such that the number of windows matches the length of the
# iterable
# If the current window matches our predicate (and we haven't hit
# our maximum number of replacements), splice in the substitutes
# and then consume the following windows that overlap with this one.
# For example, if the iterable is (0, 1, 2, 3, 4...)
# and the window size is 2, we have (0, 1), (1, 2), (2, 3)...
# If the predicate matches on (0, 1), we need to zap (0, 1) and (1, 2)
# If there was no match (or we've reached the replacement limit),
# yield the first item from the window.
# if n not specified materialize everything
# materialize up to n
# return number materialized up to n
# Create new chunk
# Check to see whether we're at the end of the source iterable
# Fill previous chunk's cache
# Algorithm L in the 1994 paper by Kim-Hung Li:
# "Reservoir-Sampling Algorithms of Time Complexity O(n(1+log(N/n)))".
# Implementation of "A-ExpJ" from the 2006 paper by Efraimidis et al.:
# "Weighted random sampling with a reservoir".
# Log-transform for numerical stability when weights are very small or large
# Fill up the reservoir (collection of samples) with the first `k`
# weight-keys and elements, then heapify the list.
# The number of jumps before changing the reservoir is a random variable
# with an exponential distribution. Sample it using random() and logs.
# The notation here is consistent with the paper, but we store
# the weight-keys in log-space for better numerical stability.
# generate U(t_w, 1)
# Advance *i* steps ahead and consume an element
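A minimal sketch of Algorithm L for the unweighted case follows; `reservoir_sample` is an illustrative name, not the library's API. The weighted A-ExpJ variant described above additionally keeps log-space weight-keys in a heap.

```python
import random
from itertools import islice
from math import exp, log

def reservoir_sample(iterable, k, rng=random):
    # Fill the reservoir with the first k items
    it = iter(iterable)
    reservoir = list(islice(it, k))
    if len(reservoir) < k:
        return reservoir
    w = exp(log(rng.random()) / k)  # running acceptance weight
    while True:
        # The number of items to skip before the next replacement is
        # geometrically distributed; sample it using random() and logs
        skip = int(log(rng.random()) / log(1.0 - w))
        try:
            for _ in range(skip):
                next(it)
            reservoir[rng.randrange(k)] = next(it)
        except StopIteration:
            return reservoir
        w *= exp(log(rng.random()) / k)
```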
# Lazily import concurrent.futures
# factorial(n)>0, and r<n so perm(n,r) is never zero
# Python versions below 3.8 don't have math.comb
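On interpreters without `math.comb`, an equivalent can be built from `math.factorial`; the `comb` below is a hypothetical fallback shown for illustration.

```python
from math import factorial

def comb(n, r):
    # Fallback for math.comb (added in Python 3.8)
    if not 0 <= r <= n:
        return 0
    return factorial(n) // (factorial(r) * factorial(n - r))
```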
# Initialize a buffer to process the chunks while keeping
# some back to fill any underfilled chunks
# Append items until we have a completed chunk
# Check if any chunks need additional processing
# Chunks are either size `full_size <= n` or `partial_size = full_size - 1`
# Yield chunks of full size
# Yield chunks of partial size
# Different branches depending on the presence of key. This avoids a lot
# of unimportant copies that would otherwise slow down the "key=None"
# branch significantly.
# This is based on "Algorithm H" from section 7.2.1.1, page 20.
# a holds the indexes of the source iterables for the n-tuple to be yielded
# f is the array of "focus pointers"
# o is the array of "directions"
# At and above 688,383, the lb/ub spread is under 0.003 * p_n.
# https://en.wikipedia.org/wiki/Prime-counting_function#Inequalities
# Search from the midpoint and return the first odd prime
# zip with strict is available for Python 3.10+
# heapq max-heap functions are available for Python 3.14+
# Use functions that consume iterators at C speed.
# feed the entire iterator into a zero-length deque
# advance to the empty slice starting at position n
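Both C-speed consumption tricks mentioned above appear in the classic `consume` recipe from the itertools documentation, sketched here:

```python
from collections import deque
from itertools import islice

def consume(iterator, n=None):
    # Advance `iterator` n steps ahead; if n is None, consume it entirely
    if n is None:
        # feed the entire iterator into a zero-length deque
        deque(iterator, maxlen=0)
    else:
        # advance to the empty slice starting at position n
        next(islice(iterator, n, n), None)
```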
# Check whether the iterables are all the same size.
# All sizes are equal, we can use the built-in zip.
# If any one of the iterables didn't have a length, start reading
# them until one runs out.
# Algorithm credited to George Sakkis
# This implementation comes from an older version of the itertools
# documentation.  While the newer implementation is a bit clearer,
# this one was kept because the inlined window logic is faster
# and it avoids an unnecessary deque-to-tuple conversion.
# This deviates from the itertools documentation recipe - see
# https://github.com/more-itertools/more-itertools/issues/889
# Fast path for small, non-zero values of n.
# Normal path for other values of n.
# This recipe differs from the one in itertools docs in that it
# applies list() after each call to convolve().  This avoids
# hitting stack limits with nested generators.
# Slow path for general iterables
# Fast path for sequences
# This implementation comes from an older version of the itertools
# documentation.  The newer implementation is easier to read but is
# less lazy.
# Return a factor of n using Pollard's rho algorithm.
# Efficient when n is odd and composite.
# Corner case reduction
# Trial division reduction
# Pollard's rho reduction
# Miller–Rabin primality test: https://oeis.org/A014233
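A minimal sketch of the rho step with Floyd cycle detection follows; this is not the module's actual implementation. A full factorizer retries with a different constant c when a run fails, and handles even numbers and primes separately via trial division and Miller-Rabin.

```python
from math import gcd

def pollard_rho(n, x0=2, c=1):
    # Floyd cycle detection on x -> (x*x + c) % n;
    # efficient when n is odd and composite
    x = y = x0
    d = 1
    while d == 1:
        x = (x * x + c) % n       # tortoise: one step
        y = (y * y + c) % n       # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    # d == n means the cycle closed without finding a factor; retry with new c
    return d if d != n else None
```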
# Separate instance of Random() that doesn't share state
# with the default user instance of Random().
# max-heap
# min-heap (same size as or one smaller than lo)
# max-heap (actually a minheap with negated values)
# handle ``pygmentize -L``
# print version
# handle ``pygmentize -H``
# parse -O options
# parse -P options
# encodings
# handle ``pygmentize -N``
# handle ``pygmentize -C``
# handle ``pygmentize -S``
# if no -S is given, -a is not allowed
# parse -F options
# -x: allow custom (eXternal) lexers and formatters
# select lexer
# given by name?
# custom lexer, located relative to user's cwd
# This can happen on Windows: If the lexername is
# C:\lexer.py -- return to normal load path in that case
# read input code
# do we have to guess the lexer?
# treat stdin as full file (-s support is later)
# read code from terminal, always in binary mode since we want to
# decode ourselves and be tolerant with it
# use .buffer to get a binary stream
# else the lexer will do the decoding
# -s option needs a lexer with -l
# process filters
# select formatter
# custom formatter, located relative to user's cwd
# Same logic as above for custom lexer
# determine output encoding if not explicitly selected
# output file? use lexer encoding for now (can still be None)
# else use terminal encoding
# provide coloring under Windows, if possible
# unfortunately colorama doesn't support binary streams on Py3
# When using the LaTeX formatter and the option `escapeinside` is
# specified, we need a special lexer which collects escaped text
# before running the chosen language lexer.
# ... and do it!
# process whole input as per normal...
# line by line processing of stdin (eg: for 'tail -f')...
# someone closed our stdout, e.g. by quitting a pager.
# extract relevant file and position info
# class: 'w'
# class: 'err'
# class: 'c'
# class: 'cm'
# class: 'cp'
# class: 'c1'
# class: 'cs'
# class: 'k'
# class: 'kc'
# class: 'kd'
# class: 'kn'
# class: 'kp'
# class: 'kr'
# class: 'kt'
# class: 'o'
# class: 'ow'
# class: 'n'
# class: 'na'
# class: 'nb'
# class: 'bp'
# class: 'nc'
# class: 'no'
# class: 'nd'
# class: 'ni'
# class: 'ne'
# class: 'nf'
# class: 'py'
# class: 'nl'
# class: 'nn'
# class: 'nx'
# class: 'nt'
# class: 'nv'
# class: 'vc'
# class: 'vg'
# class: 'vi'
# class: 'm'
# class: 'mf'
# class: 'mh'
# class: 'mi'
# class: 'il'
# class: 'mo'
# class: 's'
# class: 'sb'
# class: 'sc'
# class: 'sd'
# class: 's2'
# class: 'se'
# class: 'sh'
# class: 'si'
# class: 'sx'
# class: 'sr'
# class: 's1'
# class: 'ss'
# class: 'g'
# class: 'gd',
# class: 'ge'
# class: 'gr'
# class: 'gh'
# class: 'gi'
# class: 'go'
# class: 'gp'
# class: 'gs'
# class: 'gu'
# class: 'gt'
# DimGray
# Maroon
# Red
# No corresponding class for the following:
# class:  ''
# class 'x'
# class: 'ow' - like keywords
# class: 'p'
# class: 'na' - to be revised
# class: 'nc' - to be revised
# class: 'no' - to be revised
# class: 'nd' - to be revised
# class: 'nn' - to be revised
# class: 'nt' - like a keyword
# class: 'nv' - to be revised
# class: 'vc' - to be revised
# class: 'vg' - to be revised
# class: 'vi' - to be revised
# class: 'l'
# class: 'ld'
# class: 'sd' - like a comment
# class: 'ges'
# In Obj-C code this token is used to colour Cocoa types
# Workaround for a BUG here: lexer treats multiline method signatures as labels
# Don't show it in the gallery, it's intended for LilyPond
# input only and doesn't show good output on Python code.
#"#911520",
# includes durations
# A bare 11 is not distinguishable from a number, so we highlight
# the same.
#Text:                     "", # class:  ''
# because special names such as Name.Class, Name.Function, etc.
# are not recognized as such later in the parsing, we choose them
# to look the same as ordinary variables.
# since the tango light blue does not show up well in text, we choose
# a pure blue instead.
# class: 'gd'
#Keyword:                   "bold #AA22FF",
#String.Symbol:             "#B8860B",
# vars are defined to match the defs in
# - [GitHub's VS Code theme](https://github.com/primer/github-vscode-theme) and
# - [Primer styles](https://github.com/primer/primitives)
# has transparency in VS Code theme as `colors.codemirror.activelineBg`
#: Map token types to a tuple of color values for light and dark
#: backgrounds.
#compat w/ ansi
# compat w/ ansi
# italic
# bold
# underline (\x1F) not supported
# backgrounds (\x03FF,BB) not supported
# actual color - may have issues with ircformat("red", "blah")+"10" type stuff
## Small explanation of the mess below :)
# The previous version of the LaTeX formatter just assigned a command to
# each token type defined in the current style.  That obviously is
# problematic if the highlighted code is produced for a different style
# than the style commands themselves.
# This version works much like the HTML formatter which assigns multiple
# CSS classes to each <span> tag, from the most specific to the least
# specific token type, thus falling back to the parent token type if one
# is not defined.  Here, the classes are there too and use the same short
# forms given in token.STANDARD_TYPES.
# Highlighted code now only uses one custom command, which by default is
# \PY and selectable by the commandprefix option (and in addition the
# escapes \PYZat, \PYZlb and \PYZrb which haven't been renamed for
# backwards compatibility purposes).
# \PY has two arguments: the classes, separated by +, and the text to
# render in that style.  The classes are resolved into the respective
# style commands by magic, which serves to ignore unknown classes.
# The magic macros are:
# * \PY@it, \PY@bf, etc. are unconditionally wrapped around the text
# * \PY@reset resets \PY@it etc. to do nothing.
# * \PY@toks parses the list of classes, using magic inspired by the
# * \PY@tok processes one class, calling the \PY@tok@classname command
# * \PY@tok@classname sets the \PY@it etc. to reflect the chosen style
# * \PY resets the style, parses the classnames and then calls \PY@do.
# Tip: to read this code, print it out in substituted form using e.g.
# >>> print STYLE_TEMPLATE % {'cp': 'PY'}
# TODO: add support for background colors
# Try to guess comment starting lexeme and escape it ...
# ... but do not escape inside comment.
# Only escape parts not inside a math environment.
# not in current style
# map known existing encodings from the LaTeX distribution
# find and remove all the escape tokens (replace with an empty string)
# this is very similar to DelegatingLexer.get_tokens_unprocessed.
# style color is the css value 'inherit'
# default to ansi bright-black
# style color is assumed to be a hex triplet as other
# colors in pygments/style.py
# empty strings, should give a small performance improvement
# escape text
# ASCII character
# single unicode escape sequence
# RTF limits unicode to 16 bits.
# Force surrogate pairs
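The escaping rules above (ASCII passes through; BMP characters become a single signed 16-bit \uN? escape; astral characters become a UTF-16 surrogate pair) can be sketched as below. `rtf_escape` is an illustrative helper, not the formatter's actual method.

```python
def rtf_escape(text):
    out = []
    for ch in text:
        cp = ord(ch)
        if ch in '\\{}':
            out.append('\\' + ch)   # RTF control characters
        elif cp < 0x80:
            out.append(ch)          # plain ASCII passes through
        elif cp < 0x10000:
            # \uN takes a *signed* 16-bit decimal value
            out.append('\\u%d?' % (cp - 0x10000 if cp > 0x7FFF else cp))
        else:
            # RTF limits unicode to 16 bits: force a UTF-16 surrogate pair
            v = cp - 0x10000
            hi = 0xD800 + (v >> 10)
            lo = 0xDC00 + (v & 0x3FF)
            out.append('\\u%d?\\u%d?' % (hi - 0x10000, lo - 0x10000))
    return ''.join(out)
```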
# rtf 1.8 header
# color table
# font and fontsize
# ensure Libre Office Writer imports and renders consecutive
# space characters the same width, needed for line numbering.
# https://bugs.documentfoundation.org/show_bug.cgi?id=144050
# first pass of tokens to count lines, needed for line numbering
# for copying the token source generator
# width of line number strings (for padding with spaces)
# highlight stream
# complete line of input
# close line highlighting
# newline in RTF file after closing }
# Import this carefully
# For some unknown reason every font calls it something different
# A sane default for modern systems
# If we get here, we checked all registry keys and had no luck
# We can be in one of two situations now:
# * All key lookups failed. In this case lookuperror is None and we
# * At least one lookup failed with a FontNotFound error. In this
# Pillow >= 9.2.0
# Required by the pygments mapper
# let pygments.format() do the right thing
# Read the style
# Image options
# The fonts
# Line number options
# TODO: make sure tab expansion happens earlier in the chain.  It
# really ought to be done on the input, as to do it right here is
# quite complex.
# print lines
# add a line for each extra line in the value
# Highlight
# see deprecations https://pillow.readthedocs.io/en/stable/releasenotes/9.2.0.html#font-size-and-offset-methods
# Add one formatter per format, so that the "-f gif" option gives the correct result
# when used in pygmentize.
# Check if the color can be shortened from 6 to 3 characters
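The 6-to-3 shortening check amounts to verifying that each channel doubles its digit; `shorten_hex` is an illustrative helper name.

```python
def shorten_hex(color):
    # 'aabbcc' -> 'abc' when every channel repeats its digit
    if len(color) == 6 and color[0::2] == color[1::2]:
        return color[0] + color[2] + color[4]
    return color
```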
# compatibility with <= 0.7
# save len(ttype) to enable ordering the styles by
# hierarchy (necessary for CSS cascading rules!)
# it's an absolute filename
# pseudo files, e.g. name == '<fdopen>'
# write CSS file only if noclobber_cssfile isn't given as an option.
# If a filename was specified, we can't put it into the code table as it
# would misalign the line numbers. Hence we emit a separate row for it.
# in case you wonder about the seemingly redundant <div> here: since the
# content in the other cell also is wrapped in a div, some browsers in
# some configurations seem to mess up the formatting...
# need a list of lines since we need the width of a single number :(
# subtract 1 since we have to increment i *before* yielding
# the empty span here is to keep leading empty lines from being
# ignored by HTML parsers
# for all but the last line
# Also check for part being non-empty, so we avoid creating
# empty <span> tags
# both are the same, or the current part was empty
# for the last line
# else we neither have to open a new span nor set lspan
# i + 1 because Python indexes start at 0
# As a special case, we wrap line numbers before line highlighting
# so the line numbers get wrapped in the highlighting tag.
# self.colorscheme is a dict containing usually generic types, so we
# have to walk the tree of dots.  The base Token type must be a key,
# even if it's empty string, as in the default above.
# extract fg color code.
# extract fg color code, add 10 for bg.
# build an RGB-to-256 color conversion table
# convert selected style's colors to term. colors
# strip the `ansi/#ansi` part and look up code
# get foreground from ansicolor if set
# outfile.write( "<" + str(ttype) + ">" )
# Like TerminalFormatter, add "reset colors" escape sequence
# on newline.
# outfile.write( '#' + str(ttype) + '#' )
# ottype = ttype
# outfile.write( '!' + str(ottype) + '->' + str(ttype) + '!' )
# We ignore self.encoding if it is set, since it gets set for lexer
# and formatter if given with -Oencoding on the command line.
# The RawTokenFormatter outputs only ASCII. Override here.
# there are no common BBcodes for background-color and border
# ==========
# '⍝' is traditional; '#' is supported by GNU APL and NGN (but not Dyalog)
# Strings
# supported by NGN APL
# Punctuation
# ===========
# This token type is used for diamond and parenthesis
# but not for bracket and ; (see below)
# Array indexing
# Since this token type is very important in APL, it is not included in
# the punctuation token type but rather in the following one
# Distinguished names
# ===================
# following IBM APL2 standard
# Labels
# ======
# (r'[A-Za-zΔ∆⍙][A-Za-zΔ∆⍙_¯0-9]*:', Name.Label),
# Variables
# following IBM APL2 standard (with a leading _ ok for GNU APL and Dyalog)
# Numbers
# closest token type
# Constant
# ========
# Quad symbol
# Arrows left/right
# D-Fn
# Please note that keyword and operator are case insensitive.
# skip first newline
# if it is not a multipart
# find boundary
# some data has prefix text before first boundary
# process tokens of each body part
# bodypart
# boundary
# some data has suffix text after last boundary
# return if:
# get lexer
# folding
# Empty/comment line:
# Special directives:
# TODO, $GENERATE https://bind9.readthedocs.io/en/v9.18.14/chapter3.html#soa-rr
# Records:
# <domain-name> [<TTL>] [<class>] <type> <RDATA> [<comment>]
# <domain-name> [<class>] [<TTL>] <type> <RDATA> [<comment>]
# Parsing values:
# Parsing nested values (...):
# Array functions
# Date & Time functions
# Map functions
# Math functions
# Other functions
# Set functions
# String functions
# Type Conversion functions
# Domain-Specific functions
# rust allows a file to start with a shebang, but if the first line
# starts with #![ then it's not a shebang but a crate attribute.
# Whitespace and Comments
# Macro parameters
# Keywords
# Prelude (taken from Rust's src/libstd/prelude.rs)
# Path separators, so types don't catch them.
# Types in positions.
# Character literals
# Binary literals
# Octal literals
# Hexadecimal literals
# Decimal literals
# Lifetime names
# Operators and Punctuation
# Raw identifiers
# Misc
# Lone hashes: not used in Rust syntax, but allowed in macro
# arguments, most famously for quote::quote!()
# operators
# Bit operators
# Logical operators
# Relational operators
# String operators
# Numeric operators
# SCRIPT STATEMENTS
# control statements
# prefixes
# regular statements
# alias ... as ...
# comment fields ... using ...
# comment field ... with ...
# comment table ... with ...
# comment tables ... using ...
# ODBC CONNECT TO ...
# OLEDB CONNECT TO ...
# CUSTOM CONNECT TO ...
# LIB CONNECT TO ...
# Qualifiers
# Script functions
# Basic aggregation functions in the data load script
# Counter aggregation functions in the data load script
# Financial aggregation functions in the data load script
# Statistical aggregation functions in the data load script
# Statistical test functions
# Two independent samples t-tests
# Two independent weighted samples t-tests
# One sample t-tests
# One weighted sample t-tests
# One column format functions
# Weighted two-column format functions
# String aggregation functions in the data load script
# Synthetic dimension functions
# Color functions
# Conditional functions
# Counter functions
# Integer expressions of time
# Timestamp functions
# Make functions
# Other date functions
# Set time functions
# In... functions
# Start ... end functions
# Day numbering functions
# Exponential and logarithmic
# Count functions
# Field and selection functions
# File functions
# Financial functions
# Formatting functions
# General numeric functions
# Combination and permutation functions
# Modulo functions
# Parity functions
# Rounding functions
# Geospatial functions
# Interpretation functions
# Field functions
# Inter-record functions in the data load script
# Logical functions
# Mapping functions
# Mathematical functions
# NULL functions
# Basic range functions
# Counter range functions
# Statistical range functions
# Financial range functions
# Statistical distribution
# System functions
# Table functions
# System variables and constants
# see https://help.qlik.com/en-US/sense/August2021/Subsystems/Hub/Content/Sense_Hub/Scripting/work-with-variables-in-data-load-editor.htm
# System Variables
# value handling variables
# Currency formatting
# Number formatting
# Time formatting
# Error variables
# Other
# node
# quoted node
# Input expression terminator.
# Function definition operator.
# Accepts hostname with or without port.
# Line continuations
# Comments - inline and multiline
# Strings - single and double
# Functions (working)
# Event and Struct
# Numeric Literals
# Visibility and State Mutability
# Built-in Functions
# Built-in Variables and Attributes
# indexed keywords
# Other variable names and types
# Matches double underscores followed by word characters
# Generic names and variables
# The comment delimiters must not be adjacent to non-space characters.
# This means ( foo ) is a valid comment but (foo) is not. This also
# applies to nested comments.
# nested comments
# instructions
# delimiters
# integer
# raw string
# raw integer
# abs/rel pad
# macro
# label
# sublabel
# spacer
# literal zero page addr
# literal rel addr
# literal abs addr
# raw zero page addr
# raw relative addr
# raw absolute addr
# immediate jump
# conditional immediate jump
# include
# macro invocation, immediate subroutine
# 1. Does not start from "
# 2. Can start from ` and end with `, containing any character
# 3. Starts with underscore or { or } and have more than 1 character after it
# 4. Starts with letter, contains letters, numbers and underscores
# identifier followed by (
# list of elements
# list of in-built () functions
# with dot
# With scientific notation
# integer simple
# you can't generally find out what module a function belongs to if you
# have only its name. Because of this, here are some callback functions
# that recognize if a given function belongs to a specific module
# Comment: Double dash comment
# Comment: Double backslash comment
# Functions
# Keyword
# StringLiteral
# Column reference
# Measure reference
# Parenthesis
# Float Literal
# -- Hex Float
# -- DecimalFloat
# IntegerLiteral
# -- Binary
# -- Octal
# -- Hexadecimal
# -- Decimal
# Preprocessor
# (r'/[*](.|\n)*?[*]/', Comment),
# (r'//.*?\n', Comment, '#pop'),
# anonymous functions
# anonymous classes
# See the language specification:
# http://whiley.org/download/WhileyLanguageSpec.pdf
# Comments
# don't parse empty comment as doc comment
# "constant" & "type" are not keywords unless used in declarations
# "from" is not a keyword unless used with import
# standard library: https://github.com/Whiley/WhileyLibs/
# types defined in whiley.lang.Int
# whiley.lang.Any
# byte literal
# decimal literal
# match "1." but not ranges like "3..5"
# integer literal
# character literal
# string literal
# operators and punctuation
# unicode operators
# identifier
# literal with prepended base
# quoted atom
# Needs to not be followed by an atom.
# (r'=(?=\s|[a-zA-Z\[])', Operator),
# The don't-care variable
# function defn
# atom, characters
# This one includes !
# atom, graphics
# Visual Prolog also uses :-
# Directives
# Event handlers
# Message forwarding handler
# Execution-context methods
# Reflection
# DCGs and term expansion
# Entity
# Entity relations
# Flags
# Compiling, loading, and library paths
# Database
# Control constructs
# All solutions
# Multi-threading predicates
# Engine predicates
# Term unification
# Term creation and decomposition
# Evaluable functors
# Other arithmetic functors
# Term testing
# Term comparison
# Stream selection and control
# Character and byte input/output
# Term input/output
# Atomic term processing
# Implementation defined hooks functions
# Message sending operators
# External call
# Logic and control
# Sorting
# Bitwise functors
# Predicate aliases
# Arithmetic evaluation
# Arithmetic comparison
# DCG rules
# Mode operators
# Existential quantifier
# Atoms
# Double-quoted terms
# Conditional compilation directives
# Entity directives
# Predicate scope directives
# Other directives
# End of entity-opening directive
# Scope operator
# Whitespace:
# (r'--\s*|.*$', Comment.Doc),
# Lexemes:
# this has to come before the TH quote
# tuples and lists get special treatment in GHC
# ..
# promoted type operators
# lambda operator
# specials
# Constructor operators
# Other operators
# Import statements
# after "funclist" state
# import X as Y
# import X hiding (functions)
# import X (functions)
# import X
# (HACK, but it makes sense to push two instances, believe me)
# NOTE: the next four states are shared in the AgdaLexer; make sure
# any change is compatible with Agda as well or copy over and change
# Multiline Comments
# Allows multi-chars, incorrectly.
# Declaration
# Holes
# TODO: these don't match the comments in docs, remove.
# (r'--(?![!#$%&*+./<=>?@^|_~:\\]).*?$', Comment.Single),
# (r'{-', Comment.Multiline, 'comment'),
# bird-style
# latex-style
# keywords that are followed by a type
# keywords valid in a type
# builtin names and special names
# symbols that can be in an operator
# symbol boundary: an operator keyword should not be followed by any of these
# name boundary: a keyword should not be followed by any of these
# koka token abstractions
# main lexer
# go into type mode
# special sequences of tokens (we use ?: for non-capturing group as
# required by 'bygroups')
# keywords
# names
# literal string
# literals. No check for literal characters with len > 1
# type started by alias
# type started by struct
# type started by colon
# type nested in brackets: can contain parameters, comma etc.
# parameter name
# shared contents of a type
# need to match because names overlap...
# kinds
# type names
# Generic.Emph
# type keyword operators
# catchall
# comments and literals
# Yes, \U literals are 6 hex digits.
# numeric literals
# char literal
# tokens
# identifiers
# Types
# Main
# (specialName, Keyword.Reserved),
# Prefix Operators
# Infix Operators
# Variable Names
# Parens
# Not used by itself
# Jsonnet has no integers, only an IEEE754 64-bit float
# Omit : despite spec because it appears to be used as a field
# separator
# heading with pound prefix
# bulleted lists
# numbered lists
# text block
# escape
# underlines
# inline code
# general text, must come last!
# name(section) ["left_footer" ["center_header"]]
# At this time, no easily-parsed, definitive list of data types
# has been found in the MySQL source code or documentation. (The
# `sql/sql_yacc.yy` file is definitive but is difficult to parse.)
# Therefore these types are currently maintained manually.
# Some words in this list -- like "long", "national", "precision",
# and "varying" -- appear to only occur in combination with other
# data type keywords. Therefore they are included as separate words
# even though they do not naturally occur in syntax separately.
# This list is also used to strip data types out of the list of
# MySQL keywords, which is automatically updated later in the file.
# Numeric data types
# Date and time data types
# String data types
# Spatial data types
# JSON data types
# Everything below this line is auto-generated from the MySQL source code.
# Run this file in Python and it will update itself.
# MySQL source code
# Pull content from lex.h.
# Parse content in item_create.cc.
# Remove data types from the set of keywords.
# Line to start/end inserting
# marker: ###MARK###
# constant: {$some.constant}
# constant
# constant: {register:somevalue}
# whitespace
# INCLUDE_TYPOSCRIPT
# Language label or extension resource FILE:... or LLL:... or EXT:...
# Conditions
# Toplevel objects and _*
# Content objects
# Menu states
# Menu objects
# PHP objects
# (r'[0-9]*\.[0-9]+([eE][0-9]+)?[fd]?\s*(?:[^=])', Number.Float),
# Path to a resource
# Brackets and braces
# Constant: {$some.constant}
# Constant: {register:somevalue}
# Hex color: #ff0077
# Translated strings
# Cast expressions
# Closures
# Objects
# Menus
# Templates
# Nested blocks. When extensions are added, this is where they go.
# Properties and signals
# compatibility import
# XXX: those aren't global, but currently we know no way of defining
# stop label highlighting on next ";"
# abort function naming ``foo = Function(...)``
# if we are in a function block we count the open
# braces because otherwise it's impossible to
# determine the end of the modifier context
# if we are in a special block and a
# block ending keyword occurs (and the parenthesis
# is balanced) we end the current block context
# we are in a function block and the current name
# is in the set of registered modifiers. highlight
# it as pseudo keyword
# if we are in a property highlight some more
# modifiers
# if the last iteration set next_token_is_function
# to true we now want this name highlighted as
# function. so do that and reset the state
# Look if the next token is a dot. If yes it's
# not a function, but a class name and the
# part after the dot a function name
# it's not a dot, our job is done
# same for properties
# Highlight this token as label and add it
# to the list of known labels
# name is in list of known labels
# builtins are just builtins if the token
# before isn't a dot
# if the stack depth is deeper than once, pop
# save the dot!!!11
# Autogenerated
# Invalid DISPLAY causes this to be output:
# only keep first type for a given word
# PGP; *.gpg, *.pgp, and *.sig too, but those can be binary
# X.509; *.cer, *.crt, *.csr, and key etc too, but those can be binary
# SSH private keys
# `var` handled separately
# `interface` handled separately
# Unary
# Binary
# Binary augmented
# Comparison
# Patterns and assignment
# Calls and sends
# _char = _escape_chars + [('.', String.Char)]
# Void constants
# Bool constants
# Double constants
# Special objects
# Docstrings
# Apologies for the non-greedy matcher here.
# `var` declarations
# `interface` declarations
# method declarations
# All other declarations
# Quasiliterals
# Verb operators
# Safe scope constants
# Safe scope guards
# All other safe scope names
# Definite lexer errors
# It is definitely an error to have a char of width == 0.
# It is definitely an error to have a char of width > 1.
# The state of things coming into an interface.
# The state of things coming into a method.
# The state of things immediately following `var`.
# List of vendor prefixes obtained from:
# https://www.w3.org/TR/CSS21/syndata.html#vendor-keyword-history
# List of extended color keywords obtained from:
# https://drafts.csswg.org/css-color/#named-colors
# List of keyword values obtained from:
# http://cssvalues.com/
# List of other keyword values from other sources:
# List of functional notation and function keyword values:
# Note! Handle url(...) separately.
# List of units obtained from:
# https://www.w3.org/TR/css3-values/
# for transition-property etc.
# function-start may be entered recursively
# TODO: broken, and prone to infinite loops.
# (r'(?=[^;{}][;}])', Name.Attribute, 'attr'),
# (r'(?=[^;{}:]+:[^a-z])', Name.Attribute, 'attr'),
# field
# content
# lowlight
# mailbox
# domain
# IPv4
# Date time
# RFC-2047 encoded string
#: optional Comment or Whitespace
# For objdump-disassembled code, shouldn't occur in
# actual assembler input
# Address constants
# Registers
# Numeric constants
# File name & format:
# Section header
# Function labels
# (With offset)
# (Without offset)
# Code line with disassembled instructions
# Code line without raw instructions (objdump --no-show-raw-insn)
# Code line with ascii
# Continued code line, only raw opcodes without disassembled
# instruction
# Skipped a few bytes
# Relocation line
# Instruction Modifiers
# packedTypes
# baseTypes
# opaqueType
# Numeric Constant
# Regular keywords
# Integer types
# Before keywords, because keywords are valid label names :(...
# Attributes on basic blocks
# Basic Block Labels
# Stack references
# Subreg indices
# Virtual registers
# Reference to LLVM-IR global
# Reference to Intrinsic
# Comparison predicates
# Physical registers
# Assignment operator
# gMIR Opcodes
# Target independent opcodes
# ConstantInt values
# ConstantFloat values
# Bare immediates
# MMO's
# MIR Comments
# If we get here, assume it's a target instruction
# Everything else that isn't highlighted
# The integer constant from a ConstantInt value
# The floating point constant from a ConstantFloat value
# The bank or class if there is one
# The LLT if there is one
# The unassigned bank/class
# Scalar and pointer types
# IR references
# Comments are hashes at the YAML level
# Documents starting with | are LLVM-IR
# Other documents are MIR
# Consume everything else in one token for efficiency
# Documents end with '...' or '---'
# Delegate to the LlvmLexer
# Handle the simple attributes
# Handle the attributes we don't highlight inside
# Delegate the body block to the LlvmMirBodyLexer
# Consume everything else
# Documents end with '...' or '---'.
# We have to pop llvm_mir_body and llvm_mir
# The '...' is optional. If we didn't already find it then it isn't
# there. There might be a '---' instead though.
# Tasm uses the same file endings, but TASM is not as common as NASM, so
# we prioritize NASM higher by default
# Directives must be followed by whitespace, otherwise CPU will match
# cpuid for instance.
# Probably TASM
# T[A-Z][a-z] is more of a convention. Lexer should filter out STRUC definitions
# and then 'add' them to datatype somehow.
# Do not match newline when it's preceded by a backslash
# See above
# comments in GAS start with "#"
# Regexes yo
# see https://docs.julialang.org/en/v1/manual/variables/#Allowed-Variable-Names
# see https://github.com/JuliaLang/julia/blob/master/src/flisp/julia_opsuffs.h
# symbols
# type assertions - excludes expressions like ::typeof(sin) and ::avec[1]
# type comparisons
# - MyType <: A or MyType >: A
# - <: B or >: B
# - A <: or A >:
# Suffixes aren't actually allowed on all operators, but we'll ignore that
# since those cases are invalid Julia code.
# Patterns below work only for definition sites and thus hardly reliable.
# (r'(function)(\s+)(' + allowed_variable + ')',
# chars
# try to match trailing transpose
# raw strings
# regular expressions
# other strings
# backticks
# - names that begin a curly expression
# - names as part of bare 'where'
# - curly expressions in general
# - names as part of type declaration
# macros
# builtin types
# builtin literals
# single dot operator matched last to permit e.g. ".1" as a float
# Interpolation is defined as "$" followed by the shortest full
# expression, which is something we can't parse.  Include the most
# common cases here: $word, and $(paren'd expr).
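The two common interpolation cases named above can be sketched as a standalone regex. This is a hypothetical simplification, not the lexer's actual rule: it covers only `$name` and a single non-nested `$( ... )`.

```python
import re

# Hypothetical sketch (not the lexer's actual rule): match "$name" or
# "$( ... )" with no nested parentheses. Real Julia interpolation takes
# an arbitrary expression, which a regex cannot parse in general.
INTERP = re.compile(r'\$(?:[A-Za-z_]\w*|\([^()]*\))')

print(INTERP.findall('print("x = $x, sum = $(a + b)")'))
```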
# FIXME: This escape pattern is not perfect.
# @printf and @sprintf formats
# Keywords from RFC7950; oriented at BNF style
# RFC7950 other keywords
# RFC7950 Built-In Types
# match BNF stmt for `node-identifier` with [ prefix ":"]
# match BNF stmt `date-arg-str`
# Pragmas
# Case statement branch
# Char
# quotes, dollars and backslashes must be parsed one at a time
# The canonical version of this file can be found in the following repository,
# where it is kept in sync with any language changes, as well as the other
# pygments-like lexers that are maintained for use with other tools:
# - https://github.com/savi-lang/savi/blob/main/tooling/pygments/lexers/savi.py
# If you're changing this file in the pygments repository, please ensure that
# any changes you make are also propagated to the official Savi repository,
# in order to avoid accidental clobbering of your changes later when an update
# from the Savi repository flows forward into the pygments repository.
# If you're changing this file in the Savi repository, please ensure that
# any changes you make are also reflected in the other pygments-like lexers
# (rouge, vscode, etc) so that all of the lexers can be kept cleanly in sync.
# Line Comment
# Doc Comment
# Capability Operator
# Double-Quote String
# Single-Char String
# Type Name
# Nested Type Name
# Declare
# Error-Raising Calls/Names
# Numeric Values
# Hex Numeric Values
# Binary Numeric Values
# Function Call (with braces)
# Function Call (with receiver)
# Function Call (with self receiver)
# Parenthesis
# Brace
# Bracket
# Piping Operators
# Branching Operators
# Comparison Operators
# Arithmetic Operators
# Assignment Operators
# Other Operators
# Declare (nested rules)
# Double-Quote String (nested rules)
# Single-Char String (nested rules)
# Interpolation inside String (nested rules)
# If the very first line is 'vcl 4.0;' it's pretty much guaranteed
# that this is VCL
# Skip over comments and blank lines
# This is accurate enough that returning 0.9 is reasonable.
# Almost no VCL files start without some comments.
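The detection heuristic described above might be sketched like this; the comment syntaxes and the simplification of "first significant line" are assumptions for illustration, not the real `analyse_text`.

```python
# Hypothetical sketch of the detection heuristic: skip blank lines and
# comments, then require 'vcl 4.0;' on the first significant line.
def analyse_text(text):
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith(('#', '//')):
            continue  # skip blank lines and comments
        return 0.9 if stripped.startswith('vcl 4.0;') else 0.0
    return 0.0
```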
# all other characters
# line continuation
# override method inherited from VCLLexer
# operators:
# numbers (must come before punctuation to handle `.5`; cannot use
# `\b` due to e.g. `5. + .5`).  The negative lookahead on operators
# avoids including the dot in `1./x` (the dot is part of `./`).
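The ordering constraint above can be illustrated with a toy pair of rules (hypothetical patterns, much simpler than the real MATLAB lexer): the float rule runs before anything else so `.5` lexes as a number, and a negative lookahead after a trailing dot keeps `1./x` from swallowing the dot of the elementwise `./` operator.

```python
import re

# Hypothetical sketch: a float may end in a bare dot ("5." is valid),
# but the lookahead refuses the dot when it starts an elementwise
# operator such as "./" in "1./x".
FLOAT = re.compile(r'\d+\.\d+|\.\d+|\d+\.(?![*/\\^.])')
INT = re.compile(r'\d+')

def lex_number(s):
    m = FLOAT.match(s)
    if m:
        return ('float', m.group())
    m = INT.match(s)
    return ('int', m.group()) if m else None
```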
# punctuation:
# quote can be transpose, instead of string:
# (not great, but handles common cases...)
# line starting with '!' is sent as a system command.  not sure what
# label to use...
# from 'iskeyword' on version 9.4 (R2018a):
# Check that there is no preceding dot, as keywords are valid field
# See https://mathworks.com/help/matlab/referencelist.html
# Below data from 2021-02-10T18:24:08Z
# for Matlab release R2020b
# Exclude field names
# line continuation with following comment:
# command form:
# "How MATLAB Recognizes Command Syntax" specifies that an operator
# is recognized if it is either surrounded by spaces or by no
# spaces on both sides (this allows distinguishing `cd ./foo` from
# `cd ./ foo`.).  Here, the regex checks that the first word in the
# line is not followed by <spaces> and then
# (equal | open-parenthesis | <operator><space> | <space>).
# function with no args
# If an equal sign or other operator is encountered, this
# isn't a command. It might be a variable assignment or
# comparison operation with multiple spaces before the
# equal sign or operator
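The command-syntax check described above can be sketched roughly as follows. This is a hypothetical simplification: it only looks for `=`, `(`, or an operator-then-space after the first word, the patterns that indicate an expression rather than a command.

```python
import re

# Hypothetical sketch: a line is NOT command syntax if its first word is
# followed by '=', '(', or an operator plus a space.
NOT_COMMAND = re.compile(r'^\s*\w+\s*(=|\(|[-+*/<>]\s)')

def looks_like_command(line):
    return NOT_COMMAND.match(line) is None
```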
# function declaration.
# system cmd
# Without this, the error is shown on the same line as before...?
# line = "\n" + line
# line_start is the length of the most recent prompt symbol
# Mark leading spaces of the same length as the prompt as a generic prompt
# This keeps code aligned when prompts are removed, say with some JavaScript
# Does not allow continuation if a comment is included after the ellipses.
# Continues any line that ends with ..., even comments (lines that start with %)
# or item:
# These lists are generated automatically.
# Run the following in bash shell:
# First dump all of the Octave manual into a plain text file:
# Now grep through it:
# for i in \
# do
# taken from Octave Mercurial changeset 8cc154f45e37 (30-jan-2011)
# from 'iskeyword' on hg changeset 8cc154f45e37
# operators in Octave but not Matlab:
# operators in Octave but not Matlab requiring escape for re:
# operators requiring escape for re:
# the following is needed to distinguish Scilab and GAP .tst files
# Scilab comments (don't appear in e.g. GAP code)
# Whitespaces
# Insignificant commas
# Field
# Fragments
# For NamedType
# Fragment name
# Type condition
# comments starting with #
# multiline comments
# highlight the builtins
# floats
# integers
# word operators
# punctuations
# urls
# names of variables
# TODO: we should probably also escape ''${ \${ here
# TODO: let/in
# Autogenerated: please edit them if you like wasting your time.
# Remove 'trigger' from types
# Most of these keywords are from ExplainNode function
# in src/backend/commands/explain.c
# One man's constant is another man's variable.
# Parse a string such as
# time [ (<replaceable>p</replaceable>) ] [ without time zone ]
# into types "time" and "without time zone"
# remove all the tags
# Drop the parts containing braces
# In LilyPond, (unquoted) name tokens only contain letters, hyphens,
# and underscores, where hyphens and underscores must not start or end
# a name token.
# Note that many of the entities listed as LilyPond built-in keywords
# (in file `_lilypond_builtins.py`) are only valid if surrounded by
# double quotes, for example, 'hufnagel-fa1'. This means that
# `NAME_END_RE` doesn't apply to such entities in valid LilyPond code.
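The token shape described above (letters, hyphens, underscores, with hyphen/underscore forbidden at either end) can be written as a single regex. The pattern below is a hypothetical sketch, not the lexer's actual rule.

```python
import re

# Hypothetical sketch: a name starts and ends with a letter; hyphens and
# underscores may only appear in the interior.
NAME = re.compile(r'[a-zA-Z](?:[a-zA-Z_-]*[a-zA-Z])?')
```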
# Because parsing LilyPond input is very tricky (and in fact
# impossible without executing LilyPond when there is Scheme
# code in the file), this lexer does not try to recognize
# lexical modes. Instead, it catches the most frequent pieces
# of syntax, and, above all, knows about many kinds of builtins.
# In order to parse embedded Scheme, this lexer subclasses the SchemeLexer.
# It redefines the 'root' state entirely, and adds a rule for #{ #}
# to the 'value' state. The latter is used to parse a Scheme expression
# after #.
# Whitespace.
# Multi-line comments. These are non-nestable.
# Simple comments.
# End of embedded LilyPond in Scheme.
# Embedded Scheme, starting with # ("delayed"),
# or $ (immediate). #@ and $@ are the lesser known
# "list splicing operators".
# Any kind of punctuation:
# - sequential music: { },
# - parallel music: << >>,
# - voice separator: << \\ >>,
# - chord: < >,
# - bar check: |,
# - dot in nested properties: \revert NoteHead.color,
# - equals sign in assignments and lists for various commands:
# - comma as alternative syntax for lists: \time 3,3,2 4/4,
# - colon in tremolos: c:32,
# - double hyphen and underscore in lyrics: li -- ly -- pond __
# Pitches, with optional octavation marks, octave check,
# and forced or cautionary accidental.
# Strings, optionally with direction specifier.
# Numbers.
# 5. and .5 are not allowed
# Integers, or durations with optional augmentation dots.
# We have no way to distinguish these, so we highlight
# them all as numbers.
# Normally, there is a space before the integer (being an
# argument to a music function), which we check here.  The
# case without a space is handled below (as a fingering
# number).
# Separates duration and duration multiplier highlighted as fraction.
# Ties, slurs, manual beams.
# Predefined articulation shortcuts. A direction specifier is
# required here.
# Fingering numbers, string numbers.
# Builtins.
# Those like slurs that don't take a backslash are covered above.
# Optional backslash because of \layout { \context { \Score ... } }.
# Optional backslashes here because output definitions are wrappers
# around modules.  Concretely, you can do, e.g.,
# \paper { oddHeaderMarkup = \evenHeaderMarkup }
# Other backslashed-escaped names (like dereferencing a
# music variable), possibly with a direction specifier.
# Definition of a variable. Support assignments to alist keys
# (myAlist.my-key.my-nested-key = \markup \spam \eggs).
# Virtually everything can appear in markup mode, so we highlight
# as text.  Try to get a complete word, or we might wrongly lex
# a suffix that happens to be a builtin as a builtin (e.g., "myStaff").
# Scan a LilyPond value, then pop back since we had a
# complete expression.
# Grob subproperties are undeclared and it would be tedious
# to maintain them by hand. Instead, this state allows recognizing
# everything that looks like a-known-property.foo.bar-baz as
# one single property name.
# Scalar functions
# Vector functions
# Special
# Block start
# Reserved Words
# Regular variable names
# Number Literals
# SLexer makes these tokens Operators.
# Infix and prefix operators
# Block
# JAGS
# Truncation/Censoring (should I include)
# Distributions with density, probability and quartile functions
# Other distributions without density and probability
# do not use stateful comments
# Builtins
# Need to use lookahead because . is a valid char
# Names
# # JAGS includes many more than OpenBUGS
# block start
# target keyword
# Truncation
# Data types
# < should be punctuation, but elsewhere I can't tell if it is in
# a range constraint
# Builtin
# Special names ending in __, like lp__
# user-defined functions
# Imaginary Literals
# Real Literals
# Integer Literals
# Infix, prefix and postfix operators (and = )
# Block delimiters
# Distribution |
# pragmas match specifically on the space character
# labels must be followed by a space,
# but anything after that is ignored
# branch targets
# inferred
# for the range of allowed unicode characters in identifiers, see
# http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-334.pdf
# method names
# return type
# method name
# signature start
# version 1: assumes that 'file' is the only contextual keyword
# that is a class modifier
# using (resource)
# compile the regexes now
# quasiquotation only
# (?)
# Line continuation  (must be before Name)
# any other syntax
# TODO support multiple languages within the same source file
# Very close to functional.OcamlLexer
# Reserved words; cannot hurt to color them as keywords too.
# See http://msdn.microsoft.com/en-us/library/dd233181.aspx and/or
# http://fsharp.org/about/files/spec.pdf for reference.  Good luck.
# a stray quote is another syntax element
# e.g. dictionary index access
# comments cannot be closed within strings in comments
# newlines are allowed in any string
# Temporary, see
# https://github.com/thatch/regexlint/pull/49
# ensure that if is not treated like a function
# declaration
# x++ specific function to get field should highlight the classname
# x++ specific function to get table should highlight the classname
# lit-word
# issue
# money
# time
# tuple
# pair
# url
# email
# The code starts with REBOL header
# The code contains REBOL header but also some text before it
# get-word
# Note: %= not listed on https://ooc-lang.github.io/docs/lang/operators/
# : introduces types
# pointer dereference
# imports or chain operator
# Error,
# In the original PR, all the below here used ((?:\s|\\\s)+) to
# designate whitespace, but I can't find any example of this being
# needed in the example file, so we're replacing it with `\s+`.
# TODO: is varname the right fit?
# not implemented yet
# TODO https://docs.modular.com/mojo/roadmap#no-async-for-or-async-with
# Mojo builtin types: https://docs.modular.com/mojo/stdlib/builtin/
# TODO supported?
# All comment types
# defining words. The next word is a new command name
# strings are rather simple
# keywords from the various wordsets
# *** Wordset BLOCK
# *** Wordset BLOCK-EXT
# *** Wordset CORE
# *** Wordset CORE-EXT
# *** Wordset CORE-EXT-obsolescent
# *** Wordset DOUBLE
# *** Wordset DOUBLE-EXT
# *** Wordset EXCEPTION
# *** Wordset EXCEPTION-EXT
# *** Wordset FACILITY
# *** Wordset FACILITY-EXT
# *** Wordset FILE
# *** Wordset FILE-EXT
# *** Wordset FLOAT
# *** Wordset FLOAT-EXT
# *** Wordset LOCAL
# *** Wordset LOCAL-EXT
# *** Wordset MEMORY
# *** Wordset SEARCH
# *** Wordset SEARCH-EXT
# *** Wordset STRING
# *** Wordset TOOLS
# *** Wordset TOOLS-EXT
# *** Wordset TOOLS-EXT-obsolescent
# Forth 2012
# amforth specific
# a proposal
# Anything else is executed
# Lua allows a file to start with a shebang.
# multiline strings
# inline function
# proper name
# set . as Operator instead of Punctuation
# XXX: use words() for all of these
# This is a possessive, consider moving
# Stray linefeed also terminates strings.
# Header matches MVS Rexx requirements, this is certainly a Rexx
# Header matches general Rexx requirements; the source code might
# still be any language using C comments such as C++, C# or Java.
# db-refs
# builtins
# special variables
# skip whitespace
# other operators
# function call
# Note: We cannot use r'\b' at the start and end of keywords because
# Easytrieve Plus delimiter characters are:
# Additionally, words end once a '*' appears, indicating a comment.
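The reason `r'\b'` is unusable here can be shown with a toy word rule. The delimiter set below is an assumption for the demo, not the real Easytrieve Plus list: `\b` fires at every ASCII word/non-word boundary, while Easytrieve ends words only at its own delimiters (and at `*`, which starts a comment).

```python
import re

# Hypothetical demo: a "word" runs until one of the (assumed) delimiter
# characters or a '*' comment marker, so '-' stays inside the word.
DELIMITERS = " '.,"  # assumption, not the real delimiter list
WORD = re.compile(r"[^%s*]+" % re.escape(DELIMITERS))
```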
# Macro argument
# Macro call
# Procedure declaration
# Everything else just belongs to a name
# Remove possible empty lines and header comments.
# Looks like an Easytrieve macro.
# Scan the source for lines starting with indicators.
# Weight the findings.
# Found PARM, JOB and PROC/END-PROC:
# pretty sure this is Easytrieve.
# Found PARM and JOB: probably this is Easytrieve
# Found JOB and possibly other keywords: might be Easytrieve
# Note: PARM is not a proper English word, so it is
# regarded as a much better indicator for Easytrieve than
# the other words.
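The weighting scheme above might look like this; the keyword names come from the lines above, while the scores and the whitespace-splitting scan are assumptions for illustration, not the real lexer's values.

```python
# Hedged sketch of the indicator weighting described above.
def analyse_text(text):
    words = set(text.upper().split())
    if {'PARM', 'JOB'} <= words and ('PROC' in words or 'END-PROC' in words):
        return 0.9  # pretty sure this is Easytrieve
    if {'PARM', 'JOB'} <= words:
        return 0.5  # probably Easytrieve
    if 'JOB' in words:
        return 0.1  # might be Easytrieve
    return 0.0
```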
# TODO: JES3 statement
# Input text or inline code in any language.
# (r'\n', Text, 'root'),
# Modules
# Transforms
# non-raw s-strings
# Time and dates
# Template header without name:
# ``<?ul4?>``
# Template header with name (potentially followed by the signature):
# ``<?ul4 foo(bar=42)?>``
# Switch to "expression" mode
# Comment:
# ``<?note?>...<?end note?>``
# Switch to "note" mode
# ``<?note foobar?>``
# Template documentation:
# ``<?doc?>...<?end doc?>``
# ``<?doc foobar?>``
# ``<?ignore?>`` tag for commenting out code:
# ``<?ignore?>...<?end ignore?>``
# Switch to "ignore" mode
# ``<?def?>`` tag for defining local templates
# ``<?def foo(bar=42)?>...<?end def?>``
# The rest of the supported tags
# ``<?end?>`` tag for ending ``<?def?>``, ``<?for?>``,
# ``<?if?>``, ``<?while?>``, ``<?renderblock?>`` and
# ``<?renderblocks?>`` blocks.
# Switch to "end tag" mode
# ``<?whitespace?>`` tag for configuring whitespace handling
# Switch to "whitespace" mode
# Plain text
# Ignore mode ignores everything up to the matching ``<?end ignore?>`` tag
# Nested ``<?ignore?>`` tag
# ``<?end ignore?>`` tag
# Everything else
# Note mode ignores everything up to the matching ``<?end note?>`` tag
# Nested ``<?note?>`` tag
# ``<?end note?>`` tag
# Doc mode ignores everything up to the matching ``<?end doc?>`` tag
# Nested ``<?doc?>`` tag
# ``<?end doc?>`` tag
# UL4 expressions
# End the tag
# Start triple quoted string constant
# Start single quoted string constant
# Floating point number
# Binary integer: ``0b101010``
# Octal integer: ``0o52``
# Hexadecimal integer: ``0x2a``
# Date or datetime: ``@(2000-02-29)``/``@(2000-02-29T12:34:56.987654)``
# Color: ``#fff``, ``#fff8f0`` etc.
# Decimal integer: ``42``
# Builtin constants
# Variable names
# ``<?end ...?>`` tag for closing the last open block
# Content of the ``<?whitespace ...?>`` tag:
# ``keep``, ``strip`` or ``smart``
# Inside a string constant
# Inside a triple quoted string started with ``'''``
# Inside a triple quoted string started with ``"""``
# Inside a single quoted string started with ``'``
# Inside a single quoted string started with ``"``
# These are classed differently, check later
# recognized by node.js
# Numeric literals
# Browsers support "0o7" and "07" (< ES5) notations
# Javascript BigInt requires an "n" postfix
# Javascript doesn't have actual integer literals, so every other
# numeric literal is handled by the regex below (including "normal")
# Match stuff like: constructor
# Match stuff like: super(argument, list)
# Match stuff like: function() {...}
# private identifier
# TODO: should this include single-line comments and allow nesting strings?
# Higher priority than the TypoScriptLexer, as TypeScript is far more
# common these days
# Match variable type keywords
# Match stuff like: module name {...}
# Match stuff like: (function: return type)
# Match stuff like: Decorators
# note that all kal strings are multi-line.
# hashmarks, quotes and backslashes must be parsed one at a time
# double-quoted strings don't need ' escapes
# single-quoted strings don't need " escapes
# no need to escape quotes in triple-string
# no need to escape quotes in triple-strings
# note that all coffee script strings are multi-line.
# DIGIT+ (‘.’ DIGIT*)? EXPONENT?
# ‘.’ DIGIT+ EXPONENT?
# pseudo-keyword negate intentionally left out
# Raw strings.
# Normal Strings.
# whitespace/comments
# literals
# definitions
# function definition
# class definition
# interface definition that inherits
# interface definition for a category
# simple interface / implementation
# start of a selector w/ parameters
# open paren
# close paren
# function name
# no-param function
# no return type given, start of a selector w/ parameters
# no return type given, no-param function
# type
# param name
# one piece of a selector name
# smallest possible selector piece
# var args
# stray backslash
# special directive found in most Objective-J files
# This isn't really guarding against mishighlighting well-formed
# code, just the ability to infinite-loop between root and
# slashstartsregex.
# Range Operator
# All strings are multiline in EG
# node does a nested ... thing depending on depth
# Yes, whole line as an operator
# Core
# A character constant is a sequence of the form #s, where s is a string
# constant denoting a string of size one character. This setup just parses
# the entire string as either a String.Double or a String.Char (depending
# on the argument), even if the String.Char is an erroneous
# multiple-character string.
# Control-character notation is used for codes < 32,
# where \^@ == \000
# Docs say 'decimal digits'
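The control-character notation mentioned above follows a simple rule: `\^c` denotes the character with code `ord(c) - 64`, so `\^@` is code 0. A tiny decoder makes this concrete (hypothetical helper, not part of the lexer).

```python
# Sketch: decode an SML control-character escape such as "\^@" or "\^G".
def decode_control(escape):
    assert escape.startswith('\\^') and len(escape) == 3
    return chr(ord(escape[2]) - 64)  # '@' is 64, so "\^@" -> chr(0)
```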
# Callbacks for distinguishing tokens and reserved words
# Whitespace and comments are (almost) everywhere
# This lexer treats these delimiters specially:
# Delimiters define scopes, and the scope is how the meaning of
# the `|' is resolved - is it a case/handle expression, or function
# definition by cases? (This is not how the Definition works, but
# it's how MLton behaves, see http://mlton.org/SMLNJDeviations)
# Punctuation that doesn't overlap symbolic identifiers
# Special constants: strings, floats, numbers in decimal and hex
# Some reserved words trigger a special, local lexer state change
# Regular identifiers, long and otherwise
# Main parser (prevents errors in files that have scoping errors)
# In this scope, I expect '|' to not be followed by a function name,
# and I expect 'and' to be followed by a binding site
# Special behavior of val/and/fun
# In this scope, I expect '|' and 'and' to be followed by a function
# Special behavior of '|' and '|'-manipulating keywords
# Character and string parsers
# Dealing with what comes after module system keywords
# Dealing with what comes after the 'fun' (or 'and' or '|') keyword
# Ignore interesting function declarations like "fun (x + y) = ..."
# Dealing with what comes after the 'val' (or 'and') keyword
# Ignore interesting patterns like 'val (x, y)'
# Dealing with what comes after the 'type' (or 'and') keyword
# A type binding includes most identifiers
# Dealing with what comes after the 'datatype' (or 'and') keyword
# common case - A | B | C of int
# Dealing with what comes after an exception
# Series of type variables
# most of these aren't strictly keywords
# but if you color only real keywords, you might just
# as well not color anything
# matches both stuff and `stuff`
# '{' and '}' are treated elsewhere
# because they are also used for inserts
# copied from the caml lexer, should be adapted
# factorizing these rules, because they are inserted many times
# we could parse the actual set of directives instead of anything
# starting with @, but this is troublesome
# because it needs to be adjusted all the time
# and assuming we parse only sources that compile, it is useless
# number literals
# color literals
# string literals
# char literal, should be checked because this is the regexp from
# the caml lexer
# this is meant to deal with embedded exprs in strings
# every time we find a '}' we pop a state so that if we were
# inside a string, we are back in the string state
# as a consequence, we must also push a state every time we find a
# '{' or else we will have errors when parsing {} for instance
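The push/pop discipline described above can be shown with a minimal state-stack simulation (a toy model, not the Pygments state machine itself): every `{` pushes a state and every `}` pops one, so a `}` seen while lexing an embedded expression drops us back into the enclosing string state.

```python
# Minimal sketch of the state-stack discipline described above.
def lex_states(text):
    stack = ['root']
    trace = []
    for ch in text:
        if ch == '"' and stack[-1] == 'string':
            stack.pop()                # closing quote: leave string state
        elif ch == '"':
            stack.append('string')     # opening quote: enter string state
        elif ch == '{':
            stack.append('code')       # push on '{' ...
        elif ch == '}':
            stack.pop()                # ... so '}' pops back correctly
        trace.append(stack[-1])
    return trace
```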
# html literals
# this is much stricter than the actual parser,
# since a<b would not be parsed as html
# but then again, the parser is way too lax, and we can't hope
# to have something as tolerant
# db path
# matching the '[_]' in '/a[_]' because it is a part
# of the syntax of the db path definition
# unfortunately, i don't know how to match the ']' in
# /a[1], so this is somewhat inconsistent
# putting the same color on <- as on db path, since
# it can be used only to mean Db.write
# 'modules'
# although modules are not distinguished by their names as in caml
# the standard library seems to follow the convention that only
# modules are capitalized
# = has a special role because this is the only
# way to syntactically distinguish binding constructions
# unfortunately, this colors the equal in {x=2} too
# coercions
# type variables
# we need this rule because we don't parse specially type
# definitions so in "type t('a) = ...", "'a" is parsed by 'root'
# id literal, #something, or #{expr}
# this avoids coloring '2' in 'a2' as an integer
# default, not sure if that is needed or not
# (r'.', Text),
# it is quite painful to have to parse types to know where they end
# this is the general rule for a type
# a type is either:
# * -> ty
# * type-with-slash
# * type-with-slash -> ty
# * type-with-slash (, type-with-slash)+ -> ty
# the code is pretty funky in here, but this code would roughly
# translate in caml to:
# let rec type stream =
# match stream with
# | [< "->";  stream >] -> type stream
# | [< "";  stream >] ->
# and type_1 stream = ...
# parses all the atomic or closed constructions in the syntax of type
# expressions: record types, tuple types, type constructors, basic type
# and type variables
# this case is not in the syntax but sometimes
# we think we are parsing types when in fact we are parsing
# some css, so we just pop the states until we get back into
# the root state
# type-with-slash is either:
# * type-1
# * type-1 (/ type-1)+
# same remark as above
# we go in this state after having parsed a type-with-slash
# while trying to parse a type
# and at this point we must determine if we are parsing an arrow
# type (in which case we must continue parsing) or not (in which
# case we stop)
# the lookahead here allows parsing f(x : int, y : float -> truc)
# correctly
# no need to do precise parsing for tuples and records
# because they are closed constructions, so we can simply
# find the closing delimiter
# note that this function would not work if the source
# contained identifiers like `{)` (although it could be patched
# to support it)
# 'type-tuple': [
# ],
# 'type-tuple-1': [
# 'type-record':[
# 'type-record-field-expr': [
# the copy pasting between string and single-string
# is kinda sad. Is there a way to avoid that??
# all the html stuff
# can't really reuse some existing html parser
# because we must be able to parse embedded expressions
# we are in this state after someone parsed the '<' that
# started the html literal
# we are in this state after someone parsed the '</' that
# started the end of the closing tag
# this is a star, because </> is allowed
# we are in this state after having parsed '<ident(:ident)?'
# we thus parse a possibly empty list of attributes
# this is a tail call!
# we should probably deal with '\' escapes here
# for infix applications
# for quoting
# https://www.openpolicyagent.org/docs/latest/philosophy/#the-opa-document-model
# Global variable for accessing base and virtual documents
# Represents synchronously pushed base documents
# Compound operators
# Single-character operators
# Autogenerated by external/scheme-builtins-generator.scm
# using Guile 3.0.5.130-5a1e7.
# Do nothing. This must be defined in subclasses.
# There is also a w statement that is generated internally and should not be
# used; see https://github.com/csound/csound/issues/750.
# z is a constant equal to 800,000,000,000. 800 billion seconds is about
# 25,367.8 years. See also
# https://csound.com/docs/manual/ScoreTop.html and
# https://github.com/csound/csound/search?q=stof+path%3AEngine+filename%3Asread.c.
# Braced strings are not allowed in Csound scores, but this is needed because the
# superclass includes it.
# https://github.com/csound/csound/search?q=XIDENT+path%3AEngine+filename%3Acsound_orc.lex
# https://github.com/csound/csound/search?q=unquote_string+path%3AEngine+filename%3Acsound_orc_compile.c
# Format specifiers are highlighted in all strings, even though only
# some opcodes work with strings that contain format specifiers. In
# addition, these opcodes’ handling of format specifiers is inconsistent:
# See https://github.com/csound/csound/issues/747 for more information.
# These tokens are based on those in XmlLexer in pygments/lexers/html.py. Making
# CsoundDocumentLexer a subclass of XmlLexer rather than RegexLexer may seem like a
# better idea, since Csound Document files look like XML files. However, Csound
# Documents can contain Csound comments (preceded by //, for example) before and
# after the root element, unescaped bitwise AND & and less than < operators, etc. In
# other words, while Csound Document files look like XML files, they may not actually
# be XML files.
# do not call Lexer.get_tokens() because stripping is not optional.
# JSXFragment <>|</>
# Same for React.Context
# Use same tokens as `JavascriptLexer`, but with tags and attributes support
# Use same tokens as `TypescriptLexer`, but with tags and attributes support
# Offsets
# Metrics
# Params
# Other states
# user variable
# builtin
# here-string
# To support output lexers (say diff output), the output
# needs to be broken by prompts whenever the output lexer
# changes.
# we need to count pairs of parentheses for correct highlight
# of '$(...)' blocks in strings
# escaped syntax
# .net [type]s
# A TAP version may be specified.
# Specify a plan with a plan line.
# A test failure
# A test success
# Diagnostics start with a hash.
# TAP's version of an abort statement.
# TAP ignores any unrecognized lines.
# Consume whitespace (but not newline).
# A plan may have a directive with it.
# Or it could just end.
# Anything else is wrong.
# A test may have a directive with it.
# Extract todo items.
# Extract skip items.
# use different colors for different instruction types
# Traditional math
# Move, imperatives
# Stack ops, imperatives
# Befunge-98 stack ops
# Strings don't appear to allow escapes
# Single character
# Trampoline... depends on direction hit
# Fingerprints
# Whitespace doesn't matter
# C pre-processor directive
# Whitespace, comments
# Recognised attributes
# CAmkES-level include
# C-level include
# Properties
# 638 functions
# Condition Types
# Have to be careful not to accidentally match JavaDoc/Doxygen syntax here,
# since that's quite common in ordinary C/C++ files.  It's OK to match
# JavaDoc/Doxygen keywords that only apply to Objective-C, mind.
# The upshot of this is that we CANNOT match @class or @interface
# Matches [ <ws>? identifier <ws> ( identifier <ws>? ] |  identifier? : )
# (note the identifier is *optional* when there is a ':'!)
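The shape in the comment above, as a standalone regex sketch (hypothetical and simplified from the real lexer rule): the receiver identifier, then either a selector identifier closing the bracket, or an optional identifier followed by a colon.

```python
import re

# '[' optional ws, receiver identifier, ws, then either another
# identifier closing with ']' (a no-argument message) or an *optional*
# identifier followed by ':' (a message with arguments).
MSG = re.compile(r'\[\s*\w+\s+(?:\w+\s*\]|\w*:)')
```

Under this sketch `[obj release]` and `[obj setValue:` both match, while a bare `[obj]` does not.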
# Carbon types
# Carbon built-ins
# @ can also prefix other expressions like @{...} or @(...)
# methods
# method marker
# begin of method name
# TODO unsure if ellipses are allowed elsewhere, see
# discussion in Issue 789
# Lower than C
# Lower than C++
# Global Types
# Protocols
# Typealiases
# Foundation/Cocoa
# Implicit Block Variables
# Binary Literal
# Octal Literal
# Hexadecimal Literal
# Decimal Literal
# String Literal
# Nested
# quick hack for ternary
# FIXME when e is present, no decimal point needed
# Storage qualifiers
# Layout qualifiers
# Interpolation qualifiers
# Auxiliary qualifiers
# Parameter qualifiers. Some double as Storage qualifiers
# Precision qualifiers
# Invariance qualifiers
# Precise qualifiers
# Memory qualifiers
# Statements
# Boolean values
# Miscellaneous types
# Floating-point scalars and vectors
# Integer scalars and vectors
# Boolean scalars and vectors
# Matrices
# Floating-point samplers
# Shadow samplers
# Signed integer samplers
# Unsigned integer samplers
# Floating-point image types
# Signed integer image types
# Unsigned integer image types
# Reserved for future use.
# All names beginning with "gl_" are reserved.
# vector and matrix types
# built-in functions
# system-value semantics
# attributes
# backslash at end of line -- usually macro continuation
# String literals are awkward; enter separate state.
# Slight abuse: use Oct to signify any explicit base system
# References
# These keywords taken from
# <http://www.math.ubc.ca/~cass/graphics/manual/pdf/a1.pdf>
# Is there an authoritative list anywhere that doesn't involve
# trawling documentation?
# Conditionals / flow control
# simple string (TeX friendly)
# C style string (with character escapes)
# Since an asy-type-name can also be an asy-function-name,
# in the following we test whether the string "  [a-zA-Z]" follows
# the Keyword.Type.
# Of course it is not perfect!
# Now the asy-type-names which are not asy-function-names
# (except yours!)
# Perhaps useless
# return arguments
# don't add the newline to the Comment token
# semicolon and newline end the argument list
# newline ends the string too
# escaped single quote
# normal backslash
# Any limbo module implements something
# Whitespace and comments
# Special Keywords
# class related keywords
# Variables with @ prefix
# Types and classes
# Variables and predicates
# valid names for identifiers
# well, names just cannot consist entirely of numbers,
# but this should be good enough for now
# Use within verbose regexes
# Recognizing builtins.
# Scheme has funky syntactic rules for numbers. These are all
# valid number literals: 5.0e55|14, 14/13, -1+5j, +1@5, #b110,
# #o#Iinf.0-nan.0i.  This is adapted from the formal grammar given
# in http://www.r6rs.org/final/r6rs.pdf, section 4.2.1.  Take a
# deep breath ...
# It would be simpler if we could just not bother about invalid
# numbers like #b35. But we cannot parse 'abcdef' as a number
# without the #x prefix.
# Radix, optional exactness indicator.
# Simple unsigned number or fraction.
# Add decimal numbers.
# If you have a headache now, say thanks to RnRS editors.
# Doing it this way is simpler than splitting the number(10)
# regex in a floating-point and a no-floating-point version.
# includes [+-](inf|nan).0
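A toy sketch (far simpler than the real R6RS grammar referenced above) of how a per-radix number pattern can be generated — enough to reject `#b35` while accepting fractions and exactness prefixes; the function name is illustrative, not the lexer's:

```python
import re

def scheme_number_re(base_char, digits):
    # Optional exactness marker (#e/#i) on either side of the radix
    # prefix, optional sign, then an unsigned integer or a fraction.
    prefix = rf'(?:#[ei])?#{base_char}(?:#[ei])?'
    u = rf'[{digits}]+'
    return re.compile(rf'{prefix}[+-]?{u}(?:/{u})?$')

binary = scheme_number_re('b', '01')
```

With this, `#b110` and `#e#b1/10` match, while `#b35` fails because `3` and `5` are not binary digits. The real grammar additionally covers decimals, exponents, `inf`/`nan`, and complex forms.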
# The 'scheme-root' state parses as many expressions as needed, always
# delegating to the 'scheme-value' state. The latter parses one complete
# expression and immediately pops back. This is needed for the LilyPondLexer.
# When LilyPond encounters a #, it starts parsing embedded Scheme code, and
# returns to normal syntax after one expression. We implement this
# by letting the LilyPondLexer subclass the SchemeLexer. When it finds
# the #, the LilyPondLexer goes to the 'value' state, which then pops back
# to LilyPondLexer. The 'root' state of the SchemeLexer merely delegates the
# work to 'scheme-root'; this is so that LilyPondLexer can inherit
# 'scheme-root' and redefine 'root'.
# the comments
# and going to the end of the line
# multi-line comment
# commented form (entire sexpr following)
# commented datum
# signifies that the program text that follows is written with the
# lexical and datum syntax described in r6rs
# whitespaces - usually not relevant
# strings, symbols, keywords and characters
# special operators
# first variable in a quoted string like
# '(this is syntactic sugar)
# Functions -- note that this also catches variables
# defined in let/let*, but there is little that can
# be done about it.
# find the remaining variables
# the famous parentheses!
# Push scheme-root to enter a state that will parse as many things
# as needed in the parentheses.
# Pop one 'value', one 'scheme-root', and yet another 'value', so
# we get back to a state parsing expressions as needed in the
# enclosing context.
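The pushing and popping described above can be illustrated with a bare state stack (hypothetical state names; Pygments spells the triple pop as `#pop:3`):

```python
stack = ['root']

# LilyPond sees '#': parse exactly one Scheme expression.
stack.append('value')
# The expression opens '(': parse as many inner expressions as needed.
stack.extend(['scheme-root', 'value'])
# The matching ')' pops one 'value', the 'scheme-root', and the
# enclosing 'value' in one step, returning to the outer context.
del stack[-3:]

assert stack == ['root']
```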
# Pops back from 'string', and pops 'value' as well.
# Hex escape sequences, R6RS-style.
# We try R6RS style first, but fall back to Guile-style.
# Other special escape sequences implemented by Guile.
# Escape sequences are not overly standardized. Recognizing
# a single character after the backslash should be good enough.
# NB: we have DOTALL.
# The rest
# couple of useful regexes
# characters that are not macro-characters and can be used to begin a symbol
# whitespace or terminating macro characters
# symbol token, reverse-engineered from hyperspec
# Take a deep breath...
# (cf. Hyperspec 2.4.8.19)
# single-line comment
# encoding comment (?)
# strings and characters
# quoting
# decimal numbers
# sharpsign strings and characters
# vector
# bitstring
# uninterned symbol
# read-time and load-time evaluation
# function shorthand
# binary rational
# octal rational
# hex rational
# radix rational
# structure
# reference
# read-time comment
# read-time conditional
# special operators that should have been parsed already
# special constants
# functions and variables
# parentheses
# This is a *really* good indicator (and not conflicting with Visual Prolog)
# '(defun ' first on a line
# section keyword alone on line e.g. 'clauses'
# the comments - always starting with semicolon
# strings, symbols and characters
# highlight the special forms
# Technically, only the special forms are 'keywords'. The problem
# is that only treating them as keywords means that things like
# 'defn' and 'ns' need to be highlighted as builtins. This is ugly
# and weird for most styles. So, as a compromise we're going to
# highlight them as Keyword.Declarations.
# the remaining functions
# Hy accepts vector notation
# Hy accepts map notation
# Generated by example.rkt
# Numbers: Keep in mind Racket reader hash prefixes, which
# can denote the base or the type. These don't map neatly
# onto Pygments token types; some judgment calls here.
# #d or no prefix
# Inexact without explicit #i
# The remaining extflonums
# #b
# #o
# #x
# #i is always inexact, i.e. float
# Strings and characters
# Keyword argument names (e.g. #:keyword)
# Reader extensions
# Other syntax
# list of built-in functions for newLISP version 10.3
# valid names
# shebang
# comments starting with semicolon
# braces
# [text] ... [/text] delimited strings
# 'special' operators...
# the remaining variables
# braced strings...
# tagged [text]...[/text] delimited strings...
# vectors
# read syntax for char tables
# \* ... *\
# \\ ...
# list of known keywords and builtins taken from vim 6.4 scheme.vim
# syntax file.
# support for uncommon kinds of numbers -
# have to figure out what the characters mean
# (r'(#e|#i|#b|#o|#d|#x)[\d.]+', Number),
# highlight the keywords
# valid names for Scheme identifiers (names cannot consist fully
# of numbers, but this should be good enough for now)
# valid characters in xtlang names & types
# keep track of when we're exiting the xtlang form
# type annotations
# quoted symbols
# char literals
# common to both xtlang and Scheme
# binary/oct/hex literals
# true/false constants
# go into xtlang mode
# this list is current as of Fennel version 0.10.0.
# based on the scheme definition, but disallowing leading digits,
# commas, and @.
# the only comment form is a semicolon; goes to the end of the line
# these are technically strings, but it's worth visually
# distinguishing them because their intent is different
# from regular strings.
# special forms are keywords
# these are ... even more special!
# lua standard library are builtins
# special-case the vararg symbol
# regular identifiers
# all your normal paired delimiters for your programming enjoyment
# the # symbol is shorthand for a lambda function
# XXX: gets too slow
#flags = re.MULTILINE | re.VERBOSE
# obsolete builtin macros
# obsolete builtin functions
# XXX: this form not usable to pass to `suffix=`
#_token_end = r'''
#'''
# ...so, express it like this
# exponent marker, optional sign, one or more alphanumeric
# 2af3__bee_
# 12_000__
# E-23
# lower or uppercase e, optional sign, one or more digits
# radix number
# hex number
# decimal number
# strings and buffers
# long-strings and long-buffers
# things that hang out on front
# collection delimiters
# other symbols
# Binary/hex numbers. Note that these take priority over names,
# which may begin with numbers.
# Bang operators
# Unknown bang operators are an error
# Names and identifiers
# Place numbers after keywords. Names/identifiers may begin with
# numbers, and we want to parse 1X as one name token as opposed to
# a number and a name.
# Misc. punctuation
# Double-quoted string, a la C
# No escaping inside a code block - everything is literal
# Assume that the code inside a code block is C++. This isn't always
# true in TableGen, but is the far most common scenario.
# Shebang script
# Definitions
# Standard Library
# Copula
# Short Keywords
# include('nounDefinition'),
# Floats
# Integers
# Characters
# Operators, Punctuation
# Language operators
# finite element spaces
# preprocessor
# Language keywords
# Language shipped functions and class ( )
# function parameters
# deprecated
# do not highlight
# okay, this is the hardest part of parsing Crystal...
# match: 1 = <<-?, 2 = quote? 3 = name 4 = quote? 5 = rest of line
# <<-?
# quote ", ', `
# heredoc name
# quote again
# this may find other heredocs, so limit the recursion depth
# this is the outer heredoc again, now we can process them all
# end of heredoc not found -- error!
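The heredoc matching sketched in the comments above — an opener with an optional `-`, a name, a body, and a terminator line — can be illustrated with a toy scanner (a hypothetical helper, not the lexer's callback; it ignores quoting and would also fire on `<<` inside strings):

```python
import re

OPENER = re.compile(r'<<(-?)([A-Za-z_]\w*)')

def heredoc_body(text):
    # Find the first heredoc opener; a '-' after '<<' lets the
    # terminator line be indented. The body starts on the next line.
    m = OPENER.search(text)
    if not m:
        return None
    dash, name = m.group(1), m.group(2)
    end = re.compile((r'\s*' if dash else '') + re.escape(name))
    body = []
    for line in text[m.end():].splitlines()[1:]:
        if end.fullmatch(line):
            return '\n'.join(body)
        body.append(line)
    return None  # end of heredoc not found -- error!
```

So `heredoc_body('x = <<-EOF\nhello\nworld\n  EOF\n')` yields `'hello\nworld'`, while the plain `<<EOF` form refuses the indented terminator.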
# This allows arbitrary text after '\ for simplicity
# Crystal doesn't have "symbol:"s but this simplifies function args
# double-quoted string and symbol
# https://crystal-lang.org/docs/syntax_and_semantics/literals/string.html#percent-string-literals
# https://crystal-lang.org/docs/syntax_and_semantics/literals/array.html#percent-array-literals
# https://crystal-lang.org/docs/syntax_and_semantics/is_a.html
# start of function, class and module names
# https://crystal-lang.org/api/toplevel.html
# https://crystal-lang.org/api/Object.html#macro-summary
# normal heredocs
# empty string heredocs
# multiline regex (after keywords or assignments)
# multiline regex (in method calls or subscripts)
# multiline regex (this time the funny no whitespace rule)
# lex numbers and ignore following regular expressions, which
# are in fact division operators (grrrr. I hate that. Any
# better ideas?)
# since pygments 0.7 we also eat a "?" operator after numbers
# so that the char operator does not work. Chars are not allowed
# there so that you can use the ternary operator.
# stupid example:
# 3 separate expressions for floats because any of the 3 optional
# parts makes it a float
# https://crystal-lang.org/reference/syntax_and_semantics/literals/char.html
# macro expansion
# annotations
# this is needed because Crystal attributes can look
# like keywords (class) or like this: ` ?!?
# Names can end with [!?] unless it's "!="
# https://crystal-lang.org/reference/syntax_and_semantics/literals/string.html
# more attributes
# ";"s are allowed to combine separate metadata lines
# okay, this is the hardest part of parsing Ruby...
# match: 1 = <<[-~]?, 2 = quote? 3 = name 4 = quote? 5 = rest of line
# <<[-~]?
# begin
# end[mixounse]*
# end
# easy ones
# Since Ruby 1.9
# quoted string and symbol
# braced quoted strings
# these must come after %<brace>!
# %r regex
# regular fancy strings with qsw
# special forms of fancy strings after operators or
# in method calls with braces
# and, because of fixed-width lookbehinds, the whole thing is
# repeated a second time for line beginnings...
# all regular fancy strings without qsw
# special methods
# this is needed because Ruby attributes can look like keywords (class)
# optional scope name, like "self."
# or operator override
# or element reference/assignment override
# or the undocumented backtick override
# copied from PerlLexer:
# balanced delimiters (copied from PerlLexer):
# Symbols
# Multi-line DoubleQuotedString
# DoubleQuotedString
# operators, must be below functions
# numbers - / checks are necessary to avoid mismarking regexes,
# see comment in RubyLexer
# based on https://neo4j.com/docs/cypher-refcard/3.3/
# hashbang script
# Timing
# Generic System Commands
# Prompt
# Function Names
# Parentheses
# File Symbols
# Binary Values
# Nulls/Infinities
# Timestamps
# Datetimes
# GUIDs
# Byte Vectors
# Long Integers
# name constants
# name, assignments and functions
# Header's section
# boolean
# character
# array index
# color
# Note: Literals can be labeled too
# literal
# Note: Attributes can be labeled too
# Switch structure
# Single Line Strings
# Multi Line Strings
# sugar syntax
# Interpolation
# Closing Quote
# String Content
# darcs add [_CODE_] special operators for clarity
# line-based
# We can only assume that a "[-" appearing after "[-" but before "-]"
# is `nested`, for instance when wdiff is run on wdiff output. We have
# no way to tell whether these markers come from wdiff output or from
# the original text.
# for performance
# Inform 7 maps these four character classes to their ASCII
# equivalents. To support Inform 6 inclusions within Inform 7,
# Inform6Lexer maps them too.
# Array initialization
# Second angle bracket in an action statement
# Expressions
# Values
# Values prefixed by hashes
# Metaclasses
# Veneer routines
# Other built-in symbols
# Other values
# Values after hashes
# [, Replace, Stub
# Array
# Attribute, Property
# Class, Object, Nearby
# Extend, Verb
# Import
# Include, Link, Message
# Keywords used in directives
# Assembly
# 'in' is either a keyword or an operator.
# If the token two tokens after 'in' is ')', 'in' is a keyword:
# Otherwise, it is an operator:
# There are three variants of Inform 7, differing in how to
# interpret at signs and braces in I6T. In top-level inclusions, at
# signs in the first column are inweb syntax. In phrase definitions
# and use options, tokens in braces are treated as I7. Use options
# also interpret "{N}".
# For Inform6TemplateLexer
# Inform 7 can include snippets of Inform 6 template language,
# so all of Inform6Lexer's states are copied here, with
# modifications to account for template syntax. Inform7Lexer's
# own states begin with '+' to avoid name conflicts. Some of
# Inform6Lexer's states begin with '_': these are not modified.
# They deal with template syntax either by including modified
# states, or by matching r'' then pushing to modified states.
# This regex can't use `(?i)` because escape sequences are
# case-sensitive. `<\XMP>` works; `<\xmp>` doesn't.
# Averts an infinite loop
# It might be a VerbRule macro.
# It might be a macro like DefineAction.
# Two-token keywords
# Compiler-defined macros and built-in properties
# Then expression (conditional operator)
# Embedded expressions
# For/foreach loop initializer or short-form anonymous function
# Local
# List
# Parameter list
# Statements and expressions
# Tags
# Regular expressions
# Not in a false #if
# In a false #if
# This is a fairly unique keyword which is likely used in source as well
# Operator.Name but not enough emphasized with that
# registers
# constructor
# class names in the form Lcom/namespace/ClassName;
# I only want to color the ClassName part, so the namespace part is
# treated as 'Text'
# Ideally, processing of the number would happen in the 'number'
# state, but that doesn't seem to work
# Declaration part
# Implementation part
# GAP prompts are a dead giveaway, although hypothetically a
# file in another language could be comparing a variable
# "gap", as in "gap> 0.1". But that this should happen at the
# start of a line seems unlikely...
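The heuristic described above can be sketched as an `analyse_text`-style scorer (the function name and the 0.9 weight are illustrative choices, not the lexer's actual values):

```python
import re

def looks_like_gap_session(text):
    # A 'gap> ' prompt at the start of a line is a dead giveaway;
    # 'gap' appearing mid-line (e.g. comparing a variable named gap,
    # as in "x := gap> 0.1") deliberately does not count.
    return 0.9 if re.search(r'^gap> ', text, re.MULTILINE) else 0.0
```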
# http://reference.wolfram.com/mathematica/guide/Syntax.html
# (r'\b(?:adt|linalg|newDomain|hold)\b', Name.Builtin),
# bc doesn't support exponential
# Shaders
# 4 groups
#: optional Whitespace or /*...*/ style comment
# Include preprocessor directives (C style):
# Define preprocessor directives (C style):
# devicetree style with file:
# devicetree style with property:
# Open until EOF, so no ending delimiter
# Nodes
# Opcodes in Csound 6.18.0 using:
# While the spec reads more like "an int must not start with 0" we use a
# lookahead here that says "after a 0 there must be no digit". This makes the
# '0' the invalid character in '01', which looks nicer when highlighted.
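The lookahead described above can be written as a small sketch:

```python
import re

# Either a lone '0' not followed by another digit, or an integer
# starting with a nonzero digit. In '01' neither branch matches at the
# leading '0', so a lexer using this rule flags the '0' as the invalid
# character and then matches '1' as a number.
INT = re.compile(r'0(?!\d)|[1-9]\d*')
```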
# tag types
# type or any
# occurrence
# cuts
# rangeop
# ctlops
# into choice op
# unwrap op
# double und single slash
# Bytestrings
# Barewords as member keys (must be matched before values, types, typenames,
# groupnames).
# Token type is String as barewords are always interpreted as such.
# predefined types
# user-defined groupnames, typenames
# hexfloat
# hex
# Float
# Int
# (r";.+$", Token.Other),
# often used regexes
# all chars which can be part of type definition. Starts with
# either letter, or [ (maps), or | (funcs)
# Multiline
# Single line
# TODO: highlight references in fandocs
# Fandoc
# Shell-style
# Duration with dot
# Float/Decimal
# Hex
# Opening quote
# Opening accent
# Bool & null
# DSL
# Type/slot literal
# Empty list
# Typed empty list
# Empty Map
# Escaped backslash
# Escaped "
# Escaped `
# Subst var
# Subst expr
# Closing quote
# String content
# TODO: remove copy/paste str/uri
# Closing tick
# URI content
# Using stmt
# Symbol
# Inheritance list
# Type var := val
# var := val
# .someId( or ->someId( ###
# .someId  or ->someId
# new makeXXX (
# Type name (
# Return type and whitespace
# method name + open brace
# ArgType argName,
# ArgType argName)
# Covered in 'insideParen' state
# ArgType argName -> ArgType|
# ArgType argName|
# Type var
# consume whitespaces
# ffi
# podname
# jump out to root state
# TODO: use Name.Namespace if appropriate.  This needs
# work to distinguish imports from aspects.
# Data Types
# Intrinsics
# Comparing Operators
# Data Types: INTEGER, REAL, COMPLEX, LOGICAL, CHARACTER and DOUBLE PRECISION
# Operators: **, *, +, -, /, <, >, <=, >=, ==, /=
# Logical (?): NOT, AND, OR, EQV, NEQV
# Builtins:
# http://gcc.gnu.org/onlinedocs/gcc-3.4.6/g77/Table-of-Intrinsic-Functions.html
# 'call' is handled separately.
# 'goto' is handled separately.
# Opening quotes without a closing quote on the same line are errors.
# Turtle and Tera Term macro files share the same file extension
# but each has a recognizable and distinct syntax.
# tag/end tag begin
# stray bracket
# attribute with value
# tag argument (a la [color=green])
# tag end
# Ignore-next
# Titles
# Literal code blocks, with optional shebang
# Formatting
# Lists
# Other Formatting
# Macro
# Link
# Horizontal rules
# these blocks are allowed to be nested in Trac, but not MoinMoin
# slurp boring text
# allow loose { or }
# section header
# lookup lexer if wanted and existing
# no lexer for this language. handle it like it was a code block
# highlight the lines with the lexer.
# from docutils.parsers.rst.states
# Heading with overline
# Plain heading
# Bulleted lists
# Numbered lists
# Numbered, but keep words at BOL from becoming lists
# Line blocks
# Sourcecode directives
# A directive
# A reference target
# A footnote/citation target
# A substitution def
# Field list marker
# Definition list
# Code blocks
# code
# reference with inline target
# role
# role (content first)
# Strong emphasis
# Emphasis
# Footnote or citation
# Hyperlink
# has two lines
# they are the same length
# the next line both starts and ends with
# ...a sufficiently high header
# Regular characters, slurp till we find a backslash or newline
# groff has many ways to write escapes.
# FIXME: aren't the offsets wrong?
# heading with '#' prefix (atx-style)
# subheading with '#' prefix (atx-style)
# heading with '=' underlines (Setext-style)
# subheading with '-' underlines (Setext-style)
# task list
# bulleted list
# numbered list
# code block fenced by 3 backticks
# code block with language
# Some tools include extra stuff after the language name, just
# highlight that as text. For example: https://docs.enola.dev/use/execmd
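A sketch of such a fence rule (simplified from what the lexer actually does): capture the language word right after the fence, and treat any trailing text on the fence line as plain text.

```python
import re

TICKS = '`' * 3  # a literal triple-backtick fence

FENCE = re.compile(
    '^' + TICKS + r'([\w+-]+)'  # language name right after the fence
    r'([^\n]*)\n'               # extra stuff on the fence line: kept as text
    r'(.*?)^' + TICKS + '$',    # code body up to the closing fence
    re.MULTILINE | re.DOTALL)

sample = TICKS + 'python title="demo"\nprint(1)\n' + TICKS + '\n'
m = FENCE.search(sample)
```

Here `m.group(1)` is `'python'`, `m.group(2)` is the extra ` title="demo"` to be highlighted as text, and `m.group(3)` is the code body.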
# warning: the following rules eat outer tags.
# eg. **foo _bar_ baz** => foo and baz are not recognized as bold
# bold fenced by '**'
# bold fenced by '__'
# italics fenced by '*'
# italics fenced by '_'
# strikethrough
# mentions and topics (twitter and github stuff)
# (image?) links eg: ![Image of Yaktocat](https://octodex.github.com/images/yaktocat.png)
# reference-style links, e.g.:
# Headings
# Unordered lists items, including TODO items and description items
# Ordered list items
# Dynamic blocks
# Comment blocks
# Source code blocks
# TODO: language-dependent syntax highlighting (see Markdown lexer)
# Other blocks
# Properties and drawers
# Line break operator
# Deadline, Scheduled, CLOSED
# Bold
# Italic
# Verbatim
# TODO token
# Code
# Strikethrough
# Underline
# Macros
# Footnotes
# Links
# Tables
# Any other text
# title in metadata section
# headings
# bulleted or numbered lists or single-line block quotes
# (can be mixed)
# multi-line block quotes
# table header
# table footer or caption
# table class
# CSS style block
# created or modified date
# italics
# superscript
# subscript
# underscore
# TiddlyWiki variables
# TiddlyWiki style or class
# HTML tags
# HTML escaped symbols
# Wiki links
# External links
# Transclusion
# URLs
# Exclude comment end (-->)
# No tag end
# Pick the last match in case of multiple matches
# Case sensitive
# ABC
# FIXME: Use ABC lexer in the future
# a-z removed to prevent linter from complaining, REMEMBER to use (?i)
# ZhConverter.php
# WuuConverter.php
# UzConverter.php
# TlyConverter.php
# TgConverter.php
# SrConverter.php
# ShiConverter.php
# ShConverter.php
# KuConverter.php
# IuConverter.php
# GanConverter.php
# EnConverter.php
# CrhConverter.php
# BanConverter.php
# Redirects
# Subheadings
# Double-slashed magic words
# Raw URLs
# Magic links
# Description lists
# Ordered lists, unordered lists and indents
# Signatures
# Entities
# Bold & italic
# Comments & parameters & templates
# Media links
# Wikilinks
# <nowiki>
# <pre>
# <categorytree>
# <hiero>
# <math>
# <chem>
# <ce>
# <charinsert>
# <templatedata>
# <gallery>
# <graph>
# <dynamicpagelist>
# <inputbox>
# <rss>
# <imagemap>
# <syntaxhighlight>
# <syntaxhighlight>: Fallback case for self-closing tags
# <source>
# <source>: Fallback case for self-closing tags
# <score>
# <score>: Fallback case for self-closing tags
# Other parser tags
# LanguageConverter markups
# LanguageConverter markups: composite conversion grammar
# LanguageConverter markups: fallbacks
# Quit in case of another wikilink
# Quit in case of link/template endings
# Parameters
# Magic variables
# Parser functions & templates
# <tvar> legacy syntax
# <tvar>
# Templates allow line breaks at the beginning, and due to how MediaWiki handles
# comments, an extra state is required to handle things like {{\n<!---->\n name}}
# Parser functions
# Use [ \t\n\r\0\x0B] instead of \s to follow PHP trim() behavior
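The distinction noted above can be verified directly in Python (a quick check, not lexer code): PHP's `trim()` default set is exactly six characters, while `\s` additionally matches form feed but not NUL.

```python
import re

# PHP trim() strips " \t\n\r\0\x0B"; Python's \s would also match
# form feed (\x0c) but not NUL, so the explicit class mirrors PHP.
PHP_WS = r'[ \t\n\r\0\x0b]'

assert re.fullmatch(PHP_WS + '*', ' \t\n\r\x00\x0b')  # all six match
assert not re.fullmatch(PHP_WS + '*', '\x0c')         # form feed does not
```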
# Endings
# Table rows
# Captions
# Table data
# Table headers
# Requires another state for || handling inside headers
# Return to root state for self-closing tags
# The states below are just like their non-tag variants; the key difference
# is that they forcibly quit when encountering tag-closing markup
# These are *really* good indicators (and not conflicting with the other languages)
# end-scope first on line e.g. 'end implement'
# These are *really* good indicators
# end-scope first on line e.g. 'end grammar'
# section keyword alone on line e.g. 'rules'
# Note that a backslash is included, PHP uses a backslash as a namespace
# separator.
# But not inside strings.
# put the empty comment here, it is otherwise seen as
# the start of a docstring
# don't add to the charclass above!
# source: http://php.net/manual/en/language.oop5.magic.php
# source: http://php.net/manual/en/language.constants.predefined.php
# private option argument for the lexer itself
# collect activated functions in a set
# import:
# tags:
# tokens:
# number:
# comment:
# string:
# text block:
# date:
# point:
# double points:
# Colorize the prompt as such,
# then put rest of line into current_code_block
# We have reached a non-prompt line!
# If we have stored prompt lines, need to process them first.
# Weave together the prompts and highlight code.
# Reset vars for next code block.
# Now process the actual line itself, this is output from R.
# If we happen to end on a code block with nothing after it, need to
# process the last code block. This is neither elegant nor DRY so
# should be changed.
# whitespaces
# calls:
# blocks:
# (r'\{', Punctuation, 'block'),
# 'block': [
# Accounting for verbatim, LaTeX-like, and R-like areas
# would require parsing.
# catch escaped brackets and percent sign
# special macros with no arguments
# special preprocessor macros
# non-escaped brackets
# everything else
# https://www.w3.org/TR/WGSL/#syntax-ident_pattern_token
# https://www.w3.org/TR/WGSL/#var-and-value
# https://www.w3.org/TR/WGSL/#keyword-summary
# https://www.w3.org/TR/WGSL/#reserved-words
# https://www.w3.org/TR/WGSL/#predeclared-enumerants
# https://www.w3.org/TR/WGSL/#predeclared-types
# Predeclared type aliases for vectors
# https://www.w3.org/TR/WGSL/#vector-types
# Predeclared type aliases for matrices
# https://www.w3.org/TR/WGSL/#matrix-types
# https://www.w3.org/TR/WGSL/#blankspace
# Line ending comments
# Match up CR/LF pair first.
# Attributes.
# https://www.w3.org/TR/WGSL/#attributes
# Mark the '@' and the attribute name as a decorator.
# Predeclared
# Decimal float literals
# https://www.w3.org/TR/WGSL/#syntax-decimal_float_literal
# 0, with type-specifying suffix.
# Other decimal integer, with type-specifying suffix.
# Hex float literals
# https://www.w3.org/TR/WGSL/#syntax-hex_float_literal
# Hexadecimal integer literals
# https://www.w3.org/TR/WGSL/#syntax-hex_int_literal
# Decimal integer literals
# https://www.w3.org/TR/WGSL/#syntax-decimal_int_literal
# We need two rules here because 01 is not valid.
# Must match last.
# Return-type arrow
# TODO: Treat context-dependent names specially
# https://www.w3.org/TR/WGSL/#context-dependent-name
# TODO: templates start and end tokens.
# https://www.w3.org/TR/WGSL/#template-lists-sec
# https://www.w3.org/TR/WGSL/#block-comment
# Auto-generated for Macaulay2-1.24.11. Do not modify this file manually.
# opstart_re = '+\-\*/%=\!><\|&\^'
# root state, start of line
# comments, continuation lines, and directives start in column 1
# as do labels
# statement state, line after continuation or label
# ASCII equivalents of original operators
# | for the EBCDIC equivalent, ! likewise
# \ for EBCDIC negation
# Accept SPITBOL syntax for real numbers
# as well as Macro SNOBOL4
# Goto
# Goto block
# everything after the END statement is basically one
# big heredoc.
# keywords: go before method names to avoid lexing "throw new XYZ"
# as a method signature
# Escaped quote
# Bare backslash
# Closing quote
# Includes:
# Only highlight soft modifiers if they are eventually followed by
# the correct keyword. Note that soft modifiers can be followed by a
# sequence of regular modifiers; [a-z\s]* skips those, and we just
# check that the soft modifier is applied to a supported statement.
# using is a soft keyword, can only be used in the first position of
# a parameter or argument list.
# end is a soft keyword, should only be highlighted in certain cases
# inline is a soft modifier, only highlighted if followed by if,
# match or parameters.
# '{...} or ${...}
# '[...]
# modifiers etc.
# Groovy allows a file to start with a shebang
# or double-quoted method name
# or single-quoted method name
# Regexps
# Documentation
# Text
# Mimic
# Origin
# Base
# Ground
# DefaultBehaviour Literals
# DefaultBehaviour Case
# DefaultBehaviour Reflection
# DefaultBehaviour Aspects
# DefaultBehaviour
# DefaultBehavior BaseBehavior
# DefaultBehavior Internal
# DefaultBehaviour Conditions
# default cellnames
# It's safe to consider 'ns' a declaration thing because it defines a new
# namespace.
# TODO / should divide keywords/symbols into namespace/rest
# but that's hard, so just pretend / is part of the name
# Clojure accepts vector notation
# Clojure accepts map notation
# shebang for kotlin scripts
# Built-in types
# Dot access
# Annotations
# Object expression
# additionally handle nullable types
# escaped backslash
# escaped quote
# bare backslash
# TODO: make tests pass without \s+
# Instructions
# Multi-Dialect Modula-2 Lexer
# blank lines
# PIM Dialect Tag
# ISO Dialect Tag
# M2R10 Dialect Tag
# ObjM2 Dialect Tag
# Aglet Extensions Dialect Tag
# GNU Extensions Dialect Tag
# p1 Extensions Dialect Tag
# XDS Extensions Dialect Tag
# Base-2, whole number
# Base-16, whole number
# Base-10, real number with exponent
# integral part
# fractional part
# exponent
# Base-10, real number without exponent
# Base-10, whole number
# Base-8, whole number
# Base-8, character code
# Base-16, number
# Dot Product Operator
# Array Concatenation Operator
# M2R10 + ObjM2
# Inequality Operator
# ISO + PIM
# Less-Or-Equal, Subset
# Greater-Or-Equal, Superset
# Identity Operator
# Type Conversion Operator
# Assignment Symbol
# Postfix Increment Mutator
# Postfix Decrement Mutator
# ISO 80000-2 compliant Set Difference Operator
# Relational Operators
# Dereferencing Operator
# Dereferencing Operator Synonym
# Logical AND Operator Synonym
# PIM + ISO
# Logical NOT Operator Synonym
# Smalltalk Message Prefix
# ObjM2
# Range Constructor
# Opening Chevron Bracket
# M2R10 + ISO
# Closing Chevron Bracket
# Blueprint Punctuation
# Distinguish |# and # in M2 R10
# Distinguish ## and # in M2 R10
# Distinguish |* and * in M2 R10
# Common Punctuation
# Case Label Separator Synonym
# Single Line Comment
# Block Comment
# Template Block Comment
# ISO Style Pragmas
# ISO, M2R10 + ObjM2
# Pascal Style Pragmas
# PIM
# Common Reserved Words Dataset
# 37 common reserved words
# Common Builtins Dataset
# 16 common builtins
# Common Pseudo-Module Builtins Dataset
# 4 common pseudo builtins
# Lexemes to Mark as Error Tokens for PIM Modula-2
# PIM Modula-2 Additional Reserved Words Dataset
# 3 additional reserved words
# PIM Modula-2 Additional Builtins Dataset
# 16 additional builtins
# PIM Modula-2 Additional Pseudo-Module Builtins Dataset
# 5 additional pseudo builtins
# Lexemes to Mark as Error Tokens for ISO Modula-2
# ISO Modula-2 Additional Reserved Words Dataset
# 9 additional reserved words (ISO 10514-1)
# 10 additional reserved words (ISO 10514-2 & ISO 10514-3)
# ISO Modula-2 Additional Builtins Dataset
# 26 additional builtins (ISO 10514-1)
# 5 additional builtins (ISO 10514-2 & ISO 10514-3)
# ISO Modula-2 Additional Pseudo-Module Builtins Dataset
# 14 additional builtins (SYSTEM)
# 13 additional builtins (COROUTINES)
# 9 additional builtins (EXCEPTIONS)
# 3 additional builtins (TERMINATION)
# 4 additional builtins (M2EXCEPTION)
# Lexemes to Mark as Error Tokens for Modula-2 R10
# Modula-2 R10 reserved words in addition to the common set
# 12 additional reserved words
# 2 additional reserved words with symbolic assembly option
# Modula-2 R10 builtins in addition to the common set
# 26 additional builtins
# Modula-2 R10 Additional Pseudo-Module Builtins Dataset
# 13 additional builtins (TPROPERTIES)
# 4 additional builtins (CONVERSION)
# 35 additional builtins (UNSAFE)
# 11 additional builtins (ATOMIC)
# 7 additional builtins (COMPILER)
# 5 additional builtins (ASSEMBLER)
# Lexemes to Mark as Error Tokens for Objective Modula-2
# Objective Modula-2 Extensions
# reserved words in addition to Modula-2 R10
# 16 additional reserved words
# builtins in addition to Modula-2 R10
# 3 additional builtins
# pseudo-module builtins in addition to Modula-2 R10
# Aglet Extensions
# reserved words in addition to ISO Modula-2
# builtins in addition to ISO Modula-2
# 9 additional builtins
# Aglet Modula-2 Extensions
# pseudo-module builtins in addition to ISO Modula-2
# GNU Extensions
# reserved words in addition to PIM Modula-2
# 10 additional reserved words
# builtins in addition to PIM Modula-2
# 21 additional builtins
# pseudo-module builtins in addition to PIM Modula-2
# p1 Extensions
# p1 Modula-2 Extensions
# 1 additional builtin
# XDS Extensions
# 1 additional reserved word
# XDS Modula-2 Extensions
# 22 additional builtins (SYSTEM)
# 3 additional builtins (COMPILER)
# PIM Modula-2 Standard Library Modules Dataset
# PIM Modula-2 Standard Library Types Dataset
# PIM Modula-2 Standard Library Procedures Dataset
# PIM Modula-2 Standard Library Variables Dataset
# PIM Modula-2 Standard Library Constants Dataset
# ISO Modula-2 Standard Library Modules Dataset
# TO DO
# ISO Modula-2 Standard Library Types Dataset
# ISO Modula-2 Standard Library Procedures Dataset
# ISO Modula-2 Standard Library Variables Dataset
# ISO Modula-2 Standard Library Constants Dataset
# Modula-2 R10 Standard Library ADTs Dataset
# Modula-2 R10 Standard Library Blueprints Dataset
# Modula-2 R10 Standard Library Modules Dataset
# Modula-2 R10 Standard Library Types Dataset
# TO BE COMPLETED
# Modula-2 R10 Standard Library Procedures Dataset
# Modula-2 R10 Standard Library Variables Dataset
# Modula-2 R10 Standard Library Constants Dataset
# Dialect modes
# Lexemes to Mark as Errors Database
# Lexemes to reject for unknown dialect
# LEAVE THIS EMPTY
# Lexemes to reject for PIM Modula-2
# Lexemes to reject for ISO Modula-2
# Lexemes to reject for Modula-2 R10
# Lexemes to reject for Objective Modula-2
# Lexemes to reject for Aglet Modula-2
# Lexemes to reject for GNU Modula-2
# Lexemes to reject for p1 Modula-2
# Lexemes to reject for XDS Modula-2
# Reserved Words Database
# Reserved words for unknown dialect
# Reserved words for PIM Modula-2
# Reserved words for Modula-2 R10
# Reserved words for ISO Modula-2
# Reserved words for Objective Modula-2
# Reserved words for Aglet Modula-2 Extensions
# Reserved words for GNU Modula-2 Extensions
# Reserved words for p1 Modula-2 Extensions
# Reserved words for XDS Modula-2 Extensions
# Builtins Database
# Builtins for unknown dialect
# Builtins for PIM Modula-2
# Builtins for ISO Modula-2
# Builtins for Objective Modula-2
# Builtins for Aglet Modula-2 Extensions
# Builtins for GNU Modula-2 Extensions
# Builtins for p1 Modula-2 Extensions
# Builtins for XDS Modula-2 Extensions
# Pseudo-Module Builtins Database
# Standard Library ADTs Database
# Empty entry for unknown dialect
# Standard Library ADTs for PIM Modula-2
# No first class library types
# Standard Library ADTs for ISO Modula-2
# Standard Library ADTs for Modula-2 R10
# Standard Library ADTs for Objective Modula-2
# Standard Library ADTs for Aglet Modula-2
# Standard Library ADTs for GNU Modula-2
# Standard Library ADTs for p1 Modula-2
# Standard Library ADTs for XDS Modula-2
# Standard Library Modules Database
# Standard Library Modules for PIM Modula-2
# Standard Library Modules for ISO Modula-2
# Standard Library Modules for Modula-2 R10
# Standard Library Modules for Objective Modula-2
# Standard Library Modules for Aglet Modula-2
# Standard Library Modules for GNU Modula-2
# Standard Library Modules for p1 Modula-2
# Standard Library Modules for XDS Modula-2
# Standard Library Types Database
# Standard Library Types for PIM Modula-2
# Standard Library Types for ISO Modula-2
# Standard Library Types for Modula-2 R10
# Standard Library Types for Objective Modula-2
# Standard Library Types for Aglet Modula-2
# Standard Library Types for GNU Modula-2
# Standard Library Types for p1 Modula-2
# Standard Library Types for XDS Modula-2
# Standard Library Procedures Database
# Standard Library Procedures for PIM Modula-2
# Standard Library Procedures for ISO Modula-2
# Standard Library Procedures for Modula-2 R10
# Standard Library Procedures for Objective Modula-2
# Standard Library Procedures for Aglet Modula-2
# Standard Library Procedures for GNU Modula-2
# Standard Library Procedures for p1 Modula-2
# Standard Library Procedures for XDS Modula-2
# Standard Library Variables Database
# Standard Library Variables for PIM Modula-2
# Standard Library Variables for ISO Modula-2
# Standard Library Variables for Modula-2 R10
# Standard Library Variables for Objective Modula-2
# Standard Library Variables for Aglet Modula-2
# Standard Library Variables for GNU Modula-2
# Standard Library Variables for p1 Modula-2
# Standard Library Variables for XDS Modula-2
# Standard Library Constants Database
# Standard Library Constants for PIM Modula-2
# Standard Library Constants for ISO Modula-2
# Standard Library Constants for Modula-2 R10
# Standard Library Constants for Objective Modula-2
# Standard Library Constants for Aglet Modula-2
# Standard Library Constants for GNU Modula-2
# Standard Library Constants for p1 Modula-2
# Standard Library Constants for XDS Modula-2
# initialise a lexer instance
# check dialect options
# valid dialect option found
# Fallback Mode (DEFAULT)
# no valid dialect option
# check style options
# use lowercase mode for Algol style
# Check option flags
# call superclass initialiser
# Set lexer to a specified dialect
# if __debug__:
# check dialect name against known dialects
# compose lexemes to reject set
# add each list of reject lexemes for this dialect
# compose reserved words set
# add each list of reserved words for this dialect
# compose builtins set
# add each list of builtins for this dialect excluding reserved words
# compose pseudo-builtins set
# compose ADTs set
# add each list of ADTs for this dialect excluding reserved words
# compose modules set
# add each list of modules for this dialect excluding builtins
# compose types set
# add each list of types for this dialect excluding builtins
# compose procedures set
# add each list of procedures for this dialect excluding builtins
# compose variables set
# add each list of variables for this dialect excluding builtins
# compose constants set
# add each list of constants for this dialect excluding builtins
# update lexer state
# Extracts a dialect name from a dialect tag comment string and checks
# the extracted name against known dialects. If a match is found, the
# matching name is returned; otherwise dialect id 'unknown' is returned.
# check comment string for dialect indicator
# extract dialect indicator
# check against known dialects
# indicator matches known dialect
# indicator does not match any dialect
# invalid indicator string
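The extraction step described above can be sketched as follows; the `(*!m2pim*)` tag syntax and the dialect-id set are assumptions modeled on common Modula-2 dialect tags, not the exact implementation:

```python
# Hypothetical sketch: pull a dialect indicator out of a tag comment
# such as "(*!m2iso*)" and fall back to 'unknown' when it doesn't match.
KNOWN_DIALECTS = {'m2pim', 'm2iso', 'm2r10', 'objm2'}

def get_dialect_from_tag(comment):
    if comment.startswith('(*!') and comment.endswith('*)'):
        indicator = comment[3:-2]        # extract dialect indicator
        if indicator in KNOWN_DIALECTS:  # check against known dialects
            return indicator             # indicator matches known dialect
    return 'unknown'                     # no or invalid indicator

print(get_dialect_from_tag('(*!m2iso*)'))        # m2iso
print(get_dialect_from_tag('(*!foo*)'))          # unknown
print(get_dialect_from_tag('(* plain text *)'))  # unknown
```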
# intercept the token stream, modify token attributes and return them
# check for dialect tag if dialect has not been set by tag
# token is a dialect indicator
# reset reserved words and builtins
# check for reserved words, predefined and stdlib identifiers
# mark prefix number literals as error for PIM and ISO dialects
# mark base-8 number literals as errors for M2 R10 and ObjM2
# mark suffix base-16 literals as errors for M2 R10 and ObjM2
# mark real numbers with E as errors for M2 R10 and ObjM2
# mark single line comment as error for PIM and ISO dialects
# mark ISO pragma as error for PIM dialects
# mark PIM pragma as comment for other dialects
# token is neither Name nor Comment
# mark lexemes matching the dialect's error token set as errors
# substitute lexemes when in Algol mode
# return result
# Check if this looks like Pascal, if not, bail out early
# Procedure is in Modula2
# FUNCTION is only valid in Pascal, not in Modula2
# A variable assignment
# A top-level module
# newlines okay
# newlines okay in a list
# A map key
# newlines okay in a map
# Just re-use map syntax
# keywords extracted from lexer.mll in the haxe compiler source
# idtype in lexer.mll
# combined ident and dollar and idtype
# ident except keywords
# store the current stack
# restore the stack back to right before #if
# remove the saved stack of previous #if
# #if and #elseif should be followed by an expr
# #error can optionally be followed by the error msg
# top-level expression
# although it is not supported in haxe, it is common to write
# expressions in web pages; the positive lookahead here is to prevent
# an infinite loop at the EOF
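The EOF hazard and the lookahead fix can be demonstrated in isolation (a generic illustration, not the actual haxe rule): a rule that can match the empty string never advances the lexer at end of input, while anchoring it behind a one-character positive lookahead makes it fail cleanly there.

```python
import re

# r'[^x]*' can match the empty string at the end of input, so a lexer
# applying it there would never advance. Guarding it with a positive
# lookahead for at least one character makes it fail at EOF instead.
unguarded = re.compile(r'[^x]*')
guarded = re.compile(r'(?=[\s\S])[^x]*')

text = 'ab'
print(repr(unguarded.match(text, len(text)).group()))  # '' - zero-width match
print(guarded.match(text, len(text)))                  # None - no match, no loop
```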
# space/tab/comment/preproc
# wildcard import
# same as 'preproc-expr' but able to chain 'preproc-expr-chain'
# optional colon
# same as 'ident' but set token as Name.Decorator instead of Name
# the comma is made optional here, since haxe2
# requires the comma but haxe3 does not allow it
# local function, anonymous or not
# function arguments
# custom getter/setter
# make the semicolon optional here, just to avoid checking whether
# the last one is a bracket or not.
# EReg
# macro reification
# cast can be written as "cast expr" or "cast(expr, type)"
# optionally give a type as the 2nd argument of cast()
# do-while loop
# the while after do
# optional multiple expr under a case
# identifier that CAN be a Haxe keyword
# type-param can be a normal type or a constant literal...
# type-param part of a type
# ie. the <A,B> path in Map<A,B>
# optional type-param that may include constraint
# ie. <T:Constraint, T2:(ConstraintA,ConstraintB)>
# the optional constraint inside type-param
# a parenthesis expr that contain exactly one expr
# optional more var decl.
# optional assignment
# optional type flag
# colon as part of a ternary operator (?:)
# after a call param
# bracket can be block or object
# is object
# is block
# code block
# object in key-value pairs
# a key of an object
# after a key-value pair in object
# Compiler switches with one dash
# Compiler switches with two dashes
# Targets and other options that take an argument
# Options that take only numerical arguments
# An option that defines the size, the fps and the background
# color of a flash movie
# options with two dashes that take arguments
# Single line comment; multiline ones are not allowed.
# Syntax from syntax/sas.vim by James Kidd <james.kidd@covance.com>
# SAS is multi-line regardless, but * is ended by ;
# Special highlight for proc, data, quit, run
# Special highlight cards and datalines
# Special highlight for put NOTE|ERROR|WARNING (order matters)
# Keywords, statements, functions, macros
# Strings and user-defined variables and macros (order matters)
# AFAIK, macro variables are not evaluated in single quotes
# (r'&', Name.Variable, 'validvar'),
# SAS numbers and special variables
# 'operators': [
# Whitespace / Comment
# Block-statement
# Line-statement
# String / Char
# Operator
# Variable types
# Net types
# Simulation control tasks (20.2)
# Simulation time functions (20.3)
# Timescale tasks (20.4)
# Conversion functions
# Data query functions (20.6)
# Array query functions (20.7)
# Math functions (20.8)
# Bit vector system functions (20.9)
# Severity tasks (20.10)
# Assertion control tasks (20.12)
# Sampled value system functions (20.13)
# Coverage control functions (20.14)
# Probabilistic distribution functions (20.15)
# Stochastic analysis tasks and functions (20.16)
# PLA modeling tasks (20.17)
# Miscellaneous tasks and functions (20.18)
# Display tasks (21.2)
# File I/O tasks and functions (21.3)
# Memory load tasks (21.4)
# Memory dump tasks (21.5)
# Command line input (21.6)
# VCD tasks (21.7)
# from https://github.com/leanprover/vscode-lean/blob/1589ca3a65e394b3789409707febbd2d166c9344/syntaxes/lean.json#L186C20-L186C217
# same as Lean3Lexer, with `!` and `?` allowed
# Sorts
# hotkeys and labels
# technically, hotkey names are limited to named keys and buttons
# (r'.', Text),      # no cheating
# Keywords, functions, macros from au3.keywords.properties
# which can be found in AutoIt installed directory, e.g.
# c:\Program Files (x86)\AutoIt3\SciTE\au3.keywords.properties
# Line continuation
# sendkeys
# mimetypes = ['text/x-chapel']
# imaginary integers
# reals cannot end with a period due to lexical ambiguity with
# .. operator. See reference for rationale.
# integer literals
# -- binary
# -- hex
# -- octal
# -- decimal
# regular function name, including secondary
# support for legacy destructors
# allow `proc (atomic T).foo`
# consume newline
# Square brackets may be used for array indices
# and for string literal.  Look for arrays
# before matching string literals.
# everything else is not colored
# COMAL allows for some strange characters in names which we list here so
# keywords and word operators will not be recognized at the start of an
# identifier.
# nb: **NOT** re.DOTALL! (it totally wrecks comment handling)
# Note these lists are auto-generated by pwa/p2js.exw, when pwa\src\p2js_keywords.e (etc)
#Alt:
# Aside: Phix only supports/uses the ascii/non-unicode tilde
# First value, and every second after that, is a separator.
# Start with (pseudo)separator similarly as with pipes
# VariableTokenizer tokenizes this later.
# Following code copied directly from Robot Framework 2.7.5.
# Passing start to enumerate is only possible in Py 2.6+
# factor allows a file to start with a shebang
# defining words
# imports and namespaces
# tuples and classes
# other syntax
# vocab.private
# boolean constants
# symbols and literals
# everything else is text
# single line comment
# lid header
# no header match, switch to code
# binary integer
# octal integer
# floating point
# decimal integer
# hex integer
# Most operators are picked up as names and then re-flagged.
# This one isn't valid in a name though, so we pick it up now.
# Pick up #t / #f before we match other stuff with #.
# #"foo" style keywords
# #rest, #key, #all-keys, etc.
# required-init-keyword: style keywords.
# class names
# define variable forms.
# define constant forms.
# everything else. We re-flag some of these in the method above.
# annotations of atoms
# block starts
# blocks or output
# % raw ruby statements
# block ends
# jinja/django comments
# django comments
# raw jinja blocks
# filter blocks
# Genshi and Cheetah lexers courtesy of Matt Good.
# TODO support other Python syntax like $foo['bar']
# yield style and script blocks as Other
# one more than the XmlErbLexer returns
# FIXME: I want to make these keywords but still parse attributes.
# note: '\w\W' != '.' without DOTALL.
# svn keywords
# directives: begin, end
# directives: evoque, overlay
# see doc for handling first name arg: /directives/evoque/
# + minor inconsistency: the "name" in e.g. $overlay{name=site_base}
# should be using(PythonLexer), not passed out as String
# directives: if, for, prefer, test
# directive clauses (no {} expression)
# There is a special rule for allowing html in single quoted
# strings, evidently.
# negative lookbehind is for strings with embedded >
# (r'<cfoutput.*?>', Name.Builtin, '#push'),
# same as HTML lexer
# Comment start {{!  }} or {{!--
# HTML Escaping open {{{expression
# {{blockOpen {{#blockOpen {{/blockClose with optional tilde ~
# HTML Escaping close }}}
# blockClose}}, includes optional tilde ~
# {{opt=something}}
# Partials {{> ...}}
# borrowed from DjangoLexer
# tags and block tags
# output tags
# builtin logic blocks
# other builtin blocks
# end of block
# builtin tags (assign and include are handled together with usual tags)
# other tags or blocks
# end of output
# end of filters and output
# states for unknown markup
# params with colons or equals
# explicit variables
# fallback for switches / variables / un-quoted strings / ...
# end of tag
# states for different values types
# decides for variable, string, keyword or number
# states for builtin blocks
# Note that a backslash is included in the following two patterns
# PHP uses a backslash as a namespace separator
# twig comments
# raw twig blocks
# {{meal.name}}
# (click)="deleteOrder()"; [value]="test"; [(twoWayTest)]="foo.bar"
# *ngIf="..."; #f="ngForm"
# Variabletext
# inline If
# dbt's ref function
# dbt's source function
# Jinja macro
# TODO: add '*.s' and '*.asm', which will require designing an analyse_text
# method for this lexer and refactoring those from Gas and Nasm in order to
# have relatively reliable detection
# Arithmetic instructions
# Multiplication/division
# Bitwise operations
# Shifts
# Comparisons
# Move data
# Jump
# branch
# Load
# Store
# coproc: swc1 sdc1
# Concurrent load/store
# Trap handling
# Exception / Interrupt
# --- Floats -----------------------------------------------------
# Arithmetic
# "c.gt.s", "c.gt.d",
# Move Floats
# Conversion
# Math
# Arithmetic & logical
# branches
# loads
# move
# coproc: "mfc1.d",
# comparisons
# load-store
# need warning face
# Preprocessor?
# This seems quite unique to unicon -- doesn't appear in any other
# example source we have (A quick search reveals that \SELF appears in
# Perl/Raku code)
# core functions
# mosel exam mmxprs | sed -n -e "s/ [pf][a-z]* \([a-zA-Z0-9_]*\).*/'\1',/p" | sort -u
# mosel exam mmsystem | sed -n -e "s/ [pf][a-z]* \([a-zA-Z0-9_]*\).*/'\1',/p" | sort -u
# mosel exam mmjobs | sed -n -e "s/ [pf][a-z]* \([a-zA-Z0-9_]*\).*/'\1',/p" | sort -u
# common cases going from math/markup into code mode
# unnumbered list
# numbered list variant
# escaped
# links
# special chars shorthand
# escapes
# line break
# end of math mode
# named arguments in math functions
# both variables and symbols (_ isn't supported for variables)
# FIXME: make this work
## (r'(import|include)( *)(")([^"])(")',
## from imports like "import a: b" or "show: text.with(..)"
# markup is equivalent to root
# note: this allows tag names not used in HTML like <x:with-dash>,
# this is to support yet-unknown template engines and the like
# fallback cases for when there is no closing script tag
# first look for newline and then go back into root state
# if that fails just read the rest of the file
# this is similar to the error handling logic in lexer.py
# fallback cases for when there is no closing style tag
# conditional sections
# less than HTML
# xpl is XProc
# Haml can include " |\n" anywhere,
# which is ignored and used to wrap long lines.
# To accommodate this, use this custom faux dot instead.
# In certain places, a comma at the end of the line
# allows line wrapping as well.
# Scaml does not yet support the " |\n" notation to
# wrap long lines.  Once it does, use the custom faux
# dot instead.
# _dot = r'(?: \|\n(?=.* \|)|.)'
# compat
# Comments:
# Identifier:
# Constants:
# Builtin types:
# Other keywords:
# Type identifier:
# Operators:
# Punctuation:
# String:
# Binary string:
# See https://msdn.microsoft.com/en-us/library/ms174986.aspx.
# See https://msdn.microsoft.com/en-us/library/ms189822.aspx.
# See https://msdn.microsoft.com/en-us/library/ms187752.aspx.
# See https://msdn.microsoft.com/en-us/library/ms174318.aspx.
# An inter_word_char. Necessary because \w matches all alphanumeric
# Unicode characters, including ones (e.g., 𝕊) that BQN treats as special.
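The mismatch is easy to show directly: Python's `\w` matches every Unicode word character, including the mathematical letters BQN assigns special meaning to, so an explicit ASCII class (or an equivalent custom set) is needed to delimit ordinary words.

```python
import re

# U+1D54A MATHEMATICAL DOUBLE-STRUCK CAPITAL S -- the BQN 𝕊
s = '\U0001D54A'
print(bool(re.fullmatch(r'\w', s)))            # True: \w matches it
print(bool(re.fullmatch(r'[A-Za-z0-9_]', s)))  # False: ASCII class excludes it
```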
# '#' is a comment that continues to the end of the line
# Null Character
# Literal representation of the null character
# This token type is used for diamond, commas
# and  array and list brackets and strand syntax
# Expression Grouping
# Since this token type is important in BQN, it is not included in
# Includes the numeric literals and the Nothing character
# 2-Modifiers
# Needs to come before the 1-modifiers due to _𝕣 and _𝕣_
# 1-Modifiers
# The monadic or dyadic function primitives and function
# operands and arguments, along with function self-reference
# Define/Export/Change
# Blocks
# Extra characters
# ================
# FIX UNICODE LATER
# ncnamestartchar = (
# ncnamechar = ncnamestartchar + (r"|-|\.|[0-9]|\u00B7|[\u0300-\u036F]|"
# elementcontentchar = (r'\t|\r|\n|[\u0020-\u0025]|[\u0028-\u003b]|'
# quotattrcontentchar = (r'\t|\r|\n|[\u0020-\u0021]|[\u0023-\u0025]|'
# aposattrcontentchar = (r'\t|\r|\n|[\u0020-\u0025]|[\u0028-\u003b]|'
# CHAR elements - fix the above elementcontentchar, quotattrcontentchar,
# x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
# transition to root always - don't pop off stack
# if we have run out of our state stack, pop whatever is on the pygments
# state stack
# make sure we have at least the root state on invalid inputs
# i don't know if i'll need this, but in case, default back to root
# xquery comments
# (r'\)|\?|\]', Punctuation, '#push'),
# eXist specific XQUF
# support for current context on rhs of Simple Map Operator
# finally catch all string literals and stay in operator state
# Marklogic specific type?
# handle operator state
# order on numbers matters - handle most complex first
# NAMESPACE DECL
# NAMESPACE KEYWORD
# VARNAMEs
# ANNOTATED GLOBAL VARIABLES AND FUNCTIONS
# ITEMTYPE
# (r'</', Name.Tag, 'end_tag'),
# ATTRIBUTE
# ELEMENT
# PROCESSING_INSTRUCTION
# sometimes return can occur in root state
# URI LITERALS - single and double quoted
# Marklogic specific
# STANDALONE QNAMES
# QML is based on javascript, so much of this is taken from the
# JavascriptLexer above.
# pasted from JavascriptLexer, with some additions
# QML insertions
# the rest from JavascriptLexer
# character group definitions ::
# terminal productions ::
# Lexer token definitions ::
# keywords ::
# IRIs ::
# blank nodes ::
# prefixed names ::
# function names ::
# boolean literals ::
# double literals ::
# decimal literals ::
# integer literals ::
# operators ::
# punctuation characters ::
# line comments ::
# strings ::
# Simplified character range
# Base / prefix
# The shorthand predicate 'a'
# IRIREF
# PrefixedName
# BlankNodeLabel
# operator keywords ::
# Handle multi-line comments
# Handle numbers
# Handle variable names in things
# Handle strings
# variable assignment
# Word operators
# Table names
# interpolation - e.g. $(variableName)
# Quotes denote a field/file name
# Square brackets denote a field/file name
# Operator symbols
# Strings denoted by single quotes
# Words as text
# Basic punctuation
# <built-in function some>
# Contents generated by the script lilypond-builtins-generator.ly
# found in the external/ directory of the source tree.
# Refer to tamil.utf8.tamil_letters from open-tamil for a stricter version of this.
# This much simpler version is close enough, and includes combining marks.
# Syntax based on
# - http://fmwww.bc.edu/RePEc/bocode/s/synlightlist.ado
# - https://github.com/isagalaev/highlight.js/blob/master/src/languages/stata.js
# - https://github.com/jpitblado/vim-stata/blob/master/syntax/stata.vim
# Comments are a complicated beast in Stata because they can be
# nested and there are a few corner cases with that. See:
# - github.com/kylebarron/language-stata/issues/90
# - statalist.org/forums/forum/general-stata-discussion/general/1448244
# this ends and restarts a comment block, but we need to catch this so
# that it doesn't start _another_ level of comment blocks
# Match anything else as a character inside the comment
# A // breaks out of a comment for the rest of the line
# `"compound string"' and regular "string"; note the former are
# nested.
# A local is usually
# However, there are all sorts of weird rules wrt edge
# cases. Instead of writing 27 exceptions, anything inside
# `' is a local.
# A global is more restricted, so we do follow rules. Note only
# locals explicitly enclosed ${} can be nested.
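Those two rules can be sketched with hypothetical regexes (an assumed simplification, per the note above, not the full set of edge cases):

```python
import re

# Anything inside `' is a local; a global is $name or the explicitly
# delimited ${name}, and only the latter form can be nested.
LOCAL = re.compile(r"`[^']*'")
GLOBAL = re.compile(r'\$(?:\{[^}]*\}|\w+)')

print(LOCAL.findall("display `myvar'"))         # ["`myvar'"]
print(GLOBAL.findall('display $g1 and ${g2}'))  # ['$g1', '${g2}']
```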
# Built in functions and statements
# http://www.stata.com/help.cgi?operators
# Stata numbers
# Stata formats
# Everything below this line is auto-generated from the GoogleSQL source code.
# slices
# treat anything as word
# S...S(...) or S...0
# the singleton 0
# a''...
# (...+...)
# no matches
# ~<...>
# Aa:<...>
# <...&...>
# ...=...
# exclude whole match
# this group matched
# there's whitespace in rules
# try line number
# actual number present
# whitespace is required after a line number
# at this point it could be a comment
# anything after the closing bracket is invalid
# do not attempt to process the rest
# fantasy push or pop
# one formula, possibly containing subformulae
# not well-formed
# skip whitespace after formula
# rule proving this formula a theorem
# skip whitespace after rule
# line marker
# if orig was never defined, fine
# Really 'root', not '#push': in 'interpolation',
# parentheses inside the interpolation expression are
# Punctuation, not String.Interpol.
# Keywords.
# Comments.
# Multiline, can nest.
# Single line.
# Attribute or shebang.
# Names and operators.
# Strings.
# Raw string
# Other string
# Escape.
# Byte escape.
# Unicode escape.
# Long Unicode escape.
# All remaining characters.
# redefine closing paren to be String.Interpol
# line continuations
# It seems the builtin types aren't actually keywords, but
# can be used as functions. So we need two declarations.
# imaginary_lit
# float_lit
# int_lit
# -- octal_lit
# -- hex_lit
# -- decimal_lit
# char_lit
# StringLiteral
# -- raw_string_lit
# -- interpreted_string_lit
# Tokens
# yes, all of these are valid in identifiers
# must have exactly two hex digits
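An "exactly two hex digits" escape rule can be sketched with a bounded quantifier (an illustrative pattern; the real rule may differ):

```python
import re

# \x must be followed by exactly two hex digits; {2} enforces the count,
# and fullmatch rejects both shorter and longer digit runs.
HEX_ESCAPE = re.compile(r'\\x[0-9a-fA-F]{2}')

print(bool(HEX_ESCAPE.fullmatch(r'\x4F')))   # True
print(bool(HEX_ESCAPE.fullmatch(r'\x4')))    # False: one digit is too few
print(bool(HEX_ESCAPE.fullmatch(r'\x4F0')))  # False: three digits is too many
```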
# there doesn't really seem to be a PDDL homepage.
# the following are requirements
# - handle Experimental and deprecated tags with specific tokens
# - handle Angles and Durations with specific tokens
# if blob size doesn't match blob format (example: "\B(2)(aaa)")
# yield blob as a string
# if blob is well formatted, yield as Escape
# +1 is the ending ")"
# deprecated keywords, use a meaningful token when available
# ignored keywords, use a meaningful token when available
# don't match single | and &
# Float, Integer, Angle and Duration
# handle binary blob in strings
# from http://pygments.org/docs/lexerdevelopment/#changing-states
#triggers
# Data Types: by PICTURE and USAGE
# Operators: **, *, +, -, /, <, >, <=, >=, =, <>
# Logical (?): NOT, AND, OR
# Reserved words:
# http://opencobol.add1tocobol.com/#reserved-words
# Intrinsics:
# http://opencobol.add1tocobol.com/#does-opencobol-implement-any-intrinsic-functions
# (r'[\s]+', Text),
# Figurative constants
# Reserved words STATEMENTS and other bolds
# inactive reserved words
# (r'(::)', Keyword.Declaration),
# \"[^\"\n]*\"|\'[^\'\n]*\'
# apparently strings can be delimited by EOL if they are continued
# in the next line
# function calls
# method implementation
# method calls
# call methodnames returning style
# text elements
# keywords with dashes in them.
# these need to be first, because for instance the -ID part
# of MESSAGE-ID wouldn't get highlighted if MESSAGE was
# first in the list of keywords.
# keyword combinations
# simple combinations
# single word keywords.
# operators which look like variable names before
# parsing variable names.
# standard operators after variable names,
# because < and > are part of field symbols.
# Lazy catch-all
# Syntax:
# https://github.com/gooddata/GoodData-CL/raw/master/cli/src/main/resources/com/gooddata/processor/COMMANDS.txt
# Function call
# Argument list
# Space is not significant
# IDENTITY
# IDENTIFIER
# NUMBER
# STRING
# :=
# OBJECT
# FUNCNAME
# TODO String interpolation @VARNAME@ inner matches
# TODO keyword_arg: value inner matches
# This list was extracted from the v0.58 reference manual
# Control flow
# Enumeration
# any base
# C-like hex
# C-like bin
# float exp
# the root rules
# ignored whitespaces
# line breaks
# a comment
# the '%YAML' directive
# the %TAG directive
# document start and document end indicators
# indentation spaces
# trailing whitespaces after directives or a block scalar indicator
# the %YAML directive
# the version number
# a tag handle and the corresponding prefix
# block scalar indicators and indentation spaces
# trailing whitespaces are ignored
# whitespaces preceding block collection indicators
# block collection indicators
# the beginning of a block line
# an indented line in the block context
# the line end
# whitespaces separating tokens
# key with colon
# tags, anchors and aliases,
# block collections and scalars
# flow collections and quoted scalars
# a plain scalar
# tags, anchors, aliases
# a full-form tag
# a tag in the form '!', '!suffix' or '!handle!suffix'
# an anchor
# an alias
# implicit key
# literal and folded scalars
# a flow sequence
# a flow mapping
# a single-quoted scalar
# a double-quoted scalar
# the content of a flow collection
# simple indicators
# tags, anchors and aliases
# nested collections and quoted scalars
# a flow sequence indicated by '[' and ']'
# include flow collection rules
# the closing indicator
# a flow mapping indicated by '{' and '}'
# block scalar lines
# empty line
# indentation spaces (we may leave the state here)
# line content
# the content of a literal or folded scalar
# indentation indicator followed by chomping flag
# chomping flag followed by indentation indicator
# ignored and regular whitespaces in quoted scalars
# leading and trailing whitespaces are ignored
# line breaks are ignored
# other whitespaces are a part of the value
# single-quoted scalars
# include whitespace and line break rules
# escaping of the quote character
# regular non-whitespace characters
# the closing quote
# double-quoted scalars
# escaping of special characters
# escape codes
# the beginning of a new line while scanning a plain scalar
# empty lines
# indentation spaces (we may leave the block line state here)
# a plain scalar in the block context
# the scalar ends with the ':' indicator
# the scalar ends with whitespaces followed by a comment
# a plain scalar in the flow context
# the scalar ends with an indicator character
# the scalar ends with a comment
# No validation of integers, floats, or constants is done.
# As long as the characters are members of the following
# sets, the token will be considered valid. For example,
# true|false|null
# // or /*
# The queue is used to store data that may need to be tokenized
# differently based on what follows. In particular, JSON object
# keys are tokenized differently than string values, but cannot
# be distinguished until punctuation is encountered outside the
# string.
# A ":" character after the string indicates that the string is
# an object key; any other character indicates the string is a
# regular string value.
# The queue holds tuples that contain the following data:
# By default the token type of text in double quotes is
# String.Double. The token type will be replaced if a colon
# is encountered after the string closes.
# Fall through so the new character can be evaluated.
# Exhaust the queue. Accept the existing token types.
# The first letters of true|false|null
# Yield from the queue. Replace string token types.
# There can be only three types of tokens before a ':':
# Whitespace, Comment, or a quoted string.
# If it's a quoted string we emit Name.Tag.
# Otherwise, we yield the original token.
# In all other cases this would be invalid JSON,
# but this is not a validating JSON lexer, so it's OK.
# This is the beginning of a comment.
# Yield any remaining text.
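A minimal sketch of the deferred-retagging idea described above: a double-quoted string cannot be classified as an object key or a plain value until we know whether a `:` follows it, so already-emitted string tokens are kept reachable and retagged later. All names here are illustrative, not the lexer's actual API.

```python
def retag_strings(tokens):
    """tokens: list of (kind, text) pairs. Strings start out as 'string';
    a string is promoted to 'key' when the next non-space token is ':'."""
    out = []
    pending = None  # index of the last string still awaiting classification
    for kind, text in tokens:
        if kind == "string":
            pending = len(out)
            out.append(["string", text])
        elif text == ":" and pending is not None:
            out[pending][0] = "key"  # the earlier string was an object key
            out.append([kind, text])
            pending = None
        else:
            if kind != "space":
                pending = None  # anything else closes the window
            out.append([kind, text])
    return [tuple(t) for t in out]
```

The real lexer streams tokens through a queue rather than building a list, but the decision point is the same: classification happens only once punctuation is seen outside the string.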
# primitive types
# string types
# exception types
# typed array types
# buffer source types
#: only one /* */ style comment
# preprocessor directives: without whitespace
# or with whitespace
# standalone option, supported by some INI parsers
# String keys, which obey somewhat normal escaping
# Bare keys (includes @)
# delete value
# As far as I know, .reg files do not support line continuation.
# ending a comment or whitespace-only line
# eat whitespace at the beginning of a line
# start lexing a key
# non-escaped key characters
# separator is the first non-escaped whitespace or colon or '=' on the line;
# if it's whitespace, = and : are gobbled after it
# maybe we got no value after all
# non-escaped value characters
# end the value on an unescaped newline
# line continuations; these gobble whitespace at the beginning of the next line
# other escapes
# Kconfig *always* interprets a tab as 8 spaces, so this is the default.
# Edit this if you are in an environment where KconfigLexer gets expanded
# input (tabs expanded to spaces) and the expansion tab width is != 8,
# e.g. in connection with Trac (trac.ini, [mimeviewer], tab_width).
# Value range here is 2 <= {tab_width} <= 8.
# Regex matching a given indentation {level}, assuming that indentation is
# a multiple of {tab_width}. In other cases there might be problems.
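A sketch of how such a level-aware indentation regex can be built, assuming one indentation "step" is a tab, spaces followed by a tab, or `tab_width` spaces. This is illustrative only; the lexer's actual pattern handles more edge cases.

```python
import re

TAB_WIDTH = 8  # Kconfig always expands a tab to 8 spaces

def indent_at_least(level):
    # One step: a tab, up to tab_width-1 spaces then a tab, or a full
    # run of tab_width spaces; require `level` or more repetitions.
    step = r'(?:\t| {1,%d}\t| {%d})' % (TAB_WIDTH - 1, TAB_WIDTH)
    return step + '{%d,}' % level
```

Mixed tabs and spaces that do not align to `tab_width` can still defeat this, which is exactly the caveat stated above.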
# Adjust this if new kconfig file names appear in your environment
# No re.MULTILINE, indentation-aware help text needs line-by-line handling
# If indentation >= {level} is detected, enter state 'indent{level}'
# Print paragraphs of indentation level >= {level} as String.Doc,
# ignoring blank lines. Then return to 'root' state.
# Help text is indented, multi-line and ends when a lower indentation
# level is detected.
# Skip blank lines after help token, if any
# Determine the first help line's indentation level heuristically(!).
# Attention: this is not perfect, but works for 99% of "normal"
# indentation schemes up to a max. indentation level of 7.
# for incomplete help sections without text
# Handle text for indentation levels 7 to 1
# XXX: /integer is a subnet mark, but what is /IP ?
# There is no test where it is used.
# mimetype
# (r'[a-zA-Z._-]+', Keyword),
# catch all
# pathname
# leftover characters
# dockerfile line break regex
# Parse a terraform heredoc
# match: 1 = <<[-]?, 2 = name 3 = rest of line
# <<[-]?
# leading whitespace is always accepted
# e.g. terraform {
# e.g. egress {
# Assignment with attributes, e.g. something = ...
# Assignment with environment variables and similar, e.g. "something" = ...
# or key value assignment, e.g. "SlotName" : ...
# Functions, e.g. jsonencode(element("value"))
# List of attributes, e.g. ignore_changes = [last_modified, filename]
# e.g. resource "aws_security_group" "allow_tls" {
# e.g. backend "consul" {
# here-doc style delimited strings
# variable definitions
# keyword lines
# variable references
# you can escape literal "$" as "$$"
# (Leading space is allowed...)
# flags to on
# built-in special values
# repository
# architecture
# outfile
# Based on the TOML spec: https://toml.io/en/v1.0.0
# The following is adapted from CPython's tomllib:
# Note that we make an effort in order to distinguish
# moments at which we're parsing a key and moments at
# which we're parsing a value. In the TOML code
# the first "1234" should be Name, the second Integer.
# Assignment keys
# After "=", find a value
# Table header
# Start of bare key (only ASCII is allowed here).
# Quoted key
# Dots act as separators in keys
# This is like 'key', but highlights the name components
# and separating dots as Keyword because it looks better
# when the whole table header is Keyword. We do highlight
# strings as strings though.
# Inline whitespace allowed
# Datetime, baretime
# Recognize as float if there is a fractional part
# and/or an exponent.
# Infinities and NaN
# Start of array
# Start of inline table
# Whitespace, including newlines, is ignored inside arrays,
# and comments are allowed.
# End of array
# Parse a value and come back
# Note that unlike inline arrays, inline tables do not
# allow newlines or comments.
# Keys
# End of inline table
# Comment: # ...
# Inline dictionary: {...}
# Inline list: [...]
# empty multiline string item: >
# multiline string item: > ...
# empty list item: -
# list item: - ...
# empty multiline key item: :
# multiline key item: : ...
# empty dict key item: ...:
# dict key item: ...: ...
# HACK: somehow we miscounted our braces
# dim: special rule
# const: special rule
# option: special rule
# an only operator
# binary (but i have never seen...)
# range
# concat
# repetition (<a>*<b>element) including nRule
# Strictly speaking, these are not keywords but
# are called `Core Rules'.
# nonterminals (ALPHA *(ALPHA / DIGIT / "-"))
# punctuation
# All operators
# Other punctuation
# Character classes
# Single and double quoted strings (with optional modifiers)
# Nonterminals are not whitespace, operators, or punctuation
# Fallback
# Although these all seem to be keywords
# https://github.com/microsoft/Kusto-Query-Language/blob/master/src/Kusto.Language/Syntax/SyntaxFacts.cs
# it appears that only the ones with tags here
# https://github.com/microsoft/Kusto-Query-Language/blob/master/src/Kusto.Language/Parser/QueryGrammar.cs
# are highlighted in the Azure portal log query editor.
# From
# Numbers can take the form 1, .1, 1., 1.1, 1.1111, etc.
# These are dummy manual pages, not actual functions
# no re.MULTILINE
# Vernacular commands
# Gallina
# Tactics
# Terminators
# Control
# 'as', 'assert', 'begin', 'class', 'constraint', 'do', 'done',
# 'downto', 'else', 'end', 'exception', 'external', 'false',
# 'for', 'fun', 'function', 'functor', 'if', 'in', 'include',
# 'inherit', 'initializer', 'lazy', 'let', 'match', 'method',
# 'module', 'mutable', 'new', 'object', 'of', 'open', 'private',
# 'raise', 'rec', 'sig', 'struct', 'then', 'to', 'true', 'try',
# 'type', 'val', 'virtual', 'when', 'while', 'with'
# 'Π', 'Σ', # Not defined in the standard library
# Very weak heuristic to distinguish the Set vernacular from the Set sort
# (r'\b([A-Z][\w\']*)(\.)', Name.Namespace, 'dotted'),
# Extra keywords to highlight only in this scope
# Consume comments like ***** as one token
# Match the contents within delimiters such as /<contents>/
# TODO: regexes can have other delims
# Who decided that doublequote was a good comment character??
# Inexact list.  Looks decent.
# These are postprocessed below
# TODO: builtins are only subsequent tokens on lines
# ----- pseudo-states for inclusion -----
# ISO 8601-based date/time constraints
# ISO 8601-based duration constraints + optional trailing slash
# ISO 8601 date with optional 'T' ligature
# ISO 8601 time
# ISO 8601 duration
# term code
# list continuation
# ADL 1.4 ordinal constraint
# ----- real states -----
# effective URI terminators
# handle +/-
# if it is a code
# if it is tuple with attribute names
# if it is an integer, i.e. Xpath child index
# handle use_archetype statement
# attribute name
# x-ref path
# x-ref path starting with key
# is_in / not is_in char
# there_exists / not there_exists / for_all / and / or
# regex in slot or as string constraint
# for cardinality etc
# [{ is start of a tuple value
# type name
# for lists of values
# for assumed value
# blank line ends
# comment-only line
# repeating the following two rules from the root state enables multi-line
# strings that start in the first column to be dealt with
# template overlay delimiter
# numbers and version ids
# Guids
# TODO: nested comments (* (* ... *) ... (* ... *) *) not supported!
# char code
# hexadecimal number
# real number
# decimal whole number
# single quoted string
# double quoted string
# Logical AND Operator
# Logical NOT Operator
# Split up in multiple functions so it's importable by jython, which has a
# per-method size limit.
# groups 1, 2
# group 3
# groups 4, 5, 6, 7, 8
# this function template-izes the tag line for a specific type of tag, which will
# have a different pattern and a different token; otherwise, everything about a tag
# line is the same
# official localized tags, Notes and Title
# (normal part is insensitive, locale part is sensitive)
# other official tags
# user-defined tags
# non-conforming tags, but still valid
# Addon Files
# at time of writing, this file suffix conflicts with one of Tex's in
# markup.py. Tex's analyse_text() appears to be definitive (binary) and does not
# share any likeness to WoW TOCs, which means we won't have to compete with it by
# arbitrary increments in score.
# while not required, an almost certain marker of WoW TOCs is the interface tag
# if this tag is omitted, players will need to opt in to loading the addon with
# an options change ("Load out of date addons"). The value is also standardized:
# `<major><minor><patch>`, with minor and patch being two-digit zero-padded.
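Because minor and patch are two-digit zero-padded, the interface number can be split back into its components by position. A small sketch (the function name is made up for illustration):

```python
def split_interface(value):
    # "100207" -> (10, 2, 7): the last two digits are the patch, the two
    # before those the minor, and whatever precedes them the major.
    s = str(value)
    return int(s[:-4]), int(s[-4:-2]), int(s[-2:])
```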
# Lua file listing is good marker too, but probably conflicts with many other
# lexers
# ditto for XML files, but they're less used in WoW TOCs
# Regular expressions for analyse_text()
# Identifiers for analyse_text()
# 1 = $, 2 = delimiter, 3 = $
# 4 = string contents
# 5 = $, 6 = delimiter, 7 = $
# Have a copy of the entire text to be used by `language_callback`.
# TODO: better logging
# print >>sys.stderr, "language not found:", lang
# cast
# quoted identifier
# psql variable in SQL
# FIXME: use inheritance
# extend the keywords list
# Add specific PL/pgSQL rules (before the SQL ones)
# actually, a datatype
# #variable_conflict
# not public
# prompt-output cycle
# consume the lines of the command: start with an optional prompt
# and continue until the end of command is detected
# Identify a shell prompt in case of psql commandline example
# Identify a psql prompt
# Check if this is the end of the command
# TODO: better handle multiline comments at the end with
# a lexer with an external state?
# Emit the combined stream of command and prompt(s)
# Emit the output lines
# push the line back to have it processed by the prompt
# This matches estimated cost and effectively measured counters with ANALYZE
# Then, we move to instrumentation state
# Misc keywords
# We move to sort state in order to emphasize specific keywords (especially disk access)
# These keywords can be followed by an object, like a table
# These keywords can be followed by a predicate
# Special keyword to handle ON CONFLICT
# Special keyword for InitPlan or SubPlan
# Emphasize these keywords
# join keywords
# Treat "on" and "using" as a punctuation
# explain header
# Settings
# Handle JIT counters
# Handle Triggers counters
# matches any kind of parenthesized expression
# the first opening paren is matched by the 'caller'
# This is a cost or analyze measure
# if object_name is parenthesized, mark opening paren as
# punctuation, call 'expression', and exit state
# matches possibly schema-qualified table and column names
# if we encounter a comma, another object is listed
# special case: "*SELECT*"
# Variable $1 ...
# if predicate is parenthesized, mark paren as punctuation
# otherwise color until newline
# TODO: Backslash escapes?
# not a real string literal in ANSI SQL
# allow $s in strings for Oracle
# Float variant 1, for example: 1., 1.e2, 1.2e3
# Float variant 2, for example: .1, .1e2
# Float variant 3, for example: 123e45
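The three variants can be sketched as one alternation; these patterns are illustrative rather than the lexer's exact rules.

```python
import re

float_re = re.compile(r'''
      \d+\.\d*(?:e[+-]?\d+)?   # variant 1: 1., 1.e2, 1.2e3
    | \.\d+(?:e[+-]?\d+)?      # variant 2: .1, .1e2
    | \d+e[+-]?\d+             # variant 3: 123e45
''', re.IGNORECASE | re.VERBOSE)
```

Note that a plain integer like `123` matches none of the three branches: it has neither a decimal point nor an exponent.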
# Below we use \w even for the first "real" character because
# tokens starting with a digit have already been recognized
# as Number above.
# names for temp tables and anything else
# parameter for prepared statements
# Found T-SQL variable declaration.
# We need to check if there are any names using
# backticks or brackets, as otherwise both are 0
# and 0 >= 2 * 0, so we would always assume it's true
# Found at least twice as many [name] as `name`.
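The guard described above can be sketched as follows; the function and regexes are hypothetical simplifications of the analysis heuristic.

```python
import re

def prefers_brackets(text):
    # Without at least one delimited name, 0 >= 2 * 0 is vacuously true,
    # so require a nonzero [name] count before applying the ratio test.
    bracket_names = len(re.findall(r'\[[a-zA-Z_]\w*\]', text))
    backtick_names = len(re.findall(r'`[a-zA-Z_]\w*`', text))
    return bool(bracket_names) and bracket_names >= 2 * backtick_names
```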
# MySQL requires paired hex characters in this form.
# Mandatory integer, optional fraction and exponent
# Mandatory fraction, optional integer and exponent
# Exponents with integer significands are still floats
# Integers that are not in a schema object name
# Date literals
# Time literals
# Timestamp literals
# Date part
# Whitespace between date and time
# Time part
# For demonstrating prepared statements
# Exceptions; these words tokenize differently in different contexts.
# In all other known cases, "SET" is tokenized by MYSQL_DATATYPES.
# Schema object names
# Note: Although the first regex supports unquoted all-numeric
# identifiers, this will not be a problem in practice because
# numeric literals have already been handled above.
# Multiline comment substates
# String substates
# ----------------
# Variable substates
# ------------------
# Schema object name substates
# ----------------------------
# "Name.Quoted" and "Name.Quoted.Escape" are non-standard, but
# formatters will style them as "Name" by default and add
# additional styles based on the token name. This gives users
# flexibility to add custom styles as desired.
# Same logic as above in the TSQL analysis
# Found at least twice as many `name` as [name].
# Constants, types, keywords, functions, operators
# for backwards compatibility with Pygments 1.5
# The trailing ?, rather than *, avoids a geometric performance drop here.
# Hexadecimal part in a hexadecimal integer/floating-point literal.
# This includes matching the decimal separators.
# Decimal part in a decimal integer/floating-point literal.
# Integer literal suffix (e.g. 'ull' or 'll').
# Identifier regex with C and C++ Universal Character Name (UCN) support.
# Single and multiline comment regexes
# Beware not to use *? for the inner content! When these regexes
# are embedded in larger regexes, that can cause the stuff*? to
# match more than it would have if the regex had been used in
# a standalone way ...
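A small demonstration of the pitfall being warned about, using illustrative patterns: a lazy `.*?` looks correct in isolation, but once embedded in a larger regex, backtracking lets it expand past the first terminator.

```python
import re

lazy = r'/\*.*?\*/'
# Standalone, the lazy match stops at the first */ as expected:
# matches only '/* a */' in '/* a */ x */'

# Embedded, .*? grows until whatever follows can match, swallowing
# the text between the two comments:
m = re.match(lazy + 'B', '/* a */ x */B', re.S)
# m covers the entire string, not just the first comment

# An "unrolled" inner pattern that cannot contain */ never over-matches:
safe = r'/\*[^*]*\*+(?:[^/*][^*]*\*+)*/'
# re.match(safe + 'B', '/* a */ x */B', re.S) finds no match at all,
# because the comment ends at the first */ and 'B' does not follow it
```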
# Regex to match optional comments
# Labels:
# Line start and possible indentation.
# Not followed by keywords which can be mistaken as labels.
# Actual label, followed by a single colon.
# Hexadecimal floating-point literals (C11, C++17)
# Vector intrinsics
# Microsoft-isms
# template specification
# Mark identifiers preceded by `case` keyword as constants.
# C++11 raw strings
# C++ Microsoft-isms
# Offload C++ extensions, http://offload.codeplay.com/
# 'enum class' and 'enum struct' C++11 support
# log start/end
# hack
# normal msgs
# /me msgs
# join/part msgs
# (r'^#$', Comment),
# application/calendar+xml can be treated as application/xml
# if there's not a better match.
# *.todotxt is not a standard extension for Todo.txt files; including it
# makes testing easier, and also makes autodetecting file type easier.
# Aliases mapping standard token types of Todo.txt format concepts
# Chosen to de-emphasize complete tasks
# Incomplete tasks should look like plain text
# Priority should have most emphasis to indicate importance of tasks
# Dates should have next most emphasis because time is important
# Project and context should have equal weight, and be in different colors
# If tag functionality is added, it should have the same weight as Project
# and Context, and a different color. Generic.Traceback would work well.
# Regex patterns for building up rules; dates, priorities, projects, and
# contexts are all atomic
# TODO: Make date regex more ISO 8601 compliant
# Compound regex expressions
# Should parse starting at beginning of line; each line is a task
# Complete task entry points: two total:
# 1. Complete task with two dates
# 2. Complete task with one date
# Incomplete task entry points: six total:
# 1. Priority plus date
# 2. Priority only
# 3. Leading date
# 4. Leading context
# 5. Leading project
# 6. Non-whitespace catch-all
# Parse a complete task
# Newline indicates end of task, should return to root
# Tokenize contexts and projects
# Tokenize non-whitespace text
# Tokenize whitespace not containing a newline
# Parse an incomplete task
# square bracket literals
# Join
# Union, Intersection and Subtraction
# Concatenation
# Epsilon Transition
# EOF Actions
# Global Error Actions
# Local Error Actions
# To-State Actions
# From-State Actions
# Transition Actions and Priorities
# Repetition
# Negation
# Grouping
# keep host code in largest possible chunks
# exclude unsafe characters
# allow escaped { or }
# strings and comments may safely contain unsafe characters
# multi-line javadoc-style comment
# ruby comment
# regular expression: There's no reason for it to start
# with a * and this stops confusion with comments.
# / is safe now that we've handled regex and javadoc comments
# a single % sign is okay, just not 2 of them
# ruby/ragel comment
# regular expression
# Single Line FSM.
# Please don't put a quoted newline in a single line FSM.
# That's just mean. It will break this.
# Multi Line FSM.
# keep ragel code in largest possible chunks.
# } is okay as long as it's not followed by %
# ...well, one %'s okay, just not two...
# ...and } is okay if it's escaped
# allow / if it's preceded with one of these symbols
# (ragel EOF actions)
# specifically allow regex followed immediately by *
# so it doesn't get mistaken for a comment
# allow / as long as it's not followed by another / or by a *
# We want to match as many of these as we can in one block.
# Not sure if we need the + sign here,
# does it help performance?
# square bracket literal
# optionsSpec
# tokensSpec
# attrScope
# exception
# rule
# throwsSpec
# Additional throws
# ruleScopeSpec - scope followed by target language code or name of action
# TODO finish implementing other possibilities for scope
# L173 ANTLRv3.g from ANTLR book
# ruleAction
# finished prelims, go to rule alts!
# These might need to go in a separate 'block' state triggered by (
# Tokens start with capital letter.
# Rules start with small letter.
# backslashes are okay, as long as we are not backslashing a %
# Now that we've handled regex and javadoc comments
# it's safe to let / through.
# keep host code in largest possible chunks.
# http://www.antlr.org/wiki/display/ANTLR3/Code+Generation+Targets
# Antlr language is Java by default
# Simple long string
# Simple long string alt
# Start long string
# Mid long string
# End long string
# Simple String
# Start string
# Mid String
# End String
# Extern blocks won't be Lexed by Nit
# temporaries
# Not perfect: can't allow whitespace at the beginning
# without breaking everything
# else pop
# This state is a bit tricky since
# we can't just pop this state
# skip whitespace and comments
# squeak chunk delimiter
# Squeak fileout format (optional)
# BSD Make
# GNU Make
# GNU Automake
# Many makefiles have $(BIG_CAPS) style variables
# recipes (need to allow spaces because of expandtabs)
# assignment
# expansions
# (r'(ADD_CUSTOM_COMMAND|ADD_CUSTOM_TARGET|ADD_DEFINITIONS|'
# r'ADD_DEPENDENCIES|ADD_EXECUTABLE|ADD_LIBRARY|ADD_SUBDIRECTORY|'
# r'ADD_TEST|AUX_SOURCE_DIRECTORY|BUILD_COMMAND|BUILD_NAME|'
# r'CMAKE_MINIMUM_REQUIRED|CONFIGURE_FILE|CREATE_TEST_SOURCELIST|'
# r'ELSE|ELSEIF|ENABLE_LANGUAGE|ENABLE_TESTING|ENDFOREACH|'
# r'ENDFUNCTION|ENDIF|ENDMACRO|ENDWHILE|EXEC_PROGRAM|'
# r'EXECUTE_PROCESS|EXPORT_LIBRARY_DEPENDENCIES|FILE|FIND_FILE|'
# r'FIND_LIBRARY|FIND_PACKAGE|FIND_PATH|FIND_PROGRAM|FLTK_WRAP_UI|'
# r'FOREACH|FUNCTION|GET_CMAKE_PROPERTY|GET_DIRECTORY_PROPERTY|'
# r'GET_FILENAME_COMPONENT|GET_SOURCE_FILE_PROPERTY|'
# r'GET_TARGET_PROPERTY|GET_TEST_PROPERTY|IF|INCLUDE|'
# r'INCLUDE_DIRECTORIES|INCLUDE_EXTERNAL_MSPROJECT|'
# r'INCLUDE_REGULAR_EXPRESSION|INSTALL|INSTALL_FILES|'
# r'INSTALL_PROGRAMS|INSTALL_TARGETS|LINK_DIRECTORIES|'
# r'LINK_LIBRARIES|LIST|LOAD_CACHE|LOAD_COMMAND|MACRO|'
# r'MAKE_DIRECTORY|MARK_AS_ADVANCED|MATH|MESSAGE|OPTION|'
# r'OUTPUT_REQUIRED_FILES|PROJECT|QT_WRAP_CPP|QT_WRAP_UI|REMOVE|'
# r'REMOVE_DEFINITIONS|SEPARATE_ARGUMENTS|SET|'
# r'SET_DIRECTORY_PROPERTIES|SET_SOURCE_FILES_PROPERTIES|'
# r'SET_TARGET_PROPERTIES|SET_TESTS_PROPERTIES|SITE_NAME|'
# r'SOURCE_GROUP|STRING|SUBDIR_DEPENDS|SUBDIRS|'
# r'TARGET_LINK_LIBRARIES|TRY_COMPILE|TRY_RUN|UNSET|'
# r'USE_MANGLED_MESA|UTILITY_SOURCE|VARIABLE_REQUIRES|'
# r'VTK_MAKE_INSTANTIATOR|VTK_WRAP_JAVA|VTK_WRAP_PYTHON|'
# r'VTK_WRAP_TCL|WHILE|WRITE_FILE|'
# r'COUNTARGS)\b', Name.Builtin, 'args'),
# explicitly legal
# Whitespace, punctuation and the rest
# Final resolve (for variable names and such)
# (r'(%s)' % (bb_name), Name.Variable),
# ? == Bool // % == Int // # == Float // $ == String
# preprocessor directives
# preprocessor variable (any line starting with '#' that is not a directive)
# Native data types
# Exception handling
# Flow Control stuff
# not used yet
# catch the rest
# array (of given size)
# generics
# if it starts with a line number, it shouldn't be a "modern" Basic
# like VB.net
# can't use regular \b because of X$()
# XXX: use words() here
# date or time value
# Unterminated string
# Some special cases to make functions come out nicer
# With obvious borrowings & inspiration from the Java, Python and C lexers
# Uncomment the following if you want to distinguish between
# '/*' and '/**', à la javadoc
# (r'/[*]{2}(.|\n)*?[*]/', Comment.Multiline),
# stereotypes
# (r'([a-zA-Z_]\w*)(::)([a-zA-Z_]\w*)',
# bygroups(Text, Text, Text)),
# There is no need to distinguish between String.Single and
# String.Double: 'strings' is factorised for 'dqs' and 'sqs'
# double-quoted string
# single-quoted string
# numbers: excerpt taken from the python lexer
# See erlang(3) man page
# Erlang script shebang
# EEP 43: Maps
# http://www.erlang.org/eeps/eep-0043.html
# all valid sigil terminators (excluding heredocs)
# heredocs have slightly different rules
# Various kinds of characters
# '::' has to go before atoms
# atoms
# [keywords: ...]
# @attributes
# anon func arguments
# strings and heredocs
# Flow Control.
# Types.
# Built-in operations.
# Built-in functions.
# Compiler directives.
# Not the greatest match for patterns, but generally helps
# disambiguate between start of a pattern and just a division
# operator.
# Port
# IPv4 Address
# IPv6 Address
# Numeric
# Hostnames
# The "ternary if", which uses '?' and ':', could instead be
# treated as an Operator, but colons are more frequently used to
# separate field/identifier names from their types, so the (often)
# less-prominent Punctuation is used even with '?' for consistency.
# Copypasta from the Python lexer
# Left out 'group' and 'require'
# Since they're often used as attributes
# future time operators
# past time operators
# attr=value (nvpair)
# need this construct, otherwise numeric node ids
# are matched as scores
# elem id:
# scores
# keywords (elements and other)
# builtin attributes (e.g. #uname)
# acl_mod:blah
# rsc_id[:(role|action)]
# NB: this matches all other identifiers
# expression template placeholder
# TODO single quoted strings and escape sequences outside of
# double-quoted strings
# a variable has been declared using `element` or `attribute`
# after an xsd:<datatype> declaration there may be attributes
# attributes take the form { key1 = value1 key2 = value2 ... }
# Julia v1.6.0-rc1
# prec-assignment
# prec-conditional, prec-lazy-or, prec-lazy-and
# prec-colon
# prec-plus
# prec-decl
# prec-pair
# prec-arrow
# prec-comparison
# prec-pipe
# prec-times
# prec-rational, prec-bitshift
# prec-power
# unary-ops, excluding unary-and-binary-ops
# Generated with the following in Julia v1.6.0-rc1
# M or G commands
# (r'\\\n', Text), # line continuations
# Removed in 2.072
# FloatLiteral
# -- HexFloat
# CharacterLiteral
# -- WysiwygString
# -- AlternateWysiwygString
# -- DoubleQuotedString
# -- EscapeSequence
# -- HexString
# -- DelimitedString
# -- TokenString
# Line
# don't lex .md as MiniD, reserve for Markdown
# We only look for the open bracket here since square bracket
# Separate states for both types of strings so they don't entangle
# this handles the unquoted snbt keys
# Used to denote the start of a block comment, borrowed from GitHub's mcfunction
# The start of a command (either beginning of line OR after the run keyword)
# UUID
# normal command names and scoreboards
# resource names have to be lowercase
# similar to the above, except the `:` is optional
# Scoreboard player names
# these are like unquoted strings and appear in many places
## Generic Property Container
# There are several differing instances where the language accepts
# Property Maps:
# - Starts with either `[` or `{`
# - Key separated by `:` or `=`
# - Deliminated by `,`
# Property Lists:
# - Starts with `[`
# For simplicity, these patterns match a generic, nestable structure.
# This allows some "illegal" structures, but we'll accept those for
# the sake of simplicity.
# Examples:
# - `[facing=up, powered=true]` (blockstate)
# - `[name="hello world", nbt={key: 1b}]` (selector + nbt)
# - `[{"text": "value"}, "literal"]` (json)
# This state gets included in root and also several substates
# We do this to shortcut the starting of new properties
# lists can have sequences of items
# resource names (for advancements)
# must check if there is a future equals sign if `:` is in the name
# unquoted NBT key
# quoted JSON or NBT key
# index for a list
# unquoted resource names are valid literals here
# keywords for optional word and field types
# possible punctuations
# title line
# title line with a version code, formatted
# `major.minor.patch-prerelease+buildmeta`
# TODO: give this to a perl guy who knows how to parse perl...
# common delimiters
# balanced delimiters
# yes, there's no shortage
# of punctuation in Perl!
# hash syntax?
# argument specifier
# argument declaration
# := is not valid Perl, but it appears in unicon, so we should
# become less confident if we think we found Perl with :=
#Phasers
#Keywords
#Traits
#Booleans
#Classes
# Perl 6 has a *lot* of possible bracketing characters
# this list was lifted from STD.pm6 (https://github.com/perl6/std)
# it's not a mirrored character, which means we
# just need to look for the next occurrence
# we need to look for the corresponding closing character,
# keep nesting in mind
# next_close_pos < next_open_pos
# if we didn't find a closer, just highlight the
# rest of the text in this class
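The nesting-aware scan for a matching closer can be sketched like this; the function name and signature are illustrative, and the real lexer works over regex match positions rather than a character walk.

```python
def find_closer(text, start, opener, closer):
    # Scan from `start`, tracking nesting depth; return the index just
    # past the matching closer, or -1 if the text is unbalanced (in
    # which case the caller highlights the rest of the text as-is).
    depth = 1
    i = start
    while i < len(text):
        if text.startswith(opener, i):
            depth += 1
            i += len(opener)
        elif text.startswith(closer, i):
            depth -= 1
            i += len(closer)
            if depth == 0:
                return i
        else:
            i += 1
    return -1
```

Mirrored delimiters (like `{`/`}`) need this depth tracking; non-mirrored ones reduce to finding the next occurrence, as noted above.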
# if we encounter an opening brace and we're one level
# below a token state, it means we need to increment
# the nesting level for braces so we know later when
# we should return to the token rules.
# if we encounter a free closing brace and we're one level
# below a token state, it means we need to check the nesting
# level to see if we need to return to the token state.
# If you're modifying these rules, be careful if you need to process '{' or '}'
# characters. We have special logic for processing these characters (due to the fact
# that you can nest Perl 6 code in regex blocks), so if you need to process one of
# them, make sure you also process the corresponding one!
# deal with a special case in the Perl 6 grammar (role q { ... })
# copied from PerlLexer
# make sure that quotes in character classes aren't treated as strings
# make sure that '#' characters in quotes aren't treated as comments
# XXX handle block comments
# check for my/our/has declarations
# match v6; use v6; use v6.0; use v6.0.0;
# match class, module, role, enum, grammar declarations
# Same logic as above for PerlLexer
# Skip white spaces
# There is one operator
# verbatim strings
# TODO: "correctly" parse complex code attributes
# void is an actual keyword, others are in glib-2.0.vapi
# Lower than C/C++ and Objective C/C++
# Match it here so it won't be matched as a function in the rest of root
# SWIG directives
# Special variables
# Stringification / additional preprocessor directives
# This is a far from complete set of SWIG directives
# Most common directives
# Less common directives
# Search for SWIG directives, which are conventionally at the beginning of
# a line. The probability of them being within a line is low, so let another
# lexer win in this case.
# Fraction higher than MatlabLexer
# Language sketch main structure functions
# Language 'variables'
# Promela's language reference:
# https://spinroot.com/spin/Man/promela.html
# Promela's grammar definition:
# https://spinroot.com/spin/Man/grammar.html
# LTL Operators
#remoterefs
# Predefined (data types)
# ControlFlow
# BasicStatements
# Embedded C Code
# Predefined (local/global variables)
# Predefined (functions)
# Predefined (operators)
# Declarators
# Declarators (suffixes)
# MetaTerms (declarators)
# MetaTerms (keywords)
# Instruction keywords
# State Spaces and Suffixes
# PTX Directives
# Fundamental Types
# default with no args
# -v is assumed, but for explicitness it is passed
# TODO Consider removing, reraise does not seem to be called anywhere
### Legacy support for LooseVersion / LegacyVersion, e.g. 2.4-openbsd
### https://github.com/pypa/packaging/blob/21.3/packaging/version.py#L106-L115
### License: BSD, Accessed: Jan 14th, 2022
# pad for numeric comparison
# ensure that alpha/beta/candidate are before final
# We hardcode an epoch of -1 here. A PEP 440 version can only have an epoch
# greater than or equal to 0. This effectively sorts the LegacyVersion,
# which uses the de facto standard originally implemented by setuptools,
# before all PEP 440 versions.
# This scheme is taken from pkg_resources.parse_version setuptools prior to
# it's adoption of the packaging library.
# remove "-" before a prerelease tag
# remove trailing zeros from each series of numeric parts
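The ordering rules sketched in the comments above (numeric padding, pre-release tags before final, a hardcoded -1 epoch) can be illustrated with a comparison key. This is a simplified sketch, not packaging's or setuptools' actual code; the helper name `legacy_key` and the padding width are made up:

```python
import re

def legacy_key(version: str) -> tuple:
    """Hypothetical sketch of a legacy (pre-PEP 440) ordering key."""
    parts = []
    # Splitting on "-" as well as "." effectively removes the "-"
    # before a prerelease tag.
    for part in re.split(r"[.\-]", version.lower()):
        if part.isdigit():
            # pad for numeric comparison
            parts.append(part.zfill(8))
        else:
            # "*" sorts before any digit, so tags like beta/rc land
            # before the padded numeric parts of a final release
            parts.append("*" + part)
    # sentinel ensuring alpha/beta/candidate sort before the final release
    parts.append("*final")
    # epoch -1 puts every legacy version before any PEP 440 version,
    # whose epoch is always >= 0
    return (-1, tuple(parts))
```
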
# Relations
# Command
# Note that we also provide the session ID here, since cmd()
# will not automatically add it as there is already a '-t'
# argument provided.
# Computed properties
# Clear alerts (bell, activity, or silence) in all windows
# Catch empty string and default (`None`)
# as of 2014-02-08 tmux 1.9-dev doesn't expand ~ in new-window -c.
# output
# empty string for window_index will use the first one available
# Dunder
# Aliases
# Legacy: Redundant stuff we want to remove
# Adjustments
# Manual
# Expand / Shrink
# Manual resizing
# tmux allows select-layout without args
# The shlex.split function splits the args at spaces, while also
# retaining quoted sub-strings.
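A quick illustration of that stdlib behavior (the tmux arguments here are only an example):

```python
import shlex

# shlex.split splits on whitespace but keeps quoted sub-strings intact,
# so a window name containing spaces survives as a single argument.
args = shlex.split('new-window -n "my window" -c /tmp')
# → ['new-window', '-n', 'my window', '-c', '/tmp']
```
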
# Climbers
# Deprecated in 3.1 in favor of -l
# find current sessions prefixed with tmuxp
# Zoom
# Mouse
# Optional flags
# Zoom / Unzoom
# tmux < 1.7. This is added in 1.7.
# New Pane, not self
# add the command arguments to cmd
# remove trailing newlines from stdout
# filter empty values
# openbsd has no tmux -V
# Allow latest tmux HEAD
# Raise generic option error
# as of 2014-02-08 tmux 1.9-dev doesn't expand ~ in new-session -c.
# noqa: PERF401
# "session_grouped",  Apparently unused, mistake found while adding dataclasses
# format_window()
# format_winlink()
# "pane_title",  # removed in 3.1+
# See QUIRK_TMUX_3_1_X_0001
# Not detected by script
# Filter empty values
# TODO: Add a deep Mapping matcher
# if isinstance(rhs, Mapping) and isinstance(data, Mapping):
# via https://github.com/pypa/packaging/blob/22.0/packaging/_structures.py
# via https://github.com/pypa/packaging/blob/22.0/packaging/version.py
#: Prefix used for test session names to identify and cleanup test sessions
#: Number of seconds to wait before timing out when retrying operations
#: Can be configured via :envvar:`RETRY_TIMEOUT_SECONDS` environment variable
#: Defaults to 8 seconds
#: Interval in seconds between retry attempts
#: Can be configured via :envvar:`RETRY_INTERVAL_SECONDS` environment variable
#: Defaults to 0.05 seconds (50ms)
# Get ``window_id`` before returning it, it may be killed within context.
# If we previously set this variable in this context, remove it from _unset
# If we haven't saved the original value yet, save it
# Don't delete variables that were reset
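The save/restore bookkeeping described above can be sketched as a context manager. `patched_env` is a hypothetical helper written for illustration, not the library's actual code:

```python
import os
from contextlib import contextmanager

@contextmanager
def patched_env(**overrides):
    """Sketch: set environment variables, restoring originals on exit."""
    saved = {}     # original values of variables we overrode
    unset = set()  # variables that did not exist before
    try:
        for name, value in overrides.items():
            if name in os.environ:
                # If we haven't saved the original value yet, save it
                saved.setdefault(name, os.environ[name])
            else:
                unset.add(name)
            os.environ[name] = value
        yield
    finally:
        for name in overrides:
            if name in saved:
                # Don't delete variables that had a prior value; reset them
                os.environ[name] = saved[name]
            elif name in unset:
                os.environ.pop(name, None)
```
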
# unfortunately, the type of HashTrieMap is generic, and if used as an attrs
# converter, the generic type is presented to mypy, which then fails to match
# the concrete type of a type checker mapping
# this "do nothing" wrapper presents the correct information to mypy
# bool inherits from int, so ensure bools aren't reported as ints
# for reference material on Protocols, see
# in order for Sphinx to resolve references accurately from type annotations,
# it needs to see names like `jsonschema.TypeChecker`
# therefore, only import at type-checking time (to avoid circular references),
# but use `jsonschema` for any types which will otherwise not be resolvable
# For code authors working on the validator protocol, these are the three
# use-cases which should be kept in mind:
# 1. As a protocol class, it can be used in type annotations to describe the
# 2. It is the source of autodoc for the validator documentation
# 3. It is runtime_checkable, meaning that it can be used in isinstance()
# Since protocols are not base classes, isinstance() checking is limited in
# its capabilities. See docs on runtime_checkable for detail
#: An object representing the validator's meta schema (the schema that
#: describes valid schemas in the given version).
#: A mapping of validation keywords (`str`\s) to functions that
#: validate the keyword with that name. For more information see
#: `creating-validators`.
#: A `jsonschema.TypeChecker` that will be used when validating
#: :kw:`type` keywords in JSON schemas.
#: A `jsonschema.FormatChecker` that will be used when validating
#: :kw:`format` keywords in JSON schemas.
#: A function which given a schema returns its ID.
#: The schema that will be used to validate instances
# FIXME: Include context for each unevaluated property
# noqa: S310
# Ha ha ha ha magic numbers :/
# preemptively don't shadow the `Validator.format_checker` local
# TODO: include new meta-schemas added at runtime
# noqa: F402
# REMOVEME: Legacy ref resolution state management.
# set details if not already set by the called fn
# noqa: B026, E501
# noqa: B019
# Resolve via path
# noqa: SIM105
# Requests has support for detecting the correct encoding of
# json over http
# Otherwise, pass off to urllib and assume utf-8
# noqa: SIM115, PTH123
# noqa: D103
#: A format checker callable.
# Oy. This is bad global state, but relied upon for now, until
# deprecation. See #519 and test_format_checkers_come_with_defaults
# fqdn.FQDN("") raises a ValueError due to a bug
# however, it's not clear when or if that will be fixed, so catch it
# here for now
# The built-in `idna` codec only implements RFC 3890, so we go elsewhere.
# noqa: DTZ007
# TODO: I don't want to maintain this, so it
# Definition taken from:
# https://tools.ietf.org/html/draft-handrews-relative-json-pointer-01#section-3
# digits with a leading "0" are not allowed
# FIXME: See bolsote/isoduration#25 and bolsote/isoduration#21
# We ignore this as we want to simply crash if this happens
# pragma: no cover -- uncovered but deprecated  # noqa: E501
# pragma: no cover -- partially uncovered but to be removed  # noqa: E501
# prefer errors which are ...
# 'deeper' and thereby more specific
# earlier (for sibling errors)
# for a non-low-priority keyword
# for a high priority keyword
# at least match the instance's type
# otherwise we'll treat them the same
# Calculate the minimum via nsmallest, because we don't recurse if
# all nested errors have the same relevance (i.e. if min == max == all)
# When `instance` is large and `dB` is less than one,
# quotient can overflow to infinity; and then casting to int
# raises an error.
# In this case we fall back to Fraction logic, which is
# exact and cannot overflow.  The performance is also
# acceptable: we try the fast all-float option first, and
# we know that fraction(dB) can have at most a few hundred
# digits in each part.  The worst-case slowdown is therefore
# for already-slow enormous integers or Decimals.
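A sketch of that fallback, with `step` standing in for the divisor the comments call `dB` (an assumed simplification of the real multiple-of check):

```python
from fractions import Fraction

def is_multiple_of(instance: float, step: float) -> bool:
    """Sketch: try the fast all-float path first, fall back to exact
    Fraction arithmetic when the quotient overflows to infinity."""
    quotient = instance / step
    try:
        return int(quotient) == quotient
    except OverflowError:
        # int(inf) raises; Fraction math is exact and cannot overflow
        return (Fraction(instance) / Fraction(step)).denominator == 1
```
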
# pragma: no cover -- untested, but to be removed
# These should be indistinguishable from just `subschema`
# Unregistered errors should not be caught
# This is bad :/ but relied upon.
# The docs for quite a while recommended people do things like
# validate(..., format_checker=FormatChecker())
# We should change that, but we can't without deprecation...
# We're doing crazy things, so if they go wrong, like a function
# behaving differently on some other interpreter, just make them
# not happen.
# This messy logic is because the test suite is terrible at indicating
# what remotes are needed for what drafts, and mixes in schemas which
# have no $schema and which are invalid under earlier versions, in with
# other schemas which are needed for tests.
# invalid boolean schema
# draft<NotThisDialect>/*.json
# noqa: FLY002
# pragma: no cover  # noqa: E501
# noqa: T100
# noqa: SIM117
# Ha ha urllib.request.Request "normalizes" header names and
# Request.get_header does not also normalize them...
# There isn't a better way now I can think of to ensure that the
# latest version was used, given that the call to validator_for
# is hidden inside the CLI, so guard that that's the case, and
# this test will have to be updated when versions change until
# we can think of a better way to ensure this behavior.
# Cannot handle required
# Can now process required and properties
# As well as still handle objects.
# FIXME: #442
# Just the message should show if any of the attributes are unset
# as $ref ignores siblings
# TODO: These really need unit tests for each individual keyword, rather
# missing "children"
# TODO: These all belong upstream
# needs to be an integer
# Make sure original cause is attached
# Make sure it works without the empty fragment
# pragma: no cover  # noqa: E722
# just verify it succeeds
# in order to confirm that none of the above were incorrectly typed as 'Any'
# ensure that each of these assignments to a non-validator variable requires an
# DBus names and paths
# Copyright (C) 2019  Red Hat, Inc.  All rights reserved.
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301
# USA
# DBus constants
# Status codes.
# No flags are set.
# System environment variable holding the DBus session address.
# Return values of org.freedesktop.DBus.RequestName.
# Flags of org.freedesktop.DBus.RequestName.
# Support for DBus types
# For more info about DBus type system see:
# https://dbus.freedesktop.org/doc/dbus-specification.html#type-system.
# Basic types.
# Default integer type: int will be treated as Int32.
# All integer types.
# Type of an object path.
# Container types.
# Use Tuple, Dict and List from typing.
# Use Variant from GLib and get_variant.
# Use Structure instead of Dict[Str, Variant].
# DBus representation of basic types.
# Default integer.
# Integer types.
# Other basic types.
# DBus representation of container types.
# pylint: disable=unhashable-member
# pylint: enable=unhashable-member
# Try base types.
# Try container types.
# Or raise an error.
# Return the container base type of the "origin" or None.
# See: https://bugzilla.redhat.com/show_bug.cgi?id=1598574
# Get the arguments of the container.
# Check the typing.
# Generate string.
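The origin/arguments introspection can be sketched with `typing.get_origin` and `typing.get_args`; the basic-type table below is illustrative, not dasbus's real mapping:

```python
from typing import Dict, List, Tuple, get_args, get_origin

# Default integer type: int is treated as Int32 ("i").
BASIC = {int: "i", bool: "b", float: "d", str: "s"}

def signature(type_hint) -> str:
    """Sketch: generate a DBus signature string for a typing hint."""
    # Try base types.
    if type_hint in BASIC:
        return BASIC[type_hint]
    # Container base type of the "origin", and its arguments.
    origin = get_origin(type_hint)
    args = get_args(type_hint)
    if origin is list:
        return "a" + signature(args[0])
    if origin is dict:
        return "a{" + signature(args[0]) + signature(args[1]) + "}"
    if origin is tuple:
        return "(" + "".join(signature(a) for a in args) + ")"
    # Or raise an error.
    raise TypeError(f"Unsupported type: {type_hint}")
```
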
# Identification of DBus objects, interfaces and services
# Support for DBus errors
# Clear the list.
# Add the default rules.
# Support for XML representation
# For more info about DBus specification see:
# https://dbus.freedesktop.org/doc/dbus-specification.html#introspection-format
# Remove newlines and extra whitespaces,
# Generate pretty xml.
# Normalize attributes.
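One plausible shape for that normalization, assuming `xml.dom.minidom` (the actual implementation may differ):

```python
import re
from xml.dom.minidom import parseString

def pretty_xml(xml: str) -> str:
    """Sketch: collapse inter-tag whitespace, then re-indent.
    Parsing also normalizes attribute quoting as a side effect."""
    # Remove newlines and extra whitespace between tags.
    collapsed = re.sub(r">\s+<", "><", xml.strip())
    # Generate pretty xml.
    return parseString(collapsed).toprettyxml(indent="  ")
```
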
# Representation of an event loop
# Copyright (C) 2020  Red Hat, Inc.  All rights reserved.
# Support for DBus structures
# Class attribute for DBus fields.
# Generate the DBus fields from the members of the class cls.
# Skip private members.
# Skip all but properties.
# Support for Unix file descriptors.
# Copyright (C) 2022  Red Hat, Inc.  All rights reserved.
# Handle the unix file descriptor.
# Get the swapping function.
# Swap the values.
# Do nothing if there is no unix file descriptor to handle.
# Get a new value of the variant.
# Create a new variant.
# Process Unix file descriptors in parameters.
# Call the DBus method.
# Restore Unix file descriptors in the result.
# Prepare the user's callback.
# Retrieve the result of the call.
# Call user's callback.
# Emit the signal without Unix file descriptors.
# Process Unix file descriptors in the reply.
# Send the reply.
# Restore Unix file descriptors in parameters.
# Representation of a signal
# The list of callbacks can be changed, so
# use a copy of the list for the iteration.
# Support for DBus XML specifications
# Representation of specification members.
# Specification data holders.
# The XML parser.
# Iterate over interfaces.
# Parse the interface.
# Iterate over members.
# Add the member specification to the mapping.
# Representation of DBus connections
# pylint: disable=arguments-differ
# We don't provide this service.
# We don't try to access this service from the main thread.
# Templates for DBus interfaces
# Server support for DBus containers
# Server support for DBus properties
# Find the property specification.
# Get the property value.
# Create a request.
# Server support for publishable Python objects
# Server support for DBus objects
# Drop the extra args if the handler doesn't support them.
# Server support for DBus interfaces
# Class attribute for the XML specification.
# Method attribute for the @returns_multiple_arguments decorator.
# Method attribute for the @accepts_additional_arguments decorator.
# The XML generator.
# The pattern of a DBus member name.
# Collect all interfaces that class inherits.
# Generate a new interface.
# Generate XML specification for the given class.
# Visit interface_cls and base classes in reversed order.
# Skip classes with no specification.
# Update found interfaces.
# Search class members.
# Check if the name is exportable.
# Skip names already defined in implemented interfaces.
# Generate XML element for exportable member.
# Add generated element to the interface.
# Is it a signal, a property or a method?
# Does it have the same name?
# The member is already defined.
# Only input parameters can be defined.
# All parameters are exported as output parameters
# (see specification).
# Get type hints for parameters.
# Iterate over method parameters, skip cls.
# Check the kind of the parameter
# Ignore **kwargs and all arguments after * and *args
# if the method supports additional arguments.
# Check if the type is defined.
# Is the return type defined?
# Is the return type other than None?
# Generate multiple output arguments if requested.
# The return type has to be a tuple.
# The return type has to contain multiple arguments.
# Iterate over types in the tuple
# Otherwise, return only one output argument.
# Process the setter.
# Process the getter.
# Property has both.
# Process the parameters.
# Create the parameter element.
# Add the element to the method element.
# Add comment about specified class.
# Add interfaces sorted by their names.
# Client support for DBus proxies
# Set of local instance attributes.
# Set of instance attributes.
# Client support for DBus properties
# Client support for DBus observers
# Client support for DBus objects
# Infinite timeout of a DBus call
# Create a signal.
# Subscribe to a DBus signal.
# Keep the subscriptions.
# Create variants.
# Create variant types.
# Collect arguments.
# Get the callback.
# Choose the type of invocation.
# Handle a remote DBus error.
# Create a new exception.
# Raise a new instance of the exception class.
# Handle a timeout error.
# Or re-raise the original error.
# Unwrap a variant tuple.
# Return None if there are no values.
# Return one value.
# Return multiple values.
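As a minimal sketch of that unwrapping convention:

```python
def unwrap(values: tuple):
    """Sketch: unwrap a variant tuple per the convention above."""
    if not values:
        return None       # Return None if there are no values.
    if len(values) == 1:
        return values[0]  # Return one value.
    return values         # Return multiple values.
```
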
# workaround for pytest-dev/pluggy#358
# save the limit for the chainer to honor
# all keyrings passing the limit filter
# type: ignore[arg-type] #659
# invoke the priority to ensure it is viable, or raise a RuntimeError
# noqa: B018
# load the keyring class name, and then load this keyring
# Attributes set dynamically by the ArgumentParser
# Tons of things can go wrong here:
# So, we play on the safe side, and catch everything.
# mypy doesn't see `cls` is `type[Self]`, might be fixable in jaraco.classes
# raise ValueError("Username cannot be empty")
# for backward-compatibility, don't require a backend to implement
# @abc.abstractmethod
# The default implementation requires a username here.
# from jaraco.classes 3.2.2
# by default, use Unix convention
# unicode only characters
# Sourced from The Quick Brown Fox... Pangrams
# http://www.columbia.edu/~fdc/utf8/
# ensure no-ascii chars slip by - watch your editor!
# set the password and save the result so the test runner can clean
# for the non-existent password
# common usage
# for the empty password
# Missing/wrong username should not return a cred
# override viability as 'priority' cannot be determined
# until other backends have been constructed
# prefer pywin32-ctypes
# force demand import to raise ImportError
# fallback to pywin32
# first attempt to get the password under the service name
# It wasn't found so attempt to get it with the compound name
# not found
# resave the existing password using a compound target
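The lookup order can be sketched against a hypothetical `read` callable; the real backend talks to the Windows credential API, which is elided here:

```python
def get_password(read, service: str, username: str):
    """Sketch: `read(target)` is a hypothetical credential lookup
    returning None when nothing is stored under that target."""
    # first attempt to get the password under the service name
    password = read(service)
    if password is None:
        # It wasn't found so attempt to get it with the compound name
        password = read(f"{username}@{service}")
    return password
```
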
# Make sure there is actually a secret service running
# See https://github.com/jaraco/keyring/issues/296
# the user pressed "cancel" when prompted to unlock their keyring.
# User dismissed the prompt
# explicit bool and int required for Python 3.10 compatibility
# Copyright 2015 Dustin Spicuzza <dustin@virtualroadside.com>
# To make this file work with older PyGObject we expose our init code
# as init_template() but make it a noop when we call it ourselves first
# resources_get_info() doesn't handle overlays but we keep using it
# as a fast path.
# https://gitlab.gnome.org/GNOME/pygobject/issues/230
# Copyright (C) 2005-2009 Johan Dahlin <johan@gnome.org>
# support overrides in different directories than our gi module
# we can't have pygobject 2 loaded at the same time we load the internal _gobject
# Needed for compatibility with "pygobject.h"/pygobject_init()
# Options which can be enabled or disabled after importing 'gi'. This may affect
# repository imports or binding machinery in a backwards incompatible way.
# When True, importing Gtk or Gdk will call Gtk.init() or Gdk.init() respectively.
# only for backwards compatibility
# it was loaded before by another import which depended on this
# namespace or by C code like libpeas
# part of glib (we have bigger problems if versions change there)
# the version was forced using require_version()
# 2.7 included
# fixed again in 3.5+, see https://bugs.python.org/issue24305
# Note: see PEP302 for the Importer Protocol implemented below.
# is_registered() is faster than enumerate_versions() and
# in the common case of a namespace getting loaded before its
# dependencies, is_registered() returns True for all dependencies.
# Import all dependencies first so their init functions
# (gdk_init, ..) in overrides get called.
# https://bugzilla.gnome.org/show_bug.cgi?id=656314
# “exec” the module and consequently populate the module's namespace
# Copyright (C) 2007-2009 Johan Dahlin <johan@gnome.org>
# Cache of IntrospectionModules that have been loaded.
# If we reach the end of the introspection info class hierarchy, look
# for an existing wrapper on the GType and use it as a base for the
# new introspection wrapper. This allows static C wrappers already
# registered with the GType to be used as the introspection base
# (_gi.GObject for example)
# Otherwise use builtins.object as the base
# Create a wrapper.
# Check if there is already a Python wrapper that is not a parent class
# of the wrapper being created. If it is a parent, it is ok to clobber
# g_type.pytype with a new child class wrapper of the existing parent.
# Note that the return here never occurs under normal circumstances due
# to caching on the __dict__ itself.
# Register the new Python wrapper.
# Cache the newly created wrapper which will then be
# available directly on this introspection module instead of being
# lazily constructed through the __getattr__ we are currently in.
# update *set* because some repository attributes have already been
# wrapped by __getattr__() and included in self.__dict__; but skip
# Callback types, as these are not real objects which we can actually
# Copyright (C) 2024 James Henstridge <james@jamesh.id.au>
# Build lists of indices prior to adding the docs because it is possible
# the index retrieved comes before input arguments being used.
# skip exclusively output args
# allow-none or user_data from a closure
# TODO: Can we retrieve the default value?
# Remove defaults from params after the last required parameter.
# skip exclusively input args
# pygobject - Python bindings for the GObject library
# Copyright (C) 2006-2007 Johan Dahlin
# License along with this library; if not, see <http://www.gnu.org/licenses/>.
# Copyright (C) 2013 Simon Feltman <sfeltman@gnome.org>
#: Module storage for currently registered doc string generator function.
# Build input argument strings
# Build return + output argument strings
# start with \n to avoid auto indent of other lines
# Don't show default constructor for disguised (0 length) structs
# Copyright (C) 2007 Johan Dahlin
# Remember that G_MINFLOAT and G_MINDOUBLE are something different.
# Always clobber __doc__ with blurb even if blurb is empty because
# we don't want the lengthy Property class documentation showing up
# on instances.
# Call after setting blurb for potential __doc__ usage.
# do not call self.setter() here, as this defines the property name
# already
# Always clobber docstring and blurb with the getter docstring.
# with a setter decorator, we must ignore the name of the method in
# install_properties, as this does not need to be a valid property name
# and does not define the property name. So set the name here.
# Getter and Setter
# not same as the built-in
# if a property was defined with a decorator, it may already have
# a name; if it was defined with an assignment (prop = Property(...))
# we set the property's name to the member name
# we will encounter the same property multiple times in case of
# custom setter methods
# Copyright (C) 2014 Simon Feltman <sfeltman@gnome.org>
# NOTE: This file should not have any dependencies on introspection libs
# like gi.repository.GLib because it would cause a circular dependency.
# Developers wanting to use the GError class in their applications should
# use gi.repository.GLib.GError
# Copyright 2017 Christoph Reiter
# Raised in case this is not the main thread -> give up.
# Someone has called set_wakeup_fd while func() was active,
# so let's re-revert again.
# We save the signal pointer so we can detect if glib has changed the
# signal handler behind Python's back (GLib.unix_signal_add)
# Something has set the handler before import, we can't get a ptr
# for the default handler so make sure the pointer will never match.
# To handle multiple levels of event loops we need to call the last
# callback first, wait until the inner most event loop returns control
# and only then call the next callback, and so on... until we
# reach the outer most which manages the signal handler and raises
# in the end
# This is an inner event loop, append our callback
# to the stack so the parent context can call it.
# There is a signal handler set by the user, just do nothing
# Try to use the running loop. If there is none, get the policy and
# try getting one in the hope that this will give us an event loop for the
# correct context.
# -*- Mode: Python; py-indent-offset: 4 -*-
# vim: tabstop=4 shiftwidth=4 expandtab
# Copyright (C) 2025 James Henstridge <james@jamesh.id.au>
# If __gtype__ is not set, this is a new enum or flags defined
# from Python. Register a new GType for it.
# Copyright (C) 2012 Simon Feltman
# If obj is a GObject, then we call this signal as a closure otherwise
# it is used as a re-application of a decorator.
# If self is already an allocated name, use it; otherwise create a new named
# signal using the closure name as the name.
# Return a new value of this type since it is based on an immutable string.
# import inspect only when needed because it takes ~10 msec to load
# Fixup a signal which is unnamed by using the class variable name.
# Since Signal is based on string, which is immutable,
# we must copy and replace the class variable.
# Setup signal closures by adding the specially named
# method to the class in the form of "do_<signal_name>".
# Copyright (C) 2006  Johannes Hoelzl
# _process_args() returns the remaining parameters in rargs.
# The prepended program name is passed to g_set_prgname()
# The program name is cut away so it doesn't appear in the result.
# Copyright (C) 2021 Benjamin Berg <bberg@redhat.com>
# Copyright (C) 2019 James Henstridge <james@jamesh.id.au>
# _may_iterate will be False anyway, but might as well set it
# A mainloop in case we want to run our context
# Nothing to do if we are not running or dispatched by ourselves
# Nested main context iteration (by using glib API)
# Stop recursively
# Outermost nesting
# The time is floor'ed here.
# Python dispatches everything ready within the next _clock_resolution.
# Try to access the corresponding Task (or whatever) through the
# self parameter of the bound method.
# If _glib_idle_priority does not exist or it is not a bound method
# then we'll just catch the AttributeError exception.
# Just use the underlying python dispatch.
# Update priority
# The idle source disables itself and we are in the other which will not recurse
# Pause so that the main Source is not going to dispatch
# Note that this is pretty expensive, we could optimize it by detecting
# it when it happens and only doing the detach/attach dance if needed.
# There are (new) tasks available to run, ensure the priority is correct
# Simply quit the mainloop
# This class exists so we don't need to copy the ProactorEventLoop.run_forever,
# instead, we change the MRO using a metaclass, so that super() sees this class
# when called in ProactorEventLoop.run_forever.
# NOTE: self._check_running was only added in 3.8 (with a typo in 3.7)
# It is *not* safe to run the *python* part of the mainloop recursively.
# This error must be caught further up in the chain, otherwise the
# mainloop will be blocking without an obvious reason.
# WARNING: We must not under *any* circumstance have a reference back
# to the event loop, creating a reference loop. The GLib.Source.__del__ handler sets the pointer
# to NULL and the BaseEventLoop.__del__ tries to close the loop causing
# FDs to be unregistered.
# By making sure there are no references back we (hopefully) force the
# GC to be well behaved and first clean up the eventloop and selector
# before destroying the source.
# Now, wag the dog by its tail
# This is based on the selector event loop, but never actually runs select()
# in the strict sense.
# We use the selector to register all FDs with the main context using our
# own GSource. For python timeouts/idle equivalent, we directly query them
# from the context by providing the _get_timeout_ms function that the
# GSource uses. This in turn accesses _ready and _scheduled to calculate
# the timeout and whether python can dispatch anything non-FD based yet.
# The Selector select() method simply returns the information we already
# collected.
# The rest is done by the mixin which overrides run_forever to simply
# iterate the main context.
# _UnixSelectorEventLoop uses _signal_handlers, we could do the same,
# with the difference that close() would clean up the handlers for us.
# Used by run_once to not busy loop if the timeout is floor'ed to zero
# Use our custom Task subclass
# Can be useful while testing failures
# assert sig != signal.SIGALRM
# Pure python demands that there is only one signal handler
# Set up a new source with a higher priority than our main one
# Really unref the underlying GSource so that GLib resets the signal handler
# GLib does not restore the original signal handler.
# Try to restore the python handler for SIGINT, this makes
# Ctrl+C work after the mainloop has quit.
# Pass over to python mainloop
# Note: SelectorEventloop should only be passing FDs
# NOTE: Always return False, FDs are queried in check and the timeout
# ERR/HUP/NVAL trigger both read/write (PRI cannot happen)
# Subclass to attach _tag
# re-register the keys with the new source
# NOTE: may be called after __del__ has been called.
# We could override modify, but it would only help slightly when the "events" change.
# This metaclass changes the MRO so that when run_forever is called, it
# first calls asyncio.ProactorEventLoop and then chains into
# _GLibEventLoopRunMixin.run_forever using super().
# The alternative would be to copy asyncio.ProactorEventLoop.run_forever
# This is based on the Windows ProactorEventLoop
# Sets both self._proactor and self._selector to the proactor
# None denotes it is disabled (and will also not handle timeouts)
# Disabled, do not handle timeouts either
# We always use the same Source on windows, it disables itself
# Get the thread default main context
# If there is none, and we are on the main thread, then use the default context
# We do not create a main context implicitly;
# we create a mainloop for an existing context though
# Note: We cannot attach it to ctx, as getting the default will always
# Only accept glib event loops, otherwise things will just mess up
# We do permit unsetting the current loop/context
# Only allow attaching if the thread has no main context yet
# We shouldn't have any running loops at this point, and the ones that
# got created should be closed eventually.
# Explicitly close all loops here, it is not reasonable for them to be
# used after we unregister the EventLoopPolicy below.
# Do not suppress any exceptions
# NOTE: We do *not* provide a GLib based ChildWatcher implementation!
# This is *intentional* and *required*. The issue is that python provides
# API which uses wait4() internally. GLib at the same time uses a thread to
# handle SIGCHLD signals, which causes a race condition resulting in a
# critical warning.
# We just provide a reasonably sane child watcher and disallow the user
# from choosing one as e.g. MultiLoopChildWatcher is problematic.
# TODO: Use PidfdChildWatcher when available
# Don't mask regular methods or base class methods with TypeClass methods.
# If a method name starts with "do_" assume it is a vfunc, and search
# in the base classes for a method with the same name to override.
# Recursion is necessary as overridden methods in most immediate parent
# classes may shadow vfuncs from classes higher in the hierarchy.
# If we did not find a matching method name in the bases, we might
# be overriding an interface virtual method. Since interfaces do not
# provide implementations, there will be no method attribute installed
# on the object. Instead we have to search through
# InterfaceInfo.get_vfuncs(). Note that the infos returned by
# get_vfuncs() use the C vfunc name (ie. there is no "do_" prefix).
# Check to see if there are vfuncs with the same name in the bases.
# We have no way of specifying which one we are supposed to override.
# Only InterfaceInfo and ObjectInfo have the get_vfuncs() method.
# We skip InterfaceInfo because interfaces have no implementations for vfuncs.
# Also check if __info__ in __dict__, not hasattr('__info__', ...)
# because we do not want to accidentally retrieve __info__ from a base class.
# Special case skipping of vfuncs for GObject.Object because they will break
# the static bindings which will try to use them.
# All wrapped interfaces inherit from GInterface.
# This can be seen in IntrospectionModule.__getattr__() in module.py.
# We do not need to search regular classes here, only wrapped interfaces.
# We also skip GInterface, because it is not wrapped and has no __info__ attr.
# Skip bases without __info__ (static _gi.GObject)
# Only look at this class's vfuncs if it is an interface.
# Recurse into the parent classes
# don't register the class if already registered
# Do not register a new GType for the overrides, as this would sort of
# defeat the purpose of overrides...
# For repository classes, dynamically generate a doc string if it wasn't overridden.
# TODO: If this turns out being too slow, consider using generators
# Python causes MRO's to be calculated starting with the lowest
# base class and working towards the descendant, storing the result
# in __mro__ at each point. Therefore at this point we know that
# we already have our base class MRO's available to us, there is
# no need for us to (re)calculate them.
# conflict, reject candidate
# remove candidate
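The "conflict, reject candidate" / "remove candidate" steps above are the C3 merge used for MRO linearization; a self-contained sketch of the algorithm:

```python
def c3_merge(sequences):
    """Sketch of the C3 merge: repeatedly take the first head that does
    not appear in the tail of any sequence, then remove it everywhere."""
    result = []
    sequences = [list(s) for s in sequences if s]
    while sequences:
        for seq in sequences:
            head = seq[0]
            # conflict: reject candidate if it sits in another tail
            if any(head in other[1:] for other in sequences):
                continue
            break
        else:
            raise TypeError("inconsistent hierarchy")
        result.append(head)
        # remove candidate from every sequence
        sequences = [[x for x in s if x != head] for s in sequences]
        sequences = [s for s in sequences if s]
    return result
```

For the classic diamond (C inheriting from A and B, both inheriting from O), merging the base linearizations yields A, B, O.
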
# Avoid touching anything else than the base class.
# Boxed will raise an exception
# if arguments are given to __init__
# Copyright (C) 2009 Johan Dahlin <johan@gnome.org>
# SPDX-FileCopyrightText: (C) 2012 Robert Park <r@robru.ca>
# Copyright (C) 2012 Thibault Saunier <thibault.saunier@collabora.com>
# License along with this program; if not, write to the
# Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
# Boston, MA 02110-1301, USA.
# SPDX-License-Identifier: LGPL-2.0-or-later
# Ensuring that PyGObject loads the URIHandler interface
# so we can force our own implementation soon enough (in gstmodule.c)
# ElementFactory
# Compute greatest common divisor
# From https://docs.python.org/3/library/itertools.html
# Make it behave like a tuple similar to the PyGObject generated API for
# the `Gst.Buffer.map()` and friends.
# maybe more python and less C some day if core turns a bit more introspection
# and binding friendly in the debug area
# Make sure PyGst is not usable if GStreamer has not been initialized
# Copyright (C) 2024 Daniel Morin <daniel.morin@dmohub.org>
# Exposed for unit-testing.
# using inner function above since entries can leave out optional arguments
# NOTE: This must come before any other Window/Dialog subclassing, to ensure
# that we have a correct inheritance hierarchy.
# TODO: and not GTK5
# Increment the warning stacklevel for sub-classes which implement their own __init__.
# buttons was overloaded by PyGtk but is needed for Gtk.MessageDialog
# as a pass through, so type check the argument and give a deprecation
# when it is not of type Gtk.ButtonsType
# Note, the "manager" keyword must work across the entire 3.x series because
# "recent_manager" is not backwards compatible with PyGObject versions prior to 3.10.
# TODO: Accept a dictionary for row
# model.append(None,{COLUMN_ICON: icon, COLUMN_NAME: name})
# do not try to set None values, they are causing warnings
# Signals supporting python iterables as tree paths
# insert_with_valuesv got renamed to insert_with_values with 4.1.0
# https://gitlab.gnome.org/GNOME/gtk/-/commit/a1216599ff6b39bca3e9
# gtk_list_store_insert() does not know about the "position == -1"
# case, so use append() here
# for compatibility with PyGtk
# Doubly deprecated initializer, the stock keyword is non-standard.
# Simply give a warning that stock items are deprecated even though
# we want to deprecate the non-standard keyword as well here from
# the overrides.
# Gtk.Widget.set_focus_on_click should be used instead but it's
# not obvious how because of the shadowed method, so override here
# Gtk.Widget.get_focus_on_click should be used instead but it's
# not obvious how because of the shadowed method, so override here
# The value property is set between lower and (upper - page_size).
# Just in case lower, upper or page_size was still 0 when value
# was set, we set it again here.
# Delegate to child model
# TODO: or GTK5
# namespace -> (attr, replacement)
# Silence the exception if the attribute was previously found.
# delete the descriptor, then set the instance value
# delete the descriptor
# We use sys.modules so overrides can import from gi.repository
# but restore everything at the end so this doesn't have any side effects
# backwards compat:
# gedit uses gi.importer.modules['Gedit']._introspection_module
# Avoid checking for an ImportError, an override might
# depend on a missing module thus causing an ImportError
# backwards compat: for gst-python/gstmodule.c,
# which tries to access Gst.Fraction through
# Gst._overrides_module.Fraction. We assign the proxy instead as that
# contains all overridden classes like Fraction during import anyway and
# there is no need to keep the real override module alive.
# Gedit puts a non-string in __all__, so catch TypeError here
# Replace deprecated module level attributes with a descriptor
# which emits a warning when accessed.
# We use a list of argument names to maintain order of the arguments
# being deprecated. This allows calls with positional arguments to
# continue working but with a deprecation message.
# Print warnings for calls with positional arguments.
# Print warnings for alias usage and transfer them into the new key.
# Print warnings for defaults different than what is already provided by the property
# Remove keywords that should be ignored.
# AT-SPI - Assistive Technology Service Provider Interface
# Copyright 2025 SUSE LLC.
# License along with this library; if not, write to the
# Copyright 2018 Christoph Reiter <reiter.christoph@gmail.com>
# Copyright (C) 2010 Paolo Borelli <pborelli@gnome.org>
# -*- Mode: Python -*-
# vi:si:et:sw=4:sts=4:ts=4
# Copyright (C) 2025 Netflix Inc.
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301  USA
# Copyright (C) 2012 Alessandro Decina <alessandro.d@gmail.com>
# Copyright 2025 Simon McVittie
# In older versions of GLib there was some confusion between the
# platform-specific classes in GioUnix and their equivalents in Gio,
# resulting in functions like g_desktop_app_info_get_action_name()
# being assumed to be a global function that happened to take a
# Gio.DesktopAppInfo first parameter, instead of being a method on a
# GioUnix.DesktopAppInfo instance. There are not very many classes and
# methods in GioUnix, so we can wrap them and provide the intended API.
# Copyright (C) 2010 Ignacio Casal Quinteiro <icq@gnome.org>
# stateful action
# stateless action
# First create the task (in case it raises an exception)
# Store a reference to the task so that it cannot be garbage collected
# https://bugzilla.gnome.org/show_bug.cgi?id=744690
# for "if mysettings" we don't want a dictionary-like test here, just
# if the object isn't None
# get_value() aborts the program on an unknown key
# set_value() aborts the program on an unknown key
# determine type string of this key
# v is boxed empty array, type of its elements is the allowed value type
# v is an array with the allowed values
# return exception as value
# the first positional argument is the signature, unless we are calling
# a method without arguments; then signature is implied to be '()'.
# asynchronous call
# synchronous call
# to be compatible with standard Python behaviour, unbox
# single-element tuples and return None for empty result tuples
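The unboxing convention described here can be sketched as a small helper (hypothetical name; the real logic lives inside the D-Bus proxy overrides):

```python
def unbox_dbus_result(result_tuple):
    """D-Bus methods always return a tuple of values; to match standard
    Python behaviour, unbox a one-element tuple to the bare value and
    turn an empty result tuple into None. A sketch."""
    if len(result_tuple) == 1:
        return result_tuple[0]
    if len(result_tuple) == 0:
        return None
    return result_tuple
```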
# find_with_equal_func() is not suited for language bindings,
# see: https://gitlab.gnome.org/GNOME/glib/-/issues/2447.
# Use find_with_equal_func_full() instead.
# A file isn't guaranteed to have a path associated and returning
# `None` here will result in a `TypeError` trying to subscribe to it.
# Add support for using platform-specific Gio symbols.
# Fallback if we don't have the original name.
# https://bugzilla.gnome.org/show_bug.cgi?id=673396
# Gdk.Color was deprecated since 3.14 and dropped in Gtk-4.0
# This is required (even when __eq__ is defined) in order
# for != operator to work as expected
# Introduced since Gtk-3.0
# Newer GTK/gobject-introspection (3.17.x) include GdkRectangle in the
# typelib. See https://bugzilla.gnome.org/show_bug.cgi?id=748832 and
# https://bugzilla.gnome.org/show_bug.cgi?id=748833
# https://bugzilla.gnome.org/show_bug.cgi?id=756364
# These methods used to be functions, keep aliases for backwards compat
# Gdk.Window had to be made abstract,
# this override allows using the standard constructor
# manually bind GdkEvent members to GdkEvent
# right now we can't get the type_info from the
# field info so manually list the class names
# whitelist all methods that have a success return we want to mask
# add the event methods
# use the _gsuccess_mask decorator if this method is whitelisted
# end GdkEvent overrides
# Since g_object_newv (super.__new__) does not seem valid for
# direct use with GdkCursor, we must assume usage of at least
# one of the C constructors to be valid.
# Note, we cannot override the entire class as Gdk.Atom has no gtype, so just
# hack some individual methods
# fall back to atom index
# Gom.py
# Copyright (C) 2015 Mathieu Bridon <bochecha@daitauha.fr>
# This file is free software; you can redistribute it and/or
# This file is distributed in the hope that it will be useful,
# Copyright (C) 2010 Tomeu Vizoso <tomeu.vizoso@collabora.co.uk>
# Copyright (C) 2011, 2012 Canonical Ltd.
# Types and functions still needed from static bindings
# Handle cases where self.domain was set to an integer for compatibility
# with the introspected GLib.Error.
# Monkey patch methods that rely on GLib introspection to be loaded at runtime.
# Since we discarded all leaf types, this must be a container
# object path
# Calling unref will cause gi and gi.repository.GLib to be
# imported. However, if the program is exiting, then these
# modules have likely been removed from sys.modules and will
# raise an exception. Assume that's the case for ImportError
# and ignore the exception since everything will be cleaned
# up, anyway.
# We're not using just hash(self.unpack()) because otherwise we'll have
# hash collisions between the same content in different variant types,
# which will cause a performance issue in set/dict/etc.
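The point of mixing the variant's type into the hash can be shown with a stand-in class (this is an illustration, not `GLib.Variant` itself):

```python
class FakeVariant:
    """Stand-in for GLib.Variant, sketching why the hash combines the
    type string with the content: the same unpacked value in different
    variant types must not collide."""
    def __init__(self, type_string, value):
        self.type_string = type_string
        self.value = value
    def unpack(self):
        return self.value
    def __hash__(self):
        # hash((type, content)) rather than hash(content) alone
        return hash((self.type_string, self.unpack()))
```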
# simple values
# dictionary
# variant (just unbox transparently)
# maybe
# eat the surrounding ()
# prefixes, keep collecting
# consume until corresponding )/}
# otherwise we have a simple type
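The type-string walk these comments outline (eat the surrounding parentheses, keep collecting `a`/`m` prefixes, consume to the matching `)`/`}`, otherwise take a simple type) can be sketched as follows (hypothetical helper name; no error handling for malformed signatures):

```python
def split_signature(signature):
    """Split a GVariant tuple signature like '(sia{sv})' into element
    type strings ['s', 'i', 'a{sv}']; a simplified sketch."""
    if signature.startswith('(') and signature.endswith(')'):
        signature = signature[1:-1]    # eat the surrounding ()
    result = []
    head = ''
    rest = signature
    while rest:
        c, rest = rest[0], rest[1:]
        if c in 'am':
            head += c                  # prefix: keep collecting
            continue
        if c in '({':
            # consume until the corresponding ) / }
            level, body = 1, c
            while level:
                d, rest = rest[0], rest[1:]
                body += d
                if d in '({':
                    level += 1
                elif d in ')}':
                    level -= 1
            result.append(head + body)
        else:
            result.append(head + c)    # simple type
        head = ''
    return result
```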
# Pythonic iterators
# Array, dict, tuple
# lookup_value() only works for string keys, which is certainly
# the common case; we have to do painful iteration for other
# key types
# array/tuple
# Pythonic bool operations
# unpack works recursively, hence bool also works recursively
# backwards compatible names from old static bindings
# spelling for the win
# these are not currently exported in GLib gir, presumably because they are
# platform dependent; so get them from our static bindings
# Backwards compatible constructor API
# Backwards compatible API with default value
# use our custom pygi_source_new() here as g_source_new() is not
# bindable
# We destroy and finalize the box from here, as GLib might hold
# a reference (e.g. while the source is pending), delaying the
# finalize call until a later point.
# use our custom pygi_source_set_callback() for a GSource object
# with custom functions
# otherwise, for Idle and Timeout, use the standard method
# as get/set_priority are introspected, we can't use the static
# property(get_priority, ..) here
# backwards compatible API
# The GI GLib API uses g_io_add_watch_full renamed to g_io_add_watch with
# a signature of (channel, priority, condition, func, user_data).
# Prior to PyGObject 3.8, this function was statically bound with an API closer to the
# non-full version with a signature of: (fd, condition, func, *user_data)
# We need to support this until we are okay with breaking API in a way which is
# not backwards compatible.
# This needs to take into account several historical APIs:
# - calling with an fd as first argument
# - calling with a Python file object as first argument (we keep this one as
# - calling without a priority as second argument
# shift the arguments around
# backwards compatibility: Call with priority kwarg
# backwards compatibility: Allow calling with fd
# backwards compatibility: Allow calling with Python file
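The argument shifting these comments describe can be sketched as a normalizer that accepts both calling conventions (hypothetical helper; `PRIORITY_DEFAULT = 0` is a stand-in for `GLib.PRIORITY_DEFAULT`, which is 0):

```python
PRIORITY_DEFAULT = 0  # stand-in for GLib.PRIORITY_DEFAULT

def normalize_io_add_watch(*args):
    """Shift legacy (channel, condition, func, *data) calls into the
    modern (channel, priority, condition, func, *data) shape; a
    simplified sketch of the compatibility logic described above."""
    if callable(args[2]):
        # backwards compatibility: called without a priority as second
        # argument, so the third positional is already the callback
        channel, condition, func = args[:3]
        priority, data = PRIORITY_DEFAULT, args[3:]
    else:
        channel, priority, condition, func = args[:4]
        data = args[4:]
    return (channel, priority, condition, func) + data
```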
# note, size_hint is just to maintain backwards compatible API; the
# old static binding did not actually use it
# note, size_hint is just to maintain backwards compatible API;
# the old static binding did not actually use it
# note, this appends an empty line after EOF; this is
# bug-compatible with the old static bindings
# The GI GLib API uses g_child_watch_add_full renamed to g_child_watch_add with
# a signature of (priority, pid, callback, data).
# Prior to PyGObject 3.8, this function was statically bound with an API closer to the
# non-full version with a signature of: (pid, callback, data=None, priority=GLib.PRIORITY_DEFAULT)
# we need this to be accessible for unit testing
# backwards compatible API with default argument, and ignoring bytes_read
# output argument
# obsolete constants for backwards compatibility
# Copyright (C) 2012 Canonical Ltd.
# Author: Martin Pitt <martin.pitt@ubuntu.com>
# Copyright (C) 2012-2013 Simon Feltman <sfeltman@src.gnome.org>
# Copyright (C) 2012 Bastian Winkler <buz@netbuz.org>
# API aliases for backwards compatibility
# deprecated constants
# TODO: this uses deprecated GLib attributes, silence for now
# Deprecated, use GLib directly
# Deprecated, use: GObject.ParamFlags.* directly
# PARAM_READWRITE should come from the gi module but cannot due to:
# https://gitlab.gnome.org/GNOME/gobject-introspection/issues/75
# Deprecated, use: GObject.SignalFlags.* directly
# Static types
# XXX: This is the same as self.g_type, but the field marshalling
# code is currently very slow.
# Work around the introspection marshalers' inability to know
# these methods should be marshaling boxed types. This is because
# the type information is stored on the GValue.
# Fall back to _gvalue_set which handles some more cases
# like fundamentals for which a converter is registered
# get_variant was missing annotations
# https://gitlab.gnome.org/GNOME/glib/merge_requests/492
# n_params',
# Return a named tuple to allow indexing, which is compatible with the
# static bindings along with field like access of the gi struct.
# Note however that the n_params was not returned from the static bindings
# so we must skip over it.
# GObject accumulators with pure Python implementations
# These return a tuple of (continue_emission, accumulation_result)
# Stop emission but return the result of the last handler
# Stop emission if the last handler returns True
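Both accumulators return the `(continue_emission, accumulation_result)` pair described above. A pure-Python sketch (the function names mirror the intent, not necessarily the exact names in the override module):

```python
def accumulator_first_wins(ihint, return_accu, handler_return, user_data=None):
    # Stop emission after the first handler, returning its result.
    return False, handler_return

def accumulator_true_handled(ihint, return_accu, handler_return, user_data=None):
    # Stop emission only if the last handler returned True.
    continue_emission = not handler_return
    return continue_emission, handler_return
```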
# Statically bound signal functions which need to clobber GI (for now)
# Function wrapper for signal functions used as instance methods.
# This is needed when the signal functions come directly from GI.
# (they are not already wrapped)
# Generic data methods are not needed in python as it can be handled
# with standard attribute access: https://bugzilla.gnome.org/show_bug.cgi?id=641944
# The following methods are unsupported until we verify
# they work as gi methods.
# Make all reference management methods private but still accessible.
# The following methods are static APIs which need to leapfrog the
# gi methods until we verify the gi methods can replace them.
# Swap obj with the last element in args which will be the user
# data passed to the connect function.
# Deprecated Methods
# Deprecated naming "property" available for backwards compatibility.
# Keep this at the end of the file to avoid clobbering the builtin.
# modify it under the terms of the GNU Library General Public
# Library General Public License for more details.
# You should have received a copy of the GNU Library General Public
# Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
# vim:set et sts=4 sw=4:
# ibus - The Input Bus
# Copyright (c) 2012 Daiki Ueno <ueno@unixuser.org>
# Copyright (c) 2011 Peng Huang <shawn.p.huang@gmail.com>
# for newer pygobject: https://bugzilla.gnome.org/show_bug.cgi?id=686828
# from ..module import get_introspection_module
# IBus = get_introspection_module('IBus')
# Backward compatibility: allow non-keyword arguments
# Backward compatibility: allow keyword arguments
# Backward compatibility: accept default arg
# Backward compatibility: unset value if value is None
# Note that we don't call GLib.Variant.unpack here
# Backward compatibility: rename
# Copyright (C) 2010 Simon van der Linden <svdlinden@src.gnome.org>
# FIXME: doesn't work yet
# self.long_ = long_
# return self
# libcaca       Colour ASCII-Art library
# Copyright (c) 2010 Alex Foulon <alxf@lavabit.com>
# This library is free software. It comes without any warranty, to
# the extent permitted by applicable law. You can redistribute it
# and/or modify it under the terms of the Do What the Fuck You Want
# to Public License, Version 2, as published by Sam Hocevar. See
# http://www.wtfpl.net/ for more details.
# set buffer for writing utf8 value
#standard modules
#functions to handle string/bytes in python3+
#color constants
#styles constants
#key constants
#event constants
#memory error occurred otherwise
#memory error occurs otherwise
# this has to be at the top level, see how setup.py parses this
#: Distribution version number.
# Note that image is commented out in the spec as "this isn't an
# element that can end up on the stack, so it doesn't matter,"
# Heading elements need to be ordered
# entitiesWindows1252 has to be _ordered_ and needs to have an index. It
# therefore can't be a frozenset.
# 0x80  0x20AC  EURO SIGN
# 0x81          UNDEFINED
# 0x82  0x201A  SINGLE LOW-9 QUOTATION MARK
# 0x83  0x0192  LATIN SMALL LETTER F WITH HOOK
# 0x84  0x201E  DOUBLE LOW-9 QUOTATION MARK
# 0x85  0x2026  HORIZONTAL ELLIPSIS
# 0x86  0x2020  DAGGER
# 0x87  0x2021  DOUBLE DAGGER
# 0x88  0x02C6  MODIFIER LETTER CIRCUMFLEX ACCENT
# 0x89  0x2030  PER MILLE SIGN
# 0x8A  0x0160  LATIN CAPITAL LETTER S WITH CARON
# 0x8B  0x2039  SINGLE LEFT-POINTING ANGLE QUOTATION MARK
# 0x8C  0x0152  LATIN CAPITAL LIGATURE OE
# 0x8D          UNDEFINED
# 0x8E  0x017D  LATIN CAPITAL LETTER Z WITH CARON
# 0x8F          UNDEFINED
# 0x90          UNDEFINED
# 0x91  0x2018  LEFT SINGLE QUOTATION MARK
# 0x92  0x2019  RIGHT SINGLE QUOTATION MARK
# 0x93  0x201C  LEFT DOUBLE QUOTATION MARK
# 0x94  0x201D  RIGHT DOUBLE QUOTATION MARK
# 0x95  0x2022  BULLET
# 0x96  0x2013  EN DASH
# 0x97  0x2014  EM DASH
# 0x98  0x02DC  SMALL TILDE
# 0x99  0x2122  TRADE MARK SIGN
# 0x9A  0x0161  LATIN SMALL LETTER S WITH CARON
# 0x9B  0x203A  SINGLE RIGHT-POINTING ANGLE QUOTATION MARK
# 0x9C  0x0153  LATIN SMALL LIGATURE OE
# 0x9D          UNDEFINED
# 0x9E  0x017E  LATIN SMALL LETTER Z WITH CARON
# 0x9F  0x0178  LATIN CAPITAL LETTER Y WITH DIAERESIS
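The defined entries of the table above match Python's own `cp1252` codec, so the mapping can be sketched through it (undefined slots 0x81, 0x8D, 0x8F, 0x90 and 0x9D raise on decode; falling back to the raw code point here is just one possible policy):

```python
def windows1252_replacement(byte_value):
    """Map a byte in 0x80-0x9F to the character listed in the table
    above, via Python's cp1252 codec; a sketch, with UNDEFINED slots
    falling back to the raw code point."""
    try:
        return bytes([byte_value]).decode("cp1252")
    except UnicodeDecodeError:
        return chr(byte_value)  # UNDEFINED in the table
```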
# Setup the initial tokenizer state
# The current token being created
# Start processing. When EOF is reached self.state will return False
# instead of True and the loop will terminate.
# Consume all the characters that are in range while making sure we
# don't hit an EOF.
# Convert the set of characters consumed to an int.
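The consume-then-convert step for numeric character references can be sketched over a plain string (hypothetical helper; the real tokenizer works on a chunked stream with EOF handling):

```python
import string

def consume_number_entity(data, pos, is_hex=False):
    """Consume the digits of a numeric character reference starting at
    `pos`, stopping at the first character out of range or at end of
    input, then convert the consumed run to a code point. A sketch."""
    allowed = string.hexdigits if is_hex else string.digits
    chars = []
    while pos < len(data) and data[pos] in allowed:
        chars.append(data[pos])
        pos += 1
    codepoint = int(''.join(chars), 16 if is_hex else 10)
    return chr(codepoint), pos
```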
# Certain characters get replaced with others
# Should speed up this check somehow (e.g. move the set to a constant)
# Try/except needed as UCS-2 Python builds' unichar only works
# within the BMP.
# Discard the ; if present. Otherwise, put it back on the queue and
# invoke parseError on parser.
# Initialise to the default output for when no entity is matched
# Read the next character to see if it's hex or decimal
# charStack[-1] should be the first digit
# At least one digit found, so consume the whole number
# No digits found
# At this point in the process we might have a named entity. Entities
# are stored in the global variable "entities".
# Consume characters and compare these to a substring of the
# entity names in the list until the substring no longer matches.
# At this point we have a string that starts with some characters
# that may match an entity
# Try to find the longest entity the string will match to take care
# of &noti for instance.
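The longest-match lookup for `&noti` can be sketched against a tiny slice of the entity table (the `ENTITIES` dict below is an illustrative fragment, not the real `entities` global):

```python
# A tiny illustrative slice of the real `entities` table.
ENTITIES = {"not": "\u00ac", "notin;": "\u2209", "amp;": "&", "amp": "&"}

def match_entity(data):
    """Find the longest entity name that prefixes `data`, so '&noti'
    resolves to 'not' plus a literal 'i' rather than failing on the
    nonexistent name 'noti'. A sketch of the lookup described above."""
    for end in range(len(data), 0, -1):
        name = data[:end]
        if name in ENTITIES:
            return name, ENTITIES[name], data[end:]
    return None, None, data
```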
# Add token to the queue to be yielded
# we had some duplicated attribute, fix so first wins
# Below are the various tokenizer states worked out.
# Tokenization ends.
# Directly after emitting a token you switch back to the "data
# state". At that point spaceCharacters are important so they are
# emitted separately.
# No need to update lastFourChars here, since the first space will
# have already been appended to lastFourChars and will have broken
# any <!-- or --> sequences
# XXX In theory it could be something besides a tag name. But
# do we really care?
# XXX data can be _'_...
# (Don't use charsUntil here, because tag names are
# very short and it's faster to not do anything fancy)
# XXX If we emit here the attributes are converted to a dict
# without being checked and when the code below runs we error
# because data is a dict not a list
# Attributes are not dropped at this stage. That happens when the
# start tag token is emitted so values can still be safely appended
# to attributes, but we do want to report the parse error in time.
# XXX Fix for above XXX
# Make a new comment token and give it as value all the characters
# until the first > or EOF (charsUntil checks for EOF automatically)
# and emit it.
# Eat the character directly after the bogus comment which is either a
# ">" or an EOF.
# All the characters read before the current 'data' will be
# [a-zA-Z], so they're garbage in the bogus doctype and can be
# discarded; only the latest character might be '>' or EOF
# and needs to be ungetted
# XXX EMIT
# pylint:disable=redefined-variable-type
# Deal with null here rather than in the parser
# Non-unicode versions of constants for use in the pre-parser
# Use one extra step of indirection and create surrogates with
# eval. Not using this indirection would introduce an illegal
# unicode literal on platforms not supporting such lone
# surrogates.
# pylint:disable=eval-used
# Cache for charsUntil()
# chunk number, offset
# Work around Python bug #20007: read(0) closes the connection.
# http://bugs.python.org/issue20007
# Also check for addinfourl wrapping HTTPResponse
# Such platforms will have already checked for such
# surrogate errors, so no need to do this checking.
# List of where new lines occur
# number of (complete) lines in previous chunks
# number of columns in the last line of the previous chunk
# Deal with CR LF and surrogates split over chunk boundaries
# Already a file object
# Read a new chunk from the input stream if necessary
# Deal with CR LF and surrogates broken across chunks
# We have no more data, bye-bye stream
# Replace invalid characters
# Someone picked the wrong compile option
# You lose
# Pretty sure there should be endianness issues here
# We have a surrogate pair!
# Use a cache of regexps to find the required characters
# Find the longest matching prefix
# If nothing matched, and it wasn't because we ran out of chunk,
# then stop
# If not the whole chunk matched, return everything
# up to the part that didn't match
# If the whole remainder of the chunk matched,
# use it all and read the next chunk
# Reached EOF
# Only one character is allowed to be ungotten at once - it must
# be consumed again before any further call to unget
# unget is called quite rarely, so it's a good idea to do
# more work here if it saves a bit of work in the frequently
# called char and charsUntil.
# So, just prepend the ungotten character onto the current
# chunk:
# Raw Stream - for unicode objects this will encode to utf-8 and set
# Encoding Information
# Number of bytes to use when looking for a meta element with
# encoding information
# Number of bytes to use when using detecting encoding using chardet
# Things from args
# Determine encoding
# Call superclass
# BOMs take precedence over everything
# This will also read past the BOM if present
# If we've been overridden, we've been overridden
# Now check the transport layer
# Look for meta elements with encoding information
# Parent document encoding
# "likely" encoding
# Guess with chardet, if available
# Try the default encoding
# Fallback to html5lib's default if even that hasn't worked
# Go to beginning of file and read in 4 bytes
# Try detecting the BOM using bytes from the string
# Need to detect UTF-32 before UTF-16
# UTF-32
# UTF-16
# Set the read position past the BOM if one was found, otherwise
# set it to the start of the stream
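The ordering constraint above (UTF-32 before UTF-16) matters because the UTF-32-LE BOM begins with the UTF-16-LE BOM bytes. A sketch of the check, using the standard library's BOM constants (hypothetical helper name):

```python
import codecs

def detect_bom(first_bytes):
    """Detect a BOM in the first bytes of a stream; UTF-32 BOMs are
    tested before UTF-16 because b'\\xff\\xfe\\x00\\x00' starts with
    b'\\xff\\xfe'. Returns (encoding_name, bom_length) or (None, 0)."""
    boms = [
        ("utf-32-be", codecs.BOM_UTF32_BE),
        ("utf-32-le", codecs.BOM_UTF32_LE),  # starts with BOM_UTF16_LE
        ("utf-8", codecs.BOM_UTF8),
        ("utf-16-be", codecs.BOM_UTF16_BE),
        ("utf-16-le", codecs.BOM_UTF16_LE),
    ]
    for name, bom in boms:
        if first_bytes.startswith(bom):
            return name, len(bom)
    return None, 0
```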
# pylint:disable=unused-argument
# Py2 compat
# use property for the error-checking
# if we have <meta not followed by a space, just keep going
# We have a valid meta element we want to search for attributes
# Try to find the next attribute after the current position
# If the next byte is not an ascii letter either ignore this
# fragment (possible start tag case) or treat it according to
# handleOther
# return to the first step in the overall "two step" algorithm
# reprocessing the < byte
# Read all attributes
# Step 1 (skip chars)
# Step 2
# Step 3
# Step 4 attribute name
# Step 6!
# Step 5
# Step 7
# Step 8
# Step 9
# Step 10
# 10.1
# 10.2
# 10.3
# 10.4
# 10.5
# Step 11
# Check if the attr name is charset
# otherwise return
# If there is no = sign keep looking for attrs
# Look for an encoding between matching quote marks
# Unquoted value
# Return the whole remaining value
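The value-parsing step above (a value between matching quote marks, or an unquoted run) can be sketched over a plain string (hypothetical helper; the real prescan operates on bytes with its own cursor class):

```python
def parse_attribute_value(data, pos):
    """Read an attribute value starting at `pos`: either a value between
    matching quote marks, or an unquoted run up to whitespace or '>'.
    Returns (value, new_pos). A sketch of the step described above."""
    if pos < len(data) and data[pos] in "\"'":
        quote = data[pos]
        end = data.find(quote, pos + 1)
        if end == -1:
            return None, len(data)     # unterminated: give up
        return data[pos + 1:end], end + 1
    # unquoted value: consume until whitespace or '>'
    start = pos
    while pos < len(data) and data[pos] not in " \t\n\r>":
        pos += 1
    return data[start:pos], pos
```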
# Raise an exception on the first error encountered
# only used with debug mode
# "quirks" / "limited quirks" / "no quirks"
# state already is data state
# self.tokenizer.state = self.tokenizer.dataState
# When the loop finishes it's EOF
# XXX The idea is to make errorcode mandatory.
# The name of this method is mostly historical. (It's also used in the
# specification.)
# Check for conditions that should only happen in the innerHTML
# case
# Generic RCDATA/RAWTEXT Parsing algorithm
# For most phases the following is correct. Where it's not it will be
# overridden.
# Note the caching is done here rather than BoundMethodDispatcher as doing it there
# requires a circular reference to the Phase, and this ends up with a significant
# (CPython 2.7, 3.8) GC cost when parsing many short inputs
# In Py2, using `in` is quicker in general than try/except KeyError
# In Py3, `in` is quicker when there are few cache hits (typically short inputs)
# bound the cache size in case we get loads of unknown tags
# this makes the eviction policy random on Py < 3.7 and FIFO >= 3.7
# XXX Need a check here to see if the first start tag token emitted is
# this token... If it's not, invoke self.parser.parseError().
# helper methods
# the real thing
# Encoding it as UTF-8 here is a hack, as really we should pass
# the abstract Unicode string, and just use the
# ContentAttrParser on that, but using UTF-8 allows all chars
# to be encoded and, as an ASCII superset, it works.
# Need to decide whether to implement the scripting-disabled case
# Caller must raise parse error first!
# http://www.whatwg.org/specs/web-apps/current-work/#parsing-main-inbody
# the really-really-really-very crazy mode
# Set this to the default handler
# helper
# the real deal
# Stop parsing
# Sometimes (start of <pre>, <listing>, and <textarea> blocks) we
# want to drop leading newlines
# The tokenizer should always emit null on its own
# This must be bad for performance
# XXX Need tests that trigger the following
# input type=hidden doesn't change framesetOK
# No really...
# XXX Localization ...
# Need to get the parse error right for the case where the token
# has a namespace not equal to the xmlns attribute
# Not sure this is the correct name for the parse error
# We repeat the test for the body end tag token being ignored here
# Put us back in the right whitespace handling mode
# http://svn.whatwg.org/webapps/complete.html#adoptionAgency revision 7867
# XXX Better parseError messages appreciated.
# Step 1
# Step 4:
# Let the formatting element be the last element in
# the list of active formatting elements that:
# - is between the end of the list and the last scope
# marker in the list, if any, or the start of the list
# otherwise, and
# - has the same tag name as the token.
# If there is no such node, then abort these steps
# and instead act as described in the "any other
# end tag" entry below.
# Otherwise, if there is such a node, but that node is
# not in the stack of open elements, then this is a
# parse error; remove the element from the list, and
# abort these steps.
# Otherwise, if there is such a node, and that node is
# also in the stack of open elements, but the element
# is not in scope, then this is a parse error; ignore
# the token, and abort these steps.
# Otherwise, there is a formatting element and that
# element is in the stack and is in scope. If the
# element is not the current node, this is a parse
# error. In any case, proceed with the algorithm as
# written in the following steps.
# Step 5:
# Let the furthest block be the topmost node in the
# stack of open elements that is lower in the stack
# than the formatting element, and is an element in
# the special category. There might not be one.
# Step 6:
# If there is no furthest block, then the UA must
# first pop all the nodes from the bottom of the stack
# of open elements, from the current node up to and
# including the formatting element, then remove the
# formatting element from the list of active
# formatting elements, and finally abort these steps.
# Step 8:
# The bookmark is supposed to help us identify where to reinsert
# nodes in step 15. We have to ensure that we reinsert nodes after
# the node before the active formatting element. Note the bookmark
# can move in step 9.7
# Node is element before node in open elements
# Step 9.6
# Step 9.7
# Step 9.8
# Replace node with clone
# Step 9.9
# Remove lastNode from its parents, if any
# Step 9.10
# Foster parent lastNode if commonAncestor is a
# table, tbody, tfoot, thead, or tr we need to foster
# parent the lastNode
# Step 12
# Step 13
# Step 14
# Step 15
# The rest of this method is all stuff that only happens if
# document.write works
# http://www.whatwg.org/specs/web-apps/current-work/#in-table
# "clear the stack back to a table context"
# self.parser.parseError("unexpected-implied-end-tag-in-table",
# When the current node is <html> it's an innerHTML case
# processing methods
# If we get here there must be at least one non-whitespace character
# Do the table magic!
# XXX associate with form
# innerHTML case
# pretty sure we should never reach here
# http://www.whatwg.org/specs/web-apps/current-work/#in-caption
# XXX Have to duplicate logic here to find out if the tag is ignored
# AT this code is quite similar to endTagTable in "InTable"
# http://www.whatwg.org/specs/web-apps/current-work/#in-column
# http://www.whatwg.org/specs/web-apps/current-work/#in-table0
# the rest
# XXX AT Any ideas on how to share this with endTagTable?
# http://www.whatwg.org/specs/web-apps/current-work/#in-row
# helper methods (XXX unify this with other table helper methods)
# XXX how are we sure it's always ignored in the innerHTML case?
# Reprocess the current tag if the tr end tag was not ignored
# http://www.whatwg.org/specs/web-apps/current-work/#in-cell
# sometimes innerHTML case
# http://www.whatwg.org/specs/web-apps/current-work/#in-select
# We need to imply </option> if <option> is the current node.
# </optgroup> implicitly closes <option>
# It also closes </optgroup>
# But nothing else
# XXX this isn't in the spec but it seems necessary
# This is needed because data is to be appended to the <html> element
# here and not to whatever is currently open.
# http://www.whatwg.org/specs/web-apps/current-work/#in-frameset
# If we're not in innerHTML mode and the current node is not a
# "frameset" element (anymore) then switch.
# http://www.whatwg.org/specs/web-apps/current-work/#after3
# pylint:enable=unused-argument
# XXX after after frameset
# skip multi-character entities
# prefer &lt; over &LT; and similarly for &amp;, &gt;, etc.
# XXX: Should we cache this?
# attribute quoting options
# be secure by default
# tag syntax options
# escaping options
# miscellaneous options
# pylint:disable=too-many-nested-blocks
# Alphabetical attributes is here under the assumption that none of
# the later filters add or change order of attributes; it needs to be
# before the sanitizer so escaped elements come out correctly
# WhitespaceFilter should be used before OptionalTagFilter
# for maximum efficiency of this latter filter
# TODO: Add namespace support here
# XXX The idea is to make data mandatory.
# Without the
# We don't really support characters above the BMP :(
# output from the above
# Simpler things
# Other non-xml characters
# Platforms not supporting lone surrogates (\uD800-\uDFFF) should be
# caught by the below test. In general this would be any platform
# using UTF-16 as its encoding of unicode strings, such as
# Jython. This is because UTF-16 itself is based on the use of such
# surrogates, and there is no mechanism to further escape such
# escapes.
# We need this with u"" because of http://bugs.jython.org/issue2039
# see https://docs.python.org/3/reference/datamodel.html#object.__get__
# on a function, __get__ is used to bind a function to an instance as a bound method
# Some utility functions to deal with weirdness around UCS2 vs UCS4
# python builds
# Module Factory Factory (no, this isn't Java, I know)
# Here to stop this being duplicated all over the place.
# XXX: NEVER cache here, caching is done in the etree submodule
# tag name
# attributes (sorted for consistent ordering)
# self-closing
# Buffer the events so we can pass in the following one
# Don't forget the final event!
# pylint:disable=unused-variable
# It might be the root Element
# This is assumed to be an ordinary element
# Text node
# strip &;
# XXX: we cannot use a "bool(node) and node[0] or None" construct here
# because node[0] might evaluate to False if it has no child element
# tail
# else: fallback to "normal" processing
# FIXME: What to do?
# pylint:disable=arguments-differ
# HTML attributes
# MathML attributes
# SVG attributes
# Sanitize the +html+, escaping all elements not in ALLOWED_ELEMENTS, and
# stripping out all attributes not in ALLOWED_ATTRIBUTES. Style attributes
# are parsed, and a restricted set, specified by ALLOWED_CSS_PROPERTIES and
# ALLOWED_CSS_KEYWORDS, are allowed through. attributes in ATTR_VAL_IS_URI
# are scanned, and only URI schemes specified in ALLOWED_PROTOCOLS are
# allowed.
# accommodate filters which use token_type differently
# Remove forbidden attributes
# Remove attributes with disallowed URL values
# I don't have a clue where this regexp comes from or why it matches those
# characters, nor why we call unescape. I just know it's always been here.
# Should you be worried by this comment in a sanitizer? Yes. On the other hand, all
# this will do is remove *more* than it otherwise would.
# remove replacement characters from unescaped characters
# disallow urls
# gauntlet
# An html element's start tag may be omitted if the first thing
# inside the html element is not a space character or a comment.
# A head element's start tag may be omitted if the first thing
# inside the head element is an element.
# XXX: we also omit the start tag if the head element is empty
# A body element's start tag may be omitted if the first thing
# inside the body element is not a space character or a comment,
# except if the first thing inside the body element is a script
# or style element and the node immediately preceding the body
# element is a head element whose end tag has been omitted.
# XXX: we do not look at the preceding event, so we never omit
# the body element's start tag if it's followed by a script or
# a style element.
# A colgroup element's start tag may be omitted if the first thing
# inside the colgroup element is a col element, and if the element
# is not immediately preceded by another colgroup element whose
# end tag has been omitted.
# XXX: we do not look at the preceding event, so instead we never
# omit the colgroup element's end tag when it is immediately
# followed by another colgroup element. See is_optional_end.
# A tbody element's start tag may be omitted if the first thing
# inside the tbody element is a tr element, and if the element is
# not immediately preceded by a tbody, thead, or tfoot element
# whose end tag has been omitted.
# XXX: we never omit the thead and tfoot elements' end tags when they
# are immediately followed by a tbody element. See is_optional_end.
# An html element's end tag may be omitted if the html element
# is not immediately followed by a space character or a comment.
# A li element's end tag may be omitted if the li element is
# immediately followed by another li element or if there is
# no more content in the parent element.
# An optgroup element's end tag may be omitted if the optgroup
# element is immediately followed by another optgroup element,
# or if there is no more content in the parent element.
# A tr element's end tag may be omitted if the tr element is
# immediately followed by another tr element, or if there is
# no more content in the parent element.
# A dt element's end tag may be omitted if the dt element is
# immediately followed by another dt element or a dd element.
# A dd element's end tag may be omitted if the dd element is
# immediately followed by another dd element or a dt element,
# or if there is no more content in the parent element.
# A p element's end tag may be omitted if the p element is
# immediately followed by an address, article, aside,
# blockquote, datagrid, dialog, dir, div, dl, fieldset,
# footer, form, h1, h2, h3, h4, h5, h6, header, hr, menu,
# nav, ol, p, pre, section, table, or ul, element, or if
# there is no more content in the parent element.
# An option element's end tag may be omitted if the option
# element is immediately followed by another option element,
# or if it is immediately followed by an <code>optgroup</code>
# element, or if there is no more content in the parent
# element.
# An rt element's end tag may be omitted if the rt element is
# immediately followed by an rt or rp element, or if there is
# no more content in the parent element.
# An rp element's end tag may be omitted if the rp element is
# immediately followed by an rt or rp element, or if there is
# no more content in the parent element.
# A colgroup element's end tag may be omitted if the colgroup
# element is not immediately followed by a space character or
# a comment.
# XXX: we also look for an immediately following colgroup
# element. See is_optional_start.
# A thead element's end tag may be omitted if the thead element
# is immediately followed by a tbody or tfoot element.
# A tbody element's end tag may be omitted if the tbody element
# is immediately followed by a tbody or tfoot element, or if
# there is no more content in the parent element.
# A tfoot element's end tag may be omitted if the tfoot element
# is immediately followed by a tbody element, or if there is no
# more content in the parent element.
# XXX: we never omit the end tag when the following element is
# a tbody. See is_optional_start.
# A td element's end tag may be omitted if the td element is
# immediately followed by a td or th element, or if there is
# no more content in the parent element.
# A th element's end tag may be omitted if the th element is
# immediately followed by a td or th element, or if there is
# no more content in the parent element.
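The optional end-tag rules above can be sketched as a small predicate. This is an illustrative subset only, not the full filter: `next_tag` is assumed to be the tag name of the following sibling, or `None` when there is no more content in the parent element.

```python
def is_optional_end(tagname, next_tag):
    """Minimal sketch of a few of the optional end-tag rules above.

    next_tag is the following sibling's tag name, or None when there is
    no more content in the parent element (names are assumptions).
    """
    if tagname == "li":
        # Omitted before another li, or at the end of the parent.
        return next_tag == "li" or next_tag is None
    if tagname == "dt":
        # Omitted only before another dt or a dd element.
        return next_tag in ("dt", "dd")
    if tagname == "tr":
        return next_tag == "tr" or next_tag is None
    if tagname == "option":
        # Omitted before another option or an optgroup, or at the end.
        return next_tag in ("option", "optgroup") or next_tag is None
    return False
```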
# replace charset with actual encoding
# insert meta into empty head
# insert meta into head (if necessary) and flush pending queue
# Test on token["data"] above to not introduce spaces where there were not
# Come up with a sane default (pref. from the stdlib)
# NEVER cache here, caching is done in the dom submodule
# NEVER cache here, caching is done in the etree submodule
# The scope markers are inserted when entering object elements,
# marquees, table cells, and table captions, and are used to prevent formatting
# from "leaking" into tables, object elements, and marquees.
# The tag name associated with the node
# The parent of the current node (or None for the document node)
# The value of the current node (applies to text nodes and comments)
# A dict holding name -> value pairs for attributes of the node
# A list of child nodes of the current node. This must include all
# elements but not necessarily other node types.
# A list of miscellaneous flags that can be set on the node.
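The node interface described by the comments above can be sketched as a small class. Method and attribute names follow the html5lib treebuilder convention, but the details here are illustrative assumptions, not the real implementation.

```python
class Node:
    """Minimal sketch of the tree-builder node interface above."""

    def __init__(self, name):
        self.name = name        # the tag name associated with the node
        self.parent = None      # parent node (None for the document node)
        self.value = None       # applies to text nodes and comments
        self.attributes = {}    # name -> value pairs for attributes
        self.childNodes = []    # must include all elements, other types optional
        self._flags = []        # miscellaneous flags set on the node

    def appendChild(self, node):
        # Attach a child and keep the parent pointer consistent.
        node.parent = self
        self.childNodes.append(node)
```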
# XXX - should this method be made more general?
# pylint:disable=not-callable
# Document class
# The class to use for creating a node
# The class to use for creating comments
# The class to use for creating doctypes
# Fragment class
# XXX - rename these to headElement, formElement
# If we pass a node in we match that. if we pass a string
# match any node with that name
# We should never reach this point
# Within this algorithm the order of steps described in the
# specification is not quite the same as the order of steps in the
# code. It should still do the same though.
# Step 1: stop the algorithm when there's nothing to do.
# Step 2 and step 3: we start with the last element. So i is -1.
# Step 6
# This will be reset to 0 below
# Step 5: let entry be one earlier in the list.
# Mainly to get a new copy of the attributes
# Check for Marker first because if it's a Marker it doesn't have a
# name attribute.
# We should be in the InTable mode. This means we want to do
# special magic element rearranging
# The foster parent element is the one which comes before the most
# recently opened table element
# XXX - this is really inelegant
# XXX - we should really check that this parent is actually a
# node here
# XXX td, th and tr are not actually needed
# XXX This is not entirely what the specification says. We should
# investigate it more closely.
# assert self.innerHTML
# HACK: allow text nodes as children of the document node
# pylint:disable=protected-access
# The actual means to get a module!
# calling .items _always_ allocates, and the above truthy check is cheaper than the
# allocation on average
# Insert the text as the tail of the last child element
# Insert the text before the specified node
# Use the superclass constructor to set all properties on the
# wrapper element
# Full tree case
# Text in a fragment
# Fragment case
# self.fragmentClass = builder.DocumentFragment
# Because of the way libxml2 works, it doesn't seem to be possible to
# alter information like the doctype after the tree has been parsed.
# Therefore we need to use the built-in parser to create our initial
# tree, after which we can add elements like normal
# Append the initial comments:
# Create the root document and add the ElementTree to it
# Give the root element the right name
# Add the root element to the internal child/open data structures
# Reset to the default insert comment function
# Copyright (C) 2001, 2002 Brailcom, o.p.s.
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# GNU Lesser General Public License for more details.
# Copyright (C) 2003-2008 Brailcom, o.p.s.
#TODO: Blocking variants for speak, char, key, sound_icon.
# Constants representing \r\n. and \r\n..
# Read-write shutdown here is necessary, otherwise the socket.recv()
# call in the other thread won't return, at least on some platforms.
# Wait for the other thread to terminate
# If the socket has been closed, exit the thread
# This is not an index mark nor an event
# Ignore the event if no callback function has been registered.
# Get message and client ID of the event
# TODO: This check is dumb but seems to work.  The main thread
# hangs without it, when the Speech Dispatcher connection is lost.
# The list is sorted, read the first item
# Escape the end-of-data marker even if present at the beginning
# The start of the string is also the start of a line.
# Escape the end of data marker at the start of each subsequent
# line.  We can do that by simply replacing \r\n. with \r\n..,
# since the start of a line is immediately preceded by \r\n,
# when the line is not the beginning of the string.
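The escaping described above can be sketched in a few lines. The function name is an assumption for illustration; the replacement logic follows the comments directly.

```python
def escape_dot(text):
    """Sketch of the SSIP end-of-data escaping described above."""
    # Escape the end-of-data marker even if present at the beginning,
    # since the start of the string is also the start of a line.
    if text.startswith("."):
        text = "." + text
    # Every subsequent line start is immediately preceded by \r\n, so
    # replacing \r\n. with \r\n.. escapes the marker on all later lines.
    return text.replace("\r\n.", "\r\n..")
```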
# TODO: does that ever happen?
# Deprecated ->
# Resolve connection parameters:
# Respect address method argument and SPEECHD_ADDRESS environment variable
# Respect the old (deprecated) key arguments and environment variables
# TODO: Remove this section in 0.8 release
# Read the environment variables
# Prefer old (deprecated) function arguments, but if
# not specified and old (deprecated) environment variable
# is set, use the value of the environment variable
# The server might not be running, so try the autospawn mechanism.
# Autospawn is however not guaranteed to start the server. The server
# will decide, based on its configuration, whether to honor the request.
# The additional parameters were not set; stay with the defaults
# Failed conversion to int
# Check whether we are not connecting to a remote host
# TODO: This is a hack. inet sockets specific code should
# belong to _SSIPConnection. We do not however have an _SSIPConnection
# yet.
# Check resolved addrinfos for presence of localhost
# The hostname didn't resolve to localhost in either case,
# so do not spawn the server on localhost...
# TODO: Here we risk, that the callback arrives earlier, than we
# add the item to `self._callback_handler'.  Such a situation will
# lead to the callback being ignored.
# Deprecated but retained for backwards compatibility
# This class was introduced in 0.7 but later renamed to CommunicationMethod
# Copyright (C) 2003, 2006, 2007 Brailcom, o.p.s.
# TODO: This needs to be fixed. There is no guarantee that
# the message will start in one second nor is there any
# guarantee that it will start at all. It can be interrupted
# by other applications etc. Also there is no guarantee that
# the cancel will arrive on time and the end callback will be
# received on time. Also the combination cancel/end does not have
# to work as expected and SD and the interface can still be ok.
# -- Hynek Hanke
# Wait for pending events...
# Emulate SPT_DEBUG showing process info in the C module.
# Call getproctitle to initialize structures and avoid problems caused
# by fork() on macOS (see #113).
# Find the offset for when it doesn't have DST:
# OK, no DST during winter, get this offset
# Windows is special. It has unique time zone names (in several
# meanings of the word) available, but unfortunately, they can be
# translated to the language of the operating system, so we need to
# do a backwards lookup, by going through all time zones and see which
# one matches.
# Windows 7 and later
# For some reason this returns a string with loads of NUL bytes at
# least on some systems. I don't know if this is a bug somewhere, I
# just work around it.
# Don't support XP any longer
# Nope, that didn't work. Try adding "Standard Time",
# it seems to work a lot of times:
# Return what we have.
# DST is disabled, so don't return the timezone name,
# instead return Etc/GMT+offset
# The DST is turned off in the windows configuration,
# but this timezone doesn't have DST so it doesn't matter
# I can't convert this to an hourly offset
# This has whole hours as offset, return it as Etc/GMT
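The Etc/GMT fallback described above can be sketched as follows. Note that Etc/GMT zone names have inverted signs relative to the usual UTC+N convention; the function name and minute-based input are illustrative assumptions.

```python
def etc_gmt_name(utc_offset_minutes):
    """Sketch: name a DST-less zone Etc/GMT+offset, as described above.

    Etc/GMT signs are inverted: UTC-5 is Etc/GMT+5.
    """
    if utc_offset_minutes % 60:
        return None  # can't convert this to an hourly offset
    # Whole hours as offset: return it as Etc/GMT with inverted sign.
    return "Etc/GMT%+d" % (-utc_offset_minutes // 60)
```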
# If the timezone does NOT come from a TZ environment variable,
# verify that it's correct. If it's from the environment,
# we accept it, this is so you can run tests with different timezones.
# This file is autogenerated by the update_windows_mapping.py script
# Do not edit.
# Old name for the win_tz variable:
# No one has timezone offsets less than a minute, so this should be close enough:
# Yup, it's a timezone
# It's a file specification, expand it, if possible
# Is it a zone info zone?
# Yup, it is
# Maybe it's a short one, like UTC?
# Indeed
# Some weird format that exists:
# TZ specifies a file
# Try to see if we can figure out the name
# Nope, not a standard timezone name, just take the filename
# TZ must specify a zoneinfo zone.
# That worked, so we return this:
# Nope, it's something like "PST4DST" etc, we can't handle that.
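The TZ-environment-variable handling outlined above can be sketched like this. The exact path handling and function name are illustrative assumptions; real implementations do considerably more validation.

```python
import os

def tz_name_from_env(tzenv):
    """Sketch of the TZ environment variable handling described above."""
    if tzenv.startswith(":"):
        tzenv = tzenv[1:]  # some weird format that exists: a leading colon
    if tzenv.startswith("/") and os.path.exists(tzenv):
        # TZ specifies a file: try to figure out the zone name from the path
        parts = tzenv.split("/")
        if "zoneinfo" in parts:
            return "/".join(parts[parts.index("zoneinfo") + 1:])
        # Not a standard timezone name, just take the filename
        return parts[-1]
    # Otherwise TZ must specify a zoneinfo zone (or something like
    # "PST4DST", which this sketch does not handle).
    return tzenv
```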
# First try the ENV setting.
# Are we under Termux on Android?
# proot environment or failed to getprop
# Now look for distribution specific configuration files
# that contain the timezone name.
# Stick all of them in a dict, to compare later.
# Empty file, skip
# Get rid of host definitions and comments:
# File doesn't exist or is a directory, or it's a binary file.
# CentOS has a ZONE setting in /etc/sysconfig/clock,
# OpenSUSE has a TIMEZONE setting in /etc/sysconfig/clock and
# Gentoo has a TIMEZONE setting in /etc/conf.d/clock
# We look through these files for a timezone:
# Look for the ZONE= setting.
# No ZONE= setting. Look for the TIMEZONE= setting.
# Some setting existed
# We found a timezone
# Catching UnicodeDecodeError handles the case when clock is a symlink to /etc/localtime
# systemd distributions use symlinks that include the zone name,
# see manpage of localtime(5) and timedatectl(1)
# Only the first valid relative path in the symlink is needed.
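Recovering the zone name from a systemd-style symlink can be sketched as below. The function takes the symlink target string; the name and the marker-based parsing are illustrative assumptions.

```python
def zone_from_symlink(target):
    """Sketch: recover the zone name from an /etc/localtime symlink
    target, e.g. '/usr/share/zoneinfo/Europe/Berlin' -> 'Europe/Berlin'.
    """
    marker = "zoneinfo/"
    pos = target.find(marker)
    if pos == -1:
        return None  # not a zoneinfo-style link; give up
    return target[pos + len(marker):]
```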
# We found some explicit config of some sort!
# Uh-oh, multiple configs. See if they match:
# For some reason some distros are removing support for /etc/timezone,
# which is bad, because that's the only place where the timezone is stated
# in plain text, and what's worse, they don't delete it. So we can't trust
# it now, so when we have conflicting configs, we just ignore it, with a warning.
# We found exactly one config! Use it.
# Look them up in /usr/share/zoneinfo, and find what they
# really point to:
# No explicit setting existed. Use localtime
# We are using a file in etc to name the timezone.
# Verify that the timezone specified there is actually used:
# Without destdir
# Convert Windows paths to use / for consistency
# Validate the passed values.
# Prefix #! and newline after.
# According to distlib, Darwin can handle up to 512 characters. But I want
# to avoid platform sniffing to make this as platform agnostic as possible.
# The "complex" script isn't that bad anyway.
# The launcher can just use the command as-is.
# Shebang support for an executable with a space in it is under-specified
# and platform-dependent, so we use a clever hack to generate a script to
# run in ``/bin/sh`` that should work on all reasonably modern platforms.
# Read the following message to understand how the hack works:
# https://github.com/pradyunsg/installer/pull/4#issuecomment-623668717
# I don't understand a lick what this is trying to do.
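The shebang construction discussed above can be sketched as follows. When the interpreter path contains a space, this falls back to the `/bin/sh` trampoline trick linked above (the text is simultaneously a valid sh script and a Python triple-quoted string); the function name is an assumption.

```python
def build_shebang(executable):
    """Sketch of the shebang logic above (names are assumptions)."""
    if " " not in executable:
        # The simple case: the command can be used as-is.
        return "#!" + executable
    # Shebang support for paths with spaces is under-specified, so emit
    # a /bin/sh script that re-execs the real interpreter. The line
    # starting with ''' is both sh code and a Python string literal.
    quoted = executable.replace('"', '\\"')
    return "#!/bin/sh\n'''exec' \"%s\" \"$0\" \"$@\"\n' '''" % quoted
```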
# calculate 'headers' path, not currently in sysconfig - see
# https://bugs.python.org/issue44445. This is based on what distutils does.
# TODO: figure out original vs normalised distribution names
# Borrowed from https://github.com/python/cpython/blob/v3.9.1/Lib/shutil.py#L52
# According to https://www.python.org/dev/peps/pep-0427/#file-name-convention
# Adapted from https://github.com/python/importlib_metadata/blob/v3.4.0/importlib_metadata/__init__.py#L90  # noqa
# According to https://www.python.org/dev/peps/pep-0427/#id7
# write our new shebang
# copy the rest of the stream
# skip first line
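The three steps above (skip the first line, write the new shebang, copy the rest) can be sketched as one small function; the name and signature are illustrative assumptions.

```python
import io

def fix_shebang(stream, interpreter):
    """Sketch: replace a script's first line with a new shebang."""
    stream.readline()  # skip first line (the old shebang)
    out = io.BytesIO()
    # write our new shebang
    out.write(b"#!" + interpreter.encode("utf-8") + b"\n")
    # copy the rest of the stream
    out.write(stream.read())
    out.seek(0)
    return out
```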
# Borrowed from https://github.com/python/importlib_metadata/blob/v3.4.0/importlib_metadata/__init__.py#L115  # noqa
# TODO: make this a proper error, which can be caught.
# Borrowed from:
# https://github.com/pypa/pip/blob/0f21fb92/src/pip/_internal/utils/unpacking.py#L93
# Ensure compatibility with this wheel version.
# Determine where archive root should go.
# If it's not in `{distribution}-{version}.data`, then it's in root_scheme.
# Figure out which scheme this goes to.
# RECORD handling
# Write the entry_points based scripts.
# Write all the files from the wheel.
# Skip the RECORD, which is written at the end, based on this info.
# Figure out where to write this file.
# Write all the installation-specific metadata
# NAME-VER.dist-info
# looks like a directory
# both are for digital signatures, and not mentioned in RECORD
# Incorrectly contained
# Assert that RECORD doesn't have size and hash.
# Incorrectly contained hash / size
# Report empty hash / size
# Convert the record file into a useful mapping
# Pop record with empty default, because validation is handled by `validate_record`
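Converting the RECORD file into a useful mapping can be sketched as below. RECORD is a CSV of `path,hash,size` rows per the wheel format; empty fields (as on the RECORD entry itself) become `None` here. The function name is an assumption.

```python
import csv
import io

def parse_record(record_text):
    """Sketch: convert a wheel RECORD file into path -> (hash, size)."""
    mapping = {}
    for row in csv.reader(io.StringIO(record_text)):
        if not row:
            continue  # skip blank lines
        path, digest, size = row
        # RECORD's own entry has empty hash and size fields.
        mapping[path] = (digest or None, int(size) if size else None)
    return mapping
```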
# https://github.com/pypa/pip/blob/0f21fb92/src/pip/_internal/utils/unpacking.py#L96-L100
# Copyright 2012-2023, Andrey Kislyuk and argcomplete contributors.
# Licensed under the Apache License. See https://github.com/kislyuk/argcomplete for more info.
# Copyright 2012-2023, Andrey Kislyuk and argcomplete contributors. Licensed under the terms of the
# `Apache License, Version 2.0 <http://www.apache.org/licenses/LICENSE-2.0>`_. Distribution of the LICENSE and NOTICE
# files with source copies of this package and derivative works is **REQUIRED** as specified by the Apache License.
# See https://github.com/kislyuk/argcomplete for more info.
# not an argument completion invocation
# _ARGCOMPLETE is set by the shell script to tell us where comp_words
# should start, based on what we're completing.
# 1: <script> [args]
# 2: python <script> [args]
# 3: python -m <module> [args]
# Special case for when the current word is "--optional=PARTIAL_VALUE". Give the optional to the parser.
# TODO: accomplish this with super
# Logic adapted from take_action in ArgumentParser._parse_known_args
# (members are saved by vendor._argparse.IntrospectiveArgumentParser)
# This means the action will fail to parse if the word under the cursor is not given
# to it, so give it exclusive control over completions (flush previous completions)
# Special case for when the current word is "--optional=PARTIAL_VALUE".
# The completer runs on PARTIAL_VALUE. The prefix is added back to the completions
# (and chopped back off later in quote_completions() by the COMP_WORDBREAKS logic).
# Only run completers if current word does not start with - (is not an optional)
# Use the single greedy action (if there is one) or all active actions.
# action is a positional
# Any positional arguments after this may slide down into this action
# if more arguments are added (since the user may not be done yet),
# so it is extremely difficult to tell which completers to run.
# Running all remaining completers will probably show more than the user wants
# but it also guarantees we won't miss anything.
# completer = getattr(active_action, "completer", DefaultCompleter())
# If the word under the cursor was quoted, escape the quote char.
# Otherwise, escape all special characters and specially handle all COMP_WORDBREAKS chars.
# Bash mangles completions which contain characters in COMP_WORDBREAKS.
# This workaround has the same effect as __ltrim_colon_completions in bash_completion
# (extended to characters other than the colon).
# tcsh and fish escapes special characters itself.
# Nothing can be escaped in single quotes, so we need to close
# the string, escape the single quote, then open a new string.
# PowerShell uses ` as escape character.
# zsh uses colon as a separator between a completion and its description.
# Similar functionality in bash was previously turned off by supplying the "-o nospace" option to complete.
# Now it is conditionally disabled using "compopt -o nospace" if the match ends in a continuation character.
# This code is retained for environments where this isn't done natively.
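The COMP_WORDBREAKS workaround described above can be sketched like this: trim each completion up to the last wordbreak character of the current word, then backslash-escape special characters. The character set and function signature here are illustrative assumptions, not argcomplete's exact behavior.

```python
def quote_completions(completions, last_wordbreak_pos=None):
    """Sketch of the bash COMP_WORDBREAKS workaround described above."""
    special = " \t\"'@><=;|&(:"  # assumed set of characters to escape
    out = []
    for c in completions:
        if last_wordbreak_pos is not None:
            # Same effect as __ltrim_colon_completions, extended to
            # other wordbreak characters: drop the already-typed prefix.
            c = c[last_wordbreak_pos + 1:]
        out.append("".join("\\" + ch if ch in special else ch for ch in c))
    return out
```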
# Look for the first importlib ModuleSpec that has `origin` set, indicating it's not a namespace package.
# TODO: replace "universal_newlines" with "text" once 3.6 support is dropped
# Fix if someone passes in a string instead of a list
# Using 'bind' in this and the following commands is a workaround to a bug in bash
# that was fixed in bash 5.3 but affects older versions. Environment variables are not treated
# correctly in older versions and calling bind makes them available. For details, see
# https://savannah.gnu.org/support/index.php?111125
# empty iterator
# Iterate on target_dir entries and filter on given predicate
# If the script path contains a space, this would generate an invalid function name.
# use path for absolute paths
# / not allowed in function name
# If no script was specified, default to the executable being completed.
# TODO: make this less ugly
# posix
# non-posix
# if len(prefix) > 0 and prefix[0] in lexer.quotes:
# TODO: check if this is ever unsafe
# raise ArgcompleteException("Unexpected end of input")
# Argument is the full path to the console script.
# Find the module and function names that correspond to this
# assuming it is actually a console script.
# Python 3.12+ returns a tuple of entry point objects
# whereas <=3.11 returns a SelectableGroups object
# Check this looks like the script we really expected.
# Look for the argcomplete marker in the script it imports.
# PEP 366
# created by homebrew
# created by bash-completion
# TODO: warn if running as superuser
# This copy of shlex.py from Python 3.6 is distributed with argcomplete.
# It contains only the shlex class, with modifications as noted.
# Modified by argcomplete: 2/3 compatibility
# if self.posix:
# remove any punctuation chars from wordchars
# Modified by argcomplete: Record last wordbreak position
# This file contains argparse introspection utilities used in the course of argcomplete execution.
# Not sure what this should be, but this previously always returned False
# so at least this won't break anything that wasn't already broken.
# Added by argcomplete
# seen arguments, assuming that actions that use the default
# value don't really count as "present"
# Begin added by argcomplete
# When a subparser action is taken and fails due to incomplete arguments, it does not merge the
# contents of its parsed namespace into the parent namespace. Do that here to allow completers to
# access the partially parsed arguments for the subparser.
# End added by argcomplete
# Python 3.12.7+
# Python 3.11.9+, 3.12.3+, 3.13+
# If the pattern is not open (e.g. no + at the end), remove the action from active actions (since
# it wouldn't be able to consume any more args)
# if we didn't use all the Positional objects, there were too few
# arg strings supplied.
# make sure all required actions were present
# SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB)
# Copyright (c) 2018, Mellanox Technologies. All rights reserved.
# Copyright (c) 2019 Mellanox Technologies, Inc. All rights reserved. See COPYING file
# Author: Mario Lenz <m@riolenz.de>
# SPDX-FileCopyrightText: Ansible Project, 2022
# Copyright (c) 2021, Ansible Team
# (c) 2019 Red Hat Inc.
# JSONDecodeError only available on Python 3.5+
# Set the 'SEC' header
# Clean up tokens
# (c) 2019, Adam Miller (admiller@redhat.com)
# FIXME - provide correct example here
# if module.params['name']:
# name=dict(required=False, type='str'),
# id=dict(required=False, type='str'),
# required_one_of=[
# FIXME - handle the scenario in which we can search by name and this isn't a required param anymore
# FIXME - once this is sorted, add it to module_utils
# if module.params['state'] == 'present':
# The note we want exists either by ID or by text name, verify
# FIXME FIXME FIXME - can we actually delete these via the REST API?
# if module.params['state'] == 'absent':
# Set it to the default as provided by the QRadar Instance
# Already enabled
# Not enabled, enable it
# Already disabled
# Not disabled, disable it
# Copyright 2022 Red Hat Inc.
# find log source types details
# This will be removed once all of the available modules
# are moved to use the action plugin design; otherwise tests
# would start to complain without the implementation.
# This allows us to exclude specific argspec keys from being included by
# the rest data that don't follow the qradar_* naming convention
# FIXME - make use of handle_httperror(self, exception) where applicable
# https://www.ibm.com/support/knowledgecenter/SS42VS_7.3.1/com.ibm.qradar.doc/9.2--staged_config-deploy_status-POST.html
# Documentation says we should get 1002, but I'm getting 1004 from QRadar
# Because for some reason some QRadar REST API endpoint use the
# query string to modify state
# return self.post("/{0}".format(rest_path), payload=data)
# PATCH
# Copyright (C) 2020 IBM CORPORATION
# Author(s): Peng Wang <wangpww@cn.ibm.com>
# logging setup
# Required
# internal variable
# Handling missing mandatory parameter name
# when check_mode is enabled
# Make command
# Run command
# Any error will have been raised in svc_run_command
# chmkdiskgrp does not output anything when successful.
# chhost does not output anything when successful.
# TBD: Implement a more generic way to check for properties to modify.
# This is where we detect if chmdisk should be called.
# This is where we would modify
# Copyright (C) 2023 IBM CORPORATION
# Author(s): Sanjaikumaar M <sanjaikumaar.m@ibm.com>
# Dynamic variables
# Copyright (C) 2021 IBM CORPORATION
# Author(s): Rohit kumar <rohit.kumar6@ibm.com>
# Required when migration across clusters
# Required when migration across pools
# Check for missing mandatory parameter
# Check for invalid parameters
# Command does not output anything when successful.
# Check for missing parameters
# Copyright (C) 2022 IBM CORPORATION
# Author(s): Sreshtant Bohidar <sreshtant.bohidar@ibm.com>
# Initialize changed variable
# creating an instance of IBMSVCRestApi
# Author(s): Shilpi Jain <shilpi.jain1@ibm.com>
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Required parameters
# function to check whether volume group exists or not
# Dynamic variable
# converttoclone accepts only type=clone, so update validation will include that
# If type=thinclone (or some invalid value) was passed, return error.
# Making new call as snapshot_policy_start_time not available in lsvolumegroup CLI
# Make new call as volume list is not present in lsvolumegroup CLI
# If existing volumegroup is a thinclone but command params don't contain
# Source volumes for clone volumes needs to be fetched for verification
# 1. First get the volumes associated with volumegroup provided
# 2. Run lsvdisk for each volume provided in command to get source_volume_name
# Make a set from source volumes of all volumes
# Add the value of 'source_volume_name' to the merged_result
# If pool is provided, verify that pool matches with the one provided in command
# Mapping the parameters with the existing data for comparison
# If the policy is changed, the existing policystarttime will be erased, so add the time without any check
# Adding snapshotpolicysuspended to props
# Handle cases other than '' to clone
# Optional parameters
# If thinclone or clone is to be created from volumes, do following:
# 1. Create transient snapshot with 5-min retentionminutes
# 2. Create a thinclone volumegroup from this snapshot
# 3. There is no need to delete snapshot, as it is auto-managed due to retentionminutes
# Any error would have been raised in svc_run_command
# update the volume group
# Run converttoclone command
# Check whether provided source volumes are same as in existing volumegroup
# Internal variable
# creating an instance of IBMSVCRestApi for local system
# creating an instance of IBMSVCRestApi for remote system
# perform some basic checks
# Handling for mandatory parameter 'state'
# Parameter validation for creating IP partnership
# Parameter validation for deleting IP partnership
# Parameter validation for updating IP partnership
# fetch system IP address
# get all partnership
# filter partnership data
# get local partnership
# get all the attributes of a partnership
# fetch partnership data
# while updating and removing existing partnership
# while creating partnership
# create a new IP partnership
# when executed with check mode
# delete an existing partnership
# probe a partnership
# unsupported parameters while updating
# supported parameters while updating
# start a partnership
# stop a partnership
# update a partnership
# stop the partnership
# perform update operation
# start the partnership
# perform the update operation
# parameter validation while removing partnership
# removal of partnership on both local and remote system
# Author(s): Rohit Kumar <rohit.kumar6@ibm.com>
# Handling missing mandatory parameter name
# chrcconsistgrp does not output anything when successful.
# rmrcconsistgrp does not output anything when successful.
# setting the default value if unspecified
# perform some basic handling for few parameters
# function to fetch lssystem data
# function to probe lssystem data
# function to execute chsystem commands
# function to fetch existing email user
# function to check if email server exists or not
# function to check if email user exists or not
# function to create an email server
# function to update email user
# function to manage support email user
# for US timezone, callhome0@de.ibm.com is used
# for ROW, callhome1@de.ibm.com is used
# function to create an email user
# function to enable email callhome
# function to disable email callhome
# function to update email data
# function for checking if proxy server exists
# function for removing a proxy
# function for creating a proxy
# function for probing existing proxy data
# function for updating a proxy
# function for fetching existing cloud callhome data
# function for enabling cloud callhome
# function for doing connection test for cloud callhome
# the connection testing can take some time to complete.
# function for managing proxy server
# function for disabling cloud callhome
# function to initiate callhome with cloud
# manage proxy server
# update email data
# manage cloud callhome
# perform connection test
# cloud callhome takes some time to get enabled.
# the module will exit without performing connection test.
# function to initiate callhome with email notifications
# manage email server
# manage support email user
# manage local email user
# manage email data
# enable email callhome
# enable cloud callhome
# enable both cloud and email callhome
# manage chsystem parameters
# disable email callhome
# disable cloud callhome
# Author(s): Sumit Kumar Gupta <sumit.gupta16@ibm.com>
# Handling missing mandatory parameter rname
# Handling missing mandatory parameter cvname
# command
# lsvdisk <basevolume>
# This is where we detect if chhostcluster should be called
# local SSH keys will be used in case of password less SSH connection
# Assisted by watsonx Code Assistant
# Decode the bytes object directly
# Test missing parameters for creation of truststore
# Fail if there are any missing parameters
# If truststore exists, change required fields
# Test missing parameters for updating truststore
# Even though probe_truststore() throws error for flashgrid attribute,
# self.flashgrid has to be checked here for supporting check_mode=True
# The reason to probe before running update_truststore is to avoid running a CLI in case of no change
# Required parameters for module
# Handling missing mandatory parameter command
# Connect to the storage
# Default parameters
# Check mandatory parameter src_volumegroup_name
# Copyright (C) 2024 IBM CORPORATION
# Required parameter(s)
# Mandatory parameter drive_id check
# drive_state and task are mutually-exclusive
# If task is to trigger dump, use triggerdrivedump instead of chdrive
# For other tasks such as format, erase, certify, recover, check currently running tasks if any
# If same task is already running for this drive_id, declare success and return
# If any other tasks are being run, return SVC error
# Show SVC error message to user
# Variable to store some frequently used data
# Parameters for deletion
# Variable to cache data
# Required during creation of user
# Handling for mandatory parameter name
# Handling for mandatory parameter state
# Handling mutually exclusive cases among parameters
# function to get user data
# function for creating new user
# Handling unsupported parameter during user creation
# Handling for mandatory parameter role
# function for probing an existing user
# function for updating an existing user
# function for removing an existing user
# Handling unsupported parameter during user removal
# initiate probing of an existing user
# initiate creation of new user
# initiate update of an existing user
# initiate deletion of an existing user
# Required Parameters
# Optional Parameters
# assemble iogrp
# for validating mandatory parameters of the module
# for validating parameter while removing an existing volume
# for validating parameter while creating a volume
# for validating parameter while renaming a volume
# for validating if volume type is supported or not
# The value 'many' indicates that the volume has more than one copy
# function to get existing volume data
# This function does following:
# 1. It fetches a set of volumes from SVC in all_vols_set
# 2. It converts self.name into a provided_vols_set
# 3. It gets common volumes set from both that meet the criteria
# 4. If any user-provided volume(s) do not exist on the cluster, returns error
# 5. It sets a new attribute self.target_vols_list_str, a string formed from the matched volume names
# function to get list of associated iogrp to a volume
# function to create a transient (short-lived) snapshot
# return value: snapshot_id
# function to create a new volume
# function to remove an existing volume
# function that converts data in other units to b (bytes)
# function to probe an existing volume
# check for changes in iogrp
# check for changes in volume size
# Check for standard or compressed volume
# check for changes in volumegroup
# check for presence of novolumegroup
# check for change in -thin parameter
# a standard volume or a compressed volume
# check for change in -compressed parameter
# not a compressed volume
# check for change in -deduplicated parameter
# not a deduplicated volume
# check for change in pool
# Check for change in cloud backup
# Check for change in fromsourcevolume
# Check for change in type
# function to expand an existing volume size
# function to shrink an existing volume size
# add iogrp
# remove iogrp
# For a list of volumes, target_volumes_list_str will be set by get_all_target_volumes()
# function to update an existing volume
# raise error for unsupported parameter
# updating iogrps of a volume
# updating size of a volume
# function for renaming an existing volume with a new name
# Special handling for list of volumes
# Only applicable for converting thinclone volumes list to clone
# self.target_vols_list_str will be set after calling get_all_target_volumes()
# If an existing volume was passed along with type=clone
# but not fromsourcevolume, user wants to convert thinclone to clone
# If volume is thinclone, convert it to clone.
# If volume is not thinclone, just return message.
# This is for cases, where it was either already done
# the last time it was run, or was never a thinclone.
# Both type and fromsourcevolume needed together
# Handling missing mandatory parameter name
# Author(s): Sudheesh Reddy Satti<Sudheesh.Reddy.Satti@ibm.com>
# Variable to cache data
# Error(if any) will be raised in svc_run_command
# Handling missing mandatory parameters
# Handling duplicate fcwwpn
# Handling duplicate iscsiname
# Handling duplicate nqn
# Handling for missing mandatory parameter name
# Handling for parameter protocol
# for validating parameter while renaming a host
# TBD: The parameter is fcwwpn but the view has fcwwpn label.
# Symmetric difference finds elements that are in either set but not both.
# IO_Grps in input but not in existing
# IO_Grps in existing but not in input
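The symmetric-difference comments above can be sketched as a small helper that splits the difference between the requested and existing IO groups into add/remove lists (the function name and list-based return are illustrative, not from the module):

```python
def diff_iogrps(requested, existing):
    """Split the symmetric difference of two IO-group sets into
    a to-add list (in input but not existing) and a to-remove
    list (in existing but not input)."""
    requested, existing = set(requested), set(existing)
    to_add = sorted(requested - existing)
    to_remove = sorted(existing - requested)
    return to_add, to_remove
```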
# update the host
# function for renaming an existing host with a new name
# This is where we detect if chhost should be called
# ignorefailures will be allowed only when access and secretkey are entered
# ignorefailures can be provided only when accesskeyid and secretaccesskey are given
# Can't validate the below parameters.
# These parameters are loners; cannot be specified with any other parameters in common_invalids
# Remove attr itself from list, and get invalids with this loner
# Handle Partition migration logic
# At source cluster, handle 'location' parameter
# If partition is currently not in migration with desired target,
# continue with chpartition -location command, else do nothing
# At target cluster, handle 'migrationaction' parameter
# We need to avoid running "chpartition -migrationaction fixeventwithchecks partition_name" in
# below 2 cases:
# 1. Partition migration just got initiated: i.e. migration_status = 'in_progress' on target
# 2. Partition migration got completed: At this stage, it has already completed, so don't run it.
# draft=False directly implies that the user wants to publish the partition
# If the existing partition's data indicates that the partition is already published, there is nothing to do.
# If not, then insert publish=true in API
# Get source partition 'partition_to_merge' details. If it is absent, it is either already merged, or does not
# exist. In both such cases, ansible won't return error.
# for validating parameter while renaming a portset
# function for renaming an existing portset with a new name
# So ext is optional to mkmdiskgrp but made required in ansible
# until all options for create are implemented.
# if not self.ext:
# update the mdisk group
# This is where we detect if chmdiskgrp should be called.
# Handling for mandatory parameter volname
# chmvdisk does not output anything when successful.
# Handling for volume
# Handling for host and hostcluster
# Discover the vdisk type. This function is called if the volume already exists.
# input poolA or poolB must belong to given Volume
# rmvolume does not output anything when successful.
# Perform basic checks and fail the module with appropriate error msg if requirements are not satisfied
# Discover System Topology
# Discover the existing vdisk type.
# Check if there is change in configuration
# create_vdisk_flag = self.discover_site_from_pools()
# if not create_vdisk_flag:
# This is where we would modify if required
# Getting all IDs in list_object
# Those commands in which all ids are unique (ex. lsmdisk, lsvdisk etc)
# Author(s): Sanjaikumaar <sanjaikumaar.m@ibm.com>
# Gathering required arguments for the module
# Initializing ansible module
# Required during creation of user group
# Handling mutually exclusive cases
# Handling unsupported parameter while removing a usergroup
# function to get user group data
# function for creating new user group
# Handling unsupported parameter during usergroup creation
# function for probing an existing user group
# function for updating an existing user group
# function for removing an existing user group
# initiate probing
# Copyright (C) 2025 IBM CORPORATION
# Required parameter
# name is to be used only during creating grid, so it is mutually-exclusive with other params below
# join, accept and remove require both action and target_cluster_name params together
# Check self role
# Check target cluster role
# Create FlashsystemGrid
# Member's join request
# Accept a pending join request
# Remove a member
# Delete flashsystem-grid
# User is trying to create the flashsystem-grid
# Create flashsystem-grid
# In case the current cluster and/or target cluster are part of a flashgrid, get their roles
# Check self.action and determine whether this is an idempotent re-run
# or a case where member and coordinator are already part of different flashsystem grids
# This cluster is part of some other flashsystem grid
# Run on coordinator node
# All checks done. Execute the desired action (join/accept/remove) now
# state==absent, user is trying to delete the flashsystemgrid.
# Required fields for module
# Handling missing mandatory parameter
# system related parameters
# dns related parameter
# license related parameters
# SVC throw
# To modify existing name or ip
# System Configuration
# DNS configuration
# For honour based licenses
# For key based licenses
# Make sure we can connect through the RestApi
# Catch any output or errors and pass back to the caller to deal with.
# Pass back in result for error handling
# Original payload data has nicer formatting
# pass, will mean both data and error are None.
# Exponential backoff: 1s, 2s, 4s
# Abort
# Exit the loop if not rate-limited
# Aborts
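The backoff comments above (1s, 2s, 4s; exit the loop if not rate-limited) can be sketched as a retry helper; the function name, `base_delay` parameter, and the rate-limit predicate are hypothetical, not the module's actual API:

```python
import time


def call_with_backoff(func, max_retries=3, base_delay=1,
                      is_rate_limited=lambda exc: True):
    """Retry func with exponential backoff (1s, 2s, 4s) while the
    failure is rate-limiting; re-raise anything else immediately."""
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return func()
        except Exception as exc:
            # Exit the loop (re-raise) if not rate-limited or out of retries
            if not is_rate_limited(exc) or attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay *= 2  # 1s, 2s, 4s
```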
# Might be None
# Object did not exist, which is quite valid.
# Fail for anything else
# connect through SSH
# Copyright: (c) 2014, Will Thames <will@thames.id.au>
# Common configuration for all AWS services
# Note: If you're updating MODULES, PLUGINS probably needs updating too.
# Formatted for Modules
# - modules don't support 'env'
# Formatted for non-module plugins
# Copyright: (c) 2022,  Ansible Project
# Standard Tagging related parameters
# Modules and Plugins can (currently) use the same fragment
# Minimum requirements for the collection
# Copyright: (c) 2022, Ansible, Inc
# (c) 2016, Bill Wang <ozbillwang(at)gmail.com>
# (c) 2017, Marat Bakeev <hawara(at)gmail.com>
# (c) 2018, Michael De La Rue <siblemitcom.mddlr(at)spamgourmet.com>
# Handled by AWSLookupBase
# validate arguments 'on_missing' and 'on_denied'
# Lookup by path
# Shorten parameter names. Yes, this will return
# duplicate names with different values.
# Lookup by parameter name - always returns a list with one or
# no entry.
# pylint: disable=duplicate-except
# (c) 2016 James Turner <turnerjsm@gmail.com>
# on Python 3+, json.decoder.JSONDecodeError is raised for bad
# JSON. On 2.x it's a ValueError
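The exception note above works because on Python 3 `json.JSONDecodeError` is a subclass of `ValueError`, so catching `ValueError` covers both the old and new behaviour. A minimal sketch (the wrapper name is illustrative):

```python
import json


def safe_load(text):
    """Parse JSON, returning None on malformed input; catching
    ValueError also catches json.JSONDecodeError (its subclass)."""
    try:
        return json.loads(text)
    except ValueError:
        return None
```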
# Copyright: (c) 2018, Aaron Smith <ajsmith10381@gmail.com>
# (c) 2023 Ansible Project
# pylint: disable=too-many-return-statements
# Copyright: (c) 2018, Pat Sharkey <psharkey@cleo.com>
# Based on the ssh connection plugin by Michael DeHaan
# Replace or strip sequence (at terminal width)
# Initialize S3 client
# Initialize SSM client
# Initialize FileTransferManager
# User has provided a path to the ssm plugin, ensure this is a valid path
# find executable from path 'session-manager-plugin'
# For non-windows Hosts: Ensure the session has started, and disable command echo and prompt.
# Read stdout between the markers
# see https://github.com/pylint-dev/pylint/issues/8909
# Wrap command in markers accordingly for the shell used
# Get command return code
# Throw away final lines
# Windows is a little more complex
# Value of $LASTEXITCODE will be the line after the mark
# output to keep will be before the mark
# If the return code contains #CLIXML (like a progress bar) remove it
# If it looks like JSON remove any newlines
# Check the return code
# (C) 2018 Ansible Project
# will be captured by imported HAS_BOTO3
# When someone looks up the TEMPLATABLE_OPTIONS using get() any templates
# will be templated using the loader passed to parse.
# pylint: disable=too-many-arguments
# Try pulling a list of regions from the service
# Not all clients support describe
# boto3 has hard coded lists of available regions for resources, however this does bit-rot
# As such we try to query the service, and fall back to ec2 for a list of regions
# fallback to local list hardcoded in boto3 if still no regions
# I give up, now you MUST give me regions
# false when refresh_cache or --flush-cache is used
# get the user-specified directive
# if cache expires or cache file doesn't exist
# We weren't explicitly told to flush the cache, and there's already a cache entry,
# this means that the result we're being passed came from the cache.  As such we don't
# want to "update" the cache as that could reset a TTL on the cache entry.
# (c) 2022 Red Hat Inc.
# Should be overridden with the plugin-type specific exception
# We don't know what the correct exception is to raise, so the actual "raise" is handled by
# _do_fail()
# (c) 2023 Red Hat Inc.
# Do not attempt to reuse the existing session on retries
# This will cause the SSM session to be completely restarted,
# as well as reinitializing the boto3 clients
# Fetch the location of the bucket so we can open a client against the 'right' endpoint
# This /should/ always work
# Create another client for the region the bucket lives in, so we can nab the endpoint URL
# @{'key' = 'value'; 'key2' = 'value2'}
# Due to https://github.com/curl/curl/issues/183 earlier
# versions of curl did not create the output file, when the
# response was empty. Although this issue was fixed in 2015,
# some actively maintained operating systems still use older
# versions of it (e.g. CentOS 7)
# No Windows setup for now
# Ensure SSM Session has started
# Disable echo command
# pylint: disable=unreachable
# Disable prompt command
# Send command
# Allow easier grouping by region
# Use constructed if applicable
# Composed variables
# Complex groups based on jinja2 conditionals, hosts that meet the conditional are added to group
# Create groups based on variable values and add the corresponding hosts to it
# get user specifications
# Update the cache once we're done
# The mappings give an array of keys to get from the filter name to the value
# returned by boto3's EC2 describe_instances method.
# 'network-interface.requester-id': (),
# If filter not in allow_filters -> use it as a literal string
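The filter-mapping comments above build the boto3 `Filters=[{'Name': ..., 'Values': [...]}]` shape from a flat dict; a minimal sketch of that conversion (the helper name is hypothetical — the collections ship their own converters):

```python
def to_boto3_filters(filters):
    """Convert {'tag:Name': 'web', 'instance-state-name': ['running']}
    into the boto3 Filters list-of-dicts shape, listifying scalars."""
    result = []
    for name, values in filters.items():
        if not isinstance(values, list):
            values = [values]
        result.append({"Name": name, "Values": [str(v) for v in values]})
    return result
```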
# By default find non-terminated/terminating instances
# This method returns only one hostname
# SSM inventory filters Values list can contain a maximum of 40 items so we need to retrieve 40 at a time
# https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_InventoryFilter.html
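The 40-item limit on SSM `InventoryFilter` Values noted above is handled by fetching in fixed-size slices; a minimal chunking sketch (generic, not the module's exact code):

```python
def chunks(values, size=40):
    """Yield successive fixed-size slices; SSM InventoryFilter
    Values accepts at most 40 items per request."""
    for i in range(0, len(values), size):
        yield values[i:i + size]
```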
# note that while this mirrors what the script used to do, it has many issues with unicode and usability in python
# Handled by AnsibleAWSModule
# Sanitize filters
# Turn the boto3 result into ansible_friendly_snaked_names
# Copyright (c) 2014 Ansible Project
# Copyright (c) 2021 Alina Buzachis (@alinabuzachis)
# Copy snapshot
# Create snapshot
# Snapshot exists and we're not creating a copy - modify existing snapshot
# TODO - add other modifications aside from purely tags
# protected by AnsibleAWSModule
# Lambda returns a dict rather than the usual boto3 list of dicts
# If there's another change that needs to happen, we always re-upload the code
# get account ID and assemble ARN
# create list of layer version arn
# Get function configuration if present, False otherwise
# Update existing Lambda function
# Get current state
# Update function configuration
# Update configuration if needed
# If VPC configuration is desired
# Compare VPC config with current config
# No VPC configuration is desired, assure VPC config is empty when present in current config
# Check layers
# compare two lists to see if the target layers are equal to the current
# Upload new configuration if configuration has changed
# Tag Function
# Update code configuration
# Describe function code and configuration
# We're done
# "ZipFile" attribute contains non UTF-8 data. Ansible considers it an error
# starting with version 2.18. Removing it from the output avoids the error.
# Function doesn't exist, create new Lambda function
# If VPC configuration is given
# Layers
# Function would have been created if not check mode
# Finally try to create function
# Delete existing Lambda function
# Function already absent, do nothing
# How many retries to perform when an API call is failing
# how many seconds to wait between propagation status polls
# If the record name and type are not equal, move to the next record
# only save this zone id if the private status of the zone matches
# the private_zone_in boolean specified in the params
# NOTE: These details aren't available in other boto3 methods, hence the necessary
# extra API call
# If alias is True then you must specify alias_hosted_zone as well
# state=present, absent, create, delete THEN value is required
# failover, region and weight are mutually exclusive
# failover, region, weight and geo_location require identifier
# connect to the route53 endpoint
# Find the named zone ID
# Verify that the requested zone is already defined in Route53
# Build geo_location suboptions specification
# On CAA records order doesn't matter
# Retrieve name servers associated to the zone.
# record does not exist
# describe_images is *very* slow if you pass the `Owners`
# param (unless it's self), for some reason.
# Converting the owners to filters and removing from the
# owners param greatly speeds things up.
# Implementation based on aioue's suggestion in #24886
# self not a valid owner-alias filter (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html)
# describing launch permissions of images owned by others is not permitted, but shouldn't cause failures
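The speed-up above — moving owner IDs and aliases out of the `Owners` param and into `Filters`, keeping only `self` as an owner — can be sketched like this (the splitter function is illustrative; note `self` is not a valid `owner-alias` filter value):

```python
def owners_to_filters(owners):
    """Split describe_images owners into a residual Owners list
    (only 'self') plus owner-id / owner-alias Filters entries."""
    ids = [o for o in owners if o.isdigit()]
    aliases = [o for o in owners if not o.isdigit() and o != "self"]
    keep_owners = [o for o in owners if o == "self"]
    filters = []
    if ids:
        filters.append({"Name": "owner-id", "Values": ids})
    if aliases:
        filters.append({"Name": "owner-alias", "Values": aliases})
    return keep_owners, filters
```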
# it may be possible that creation_date does not always exist
# caught by AnsibleAWSModule
# Get existing option groups and compare to our new options spec
# Security groups need to be handled separately due to different keys on request and what is
# returned by the API
# Check if there is an existing options group
# Check tagging
# Check if existing options require updating
# Check if there are options to be added or removed
# If changed, get updated version of option group
# No options were supplied. If options exist, remove them
# Here we would call our remove options function
# Add optional values
# check if metric filter exists
# KeyMaterial is returned by create_key_pair, but not by describe_key_pairs
# KeyType is only set by describe_key_pairs
# Write the private key to disk and remove it from the return value
# find an unused name
# check if key already exists
# Attachments lurk in a 'deleted' state, for a while, ignore them so we
# can reuse the names
# Workaround for alarms created before TreatMissingData was introduced
# Prevent alarm without dimensions to always return changed
# Exclude certain props from change detection
# compare current resource parameters with the value from module parameters
# When calling modify_db_cluster_parameter_group() function
# A maximum of 20 parameters can be modified in a single request.
# This is why we are creating chunk containing at max 20 items
# Create RDS cluster parameter group
# If this parameter is an empty list it can only be used with modify_db_instance (as the parameter UseDefaultProcessorFeatures)
# For modify_db_instance method, update parameters to include all params that need to be modified by comparing them to current instance attributes
# Determine which parameters need to be modified
# Validate multi_tenant option
# Once set to True, it cannot be modified to False
# noqa: E712
# Validate changes to storage type options
# Bundle Iops and AllocatedStorage while updating io1 RDS Instance
# when you just change from gp2 to gp3, you may not add the iops parameter
# must be always specified when changing iops
# Check for any pending cloudwatch logs exports configuration changes and add them to CloudwatchLogsExportConfiguration option
# If there are no pending cloudwatch logs exports configuration changes, set CloudwatchLogsExportConfiguration option to match current enabled cloudwatch
# logs exports attribute
# Check for pending changes on other attributes, if not then set options to current attribute values
# Convert current instance's attributes that are lists of dicts to lists of string values for comparison and set the options to updated lists
# PerformanceInsightsEnabled is not returned on older RDS instances it seems
# Neither of these are returned via describe_db_instances, so if either is specified during a check_mode run, changed=True
# TODO: allow other purge_option module parameters rather than just checking for things to add
# Compare lists
# There are associated security groups to be purged
# Desired option set is entirely contained within current option set and purge is False, nothing to change
# Current option is a list and desired option is a string
# Desired option is in current options, nothing to change
# Current option and desired option are the same - continue loop
# Processor features are the same, continue loop
# Current option and desired option are different - add to changing_params list
# Update to use default processor features
# Update cloudwatch logs enabled/disabled
# Set enable list to any items from desired not in current
# If purge is true, set disable list to difference between current and desired
# Update cloudwatch logs configuration option to reflect changes
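The enable/disable list construction above (enable = desired not in current; disable = current minus desired only when purging) is simple set arithmetic; a sketch using hypothetical names for the result keys' container:

```python
def cloudwatch_log_changes(desired, current, purge=False):
    """Compute the log-export types to enable and (when purging)
    to disable, from desired vs currently enabled exports."""
    enable = sorted(set(desired) - set(current))
    disable = sorted(set(current) - set(desired)) if purge else []
    return {"EnableLogTypes": enable, "DisableLogTypes": disable}
```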
# Get newly created DB instance
# Check tagging/promoting/rebooting/starting/stopping instance
# 'StatusInfos' only exists when the instance is a read replica
# See https://awscli.amazonaws.com/v2/documentation/api/latest/reference/rds/describe-db-instances.html
# Ensure engine type supports associating IAM roles
# Don't update on check_mode
# Sanitize instance identifiers
# Sanitize processor features
# Ensure dates are in lowercase
# Throw warning regarding case when allow_major_version_upgrade is specified in check_mode
# describe_rds_instance never returns this value, so on check_mode, it will always return changed=True
# In non-check mode runs, changed will return the correct value, so no need to warn there.
# see: amazon.aws.module_util.rds.handle_errors.
# Exit on create/delete if check_mode
# Exit on check_mode when parameters to modify
# Check IAM roles
# Copyright (c) 2017, 2018 Michael De La Rue
# Copyright (c) 2017, 2018 Will Thames
# Turn the boto3 result into ansible friendly_snaked_names
# Turn the boto3 result into ansible friendly tag dictionary
# Split out paginator to allow for the backoff decorator to function
# Set PaginationConfig with max_items
# Check that both params are set if type is applied
# Convert to seconds
# Create a new snapshot if we didn't find an existing one to use
# successful delete
# remove empty value params
# if createVolumePermission is already "Public", adding "user_ids" is not needed
# Modify boto3 tags list to be ansible friendly dict and then camel_case
# Added id to interface info to be compatible with return values of ec2_eni module:
# VPC-supported IANA protocol numbers
# http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
# Turn the boto3 result into ansible friendly snake cases
# convert boto3 tags list into ansible dict
# Convert NACL entries
# Read subnets from NACL Associations
# Read Network ACL id
# entry list format
# [ rule_num, protocol name or number, allow or deny, ipv4/6 cidr, icmp type, icmp code, port from, port to]
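The flattened entry-list format above can be produced from a boto3 NACL entry dict roughly as follows (a sketch; missing ICMP/port fields come back as None rather than the module's exact placeholders):

```python
def nacl_entry_to_list(entry):
    """Flatten a boto3 network ACL entry into
    [rule_num, protocol, allow/deny, cidr, icmp_type, icmp_code,
     port_from, port_to]."""
    icmp = entry.get("IcmpTypeCode", {})
    ports = entry.get("PortRange", {})
    return [
        entry["RuleNumber"],
        entry["Protocol"],
        entry["RuleAction"],
        entry.get("CidrBlock") or entry.get("Ipv6CidrBlock"),
        icmp.get("Type"),
        icmp.get("Code"),
        ports.get("From"),
        ports.get("To"),
    ]
```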
# We have one or more subnets at this point.
# Check if there is any tags update
# Sort the subnet groups before we compare them
# See if anything changed.
# Modify existing group.
# disassociating address from instance
# Release or Release on disassociation
# Tags for *searching* for an EIP.
# Allocate address
# Associate address to instance
# Find instance
# Allocate an IP for instance since no public_ip was provided
# check if the address is already associated to the device
# Associate address object (provided or allocated) with instance
# Ensure tags
# Find existing address
# Read zip file if any
# collect parameters
# Set subnet_ids to empty list if it is None
# init empty list for return vars
# Get the basic VPC info
# convert tag list to ansible dict
# Copyright (c) 2014-2017 Ansible Project
# create default values for query if not specified.
# if function name exists, query should default to 'all'.
# if function name does not exist, query should default to 'config' to limit the runtime when listing all lambdas.
# Function name is specified - retrieve info on that function
# Function name is not specified - retrieve all function names
# keep returning deprecated response (dict of dicts) until removed
# query = 'config' returns info such as FunctionName, FunctionArn, Description, etc
# these details should be returned regardless of the query
# add current lambda to list of lambdas
# return info
# get_policy returns a JSON string so must convert to dict before reassigning to its key
# validate function_name if present
# Deprecate previous return key of `function`, as it was a dict of dicts, as opposed to a list of dicts
# great, the profile we asked for is what's there
# update association
# check for InvalidAssociationID.NotFound
# create association
# private_ip_addresses: ensure only one private ip is set as primary
# Ensure none of 'private_ip_address', 'private_ip_addresses', 'ipv6_addresses' were provided
# when launching more than one instance
# They specified network_interfaces_ids (mutually exclusive with security_group(s) options)
# They specified network interfaces using `network` or `network_interfaces` options
# No network interface configuration specified and no launch template
# Build network interface using subnet_id and security group(s) defined in the module
# handle list of `network.interfaces` options
# This is a non-modifiable attribute.
# Either we only have one network interface or multiple network interfaces with 'assign_public_ip=false'
# Anyway, the value 'assign_public_ip' in the first item should be enough to determine whether
# the user wants to update the public IP or not
# Check that public ip assignment is the same and warn if not
# Check that the CpuOptions set are the same and warn if not
# Validate network parameters
# Build network specs
# Build volume specs
# IAM profile
# the user asked not to wait for anything
# In check mode, there is no change even if you wait.
# Map ansible state to boto3 waiter type
# user data is an immutable property
# ParamMapper('user_data', 'UserData', 'userData', value_wrapper),
# Attribute=mapping.attribute_name,
# Read interface subnet
# Read groups
# network.source_dest_check is nested, so needs to be treated separately
# to have a deterministic sorting order, we sort by AZ so we'll always pick the `a` subnet first
# there can only be one default-for-az subnet per AZ, so the AZ key is always unique in this list
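The deterministic default-subnet choice above (sort by AZ so the `a` subnet wins; the AZ is unique per default-for-AZ subnet) can be sketched as (the picker name is illustrative):

```python
def pick_default_subnet(subnets):
    """Sort default-for-AZ subnets by AvailabilityZone and take the
    first; since each AZ has at most one default subnet, the sort
    key is unique and the result is deterministic."""
    return sorted(subnets, key=lambda s: s["AvailabilityZone"])[0]
```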
# Avoid breaking things: 'reboot' is wrong but used to be returned
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-reboot.html
# The Ansible behaviour of issuing a stop/start has a minor impact on user billing
# This will need to be changelogged if we ever change to client.reboot_instance
# Map ansible state to ec2 state
# Before terminating instances we need to wait for them to leave
# 'pending' or 'stopping' (if they're in those states)
# TODO use a client-token to prevent double-sends of these start/stop/terminate commands
# https://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html
# Before stopping instances we need to wait for them to leave
# 'pending'
# Already moving to the relevant state
# Ensure that the instance is stopped before changing the instance type
# force wait for the instance to be stopped
# Modify instance type
# Ensure instance state
# Name is a tag rather than a direct parameter, we need to inject 'Name'
# into tags, but since tags isn't explicitly passed we'll treat it not being
# set as purge_tags == False
# modify instance attributes
# launch instances
# terminate instances
# sort the instances from least recent to most recent based on launch time
# get the instance ids of instances with the count tag on them
# include data for all matched instances in addition to the list of terminations
# allowing for recovery of metadata from the destructive operation
# Find instances
# Update instance attributes
# If check mode is enabled, suspend 'ensure function'.
# Wait for instances to exist in the EC2 API before
# attempting to modify them
# Wait for instances to exist (don't check state)
# If we came from enforce_count, create a second list to distinguish
# between existing and new instances when returning the entire cohort
# If the instance profile has just been created, it takes some time to be visible by ec2
# So we wait 10 second and retry the run_instances
# all states except shutting-down and terminated
# Network filters
# add the instance state filter if any filter key was provided
# running/present are synonyms
# as are terminated/absent
# -*- coding: utf-8 -*-
# Not all aliases are actually associated with a key
# strip off leading 'alias/' and add it to key's aliases
# Handle pagination here as list_resource_tags does not have
# a paginator
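The hand-rolled pagination above follows KMS `list_resource_tags` responses via `Truncated`/`NextMarker`; a sketch under that assumption (the helper name is illustrative, and the test uses a fake client):

```python
def list_all_tags(client, key_id):
    """Page through list_resource_tags by hand (no paginator exists),
    following NextMarker while Truncated is set on the response."""
    tags, kwargs = [], {"KeyId": key_id}
    while True:
        resp = client.list_resource_tags(**kwargs)
        tags.extend(resp.get("Tags", []))
        if not resp.get("Truncated"):
            return tags
        kwargs["Marker"] = resp["NextMarker"]
```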
# grants and tags get snakified differently
# key is disabled after deletion cancellation
# set this so that ensure_enabled_disabled works correctly
# We will only add new aliases, not rename existing ones
# If we can't fetch the current policy assume we're making a change
# Could occur if we have PutKeyPolicy without GetKeyPolicy
# make results consistent with kms_facts before returning
# KMS doesn't use 'Key' and 'Value' as other APIs do.
# make results consistent with kms_facts
# Note - fetching a key's metadata is very inconsistent shortly after any sort of update to a key has occurred.
# Combinations of manual waiters, checking expecting key values to actual key value, and static sleeps
# Integration tests will wait for 10 seconds to combat this issue.
# See https://github.com/ansible-collections/community.aws/pull/1052.
# Fetch by key_id where possible
# Or try alias as a backup
# We can't create keys with a specific ID, if we can't access the key we'll have to fail
# Loop through the results and add the other VPC attributes we gathered
# add the two DNS attributes
# If we know the gateway_id, use it to avoid bugs with using filters
# See https://github.com/ansible-collections/amazon.aws/pull/766
# Ensure the gateway exists before trying to attach it or add tags
# Modify tags
# Update igw
# First, check if this dhcp option is associated to any other vpcs
# If we were given tags, try to match on them
# We need to listify this one
# `if existing_index` evaluates to False on index 0, so be very specific and verbose
# Return boto3-style details, consistent with the _info module
# We can't describe without an option id, we might get here when creating a new option set in check_mode
# Look up the option id first by matching the supplied options
# If we were given a vpc_id then we need to look at the configuration on that
# if we've been asked to inherit existing options, do that now
# Do the vpc's dhcp options already match what we're asked for? if so we are done
# If no vpc_id was given, or the options don't match then look for an existing set using tags
# Now let's cover the case where there are existing options that we were told about by id
# If a dhcp_options_id was supplied we don't look at options inside, just set tags (if given)
# Preserve the boto2 module's behaviour of checking if the option set exists first,
# and return the same error message if it does not
# If we still don't have an options ID, create it
# If we were given a vpc_id, then attach the options we now have to that before we finish
# Validate Requirements
# call your function here
# ALB exists so check subnets, security groups and tags match what has been passed
# Subnets
# Security Groups
# ALB attributes
# Tags - only need to play with tags if tags parameter has been set to something
# Exit on check_mode
# Delete necessary tags
# Add/update tags
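The delete-then-add tag flow noted above can be sketched as a pure diff helper. `tag_difference` is a hypothetical name for illustration (the collection itself uses a `compare_aws_tags`-style utility from its module utils):

```python
# Hypothetical helper sketching the "delete tags, then add/update tags" step.
def tag_difference(current, desired, purge=True):
    """Return (tags_to_set, tag_keys_to_unset) for an AWS-style tag update."""
    to_set = {k: v for k, v in desired.items() if current.get(k) != v}
    to_unset = [k for k in current if k not in desired] if purge else []
    return to_set, to_unset
```

With `purge=False`, existing tags that are absent from the desired set are left in place, matching the usual `purge_tags` module option.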
# Create load balancer
# Add ALB attributes
# Listeners
# Delete listeners
# Add listeners
# Modify listeners
# If listeners changed, mark ALB as changed
# Rules of each listener
# Create/Update/Delete Listener Rules
# Update ALB ip address type only if option has been provided
# Exit on check_mode - no changes
# Get the ALB again
# Get the ALB listeners again
# Update the ALB attributes
# Convert to snake_case and merge in everything we want to return to the user
# For each listener, get listener rules
# Change tags to ansible friendly dict
# ip address type
# Quick check of listeners parameters
# Update security group if default is specified
# only replace created routes
# Delete routes
# Replace routes
# Create routes
# Delete all gateway associations that have state = associated
# Subnet associations are handled in their own method
# disassociate subnets and gateway before deleting route table
# backwards compatibility
# If no route table returned then create new route table
# try to wait for route table to be present before moving on
# pause to allow route table routes/subnets/associations to be updated before exiting with final state
# Although a list is returned by list_account_aliases, AWS supports a maximum of one alias per account.
# If an alias is defined it will be returned; otherwise a blank string is filled in as account_alias.
# see https://docs.aws.amazon.com/cli/latest/reference/iam/list-account-aliases.html#output
# The iam:ListAccountAliases permission is required for this operation to succeed.
# Lacking this permission is handled gracefully by not returning the account_alias.
# Copyright (c) 2017, 2018, 2019 Will Thames
# Try again with the VPC/Peer relationship reversed
# Reload peering connection info to return latest state/params
# Turn the resource tags from boto3 into an ansible friendly tag dictionary
# Copyright (c) 2024 Mandar Vijay Kulkarni (@mandar242)
# Turn the boto3 result into ansible_friendly_snaked_names
# Release IP address
# Get new result
# Get new results
# may be based on a variable (ie. {foo*3/4}) so
# just pass it on through to the AWS SDK
# convert True/False to 1/0
# engine-default parameters do not have a ParameterValue, so we'll always override those.
# modify_db_parameters takes at most 20 parameters
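Given the 20-parameter API cap noted above, parameters have to be sent in fixed-size batches; a minimal, hypothetical batching helper:

```python
# Hypothetical batching helper; modify_db_parameters accepts at most 20
# parameters per call, so the full list is sent in chunks.
def chunks(items, size=20):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

Each yielded chunk would then be passed to one `modify_db_parameter_group` call.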
# Historically these were all there, but set to null when empty
# Name change between boto v2 and boto v3, return both
# Copyright (c) 2022 Ansible Project
# Copyright (c) 2022 Alina Buzachis (@alinabuzachis)
# describe_db_clusters does not return StorageType for clusters with storage config as "aurora standard"
# describe_db_clusters returns the Domain membership information as follows:
# 'DomainMemberships': [
# start_db_cluster supports only DBClusterIdentifier
# stop_db_cluster supports only DBClusterIdentifier
# for replica cluster - wait for cluster to change status from 'available' to 'promoting'
# only replica/secondary clusters have "GlobalWriteForwardingStatus" field
# if wait=true, wait for db cluster remove from global db operation to complete
# Fall to default value
# Add filter by name if provided
# Include only active states if "include_deleted" is False
# Include any additional filters provided by the user
# We're changing the role, so we always need to remove the existing one first
# As of botocore 1.34.3, the APIs don't support updating the Name or Path
# AWS uses VpnGatewayLimitExceeded for both 'Too many VGWs' and 'Too many concurrent changes'
# we need to look at the message to tell the difference.
# to handle check mode case where vgw passed to this function is {}
# return the deleted VpnGatewayId as this is not included in the above response
# Check if provided vgw already exists
# if existing vgw, handle changes as required
# [0] as find_vgw returns list[dict] i.e. [{vgw_info}], as it is possible to have multiple vgws with the same name
# if not existing vgw, create new and return
# if vpc_id provided, attach vgw to vpc
# Update tags
# check_mode is handled by ensure_ec2_tags()
# Manage VPC attachments
# if vgw is attached to a vpc
# if the provided vpc is different from the current vpc, then detach the current vpc and attach the new vpc
# if vgw is not currently attached to a vpc, attach it to provided vpc
# if vpc_id not provided, then detach vgw from vpc
# If an existing vgw name and type matches our args, then a match is considered to have been
# found and we will take steps to delete it.
# check if a gateway matching our module args already exists
# detach the vpc from the vgw
# attempt to detach any attached vpcs
# no vpcs are attached so attempt to delete the vgw
# Check that name and type arguments have been supplied if no vgw-id is given
# now that the vpc has been detached, delete the vgw
# get Backup Vault info
# Now check to see if our vault exists and get status and tags
# vault doesn't exist return None
# Get existing backup vault facts
# If the vault exists set the result exists variable
# If Trail exists go ahead and delete
# Backup Vault doesn't exist just go create it
# If we aren't in check_mode then actually create it
# Get facts for newly created Backup Vault
# If we are in check mode create a fake return structure for the newly created vault
# Check if we need to update tags on resource
# Populate backup vault facts in output
# TODO: add support for datetime-based parameters
# import datetime
# import time
# request_spot_instances() always creates a new spot request
# params['ValidFrom'] = module.params.get('valid_from')
# params['ValidUntil'] = module.params.get('valid_until')
# valid_from=dict(type='datetime', default=datetime.datetime.now()),
# valid_until=dict(type='datetime', default=(datetime.datetime.now() + datetime.timedelta(minutes=60))
# Add create-specific params
# Add update-specific params
# Check whether rules match
# Check whether advanced backup settings match
# Set initial result values
# Get supplied params from module
# Get existing backup plan details and ID if present
# Create or update plan
# Plan does not exist, create it
# Use supplied params as result data in check mode
# Plan exists, update as needed
# Delete plan
# Plan does not exist, can't delete it
# Plan exists, delete it
# Copyright (c) 2024 Aubin Bikouo (@abikouo)
# handled by AnsibleAWSModule
# If no name or id supplied, just try volume creation based on module parameters
# If no IOPS value is specified and there was a volume_type update to gp3,
# the existing value is retained, unless a volume type is modified that supports different values,
# otherwise, the default iops value is applied.
# Use the default value if any iops has been specified when volume_type=gp3
# If device_name isn't set, make a choice based on best practices here:
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
# In future this needs to be more dynamic but combining block device mapping best practices
# (bounds for devices, as above) with instance.block_device_mapping data would be tricky. For me ;)
# volumes without MultiAttach Enabled can be attached to 1 instance only
# volume_in_use can return *shortly* before it appears on the instance
# filter 'state', return attachment matching wanted state
# The ID of the instance must be specified if you are detaching a Multi-Attach enabled volume.
# Ensure we have the zone or can get the zone
# Set volume detach flag
# Here we need to get the zone info for the instance. This covers the situation where
# the instance is specified but the zone isn't.
# Useful for playbooks chaining instance launch with volume create + attach and where the
# zone doesn't matter to the user.
# Use platform attribute to guess whether the instance is Windows or Linux
# Check if there is a volume already mounted there.
# No volume found so this is another volume
# Add device, volume_id and volume_type parameters separately to maintain backward compatibility
# Delaying the checks until after the instance check allows us to get volume ids for existing volumes
# without needing to pass an unused volume_size
# Try getting volume
# Sometimes AWS takes its time to create a subnet, so using the new subnet's id
# to do things like create tags results in errors.
# Wait for cidr block to be disassociated
# The list contains only one subnet
# Ensure IPv6 CIDR Block
# Modify subnet attribute 'MapPublicIpOnLaunch'
# Modify subnet attribute 'AssignIpv6AddressOnCreation'
# Ensure subnet tags
# Wait for tags to be updated
# GET calls are not monotonic for map_public_ip_on_launch and assign_ipv6_address_on_creation
# so we only wait for those if necessary just before returning the subnet
# Initialize start so max time does not exceed the specified wait_timeout for multiple operations
# Subnet will be None when check_mode is true
# Uses the list error handler because we "update" as a quick test for existence
# when our next step would be update or create.
# Apply new password / update password for the user
# In theory we could skip this check outside check_mode
# Manage managed policies
# Wait for user to be fully available before continuing
# Get the user again
# `LoginProfile` is only returned on `create_login_profile` method
# (camel_dict_to_snake_dict doesn't handle lists, so do this as a merge of two dictionaries)
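The CamelCase-to-snake_case conversion mentioned above can be sketched for flat dictionaries (hypothetical names; the real code uses `camel_dict_to_snake_dict` from `ansible.module_utils`):

```python
import re

# Hypothetical sketch of snake-casing flat boto3-style keys.
def camel_to_snake(name):
    """Convert a CamelCase key like 'LoginProfile' to 'login_profile'."""
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

def snaked(d):
    """Snake-case all top-level keys of a flat dict."""
    return {camel_to_snake(k): v for k, v in d.items()}
```

Note the caveat: runs of capitals like `ARN` would become `a_r_n` with this naive regex; the real utility special-cases such keys.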
# Remove any attached policies
# User is not present
# Check mode means we would remove this user
# Prior to removing the user we need to remove all of the related resources, or deletion will fail.
# Because policies (direct and indirect) can contain Deny rules, order is important here in case
# we fail during deletion: lock out the user first *then* start removing policies...
# - Prevent the user from creating new sessions
# - Remove policies and group membership
# Simple helper to perform the module.fail_... call once we have module available to us
# equal protocols can interchange `(-1, -1)` and `(None, None)`
# take a Rule, output the serialized grant
# group_id/group_name are mutually exclusive - give group_id more precedence as it is more specific
# outputs a rule tuple
# there may be several IP ranges here, which is ok
# amazon-elb and amazon-prefix rules don't need
# group-id specified, so remove it when querying
# from permission
# EC2-Classic cross-account
# EC2-VPC cross-account VPC peering
# this is a foreign Security Group. Since you can't fetch it you must create an instance of it
# Matches on groups like amazon-elb/sg-5a9c116a/amazon-elb-sg, amazon-elb/amazon-elb-sg,
# and peer-VPC groups like 0987654321/sg-1234567890/example
# amazon-elb and amazon-prefix rules don't need group_id specified,
# For cross-VPC references we'll use group_id as it is more specific
# We can't create a group in check mode...
# The group exists, but didn't show up in any of our previous describe-security-groups calls
# Try searching on a filter for the name, and allow a retry window for AWS to update
# the model on their end.
# Simplest case, the rule references itself
# Already cached groups
# both are VPC groups, this is ok
# both are EC2 classic, this is ok
# if we got here, either the target group does not exist, or there
# is a mix of EC2 classic + VPC groups. Mixing of EC2 classic + VPC
# is bad, so we have to create a new SG because no compatible group
# exists
# Without a group description we can't create a new group, try looking up the group, or fail
# with a descriptive error message
# retry describing the group
# Get just the non-source/port info from the rule
# expands out all possible combinations of ports and sources for the rule
# This results in a list of pairs of dictionaries...
# Combines each pair of port/source dictionaries with rest of the info from the rule
# While icmp_type/icmp_code could have been aliases, this wouldn't be obvious in the
# documentation
# takes a list of ports and returns a list of (port_from, port_to)
# Someone passed a range
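The port expansion described above, turning a single port or a `'from-to'` range string into `(port_from, port_to)` tuples, can be sketched as:

```python
# Hypothetical sketch of expanding a mixed list of ports and ranges.
def expand_ports(ports):
    """Turn entries like 80 or '8000-8100' into (port_from, port_to) tuples."""
    result = []
    for port in ports:
        if isinstance(port, str) and '-' in port:
            # Someone passed a range
            first, last = port.split('-', 1)
            result.append((int(first), int(last)))
        else:
            result.append((int(port), int(port)))
    return result
```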
# If a host bit is incorrectly set, ip_network will throw an error at us;
# we'll continue by converting the address to a CIDR AWS will accept, but issue a warning.
# Try evaluating as an IPv4 network, it'll throw a ValueError if it can't parse cidr_ip as an
# IPv4 network
# Try again, evaluating as an IPv6 network.
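The host-bit handling above maps naturally onto the `strict` flag of `ipaddress.ip_network`; a sketch with a hypothetical helper name:

```python
import ipaddress
import warnings

# Hypothetical sketch: normalise a CIDR with host bits set into one AWS accepts.
def to_safe_cidr(cidr):
    """Return the CIDR unchanged if valid, else mask off host bits with a warning."""
    try:
        # ip_network raises ValueError when host bits are set
        return str(ipaddress.ip_network(cidr))
    except ValueError:
        net = ipaddress.ip_network(cidr, strict=False)
        warnings.warn(f"{cidr} has host bits set, using {net} instead")
        return str(net)
```

`ip_network` handles both IPv4 and IPv6 strings, matching the two-step evaluation the comments describe.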
# When a group is created, an egress_rule ALLOW ALL
# to 0.0.0.0/0 is added automatically but it's not
# reflected in the object returned by the AWS API
# call. We re-read the group for getting an updated object
# amazon sometimes takes a couple of seconds to update the security group, so wait until it exists
# For cases where set comparison is equivalent, but invalid port/proto exist
# Add name to filters rather than params['GroupNames']
# because params['GroupNames'] only checks the default vpc if no vpc is provided
# Don't filter by description to maintain backwards compatibility
# maintain backwards compatibility by using the last matching group
# Prefix list (ansible option is 'ip_prefix')
# XXX bug - doesn't cope with a list of ids/names
# --diff during --check
# Ensure content of these keys is sorted
# Returns the first value plus a prefix so the types get clustered together when sorted
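The prefix-based sort key described above can be sketched as follows (hypothetical field names drawn from the rule shapes used elsewhere in these comments):

```python
# Hypothetical sketch: prefix the first present target value with its type so
# mixed rule targets cluster together when sorted.
def rule_sort_key(rule):
    for prefix, key in (('ipv4:', 'cidr_ip'), ('ipv6:', 'cidr_ipv6'),
                        ('pl:', 'prefix_list_id'), ('sg:', 'group_id')):
        if rule.get(key):
            return prefix + rule[key]
    return ''
```

Sorting with this key groups all IPv4 rules together, then IPv6, prefix lists, and group references, giving a stable, deterministic rule ordering.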
# /end Deprecated
# Short circuit things if we're in check_mode
# Description is immutable
# rule does not use numeric protocol spec
# List comprehensions for rules to add, rules to modify, and rule ids to determine purging
# when no egress rules are specified and we're in a VPC,
# we add in a default allow all out rule, which was the
# default behavior before egress rules were added
# named_tuple_ingress_list and named_tuple_egress_list get updated by
# method update_rule_descriptions, deep copy these two lists to new
# variables for the record of the 'desired' ingress and egress sg permissions
# Revoke old rules
# Authorize new rules
# A new group with no rules provided is already being awaited.
# When it is created we wait for the default egress rule to be added by AWS
# keep pulling until current security group rules match the desired ingress and egress rules
# We have historically allowed for lists of lists in cidr_ip and cidr_ipv6
# https://github.com/ansible-collections/amazon.aws/pull/1213
# PORTS / ICMP_TYPE + ICMP_CODE / TO_PORT + FROM_PORT
# A target must be specified
# If you specify an ICMP code, you must specify the ICMP type
# Ensure requested group is absent
# Ensure requested group is present
# Order final rules consistently
# When passed a simple string, Ansible doesn't quote it to ensure it's a *quoted* string
# Rule exists so update rule, targets and state
# Rule does not exist, so create new rule and targets
# Rule doesn't exist so don't need to delete
# Don't need to include response metadata noise in response
# Identify and remove extraneous targets on AWS
# Identify targets that need to be added or updated on AWS
# The rule matches AWS only if all rule data fields are equal
# to their corresponding local value defined in the task
# keys with none values must be scrubbed off of self.targets
# The remote_targets contain quotes, so add
# quotes to temp
# list_targets_by_rule return input_template as string
# if existing value is string "<instance> is in state <state>", it returns '"<instance> is in state <state>"'
# if existing value is <JSON>, it returns '<JSON>'
# therefore add quotes to provided input_template value only if it is not a JSON
# remote_targets is snakified output of client.list_targets_by_rule()
# therefore snakified version of t should be compared to avoid wrong result of below conditional
# 'TimeoutInMinutes', 'EnableTerminationProtection' and
# 'OnFailure' only apply on creation, not update.
# Use stack ID to follow stack state in case of on_create_failure = DELETE
# changesets don't accept ClientRequestToken parameters
# Determine if this changeset already exists
# Make sure we don't enter an infinite loop
# a failed change set does not trigger any stack events so we just want to
# skip any further processing of result and just return it directly
# Lets not hog the cpu/spam the AWS API
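The bounded polling loop described above, which must neither loop forever nor hog the CPU, follows a common deadline pattern; a generic sketch with hypothetical names:

```python
import time

# Hypothetical sketch of a deadline-bounded poll loop with a sleep between
# iterations so we neither spin the CPU nor spam the AWS API.
def wait_for(predicate, timeout=300, sleep=5):
    """Poll `predicate` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(sleep)
    raise TimeoutError(f"condition not met within {timeout}s")
```

Using `time.monotonic` rather than `time.time` keeps the deadline immune to wall-clock adjustments.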
# if the state is present and the stack already exists, we try to update it.
# AWS will tell us if the stack template and parameters are the same and
# don't need to be updated.
# If the stack previously existed, and now can't be found then it's
# been deleted successfully.
# stacks may delete fast, look in a few ways.
# it covers ROLLBACK_COMPLETE and UPDATE_ROLLBACK_COMPLETE
# Possible states: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-describing-stacks.html#w1ab2c15c17c21c13
# note the ordering of ROLLBACK_COMPLETE, DELETE_COMPLETE, and COMPLETE, because otherwise COMPLETE will match all cases.
# note the ordering of ROLLBACK_FAILED and FAILED, because otherwise FAILED will match both cases.
# this can loop forever :/
# total time 5 min
# if the changeset doesn't finish in 5 mins, this `else` will trigger and fail
# collect the parameters that are passed to boto3. Keeps us from having so many scalars floating around.
# can't check the policy when verifying.
# set parameter based on a dict to allow additional CFN Parameter Attributes
# allow default k/v configuration to set a template parameter
# Wrap the cloudformation client methods that this module uses with
# automatic backoff / retry for throttling error codes
# format the stack output
# always define stack_outputs, but it may be empty
# can be blank, apparently
# absent state is different because of the way delete_stack works.
# the problem is that it doesn't give an error if the stack isn't found
# so must describe the stack first
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# describe_metric_alarms_info_response = describe_metric_alarms_info_response[alarm_type_to_return]
# Taken care of by AnsibleAWSModule
# Use the subnets attached to the VPC to find which VPC we're in and
# limit the search
# We have a number of complex parameters which can't be validated by
# AnsibleModule or are only required if the ELB doesn't exist.
# Validate that protocol is one of the permitted values
# When creating a new ELB
# Pass check_mode down through to the module
# Be forgiving if we can't see the attributes
# Note: This will break idempotency if attributes have been set that we can't describe
# Shouldn't happen, but Amazon could change the rules on us...
# True if succeeds, exception raised if not
# create_load_balancer only returns the DNS name
# Remove proxy_protocol, it has to be handled as a policy
# Some attributes are configured on creation, others need to be updated
# after creation.  Skip updates for those set on creation
# XXX We should probably set 'None' parameters based on the
# current state prior to deletion
# the only way to change the scheme is by recreating the resource
# We need to wait for it to be gone-gone
# Unfortunately even though the ELB itself is removed quickly
# the interfaces take longer so reliant security groups cannot
# be deleted until the interface has registered as removed.
# Can take longer than creation
# instance state counts: InService or OutOfService
# return stickiness info?
# We can't use sets here: dicts aren't hashable, so convert to the boto3
# format and use a generator to filter
# If we're not purging, then we need to remove Listeners
# where the full definition doesn't match, but the port does
# Update is a delete then add, so do the deletion first
# Subnets parameter not set, nothing to change
# You can't add multiple subnets from the same AZ.  Remove first, then
# add.
# zones parameter not set, nothing to change
# Add before we remove to reduce the chance of an outage if someone
# replaces all zones at once
# Security Group Names should already be converted to IDs by this point.
# Set health check values on ELB as needed
# For disabling we only need to compare and pass 'Enabled'
# Used when purging stickiness policies or updating a policy (you can't
# update a policy while it's connected to a Listener)
# Make sure that the list of policies and listeners is up to date, we're
# going to make changes to all listeners
# We shouldn't get here...
# To update a policy we need to delete then re-add, and we can only
# delete if the policy isn't attached to a listener
# Already deleted
# This needs to be in place for comparisons, but not passed to the
# Currently only supports setting ProxyProtocol policies
# Only look at the listeners for which proxy_protocol is defined
# If anyone's set proxy_protocol to true, make sure we have our policy
# in place.
# original boto style
# boto3 style
# Copyright (c) 2023 Gomathi Selvi Srinivasan (@GomathiselviS)
# only save zone names that match the public/private setting
# could be in different regions or have different VPCids
# If get_dnssec command output returns "NOT_SIGNING",
# the Domain Name System Security Extensions (DNSSEC) signing is not enabled for the
# Amazon Route 53 hosted zone.
# Enable DNSSEC
# DNSSEC signing is in the process of being removed for the hosted zone.
# Disable DNSSEC
# if dnssec_status == "DELETING":
# The first one for backwards compatibility
# Enable/Disable DNSSEC
# Update result with information about DNSSEC
# Handle Tags
# Sort the lists and compare them to make sure they contain the same items
# Once the deprecation is over we can merge this into a single call to validate_iam_identifiers
# There is a service limit (typically 5) of policy versions.
# Rather than assume that it is 5, we'll try to create the policy
# and if that doesn't work, delete the oldest non default policy version
# and try again.
# This needs to return policy_version, changed
# If the current policy matches the existing one
# No existing version so create one
# rvalue is incomplete
# As of botocore 1.34.3, the APIs don't support updating the Description
# If anything has changed we need to refresh the policy
# Detach policy
# Delete Versions
# Delete policy
# -- Delete public access block if necessary
# -- Create / Update public access block
# Short circuit comparisons if no change was requested
# No change needed
# Make a change
# As with request payment, it quite often happens that the put request is not taken into
# account, so we retry one more time
# Non KMS is simple
# When working with KMS, we also need to check the Key
# Versioning
# Requester pays
# Public access block configuration
# Policy
# Encryption
# -- Bucket ownership
# -- Bucket ACL
# -- Transfer Acceleration
# TODO - MERGE handle_bucket_object_lock and handle_bucket_object_lock_retention
# -- Object Lock
# -- Object Lock Default Retention
# -- Inventory
# Module exit
# We should never get here since we check the bucket presence before calling the create_or_update_bucket
# method. However, the AWS API sometimes fails to report bucket presence, so we catch this exception
# We shouldn't get here, the only time this should happen is if
# current_encryption != expected_encryption and retries == max_retries
# server_side_encryption_configuration ={'Rules': [{'BucketKeyEnabled': encryption}]}
# We have to merge the Versions and DeleteMarker lists here, as DeleteMarkers can still prevent a bucket deletion
# remove VersionId from cases where they are `None` so that
# unversioned objects are deleted using `DeleteObject`
# rather than `DeleteObjectVersion`, improving backwards
# compatibility with older IAM policies.
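Merging the Versions and DeleteMarkers lists and scrubbing `None` VersionIds, as the comments above describe, might look like this (hypothetical helper; the real module feeds the result to `delete_objects` in batches):

```python
# Hypothetical sketch: build a DeleteObjects payload from the Versions and
# DeleteMarkers lists, dropping VersionId when it is None so unversioned
# objects are deleted via plain DeleteObject semantics (friendlier to
# older IAM policies).
def to_delete_batch(versions, delete_markers):
    objects = []
    for item in list(versions) + list(delete_markers):
        entry = {'Key': item['Key']}
        if item.get('VersionId') is not None:
            entry['VersionId'] = item['VersionId']
        objects.append(entry)
    return {'Objects': objects}
```

The S3 `DeleteObjects` API caps each request at 1000 keys, so a real implementation would also chunk the merged list.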
# if there are contents then we need to delete them (including versions) before we can delete the bucket
# ** Warning **
# Because we support non-AWS implementations, only the force/purge options should have a
# default set for any top-level option.  We need to be able to identify
# unset options where we can ignore NotImplemented exceptions.
# Parameter validation
# Get instances from reservations
# include instances attributes
# name but not path or group
# filter by name when a path or group was specified
# If an exact matching using a list of CIDRs isn't found, check for a match with the first CIDR as is documented for C(cidr_block)
# Defaults to False (including None)
# Wait up to 30 seconds for vpc to exist and for state 'available'
# "ipv6_cidr": false and no Ipv6CidrBlockAssociationSet
# At least one 'Amazon' IPv6 CIDR block must be associated.
# All 'Amazon' IPv6 CIDR blocks must be disassociated.
# this_ip is an IPv4 CIDR that may or may not have host bits set
# Get the network bits.
# let AWS handle invalid CIDRs
# Fetch current state from vpc_object
# There's no block associated, and we want one to be associated
# Wait for DhcpOptionsId to be updated
# Set on-creation defaults
# validate function name
# set API parameters
# check if alias exists and get facts
# check if alias has changed -- only version and description can change
# create new function alias
# state = 'absent'
# delete the function
# Get all buckets
# Filter buckets if requested
# Return proper list (filtered or all)
# Iterate over all buckets and append Retrieved facts to bucket
# we just pass on error - error means that resources is undefined
# Replace 'null' with 'us-east-1'?
# Strip response metadata (not needed)
# Ensure we have an empty dict
# Define mutually exclusive options
# Including ec2 argument spec
# Get parameters
# Set up connection
# Get basic bucket list (name + creation date)
# Add information about name/name_filter to result
# Gather detailed information about buckets if requested
# MAIN
# EIGW gave back a bad attachment state or an invalid response so we error out
# Describe launch templates
# Describe launch templates versions
# format output
# This is nice to have, not essential
# if the user didn't specify a name
# compatibility with autoscaling_group module
# workaround for https://github.com/ansible/ansible/pull/25015
# Limit of 20 similar to https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_DescribeLoadBalancers.html
# get asg lifecycle hooks if any
# Check mode means we would create the group
# Update the path if necessary
# Manage group memberships
# Get the group again
# Check mode means we would remove this group
# Remove any attached policies otherwise deletion fails
# Remove any users in the group otherwise deletion fails
# Copyright (c) 2016, Pierre Jodouin <pjodouin@virtualcomputing.solutions>
# Now that we have the policy, check if required permission statement is present and flatten to
# simple dictionary if found.
# check if function policy exists
# check if the policy exists
# remove the policy statement
# Find the default version
# Ensure we are not deleting the default version
# By default delete all non default version before the launch template deletion
# Update default version
# Delete versions
# Delete the launch template when a list of versions was not specified
# Source version passed as int
# get source template version
# Create Launch template version
# Modify default version
# Describe launch template
# IAM instance profile
# Convert Launch template data
# Tag specifications
# Create Launch template
# Ensure default version
# Caching lookup for aliases
# Make sure we have the canonical ARN, we might have been passed an alias
# We can only get aliases for our own account, so we don't need the full ARN
# There's also a number of "Warmed" states that we could support with relatively minimal effort, but
# we can't test them (currently)
# Returns the minimum information we need for a new instance when adding a new node in check mode
# We don't need to change these instances, we may need to wait for them
# We'll need to wait for the in-progress changes to complete
# These instances are ready to terminate
# We have to wait for instances to transition to their stable states before changing them
# On the basis of "be conservative in what you do, be liberal in what you accept from others"
# We'll treat instances that someone else has terminated, as "detached" from the ASG, since
# they won't be attached to the ASG.
# These instances are ready to detach
# We have to wait for instances to transition to their stable state before changing them
# These instances need to be attached
# Ids that need to be removed
# We have to wait for instances to transition to Detached before we can re-attach them
# This includes potentially waiting for instances which were Pending when we started
# While, in theory, we could make the ordering of Add/Remove configurable, the logic becomes
# difficult to test.  As such we're going to hard code the order of operations.
# Add/Wait/Terminate is the order least likely to result in 0 available
# instances, so we do any termination after ensuring instances are InService.
# We just need to wait for these
# We need to wait for these before we can attach/re-activate them
# These instances need to be brought out of standby
# They've left the ASG of their own accord, we'll leave them be...
# We need to wait for these instances to enter "InService" before we can do anything with them
# These instances are ready to move to Standby
# These instances are moving into Standby
# We have to wait for instances to transition to InService
# This includes potentially waiting for instances which were "Entering" Standby when we started
# nb. With Health the API documentation's inconsistent:
# it appears to want Capitalized for set(), but spits out UPPERCASE for get()
# Not valid for standby/terminated/detached
# These instances are terminating, we can't do anything with them.
# We need to wait for these instances to enter "InService" or "Standby" before we can do anything with them
# Turn the boto3 result into ansible friendly tag dictionary
# Remove ResponseMetadata from object_acl_info, convert to snake_case
# Remove ResponseMetadata from object_attributes_info, convert to snake_case
# Remove ResponseMetadata from object_legal_hold_info, convert to snake_case
# Remove ResponseMetadata from object_legal_lock_configuration_info, convert to snake_case
# Remove ResponseMetadata from object_retention_info, convert to snake_case
# Remove ResponseMetadata from object_tagging_info, convert to snake_case
# Remove non-requested facts
# Below APIs do not return object_name, need to add it manually
# Remove ResponseMetadata from object_info, convert to snake_case
# check if specified bucket exists
# check if specified object exists
# if specific details are not requested, return object metadata
# return list of all objects in a bucket if object name and object details not specified
# Modify boto3 tags list to be ansible friendly dict
# but don't camel case tags
# Check that Conditions values are not both empty
# Default AWS Conditions return value
# build data specified by user
# state is present but backup vault doesn't exist
# Create stack output and stack parameter dictionaries
# Create optional stack outputs
# normalize iso datetime fields in result
# Try to use the module parameters to match any existing endpoints
# If we have an endpoint now, just ensure tags and exit
# describe and normalize iso datetime fields in result after adding tags
# For some reason delete_vpc_endpoints doesn't throw exceptions it
# returns a list of failed 'results' instead.  Throw these so we can
# catch them the way we expect
# Ensure resource is present
# The ec2_metadata_facts module is a special case, while we generally dropped support for Python < 3.6
# this module doesn't depend on the SDK and still has valid use cases for folks working with older
# OSes.
# pylint: disable=consider-using-f-string
# Decoding as UTF-8 failed, return data without raising an error
# Check if data is compressed using zlib header
# Data is compressed, attempt decompression and decode using UTF-8
# Unable to decompress, return original data
# Data is not compressed, decode using UTF-8
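The decode/decompress fallback chain described above can be sketched as follows; the `0x78` first byte is the usual zlib header magic for default window settings (an assumption for this sketch):

```python
import zlib

# Hypothetical sketch of the fallback chain: detect zlib-compressed data by
# its header byte, try to decompress, then decode as UTF-8 without raising.
def decode_user_data(data: bytes) -> str:
    if data[:1] == b'\x78':
        # Data looks compressed, attempt decompression
        try:
            data = zlib.decompress(data)
        except zlib.error:
            pass  # unable to decompress, fall through with the original bytes
    try:
        return data.decode('utf-8')
    except UnicodeDecodeError:
        # Decoding as UTF-8 failed; return the data without raising an error
        return data.decode('utf-8', errors='replace')
```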
# request went bad, retry once then raise
# fail out now
# Parse out the IAM role name (which is _not_ the same as the instance profile name)
# not a stringified JSON string
# create session token for IMDS
# populate _data with metadata
# clear out metadata in _data
# populate _data with dynamic data
# Maintain old key for backwards compatibility
# Beware, S3 is a "special" case, it sometimes catches botocore exceptions and
# re-raises them as boto3 exceptions.
# AccessDenied errors may be triggered if 1) file does not exist or 2) file exists but
# user does not have the s3:GetObject permission.
# S3 default content type
# determine object metadata and extra arguments
# Promote supported headers to boto3 ExtraArgs and place unknown ones under Metadata
# Fall back to Metadata for non-ExtraArgs headers
# For in-memory content uploads, use put_object so promoted headers
# (e.g. ContentType, ContentDisposition, CacheControl) are applied consistently
# actually fail on last pass through the loop.
# otherwise, try again, this may be a transient timeout.
# will ClientError catch SSLError?
# aws_retry=True,
# Tags is None, we shouldn't touch anything
# Ensure existing tags that aren't updated by desired tags remain
# Nothing to change, we shouldn't touch anything
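The tag-handling rules above (a `None` tags parameter means "don't touch anything"; existing tags not named in the desired set survive unless purging) are the same contract as `compare_aws_tags` in the amazon.aws module_utils. A simplified sketch under those assumptions:

```python
def compare_tags(current, desired, purge_tags=True):
    """Return (to_set, to_unset): tags to add/update and tag keys to remove."""
    if desired is None:
        # Tags is None: we shouldn't touch anything
        return {}, []
    # Tags whose value is new or changed
    to_set = {k: v for k, v in desired.items() if current.get(k) != v}
    # Existing tags not named in desired are removed only when purging
    to_unset = [k for k in current if k not in desired] if purge_tags else []
    return to_set, to_unset
```

If both returned values are empty, nothing changed and the module can skip the tagging API calls entirely.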
# the content will be uploaded as a byte string, so we must encode it first
# if putting an object in a bucket yet to be created, acls for the bucket and/or the object may be specified
# these were separated into the variables bucket_acl and object_acl above
# if encryption mode is set to aws:kms then we're forced to use s3v4, no point trying the
# original signature.
# Return the download URL for the existing object and ensure tags are updated
# only use valid object acls for the upload_s3file function
# Delete an object from a bucket, not the entire bucket
# If the bucket does not exist then bail out
# if both creating a bucket and putting an object in it, acls for the bucket and/or the object may be specified
# these were separated above into the variables bucket_acl and object_acl
# setting valid object acls for the create_dirkey function
# object has been created using multipart upload, compute ETag using
# object content to ensure idempotency.
# Set ETag to None, to force function to compute ETag from content
# Key does not exist in source bucket
# Source and destination objects ETag differ
# Metadata from module inputs differs from what has been retrieved from object header
# The destination object does not exist
# S3 objects are equal, ensure tags will not be updated
# S3 objects differ
# 'MetadataDirective' Specifies whether the metadata is copied from the source object or replaced
# with metadata that's provided in the request. The default value is 'COPY', therefore when user
# specifies a metadata we should set it to 'REPLACE'
# perform a "managed" copy rather than simply using copy_object.  This will automatically use
# multi-part uploads where necessary (https://github.com/boto/boto3/issues/1715)
# We can't set the ACLs & tags during the copy, update them afterwards
# copy recursively object(s) from source bucket to destination bucket
# list all the objects from the source bucket
# copy single object from source bucket into destination bucket
# Copy the parameters dict, we shouldn't be directly modifying it.
# Bucket deletion does not require obj.  Prevents ambiguity with delobj.
# If the object starts with / remove the leading character
# if bucket ownership controls are not found
# Beware: this module uses an action plugin (plugins/action/s3_object.py)
# so that src parameter can be either in 'files/' lookup path on the
# controller, *or* on the remote host that the task is executed on.
# Utility methods
# Find subnets by Network ACL ids
# Egress Rules
# Ingress Rules
# Only purge tags if tags is explicitly set to {} and purge_tags is True
# A rule is considered a new rule if either its RuleNumber does not exist in the list of
# current Rules stored in AWS, or if the Rule differs from the Rule stored in AWS with the same RuleNumber
# transform rules: from ansible list to boto3 dict
# find added rules
# find removed rules
# Removed Rules
# Added Rules
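The diff described above (new if the RuleNumber is absent or the stored rule differs; removed if the RuleNumber is no longer desired) can be sketched as a pair of dict comparisons keyed on `RuleNumber`. Names are illustrative:

```python
def diff_rules(current_rules, desired_rules):
    """Split NACL rules into (to_add, to_remove), keyed by RuleNumber."""
    current_by_num = {r["RuleNumber"]: r for r in current_rules}
    desired_by_num = {r["RuleNumber"]: r for r in desired_rules}
    # A rule is "new" if its RuleNumber is absent from AWS, or the stored
    # rule with the same RuleNumber differs from the desired one
    to_add = [r for num, r in desired_by_num.items() if current_by_num.get(num) != r]
    # Rules whose RuleNumber is no longer desired are removed
    to_remove = [r for num, r in current_by_num.items() if num not in desired_by_num]
    return to_add, to_remove
```

A changed rule appears only in `to_add`; replacing an entry at an existing RuleNumber overwrites it, so no separate "modify" list is needed.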
# Create Network ACL
# Associate Subnets to Network ACL
# Create Network ACL entries (ingress and egress)
# Find default NACL associated to the VPC
# Replace Network ACL association
# Delete Network ACL
# Find Subnets by ID
# Find Subnets by Name
# Get updated information about KSK
# waiting took too long
# get healthy, inservice instances from ASG
# we catch a race condition that sometimes happens if the instance exists in the ASG
# but has not yet shown up in the ELB
# if the health_check_type is ELB, we want to query the ELBs directly for instance
# status as to avoid health_check_grace period that is awarded to ASG instances
# New ASG being created, no suspended_processes defined yet
# Wait for target group health if target group(s) defined
# process tag changes
# Handle load balancer attachments/detachments
# Attach load balancers if they are specified but none currently exist
# Update load balancers if they are specified and one or more already exists
# Get differences
# check if all requested are already existing
# if wanted contains less than existing, then we need to delete some
# if existing contains less than wanted, then we need to add some
# Handle target group attachments/detachments
# Attach target groups if they are specified but none currently exist
# Update target groups if they are specified and one or more already exists
# check for attributes that aren't required for updating an existing ASG
# check if min_size/max_size/desired capacity have been specified and if not use ASG values
# Get the launch object (config or template) if one is provided in args or use the existing one attached to ASG if not.
# Prefer LaunchTemplateId over Name as it's more specific.  Only one can be used for update_asg.
# Wait for ELB health if ELB(s) defined
# Required to maintain the default value being set to 'true'
# Mirror above behavior for Launch Templates
# If replacing all instances, then set replace_instances to current set
# This allows replace_instances and replace_all_instances to behave the same
# check to see if instances are replaceable if checking launch configs
# set temporary settings and wait for them to be reached
# This should get overwritten if the number of instances left is less than the batch size.
# break out of this loop if we have enough new instances
# check if provided instance exists in asg, create list of instances to detach which exist in asg
# check if setting decrement_desired_capacity will make desired_capacity smaller
# than the currently set minimum size in ASG configuration
# old instances are those that have the old launch config
# Check if migrating from launch_template to launch_config first
# old instances are those that have the old launch template or version of the same launch template
# Check if migrating from launch_config_name to launch_template_name first
# check to make sure instances given are actually in the given ASG
# and they have a non-current launch config
# if there are some leftover old instances, but we are already at capacity with new ones
# we don't want to decrement capacity
# we wait to make sure the machines we marked as Unhealthy are
# no longer in the list
# make sure we have the latest stats after that last loop.
# now we make sure that we have enough instances in a viable state
# Only replace instances if asg existed at start of call
# Only detach instances if asg existed at start of call
# (c) 2016, Pierre Jodouin <pjodouin@virtualcomputing.solutions>
# ---------------------------------------------------------------------------------------------------
# lambda_function_arn contains only the function name (not the arn)
# Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
# Default 100.
# check if required sub-parameters are present and valid
# optional boolean value needs special treatment as not present does not imply False
# check if event mapping exist
# current_state is 'present'
# check if anything changed
# remove the stream event mapping
# Update settings which can't be managed on creation
# 'Description' is documented as a key of the role returned by create_role
# but appears to be an AWS bug (the value is not returned using the AWS CLI either).
# Get the role after creating it.
# nb. doesn't use get_iam_role because we need to retry if the Role isn't there
# Check Assumed Policy document
# Check Description update
# Check MaxSessionDuration update
# Check PermissionsBoundary
# Check Managed Policies
# Get list of current attached managed policies
# current attributes
# Attempt to list the policies early so we don't leave things behind if we can't find them.
# Get role
# If role is None, create it
# Get the role again
# Fetch existing Profiles
# Profile already exists
# Make sure an instance profile is created
# Remove the role from the instance profile(s)
# Delete the instance profile if the role and profile names match
# Before we try to delete the role we need to remove any
# - attached instance profiles
# - attached managed policies
# - embedded inline policies
# Not currently supported by the APIs
# We need to handle both None and True
# Handled by HAS_BOTO
# In lieu of an Id we perform matches against the following values:
# - ip_addr
# - fqdn
# - type (immutable)
# - request_interval
# - port
# Because the route53 list API provides no 'filter' mechanism,
# using a paginator would result in (on average) double the
# number of API calls and can get really slow.
# Additionally, we can't properly wrap the paginator, so retrying means
# starting from scratch with a paginator
# Handle the deletion race condition as cleanly as possible
# In general, if a request is repeated with the same CallerRef it won't
# result in a duplicate check appearing.  This means we can safely use our
# retry decorators
# if not resource_path:
# It's possible to update following parameters
# - ResourcePath
# - SearchString
# - FailureThreshold
# - Disabled
# - IPAddress
# - Port
# - FullyQualifiedDomainName
# - ChildHealthChecks
# - HealthThreshold
# If updating based on Health Check ID or health_check_name, we can update
# No changes...
# This makes sure we're starting from the version we think we are...
# Default port
# If update or delete Health Check based on ID
# Delete Health Check
# Create Health Check
# Update Health Check
# If health_check_name is a unique identifier
# update the health_check if another health check with the same name exists
# create a new health_check if another health check with the same name does not exist
# Replace '.' with '_' in attribute key names to make them more Ansible-friendly
# Get the attributes for each alb
# Get the listeners for each alb
# Get tags for each load balancer
# Private addresses
# Make sure that the 'name' parameter sets the Name tag
# check if provided private_ip_address is within the subnet's address range
# Once we have an ID make sure we're always modifying the same object
# Refresh the eni data
# How many of these addresses do we want to remove
# In the case of create, eni_id will not be a param but we can still get the eni_id after creation
# proceed only if we're unequivocally specifying an ENI
# Build list of remote security groups
# Do not purge tags unless tags is not None
# Don't fail here, just return [] to maintain backwards compat
# in case user doesn't have kms:ListAliases permissions
# get Trail info
# Now check to see if our trail exists and get status and tags
# Check for non-existent values and populate with None
# trail doesn't exist return None
# Get existing trail facts
# If the trail exists set the result exists variable
# If Trail exists see if we need to update it
# boto3 has inconsistent parameter naming so we handle it here
# We need to make an empty string equal None
# We'll check if the KmsKeyId causes changes later since
# user could've provided a key alias, alias arn, or key id
# and trail['KmsKeyId'] is always a key arn
# If we are in check mode copy the changed values to the trail facts in result output to show what would change.
# Determine if KmsKeyId changed
# Assume changed for a moment
# However, new_key could be a key id, alias arn, or alias name
# that maps back to the key arn in initial_kms_key_id. So check
# all aliases for a match.
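The check described above (the user may pass a key id, key ARN, alias name, or alias ARN, while the trail always reports a key ARN) can be sketched as below. This is a hypothetical helper that takes the alias list from a prior `kms:ListAliases` call rather than calling boto3 itself:

```python
def kms_key_matches(current_key_arn, new_key, aliases):
    """Return True if new_key (key id, key ARN, alias name, or alias ARN)
    refers to the key identified by current_key_arn.

    `aliases` is a list of dicts shaped like kms:ListAliases entries:
    {'AliasName': ..., 'AliasArn': ..., 'TargetKeyId': ...}.
    """
    # Direct match on the full ARN, or on the trailing key id
    if new_key == current_key_arn or current_key_arn.endswith("/" + new_key):
        return True
    # new_key could be an alias name or alias ARN that maps back to the key
    key_id = current_key_arn.rsplit("/", 1)[-1]
    for alias in aliases:
        if new_key in (alias.get("AliasName"), alias.get("AliasArn")):
            return alias.get("TargetKeyId") == key_id
    return False
```

Only when this returns False for every alias does the module treat `KmsKeyId` as actually changed.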
# Check if we need to start/stop logging
# Populate trail facts in output
# Trail doesn't exist just go create it
# Get the trail status
# Set the logging state for the trail to desired value
# Get facts for newly created Trail
# If we are in check mode create a fake return structure for the newly minted trail
# Prior to 4.0.0 we documented returning log_groups=[log_group], but returned **log_group
# Return both to avoid a breaking change.
# Determine if the log group exists
# Thank you to iAcquire for sponsoring development of this module.
# Using a required_one_of=[['name', 'image_id']] overrides the message that should be provided by
# the required_if for state=absent, so check manually instead
# Get all associated snapshot ids before deregistering image otherwise this information becomes unavailable.
# When trying to re-deregister an already deregistered image it doesn't raise an exception, it just returns an object without image attributes.
# remove any keys with value=None
# We can only tag Ebs volumes
# Remove empty values injected by using options
# The NoDevice parameter in Boto3 is a string. Empty string omits the device from block device mapping
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.create_image
# Add createVolumePermission info to snapshots result
# Select all states except 'released' and 'released-permanent-failure'
# lookup == 'tag' but tags was not provided
# Update dedicated host
# Check if one of the following has changed ('HostRecovery', 'InstanceType', 'InstanceFamily', 'HostMaintenance', 'AutoPlacement')
# Allocate dedicated host
# Release EC2 dedicated host
# await response
# fire and forget
# dry_run overrides invocation type
# logs are base64 encoded in the API response
# AWS sends back stack traces and error messages when a function failed
# in a RequestResponse (synchronous) context.
# format the stacktrace sent back as an array into a multiline string
# vpn_connection_id may be provided via module option; takes precedence over any filter values
# if vpn_connection_id is provided it will take precedence over any filters since it is a unique identifier
# see if there is a unique matching connection
# unmodifiable options and their filter name counterpart
# fix filter names to be recognized by boto3
# reformat filters with special formats
# if customer_gateway, vpn_gateway, or vpn_connection was specified in the task but not the filter, add it
# change the flat dict into something boto3 will understand
# Found no connections
# Too many results
# deleted connections are not modifiable
# Found one viable result; return unique match
# Found a result but it was deleted already; since there was only one viable result create a new one
# Found unique match
# See Boto3 docs regarding 'create_vpn_connection'
# tunnel options for allowed 'TunnelOptions' keys.
# Initialize changes dict
# Get changes to routes
# Check if nonmodifiable attributes are attempted to be modified
# get combined current tags and tags to set
# get combined current routes and routes to add
# return the vpn_connection_id if it's known
# No match but vpn_connection_id was specified.
# Unique match was found. Check if attributes provided differ.
# check_for_update returns a dict with the keys routes_to_add, routes_to_remove
# No match was found. Create and tag a connection and add routes.
# get latest version if a change has been made and make tags output nice before returning it
# (c) 2018, Will Thames <will@thames.id.au>
# For backward compatibility check if the file exists on the remote; it should take precedence
# module handles error message for nonexistent files
# execute the s3_object module with the updated args
# Stickiness options set, non default value
# non-default config left over, probably invalid
# Multiple TGs, not simple
# with no TGs defined, but an ARN set, this is one of the minimum possible configs
# We don't care about the weight with a single TG
# non-default config left over
# We didn't find an ARN
# Only one
# ForwardConfig may be optional if we've got a single TargetGroupArn entry
# Remove the redundant ForwardConfig
# remove the client secret if UseExistingClientSecret, because aws won't return it
# add default values when they are not requested
# while AWS api also won't return UseExistingClientSecret key
# it must be added, because it's requested and compared
# Check if we're dealing with subnets or subnet_mappings
# Convert subnets to subnet_mappings format for comparison
# Use this directly since we're comparing as a mapping
# Build a subnet_mapping style structure of what's currently
# on the load balancer
# Other parameters
# Scheme isn't supported for GatewayLBs, so we won't add it here, even though we don't
# support them yet.
# Ansible module parameters specific to ALBs
# Something went wrong setting attributes. If this ELB was created during this task, delete it to leave a consistent state
# Ansible module parameters specific to NLBs
# Protocol
# SslPolicy and Certificates are compared if the new listener protocol is
# one of the following 'HTTPS', 'TLS'
# SslPolicy
# Certificates
# Default actions
# If the lengths of the actions are the same, we'll have to verify that the
# contents of those actions are the same
# If the action lengths are different, then replace with the new actions
# AlpnPolicy
# Check each current listener port to see if it's been passed to the module
# The current listener is not present in new_listeners
# Remove what we match so that what is left can be marked as 'to be added'
# Now compare the 2 listeners with the same Port value
# This function prepares the listeners defined in the module parameters:
# 1. Remove None elements from listener attributes
# 2. Transform AlpnPolicy: the value is set as a string but the API expects a list
# 3. For listeners defining a Target group name, replace it with the corresponding Target group ARN
# Remove suboption argspec defaults of None from each listener
# Converts 'AlpnPolicy' attribute of listener from string type to list
# If a listener DefaultAction has been passed with a Target Group Name instead of ARN,
# lookup the ARN and replace the name.
# Prepare listeners to add/modify, Remove the key 'Rules' from attributes
# 'Rules' is not a valid attribute for 'create_listener' and 'modify_listener'
# If `purge_listeners` is set to False, we empty the list of listeners to delete
# handle multiple certs by adding only 1 cert during listener creation and make calls to add_listener_certificates to add other certs
# create listener
# only one cert can be specified per call to add_listener_certificates
# Do an initial sorting of condition keys
# 'Field' should match for the conditions to be equal, compare it once at the beginning
# host-header: current_condition includes both HostHeaderConfig AND Values while
# condition can be defined with either HostHeaderConfig OR Values. Only use
# HostHeaderConfig['Values'] comparison if both conditions include HostHeaderConfig.
# path-pattern: current_condition includes both PathPatternConfig AND Values while
# condition can be defined with either PathPatternConfig OR Values. Only use
# PathPatternConfig['Values'] comparison if both conditions include PathPatternConfig.
# QueryString Values is not sorted as it is the only list of dicts (not strings).
# Not all fields are required to have Values list nested within a *Config dict
# e.g. fields host-header/path-pattern can directly list Values
# if actions have just one element, compare the contents and then update if
# they're different
# Priority
# List rules to update priority, 'Actions' and 'Conditions' remain the same
# only the 'Priority' has changed
# Skip the default rule, this one can't be modified
# The current rule has been passed with the same properties to the module
# Remove it for later comparison
# if only the Priority has changed
# You cannot both specify a client secret and set UseExistingClientSecret to true
# If the current rule was not matched against passed rules, mark for removal
# Prepare Rule to create (to add)
# For rules to create, 'UseExistingClientSecret' should be set to False, the 'Priority' needs
# to be an int value, and the ListenerArn needs to be set
# Prepare Rule to modify: We need to remove the 'Priority' key
# Elastic Load Balancers V2
# Rules
# Target Groups
# Author:
# Common functionality to be used by the modules:
# `list_certificates` requires explicit key type filter, or it returns only RSA_2048 certificates
# strip out response metadata
# Tags are a normal Ansible style dict
# {'Key':'Value'}
# hacking for backward compatibility
# in some states, ACM resources do not have a corresponding cert
# upload cert
# I'm not sure whether the API guarantees that the ARN will not change
# I'm failing just in case.
# If I'm wrong, I'll catch it in the integration tests.
# tag that cert
# Modules are responsible for handling this.
# get_policy() requires an ARN, and list_policies() doesn't return all fields, so we need to do both :(
# Error Handler will return False if the resource didn't exist
# Unlike the others this returns a single result, make this a list with 1 element.
# MFA Devices don't support Tags (as of 1.34.52)
# Groups don't support Tags (as of 1.34.52)
# Access Keys don't support Tags (as of 1.34.52)
# Note: This code should probably live in amazon.aws rather than community.aws.
# However, for the sake of getting something into a useful shape first, it makes
# sense for it to start life in community.aws.
# import typing
# While it would be nice to supplement this with the upstream data,
# unfortunately client doesn't have a public method for getting the
# waiter configs.
# We shouldn't get here, but if someone's trying to use this without botocore installed
# let's re-raise the actual import error
# Do something sensible when the user's passed a short timeout, but our default pause wouldn't
# have allowed any retries
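The adjustment described above can be sketched as follows: derive a boto3 waiter config from a user-supplied timeout, shrinking the polling delay when the timeout is too short for the default pause to allow any retries. The function name and the halving strategy are illustrative, not the collection's actual algorithm:

```python
def waiter_config(timeout: int, default_pause: int = 15) -> dict:
    """Build a waiter config ({'Delay', 'MaxAttempts'}) from a timeout in seconds.

    With a short timeout the default pause would allow zero retries, so the
    delay is shrunk until at least two polls fit inside the timeout.
    """
    delay = default_pause
    while delay > 1 and timeout // delay < 2:
        # Halve the pause until two polls fit within the requested timeout
        delay //= 2
    # Round attempts up so the total wait covers the full timeout
    attempts = max(1, -(-timeout // delay))
    return {"Delay": delay, "MaxAttempts": attempts}
```

The resulting dict can be passed as the `WaiterConfig` argument when calling a boto3 waiter's `wait()`.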
# There are multiple ways to specify delegation of access to an account
# https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-accounts
# There are multiple ways to specify anonymous principals
# https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-anonymous
# Amazon will automatically convert bool and int to strings for us
# convert root account ARNs to just account IDs
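Because `arn:aws:iam::<account>:root` and the bare account ID mean the same principal, normalizing one form to the other lets equivalent policies compare equal. A minimal sketch (the function name is illustrative):

```python
import re

# Matches root-account ARNs across partitions (aws, aws-cn, aws-us-gov, ...)
ROOT_ARN = re.compile(r"^arn:aws(?:-[a-z]+)*:iam::(\d{12}):root$")

def normalize_principal(principal: str) -> str:
    """Convert 'arn:aws:iam::<account>:root' principals to bare account IDs."""
    match = ROOT_ARN.match(principal)
    return match.group(1) if match else principal
```

Non-root principals (roles, users, services, `*`) pass through unchanged, so the normalization is safe to apply to every entry in a Principal list.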
# Sort the keys to ensure a consistent order for later comparison
# Converts special cases to a consistent form
# ensure we aren't returning deeply nested structures of length 1
# check to see if they're tuple-string
# always say strings are less than tuples (to maintain compatibility with python2)
# https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_version.html
# It would be nice to be able to use autoscaling.XYZ, but we're bound by Ansible's "empty-init"
# policy: https://docs.ansible.com/ansible-core/devel/dev_guide/testing/sanity/empty-init.html
# Not intended for general re-use / re-import
# Intended for general use / re-import
# ====================================
# TODO Move these about and refactor
# Copyright (c) 2017 Will Thames
# caught by imported HAS_BOTO3
# replaced by Id from the relevant MatchSet
# Used to live here, moved into ansible.module_utils.common.dict_transformations
# Used to live here, moved into ansible_collections.amazon.aws.plugins.module_utils.arn
# Used to live here, moved into ansible_collections.amazon.aws.plugins.module_utils.botocore
# Used to live here, moved into ansible_collections.amazon.aws.plugins.module_utils.exceptions
# Used to live here, moved into ansible_collections.amazon.aws.plugins.module_utils.modules
# The names have been changed in .modules to better reflect their applicability.
# Used to live here, moved into ansible_collections.amazon.aws.plugins.module_utils.policy
# Used to live here, moved into ansible_collections.amazon.aws.plugins.module_utils.retries
# Used to live here, moved into ansible_collections.amazon.aws.plugins.module_utils.tagging
# Used to live here, moved into ansible_collections.amazon.aws.plugins.module_utils.transformation
# Handled by HAS_BOTO3
# The paginator does not exist for `describe_availability_zones()`
# The paginator does not exist for `describe_regions()`
# EC2 VPC Subnets
# EC2 VPC Route table
# EC2 VPC Route table Route
# EC2 VPC
# The paginator does not exist for `describe_vpc_attribute`
# EC2 VPC Peering Connection
# EC2 vpn
# The paginator does not exist for `describe_vpn_connections`
# EC2 Internet Gateway
# EC2 NAT Gateway
# EC2 Elastic IP
# The paginator does not exist for 'describe_addresses()'
# EC2 VPC Endpoints
# EC2 VPC Endpoint Services
# EC2 VPC DHCP Option
# EC2 vpn Gateways
# EC2 Volumes
# EC2 Instance
# The paginator does not exist for describe_instance_attribute()
# EC2 Key
# The paginator does not exist for `describe_key_pairs()`
# EC2 Image
# 'DescribeImages' can be paginated depending on the boto3 version
# The paginator does not exist for `describe_image_attribute()`
# EC2 Snapshot
# We do not use a paginator here because the `ec2_snapshot_info` module expects the NextToken to be returned
# Handle SnapshotCreationPerVolumeRateExceeded separately because we need a much
# longer delay than normal
# EC2 ENI
# EC2 Import Image
# EC2 Spot instance
# EC2 Security Groups
# EC2 Egress only internet Gateway
# EC2 Network ACL
# EC2 Placement Group
# EC2 Launch template
# Using this API, you can specify up to 200 launch template version numbers.
# Get all security groups
# If we have unmatched names that look like an ID, assume they are
# EC2 Transit Gateway VPC Attachment Error handler
# If there is no provided config, return the empty dictionary
# Handle single value keys
# Handle actual lists of values
# EC2 Transit Gateway
# EC2 Dedicated host
# Copyright (c) 2018 Red Hat, Inc.
# Handled by the calling module
# aws-gov throws UnsupportedArgument (consistently)
# aws throws MethodNotAllowed where acceleration isn't available /yet/
# Multi-part ETag; a hash of the hashes of each part.
# Compute the MD5 sum normally
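S3's multipart ETag is the MD5 of the concatenated binary MD5 digests of each part, suffixed with `-<part count>`. A sketch of that computation (in-memory for illustration; real code streams each part, and this assumes every part except the last uses the same chunk size):

```python
import hashlib

def calculate_multipart_etag(data: bytes, chunk_size: int) -> str:
    """Compute an S3-style ETag: plain MD5 for single-part objects,
    md5-of-part-md5s plus '-<parts>' for multipart objects."""
    digests = [
        hashlib.md5(data[i:i + chunk_size]).digest()
        for i in range(0, len(data), chunk_size)
    ]
    if len(digests) <= 1:
        # Single part: plain MD5, no suffix (matches non-multipart uploads)
        return hashlib.md5(data).hexdigest()
    # Multi-part ETag: a hash of the hashes of each part
    return f"{hashlib.md5(b''.join(digests)).hexdigest()}-{len(digests)}"
```

Recomputing this from the object content is what lets the module stay idempotent even when the stored ETag was produced by a multipart upload.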
# See: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html
# Spot special case of fakes3.
# _list_backup_inventory_configurations is a workaround for a missing paginator for listing
# bucket inventory configuration in boto3:
# https://github.com/boto/botocore/blob/1.34.141/botocore/data/s3/2006-03-01/paginators-1.json
# For practical purposes, the paginator ignores MaxKeys, if we've been passed MaxKeys we need to
# explicitly call list_objects_v2 rather than re-use the paginator
# This list of failures is based on this API Reference
# http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html
# TooManyRequestsException comes from inside botocore when it
# does retries; unfortunately it does not try long
# enough to allow some services such as API Gateway to
# complete configuration.  At the moment of writing there is a
# botocore/boto3 bug open to fix this.
# https://github.com/boto/boto3/issues/876 (and linked PRs etc)
# Copyright: (c) 2021, Felix Fontein <felix@fontein.de>
# This should be directly imported by modules, rather than importing from here.
# The import is being kept for backwards compatibility.
# minio seems to return [{}] as an empty tags_list
# Amazon have reserved 'aws:*' tags, we should avoid purging them as
# this probably isn't what people want to do...
# We will also export HAS_BOTO3 so end user modules can use it.
# pylint: disable-next=protected-access
# default config with user agent
# Inventory plugins don't have access to the same 'module', they need to throw
# an exception rather than calling module.fail_json
# here we don't need to make an additional call, will default to 'us-east-1' if the below evaluates to None.
# Botocore doesn't like empty strings, make sure we default to None in the case of an empty string
# Caught here so that they can be deliberately set to '' to avoid conflicts when environment
# variables are also being used
# Used by normalize_boto3_result
# pylint: disable=import-outside-toplevel
# pylint: enable=import-outside-toplevel
# We cannot use the paginator at the moment because it was introduced after boto3 version 1.22
# paginator = client.get_paginator("list_backup_plans")
# result = paginator.paginate(**params).build_full_result()["BackupPlansList"]
# Remove AWS API response and add top-level plan name
# paginator = client.get_paginator("list_backup_selections")
# result = paginator.paginate(**params).build_full_result()["BackupSelectionsList"]
# Copyright (c) 2017 Willem van Ketwich
# Currently only AnsibleAWSModule.  However we have a lot of Copy and Paste code
# for Inventory and Lookup modules which we should refactor
# Initialize warnings buffer early, before any code invokes self.warn()
# Inject warnings into the JSON result for ansible-core versions that
# do not include them automatically (>=2.19 based on CI behavior).
# Record warnings so unit tests can assert on them and include in JSON result
# Forward to underlying AnsibleModule.warn to emit warnings
# to_native is trusted to handle exceptions that str() could
# convert to text.
# An implementation of this used was originally in ec2.py, however Outposts
# aren't specific to the EC2 service
# not iterable
# caught by HAS_BOTO3
# In places where passing more information to module.fail_json would be helpful
# store the extra info.  Other plugin types have to raise the correct exception
# such as AnsibleLookupError, so can't easily consume this.
# Find an existing attachment based on filters
# Delete the transit gateway attachment
# Wait until attachment reaches the desired state
# Set or update the subnets associated with the attachment
# We'll pull the VPC ID from the subnets, no point asking for
# information we 'know'.
# Only one subnet per-AZ is permitted
# For now VPC Attachment options are all enable/disable
# Apply any configuration changes to the attachment
# Check if there are no changes to apply
# Wait for resources to finish creating before updating
# Apply the configuration
# Ensure tags are applied
# Set the configuration parameters
# Handle tags
# Set tags in the configuration manager
# Handle check mode updates
# Whitelist boto3 client methods for cluster and instance resources
# Handle retry codes
# Tagging
# using a waiter in module_utils/waiters.py
# Instance may be renamed and AWSRetry doesn't handle WaiterError
# Get tags
# Original s3_bucket format, no longer advertised but not officially deprecated
# Original s3_bucket_info format, no longer advertised but not officially deprecated
# Tags are always returned as text
# FUTURE: there's a case to be made that this moves up into AWSErrorHandler
# for now, we'll handle this just for S3, but wait and see if it pops up in too
# many other places
# Unlike most of our modules, we attempt to handle non-AWS clouds.  For read-only
# actions we sometimes need the ability to ignore unsupported features.
# describe_autoscaling_group doesn't add AutoScalingGroupName
# describe_autoscaling_group and describe_autoscaling_instances aren't consistent
# from .common import AnsibleAutoScalingError
# Terminated Instances can't reach "Healthy"
# Instances in an unhealthy state can end up being automatically terminated
# Instances without protection can end up being automatically terminated
# Terminated instances can't reach InService
# Terminated instances can't reach Standby
# (c) 2019, Evgeni Golov <evgeni@redhat.com>
# Foreman documentation fragment
# Copyright (c) 2019 Matthias Dellweg
# (c) 2015, 2016 Daniel Lobato <elobatocs@gmail.com>
# (c) 2016 Guido Günther <agx@sigxcpu.org>
# pylint: disable=super-with-arguments
# is only set to bool if try block succeeds
# proxy parses facts from report directly
# Ansible callback API
# Copyright (C) 2016 Guido Günther <agx@sigxcpu.org>, Daniel Lobato Garcia <dlobatog@redhat.com>
# pylint: disable=raise-missing-from
# 3rd party imports
# from config
# workaround to address the following issues where 'verify' is overridden in Requests:
# process results
# FIXME: This assumes 'return type' matches a specific query,
# pylint: disable=no-else-break
# /hosts/:id does not have a 'results' key
# /facts are returned as dict in 'results'
# check for end of paging
# get next page
# /hosts' 'results' is a list of all hosts; the returned data is paginated
# We need a deep copy of the data, as we modify it below and this would also modify the cache
# Create ansible groups for hostgroup
# Create ansible groups for environment, location and organization
# Create Ansible groups for host collections
# set host vars from params
# create directly mapped groups
# set host vars from facts
# create group for host collections
# read config from file, this sets 'options'
# get connection host
# actually populate inventory
# (c) 2017 Matthias Dellweg & Bernhard Hopfenmüller (ATIX AG)
# We do not want a layout text for bulk operations
# sanitize name from template data
# The following condition can actually be hit, when someone is trying to import a
# template with the name set to '*'.
# Besides not being sensible, this would go horribly wrong in this module.
# module params are prioritized
# make sure, we have a name
# sanitize user input, filter out useless configuration combinations with 'name: *'
# prevent lookup
# Nothing to do; shortcut to exit
# not 'thin'
# The name could have been determined too late, so copy it again
# (c) 2024, Partha Aji
# (c) 2018, Sean O'Keeffe <seanokeeffe797@gmail.com>
# (c) 2020 Evgeni Golov
# (c) 2019 Christoffer Reijer (Basalt AB)
# additional parameter checks
# Priority: ldap_group_membership > use_netgroups > use_netgroups derived from ldap_group_membership
# (c) 2018 Matthias M Dellweg (ATIX AG)
# Allow to pass integers as string
# Fake the not serialized input value as output
# (c) 2022, Jeremy Lenz <jlenz@redhat.com>
# (c) 2020 Mark Hlawatschek (ATIX AG)
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
# (c) 2017 Matthias M Dellweg (ATIX AG)
# Default templates do not support a scoped search
# see: https://projects.theforeman.org/issues/27722
# (c) 2019 Manisha Singhal (ATIX AG)
# (c) 2019 Baptiste Agasse
# (c) 2022, Paul Armstrong <parmstro@redhat.com>
# A filter always exists before we create a rule
# Get a reference to the content filter that owns the rule we want to manage
# figure out what kind of filter we are working with
# trying to find the existing rule is not simple...
# this filter type supports many rules
# there are really 2 erratum filter types by_date and by_id
# however the table backing them is denormalized to support both, as is the api
# for an erratum filter rule == errata_by_date rule, there can be only one rule per filter. So that's easy, it's the only one
# we need to search by errata_id, because it really doesn't have a name field.
# these filter types support many rules
# the name is the key to finding the proper one and is required for these types
# uuid is also a required value when creating, but is implementation specific and not easily knowable to the end user - we find it for them
# this filter type supports many rules
# module_stream_ids are internal and non-searchable
# find the module_stream_id by NSVCA
# determine if there is a rule for the module_stream
# if the rule exists, return it in a form amenable to the API
# if the state is present and the module_id is NOT in the existing list, add module_stream_id.
# (c) 2021 Eric Helms
# (c) 2021 Stejskal Leos (Red Hat)
# (c) 2016, Eric D Helms <ericdhelms@gmail.com>
# KatelloEntityAnsibleModule automatically adds organization to the entity scope
# but repositories are scoped by product (and these are org scoped)
# We do not want a template text for bulk operations
# (c) 2019 Bernhard Hopfenmüller (ATIX AG)
# power_status endpoint was only added in foreman 1.22.0 per https://projects.theforeman.org/issues/25436
# Delete this piece when versions below 1.22 are out of common use
# begin delete
# end delete (on delete un-indent the below two lines)
# (c) Philipp Joos 2017
# (c) Baptiste Agasse 2019
# Apply changes on underlying compute attributes only when present
# Update or create compute attributes
# (c) 2018 Manuel Bonk (ATIX AG)
# Nothing to do; shortcut to exit
# only need to record if not desired_absent, as ensure_entity already does all the proper recording for us
# workaround for https://projects.theforeman.org/issues/28138
# (c) 2021 Paul Armstrong
# (c) 2018 Baptiste AGASSE (baptiste.agasse@gmail.com)
# (c) 2020, Evgeni Golov <evgeni@golov.de>
# (c) Evgeni Golov
# (c) 2018 Markus Bucher (ATIX AG)
# (c) 2020 Baptiste Agasse (@bagasse)
# TODO: greatly similar to how parameters are managed; DRY it up?
# tried nested_list here but, if using nested_list, override_values are not part of loaded entity.
# override_values=dict(type='nested_list', elements='dict', foreman_spec=override_value_foreman_spec),
# smart_class_parameters are created on puppetclass import and cannot be created/deleted from API,
# so if we don't find it, it's an error.
# When override is set to false, the foreman API doesn't accept parameter_type, and all 'override options' have to be set to false if present
# Foreman API returns 'hidden_value?' instead of 'hidden_value'; is this a bug?
# Convert values according to their corresponding parameter_type
# (c) Mark Hlawatschek 2020
# KatelloInfoAnsibleModule automatically adds organization to the entity scope
# (c) 2021 Eric D Helms
# (c) 2017, Andrew Kofink <ajkofink@gmail.com>
# (c) 2019, Baptiste Agasse <baptiste.agasse@gmail.com>
# Default to 'Library' for new env with no 'prior' provided
# (c) 2019, Matthias Dellweg <dellweg@atix.de>
# (c) 2022, Jeffrey van Pelt <jeff@vanpelt.one>
# workaround the fact that the API expects `max_count` when modifying the entity
# but uses `hosts_limit` when showing one
# (c) 2019, Matthias M Dellweg <dellweg@atix.de>
# (c) 2019 Baptiste AGASSE (baptiste.agasse@gmail.com)
# search for an existing filter
# disable signature checks, we might not have the key or the file might be unsigned
# pre 4.15 RPM needs to use the old name of the bitmask
# possible types in 3.12: docker, ostree, yum, puppet, file, deb
# (c) 2023, Julien Godin <julien.godin@camptocamp.com>
# (c) 2021 Evgeni Golov
# Nothing changed, but everything ok
# (c) 2017, Lester R Claudio <claudiol@redhat.com>
# (c) 2020 Evgeni Golov <evgeni@golov.de>
# (c) 2017, Sean O'Keeffe <seanokeeffe797@gmail.com>
# TODO: Make these 2 configurable, we need to work out which horribly
# undocumented API to use.
# 64K
# Hack to add options the way fetch_url expects
# (c) 2018 Manuel Bonk & Matthias Dellweg (ATIX AG)
# make sure certain values are set
# TemplateInputs need to be added as separate entities later
# Manage TemplateInputs here
# At this point, desired template inputs have been removed from the dict.
# workaround for https://projects.theforeman.org/issues/31390
# additional param validation
# When 'build'=True, 'managed' has to be True, assuming that the user's priority is to build.
# When 'build' is not given and 'managed'=False, we have to clear any 'build' context that might exist on the server.
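The interplay of 'build' and 'managed' described above can be sketched as follows; `reconcile_build_managed` is a hypothetical helper, not the module's real code:

```python
def reconcile_build_managed(params):
    # if the user asks to build, force managed=True,
    # assuming the user's priority is to build
    if params.get('build'):
        params['managed'] = True
    # if build is not given but managed is explicitly False,
    # send build=False to clear any build context on the server
    elif 'build' not in params and params.get('managed') is False:
        params['build'] = False
    return params
```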
# We use different APIs for creating a host with interfaces
# and updating it, so let's differentiate based on entity being present or not
# (c) 2018, Baptiste Agasse <baptiste.agagsse@gmail.com>
# (c) 2019, Maxim Burgerhout <maxim@wzzrd.com>
# (c) 2023 Evgeni Golov <evgeni@golov.de>
# this is a hack, otherwise update_entity() tries to update that
# (c) 2018 Bernhard Suttner (ATIX AG)
# (c) 2022 Manisha Singhal (ATIX AG)
# only update subscriptions of newly created or updated AKs
# copied keys inherit the subscriptions of the origin, so one would not have to specify them again
# deleted keys don't need subscriptions anymore either
# the auto_attach, release_version and service_level parameters can only be set on an existing AK with an update,
# not during create, so let's force an update. see https://projects.theforeman.org/issues/27632 for details
# (c) 2019 Jameer Pathan <jameerpathan111@gmail.com>
# (c) 2020 Anton Nesterov (@nesanton)
# Build a list of all existing templates of all supported types to check if we are adding any new
# components is None when we're managing a CCV but don't want to adjust its components
# only update CVC's of newly created or updated CV's that are composite if components are specified
# only record a subset of data
# When changing to latest=False & version is the latest we must send 'content_view_version' to the server
# Let's fake, it wasn't there...
# desired cvcs have already been updated and removed from `current_cvcs`
# some entries in "final" don't have an id yet, as it is only assigned on creation of a cv component,
# which didn't happen yet when we record the data
# (c) 2023, Griffin Sullivan <gsulliva@redhat.com>
# (c) 2023 Louis Tiches HallasTech
# (c) 2017, Matthias M Dellweg <dellweg@atix.de> (ATIX AG)
# workaround the fact that the API expects `ignore_types` when modifying the entity
# but uses `select_all_types` when showing one
# (c) 2017 Bernhard Hopfenmüller (ATIX AG)
# Try to find the Operating System to work on
# name is however not unique, but description is, as well as "<name> <major>[.<minor>]"
# If we have a description, search for it
# If we did not yet find a unique OS, search by name & version
# In case of state == absent, this information might be missing; we assume that we did not find an operatingsystem to delete then
# we actually attempt to create a new one...
# (c) 2019 Kirill Shirinkin (kirill@mkdev.me)
# There is no way to find by name via API search, so we need
# to iterate over all external user groups of a given usergroup
# (c) 2021 William Bradford Clark
# but repository sets are scoped by product (and these are org scoped)
# List of allowed timezones
# List of allowed locales
# (c) 2020 Peter Ondrejka <pondrejk@redhat.com>
# command input required by api
# pylint: disable=ansible-format-automatic-specification,raise-missing-from
# pylint: disable=unused-import  # noqa: F401
# type: (str, str, Api) -> None
# type: () -> dict
# type: () -> List[Route]
# type: () -> List[Param]
# type: () -> List[Example]
# type: (Optional[dict], Optional[dict], Optional[dict], Optional[Any], Optional[dict]) -> Optional[dict]
# type: (Optional[dict]) -> Route
# type: (dict, Optional[Any], Optional[dict]) -> None
# type: (Optional[str], Optional[List[str]]) -> str
# pylint: disable=too-many-arguments,too-many-locals
# type: (Iterable[Param], dict, Optional[Any], Optional[dict], Optional[str]) -> None
# this will be caught in the next check
# pylint: disable=too-many-boolean-expressions
# type: (Optional[dict]) -> dict
# type: (dict) -> dict
# type: (Iterable[Param], dict) -> dict
# type: (Any) -> Any
# Find where to put the cache by default according to the XDG spec
# Not using just get('XDG_CACHE_HOME', '~/.cache') because the spec says
# that the default should be used if "$XDG_CACHE_HOME is either not set or empty"
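The distinction matters because `os.environ.get('XDG_CACHE_HOME', default)` only covers the unset case, not the empty-string case; a sketch (the application subdirectory name is an assumption):

```python
import os

def default_cache_dir(app='foreman-ansible-modules'):
    # per the XDG Base Directory spec, fall back to ~/.cache when
    # $XDG_CACHE_HOME is either not set or empty
    xdg = os.environ.get('XDG_CACHE_HOME')
    base = xdg if xdg else os.path.join(os.path.expanduser('~'), '.cache')
    return os.path.join(base, app)
```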
# type: () -> Iterable[str]
# type: (Optional[str]) -> None
# type: () -> Iterable
# type: (str) -> Resource
# pylint:disable=all
# type: (str, bool) -> Optional[dict]
# type: (str, str, Optional[dict], Optional[dict], Optional[dict], Optional[dict], Optional[dict]) -> Optional[dict]
# type: (Action, Optional[dict], Optional[dict], Optional[dict], Optional[dict]) -> Optional[dict]
# type: (str, str, Optional[dict], Optional[dict], Optional[dict], Optional[dict]) -> Optional[dict]
# type: (str, str, str, str, str) -> None
# Foreman supports "per_page=all" since 2.2 (https://projects.theforeman.org/issues/29909)
# But plugins, especially Katello, do not: https://github.com/Katello/katello/pull/11126
# To still be able to fetch all results without pagination, we have this constant for now
# this is a workaround for https://projects.theforeman.org/issues/26937
# type: (str, Iterable[Tuple[str, str]]) -> str
# pylint: disable=too-many-instance-attributes,too-few-public-methods
# type: (Api, str) -> None
# type: () -> List
# type: (str) -> Action
# type: (str, Optional[dict], Optional[dict], Optional[dict], Optional[Any], Optional[dict]) -> Optional[dict]
# type: (str, str, str) -> None
# type: (Optional[dict]) -> str
# PSF License (see PSF-license.txt or https://opensource.org/licenses/Python-2.0)
# no way to know what a programmer means without asking them.
# (c) Matthias Dellweg (ATIX AG) 2017
# TODO: Organizations should be searched by title (as foreman allows nested orgs) but that's not the case ATM.
# organizations='title',
# State recording for changed and diff reporting
# Katello
# verify that only one puppet module is returned with only one puppet class inside
# as provided search results have to be like "results": { "ntp": [{"id": 1, "name": "ntp" ...}]}
# and get the puppet class id
# workaround for https://projects.theforeman.org/issues/31874
# apypie will quote the params for us, but we need to do it twice for the cluster_id
# see https://projects.theforeman.org/issues/35438
# and https://github.com/theforeman/hammer-cli-foreman/pull/604
# and https://github.com/theforeman/foreman/pull/9383
# and https://httpd.apache.org/docs/current/mod/core.html#allowencodedslashes
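The double-quoting mentioned above can be sketched with urllib; the helper name is illustrative:

```python
from urllib.parse import quote

def double_quote(value):
    # quote twice: the client library quotes/decodes once, and Apache's
    # AllowEncodedSlashes setting would otherwise reject or mangle an
    # encoded '/' embedded in the id
    return quote(quote(value, safe=''), safe='')
```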
# workaround for https://projects.theforeman.org/issues/31714
# Already looked up or not an entity(_list) so nothing to do
# No exception happened => scope is in place
# String comparison needs extra care in face of unicode
# ideally the type check would happen via foreman_spec.elements
# however this is not set for flattened entries and setting it
# confuses _flatten_entity
# special handling for parameters created by ParametersMixin
# they are defined as a list of dict, but the dicts should be really handled like
# entities, which means we only want to update the user-provided details
# workaround to ensure LCE and CV are always sent together, even if only one changed
# using the values from the existing entity, so the user doesn't need to pass it in their playbook
# In check_mode we emulate the server updating the entity
# Nothing needs changing
# If we were supposed to ignore check_mode we can assume this action was not a changing one.
# Explicit None to trigger the _thin_default mechanism lazily
# ensure parent and entity are the same type
# Convert current class name from CamelCase to snake_case
# Get entity name from snake case class name
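Deriving the entity name from the class name as described can be done with a small regex; a sketch (runs of capitals like 'CCV' would need extra handling):

```python
import re

def camel_to_snake(name):
    # insert '_' before every uppercase letter that is not at the start,
    # then lowercase: 'ForemanLocationModule' -> 'foreman_location_module'
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()
```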
# Parent does not exist so just exit here
# workaround for https://projects.theforeman.org/issues/29409
# needs to happen after super().__init__()
# _foreman_spec_helper() is called before we call check_requirements() in the __init__ of ForemanAnsibleModule
# and thus before the HAS_APYPIE check happens.
# We have to ensure that apypie is available before using it.
# There are two cases where we can call _foreman_spec_helper() without apypie available:
# * When the user calls the module but doesn't have the right Python libraries installed.
# * When Ansible generates docs from the argument_spec. As the inflector is only used to build foreman_spec and not argument_spec, this case is harmless.
# So in conclusion, we only have to verify that apypie is available before using it.
# Lazy evaluation helps there.
# When translating to a flat name, the flattened entry should get the same "type"
# as Ansible expects so that comparison still works for non-strings
# Helper for (global, operatingsystem, ...) parameters
# Helper for converting lists of parameters
# Helper for templates
# No metadata, import template anyway
# Helper for titles
# Helper for puppetclasses
# Nothing to do, prevent removal
# Add to entity for reporting
# Helper constants
# interface specs
# Copyright Red Hat, Inc. All Rights Reserved.
# Copyright: (c) 2014, Hewlett-Packard Development Company, L.P.
# Standard openstack documentation fragment
# Copyright (c) 2012, Marco Vito Moscaritolo <marco@agavee.com>
# Copyright (c) 2013, Jesse Keating <jesse.keating@rackspace.com>
# Copyright (c) 2015, Hewlett-Packard Development Company, L.P.
# Copyright (c) 2016, Rackspace Australia
# Redirect logging to stderr so it does not mix with output, in
# particular JSON output of ansible-inventory.
# TODO: Integrate openstack's logging with Ansible's logging.
# TODO: Is it wise to disregard a potential user configuration error?
# determine inventory hostnames
# self.get_option('inventory_hostname') == 'uuid'
# drop servers without addresses
# Ref.: https://docs.ansible.com/ansible/latest/dev_guide/
# calling openstacksdk's compute.servers() with
# details=True already fetched most facts
# cloud dict is used for legacy_groups option
# do not query OpenStack API for additional data
# TODO: Consider expanding 'flavor', 'image' and
# Ref.: https://opendev.org/openstack/openstacksdk/src/commit/\
# convert to dict before expanding servers
# to allow us to attach attributes
# details are required because 'addresses'
# attribute must be populated
# populate host_vars with 'ansible_host', 'ansible_ssh_host' and
# 'openstack' facts
# flatten addresses dictionary
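The flattening mentioned above takes the per-network 'addresses' dict of an OpenStack server and produces a flat list of IPs; a minimal sketch assuming the usual shape of that attribute:

```python
def flatten_addresses(addresses):
    # addresses looks like
    # {'private': [{'addr': '10.0.0.3', 'version': 4}, ...], ...}
    return [entry['addr']
            for entries in (addresses or {}).values()
            for entry in entries]
```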
# cloud was added by _expand_server()
# Copyright (c) 2018 Catalyst IT Ltd.
# := noqa no-log-needed
# Create cluster
# Update cluster
# Delete cluster
# Do nothing
# TODO: Implement support for updates.
# TODO: Complement *_id parameters with find_* functions to allow
# openstacksdk's create_cluster() returns a cluster's id only
# but we cannot use self.conn.container_infrastructure_management.\
# get_cluster(cluster_id) because it might return None as long as
# the cluster is being set up.
# cluster = self.conn.container_infrastructure_management.\
# update_cluster(...)
# state == 'absent' and not cluster:
# Create mapping
# Update mapping
# Delete mapping
# state == 'absent' and not mapping:
# Copyright (c) 2016 Hewlett-Packard Enterprise Corporation
# Copyright 2016 Sam Yaple
# Create service
# Update service
# Delete service
# len(matches) == 0
# state == 'absent' and not service:
# (c) 2021, Ashraf Hasson <ahasson@redhat.com>
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
# Copyright (c) 2020, Sagi Shnaidman <sshnaidm@redhat.com>
# Copyright (c) 2021 by Open Telekom Cloud, operated by T-Systems International GmbH
# Copyright (c) 2015 IBM Corporation
# Create project
# Update project
# Delete project
# Params name and domain are being used to find this project.
# state == 'absent' and not project:
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
# Copyright (c) 2013, Benno Joy <benno@ansible.com>
# resource attributes obtainable directly from params
# If both name and id are defined, then we might change the name
# else user may not be able to enumerate domains
# self.conn.image.create_image() cannot be used because it does
# not provide self.conn.create_image()'s volume parameter [0].
# [0] https://opendev.org/openstack/openstacksdk/src/commit/
# self.conn.image.delete_image() does not offer a wait parameter
# Check we are not trying to update properties that cannot
# be modified
# Filter args to update call to the ones that have been modified
# and are updatable. Adapted from:
# https://github.com/openstack/openstacksdk/blob/1ce15c9a8758b4d978eb5239bae100ddc13c8875/openstack/cloud/_network.py#L559-L561
# ensure user wants something specific
# and this is not what we have right now
# TODO: Merge with equal function from volume_type_access module.
# and state == 'absent'
# project_id not in project_ids and state == 'present'
# Owner could not be found so return empty list of stacks
# because *_info modules never raise errors on missing
# resources
# Project could not be found so return empty list of stacks
# Create policy
# Update policy
# state == 'absent' and not policy:
# Copyright (c) 2018 Catalyst Cloud Ltd.
# Create load_balancer
# Update load_balancer
# Delete load_balancer
# else assign_floating_ip
# With cascade=False the deletion of load-balancer
# would always fail if there are sub-resources.
# Associate floating ip
# and not ip
# Create new floating ip
# List disassociated floating ips on network
# Associate first disassociated floating ip
# No disassociated floating ips
# Create new floating ip on network
# Find disassociated floating ip
# Associate disassociated floating ip
# state == 'absent' and not load_balancer:
# Copyright (c) 2020 by Open Telekom Cloud, operated by T-Systems International GmbH
# self.conn.search_security_groups() cannot be used here,
# refer to git blame for rationale.
# TODO: Upgrade name_or_id code to match openstacksdk [1]?
# [1] https://opendev.org/openstack/openstacksdk/src/commit/
# (c) 2014, Hewlett-Packard Development Company, L.P.
# increased default value
# TODO(TheJulia): Presently this does not support updating nics.
# Update all known updateable attributes
# name can only be updated if id is given
# state == 'absent' and not node:
# Copyright (c) 2015, Jesse Keating <jlk@derpops.bike>
# If I(action) is set to C(shelve) then according to OpenStack's Compute
# API, the shelved server is in one of two possible states:
# But wait_for_server can only wait for a single server state. If a shelved
# server is offloaded immediately, then an exceptions.ResourceTimeout will
# be raised if I(action) is set to C(shelve). This is likely to happen
# because shelved_offload_time in Nova's config is set to 0 by default.
# This also applies if you boot the server from volumes.
# Calling C(shelve_offload) instead of C(shelve) will also fail most likely
# because the default policy does not allow C(shelve_offload) for non-admin
# users while C(shelve) is allowed for admin users and server owners.
# As we cannot retrieve shelved_offload_time from Nova's config, we fall
# back to waiting for one state and if that fails then we fetch the
# server's state and match it against the other valid states from
# _action_map.
# Ref.: https://docs.openstack.org/api-guide/compute/server_concepts.html
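The fallback described above amounts to polling until the server reaches any of several acceptable states; a generic sketch, not openstacksdk's API:

```python
import time

def wait_for_any_state(get_status, accepted, timeout=180, interval=5):
    # poll until the resource reaches any of the accepted states,
    # e.g. {'SHELVED', 'SHELVED_OFFLOADED'} for a shelve action
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in accepted:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError('resource did not reach %s' % (accepted,))
        time.sleep(interval)
```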
# TODO: Replace with self.conn.compute.find_server(
# [0] https://review.opendev.org/c/openstack/openstacksdk/+/857936/
# rebuild does not depend on state
# else perform action
# rebuild should ensure images exists
# TODO: Replace with shelve_offload function call when [0] has been
# [0] https://review.opendev.org/c/openstack/openstacksdk/+/857947
# shelve_offload is not supported in openstacksdk <= 1.0.0
# action != 'rebuild' and action != 'shelve_offload'
# reboot_* actions are using reboot_server method with an
# additional argument
# Do the action
# Copyright (c) 2023 Jakob Meng, <jakobmeng@web.de>
# Copyright (c) 2023 Red Hat, Inc.
# Copyright (c) 2016 IBM
# Copyright (c) 2023 Bitswalk, inc.
# Copyright (c) 2016, Mario Santos <mario.rf.santos@gmail.com>
# openstacksdk will not return details when looking up by name, so we
# need to refresh the server to get the metadata when updating.
# Can remove when
# https://review.opendev.org/c/openstack/openstacksdk/+/857987 merges
# Pass in all metadata keys to set_server_metadata so server
# object keeps all the keys
# Only remove keys that exist on the server
# Create member
# Update member
# Delete member
# state == 'absent' and not member:
# Copyright (c) 2016 Hewlett-Packard Enterprise
# Copyright (c) 2024 Red Hat, Inc.
# Fetch cloud param before it is popped
# Create creds
# Recreate immutable creds
# Delete creds
# (c) 2015, Hewlett-Packard Development Company, L.P.
# Fail early on invalid arguments
# User has requested desired state to be in maintenance state.
# Update maintenance state
# self.params['maintenance'] is False
# Maintenance state changed
# Update power state
# User has requested the node be powered off.
# In the event the power has been toggled on and
# deployment has been requested, we need to skip this
# step.
# Node is powered down when it is not waiting to be
# provisioned
# User request has explicitly disabled deployment logic
# Node already in an active state
# TODO(TheJulia): Update instance info, however info is
# deployment specific. Perhaps consider adding rebuild
# support, although there is a known desire to remove
# rebuild support from Ironic at some point in the future.
# TODO(TheJulia): Add more error checking..
# self.params['state'] in ['absent', 'off']
# and node['provision_state'] in 'deleted'
# Copyright (c) 2016 Pason System Corporation
# TODO: Add missing network quota options 'check_limit'
# Some attributes in quota resources don't exist in the api anymore, e.g.
# compute quotas that were simply network proxies, and pre-Octavia network
# quotas. This map allows marking them to be skipped.
# 'fixed_ips',  # Available until Nova API version 2.35
# Available until Nova API version 2.35
# 'injected_file_content_bytes',  # Available until
# 'injected_file_path_bytes',     # Nova API
# 'injected_files',               # version 2.56
# Get current quota values
# If a quota state is set to absent we should assume there will be
# changes. The default quota values are not accessible so we can
# not determine if no changes will occur or not.
# Necessary since we can't tell what the default quotas are
# Try to get subnet by ID
# We do not support snapshot updates yet
# TODO: Implement module updates
# state == 'absent' and not snapshot
# state == 'absent' and not snapshot:
# (c) 2013, Benno Joy <benno@ansible.com>
# Ref.: https://docs.openstack.org/api-ref/network/v2/index.html#update-subnet
# else state is present
# Sort lists before doing comparisons
# fail early if incompatible options have been specified
# At this point filters can only contain project_id
# Copyright (c) 2016 Catalyst IT Limited
# Copyright (c) 2021 by Uemit Seren <uemit.seren@gmail.com>
# Create pool
# Update pool
# Delete pool
# Field listener_id is not returned from self.conn.load_balancer.\
# find_listener() so use listeners instead.
# Field load_balancer_id is not returned from self.conn.\
# load_balancer.find_load_balancer() so use load_balancers instead.
# state == 'absent' and not pool:
# None == auto choose device name
# refresh volume object
# Volume is not attached to this server
# Copyright (c) 2023 Cleura AB
# Create type
# Update type
# Delete type
# refresh volume_type information
# caller might not have permission to query projects
# so assume they gave a project id
# Copyright (c) 2021 by Red Hat, Inc.
# self.conn.baremetal.nodes() does not support searching by name or
# id which we want to provide for backward compatibility
# self.conn.get_machine_by_mac(mac) is not necessary
# because nodes can be filtered by instance_id
# fetch node details with self.conn.baremetal.get_node()
# because self.conn.baremetal.nodes() does not provide a
# query parameter to filter by a node's id
# not node_id
# return empty list when no matching node could be found
# because *_info modules do not raise errors on missing
# not name_or_id and not mac
# keep for backward compatibility
# TODO: Merge with equal function from compute_flavor_access module.
# Workaround for an issue in openstacksdk where
# self.conn.block_storage.find_type() will not
# find private volume types.
# domain has precedence over system
# project has precedence over domain and system
# Create object
# metadata is not returned by
# to_dict(computed=False) so return it explicitly
# Update object
# Delete object
# openstacksdk has no object_store.find_object() function
# state == 'absent' and not object:
# Copyright: (c) 2015, Hewlett-Packard Development Company, L.P.
# self.params['state'] == 'absent'
# First floating ip satisfies our requirements
# A specific floating ip address has been requested
# If a specific floating ip address has been requested
# and it does not exist yet then create it
# openstacksdk's create_ip requires floating_ip_address
# and floating_network_id to be set
# ip
# Requested floating ip address exists already
# Floating ip address exists and has been attached
# but to a different server
# Requested ip has been attached to different server
# Requested floating ip address does not exist or has not been
# assigned to server
# Requested floating ip address has been assigned to server
# and not floating_ip_address
# No specific floating ip has been requested and none of the
# floating ips which have been assigned to the server matches
# add_ips_to_server() will handle several scenarios:
# If a specific floating ip address has been requested then it
# will be attached to the server. The floating ip address has
# either been created in previous steps or it already existed.
# Ref.: https://github.com/openstack/openstacksdk/blob/
# If no specific floating ip address has been requested, reuse
# is allowed and a network has been given (with ip_pool) from
# which floating ip addresses will be drawn, then any existing
# floating ip address from ip_pool=network which is not
# attached to any other server will be attached to the server.
# If no such floating ip address exists or if reuse is not
# allowed, then a new floating ip address will be created
# within ip_pool=network and attached to the server.
# If no specific floating ip address has been requested and no
# network has been given (with ip_pool) from which floating ip
# addresses will be taken, then a floating ip address might be
# added to the server, refer to _needs_floating_ip() for
# Ref.:
# * https://github.com/openstack/openstacksdk/blob/
# Both floating_ip_address and network are mutually exclusive
# in add_ips_to_server(), i.e. add_ips_to_server will ignore
# floating_ip_address if network is not None. To prefer
# attaching a specific floating ip address over assigning any
# fip, ip_pool is only defined if floating_ip_address is None.
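Because the two arguments are mutually exclusive in add_ips_to_server(), the preference can be expressed as a tiny helper; the names and return shape are illustrative:

```python
def choose_ip_args(floating_ip_address, network):
    # prefer attaching the specific floating ip; only fall back to
    # drawing any fip from the network pool when no address was requested
    if floating_ip_address is not None:
        return {'ips': [floating_ip_address], 'ip_pool': None}
    return {'ips': None, 'ip_pool': network}
```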
# No specific floating ip requested
# Found one or more floating ips which satisfy requirements
# update server details such as addresses
# Update the floating ip resource
# ips can be empty, e.g. when server has no private ipv4
# address to which a floating ip address can be attached
# Nothing to detach
# Silently ignore that ip might not be attached to server
# self.conn.network.update_ip(ip_id, port_id=None) does not
# handle nova network but self.conn.detach_ip_from_server()
# does so
# OpenStackSDK sets {"port_id": None} to detach a floating
# ip from a device, but there might be a delay until a
# server does not list it in addresses any more.
# Extract floating ips from server
# fetch server with details
# Returns a list not an iterator here because
# it is iterated several times below
# Check which floating ips matches our requirements.
# They might or might not be attached to our server.
# No specific floating ip and no specific fixed ip have been
# requested but a private network (nat_destination) has been
# given where the floating ip should be attached to.
# not floating_ip_address
# and (fixed_address or not nat_destination_name_or_id)
# An analysis of all floating ips of server is required
# Check if we have any floating ip on
# the given nat_destination network
# One or more floating ip addresses have been assigned
# to the requested nat_destination; return the first.
# Get any of the floating ips that matches fixed_address and/or network
# Requested network does not
# match network of floating ip
# and not nat_destination_name_or_id
# Any floating ip will fulfill these requirements
# A floating ip address has been assigned that
# points to the requested fixed_address
# fetch server details such as addresses
# Copyright (c) 2023 StackHPC Ltd.
# create or update deploy template
# create deploy template
# update deploy template
# remove deploy template
# Copyright 2019 Red Hat, Inc.
# Copyright (c) 2013, John Dewey <john@dewey.ws>
# fetch server details such as server['addresses']
# Create server
# Update server
# Delete server
# No floating ip has been requested, so
# do not add or remove any floating ip.
# Get floating ip addresses attached to the server
# Server has a floating ip address attached and
# no specific floating ip has been requested,
# so nothing to change.
# One or multiple floating ips have been requested,
# but none have been attached, so attach them.
# Nothing to do because either any floating ip address
# or no specific floating ip has been requested,
# and a floating ip has been attached already.
# A specific set of floating ips has been requested
# add specific ips which have not been added
# Detach ips which are not supposed to be attached
# Retrieve IDs of security groups attached to the server
# openstacksdk adds security groups to server using resources
# openstacksdk removes security groups from servers using resources
# Process metadata
# Process server attributes
# Updateable server attributes in openstacksdk
# (OpenStack API names in braces):
# - access_ipv4 (accessIPv4)
# - access_ipv6 (accessIPv6)
# - name (name)
# - hostname (hostname)
# - disk_config (OS-DCF:diskConfig)
# - description (description)
# Ref.: https://docs.openstack.org/api-ref/compute/#update-server
# A server's name cannot be updated by this module because
# it is used to find servers by name or id.
# If name is an id, then we do not have a name to update.
# If name is a name actually, then it was used to find a
# matching server hence the name is the user defined one
# already.
# Update all known updateable attributes although
# our module might not support them yet
# floating ip addresses will only be added if
# we wait until the server has been created
# Ref.: https://opendev.org/openstack/openstacksdk/src/commit/3f81d0001dd994cde990d38f6e2671ee0694d7d5/openstack/cloud/_compute.py#L945
# openstacksdk's create_server() might call meta.add_server_interfaces(
# ) which alters server attributes such as server['addresses']. So we
# do an extra call to compute.get_server() to return a clean server
# resource.
# Ref.: https://opendev.org/openstack/openstacksdk/src/commit/3f81d0001dd994cde990d38f6e2671ee0694d7d5/openstack/cloud/_compute.py#L942
# Nova returns server for some time with the "DELETED" state. Our tests
# are not able to handle this, so wait for server to really disappear.
# Refresh server attributes after security groups etc. have changed
# Use compute.get_server() instead of compute.find_server()
# to include server details
# Add specific ips which have not been added
# Whenever security groups of a server have changed,
# the server object has to be refreshed. This will
# be postponed until all updates have been applied.
# Server object cannot be passed to self.conn.compute.update_server()
# entirely because its security_groups attribute was expanded by
# self.conn.compute.fetch_server_security_groups() previously which
# thus will no longer have a valid value for OpenStack API.
# Whenever server attributes such as metadata have changed,
# Replace net-name with net-id and keep optional nic args
# Ref.: https://github.com/ansible/ansible/pull/20969
# Delete net-name from a copy else it will
# disappear from Ansible's debug output
# state == 'absent' and not server:
# Create container
# Update container
# Delete container
# Swift metadata keys must be treated as case-insensitive
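Since Swift normalizes metadata header casing, a naive dict comparison would report spurious changes. A small sketch of a case-insensitive comparison (hypothetical helper name, not part of the module):

```python
def metadata_needs_update(current, requested):
    """Return the subset of `requested` whose values differ from `current`
    when metadata keys are matched without regard to case, as Swift
    normalizes header casing."""
    lowered = {key.lower(): value for key, value in current.items()}
    return {key: value
            for key, value in requested.items()
            if lowered.get(key.lower()) != value}
```

An empty result means the container metadata is already in the desired state.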
# openstacksdk has no object_store.find_container() function
# object_store.delete_container_metadata() does not delete keys
# from metadata dictionary so reload container
# metadata has higher precedence than delete_metadata_keys
# and thus is applied afterwards
# state == 'absent' and not container:
# We do not support backup updates, because
# openstacksdk does not support it either
# state == 'absent' and not backup
# state == 'absent' and not backup:
# Create flavor
# Update flavor
# Delete flavor
# Keep for backward compatibility
# No need to update extra_specs since flavor will be recreated
# Update flavor after extra_specs removal
# Because only flavor descriptions are updateable,
# flavor has to be recreated to "update" it
# state == 'absent' and not flavor:
# added external gateway info
# map of external fixed ip subnets to addresses
# User passed expected external_fixed_ips configuration.
# Build map of requested ips/subnets.
# Check if external ip addresses need to be added
# mismatching ip for subnet
# adding ext ip with subnet 'subnet'
# Check if external ip addresses need to be removed.
# removing ext ip with subnet (ip clash)
# removing ext ip with subnet
# Check if internal interfaces need update
# need to change interfaces
# We cannot update a router name because the name is used to find
# routers by name, so only a router whose name already matches will
# be considered for updates
# can't send enable_snat unless we have a network
# no external fixed ips requested
# get current external fixed ips
# but router has several external fixed ips
# keep first external fixed ip only
# Undefine external_fixed_ips to have possibility to unset them
# Build external interface configuration
# User passed external_fixed_ips configuration. Initialize ips list
# Build internal interface configuration
# TODO: We allow passing a subnet without specifying a
# portip not set, add any ip from subnet
# portip is set but has invalid value
# portip has valid value
# look for ports whose fixed_ips.ip_address matches
# portip
# portip exists in net already
# No port with portip exists
# hence create a new port
# create ports that are missing
# Port exists for subnet but has the wrong ip. Schedule it for
# First try to find a network in the specified project.
# Fall back to a global search for the network.
# Validate and cache the subnet IDs so we can avoid duplicate checks
# and expensive API calls.
# Check if the system state would be changed
# if state == 'present' and router
# We need to detach all internal interfaces on a router
# before we will be allowed to delete it. Deletion can
# still fail if e.g. floating ips are attached to the
# router.
# name is id for federation protocols
# name is id for identity providers
# Copyright (c) 2021 T-Systems International GmbH
# Create subnet_pool
# Update subnet_pool
# Delete subnet_pool
# state == 'absent' and not subnet_pool:
# Copyright (c) 2020 by Tino Schreiber (Open Telekom Cloud), operated by T-Systems International GmbH
# node does not exist so no port could possibly be found
# (c) 2016, Mathieu Bultel <mbultel@redhat.com>
# (c) 2016, Steve Baker <sbaker@redhat.com>
# This method will always return True if state is present to
# include the case of stack update as there is no simple way
# to check if the stack will indeed be updated
# self.conn.get_stack() will not return stacks with status ==
# DELETE_COMPLETE while self.conn.orchestration.find_stack() will
# do so. A name of a stack which has been deleted completely can be
# reused to create a new stack, hence we want self.conn.get_stack()'s
# behaviour here.
# assume an existing stack always requires updates because there is
# no simple way to check if stack will indeed have to be updated
# Always wait because we expect status to be
# CREATE_COMPLETE or UPDATE_COMPLETE
# Copyright: (c) 2017, VEXXHOST, Inc.
# Regions have IDs but do not have names
# Ref.: https://docs.openstack.org/api-ref/identity/v3/#regions
# Create protocol
# Update protocol
# Delete protocol
# state == 'absent' and not protocol:
# Copyright (c) 2019, Phillipe Smith <phillipelnx@gmail.com>
# a domain must be disabled before it can be deleted and
# openstacksdk's cloud layer delete_domain() will just do that.
# (c) 2015-2016, Hewlett Packard Enterprise Development Company LP
# TODO(TheJulia): diff properties, ?and ports? and determine
# if a change occurred.  In theory, the node is always changed
# if introspection is able to update the record.
# Copyright (c) 2024 Binero AB
# create trunk
# update trunk
# delete trunk
# Copyright (c) 2015 IBM
# name is id for federation mappings
# handle id parameter separately because self.conn.identity.\
# mappings() does not allow filtering by id
# Ref.: https://review.opendev.org/c/openstack/
# Copyright (c) 2022 by Red Hat, Inc.
# NOTE: Keep handling of security group rules synchronized with
# Create security_group_rule
# Only exact matches will cause security_group_rule to be not None
# Delete security_group_rule
# When remote_ip_prefix is missing a netmask, then Neutron will add
# a netmask using Python library netaddr [0] and its IPNetwork
# class [1]. We do not want to introduce additional Python
# dependencies to our code base and neither want to replicate
# netaddr's parse_ip_network code here. So we do not handle
# remote_ip_prefix without a netmask and instead let Neutron handle
# [0] https://opendev.org/openstack/neutron/src/commit/\
# [1] https://github.com/netaddr/netaddr/blob/\
# [2] https://github.com/netaddr/netaddr/blob/\
# Check if the user is supplying -1 for ICMP.
# Rules with 'any' protocol do not match ports
# Replacing this code with self.conn.network.find_security_group_rule()
# is not possible because the latter requires an id or name.
# Check if the user is supplying -1, 1 to 65535 or None values
# for full TPC or UDP port range.
# (None, None) == (1, 65535) == (-1, -1)
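The three equivalent "full range" spellings have to be normalized before comparing rules, otherwise identical rules would look different. A minimal sketch under that assumption (hypothetical helper name):

```python
def normalize_port_range(port_range_min, port_range_max):
    """Map the three equivalent full-range spellings for TCP/UDP to one
    canonical form so that security group rules compare as equal:
    (None, None) == (1, 65535) == (-1, -1)."""
    full_range_spellings = ((None, None), (1, 65535), (-1, -1))
    if (port_range_min, port_range_max) in full_range_spellings:
        return (None, None)
    return (port_range_min, port_range_max)
```

Any specific range such as (80, 80) passes through unchanged.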
# state == 'absent' and not security_group_rule:
# Copyright (c) 2020 Jesper Schmitz Mouridsen.
# Create health_monitor
# Update health_monitor
# Delete health_monitor
# Field pool_id is not returned from self.conn.load_balancer.\
# find_pool() so use pools instead.
# state == 'absent' and not health_monitor:
# Create listener
# Update listener
# Delete listener
# Field load_balancer_id is not returned from self.conn.load_balancer.\
# find_load_balancer() so use load_balancers instead.
# load_balancer.find_listener() so use load_balancers instead.
# state == 'absent' and not listener:
# Copyright (c) 2025, ScaleUp Technologies GmbH & Co. KG
# use network id in query if network parameter was specified
# create port
# update port
# delete port
# A port's name cannot be updated by this module because
# it is used to find ports by name or id.
# matching port hence the name is the user defined one
# updateable port attributes in openstacksdk
# - allowed_address_pairs (allowed_address_pairs)
# - binding_host_id (binding:host_id)
# - binding_profile (binding:profile)
# - binding_vnic_type (binding:vnic_type)
# - data_plane_status (data_plane_status)
# - device_id (device_id)
# - device_owner (device_owner)
# (- device_profile (device_profile))
# - dns_domain (dns_domain)
# - dns_name (dns_name)
# - extra_dhcp_opts (extra_dhcp_opts)
# - fixed_ips (fixed_ips)
# - is_admin_state_up (admin_state_up)
# - is_port_security_enabled (port_security_enabled)
# - mac_address (mac_address)
# - numa_affinity_policy (numa_affinity_policy)
# - qos_policy_id (qos_policy_id)
# - security_group_ids (security_groups)
# Ref.: https://docs.openstack.org/api-ref/network/v2/index.html#update-port
# Update attributes which can be compared straight away
# Compare dictionaries
# Attribute qos_policy_id is not supported by this module and would
# need special handling using self.conn.network.find_qos_policy()
# Compare attributes which are lists of dictionaries
# Compare security groups
# Compare dns attributes
# Fetch IDs of security groups next to fail early
# if any security group does not exist
# state == 'absent' and not port:
# Create cluster_template
# Update cluster_template
# Delete cluster_template
# cluster_template = self.conn.\
# container_infrastructure_management.update_cluster_template(...)
# state == 'absent' and not cluster_template:
# Create zone
# Update zone
# Delete zone
# designate expects upper case PRIMARY or SECONDARY
# state == 'absent' and not zone:
# Copyright (c) 2019, Bram Verschueren <verschueren.bram@gmail.com>
# create or update port
# assert node_name_or_id
# remove port
# Copyright 2016 Jakub Jursa <jakub.jursa1@gmail.com>
# TODO: Add get type_encryption by id
# No change is required
# Create new type encryption
# Update existing type encryption
# absent state requires type encryption delete
# Create security_group
# Update security_group
# Delete security_group
# module options name and project are used to find security group
# and thus cannot be updated
# Consider a change of security group rules only when option
# 'security_group_rules' was defined explicitly, because undefined
# options in our Ansible modules denote "apply no change"
# Update security group with created and deleted rules
# state == 'absent' and not security_group:
# kwargs is for passing arguments to subclasses
# Create resource
# Update resource
# Delete resource
# Fetch details to populate all resource attributes
# use find_* functions for id instead of get_* functions because
# get_* functions raise exceptions when resources cannot be found
# state == 'absent' and not resource:
# TODO(dtantsur): inherit the collection's base module
# DEPRECATED: This argument spec is only used for the deprecated old
# OpenStack modules. It turns out that modern OpenStack auth is WAY
# more complex than this.
# Consume standard OpenStack environment variables.
# This is mainly only useful for ad-hoc command line operation as
# in playbooks one would assume variables would be used appropriately
# Filter out all our custom parameters before passing to AnsibleModule
# for compatibility with old versions
# Due to the name shadowing we should import other way
# For 'interface' parameter, fail if we receive a non-default value
# Probably a cloud configuration/login error
# Fail if any parameters unsupported by this version are set
# New parameters should NOT use 'default' but rely on SDK defaults
# Filter out all arguments that are not from current SDK version
# if we got to this place, modules didn't exit
# Copyright: (c) 2020, Ansible Project
# Copyright: (c) 2020, Red Hat Inc.
# Options for common Helm modules
# Copyright: (c) 2018,  Red Hat | Ansible
# Options used by scale modules.
# Copyright: (c) 2020, Red Hat | Ansible
# Options for specifying object wait
# Copyright: (c) 2018, Red Hat | Ansible
# Options for providing an object configuration
# Options for authenticating with the API.
# Options for selecting or identifying a specific K8s object
# Options for specifying object state
# validate that at least one tool was found
# check input directory
# Based on the docker connection plugin
# Connection plugin for configuring kubernetes containers with kubectl
# (c) 2017, XuXinkun <xuxinkun@gmail.com>
# Note: kubectl runs commands as the user that started the container.
# It is impossible to set the remote user for a kubectl connection.
# Build command options based on doc string
# Translate verify_ssl to skip_verify_ssl, and output as string
# Redact password and token from console log
# -i is needed to keep stdin open which allows pipelining to work
# if the pod has more than one container, then container is required
# kubectl doesn't have native support for copying files into
# running containers, so we use kubectl exec to implement this
# kubectl doesn't have native support for fetching files from
# Copyright: (C), 2018 Red Hat | Ansible
# kubernetes library check happens in common.py
# (c) 2019, Fabian von Feilitzsch <@fabianvf>
# ImportError are managed by the common module already.
# This matches the behavior of kubectl when logging pods via a selector
# Parses selectors on an object based on the specifications documented here:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
# A few resources (like DeploymentConfigs) just use a simple key:value style instead of supporting expressions
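The equality-based part of that selector grammar can be sketched as below; set-based operators (`in`, `notin`) and the simple key:value dict style are deliberately left to the caller, and the helper names are hypothetical:

```python
def parse_selector(selector):
    """Parse an equality-based label selector like "app=web,tier!=db" into
    (key, op, value) requirements; a bare key means "key exists"."""
    requirements = []
    for term in selector.split(','):
        term = term.strip()
        if not term:
            continue
        if '!=' in term:                      # check '!=' before '=' since it contains '='
            key, value = term.split('!=', 1)
            requirements.append((key.strip(), '!=', value.strip()))
        elif '==' in term:                    # '==' is an alias for '='
            key, value = term.split('==', 1)
            requirements.append((key.strip(), '=', value.strip()))
        elif '=' in term:
            key, value = term.split('=', 1)
            requirements.append((key.strip(), '=', value.strip()))
        else:
            requirements.append((term, 'exists', None))
    return requirements

def matches(labels, selector):
    """Check a labels dict against every requirement of the selector."""
    for key, op, value in parse_selector(selector):
        if op == 'exists' and key not in labels:
            return False
        if op == '=' and labels.get(key) != value:
            return False
        if op == '!=' and labels.get(key) == value:
            return False
    return True
```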
# Get repository from all repositories added
# Get repository status
# no repo => rc=1 and 'no repositories to show' in output
# Install repository
# Delete repository
# (c) 2018, Chris Houseknecht <@chouseknecht>
# (c) 2021, Aubin Bikouo <@abikouo>
# Get Release from all deployed releases
# Get Release state from deployed release
# not install
# Copyright (c) 2020, Abhijeet Kasurde
# Handled during module setup
# Copyright (c) 2018, KubeVirt Team <@kubevirt>
# 'resource_definition:' has lower priority than module parameters
# Copyright: (c) 2020, Julien Huon <@julienhuon> Institut National de l'Audiovisuel
# Copyright (c) 2021, Aubin Bikouo <@abikouo>
# check mirror pod: cannot be delete using API Server
# Any finished pod can be deleted
# Pod with local storage cannot be deleted
# Check replicated Pod
# Pod not managed will be deleted as 'force' is true
# mirror pods warning
# local storage
# DaemonSet managed Pods
# delete options
# Mark node as unschedulable
# Filter pods
# Delete Pods
# drain node
# Delete or Evict Pods
# Copyright (c) 2021, Alina Buzachis <@alinabuzachis>
# ImportErrors are handled during module setup
# There are new taints to be added
# Patch with the new taints
# No new taints to be added, but maybe there is something to be updated
# Nothing to be removed
# Copyright: (c) 2021, Aubin Bikouo <@abikouo>
# Copyright (c) 2020, Red Hat
# Load kubernetes.client.Configuration
# hack because passing the container as None breaks things
# default to the first container available on pod
# Some commands might change the environment but ultimately fail in the end
# Handled in module setup
# when scaling multiple resource, the 'result' is changed to 'results' and is a list
# append result to the return attribute
# '--replace' is not supported by 'upgrade -i'
# install/upgrade
# the 'appVersion' specification is optional in a chart
# when deployed without an 'appVersion' chart value, the 'helm list' command will return the entry `app_version: ""`
# Helm options
# Get real/deployed release status
# skip release statuses 'uninstalled' and 'uninstalling'
# Fetch chart info to have real version and real name for chart_ref from archive, folder or url
# Can't use '--dependency-update' with 'helm upgrade' that is the
# default chart install method, so if chart_repo_url is defined
# we can't use the dependency update command. But, in the near future
# we can get rid of this method and use only '--dependency-update'
# option. Please see https://github.com/helm/helm/pull/8810
# To not add --dependency-update option in the deploy function
# Not installed
# Set `helm pull` arguments requiring values
# Set `helm pull` arguments flags
# (c) 2018, Will Thames <@willthames>
# Copyright: © Ericsson AB 2024
# https://github.com/ansible-collections/kubernetes.core/issues/944
# Copyright (c) 2017, Toshio Kuratomi <tkuraotmi@ansible.com>
# Copyright (c) 2020, Ansible Project
# Get vault decrypted tmp file
# treat this as raw_params
# Options type validation strings
# Option `lstrip_blocks' was added in Jinja2 version 2.7.
# template is only supported by k8s module.
# local_path is only supported by k8s_cp module.
# find the kubeconfig in the expected search path
# kubeconfig is local
# decrypt kubeconfig found
# Check current transport connection and depending upon
# look for kubeconfig and src
# 'local' => look files on Ansible Controller
# Transport other than 'local' => look files on remote node
# find the file in the expected search path
# src is on remote node
# src is local
# Execute the k8s_* module.
# Map kubernetes-client parameters to ansible parameters
# Copyright [2021] [Red Hat, Inc.]
# Copyright [2017] [Red Hat, Inc.]
# patch merge keys taken from generated.proto files under
# staging/src/k8s.io/api in kubernetes/kubernetes
# ensure that last_applied doesn't come back as a dict of unicode key/value pairs
# json.loads can be used if we stop supporting python 2
# server_side_apply forces content_type to 'application/apply-patch+yaml'
# The patch is the difference from actual to desired without deletions, plus deletions
# from last_applied to desired. To find it, we compute deletions, which are the deletions from
# last_applied to desired, and delta, which is the difference from actual to desired without
# deletions, and then apply delta to deletions as a patch, which should be strictly additive.
# list_merge applies a strategic merge to a set of lists if the patchMergeKey is known
# each item in the list is compared based on the patchMergeKey - if two values with the
# same patchMergeKey differ, we take the keys that are in last applied, compare the
# actual and desired for those keys, and update if any differ
# reinsert patch merge key to relate changes in other keys to
# a specific list element
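The deletions-plus-delta composition described above can be sketched for plain dicts; the list handling with patchMergeKey is omitted and the helper names are hypothetical:

```python
def compute_deletions(last_applied, desired):
    """Keys present in last_applied but absent from desired become None,
    the JSON-merge-patch convention for deleting a field."""
    patch = {}
    for key, old in last_applied.items():
        if key not in desired:
            patch[key] = None
        elif isinstance(old, dict) and isinstance(desired[key], dict):
            sub = compute_deletions(old, desired[key])
            if sub:
                patch[key] = sub
    return patch

def compute_delta(actual, desired):
    """Difference from actual to desired without any deletions."""
    patch = {}
    for key, want in desired.items():
        have = actual.get(key)
        if isinstance(have, dict) and isinstance(want, dict):
            sub = compute_delta(have, want)
            if sub:
                patch[key] = sub
        elif have != want:
            patch[key] = want
    return patch

def merge(patch, base):
    """Apply `patch` on top of `base`; strictly additive for dicts."""
    result = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(value, result[key])
        else:
            result[key] = value
    return result
```

The final patch is then `merge(compute_delta(actual, desired), compute_deletions(last_applied, desired))`.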
# from ansible_collections.kubernetes.core.plugins.module_utils.ansiblemodule import AnsibleModule
# check if file exists
# check is remote path exists and is a file or directory
# find executable to list dir with
# create directory to copy file in
# remove trailing slash from destination path
# push command in chunk mode
# Copyright 2018 Red Hat | Ansible
# Download file
# Once we drop support for Ansible 2.9, ansible-base 2.10, and ansible-core 2.11, we can
# remove the _version.py file, and replace the following import by
# Implement ConfigMapHash and SecretHash equivalents
# Based on https://github.com/kubernetes/kubernetes/pull/49961
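A rough analogue of that hashing scheme: hash a canonical JSON encoding of the data and keep a short name-friendly prefix. kubectl additionally re-encodes the digest to avoid characters that are invalid in resource names; this sketch simply takes the first 10 hex characters and is not byte-compatible with kubectl's output:

```python
import hashlib
import json

def sketch_configmap_hash(data):
    """Deterministic short hash over a ConfigMap-style data dict.

    Canonical JSON (sorted keys, no whitespace) keeps the digest stable
    across dict orderings; the 10-char prefix mimics the suffix length
    kubectl appends to generated ConfigMap/Secret names.
    """
    encoded = json.dumps({"data": data}, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(encoded.encode("utf-8")).hexdigest()[:10]
```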
# Get name from metadata
# Workaround until https://github.com/helm/helm/pull/8622 is merged
# update certs from kubeconfig
# Helm 3 return "null" string when no values are set
# This is intended to provide a portable method for getting a username.
# It could, and maybe should, be replaced by getpass.getuser() but, due
# to a lack of portability testing the original code is being left in
# place.
# Version mismatch, need to refresh cache
# Prevent duplicate keys
# If there are multiple matches, prefer exact matches on api_version
# If there are multiple matches, prefer non-List kinds
# if multiple resources are found that share a GVK, prefer the one with the most supported verbs
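Those three tie-breaking rules compose naturally as a sort key. A sketch using plain dicts for the candidates (the real client objects expose attributes instead; the function name is hypothetical):

```python
def pick_resource(matches, api_version):
    """Rank candidate resources: exact api_version match first, then
    non-List kinds, then the candidate supporting the most verbs."""
    def rank(resource):
        return (
            resource["group_version"] == api_version,  # exact match wins
            not resource["kind"].endswith("List"),     # prefer non-List kinds
            len(resource["verbs"]),                    # then most verbs
        )
    return max(matches, key=rank)
```

Python compares the boolean tuple elements before falling through to the verb count, which matches the stated preference order.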
# These are defined only for the sake of Ansible's checked import requirement
# FIXME: frustratingly bool(deployment.status) is True even if status is empty
# Furthermore deployment.status.availableReplicas == deployment.status.replicas == None if status is empty
# deployment.status.replicas is None is perfectly ok if desired replicas == 0
# Scaling up means that we also need to check that we're not in a
# situation where status.replicas == status.availableReplicas
# but spec.replicas != status.replicas
# These may be None
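The None-tolerant readiness check those caveats demand can be sketched with a plain status dict (the real client returns attribute-style objects; the function name is hypothetical):

```python
def deployment_ready(spec_replicas, status):
    """Decide whether a Deployment has converged.

    Status fields may be None or missing entirely (an empty status object
    can still be truthy in the client, hence explicit field checks).
    """
    replicas = status.get("replicas")
    available = status.get("availableReplicas")
    updated = status.get("updatedReplicas")
    if spec_replicas == 0:
        # desired replicas == 0: status.replicas being None is perfectly ok
        return not replicas
    return (
        replicas == spec_replicas
        and available == spec_replicas
        and updated == spec_replicas
    )
```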
# There should never be more than one condition of a specific type
# Extract conditions from the resource's status
# Retry connection errors as it may be intermittent network issues
# The better solution would be typing.Protocol, but this is only in 3.8+
# Copyright: (c) 2021, Red Hat | Ansible
# ignore append_hash for resources other than ConfigMap and Secret
# With a timeout of 0 the waiter will do a single check and return, effectively not waiting.
# This is an initial check to get the resource or resources that we then need to wait on individually.
# There is either no result or there is a List resource with no items
# Now wait for the specified state of any resource instances we have found.
# Some resources, like ProjectRequests, can't be created multiple times,
# because the resources that they create don't match their kind
# In this case we'll mark it as unchanged and warn the user
# Delete the object
# If only metadata.generation and metadata.resourceVersion changed, ignore it
# hide_field should be able to cope with simple or more complicated
# field definitions
# e.g. status or metadata.managedFields or
# spec.template.spec.containers[0].env[3].value or
# metadata.annotations[kubectl.kubernetes.io/last-applied-configuration]
# Sort with reverse=true so that when we delete an item from the list, the order is not changed
# hide_field_split2 returns the first key in hidden_field and the rest of the hidden_field
# We expect the first key to either be in brackets, to be terminated by the start of a left
# bracket, or to be terminated by a dot.
# examples would be:
# field.another.next -> (field, another.next)
# field[key].value -> (field, [key].value)
# [key].value -> (key, value)
# [one][two] -> (one, [two])
# skip past right bracket and any following dot
# noqa: E203
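The splitting rule spelled out by those examples can be sketched directly; this is a plausible implementation of the described behavior, not necessarily the module's exact code:

```python
def hide_field_split2(field):
    """Split a hide_field path into its first key and the remainder.

    The first key is either wrapped in brackets, terminated by the start
    of a left bracket, or terminated by a dot:
        field.another.next -> (field, another.next)
        field[key].value   -> (field, [key].value)
        [key].value        -> (key, value)
        [one][two]         -> (one, [two])
    """
    if field.startswith('['):
        end = field.index(']')
        key = field[1:end]
        rest = field[end + 1:]
        if rest.startswith('.'):       # skip past right bracket and any following dot
            rest = rest[1:]
        return key, rest
    for i, ch in enumerate(field):
        if ch == '[':
            return field[:i], field[i:]
        if ch == '.':
            return field[:i], field[i + 1:]
    return field, ''
```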
# We'll create an empty definition and let merge_params set values
# from the module parameters.
# The following should only be set if we have values for them
# kubernetes import error is handled in module setup
# This is defined only for the sake of Ansible's checked import requirement
# If authorization variables aren't defined, look for them in environment variables
# Aliases in kwargs
# specific case for 'proxy_headers' which is a dictionary
# Removing trailing slashes if any from hostname
# We have enough in the parameters to authenticate, no need to load incluster or kubeconfig
# First try to do incluster config, then kubeconfig
# Override any values in the default configuration with Ansible parameters
# As of kubernetes-client v12.0.0, get_default_copy() is required here
# Delete all resources in the namespace for the specified resource type
# If needed, wait and/or create diff
# wait logic is a bit different for delete as `instance` may be a status object
# Connection plugin for building container images using buildah tool
# Written by: Tomas Tomecek (https://github.com/TomasTomecek)
# this _has to be_ named Connection
# String used to identify this Connection class from other classes
# container filesystem will be mounted here on host
# `buildah inspect` doesn't contain info about what the default user is -- if it's not
# set, it's empty
# shlex.split has a bug with text strings on Python-2.6 and can only handle text strings on Python-3
# Based on the buildah connection plugin
# Connection plugin to interact with existing podman containers.
# We know that pipelining does not work with podman. Do not enable it, or
# users will start containers and fail to connect to them.
# we actually don't need to unmount since the container is mounted anyway
# rc, stdout, stderr = self._podman("umount")
# display.vvvvv("RC %s STDOUT %r STDERR %r" % (rc, stdout, stderr))
# Written by Janos Gerzson (grzs@backendo.com)
# -i is required, because
# podman unshare should be executed in a login shell to avoid chdir permission errors
# Copyright (c) 2025
# Logging is controlled by Ansible verbosity flags
# name filtering
# label filtering
# additional include/exclude filters
# Set connection plugin and remote_addr (container id or name works)
# Common vars
# Composed and keyed groups
# Try built-in helper first (signature may vary by ansible-core), do not fail hard
# Always run manual keyed grouping to support dotted keys like labels.role
# Resolve dotted key path against hostvars
# Sanitize group names per Ansible rules
# 'buildah containers -a --format json' lists working containers
# Copyright (c) 2025 Ansible Project
# Compare all relevant attributes except Name
# Exact protocol match required for full URIs
# Simple hostname format - check if it's contained in the expanded URI
# podman expands "user@host" to "ssh://user@host:22/run/user/uid/podman/podman.sock"
# Check if URI matches the expected destination format
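A sketch of that comparison: full URIs must match exactly, while a bare `user@host` destination is compared against the user/host part of the expanded URI (hypothetical helper name, and the port-stripping is an assumption):

```python
from urllib.parse import urlparse

def destination_matches(destination, expanded_uri):
    """Check whether a user-supplied destination refers to the URI podman
    stored, given that podman expands "user@host" to a full ssh:// URI
    such as "ssh://user@host:22/run/user/<uid>/podman/podman.sock"."""
    if "://" in destination:
        # Full URIs must match exactly, including the protocol
        return destination == expanded_uri
    parsed = urlparse(expanded_uri)
    user_host = parsed.netloc.rsplit(":", 1)[0]  # drop the port
    return destination == user_host
```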
# Check if default status needs to change
# Compare "Default" only when it's set explicitly in parameters,
# first system connection is always default
# Check Identity if provided
# Validate required parameters - always require destination when state=present (except for rename)
# Get current connection state
# Handle rename operation
# Check if new name already exists
# Check if the existing connection with new_name is identical to current_conn
# Connections are identical - idempotent, no changes needed
# Different connection exists with the same name - fail
# Perform rename
# Get connection info after rename
# Handle add/update operation (destination is always provided)
# Remove existing connection if it exists with different parameters
# Add the connection with desired parameters
# Get final connection state
# In check mode, provide expected values
# Connection exists and matches desired state - idempotent
# Copyright (c) 2020 Red Hat
# flake8: noqa: E501
# make any changes here to self.defaults related to the podman version
# Removing GID and UID from options list
# Collecting all other options in the list
# # For UID, GID
# if 'uid' in self.info or 'gid' in self.info:
# Check non idempotent parameters
# pylint: disable=unused-variable
# check if running not from root
# in case of mount/unmount, return path to the volume from stdout
# -*- coding: utf-8 -*-
# 2022, Sébastien Gendre <seb@k-7.ch>
# Flag which indicate whether the targeted system state is modified
# Build the podman command, based on the module parameters
# add the restart policy to options
# Environment variables (only for Podman 4.3.0 and above)
# Run the podman command to generated systemd .service unit(s) content
# In case of error in running the command
# Print information about the error and return an empty dictionary
# In case of command execution success, its stdout is a json
# dictionary. This dictionary is all the generated systemd units.
# Each key value pair is one systemd unit. The key is the unit name
# and the value is the unit content.
# Load the returned json dictionary as a python dictionary
# Write the systemd .service unit(s) content to file(s), if
# If the destination doesn't exist
# If not in check mode, create it
# If the destination exists but is not a directory
# Stop and tell the user that the destination is not a directory
# Write each systemd unit, if needed
# Build full path to unit file
# Force to replace the existing unit file
# See if we need to write the unit file, default yes
# Write the file, if needed
# If not in check mode, write the file
# When exception occurs while trying to write units file
# Return the systemd .service unit(s) content
# Build the list of parameters user can use
# Build result dictionary
# Build the Ansible Module
# Generate the systemd units
# Return the result
# work on input vars
# Copyright (c) 2023, Roberto Alfieri <ralfieri@redhat.com>
# In check mode, do not execute the logout command. We cannot reliably
# determine current login state here without side effects, so report
# no change.
# Treat "not logged into" as a no-op (idempotent) regardless of
# capitalization differences across podman versions.
# If the command is successful, we managed to log out
# Mind: this also applies if the --all flag is used, though in that
# case there is no check whether one has been logged into any registry
# The command will return successfully but not log out the user if the
# credentials were initially created using docker. Catch this behaviour:
# Copyright (c) 2021, Christian Bourque <@ocafebabe>
# Copyright (c) 2023, Takuya Nishimura <@nishipy>
# podman_container_exec always returns changed=true
# The 'exists' test is available in podman >= 0.12.1
# The error message is e.g. 'Error: not logged into docker.io'
# Therefore get last word to extract registry name
# Handle quadlet state separately
# This should not fail in regular circumstances, so retry again
# https://github.com/containers/podman/issues/10225
# Use a checksum to check if the auth JSON has changed
# podman falls back to ~/.docker/config.json if the default authfile doesn't exist
# If the command is successful, we managed to login
# If we have managed to calculate a checksum before, check if it has changed
# due to the login
# Copyright (c) 2021, Sagi Shnaidman <sshnaidm@redhat.com>
# Filter for specific connection
# if not filtered_connections:
# For v3 it's impossible to find out DNS settings.
# compare only if set explicitly
# Currently only bridge is supported
# We don't support dual stack because it generates subnets randomly
# Disable idempotency of subnet for v4, subnets are added automatically
# TODO(sshnaidm): check if it's still the issue in v5
# TODO(sshnaidm): implement IP to CIDR convert and vice versa
# Disable idempotency of subnet for v3 and below
# We can't support dual stack, it generates subnets randomly
# We can't guess what subnet was used before by default
# for IP range and GW to set 'subnet' is required
# define or subnet or net config
# Copyright (c) 2024 Red Hat
# Copyright (c) 2023, Pavel Dostal <@pdostal>
# For Podman < 4.x
# For Podman > 4.x
# pod_name = extract_pod_name(module.params['kube_file'])
# hack to check if no resources are deleted
# the following formats are matched for a kube name:
# should match name field within metadata (2 or 4 spaces in front of name)
# the name can be written without quotes, in single or double quotes
# the name can contain -_
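The matching rules listed above can be approximated with a regex sketch (the pattern below is an illustration of those rules, not the module's exact expression):

```python
import re

# 2 or 4 leading spaces, 'name:', then an optionally quoted identifier
# that may contain '-' and '_'.
NAME_RE = re.compile(
    r'^(?: {2}| {4})name:\s+["\']?([\w-]+)["\']?\s*$', re.MULTILINE
)


def kube_names(kube_yaml_text):
    """Return all metadata name values found in a kube YAML document."""
    return NAME_RE.findall(kube_yaml_text)
```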
# Find all pods
# In case of one pod or replicasets
# Delete all pods
# Create a pod
# Remove detach from create command and don't set if attach is true
# Exception for etc_hosts and add-host
# Exception for healthcheck and healthcheck-command
# Exception for network_aliases and network-alias
# Exception for restart_policy and restart
# Exception for secrets and secret
# Exception for timezone and tz
# Add your own args for podman command
# https://github.com/containers/libpod/pull/5669
# Disabling idempotency check for cgroups as it's added by systemd generator
# https://github.com/containers/ansible-podman-collections/issues/775
# def diffparam_cgroups(self):
# Disabling idempotency check for cidfile as it's added by systemd generator
# def diffparam_cidfile(self):
# TODO(sshnaidm): to inspect image to get the default command
# Healthcheck is only defined in container config if a healthcheck
# was configured; otherwise the config key isn't part of the config.
# the "test" key is a list of 2 items where the first one is
# "CMD-SHELL" and the second one is the actual healthcheck command.
# In idempotency 'lite mode', assume all images from different registries are the same
# Strip out labels that are coming from systemd files
# https://github.com/containers/ansible-podman-collections/issues/276
# def diffparam_pod(self):
# https://github.com/containers/ansible-podman-collections/issues/828
# def diffparam_pod_id_file(self):
# Disabling idempotency check for sdnotify as it's added by systemd generator
# def diffparam_sdnotify(self):
# if (LooseVersion(self.version) >= LooseVersion('1.8.0')
# Disabling idempotency check for exit policy as it's added by systemd generator
# https://github.com/containers/ansible-podman-collections/issues/774
# def diffparam_exit_policy(self):
# TODO(sshnaidm): https://github.com/containers/podman/issues/6968
# Disabling idempotency check for infra_conmon_pidfile as it's added by systemd generator
# def diffparam_infra_conmon_pidfile(self):
# Disabling idempotency check for pod id file as it's added by systemd generator
# older podman versions (1.6.x) don't have status in 'podman pod inspect'
# if other methods fail, use 'podman pod ps'
# from podman 5 onwards, this is a list of dicts,
# before it was just a single dict when querying
# a single pod
# self.pod.unpause()  TODO(sshnaidm): to unpause if state == started?
# Select correct parameter name based on version
# File does not exist, so all lines in file_content are different
# Read the file
# Function to remove comments from file content
# Remove comments from both file contents before comparison
# Get the different lines between the two contents
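A compact sketch of that compare-ignoring-comments flow (helper names are hypothetical):

```python
import os


def strip_comments(text):
    """Drop comment lines and blank lines before comparison."""
    return [
        line for line in text.splitlines()
        if line.strip() and not line.lstrip().startswith("#")
    ]


def changed_lines(path, new_content):
    """Return the lines that differ between the file on disk and new_content."""
    if not os.path.exists(path):
        # File does not exist, so every line in new_content is a change.
        return strip_comments(new_content)
    with open(path) as f:
        old = strip_comments(f.read())
    new = strip_comments(new_content)
    return [l for l in new if l not in old] + [l for l in old if l not in new]
```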
# We don't know where systemd files are located, nothing to delete
# Generated from https://github.com/containers/podman/blob/main/pkg/signal/signal_linux.go
# and https://github.com/containers/podman/blob/main/pkg/signal/signal_linux_mipsx.go
# Remove command args from the list
# This is a boolean argument and doesn't have value
# This is a key=value argument
# This is also a false/true switching argument
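The argument shapes described above (boolean switch, `key=value`, and flag followed by its value) suggest a filter along these lines (a sketch; the flag sets are illustrative, not the module's actual tables):

```python
def strip_args(cmd_args, bool_flags, value_flags):
    """Remove known flags from a podman-style argument list.

    bool_flags take no value (e.g. '--detach'); value_flags may appear
    either as '--name=web' or as '--name web' (flag then value).
    """
    out = []
    skip_next = False
    for arg in cmd_args:
        if skip_next:
            skip_next = False
            continue
        name = arg.split("=", 1)[0]
        if name in bool_flags:
            continue              # boolean switch, no value to skip
        if name in value_flags:
            if "=" not in arg:
                skip_next = True  # value is the following token
            continue
        out.append(arg)
    return out
```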
# Handle inline container file
# Return the command that will be executed for podman_actions tracking
# Extract image ID from output
# Add authentication
# Add build-specific arguments
# Add annotations
# Add containerfile hash as label
# Add volumes
# Add build file
# Add target
# Add extra args
# Add build context path
# Fallback to last line if no --> found
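The ID extraction with its last-line fallback might look like this (hypothetical helper name; output format assumed from typical `podman build` logs):

```python
def extract_image_id(build_output):
    """Pull the built image ID out of podman build output.

    Successful builds print a line like '--> 1a2b3c4d5e6f'; if no such
    marker is present, fall back to the last non-empty line.
    """
    lines = [l for l in build_output.splitlines() if l.strip()]
    for line in reversed(lines):
        if line.startswith("-->"):
            return line.split("-->", 1)[1].strip()
    return lines[-1].strip() if lines else None
```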
# Special handling for scp transport which uses 'podman image scp'
# Allow passing global --ssh options to podman
# Extra args (e.g., --quiet) if provided
# Source image (local)
# Destination host spec
# If user did not include '::' in dest, append it to copy into remote storage with same name
# Default push behavior for all other transports
# Build destination
# Handle simple registry destination
# Handle transport-specific destinations
# Initialize repository info
# Initialize components
# Initialize containerfile processor
# Results tracking
# Check architecture if specified
# Check Containerfile hash
# Get existing hash from image labels
# Update results with final image info
# Validate build configuration
# Get containerfile hash
# Build the image
# Try removing by ID if provided
# Remove tag if present
# Quadlet functionality will be handled by the main module
# Copyright (c) 2024 Sagi Shnaidman (@sshnaidm)
# This should be implemented in child classes if needed.
# Add an entry for each item in the list
# Add a single entry for the key
# the following are not implemented yet in Podman module
# end of not implemented yet
# Does not exist in module parameters
# add it in security_opt
# All these are in security_opt
# --security-opt unmask=ALL
# Work on params in params_map and convert them to a right form
# Work on params which are not in the param_map but can be calculated
# Work on params which are not in the param_map and add them to PodmanArgs
# Return params with custom processing applied
# Add more parameter mappings specific to networks
# This is an inherited class that represents a Quadlet file for the Podman pod
# This is an inherited class that represents a Quadlet file for the Podman volume
# 'opt': 'Options',
# This is an inherited class that represents a Quadlet file for the Podman kube
# This is an inherited class that represents a Quadlet file for the Podman image
# if params['validate_certs'] is not None:
# Let's detect which user is running
# Create a filename based on the issuer
# Check if the directory exists and is writable
# Specify file permissions
# default mode for new quadlet file only
# Check if file already exists and if it's different
# adjust file permissions
# Check with following command:
# QUADLET_UNIT_DIRS=<Directory> /usr/lib/systemd/system-generators/podman-system-generator {--user} --dryrun
# Copyright: (c) 2014, Trond Hindenes <trond@hindenes.com>
# Copyright: (c) 2020, Chocolatey Software
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
# this is a Windows documentation stub. Actual code lives in the .ps1
# file of the same name
# * Better parsing when a package has dependencies - currently fails
# * Time each item that is run
# * Support 'changed' with gems - would require shelling out to `gem list` first and parsing, kinda defeating the point of using chocolatey.
# * Version provided not as string might be translated to 6,6 depending on Locale (results in errors)
# Copyright: (c) 2018, Simon Baerlocher <s.baerlocher@sbaerlocher.ch>
# Copyright: (c) 2018, ITIGO AG <opensource@itigo.ch>
# (C) 2019 Red Hat Inc.
# Copyright (C) 2019 Western Telematic Inc.
# Module to retrieve WTI alarm information from WTI OOB and PDU devices.
# CPM remote_management
# define the available arguments/parameters that a user can pass to
# the module
# Module to upgrade the firmware on WTI OOB and PDU devices.
# define the available arguments/parameters that a user can pass to the module
# if a local file was defined, let's see what family it is: Console or Power
# 1. Get the Version of the WTI device
# remove any 'alpha' or 'beta' designations if they are present
# 2. Go online and find the latest version of the os image for this device family
# filter out keep-alive new chunks
# SEND the file to the WTI device
# 3. upload new os image to WTI device
# This is the correct syntax
# only remove if the file was downloaded
# Copyright (C) 2021 Western Telematic Inc.
# Module to retrieve WTI Network SYSLOG Server Parameters from WTI OOB and PDU devices.
# Module to retrieve WTI firmware information from WTI OOB and PDU devices.
# Module to retrieve WTI Network Interface Parameters from WTI OOB and PDU devices.
# Module to execute WTI Serial Port Connection commands on WTI OOB and PDU devices.
# Copyright (C) 2020 Western Telematic Inc.
# Module to configure WTI network SNMP Parameters on WTI OOB and PDU devices.
# Module to retrieve WTI Serial Port Parameters from WTI OOB and PDU devices.
# Module to retrieve WTI Power information from WTI OOB and PDU devices.
# (C) 2018 Red Hat Inc.
# Copyright (C) 2018 Western Telematic Inc.
# Module to execute CPM User Commands on WTI OOB and PDU devices.
# for adding, there must be a password present
# Module to configure WTI network SYSLOG Client Parameters on WTI OOB and PDU devices.
# read in the list of syslog client addresses
# the number of indices and addresses must match
# read in the list of syslog client ports
# read in the list of syslog client transport protocols
# read in the list of syslog client secure enable
# see if the port number was changed
# see if the transport type was changed
# see if the secure choice was changed
# Copyright (C) 2018 Western Telematic Inc. <kenp@wti.com>
# Module to execute WTI Plug Commands on WTI OOB and PDU devices.
# WTI remote_management
# Module to retrieve WTI Parameters from WTI OOB and PDU devices.
# Module to retrieve WTI Serial Port Connection status from WTI OOB and PDU devices.
# (C) 2023 Red Hat Inc.
# Copyright (C) 2023 Western Telematic Inc.
# Module to configure WTI network DNS Services Parameters on WTI OOB and PDU devices.
# Module to retrieve WTI Network IPTables Parameters from WTI OOB and PDU devices.
# Module to retrieve WTI Current information from WTI OOB and PDU devices.
# Module to execute WTI hostname parameters from WTI OOB and PDU devices.
# Module to retrieve WTI temperature information from WTI OOB and PDU devices.
# Module to retrieve WTI hostname parameters from WTI OOB and PDU devices.
# Module to retrieve WTI Network SYSLOG Client Parameters from WTI OOB and PDU devices.
# Module to configure WTI network IPTables Parameters on WTI OOB and PDU devices.
# Module to execute WTI Serial Port Parameters on WTI OOB and PDU devices.
# Module to retrieve WTI general status information from WTI OOB and PDU devices.
# Module to retrieve WTI time date Parameters from WTI OOB and PDU devices.
# Module to retrieve WTI Network SNMP Parameters from WTI OOB and PDU devices.
# Copyright (C) 2024 Western Telematic Inc.
# Module to retrieve WTI Network Web Parameters from WTI OOB and PDU devices.
# Module to retrieve WTI Network DNS Services Parameters from WTI OOB and PDU devices.
# Module to execute WTI time date Parameters on WTI OOB and PDU devices.
# end ietf-ipv4 block
# end ietf-ipv6 block
# end ntp block
# Module to execute WTI Plug Configuration Commands on WTI OOB and PDU devices.
# Module to execute WTI network interface Parameters on WTI OOB and PDU devices.
# make sure we are working with the correct ethernet port
# end of dhcpclient
# end of ietf-ipv4
# end of interface
# Module to configure WTI network SYSLOG Server Parameters on WTI OOB and PDU devices.
# (C) 2024 Red Hat Inc.
# Module to configure WTI network WEB Parameters on WTI OOB and PDU devices.
# Put the JSON request together
# Copyright: (c) 2015, Peter Sprygada <psprygada@ansible.com>
# this filters out less specific lines
# Color codes
# Clear line (CSI K)
# Xterm change cursor mode (CSI ? 1 [h|l])
# Xterm change keypad (ESC [=|>])
# Xterm window title string (OSC <title string> BEL)
# Copyright 2024 Red Hat
# Copyright: (c) 2017, Ansible by Red Hat, inc
# remove default in aggregate spec, to handle common arguments
# Copyright 2019 Red Hat
# This file is auto generated by the resource
# Do not edit this file manually.
# Changes to this file will be over written
# Changes should be made in the model used to
# Delete all filtered configs
# get the current active config from the node or passed in via
# the config param
# create the candidate config object from the arguments
# create loadable config that includes only the configuration updates
# (c) 2017, Ansible by Red Hat, inc
# This file is part of Ansible by Red Hat
# domain-search differs in 1.3- and 1.4+
# state='absent' by itself has special meaning
# Clear everything
# These keys are lists which may need to be reconciled with the device
# Empty list was passed, delete all values
# look both ways for public_keys to handle replacement
# if key doesn't exist in the item, get it from module.params
# validate the param value (if validator func exists)
# Always run at least once, and then `retries` more times.
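That retry contract ("at least once, then `retries` more times") is just a `range(retries + 1)` loop; a sketch:

```python
def run_with_retries(func, retries):
    """Call func at least once, then up to `retries` more times on failure."""
    last_exc = None
    for _attempt in range(retries + 1):  # 1 initial try + `retries` retries
        try:
            return func()
        except Exception as exc:  # broad catch is fine for a sketch
            last_exc = exc
    raise last_exc
```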
# If _DEVICE_CONFIGS is non-empty and module.params["match"] is "none",
# return the cached device configurations. This avoids redundant calls
# to the connection when no specific match criteria are provided.
# parse native config using the Route_maps template
# import epdb;epdb.serve()
# typically data is populated from the current device configuration
# data = connection.get('show running-config | section ^interface')
# using mock data instead
# remove redundancies
# fancy regex to make sure we don't get a substring
# Copyright 2022 Red Hat
# parse native config using the Snmp_server template
# parse native config using the Hostname template
# parse native config using the Prefix_lists template
# parse native config using the Ntp template
# split the config into instances of the resource
# check 1.4+ first
# for pre 1.4, this is a string including possible commas
# and ! as an inverter. For 1.4+ this is a single flag per
# command and 'not' as the inverter
# pre 1.4 version with multiple flags
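The two flag formats could be normalized as follows (a sketch: the field names and the `legacy` switch are illustrative, not the collection's API):

```python
def parse_flags(value, legacy):
    """Normalize firewall TCP flag config across versions (sketch).

    Pre-1.4: one string like 'syn,!ack' where '!' inverts a flag.
    1.4+: one flag per command, with a separate boolean for 'not';
    modeled here as a list of (flag, inverted) pairs.
    """
    if legacy:
        out = []
        for token in value.split(","):
            token = token.strip()
            out.append({"flag": token.lstrip("!"),
                        "invert": token.startswith("!")})
        return out
    return [{"flag": f, "invert": inv} for f, inv in value]
```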
# <1.3 could be # (type), #/# (type/code) or 'type' (type_name)
# in recent versions this is only for strings
# type/code
# number/unit
# operate on a collection of resource x
# parse native config using the Logging_global template
# Copyright 2020 Red Hat
# use set protocols ospf in order to get both ospf and ospfv3
# parse native config using the Ospf_interfaces template
# Search all set from configuration with set interface, including ethernet and bonding
# Parse interfaces that contains string or tuple when the interface is in a vlan
# parse native config using the Bgp_address_family template based on version
# match only on disable next to the interface name
# there are other sub-commands that can be disabled
# if state is merged, merge want onto have and then compare
# if state is deleted, empty out wantd and set haved to wantd
# remove superfluous config for overridden and deleted
# delete the whole thing and move on
# delete if not being replaced and value currently exists
# self.addcmd(entry, attrib, False)
# remove remaining items in have for replaced
# remove remaining prefix lists
# parser list for name and descriptions
# parser list for entries
# remove remaining entries
# removing the servername and commandlist from the list after deleting it from haved
# iterate through the top-level items to delete
# if everything is deleted add the delete command for {path} ntp
# this should be equivalent to: servernames == [] and commandlist == ["server"]
# remove existing config for overridden and replaced
# Getting the list of the server names from haved
# removing the servername from the list after deleting it from haved
# do not delete configuration with options level
# delete static route operation per destination
# Iterate over the afi rule sets we already have.
# Iterate over each rule set we already have.
# In the desired configuration, search for the rule set we
# already have (to be replaced by our desired
# configuration's rule set).
# Remove the rules that we already have if the wanted
# rules exist under the same name.
# Merge the desired configuration into what we already have.
# Blank out the rule set so that it is removed.
# Note: if you are experiencing sticky configuration on replace
# you may need to add an explicit check for the key here. Anything that
# doesn't have a custom operation is taken care of by the `l_set` check
# below, but I'm not sure how any of the others work.
# It's possible that historically the delete was forced (but now it's
# checked).
# ipv6-name or ipv6
# searching a ruleset
# raise ValueError("name or filter must be provided or present in have")
# unless we want a wildcard
# 1.3 and below
# only with recursion call
# turn all lists of dicts into dicts prior to merge
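Keying each list of dicts by name before merging can be sketched as (helper names hypothetical):

```python
def list_to_dict(entries, key="name"):
    """Key a list of dicts by one field so merges become dict updates."""
    return {entry[key]: entry for entry in entries}


def merge_config(have, want):
    """Merge wanted entries onto existing ones, keyed by name (sketch)."""
    # Copy so the caller's 'have' dicts are not mutated in place.
    merged = {k: dict(v) for k, v in list_to_dict(have).items()}
    for name, entry in list_to_dict(want).items():
        merged.setdefault(name, {}).update(entry)
    return merged
```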
# removing the interfaces from haved that are already negated
# if all firewall config needed to be deleted for specific interface
# when operation is delete.
# when rule set needed to be removed on
# (inbound|outbound|local interface)
# Append vif if interface contains a dot
# if interface name is bondX, then it's a bonding interface. Everything else is an ethernet
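Those two rules (a dot suffix denotes a vif, `bondX` denotes bonding, everything else ethernet) can be sketched as:

```python
def interface_facts(name):
    """Classify an interface name per the rules above (sketch)."""
    # Split off a trailing '.N' vif if present.
    base, vif = (name.split(".", 1) + [None])[:2]
    kind = "bonding" if base.startswith("bond") else "ethernet"
    return {"name": base, "vif": vif, "type": kind}
```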
# Do the negation first
# remove surplus config for overridden and replaced
# remove surplus configs for given neighbor - replace and overridden
# de-duplicate child commands if parent command is present
# address intentionally omitted since it's coerced to addresses
# fix for new name/type
# pylint: disable=R0903
# This file is auto generated by the
# cli_rm_builder.
# Manually editing this file is not advised.
# To update the argspec make the desired changes
# in the module docstring and re-run
# pylint: disable=C0301
# service snmp community <>
# service snmp contact <>
# service snmp description <>
# service snmp listen-address <> port <>
# service snmp location <>
# service snmp smux-peer <>
# service snmp trap-source <>
# service snmp trap-target <>
# service snmp v3 engineid <>
# service snmp v3 group <>
# service snmp v3 trap-target <> auth <>
# service snmp v3 trap-target <> port <>
# service snmp v3 trap-target <> protocol <>
# service snmp v3 trap-target <> type <>
# service snmp v3 trap-target <> user <>
# service snmp v3 trap-target <> privacy <>
# service snmp v3 user <> auth <>
# service snmp v3 user <> privacy <>
# service snmp v3 user <> group <>
# service snmp v3 user <> mode <>
# service snmp v3 view <>
# 1.4+ by default
# 1.4 or greater, "system" for 1.3 or less
# 1.4 or greater, "allow-clients" for 1.3 or less
# add path to the data before rendering
# call the original method
# set system ntp allow_clients address <address>
# set system ntp allow_clients
# set system ntp listen_address <address>
# set system ntp listen_address
# set {{path}} ntp - for deleting the ntp configuration
# set system ntp server <name>
# set system ntp server <name> <options>
# Version 1.3 and below
# policy prefix-list <list-name>
# policy prefix-list <list-name> description <desc>
# policy prefix-list <list-name> rule <rule-num>
# policy prefix-list <list-name> rule <rule-num> action
# policy prefix-list <list-name> rule <rule-num> description <desc>
# policy prefix-list <list-name> rule <rule-num> ge <value>
# policy prefix-list <list-name> rule <rule-num> le <value>
# policy prefix-list <list-name> rule <rule-num> prefix <ip>
# Copyright (c) 2021, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# 'code': 404, 'error-message': 'Resource not found'
# (c) 2020 Red Hat Inc.
# (c) 2020 Dell Inc.
# Copyright (c) 2020 Dell Inc.
# if prompt is None most likely the terminal is hung up at a prompt
# Copyright 2024 Dell Inc. or its subsidiaries. All Rights Reserved
# © Copyright 2024 Dell Inc. or its subsidiaries. All Rights Reserved
# Copyright: (c) 2024, Peter Sprygada <psprygada@ansible.com>
# Copyright: (c) 2024, Dell Inc.
# { command: <str>, prompt: <str>, response: <str> }
# Copyright 2024 Dell EMC
# Copyright 2024 Dell Inc. or its subsidiaries. All Rights Reserved.
# Copyright 2023 Dell Inc. or its subsidiaries. All Rights Reserved
# (c) 2024 Peter Sprygada, <psprygada@ansible.com>
# Copyright (c) 2024 Dell Inc.
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
# Start: This is to convert interface name from Eth1/1 to Eth1%2f1
# This check is to differentiate between requests and commands
# End
# Default: not used for cliconf
# This check is to differentiate between REST API requests and CLI commands
# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved
# just for linting purposes, remove
# Copyright 2022 Dell Inc. or its subsidiaries. All Rights Reserved
# Fetch data from the current device configuration
# (Skip if operating on previously fetched configuration.)
# split the unparsed route map configuration list into a list
# of parsed route map statement "instances" (dictionary "objects").
# Fetch the permit/deny action for the route map statement
# Create a dict object to hold "set" attributes.
# Fetch non-required top level set attributes
# Possible anomalous state due to partial deletion of metric config via REST
# Fetch BGP policy action attributes
# Fetch as_path_prepend config
# Fetch community list "delete" config
# Fetch community attributes.
# Fetch extended community attributes.
# Fetch other BGP policy "set" attributes
# Fetch the "call" policy configuration for the route map statement
# Create a dict object to hold "match" attributes.
# Fetch match as-path configuration
# Fetch BGP policy match attributes.
# Fetch other match attributes
# Fetch BGP policy match "config" attributes
# Fetch BGP policy match "evpn" attributes
# Set roce_enable to false when none
# Avoid raising error when there is no configuration
# Authentication configuration handling
# Authorization configuration handling
# Name-service configuration handling
# Copyright 2021 Dell Inc. or its subsidiaries. All Rights Reserved
# data = connection.get('show running-config | section neighbor')
# validate can add empties to config where values do not really exist
# go through each area for this vrf
# these are identifying keys; an area is invalid if it doesn't have them
# only try grabbing settings in the JSON subsections if the sections exist
# authentication type should only have these two values
# do an 'if' check on shortcut since it can just be lowercased if present;
# a None return would cause issues
# If a value is currently configured for the stub 'default-cost' attribute,
# store it in the output 'facts' as the 'default_cost' for the area.
# Note: If OSPF NSSA is implemented for SONIC in the future with
# a different configurable "default cost" value, edits will need to be made.
# other settings are under stub subsection
# if auth type is none, don't need to display
# for some reason this stays around
# since these are two separate lists, combine them; checking whether the area was found just in case,
# but the area should always be found at this point
# note that substitute is misspelled in openconfig
# if these fields aren't present, the area was somehow missed in the areas list but there are inter-area policies for it.
# adding the keys was moved here to avoid always reporting that the area exists, including when no policy settings are found
# © Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved
# pfx_short_spec = "openconfig-routing-policy:prefix-set"
# (comment by Ansible): just for linting purposes, remove
# split the unparsed prefix configuration list into a list
# of parsed prefix set "instances" (dictionary "objects").
# Start with the top-level config from the device.
# Transform the "config" dict from the top-level device config.
# Transform the "state" dict from the top-level device config.
# Transform the binding config from the device.
# Change from OC naming to Ansible naming
# Copyright 2024 Dell Inc. or its subsidiaries. All Rights Reserved
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# get poe settings
# get poe interface settings
# Save trunk vlans and vlan ranges as a list of single vlan dicts:
# Convert single vlan values to strings and convert any ranges
# to the argspec range format. (This block assumes that any string
# value received is a range, using either ".." or "-" as a
# separator between the boundaries of the range. It also assumes
# that any non-string value received is an integer specifying a
# single vlan.)
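Under the stated assumptions ('..' or '-' separated range strings, plain integers for single VLANs), the conversion might look like this (function name hypothetical):

```python
def normalize_trunk_vlans(values):
    """Convert device trunk VLAN values to a list of single-vlan dicts.

    Integers are single VLANs; strings are ranges using '..' or '-' as
    the separator between the boundaries of the range.
    """
    result = []
    for value in values:
        if isinstance(value, int):
            result.append({"vlan": str(value)})
        else:
            low, high = value.replace("..", "-").split("-", 1)
            result.append({"vlan": "{0}-{1}".format(low, high)})
    return result
```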
# data = connection.get('show running-config | section port-group')
# SONIC supports only one VxLan interface.
# © Copyright 2023 Dell Inc. or its subsidiaries. All Rights Reserved
# VLAN/Portchannel
# Copyright 2022 Dell EMC
# convert to argspec for ansible_facts
# validate can add null values for things missing from device config,
# the facts GET request returns a dictionary; this is the key to get the facts data we care about
# diff method works on dict, so creating temp dict
# if want is none, then delete all the vlans
# delete specific vlans
# Create URL and payload
# Required attribute
# Single VLAN ID
# Default edge_port is false, don't delete if false
# Default bpdu_guard is false, don't delete if false
# Default bpdu_filter is false, don't delete if false
# Default portfast is false, don't delete if false
# Default uplink_fast is false, don't delete if false
# Default shutdown is false, don't delete if false
# Default stp_enable is true, don't delete if true
# Currently delete at vlan-id or vlan-id/config URL levels aren't supported, so have to delete each attribute individually
# In state deleted, specific empty parameters are supported
# if want is none, then delete all the bgp_afs
# Check the route_map, if existing route_map is different from required route_map, delete the existing route map
# Check if the commands to be deleted are configured
# Deletion at advertise-afi-safi level
# Deletion at route-map level
# Deletion at vni-number level
# Deletion at config/attribute level
# Delete all address-families in BGPs that are not
# specified in overridden
# Delete AF configs in BGPs that are replaced/overridden
# Delete address-families that are not specified in overridden
# Delete existing route-map before configuring
# new route-map.
# Delete entire VNIs that are not specified
# Delete complete protocol redistribute
# configuration if not specified
# Delete metric, route_map for specified
# protocol if they are not specified.
# Spec value to payload value mappings
# Delete non-modified ACLs
# Modify existing ACLs
# Delete non-modified rules
# Replace existing rules
# Add new rules
# Add new ACLs
# Delete existing ACLs
# Delete entire ACL if only the name is specified
# Delete existing rules
# When state is deleted, options other than sequence_num are not considered
# Remove empties and validate the config with argument spec
# If the hexadecimal number is not enclosed within
# quotes, it will be passed as a string after being
# converted to decimal.
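A sketch of normalizing both input forms back to hex (function name hypothetical): unquoted YAML hex like `0x8100` reaches the module as the decimal string `'33024'`, while quoted input arrives as `'0x8100'`.

```python
def to_hex_ethertype(value):
    """Normalize an ethertype to '0x'-prefixed hex, accepting both forms."""
    text = str(value)
    number = int(text, 16) if text.lower().startswith("0x") else int(text)
    return "0x{0:x}".format(number)
```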
# Remove ethertype option if its value is False
# Remove vlan_tag_format option if the value is False
# natsort provides a better result.
# Using natsort causes a sanity error because it is not available in
# the Python version currently used.
# new_config = natsorted(new_config, key=lambda x: x['name'])
# For the time being, use the simple built-in sort
# Delete replaced groupings
# Apply the commands from the playbook
# Determine if there is any configuration specified in the playbook
# that is not contained in the current configuration.
# Determine if there is anything already configured that is not
# specified in the playbook.
# Idempotency check: If the configuration already matches the
# requested configuration with no extra attributes, no
# commands should be executed on the device.
# Delete all current route map configuration
# Note: This is consistent with current CLI behavior, but should be
# revisited if and when the SONiC REST implementation is enhanced
# for the "match peer" attribute.
# Find the corresponding route map statement in the "want" dict
# list and insert it into the current "command" dict.
# Create a "blank" template for the request
# Handle configuration for BGP policy "match" conditions
# Handle match as_path
# Handle match evpn
# Handle BGP policy match configuration under the "config" dictionary
# Handle match interface
# Handle match IP address/prefix
# Handle match IPv6 address/prefix
# Handle match peer
# Handle match source protocol
# Handle match source VRF
# Handle match tag
# Get the current configuration (if any) for this route map statement
# Handle configuration for BGP policy "set" conditions
# ----------------------------------------------------
# Handle 'set' AS path prepend
# Handle 'set' community list delete
# Handle 'set' community
# Abort the playbook if the Community "none" attribute is configured.
# Verify that no other community attributes are being requested
# at the same time as the "none" attribute and that no
# community attributes are currently configured. Abort the
# playbook execution if these conditions are not met.
# Abort the playbook if other Community "set" attributes are
# currently configured.
# Proceed with configuring 'none' if the validity checks passed.
# Handle set extcommunity
# to be located within the "config" sub-dictionary
# Handle set IP next hop.
# Handle set IPv6 next hop.
# Handle set local preference.
# Handle set metric
# Handle set origin
# Handle set weight
# Handle set tag
# Create requests for "eligible" attributes within the current route
# map statement. The content of the "command" object, on return from
# execution has only the subset of currently configured attributes
# within the full group of requested attributes for deletion from
# this route map statement.
# Validate the current command.
# Check for route map statement deletion before proceeding further.
# Proceed with validity checking and execution
# Remove any requested deletion items that aren't configured
# Handle generic top level match attributes.
# Handle BGP match items within the "config" sub-tree in the openconfig REST API definitions.
# Handle as_path
# Check for IP next hop deletion. This is a special case because "next_hop" is
# a level below "ip" in the argspec hierarchy. If 'ip' is the only key in
# delete_bgp_keys, and IP next hop deletion is not required, there is no
# BGP condition match attribute deletion required.
# Check for deletion of other BGP match attributes.
# Create requests for deletion of the eligible BGP match attributes.
# Handle metric "set" attributes.
# 'metric' is not in set_both_keys
# Handle BGP "set" items within the "config" sub-tree in the openconfig REST API definitions.
# Handle as_path_prepend
# Handle the "community list delete" (comm_list_delete) attribute
# Handle "set community": Handle named attributes first, then handle community numbers
# Append eligible entries to the delete list. Remember which entries
# are ineligible.
# Delete ineligible entries from the command list.
# No community attribute entries are configured. Pop the corresponding
# commands from the command list.
# Handle deletion of "set" community numbers.
# If no community number entries are configured, pop the entire
# community number command dict.
# Format and enqueue a request to delete eligible community attributes
# Handle set "extended community" deletion
# If no extcommunity entries of this type are configured,
# pop the entire extcommunity command sub-dict for this type.
# Format and enqueue a request to delete eligible extcommunity attributes
# Note: Although 'metric' (REST API 'set-med') is in this REST API configuration
# group, it is handled separately as part of deleting the top level, functionally
# related 'metric-action' attribute.
# Handle the special case of ip_next_hop
# Handle the special case of ipv6_next_hop
# Handle other BGP "config" attributes
# Get the current configuration (if any) for this route map
# If there's nothing configured for this route map, there's nothing
# to delete.
# Note: Because the "call" route map attribute is a "flat" attribute, not
# a dictionary, no "pre-delete" is required for this branch of the route map
# argspec for handling of "replaced" state
# If there are no 'match' attributes configured for this route map,
# there's nothing to delete.
# Obtain the set of "match" keys for which changes have been requested and
# the subset of those keys for which configuration currently exists.
# Only one peer key at a time can be configured.
# Remove all appropriate "match" configuration for this route map if any of the
# following criteria are met:  (See the note below regarding what configuration
# is "appropriate" for deletion.)
# 1) Any top level attribute is specified with a value different from its
# currently configured value.
# 2) Any top level attribute is specified that is not currently configured.
# 3) The set of top level attributes specified does not include all currently
# configured top level attributes.
# (Note: Although the IPv6 attribute is defined as a nested dictionary
# to allow for future expansion, it is handled here as a top level
# attribute because it currently has only one member.)
# When deletion has been triggered, an attribute is deleted only if it is
# not present at all in the requested configuration. (If it is present in
# the requested configuration, the "merge" phase of the "replaced" state
# operation will modify it as needed, so it doesn't need to be explicitly
# deleted during the "deletion" phase.)
# Deletion has been triggered. First, delete all appropriate top level attributes.
# Next, delete all appropriate sub dictionary attributes.
# Update the dict specifying deleted commands
# If no top level attribute changes were requested, check for changes in
# dictionaries nested below the top level.
# -----------------------------------------------------------------------
# Set the default uri_key value.
# Iterate through members of the parent dict.
# If there are no 'set' attributes configured for this route map,
# there's nothing to delete.
# Obtain the set of "set" keys for which changes have been requested and the set
# of keys currently configured.
# Top level keys: Note: Although "metric" is defined as a dictionary, it
# is handled as a "top level" attribute because it can contain
# only one configured member (either an rtt_action or a "value").
# Remove all appropriate "set" configuration for this route map if any of the
# Handle top level attributes first. If top level attribute deletion is
# triggered, proceed with deletion of dictionaries and lists below the
# top level.
# Save nested command "set" items and refresh top level command "set" items.
# Proceed with deletion of dictionaries and lists below the top level.
# Check for deletion of set "community" lists. Delete the items in
# the currently configured list if it exists. As an optimization,
# avoid deleting list items that will be replaced by the received
# command.
# Delete eligible configured community numbers.
# Update the list of deleted community numbers in the "command" dict.
# Delete eligible configured community attributes.
# Update the list of deleted community attributes in the "command" dict.
# Check for deletion of set "extcommunity" lists. Delete the items in
# Delete eligible configured extcommunity list items for this
# extcommunity list
# ignore equivalent asn:nn entries that differ only in as-notation format
# Update the list of deleted extcommunity list items of this type
# in the "command" dict.
# Check for deletion of ip_next_hop attributes.
# As an optimization, avoid deleting attributes that will be replaced
# by the received command.
# Delete eligible configured ip_next_hop members.
# Update the list of deleted ip_next_hop attributes in the "command" dict.
# Check for deletion of ipv6_next_hop attributes. Delete the attributes
# in the currently configured ipv6_next_hop dict list if they exist.
# Delete eligible configured ipv6_next_hop members.
# Update the list of deleted ipv6_next_hop attributes in the "command" dict.
# Check for replacement of set "community" lists. Delete the items in
# the currently configured list if it exists and any items for that
# list are specified in the received command.
# Check for replacement of set "extcommunity" lists. Delete any items in
# the currently configured list if the corresponding item is not
# specified in the received command.
# Append eligible entries to the delete list.
# Replace the requested extcommunity numbers for this type with the list of
# deleted extcommunity numbers (if any) for this type.
# If the "replaced" command set includes ip_next_hop attributes that
# differ from the currently configured attributes, delete
# ip_next_hop configuration, if it exists, for any ip_next_hop
# attributes that are not specified in the received command.
# If the "replaced" command set includes ipv6_next_hop attributes that
# ipv6_next_hop configuration, if it exists, for any ipv6_next_hop
# - Verify that parameters required for most "states" are present in
# each dict in the input list.
# - Check for interface names in the input configuration and
# perform any needed reformatting of the names.
# Verify the presence of a "sequence number" and "action" value
# for all states other than "deleted"
# Check for interface names requiring re-formatting.
# Authentication modification handling
# Authorization modification handling
# Name-service modification handling
# Authentication deletion handling
# Current SONiC behavior doesn't support single list item deletion
# Authorization deletion handling
# Name-service deletion handling
# Authentication diff handling
# Authorization diff handling
# Name-service diff handling
# if want is none, then delete all the users except admin
# Exclude admin user from new_have if it isn't present in new_want
# Delete all users except admin
# Merge want configuration
# Skip the admin user in 'deleted' state. we cannot delete all users
# a command that is an empty list means clear the list
# early return; there's nothing to delete
# for every existing item, decide whether or not it is being deleted
# keys that are only in the command and not in existing do not affect anything
# existing has a matching command for deleting; process it
# filter out items that were deleted in full. only keep items with something left over
# existing has no matching command for deleting; it can't be changed, so keep it
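The list-deletion rules above can be sketched as follows. This is a simplified illustration (the hypothetical helper drops a matched item entirely, while the real logic deletes only the specified sub-fields); the `key` parameter stands in for the item's identifying field.

```python
def apply_list_deletes(existing, command, key):
    """Return what remains of `existing` after applying `command`
    delete entries, matched on `key` (illustrative sketch)."""
    if command == []:
        # an empty command list means clear the whole list
        return []
    to_delete = {c[key] for c in command}
    # command entries with no match in existing do not affect anything;
    # matched entries are dropped, unmatched existing items are kept
    return [item for item in existing if item[key] not in to_delete]
```

For instance, deleting `{'id': 2}` from a two-item list keeps only the item with `id` 1, while an empty command list clears everything.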
# in this implementation, specifying just the area id keys means delete the area
# the default delete can't handle the case where the existing config has many keys and the command
# specifies only a list to clear (by providing an empty list) and no other keys. Not all existing keys need
# to be deleted, but the default logic assumes the existing config should be cleared
# new_conf's networks will be things in existing that aren't in command
# if the area is empty (or contains only its keys) after deleting everything specified, disregard it
# only virtual link id specified, delete the virtual link
# have to try clearing all subsections first and then can know if it is ok to delete virtual link
# have taken care of subsections, now for the virtual link
# URI to ospf settings for one network_instance
# URI to ospf area settings for one network_instance
# URI to ospf inter-area-propagation-policies settings for one network_instance
# just used for diff mode, setting it to a default value that would show no differences. If there are changes then set to changed value
# empty or None assume delete everything
# commands is things in want that are in have and are not different aka same
# fill in defaults for helper information
# special case if want only has keys, then that means clear object
# returning what to delete
# list of all fields inside the config we are looking at
# handling deletion checks, only need to if field is in both want and have
# want to find key fields for nested items, first step is finding if it is in test_keys
# special case logic of blank lists means clear everything
# list of primitive types. cannot do logic of finding keyfields but set logic works
# use keys to find which items we need to figure out what to delete inside
# create dictionary mapping from each item's identifier to item. identifier can be made of multiple fields in item
# list items that are present in both want and have
# only need to look for deletes when an item appears in both
# for each item, get things that should be deleted
# key fields need to be passed to get right formatting back
# to make it easy, if an area appears in both have and want, just delete the whole area definition from have and then replace with the want
# new area, no previous definition. which means always need to add it
# there are differences between have and want, so removing the whole area and replace
# just new differences to add
# override: any areas that are in have but not in want are deleted
# override: any areas that are in want but not in have are added
# just deleting and adding the whole areas
# validate_config returns validated user input config. The returned data is based on the
# argspec definition. At each nested level of the argspec for which the user has specified
# one or more attributes, the returned data contains added nulls for any attributes that
# were not specified by the user input.
# the None values aren't really used in this module, so they are thrown out. Use empty lists for clear
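Stripping the nulls that validation adds, while keeping empty lists (which mean "clear"), can be sketched recursively. The helper name is hypothetical.

```python
def strip_none(config):
    """Recursively drop None values added by argspec validation,
    keeping empty lists/dicts, which carry 'clear' semantics
    (illustrative sketch)."""
    if isinstance(config, dict):
        return {k: strip_none(v) for k, v in config.items() if v is not None}
    if isinstance(config, list):
        return [strip_none(v) for v in config if v is not None]
    return config
```

For example, `strip_none({'a': None, 'b': [], 'c': {'d': None, 'e': 1}})` yields `{'b': [], 'c': {'e': 1}}`.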
# only when trying to merge is it an issue if default cost is being set when not stub or NSSA
# finding out if default_cost can be set depends on have and commands
# a key is always going to be either encrypted or not.
# the key_encrypted field should only exist if the key exists, so fill in key_encrypted if it is missing; it defaults to False
# any state that is adding config cannot have a missing key
# whatever values were set for key and key encrypted should be kept together
# commands has an area with no settings to merge, that doesn't show up in facts because it can break other stuff,
# so putting in step to ignore
# virtual_d = virtual_d_keys.get(virtual_w["router_id"], None)
# key always has a key_encrypted setting. They are defined together so they must travel as a pair. Handle violations of this
# requirement presented by the default 'get_diff'
# ie playbook may specify encrypted key A, and device has encrypted key B. get diff will find the keys different, not the key_encrypted
# setting and only the key goes into the diff. The diff is missing the encrypted setting so it needs to grab that from existing settings.
# specified a different key that is also encrypted. fix that error in diff
# same key but different encryption
# this is likely an error situation, since a single key value can't be valid both unencrypted and encrypted
# and just make sure two are together for easier debugging
# message digest key didn't end up with a difference, so nothing to do
# specified a different key that is also encrypted. fixes that error
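The repair described above, restoring the `key_encrypted` flag that `get_diff` drops when only the key value differs, can be sketched as follows. The helper name is hypothetical; the field names follow the argspec described in the comments.

```python
def pair_key_with_encrypted(diff_key_entry, have_key_entry):
    """Ensure a message-digest key diff always carries its
    key_encrypted flag (illustrative sketch of the repair)."""
    if 'key' in diff_key_entry and 'key_encrypted' not in diff_key_entry:
        # get_diff saw differing key values but matching encrypted
        # flags, so the flag was dropped; restore it from 'have' so
        # the pair always travels together
        diff_key_entry['key_encrypted'] = have_key_entry.get('key_encrypted', False)
    return diff_key_entry
```

This keeps the key and its encryption flag together for easier debugging, as the comments require.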
# tracking all formatted lists that will go into the final requests for each vrf
# list of areas is not organized by VRF and REST requests are being consolidated into a call on each VRF, so need to sort the area list.
# Also, the organization differs between REST and the argspec: the argspec has config that goes into both the ospf areas and the
# ospf global inter-area-propagation-policies sections of the REST format.
# Since REST has two distinct subsections, these are formatted separately and then combined into one REST request
# if an area is passed in, assuming want to create it no matter what settings (or just area id) passed in, so area has to exist
# merging vlinks is done as a separate request from rest of area settings.
# Area settings go to the vrf configuration endpoint, vlinks go to the virtual link endpoint.
# This allows vlinks with just the router id with no other settings specified to be created.
# These requests still depend on the area being created first, so to be on the safe side, get area requests first before handling virtual links
# consolidate by Vrf
# build requests on vrf
# endpoint is an area's virtual links.
# need enabled flag in stub so can make a stub without setting other settings
# can't set default_cost on an area that isn't stub or NSSA.
# skip adding formatted virtual links into area settings because that will be added separately
# either settings found to be merged into the areas part of ospf settings or there's settings in the inter-area-policies section
# note it is misspelled in REST
# can add range even without other settings
# want to match so we can find out if the commands specify everything within the existing area, which allows a delete of the whole area
# would mean nothing to delete, just ignore. should not hit since commands should be a subset of have
# can't go directly to deleting area, need to check for ranges, network, virtual links -
# most of nested and complex settings - and delete those first (and separately from area)
# allow specifying area id to mean clear it
# while clearing subsections for area, area itself may get removed in certain cases, for example clearing virtual links when it is the only
# config inside area.
# This variable tracks if above has happened. It updates regardless of whether or not module clears all settings for area, but
# is only used in case of clearing all settings and area. This prevents module from making a second unnecessary and
# error-causing request in cases where area is removed early.
# if this is the only setting left in area then it causes area delete
# if no stub or propagation settings, then area_already_deleted value would be whatever vlink_deleted_area is
# if there are either of those settings, then vlink_deleted_area is only guessing if area was deleted based off of vlinks
# and could be incorrect because of the other settings being there. but it is ok to set it because this variable is only
# used when clearing everything so those sections will end up deleting area.
# gathering stub deletions before checking area all gone because
# it is used to check if area can be deleted
# This section, for 'propagation endpoint', handles the inter-area propagation policies
# this doesn't use a "propagation_all_gone" flag because the nested subection ranges are dealt with separately above.
# The other settings of filter lists are nested in propagation in REST API but are in root area settings for argspec.
# This means that a simple check that commands and want (assuming this was passed commands that are in and the same value as have)
# are the same length is enough to cover the same behaviors
# delete all settings for area.
# either only area id was specified or any form of clear data,
# or all settings are named and for the more complex nested subsections of argspec,
# those subsections also match and are cleared (need the extra flag since length only compares the subsections existence not contents)
# there are cases where deleting sub-sections causes area to disappear. need to make sure to delete
# area too so need to check when adding that is needed
# either clearing everything left or there's nothing
# actually clearing stuff means deleting area
# area having nothing related to stub also ends up in this case. requests will be empty.
# Not all stub configuration is being deleted. (It is also possible that none of the stub options requested
# for deletion match any current configuration. In that case, no stub configuration is being deleted.)
# just the remote router id specified so deleting everything inside a
# virtual link. don't need to process individual attribute delete requests
# so delete the vlink and move to next
# check vlink subsections to see if they were also all deleted or need individual delete requests
# doing this first as this is used to determine if can just do one request to delete this vlink
# no message digest keys, means subsection does not need any work on it
# deleting everything inside a virtual link. don't need to process individual attribute delete requests
# deleting individual attributes of a vlink; set a flag to record that at least one interface needs to delete individual attributes
# nothing in message digest keys to delete
# commands should only contain keys that are in have
# deleted everything in have, just return one command to delete root
# For md key deletion, only the specified key_id is used. If a key value
# and/or encrypted state are specified in the user playbook, they are
# ignored. These attributes are deleted from the configuration for the
# specified key_id solely based on the key_id value specified.
# doing filter list removes if necessary
# didn't call to clear everything
# deleting everything case, and actually have deleted things
# if just ranges are deleted then don't try to delete on root, deleting ranges will do that
# nothing for propagation - that isn't ranges settings - deleted
# nothing in ranges to delete
# should not hit as commands must be a subset of have
# only the range prefix specified, or the same number of fields specified, means delete the whole range
# it actually is misspelled as "substitue" in REST
# deleting all ranges
# if want is none, then delete all the vrfs
# if members are not mentioned, delete the vrf name
# trap_action cannot be deleted
# max_med -> on_startup options are modified or deleted at once.
# Diff might not reflect the correct commands if only one of
# them is modified. So, update the command with want value.
# Diff will not reflect the correct commands if only one of
# them is modified. So, update the command with want value.
# if want is none, then delete all the bgps
# requests.append({'path': route_selection_del_path + "external-compare-router-id", 'method': DELETE})
# Delete the log_neighbor_changes only when existing values is True.
# if there is specific parameters to delete then delete those alone
# delete entire bgp
# reorder the requests to put the default vrfs at the end of the requests, so deletion will succeed
# Delete entire BGP if not specified in overridden
# Delete config in BGP AS that are replaced/overridden
# - Modified attributes are not deleted, since they will be
# - log_neighbor_changes is enabled by default, therefore
# max_med -> on_startup options are deleted at once.
# Update the commands appropriately.
# Delete the LDAP server type configuration completely
# To ensure that the configuring attribute belongs only to those groups that are supported
# For example, 'sudoers_search_filter' only supported in 'global' and 'sudo'.
# Hence, for 'nss' and 'pam', this attr is ignored by this function
# To handle check_mode merged case when sonic_roles is set
# VRRP with VRRP ID 1 can be removed only if other VRRP
# groups are removed first
# Hence the check
# Add default values for after(generated)
# from ansible.module_utils.connection import ConnectionError
# Delete prefix lists that are not specified
# Modify existing prefix lists
# Delete sequences that are not specified
# Replace/modify existing sequences
# If only action is changed, then that sequence can be modified.
# Add new sequences
# Add new prefix lists
# The configured prefix set has no prefix list
# Check for matching key attributes
# Check for ge match
# Check for le match
# All key attributes match for this cfg_prefix
# No matching configured prefixes were found in the prefix set.
# Assuming IPv6 for this case
# if want is none, then delete all the tacacs_server configuration
# just in case weird arguments passed
# nothing that could be deleted
# want is empty, meaning want to delete all config
# afis parameter only stores the on device config at this point
# some mix of settings specified in both
# enabled and verify_mac can't be deleted from config, only set to default.
# apply the things to add or change
# need to add back in the source bindings since the diff could pick up only the different values in a source binding
# do needed deletes
# getting what needs to be added/changed after deletes
# just afi key supplied, interpreting this as delete all config for that afi
# only need to send a request if want from playbook is set to non default value and the setting currently configured is non default
# gathering list of vlans to be deleted. this section also handles cases where empty list of vlans is passed in
# which means delete all vlans
# gathering list of interfaces to be deleted. this section also handles cases where empty list of interfaces is passed in, which
# means delete all interfaces
# removing interfaces that don't exist on device
# gathering list of source bindings to be deleted. this section also handles cases where empty list of bindings is passed in, which
# means delete all bindings
# removing bindings that don't exist on device
# need to check by the key since can have two different versions of same binding
# delete any vlans that are different
# delete anything that has a difference, covers things that are
# in have but not want and things in both but modified
# assuming source bindings considered a replaceable subsection ie the list afterwards
# should look exactly like what was passed into want
# replaced told want to replace existing with blank list, only thing to do is delete existing bindings for family
# if want is none, then delete ALL
# If both extended community list are of same name but different types
# If there are no members in any expanded ext community list of want, then
# abort the playbook with an error message explaining why the specified command is not valid
# If there are no members in any standard ext community list of want, then
# abort the playbook with an error message explaining why the specified command is not valid
# Change from Ansible naming to OC naming
# REST server deletion handling
# Telemetry server deletion handling
# The following options have default values in the device IPv6
# configuration when they have been "deleted":
# enable => False,
# dad => "DISABLE",
# autoconf => False
# Enable correct handling for all states by filtering out these
# options when they have default values unless the target state
# for the currently executing playbook task is "merged" state.
# This is to enable idempotent handling for all states given the
# following considerations:
# - In 'merged' state, the input playbook value and the configured value
# of each of these "defaulted" options is needed to enable the
# correct "diff" calculation for the changes to be applied to the
# device.
# - For 'deleted' state, the "deletion" of any of the "defaulted" options
# is a no-op. If any of these options currently has the "default" value
# configured, it is already "deleted" and no further action is needed to
# execute a request to "delete" the default value. If it is configured to
# a value other than the default value, then the request to "delete" the
# default value has no effect. Options having the "default" value should
# therefore be deleted from both the input playbook and the device
# configuration to be used for processing the playbook task.
# - For 'replaced' and 'overridden' states, absence of an option in the
# input playbook task causes deletion of that option from the device
# configuration if it has a non-default value. For the "defaulted" options,
# this is the same as configuring the option to the default value. For
# this reason, removal of a default value option from the input playbook
# task and also from the "filtered" device configuration results in the
# desired end result and allows correct idempotent handling for these options.
# This is because the states of these options in the input playbook
# task and filtered device configuration will match when the defaulted option
# is configured to the "default" value (or "deleted", which is the same thing
# for these options).
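The filtering described above, dropping options that sit at their device default value unless the task state is "merged", can be sketched like this. The defaults table mirrors the values listed in the comments; the helper name is an illustrative assumption.

```python
# Default values listed in the comments above
IPV6_DEFAULTS = {'enable': False, 'dad': 'DISABLE', 'autoconf': False}

def filter_default_options(config, state):
    """Drop options holding their default value unless the target
    state is 'merged' (illustrative sketch). For 'deleted',
    'replaced', and 'overridden', removing defaulted options from
    both playbook and device config enables idempotent handling."""
    if state == 'merged':
        # merged needs both values for correct diff calculation
        return dict(config)
    return {k: v for k, v in config.items() if IPV6_DEFAULTS.get(k) != v}
```

A defaulted option is "already deleted", so filtering it out makes a no-op delete visible as a no-op.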
# Store the primary ip at the end of the list, so the primary ip will be deleted after the secondary ips
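The primary-last ordering can be sketched with a stable sort, assuming each address entry carries a boolean `secondary` flag (an assumption about the argspec; the helper name is hypothetical).

```python
def order_for_deletion(addresses):
    """Place the primary address at the end so secondary addresses
    are deleted first (illustrative sketch). Secondary entries sort
    first because `not secondary` is False (0) for them."""
    return sorted(addresses, key=lambda a: not a.get('secondary', False))
```

Because `sorted` is stable, the relative order among the secondary addresses is preserved.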
# if want is none, then delete all the radius_server configuration
# if deleted everything and just key left, means no config
# using a function made for handling interfaces for cards because they don't conflict
# tracks keys and values still entirely unsupported by platforms
# unsupported_values_dict should have same nesting scheme as argspec.
# if key is unsupported specify blank value, otherwise fill in with list of values or singular value that isn't supported
# keep key in if it has sub sections that have unsupported values
# setting new config to a default value, if there are changes then set to changed value
# contains existing individual cards and interfaces that are different, contains global section if there's differences
# special processing for replaced parts, since deletion might have removed more than the difference
# section refers to list of cards, list of interfaces or global settings
# if some portion of a section is being deleted as part of "replaced" state handling
# and there are differences to add, add full content from 'want' instead of just difference.
# For other sections, use the 'get_diff' result so that commands to add/modify configuration are sent only
# for added or changed attributes in that section.
# all things in have not in want or in both but different
# all things in want not in have or in both but different
# all settings for cards/interfaces that aren't in introduced, and all global settings in have but not in introduced
# deleted treats an empty value as clear all; for override, having an empty set of differences to remove means do nothing,
# so need to check and prevent clearing
# merged only cares about things in want that are different from have. that's the exact list of changes
# giving an option to clear all, either None or empty config dict
# found matching card definition
# if just card id (its key) was specified, assuming want to delete whole card
# find all settings in card that should be deleted
# only settings with matching values should be deleted, so throw out anything that doesn't match
# id will always remain in filtered_delete because the two cards have the same id
# if all settings are the same, then assume want to delete whole interface
# greater than 1 to account for id being inside
# find list of interfaces to process what needs to be deleted
# assuming want to delete everything unless a specific list is passed in
# found matching interface definition
# if just interface name (its key) was specified, assuming want to delete whole interface
# find all settings in interface that should be deleted
# name (the key) always remains in filtered_delete because the two interfaces have the same name
# greater than 1 to account for name being inside
# allow specifying blank for global section causes delete all global
# for every key in config check if it is in no_support_dict
# if key is in no_support_dict check if value is empty
# first removing null values so validate doesn't break
# validate returns validated config that is rooted at the root of argspec and with added nulls for fields of nested objects that
# didn't have a value passed in but some fields in the object did
# if none of the settings that could be changed were found to have values to change, don't send an unnecessary command to the device.
# module knowing whether or not changes were made depends on if there are requests,
# so need to make sure all data and requests are in fact making changes
# don't use PUT. PoE is only a subsection of all interface settings and this resource
# module will only ever be handling that subsection and PUT will erase all other subsections
# For each attribute type to be included in the request, translate the user input as needed
# and format the corresponding REST API attribute to be specified in the request.
# since detection mode strings have some overlap with other categories, poe_str2enum requires prepending 'detection-'
# assumption for to_delete means that all cards in to_delete are also in have, this has already been checked before this function
# since assuming all entries have to be deleted, means can assume that there is a match in current configuration
# possible to have global section in only one of two inputs
# nothing being substituted, everything is being deleted
# key fields needed for identification, keeping in
# only need to delete the fields that aren't the keys so only add if more options are found
# Specifying appropriate order for merge to succeed
# Delete all DHCP relay config for an interface, if only
# a single server address with no value is specified.
# This "special" YAML sequence is supported to provide
# "delete all AF parameters" functionality despite the Ansible
# infrastructure limitations that prevent use of a simpler
# syntax for deleting an entire AF parameter dictionary.
# Deleting all DHCP server addresses configured on an
# interface automatically removes all DHCP relay config in
# that interface. Therefore, separate requests to delete
# other DHCP relay configs are not required.
# Specifying appropriate order for deletion to succeed
# Delete all DHCPv6 relay config for an interface, if only
# a single server address with no value is specified.
# Deleting all DHCPv6 server addresses configured on an
# interface automatically removes all DHCPv6 relay config
# in that interface. Therefore, separate requests to delete
# other DHCPv6 relay configs are not required.
# Delete all DHCP and DHCPv6 relay config for interfaces,
# that are not specified in overridden.
# Delete all DHCP relay config for an interface if not specified
# Delete all DHCP relay config for an interface, if
# all existing server addresses are to be replaced
# or if the VRF is to be removed.
# Delete all DHCPv6 relay config for an interface if not specified
# Delete all DHCPv6 relay config for an interface, if
# all existing server addresses are to be replaced
# or if the VRF is to be removed.
# Delete all L2 interface config
# If both access and trunk are not mentioned, delete all config
# in that interface
# If access -> vlan is mentioned without value,
# delete existing access vlan config
# If trunk -> allowed_vlans is mentioned without
# value, delete existing trunk allowed vlans config
# Delete all config in interfaces not specified in overridden
# Delete config in interfaces that are replaced/overridden
# See the above comment about natsort module
# new_config = natsorted(new_config, key=lambda x: x['id'])
# if want is none, then delete all the port groups
# If only the type is specified, delete all ACLs of that type
# Greater than 0 is the same as less than 65535
# Range of 0 to x is the same as less than x and
# range of x to 65535 is the same as greater than x
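The port-range equivalences noted above can be sketched as a small normalizer, assuming 0 and 65535 bound the full L4 port range; the helper name and tuple encoding are illustrative.

```python
def normalize_port_range(begin, end):
    """Collapse an L4 port range into lt/gt/range form per the
    equivalences in the comments (illustrative sketch)."""
    if begin == 0 and end < 65535:
        return ('lt', end)        # 0..x is the same as 'less than x'
    if end == 65535 and begin > 0:
        return ('gt', begin)      # x..65535 is the same as 'greater than x'
    return ('range', begin, end)
```

This lets two differently expressed but equivalent port matches produce the same canonical form before comparison.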
# Remove protocol_options option if all tcp options are False
# Remove tcp option if its value is False
# Remove dscp option if its value is False
# Deletion by WRED profile name
# Get a list of requested servers to delete that are not present in the current
# configuration on the device. This list can be used to filter out these
# unconfigured servers from the list of "delete" commands to be sent to the switch.
# if want_any is none, then delete all NTP configurations
# Some of the servers requested for deletion are not in the current
# device configuration. Filter these out of the list to be used for sending
# "delete" commands to the device.
# ECMP rebalance is placed before ECMP enable to
# ensure the deletion is performed in the same order
# Delete all global PIM configurations for a VRF, if
# 1) Only the VRF name is specified.
# 2) State is overridden and VRF name is not specified.
# Delete all global PIM configurations for a VRF,
# if only the VRF name is specified.
# Handle ECMP Rebalance configuration separately
# 1) For enabling, configure it after enabling ECMP
# 2) For disabling, configure it before disabling ECMP
# Remove default values
# Include only commands with options other than vrf_name
# if want is none, then delete all the bgp_neighbors_afs
# Obtain diff for VLAN ranges in unique_ip
# Obtain diff for VLAN ranges in peer_gateway
# Create list of VLANs to be deleted based on VLAN ranges in unique_ip
# Create list of VLANs to be deleted based on VLAN ranges in peer_gateway
# If 'domain_id' is modified, delete all mclag configuration.
# Delete unspecified configurations when:
# 1) state is overridden.
# 2) state is replaced and configuration other than domain_id is specified.
# Create lists of VLANs to be deleted and added based on VLAN ranges
# The options are removed from the dict to avoid
# comparing the VLAN ranges two more times using get_diff
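Comparing VLAN ranges directly with `get_diff` is awkward, so the comments above describe expanding ranges into ID sets once. A sketch of that expansion (the helper name is hypothetical; entries are assumed to look like `'10-12'` or `7`):

```python
def expand_vlan_ranges(ranges):
    """Expand entries like '10-12' or '7' into a set of VLAN IDs so
    two range lists can be compared with set operations
    (illustrative sketch)."""
    vlans = set()
    for entry in ranges:
        text = str(entry)
        if '-' in text:
            lo, hi = text.split('-')
            vlans.update(range(int(lo), int(hi) + 1))
        else:
            vlans.add(int(text))
    return vlans
```

The lists of VLANs to delete and to add then fall out of set differences, e.g. `expand_vlan_ranges(['1-3']) - expand_vlan_ranges(['2'])` gives `{1, 3}`.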
# Delete all VLANs if empty 'vlans' list is provided
# Delete portchannels that are not specified
# To update 'gateway_mac' configuration in the device,
# delete already configured value.
# Set 'vlans' to None to delete all VLANs
# Delete all PIM configurations for an interface, if
# 1) Only the interface name is specified.
# 2) State is overridden and interface name is not specified.
# Delete all PIM configurations for an interface,
# if only the interface name is specified.
# Include only commands with options other than interface name
# if want is none, then delete all the lag interfaces and all portchannels
# delete specific lag interfaces and specific portchannels
# if want is none, then delete all the vxlans
# Need to delete in reverse order of creation.
# vrf_map needs to be cleared before vlan_map
# vlan_map needs to be cleared before tunnel(source-ip)
# Need to delete in the reverse order of creation.
# The tlv_select configs are enabled by default. Hence, false leads to deletion of configs.
# Delete port-breakout configuration for interfaces that are not specified
# if want is none, then delete all the port_breakout except admin
# Set passive to false if not specified for a new neighbor/peer-group
# Add default entries
# Set action to deny if not specified for as-path-list
# Replace existing as-path-list
# Delete entire as-path-list if no members are specified
# If action is changed, delete the entire as-path list
# and add the given configuration
# Delete as-path-lists that are not specified
# Override existing as-path-list
# Use existing action if not specified
# Set action to deny if not specified for a new as-path-list
# To Delete a single member
# data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/as-path-sets/as-path-set=xyz/config/as-path-set-member=11
# This will delete the as-path and all its members
# data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/as-path-sets/as-path-set=xyz
# This will delete ALL as-paths completely
# data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/as-path-sets
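The three deletion scopes listed above differ only in how far the URI extends. A hypothetical builder, assuming `urllib.parse.quote` for key encoding (the function name is illustrative):

```python
from urllib.parse import quote

BASE = ("data/openconfig-routing-policy:routing-policy/defined-sets/"
        "openconfig-bgp-policy:bgp-defined-sets/as-path-sets")

def aspath_delete_uri(name=None, member=None):
    """Build the deletion URI for one member, one as-path set, or all sets."""
    if name is None:
        return BASE  # no name: delete ALL as-path sets
    uri = "{0}/as-path-set={1}".format(BASE, quote(name, safe=''))
    if member is not None:
        # member given: delete a single member only
        uri += "/config/as-path-set-member={0}".format(quote(member, safe=''))
    return uri
```

Percent-encoding the list keys matters because member values may contain characters such as `:` that are reserved in RESTCONF key positions.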
# Handle scenarios in replaced state, when only the interface
# name is specified for deleting all ACL bindings in it.
# Delete all interface ACL bindings in the chassis
# Delete all bindings of ACLs belonging to a type in an
# interface, if only the ACL type is provided
# Update the payload for subinterfaces
# Delete all acl bindings in an interface, if only the
# interface name is provided
# When state is deleted, empty access_groups and acls are
# supported and therefore no further changes are required.
# When state is replaced, if only the interface name is
# specified for deleting all ACL bindings in it do not
# remove that config.
# Delete all vlan mappings
# Checks if there is an interface matching the delete command
# Delete part or all of single mapping
# Delete all mappings in an interface
# Checks if there is a vlan mapping matching the delete command
# Delete dot1q_tunnel
# Delete vlan translation
# Delete entire mapping
# Delete priority
# Delete vlan ids
# Delete entire dot1q_tunnel
# Delete entire translation
# Delete match_single_tags
# Delete match_double_tags
# Delete entire tag
# Delete entire match-single-tags
# Delete entire match-double-tags
# If auto_neg is true, ignore speed
# Eth/VLAN/PortChannel
# Remove the dict if a diff is found
# if want is none, then delete all the interfaces
# Check all parameter if any one is different from existing
# if given interface is not present
# Create Loopback in case it is not available in have
# For auto-negotiate, we assign False, since the attribute becomes None if deleted
# If auto-negotiate is disabled, both speed and advertised_speed will have default values.
# Utils
# If the port belongs to a port-group, the speed cannot be deleted
# To avoid multiple get requests
# Pop each leaf from dsp that is not in sp
# attr = 'ospf_attributes'
# Delete all attributes in have
# Delete specific attributes in have
# default false
# 'deleted' treats empty as clear all; 'overridden' with empty removes differences, i.e. does nothing,
# so we need to check
# combining two lists of changes
# nothing to do here
# Don't interpret None values; to delete everything, empty lists or dicts must be passed
# for the "clear all config" case. Passing an empty dictionary to deleted means clear everything
# default value is false so only need to do the "delete" (actually reset) if values are true and match
# want to make sure setting specified and match
# either clear all settings, all collectors or certain collectors here
# a specified non-empty list means no longer clear everything
# either clear all settings, all interfaces, or certain interfaces
# just name specified means delete what is in have
# greater than one to account for name always being present
# validation will add a bunch of Nones where values are missing in partially filled config dicts
# config always required in this endpoint
# since REST needs the collector list item with its settings and a nested config with a copy of those same settings
# listed interface doesn't actually have any configured settings, but name is hanging around
# if all the collectors match, it is possible to delete all at once rather than go through the list deleting individually
# can't call delete on interfaces list endpoint, must delete individual interface
# need to go through interfaces and ignore ones that don't need to be deleted
# only interfaces that are in have and not want or have settings that are in have and not want need to be deleted
# find matching interface in introduced
# if only the name key is left, everything else matches and will get substituted.
# name is left in because it is needed if there are any settings that do need deleting
# Matching values will not be available in diff.
# Hence, updating the required fields in diff with the values from have.
# Delete options that are disabled in want
# Delete a community
# https://100.94.81.19/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/community-sets/community-set=extest
# Delete all members but not community
# https://100.94.81.19/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/community-sets/community-set=extest/config/community-member
# Delete a member from the expanded community
# https://100.94.81.19/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/community-sets/community-set=extest/config/community-member=REGEX%3A100.100
# Delete ALL Bgp_communities and its members
# https://100.94.81.19/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/community-sets
# Delete community-list if only name is specified
# In case of 'standard' type, if 'members' -> 'aann' is empty
# 1) Delete the whole community-list, if other attributes are also to be deleted (or) not present.
# 2) Delete all 'aann' members otherwise.
# In case of 'expanded' type, if 'members' -> 'regex' is empty then delete the whole community-list.
# Empty values for suboptions of member (aann/regex) are supported.
# Hence, remove_empties is not used for deleted state.
# Weight must be deleted before scheduler type
# Deletion of scheduler policy by name
# © Copyright 2021 Dell Inc. or its subsidiaries. All Rights Reserved
# © Copyright 2022 Dell Inc. or its subsidiaries. All Rights Reserved
# Pre-defined Key Match Operations
# Pre-defined Merge Operations
# Pre-defined Delete Operations
# If the "cmd" is derived from playbook, that is "want", the below
# line should be good enough:
# n_conf = cmd
# The config section each setting appears in is commented in this vvv column
# detection mode
# power management model
# disconnect type
# powerup mode
# power limit type
# other strings and such.
# Values for priority, power pairs, and power classification mode go here; these work because each is an easy-to-read word
# There are configs with other types, so catch those just to make sure this function doesn't break
# Leaf
# Keys part of added are new and put into changed_dict
# if it is a dict comparison, convert the dict into a single-entry list by adding 'config' as key
# get testkey of 'config'
# if testkey of 'config' is not in base data, introduce single entry list
# with 'temp_key' as config testkey and base_data as data.
# Generate a set with passed dictionary for comparison
# Check if input IPV4 is valid IP and expand IPV4 with its subnet mask
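A minimal sketch of the IPv4 validation/expansion mentioned above, using the stdlib `ipaddress` module; treating a bare address as a /32 host is an assumption, and the function name is illustrative:

```python
import ipaddress

def expand_ipv4(value):
    """Validate an IPv4 address or prefix and return (address, netmask).

    Raises ValueError for invalid input. Bare addresses are treated as
    /32 hosts (an assumption for this sketch).
    """
    net = ipaddress.IPv4Network(value if '/' in value else value + '/32',
                                strict=False)
    return str(net.network_address), str(net.netmask)
```

`strict=False` lets host bits through for prefixed input (e.g. `10.1.1.5/24`), returning the enclosing network rather than raising.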
# remove the space in the given string
# search the numeric character(digit)
# Interface naming mode affects only ethernet ports
# Replace whole dict.
# asplain asn:nn or 3-dots for IPv4:NN
# To create Loopback, VLAN interfaces
# Read the valid_speeds
# Dell OpenManage Ansible Modules
# Version 9.3.0
# Copyright (C) 2020-2024 Dell Inc. or its subsidiaries. All Rights Reserved.
# Version 9.8.0
# Version 7.0.0
# Copyright (C) 2020-2022 Dell Inc. or its subsidiaries. All Rights Reserved.
# Copyright (C) 2024 Dell Inc. or its subsidiaries. All Rights Reserved.
# Version 7.4.0
# Copyright (C) 2022-2023 Dell Inc. or its subsidiaries. All Rights Reserved.
# Copyright (C) 2020-2025 Dell Inc. or its subsidiaries. All Rights Reserved.
# Copyright (C) 2019-2025 Dell Inc. or its subsidiaries. All Rights Reserved.
# reboot applicable only if staging false
# regex filtering ++
# Version 9.12.3
# Copyright (C) 2020-2025 Dell Inc. or its subsidiaries.  All Rights Reserved.
# Version 9.10.0
# Copyright (C) 2022-2025 Dell Inc. or its subsidiaries. All Rights Reserved.
# reset_map = {"generate_csr": False, "import": True, "export": False, "reset": True}
# Getting the first item
# messages
# Version 9.11.0
# Version 9.12.0
# Copyright (C) 2021-2025 Dell Inc. or its subsidiaries. All Rights Reserved.
# Copyright (C) 2024-2025 Dell Inc. or its subsidiaries. All Rights Reserved.
# Copyright (C) 2023-2025 Dell Inc. or its subsidiaries. All Rights Reserved.
# Version 9.12.2
# Legacy BIOS boot mode and Force int 10 are not supported from 17G onwards
# boot mode is a readonly attribute and Force int 10 is not an attribute in 17G
# can be further optimized if len(mset) == 1
# Special case handling for RemoteCommand, appends 1 after every post call to "remotecommandaction"
# for Devices, DeviceTypes, Groups, Severities
# Fetch specific template
# Fetch all the templates based on Name
# Fetch all templates
# Remove odata keys ["@odata.context", "@odata.type", "@odata.id"]
# Idempotency
# All ports are listed, but those with "OpticsType": "NotPresent" are shown as such on the UI.
# considering tagged vlans take the 'Id'
# Version 9.12.1
# Copyright (C) 2018-2025 Dell Inc. or its subsidiaries. All Rights Reserved.
# if parameters.get(param_key) is not None:
# If device_id is a list, iterate through each id and add to the targets
# If device_id is a single string, add it directly to the targets
# Validate the 'host' and 'servicetag' parameter existence
# Validate the job_wait parameter
# Validate the date_time parameter
# Validate the enter_maintenance_mode_timeout parameter
# Ensure host_ids is a list for uniform processing
# Version 9.9.0
# Version 9.12.4
# to disable universal timeout set value to -1
# int to str
# where enable NIC is present
# module.warn(json.dumps(diff))
# Version 7.1.0
# validating for each vd options
# To fetch drives data
# To fetch volumes data
# To fetch enclosures
# Version 7.6.0
# 30x10= 300 secs
# check for the SYS1003 after 2/3rds of retries
# msg = "{0}{1}".format(BIOS_RESET_TRIGGERED, "LOOPOVER")
# msg = "{0}{1}".format(BIOS_RESET_TRIGGERED, str(ex))
# if module.params.get("force", True) == False:
# Removing invalid keys from the payload
# assuming OnReset is always
# changes staged in pending attributes
# "job_params": {"type": "dict"}
# bond_tex = profile["BondingTechnology"]
# ignore_bond = 0 if profile['BondingTechnology'] == 'LACP' else -1
# if ignore_teaming else False
# , payload=payload)
# Performing patch twice because in iDRAC10, it returns 200 but does not update all values
# The server reports the password as blank even though it is given; this is a workaround and will be fixed in the future
# Escape special characters safely
# Build a clean character class: a-z, A-Z, 0-9, space, and escaped specials
# Version 9.4.0
# Hardcoding it as false because job tracking is done in idrac_redfish.py as well.
# Assigning it as false because job tracking is done in idrac_redfish.py as well.
# Assigning it as ALL because it is the only target for preview.
# Assigning it as false because job tracking is done in idrac_redfish.py as well
# Sorting based on startTime and to get latest execution instance.
# ,xcount=len(prof_list))
# For delete operation no response content is returned
# Export Destination
# setup DNS
# set up idrac vlan
# set up NIC
# setup iDRAC IPV4
# setup iDRAC Static IPv4
# Version 9.5.0
# To be overridden by the subclasses
# Special case handling for .gz catalog files
# setup SNMP Trap Destination
# setup Email Alerts
# setup iDRAC Alerts
# setup SMTP
# response_attr[k] = response.json_data[ATTR].get(k)
# setup NTP
# set up timezone
# if reset has just been done, check the service availability
# discover the Lifecycle Controller URL
# check the Lifecycle Controller status
# iDRAC 9 has generation == 16
# CustomDefaults is supported from 7.00.00.00 in iDRAC 9 and above
# Get Lifecycle Controller status
# Enable csior
# Disable csior
# Base URI to fetch all logical networks information
# Module Success Message
# Module Failure Messages
# Fetch network type and qos type information once
# Update each network type with qos type info
# Form URI to fetch network VLAN information
# Get network type and Qos Type information
# Update each network VLAN with network type and qos type information
# share_detail["ShareName"] = boot_iso_dict.get("share_name") if boot_iso_dict.get("share_name") else sh_ip
# case sensitive; remove whitespaces for optimization
# module.exit_json(attr_detailed=attr_detailed, inp_attr=disp_adv_list, payload_attr=payload_attr, adv_list=adv_list)
# module.warn(json.dumps(nest_diff))
# migrate applicable in deployed state only
# For Redfish
# When the user has given only invalid attributes, diff will be 0 and _invalid_attr will have a dictionary.
# Expecting HTTP Error from server.
# For Network hierarchy View in a Template
# Add ?$top=9999 if not query
# ports
# partitions
# all_templates = True
# all_templates = False
# if vlan_info is not None and not all_templates:
# For check mode changes.
# time gap between two consecutive job triggers
# optimize this
# Running, Queued, Starting, New
# 3 times retry for HTTP error
# Running, not failed, not completed state
# Failed states - job not running
# Refresh is secondary task hence not failing module
# Storage type 3000
# Mandatory
# GROUPS_HIERARCHY = "GroupService/AllGroupsHierarchy"
# Checking id first as name has a default value
# Static members
# For Query Groups MembershipTypeId = 24
# preserving list order
# not mandatory
# The order in list needs to be maintained
# POST Call taking average 50-60 seconds so api_timeout=120
# checking the domain service
# checking any existing running job
# test network connection
# validation for device id/tag/group
# exit if running in check mode
# extract log job operation
# 'version': None
# 'authenticationPassphrase': None,
# 'authenticationProtocol': None,
# 'localizationEngineID': None,
# 'privacyPassphrase': None,
# 'privacyProtocol': None,
# 'securityName': None
# Special handling, duplicating wsman to redfish as in GUI
# iDRAC credentials
# setup Webserver
# set up SNMP settings
# nic_group = nic_model[0]['SubAttributeGroups']
# VlanAttributes
# Will not report duplicates
# Version 8.4.0
# polling interval
# For job_wait False return a valid response, try 5 times
# apply now
# apply on
# TIMED OUT
# get any single component update failure and record only the very first failure when failed_status is True
# 'SYS226' Unable to transfer a file, Catalog/Catalog.xml, because of the
# reason described by the code 404 sent by the HTTP remote host server.
# 'SYS252' Unable to transfer a file, Catalog/Catalog.xml, because the file is
# not available at the remote host location.
# 'SYS261' Unable to transfer the file, Catalog/catalog.xml, because initial network
# connection to the remote host server is not successfully started.
# Returns from OMSDK
# Redfish
# proxy params
# ['proxy_type', 'SOCKS', ('proxy_port',)],
# Validate the catalog file
# Connect to iDRAC and update firmware
# device_list = get_group_devices_all(rest_obj, DEVICE_URI)
# append ids for service tags
# set to eliminate duplicates
# remove if exists as it is not required for create payload
# check attributes
# module.exit_json(attrib_dict=attrib_dict, modify_payload=modify_payload)
# Type is mandatory for import
# remove if exists as it is not required for import payload
# template_name = template.get('Name')
# already same template deployed
# Fetch specific account
# Fetch all the user based on UserName
# Fetch all users
# check for 200 status as GET only returns this for success
# Validate the job wait parameter
# Validate the time parameter
# Validate the repository profile
# Validate the cluster names
# "firmwareRepoId": payload.get("firmwareRepoId"),
# Retrieve job_schedule from drift API call
# Construct the 'after' payload for comparison
# Prepare the new payload
# If the error can't be loaded as JSON, capture it as a plain string
# when job is scheduled
# when job timed out
# OverrideLLDPConfiguration attribute not supported in msm 1.0 version
# update id/name in case of modify operation
# Version 9.6.0
# Create a mapping of conditions to actions
# Iterate over message IDs and determine the action
# Fetch specific job
# query applicable only for all jobs list fetching
# Fetch all jobs, filter and pagination options
# Mapping of command to their respective actions
# Remove the header and footer
# Format the remaining string with proper line breaks
# Add the header and footer back
# Dell OpenManage Ansible Module
# Main()
# see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt
# Version 9.2.0
# ome_job_status_map = {
# ensure job states are mutually exclusive
# idrac_redfish_job_states = [ "New", "Scheduled", "Running", "Completed", "Downloading", "Downloaded",
# "Scheduling", "ReadyForExecution", "Waiting", "Paused", "Failed", "CompletedWithErrors", "RebootPending",
# "RebootFailed", "RebootCompleted", "PendingActivation", "Unknown"]
# unrecognised states, just wait
# Can this be in idrac_redfish???
# LCStatus remains 'Ready' even after triggering restart,
# so wait a few seconds before the loop
# Ensure the time is a string (or cast it to a string if it's not)
# Check if the time matches the 24-hour format (HH:MM)
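The HH:MM check described in the two comments above can be sketched with a regex; the function name is illustrative, not the module's actual API:

```python
import re

# Anchored pattern for 24-hour HH:MM: hours 00-23, minutes 00-59
TIME_24H = re.compile(r'^([01]\d|2[0-3]):[0-5]\d$')

def is_valid_24h_time(value):
    """Return True if value (cast to str) matches the 24-hour HH:MM format."""
    return bool(TIME_24H.match(str(value)))
```

Casting with `str()` first covers the "ensure the time is a string" step, so non-string input fails the match instead of raising.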
# Version 9.1.0
# Copyright: (c) 2025, Dell Technologies
# Copyright (C) 2025 Dell Inc. or its subsidiaries. All Rights Reserved.
# Version 9.13.0
# Hardcoding value, since no api is available
# Get storage collection
# Get storage entity details (RAID controller)
# Waiting here because the response comes back empty on the first call
# Check if cluster_names is None or empty
# Fetch all available clusters
# If the response is a list (as per the error), return it directly
# If the response has a json_data attribute, return json_data
# Map cluster name to entity ID (clustId)
# Fetch group IDs for the identified cluster IDs
# Determine the groups that need to be added or removed
# Check if the required parameters are provided
# Construct the payload for the PATCH request
# Add the optional parameters if provided
# Construct the URL for the PATCH request
# Send the PATCH request using the appropriate method (e.g., self.omevv.invoke_request)
# Copyright: (c) 2024-2025, Dell Technologies.
# Apache License version 2.0 (see MODULE-LICENSE or http://www.apache.org/licenses/LICENSE-2.0.txt)
# Documentation fragment for Unity (unity)
# Copyright: (c) 2020-2025, Dell Technologies
# initialize the ansible module
# result is a dictionary that contains changed status and
# nas server details
# Checking all parameters individually because the names returned by
# the nas obj are different from the ansible parameter names.
# Current Unix Directory Service
# Rename NAS Server
# Is Replication Destination
# Is Multiprotocol Enabled
# Is Back Up Enabled
# Is Packet Reflect Enabled
# Allow Unmapped User
# Enable Windows To Unix User Mapping Flag
# Default Windows User
# Default Unix User
# Validate replication params
# Get remote system
# Form parameters when replication_reuse_resource is False
# Validate replication
# validate destination pool info
# Validate replication mode
# Validate replication type
# Validate destination NAS server name
# Get the enum for the corresponding offline_availability
# As creation is not supported and if NAS Server does not exist
# along with state as present, then error will be thrown.
# As deletion is not supported and if NAS Server exists along with
# state as absent, then error will be thrown.
# Contain hosts input & output parameters
# Default_access mapping. Keys are given by the user & values are
# accepted by the SDK
# Validate whether it is an FQDN or not
# In case of incorrect name
# In case of incorrect id, sdk return nas object whose attribute
# existed=false, instead of raising UnityResourceNotFoundError
# Get nfs details from nfs ID
# nfs from snap
# Get nfs details from nfs name
# This block will be executed, when we are trying to get nfs
# details using nfs name & nas server.
# nfs is instance of UnityNfsShare class
# in case of incorrect id, sdk returns nfs object whose
# attribute existed=False
# Since we are supporting HOST STRING parameters instead of HOST
# parameters, so lets change given input HOST parameter name to
# HOST STRING parameter name and strip trailing ','
# SDK have param named 'root_access_hosts_string' instead of
# 'read_write_root_hosts_string'
# create nfs from FILESYSTEM take 'share_access' as param in SDK
# Share to be created from filesystem
# Share to be created from snapshot
# Existing nfs host is empty so lets directly add
# new_host_str as it is
# Lets extends actual_to_add list, which is new with existing
# Since SDK takes host_str as ',' separated instead of list, so
# lets convert str to list
# Note: explicit str() is needed here to convert IPv4/IPv6 objects
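The comma-separated round-trip described in the comments above might look like this; both helper names are assumptions for illustration:

```python
def hosts_to_list(host_str):
    """Split the SDK's comma-separated host string into a clean list,
    dropping empty entries from a trailing ','."""
    return [h.strip() for h in str(host_str).split(',') if h.strip()]

def hosts_to_str(hosts):
    """Join host entries back into the comma-separated form the SDK expects.

    str() is applied so IPv4/IPv6 address objects serialize cleanly."""
    return ','.join(str(h) for h in hosts)
```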
# existing list is already empty, so nothing to remove
# present-in-export
# absent-in-export
# Get nfs Share
# Delete nfs Share
# delete_nfs_share() does not return any value
# In case of successful delete, set nfs_obj to None
# to avoid fetching and displaying attributes
# create
# modify
# Get display attributes
# Adding filesystem_name to nfs_share_details
# Adding nas server details
# Adding snap.id & snap.name if nfs_obj is for snap
# initialize the Ansible module
# Check if existing snapshot schedule has auto_delete = True and
# playbook sets desired retention without mentioning auto_delete
# Check if rule type is modified
# Convert desired retention to seconds
# Check if common parameters for the rules getting modified
# result is a dictionary that contains changed status and snapshot
# schedule details
# Verify if modify is required. If not required, return False
# result is a dictionary to contain end state and FileSystem details
# Copyright: (c) 2022-2025, Dell Technologies
# Check for domain
# Check file interfaces
# result is a dictionary that contains changed status and CIFS server details
# Validate the parameters
# Check if modification is required
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt))
# Check for Extend Credential
# result is a dictionary that contains changed status and NFS server details
# snapshot details
# There might be a case where SMB share with same name exists
# for different nas server. Hence, filesystem_obj is passed
# along with share name to get a unique resource.
# for different nas server. Hence, snap_obj is passed
# This elif addresses the scenario where nas server details are
# passed and neither filesystem nor snapshot details are passed.
# Multiple smb shares can be received, as only name is passed
# Checking if instance or list of instance is returned.
# Below statements will execute when there is only a single
# smb share returned.
# nas_server_obj is required to uniquely identify filesystem
# resource. If neither nas_server_name nor nas_server_id
# is passed along with filesystem_name then error is thrown.
# Checking if filesystem supports SMB protocol or not.
# Snapshot Name and Snapshot ID both are unique across array.
# Hence no need to mention nas server details
# Get Snapshot NAME and ID if SMB share exists for Snapshot
# Get Filesystem NAME and ID
# Get NAS server NAME and ID
# Append details of host mapped to the consistency group
# in return response
# Add volume name to the dict
# Add snapshot schedule name to the dict
# Status of cg replication
# Check if volume is already part of another consistency group
# Get cg instance
# Check if replication is enabled for cg
# Update replication params
# Get destination pool id
# Get replication mode
# Form destination LUNs list
# Form destination cg name
# result is a dictionary that contains changed status and consistency
# group details
# Validate destination cg name
# If the snapshot has is_auto_delete True,
# Check if auto_delete in the input is either None or True
# Get Consistency Group Object
# Get host object for volume snapshots
# Check whether host_name or host_id is given in input
# along with host_state
# Check for error, if user tries to create a snapshot with the
# same name for other storage resource.
# check for valid expiry_time
# Check if in input auto_delete is True and expiry_time is not None
# Check whether to modify the snapshot or not
# Create a Snapshot
# Update the Snapshot
# Delete the Snapshot
# Add snapshot details to the result.
# If snapshot is not attached to any host.
# Copyright: (c) 2023-2025, Dell Technologies
# prepare io_limit_policy object
# prepare snap_schedule object
# result is a dictionary to contain end state and volume details
# this is for removing existing snap_schedule
# filesystem snapshot details
# Checking whether nfs/smb share created from fs_snapshot
# Get NAS server Object
# Check for error, if user tries to create a filesystem snapshot
# with the same name.
# check for fs_access_type
# Check whether to modify the filesystem snapshot or not
# Create Filesystem Snapshot
# Delete the Filesystem Snapshot
# Add filesystem snapshot details to the result.
# host details
# Create new host
# Modify host (Attributes and ADD/REMOVE Initiators)
# Modify host
# Add Initiators to host
# Remove initiators from host
# manage network address
# Delete a host
# Copyright: (c) 2021-2025, Dell Technologies
# result is a dictionary that contains changed status and storage pool details
# Create storage pool
# Get pool drive details
# All user quotas in the given filesystem
# Check the sharing protocol supported by the filesystem
# while creating a user quota
# Modify user quota. If no change modify_user_quota is false.
# result is a dictionary that contains changed status and Interface details
# noqa   # pylint: disable=unused-import
# Copyright: (c) 2022, Dell Technologies
# Documentation fragment for PowerFlex
# Copyright: (c) 2024, Dell Technologies
# edit resource group
# Add nodes
# Assign names to unnamed NVMe hosts and find the target host
# if created or modified, set changed to true
# RM cache size cannot be set unless RM cache is enabled
# Restructure IP-role parameter format
# set rmcache size in KB
# identify IPs to add or roles to update
# identify IPs to add
# identify IPs whose role needs to be updated
# identify IPs to remove
# Append protection domain name
# Append rmcache size in MB
# Append fault set name
# remove IPs from SDS
# add IPs to SDS
# update IP's role for an SDS
# filter out NVMe host entities
# Add name to NVMe hosts without giving name
# Get the list of nvme hosts
# multiple filters on same key
# prev_val is list, so append new f_val
# prev_val is not list,
# so create list with prev_val & f_val
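The repeated-key filter accumulation described in the four comments above, as a hypothetical sketch (the function name is illustrative):

```python
def build_filters(filter_pairs):
    """Collect (key, value) filter pairs; repeated keys accumulate into a list."""
    filters = {}
    for f_key, f_val in filter_pairs:
        if f_key in filters:
            prev_val = filters[f_key]
            if isinstance(prev_val, list):
                prev_val.append(f_val)  # prev_val is a list, so append f_val
            else:
                # prev_val is not a list, so create a list with prev_val & f_val
                filters[f_key] = [prev_val, f_val]
        else:
            filters[f_key] = f_val
    return filters
```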
# check in master MDM
# check in secondary MDMs
# check in tie-breaker MDMs
# check in standby MDMs
# Append Performance profile
# Append list of configured MDM IP addresses
# check in master node
# check in secondary nodes
# check in tie-breaker nodes
# check in Standby nodes
# result is a dictionary to contain end state and MDM cluster details
# Add standby MDM
# Update performance profile
# Rename MDM
# Change MDM virtual IP interfaces
# change cluster mode
# Remove standby MDM
# change ownership of MDM cluster
# Setting Changed Flag
# Returning the updated MDM cluster details
# Checking whether owner of MDM cluster has changed
# Idempotency check for virtual IP interface
# Idempotency check for clear_interfaces
# clearing all virtual IP interfaces of MDM
# Copyright: (c) 2021, Dell Technologies
# Add ancestor volume name
# Add size in GB
# Add storage pool name
# Add retention in hours
# Match volume details with snapshot details
# Get datetime diff in hours
# result is a dictionary to contain end state and snapshot details
# A delta of two minutes is treated as idempotent
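The two-minute idempotency delta mentioned above could be checked like this; `retention_matches` is an illustrative name, not the module's API:

```python
from datetime import datetime, timedelta

def retention_matches(existing, desired, tolerance=timedelta(minutes=2)):
    """Treat two retention timestamps as equal if within the tolerance delta."""
    return abs(existing - desired) <= tolerance
```

Comparing with a tolerance avoids spurious "changed" results when the array rounds or slightly shifts the stored retention time.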
# result is a dictionary to contain end state and SDC details
# identify IPs to add or remove or roles to update
# Check if any new IPs need to be added
# Check if any IPs need to be removed
# Check if any IPs need to have their roles updated
# Append size in GB in the volume details
# Append storage pool name and id.
# Append protection domain name and id
# Append snapshot policy name and id
# Basic volume created.
# get volume details
# create operation
# checking if basic volume parameters are modified or not.
# Mapping the SDCs to a volume
# Unmap the SDCs to a volume
# Update the basic volume attributes
# delete operation
# Returning the updated volume details
# Append the list of snapshots associated with the volume
# Append statistics
# Unique ways to identify a device:
# (current_pathname , sds_id)
# (current_pathname , sds_name)
# (device_name , sds_name)
# (device_name , sds_id)
# device_id.
# result is a dictionary to contain end state and device details
# validate input parameters
# get SDS ID from name
# get device details
# Get device id
# add operation
# modify operation
# get Protection Domain ID from name
# it is needed to uniquely identify a storage pool or acceleration
# pool using name
# remove operation
# Returning the updated device details
# get storage pool ID from name
# get acceleration pool ID from name
# Append SDS name
# Append storage pool name and its protection domain name and ID
# Append acceleration pool name and its protection domain name
# and ID
# Get remote system details
# result is a dictionary to contain end state and RCG details
# get RCG details
# perform create
# Returning the RCG details
# Copyright: (c) 2021-24, Dell Technologies
# adding protection domain name in the pool details
# Append storage pool list present in protection domain
# Creation of Protection domain
# result is a dictionary to contain end state and protection domain
# Checking invalid value for id, name and rename
# get Protection Domain details
# checking if basic protection domain parameters are modified or not
# Returning the updated Protection domain details
# Copyright: (c) 2024-2025, Dell Technologies
# Initialize the ansible module
# (c) 2022, John McCall (@lowlydba)
# Options for object state.
# Options for authenticating with SQL Authentication.
# (c) 2021, Sudhir Koduri (@kodurisudhir)
# Copyright: (c) 2021, NetApp Ansible Team <ng-ansibleteam@netapp.com>
# Documentation fragment for CLOUDMANAGER
# (c) 2021, NetApp, Inc
# set up state variables
# Calling generic rest_api class
# get list of working environments
# Four types of working environments:
# azureVsaWorkingEnvironments, gcpVsaWorkingEnvironments, onPremWorkingEnvironments, vsaWorkingEnvironments
# get aggregates for each working environment
# Sleep for 2 minutes
# Taking too long for status to be active
# delete vm deploy
# delete interfaces deploy
# delete storage account deploy
# delete deployment
# Taking too long for terminating OCCM
# check if all the required parameters exist
# optional parameters
# check if aggregate exists
# check the action
# (c) 2022, NetApp, Inc
# Clean the default value if it is not a 'by Capacity' license
# Check mandatory parameters
# assume the agent does not exist anymore
# it's possible the VM instance does not exist, but the clients are still present.
# only one cifs server exists per working environment.
# enable_thin_provisioning reflects storage efficiency.
# When creating volume, 'Everyone' must have upper case E, 'everyone' will not work.
# When modifying volume, 'everyone' is fine.
# Always hard coded to true.
# Get extra azure tag from current working environment
# It is created automatically not from the user input
# default azure tag
# check the action whether to create, delete, or not
# Use source working environment to get physical properties info of volumes
# All the volumes in one aggregate have the same physical properties
# get account ID
# registerAgentTOServiceForGCP
# add proxy_certificates as part of json data
# convert response to json format
# getCustomDataForGCP
# compose
# first resource
# The template must be in this format:
# {
# - name: xxxx
# "
# post
# check occm status
# Sleep for 1 minute
# check proxy configuration
# add to proxy_certificates list
# region is the super class of zone. For example, zone us-east4-b is one of the zones in region us-east4
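The zone-to-region relationship noted above can be sketched with a small helper (the function name is hypothetical); GCP zone names are assumed to follow the usual `<region>-<letter>` pattern:

```python
def zone_to_region(zone):
    """Derive the GCP region from a zone name.

    Assumes the usual '<region>-<letter>' naming, e.g. 'us-east4-b'
    belongs to region 'us-east4'.
    """
    return zone.rsplit('-', 1)[0]
```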
# deploy GCP VM
# sleep for 30 sec
# Copyright (c) 2017-2021, NetApp Ansible Team <ng-ansibleteam@netapp.com>
# Here, 1 kb = 1024
# if True, append REST requests/responses to /tmp/cloudmanager_apis.log
# if True, and if trace_apis is True, include <large> headers in trace
# if True, it is running on simulator
# requires trace_apis to do anything
# most requests are sent to Cloud Manager, but for connectors we need to manage VM instances using AWS, Azure, or GCP APIs
# add host if API starts with / and host is not already included in self.url
# we observe this error with DELETE on agents-mgmt/agent (and sometimes on GET)
# If the response was successful, no Exception will be raised
# If an error was reported in the json payload, it is handled below
# success
# status value 0 means pending
# Copyright (c) 2022, Laurent Nicolas <laurentn@netapp.com>
# convert to lower case for string comparison.
# if list has string element, convert string to lower case.
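The two comments above describe case-insensitive comparison of strings and of string elements inside lists. A minimal sketch of such a normalizer (the helper name is an assumption, not the module's actual function):

```python
def normalize_for_compare(value):
    """Lowercase a string, or the string elements of a list,
    for case-insensitive comparison; other types pass through."""
    if isinstance(value, str):
        return value.lower()
    if isinstance(value, list):
        return [v.lower() if isinstance(v, str) else v for v in value]
    return value
```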
# change in state
# check the working environment exist or not
# get working environment lists
# look up the working environment in the working environment lists
# TODO? do we need to create an account?  And the code below is broken
# headers = {
# api = '/tenancy/account/MyAccount'
# account_res, error, dummy = rest_api.post(api, header=headers)
# account_id = None if error is not None else account_res['accountPublicId']
# return account_id, error
# TODO? creating an account is not supported
# return self.create_account(rest_api)
# if the object does not exist,  we can't modify it
# error out if keys do not match
# self.check_keys(current, desired)
# collect changed attributes
# get modified list from current and desired
# get what is in desired and not in current
# get what is in current but not in desired
# there are changes
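The change-detection flow above (skip if the object does not exist, collect changed attributes, and for lists compute what to add and what to remove) can be sketched as follows; the function name and the add/remove result shape are assumptions:

```python
def get_modified_attributes(current, desired):
    """Return the subset of desired that differs from current.

    For list values, report what must be added (in desired but not in
    current) and removed (in current but not in desired)."""
    modified = {}
    if current is None:
        return modified  # if the object does not exist, we can't modify it
    for key, value in desired.items():
        if key not in current:
            continue
        if isinstance(value, list) and isinstance(current[key], list):
            to_add = [v for v in value if v not in current[key]]
            to_remove = [v for v in current[key] if v not in value]
            if to_add or to_remove:
                modified[key] = {'add': to_add, 'remove': to_remove}
        elif value != current[key]:
            modified[key] = value
    return modified
```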
# I tried to query by name and provider in addition to account_id, but it returned everything
# GET /vsa/working-environments/{workingEnvironmentId}?fields=status,awsProperties,ontapClusterProperties
# Ignore auto generated gcp label in CVO GCP HA
# python 2.6 does not support set comprehension
# check if any current gcp_labels are going to be removed or not
# gcp HA has one extra gcp_label created automatically
# check if any current key labels are in the desired key labels
# no change
# azure has one extra azure_tag DeployedByOccm created automatically and it cannot be modified.
# Check if tags/labels of desired configuration in current working environment
# get working environment details by working environment ID
# compare tags
# no tags in current cvo
# if both are empty, no need to update
# Ignore auto generated gcp label in CVO GCP
# 'count-down', 'gcp_resource_id', and 'partner-platform-serial-number'(HA)
# no tags in input parameters
# has tags in input parameters and existing CVO
# Permutation query example:
# aws: /metadata/permutations?region=us-east-1&instance_type=m5.xlarge&version=ONTAP-9.10.1.T1
# azure: /metadata/permutations?region=westus&instance_type=Standard_E4s_v3&version=ONTAP-9.10.1.T1.azure
# gcp: /metadata/permutations?region=us-east1&instance_type=n2-standard-4&version=ONTAP-9.10.1.T1.gcp
# The examples of the ontapVersion in ontapClusterProperties response:
# AWS for both single and HA: 9.10.1RC1, 9.8
# AZURE single: 9.10.1RC1.T1.azure. For HA: 9.10.1RC1.T1.azureha
# GCP for both single and HA: 9.10.1RC1.T1, 9.8.T1
# To be used in permutation:
# AWS ontap_version format: ONTAP-x.x.x.T1 or ONTAP-x.x.x.T1.ha for Ha
# AZURE ontap_version format: ONTAP-x.x.x.T1.azure or ONTAP-x.x.x.T1.azureha for HA
# GCP ontap_version format: ONTAP-x.x.x.T1.gcp or ONTAP-x.x.x.T1.gcpha for HA
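A sketch of building the permutation-query version string from the formats listed above (the function name and argument shapes are hypothetical):

```python
def permutation_ontap_version(version, provider, is_ha):
    """Build the ontap_version string used in the permutation query.

    Formats (per the comments above):
      aws:   ONTAP-x.x.x.T1        or ONTAP-x.x.x.T1.ha      for HA
      azure: ONTAP-x.x.x.T1.azure  or ONTAP-x.x.x.T1.azureha for HA
      gcp:   ONTAP-x.x.x.T1.gcp    or ONTAP-x.x.x.T1.gcpha   for HA
    """
    name = 'ONTAP-%s.T1' % version
    if provider == 'aws':
        return name + ('.ha' if is_ha else '')
    if provider == 'azure':
        return name + '.azure' + ('ha' if is_ha else '')
    if provider == 'gcp':
        return name + '.gcp' + ('ha' if is_ha else '')
    raise ValueError('unknown provider: %s' % provider)
```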
# Get current working environment property
# instanceType in aws case is stored in awsProperties['instances'][0]['instanceType']
# check if license type is changed
# AWS ontap_version format: ONTAP-x.x.x.Tx or ONTAP-x.x.x.Tx.ha for Ha
# AZURE ontap_version format: ONTAP-x.x.x.Tx.azure or .azureha for HA
# GCP ontap_version format: ONTAP-x.x.x.Tx.gcp or .gcpha for HA
# Tx is not relevant for ONTAP version. But it is needed for the CVO creation
# upgradeVersion imageVersion format: ONTAP-x.x.x
# Updates of the following are not supported, and will respond with a failure.
# get CVO status
# get current svmName
# check upgrade status
# get ONTAP image version
# upgrade
# set flag
# Copyright: (c) 2018-2022, Sumit Kumar <sumit4@netapp.com>, chris Archibald <carchi@netapp.com>
# Copyright (c) 2018-2025, NetApp, Inc
# Documentation fragment for ONTAP (na_ontap) that contains REST
# Documentation fragment for ONTAP (na_ontap) that are ZAPI ONLY
# Documentation fragment for ONTAP (na_ontap) that are REST ONLY
# Documentation fragment for ONTAP (na_ontap) peer options
# (c) 2020-2025, NetApp, Inc
# ZAPI
# we need the SVM UUID to add banner or motd if they are not present
# by default REST adds a trailing \n if no trailing \n set in desired message/banner.
# rstrip \n only when the desired message/banner does not have a trailing \n, to preserve idempotency.
# if the message is '-' that means the banner doesn't exist.
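A minimal sketch of the trailing-newline handling described above (helper name hypothetical): the REST-added trailing \n is stripped from the current value only when the desired text has none, and '-' is treated as "no banner":

```python
def normalize_current_banner(desired, current):
    """Normalize the banner/motd returned by REST for comparison with
    the desired value, preserving idempotency."""
    if current == '-':
        return ''  # '-' means the banner does not exist
    if not desired.endswith('\n'):
        # REST adds a trailing \n by default; drop it when the desired
        # text has none, so identical banners compare equal
        return current.rstrip('\n')
    return current
```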
# (c) 2019-2025, NetApp, Inc
# Set up Rest API
# return portset details
# This will form ports list for fcp, iscsi and mixed protocols.
# list of desired ports not present in the node.
# list of uuid information for each desired port that should be present in the broadcast domain.
# Error if any of provided ports are not found.
# list of lifs not present in the vserver
# dict with each key is lif name, value contains lif type - fc or ip and uuid.
# ignore error on fc if ip interface is found
# ignore error on ip if fc interface is found
# (c) 2018-2025, NetApp, Inc
# add more desired attributes that are allowed to be modified
# compose query
# set up required variables to create a snapshot
# Set up optional variables to create a snapshot
# Set up required variables to delete a snapshot
# set up optional variables to delete a snapshot
# Create query object, this is the existing object
# this is what we want to modify in the snapshot object
# set up required variables to rename a snapshot
# with REST, rename forces a change in modify for 'name'
# Required items first
# Optional items next
# (c) 2018-2022, NetApp, Inc
# igroup may have 0 initiators.
# to cache calls to get_node_uuid
# only accept default value for these 5 options (2 True and 3 False)
# accept the default value (for replace_package, this is implicit for REST) but switch to ZAPI or error out if set to False
# accept the default value of False, but switch to ZAPI or error out if set to True
# return firmware image details
# acp firmware version upgrade required
# return firmware image update progress details
# Current firmware version matches the version to be installed
# with netapp_lib, error.code may be a number or a string
# API did not finish on time
# even if the ZAPI reports a timeout error, it does it after the command completed
# Bad Gateway
# ONTAP proxy breaks the connection after 5 minutes, we can assume the download is progressing slowly
# command completed, check for success
# progress only show the current or most recent update/install operation.
# burt 1442080 - when timeout is 30, the API may return a 500 error, though the job says download completed!
# disk_qual, disk, shelf, and ACP are automatically updated in background
# The SP firmware is automatically updated on reboot
# can't force an update if the software is still downloading
# service-processor firmware upgrade
# we don't know until we try the upgrade
# shelf firmware upgrade
# with check_mode, we don't know until we try the upgrade -- assuming the worst
# acp firmware upgrade
# Disk firmware upgrade
# (c) 2018-2024, NetApp, Inc
# some attributes are not supported in REST implementation
# 13115 (EINVALIDINPUTERROR) if the node does not exist
# since from_exists contains the node name, modify will at least contain the node name if a rename is required.
# (c) 2023-2025, NetApp, Inc
# Modify current filter to remove auto added rule of type exclude, from testing it always appears to be the last element
# Next check if either one has no rules
# Next let's check whether the rules are the same size; if not, we need to modify
# compare each field to see if there is a mismatch
# adding default values for fields under message_criteria
# set up variables
# rename and create are mutually exclusive
# create policy by renaming it
# check if policy should be assigned
# find out if the existing policy needs to be changed
# can't delete if already assigned
# (c) 2025, NetApp, Inc
# Error 14636 denotes an ipspace does not exist
# Error 13073 denotes an ipspace not found
# reset cd_action to None and add name to modify to indicate rename.
# If the path is passed as vol/vol1/ns it will be converted to ns for asa r2 systems.
# (c) 2021-2025, NetApp, Inc
# REST API should be used for ONTAP 9.6 or higher.
# making op_state active/idle is supported from 9.11.1 or later with REST.
# set the default value for ZAPI as before, as REST currently does not support this option.
# efficiency is enabled if dedupe is either background or both.
# it's disabled if both dedupe and compression are none.
# disable volume efficiency requires dedupe and compression set to 'none'.
# there are cases where ZAPI allows setting cross_volume_background_dedupe and inline_dedupe and REST does not.
# REST changes policy to default, so use policy in params.
# start/stop vol efficiency
# if any of the keys are set, efficiency gets enabled, error out if any of eff keys are set and state is absent.
# If volume efficiency does not exist for a given path to create, current is set to disabled.
# This is for ONTAP systems that do not enable efficiency by default.
# enable/disable, start/stop & modify vol efficiency handled in REST PATCH.
# Check whether any additional parameters need to be set after enabling
# volume efficiency; required for non-AFF systems
# key may not exist anymore, if modify is refreshed at line 686
# With the enabled and volume efficiency status removed,
# if anything remains in the modify dict, we need to modify.
# This checks whether the desired vlan already exists and returns interface_name and node
# Requires both broadcast_domain and ipspace in body
# of PATCH call if any one of it present in modify
# enabled key in POST call has no effect
# applying PATCH if there is change in default value
# ntfs_sd is not included in the response if there is not an associated value. Required for modify
# Modify endpoint is not functional.
# if policy doesn't exist, create the policy first.
# delete the task, not the policy.
# (c) 2017-2025, NetApp, Inc
# cached, so that we don't call the REST API more than once
# assuming precluster state
# Note: cannot use node_name here:
# 13001:The "-node-names" parameter must be used with either the "-node-uuids" or the "-cluster-ips" parameters.
# Error 36503 denotes node already being used.
# skip if error says no failed operations to retry.
# Unexpected, for delete one of cluster_ip_address, node_name is required.
# cluster-ip and node-name are mutually exclusive:
# 13115:Element "cluster-ip" within "cluster-remove-node" has been excluded by another element.
# wait is part of post_async for REST
# collecting errors, and retrying
# This API is not supported for 9.3 or earlier releases, just wait a bit
# wait is part of delete_async for REST
# delete only applies to node
# simulate REST behavior by sending to all nodes in the cluster
# check if query returns the expected cifs-share-access-control
# type is required for unix-user and unix-group
# a few keys in policy.statements will be configured with default values if not set in create.
# so remove None entries to avoid an idempotency issue on the next run.
# below keys can be reset with empty list.
# if cidr notation is not set on an ip, append /32.
# an ip address without cidr set will be returned with /32 on the next run.
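The /32 normalization above keeps the desired list idempotent against what ONTAP returns on the next run; a sketch (name hypothetical):

```python
def ensure_cidr(ips):
    """Append /32 to any address missing CIDR notation, matching what
    ONTAP returns on the next run for addresses set without a prefix."""
    return [ip if '/' in ip else ip + '/32' for ip in ips]
```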
# So we treat the SID as a string, as it can contain words or numbers.
# ONTAP will return it as a string, unless it is just
# numbers, in which case it is returned as an int.
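Since a purely numeric SID may come back as an int while anything else comes back as a string, comparing both sides as strings avoids a spurious diff; a sketch (name hypothetical):

```python
def sids_match(desired, current):
    """Compare SIDs as strings: ONTAP returns a SID as a string unless
    it is purely numeric, in which case it arrives as an int."""
    return str(desired) == str(current)
```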
# setting keys in each condition to None if not present, to avoid an idempotency issue.
# empty [] is used to reset policy statements.
# setting policy statements to [] to avoid an idempotency issue.
# volume key is not returned for NAS buckets.
# if desired statement length different than current, allow modify.
# continue to next if the current statement already has a match.
# no modify required, match found for the statement.
# break the loop and check next desired policy statement has match.
# match not found, switch to the next current statement and continue checking whether the desired statement is present.
# 'conditions' key in policy.statements is list type, each element is dict.
# if the len of the desired conditions different than current, allow for modify.
# check for modify if 'conditions' is the only key present in statement_modified.
# check for difference in each modify[policy.statements[index][conditions] with current[policy.statements[index][conditions].
# each condition should be checked for modify based on the operator key.
# operator is a required field for condition, if not present, REST will throw error.
# allow modify
# volume uuid returned only for s3 buckets.
# error if dhcp is set to v4 and address_type is ipv6.
# error if dhcp is set to v4 and manual interface options are present.
# check if job exists
# set dhcp: 'none' if the current dhcp is set to None, to avoid an idempotency issue.
# when trying to enable and set dhcp:v4 or a manual ip, the status will be 'not_setup' before the changes complete.
# In ZAPI, once the status is 'succeeded', it takes a few more seconds for the ip details to take effect.
# if the desired address_type is already configured in current, interface details will be returned.
# if the desired address_type is not configured in current, None will be set in the network interface options
# and setting either dhcp (for v4) or (ip_address, gateway_ip_address, netmask) will enable and configure the interface.
# if dhcp is enabled in REST, setting ip_address details manually requires dhcp: 'none' in params.
# if dhcp: 'none' is not in params, set it to False to disable dhcp and assign a manual ip address.
# error if trying to disable the service processor network status in REST.
# error if trying to enable when modify has neither dhcp nor (ip_address, netmask, gateway)
# disable dhcp requires configuring one of ip-address, netmask and gateway different from current.
# This module implements the operations for ONTAP MCC Mediator.
# The Mediator is supported for MCC IP configs from ONTAP 9.7 or later.
# This module requires REST APIs for Mediator which is supported from
# ONTAP 9.8 (DW) or later
# maintain backward compatibility
# we explicitly test for 0 as it would be converted to -1, which has a special meaning (all).
# other value errors will be reported by the API.
# if any of the job_hours, job_minutes, job_months, job_days are empty:
# it means the value is -1 using ZAPI convention
# adjust offsets if necessary
# adjust minutes if necessary, -1 means all in ZAPI and for our user facing parameters
# while REST returns all values
# it means the value is -1 for ZAPI
# -1 is a special value
# -1 means all in zapi, while empty means all in api.
# need to set empty value for minutes as this is a required parameter
# Usually only include modify attributes, but omitting an attribute means all in api.
# Need to add the current attributes in params.
# job_minutes is mandatory for create
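A sketch of collapsing REST's exhaustive minutes list back to the ZAPI-style convention described above, where [-1] means "all" (the function name and the 60-minute bound are my assumptions):

```python
def rest_to_zapi_minutes(minutes, maximum=60):
    """REST returns every value when 'all' minutes are selected; ZAPI and
    the user-facing parameters use [-1] to mean 'all'."""
    if minutes and sorted(minutes) == list(range(maximum)):
        return [-1]
    return minutes
```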
# The rest default is to delete all users, and empty bucket attached to a service.
# This would not be idempotent, so switching this to False.
# second issue, delete_all: True will say it deleted, but the ONTAP system will show it's still there until the job for the
# delete buckets/users/groups is complete.
# Once the service is created, bucket and user can not be modified by the service api, but only the user/group/bucket modules
# load ~/.ssh/known_hosts if it exists
# accept unknown key, but raise a python warning
# ONTAP makes copious use of \r
# if we don't close, we may see a TypeError
# count is a list of integers
# Set up required variables to modify snapshot policy
# Set up optional variables to modify snapshot policy
# User hasn't supplied any snapmirror labels.
# Identify schedules for deletion
# Identify schedules to be modified or added
# Schedule exists. Only modify if it has changed.
# New schedule
# Delete N-1 schedules no longer required. Must leave 1 schedule in policy
# at any one time. Delete last one afterwards.
# Modify schedules.
# Add 1 new schedule. Add other ones after last schedule has been deleted.
# Delete last schedule no longer required.
# Add remaining new schedules.
# set up required variables to create a snapshot policy
# User hasn't supplied any prefixes.
# zapi attribute for first schedule is schedule1, second is schedule2 and so on
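The numbered ZAPI attribute naming (schedule1, schedule2, and so on) can be sketched as (helper name hypothetical):

```python
def zapi_schedule_attributes(schedules):
    """Map an ordered list of schedule names to the numbered ZAPI
    attributes: first entry -> schedule1, second -> schedule2, etc."""
    return {'schedule%d' % (i + 1): name for i, name in enumerate(schedules)}
```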
# Set up optional variables to create a snapshot policy
# Set up required variables to delete a snapshot policy
# REST API support for create, delete and modify snapshot policy
# User hasn't supplied any retention period values.
# User hasn't supplied any prefix.
# Identify schedules to be deleted
# Delete N schedules no longer required if at least 1 schedule is to be retained
# Otherwise, delete N-1 schedules no longer required, as the policy must have at least 1 schedule
# Add 1 new schedule.  At least one schedule must be present, before we can delete the last old one.
# Don't sort schedule/prefix/count/snapmirror_label/retention_period lists as it can
# mess up the intended parameter order.
# (c) 2022-2025, NetApp, Inc
# the API returns _link in each user and policy record, which causes modify to be called
# If min_spares is not specified min_spares is 1 if SSD, min_spares is 2 for any other disk type.
# api requires 1 disk to be removed at a time.
# unassign disks if more disks are currently owned than requested.
# check to make sure we will have sufficient spares after the removal.
# unassign disks.
# take spare disks from partner so they can be reassigned to the desired node.
# assign disks to node.
# assign all unassigned disks to node
# unassign
# assign
# import untangle
# check if query returns the expected cifs-share
# modify is set in params, if not assign self.parameters for create.
# ZAPI accepts both 'show-previous-versions' and 'show_previous_versions', but only returns the latter
# (c) 2018-2025, NetApp Inc.
# API should be used for ONTAP 9.6 or higher, Zapi for lower version
# (c) 2022-2025, NetApp, Inc. GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Copyright: NetApp, Inc
# we can't update a tuple value, so rebuild the tuple
# with REST, we can have nested dictionaries
# key is absent when not transferring.  We convert this to 'idle'
# not found, or no match
# (c) 2021-2024, NetApp, Inc
# get just the P1 or P2 partitions
# build dict of partitions and disks to be unassigned/assigned
# are there enough unassigned partitions to meet the requirement?
# check for unassigned disks
# not enough unassigned disks
# check for partner spare partitions
# we have enough spare partitions and don't need any disks
# we don't know if we can take all of the spare partitions as we don't know how many spare disks there are
# spare partitions != spare disks
# we have enough spare disks so can use all spare partitions if required
# now we know we have spare disks, we can take as many spare partitions as we need
# we need to take some spare disks as well as using any spare partitions
# are the required partitions more than the currently owned partitions and spare disks? if so, we need to assign
# which action needs to be taken
# now that we have calculated where the partitions and disks come from we can take action
# form current with from_name.
# allow for rename and check for modify with current from from_name.
# (c) 2020-2025, NetApp Inc.
# normalize case, using user inputs
# ONTAP 9.6 and 9.7 do not support name.  We'll change this to True if we detect an issue.
# API should be used for ONTAP 9.6 or higher
# REST allows setting cluster/admin svm in create certificate, but no records returned in GET.
# error if data svm not found
# let's attempt a retry using common_name
# report success, or any other error as is
# special key: svm
# TODO: add telemetry for REST
# validate as much as we can in check_mode or not
# Supported_ciphers is supported in ZAPI only.
# UUID if the FlexClone if it exists, or after creation
# use cluster ZAPI, as vserver ZAPI does not support parent-vserver for create
# keep vserver for ems log and clone-get
# Error 15661 denotes a volume clone not being found.
# Check if clone is currently splitting. Whilst a split is in
# progress, these attributes are present in 'volume-clone-info':
# block-percentage-complete, blocks-scanned & blocks-updated.
# if it is a FlexClone, it is not split.
# if it is not a FlexClone, it can be either the result of a split, or a plain volume. We mark it as split,
# as it cannot be split again.
# in order to capture UUID
# the only thing that is supported is split
# check if CIFS Server NetBIOS Name is not given in input;
# if not add it to the given local user or group name for maintaining idempotency
# convert the privilege to lower case to match with GET response
# GNU General Public License v3.0+  (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# POST at SD level only expects a subset of keys in ACL
# validate that at least one suboption is true
# add default suboptions values if apply_to is not given
# error if identical acls are set.
# make sure options are set in each ACL, so we can easily match desired ACLs with current ACLs
# \ is the escape character in python, so \\ means \
# If we get 655865, the path we gave was not found, so we don't want to fail; we want to return None
# if the path does not exist and state is absent, return None with changed as False.
# REST does not return the keys when the value is False
# we already verified these options are consistent when present, so it's OK to override
# some fields are set to None when not present
# drop keys not accepted in body
# we already verified these options are consistent when present, so it's OK to override
# with 9.9.1, access_control is not supported.  It will be set to None in received ACLs, and omitted in desired ACLs
# but we can assume the user would like to see file_directory.
# We can't modify inherited ACLs.  But we can create a new one at a lower scope.
# if exact match of 2 acl found, look for modify in that matched desired and current acl.
# Ignore inherited ACLs
# only delete ACLs that matches the desired access_control, or all ACLs if not set
# validate_modify function will check a modify in acl is required or not.
# if neither patch-acls or post-acls required and modify None, set changed to False.
# split ACLs into four categories
# find a non empty list of ACLs
# use sorted to make this deterministic
# keep one category for create, and move the remaining ACLs into post-acls
# delete is not supported by the API, or rather a DELETE will only delete the SLAG ACLs and nothing else.
# so we just loop through all the ACLs
# delete ACLs first, to avoid conflicts with new or modified rules
# PATCH call succeeds, but it's not working: changes are not reflected
# modify before adding new rules to avoid conflicts
# changes in ACLs
# Make query
# Get Kerberos Realm details
# Other options/attributes
# Initialize NaElement
# Try to create Kerberos Realm configuration
# Try to modify Kerberos Realm
# Returns exact match
# extract base name if n exists
# security
# root
# REST: protocol.v3_enabled
# REST: protocol.v40_enabled
# REST: protocol.v41_enabled
# protocol.v41_features.pnfs_enabled
# REST: vstorage_enabled
# REST: protocol.v4_id_domain
# REST: transport.tcp_enabled
# REST: transport.udp_enabled
# REST: protocol.v40_features.acl_enabled
# REST: protocol.v40_features.read_delegation_enabled
# REST: protocol.v40_features.write_delegation_enabled
# REST: protocol.v41_features.acl_enabled
# REST: protocol.v41_features.read_delegation_enabled
# REST: protocol.v41_features.write_delegation_enabled
# REST: showmount_enabled
# This is what the old module did, not sure what happens if nfs doesn't exist.
# TODO: might return more than 1 record, find out
# Tested this out, in both create and modify, changing the service_state will enable and disabled the service
# during both a create and modify.
# set peer server connection
# if dest_hostname is present, peer_options is absent
# return cluster peer details
# check whether the peer lif or peer cluster is present in each peer cluster entry in current.
# if peer-lifs is not present in parameters, use peer_cluster to filter the desired cluster peer in current.
# generate passphrase in source if passphrase not provided.
# Default value for encryption.proposed is tls_psk.
# explicitly set to none if encryption_protocol_proposed options not present in parameters.
# create only if expected cluster peer relation is not present on both source and destination clusters
# will error out with appropriate message if peer relationship already exists on either cluster
# check and modify IP addresses of the logical interfaces used in peer relation
# on either source or destination cluster
# delete peer relation in cluster where relation is present
# validate leading and trailing forward slashes in unix_path & cifs_path
# NVMe status_admin needs to be down before deleting it
# ignore RBAC issue with FSx - BURT1525998
# modify can change the name for REST, as UUID is the key.
# ONTAP returns port in port ranges. 120 set is returned as 120-120
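Because ONTAP reports single ports as degenerate ranges (a port set as 120 comes back as 120-120), both forms need normalizing before comparison; a sketch (name hypothetical):

```python
def normalize_ports(ports):
    """Collapse degenerate ranges like '120-120' back to '120' so the
    desired and current port lists compare equal."""
    result = []
    for port in ports:
        text = str(port)
        if '-' in text:
            low, high = text.split('-', 1)
            text = low if low == high else text
        result.append(text)
    return result
```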
# if action is bypass/discard and auth_method is psk/pki, then security_key/certificate will be ignored in REST.
# so delete authentication_method, secret_key and certificate to avoid an idempotency issue.
# mapping protocol number to protocol to avoid an idempotency issue.
# Cannot get current IPsec policy with ipspace - burt1519419
# if self.parameters.get('ipspace'):
# query['ipspace.name'] = self.parameters['ipspace']
# Expected ONTAP to throw an error, but it restarts instead, when trying to set a certificate where auth_method is none.
# If the response contains a next link, we need to gather all records
# Update the message with the gathered info
# metrocluster doesn't have a records field, so we need to skip this
# Getting total number of records
# Volume_autosize returns KB and not B like Volume so values are shifted down 1
# API should be used for ONTAP 9.6 or higher, ZAPI for lower version
# increment size and reset are not supported with rest api
# firewall config for a node is always present, we cannot create or delete a firewall on a node
# modify users requires separate zapi.
# current is to add user when creating a group
# There can be SNMPv1, SNMPv2 (called community) or
# SNMPv3 (called usm users)
# access control does not exist in rest
# TODO: This module should have been called snmp_community as it only deals with community and not snmp
# metric supported from ONTAP 9.11.0 version.
# self.module.fail_json(msg="Error: vserver is a required parameter when using ZAPI")
# Error 15661 denotes a route doesn't exist.
# return if desired route already exists
# use existing metric if not specified
# restore the old route, create the route with the existing values
# Invalid value specified for any of the attributes
# even if metric is not set, 20 is set by default.
# create by renaming existing route if it exists
# destination and gateway combination is unique, and is considered like an id.
# So modify destination or gateway is considered a rename action.
# If one of 'destination', 'gateway' is not in the from field, use the desired value.
# (c) 2017-2022, NetApp, Inc
# stop iscsi service before delete.
# TODO: include other details about the lun (size, etc.)
# if we have a return_dr_id we don't need to loop anymore
# check if there is some FC group to delete
# some attributes are not supported in earlier REST implementation
# policy does not exist
# Ignore builtin rules
# For snapmirror policy rules, 'snapmirror_label' is required.
# Check size of 'snapmirror_label' list is 0-10. Can have zero rules.
# Take builtin 'sm_created' rule into account for 'mirror_vault'.
# 'keep' must be supplied as long as there is at least one snapmirror_label
# Make sure other rule values match same number of 'snapmirror_label' values.
# 'snapmirror_label' not supplied.
# Bail out if other rule parameters have been supplied.
# Schedule must be supplied if prefix is supplied.
# if policy type is omitted, REST assumes async
# To set 'create_snapshot_on_source' as 'False' requires retention objects label(snapmirror_label) and count(keep)
# Only modify snapmirror policy if a specific snapmirror policy attribute needs
# modifying. It may be that only snapmirror policy rules are being modified.
# Construct new rule. prefix and schedule are optional.
# Rule doesn't exist. Add new rule.
# Iterate existing rules.
# Existing rule isn't in parameters. Delete existing rule.
# Rule exists. Identify whether it requires modification or not.
# Get indexes of current and supplied rule.
# Check if keep modified
# Check if prefix modified
# Check if schedule modified
# Need 'snapmirror_label' to add/modify/delete rules
# Delete rules no longer required or modified rules that will be re-added.
# Add rules. At least one non-schedule rule must exist before
# a rule with a schedule can be added, otherwise zapi will complain.
# This will also delete all existing rules if everything is now obsolete
# policy_type is only required for create or modify
# Policy types 'mirror_vault', 'vault', 'async_mirror' are mapped to async policy type
# async_mirror accepts two choices with copy_all_source_snapshots or copy_latest_source_snapshot
# Policy types ''sync_mirror', 'strict_sync_mirror' are mapped to sync policy type
# Inconsistency in REST API - POST requires a vserver, but GET does not return it
# report any error even in check_mode
# (c) 2024-2025, NetApp, Inc
# expand parameters to match REST returned info
# eliminate any empty entry and add port when needed
# if an on-boarding passphrase is specified, it is on-boarding key management.
# if not, then it's external key management.
# try first at SVM level
# retry at cluster scope
# remove extra fields that are readonly and not relevant for modify
# ONTAP returns no record if external manager is configured but no server is present
# external key servers cannot be updated in PATCH, they are handled later
# ONTAP does not return a record when an external key manager is configured without any external server
# if check_old is current, we're good to change the passphrase.  For other errors, we'll just try again, we already warned.
# do we need to synchronize
# do we need to update the passphrase
# wrapping up
# order matters for key servers
# with onboard, changing a passphrase or synchronizing cannot be done in the same PATCH request
# set status, and if applicable errno/reason, and remove attribute fields
# collect errno and reason
# remove irrelevant info
# log first, then error out as needed
# change in auth_type
# we're already using chap
# change in chap inbound credentials
# change in chap outbound credentials
# use credentials from input
# set values from self.parameters as they may not show as modified
# use current values as inbound username/password are required
# PATCH fails if this is not present, even though there is no change
# if source_vserver is not present, set the destination_vserver value for an intra-vserver copy operation.
# lun already exists at destination
# need to copy lun
# for parameters having different key names in REST API and module inputs
# rename is handled in modify in REST.
# rename will enable the cifs server also, so disable it if service_state is stopped.
# destination username should be passed as name
# Retrieves the cluster configuration backup information
# Updates the cluster configuration backup information.
# if check_mode, don't attempt to change the password, but assume it would be changed
# when deleting, ignore previous errors, but report them if delete fails
# setup later if required
# only for ElementSW -> ONTAP snapmirroring, validate if ElementSW SDK is available
# if source_hostname is present, peer_options is absent
# sleep for a maximum of X seconds (with a default of 5 minutes), in 30 seconds increments
# sleep for a maximum of X seconds (with a default of 5 minutes), in 10 seconds increments
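The wait pattern these two comments describe (sleep in fixed increments up to a timeout) can be sketched as follows; `wait_for_status` and its `check_done` callable are hypothetical names, not taken from the module:

```python
import time

def wait_for_status(check_done, timeout=300, increment=30):
    """Poll check_done() until it returns True or the timeout expires.

    Sleeps in fixed increments between checks (30s for the quiesce wait,
    10s for the other waits, per the comments above); the 300s default
    matches the 5-minute default mentioned there.
    """
    elapsed = 0
    while elapsed < timeout:
        if check_done():
            return True
        time.sleep(increment)
        elapsed += increment
    # one last check after the timeout has elapsed
    return check_done()
```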
# do a get volume to check if volume exists or not
# if source_volume is present, then source_vserver is also guaranteed to be present
# if there is no ':' in path, it returns the path unchanged
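The behavior noted above matches `str.rpartition`: when the separator is absent, the whole string ends up in a single element. A minimal sketch, with a hypothetical helper name:

```python
def split_source_path(source_path):
    """Split an optional 'host:' prefix from a path like '10.0.0.1:vol1'.

    If there is no ':' in the path, the whole string is returned as the
    path and the host part is None.
    """
    host, sep, path = source_path.rpartition(':')
    if not sep:
        # no ':' found - rpartition puts everything in the last element
        return None, source_path
    return host, path
```

For example, `split_source_path('10.0.0.1:vol1')` gives `('10.0.0.1', 'vol1')`, while `split_source_path('vol1')` gives `(None, 'vol1')`.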
# REST defaults to Asynchronous
# except for consistency groups
# testing for True
# Quiesce and Break at destination
# if source is ONTAP, release the destination at source cluster
# if the source_hostname is unknown, do not run snapmirror_release
# Release at source
# Note: REST removes the source from the destination, so there is no need to release from the source with REST
# Delete at destination
# checking if quiesce was passed successfully
# record error but proceed with deletion
# with a REST call, there is no need to run release
# Operation already in progress, let's wait for it to end
# only send it when True
# if source_endpoint or destination_endpoint is present, both are required
# then sanitize inputs to support new style
# sanitize inputs
# options requiring 9.7 or better, and REST
# options requiring 9.8 or better, and REST
# fill in old style parameters
# use new structures for source and destination endpoints
# skipping svm for now, as it is not accepted and not needed with path
# for old, new in (('path', 'path'), ('vserver', 'svm'), ('cluster', 'cluster')):
# validate source_path
# split IP address from source_path
# Use the POST /api/snapmirror/relationships REST API call with the property "restore=true" to create the SnapMirror restore relationship
# Use the POST /api/snapmirror/relationships/{relationship.uuid}/transfers REST API call to start the restore transfer on the SnapMirror relationship
# run these API calls on the source cluster
# if the source_hostname is unknown, do not run snapmirror_restore
# REST API call to start the restore transfer on the SnapMirror relationship
# this may be called after a create including restore, so we may need to fetch the data
# check_param gets the value if it's given in another format, like destination_endpoint etc.
# REST API supports only Extended Data Protection (XDP) SnapMirror relationship
# initialized to avoid a KeyError on name
# if the field is absent, assume ""
# policies not associated to a SVM
# If current is not None, it means the state is present otherwise we would take a delete action
# add initialize or resume action as needed
# add resync or check_for_update action as needed
# check for initialize
# set changed explicitly for initialize
# resume when state is quiesced
# set changed explicitly for resume
# resync when state is broken-off
# set changed explicitly for resync
# Update when create is called again, or modify is being called
# take things at face value
# with ONTAP -> ONTAP, vserver names can be aliased
# match either the local name or the remote name
# ONTAP automatically converts DP to XDP
# source is ElementSW
# 9.9.0 and earlier versions return rest-api, convert it to rest_api.
# changing type is not supported
# (c) 2019-2024, NetApp, Inc
# only fields that are readable through a GET request
# additional fields for POST/PATCH
# Error 15661 denotes an object store not being found.
# Get LDAP client configuration details
# Define config details structure
# Get list of element children
# returns empty list if element does not exist
# LDAP servers NaElement
# preferred_ad_servers
# Try to create LDAP configuration
# LDAP_servers
# Simple attributes
# Try to modify LDAP client
# state is present, either servers or ad_domain is required
# REST retrieves only the data SVM LDAP configuration; error out if a non-data SVM is used.
# (c) 2018 Piotr Olczak <piotr.olczak@redhat.com>
# Just make sure it is empty
# name is a key, and REST does not allow changing it
# it should match anyway, but REST may prepend the domain name
# GET returns the proxy url, if configured, along with the port number
# based on the existing config, append the port number to the input url to
# maintain idempotency while modifying config
# strip trailing '/' and extract the port no
# port is not mentioned in input proxy URL
# if port is present in current url configured then add to the input url
# report Ansible and our collection versions
# check import netapp-lib
# check zapi connection errors only if import successful
# check rest connection errors
# Validation of input parameters
# Parseable dict output
# Raw XML output
# Generate stdout_lines list
# Generate stdout_lines_filter_list
# get rid of extra quotes "'1'", but maybe "u'1'" or "b'1'"
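Stripping the extra quotes described above, with their optional `u` or `b` repr prefix, fits in a single regular expression; the helper name is illustrative, not the module's:

```python
import re

def strip_extra_quotes(value):
    """Turn "'1'", "u'1'" or "b'1'" into plain "1".

    The optional u/b prefix comes from Python 2/3 repr() output;
    values without surrounding quotes are returned unchanged.
    """
    match = re.fullmatch(r"[ub]?'(.*)'", value)
    return match.group(1) if match else value
```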
# ignore client_match as ZAPI query is string based and preserves order
# skipping int keys to not include rule index in query as we're matching on attributes
# convert client_match list to comma-separated string
# ignore options that are not relevant
# If no rule matches the query, return None
# If rule index passed in doesn't exist, return None
# force a 'rename' to set the index
# for list, do an initial query based on first element
# if rule_index is not None, see if we need to re-index an existing rule
# the existing rule may be indexed by from_rule_index or we can match the attributes
# create export policy (if policy doesn't exist) only when changed=True
# nameservice-sources will not be present in the result if the value is '-'
# if database type is already deleted by ZAPI call, REST will not have the database key.
# setting it to [] helps to set the value in the REST patch call.
# (c) 2018-2023, NetApp, Inc
# With Ansible 2.9, python 2.6 reports a SyntaxError
# not defined in earlier versions
# list of tuples - original licenses (license_code or NLF contents), and dict of NLF contents (empty dict for legacy codes)
# when using REST, just keep a list as returned by GET to use with deepdiff
# Set up REST API
# By default, the GET method only returns licensed packages.
# To retrieve state details for all available packages, the query below is used.
# Error 15661 - Object not found
# since this is a query, we need to specify state, or only active licenses are removed
# request nested errors
# when a new license is added, it seems REST may not report all licenses
# wait for things to stabilize
# Ansible converts double quotes into single quotes if the input is python-like
# and we can't use json loads with single quotes!
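A common workaround for single-quoted, Python-like input is `ast.literal_eval`, which accepts both quote styles; this is a sketch of that idea, not necessarily the fix the module actually uses:

```python
import ast
import json

def parse_dict_input(value):
    """Parse a dict passed as text, tolerating single quotes.

    json.loads() fails on "{'a': 1}" because JSON requires double
    quotes, so fall back to ast.literal_eval, which safely evaluates
    Python literals without executing arbitrary code.
    """
    try:
        return json.loads(value)
    except ValueError:
        return ast.literal_eval(value)
```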
# For an NLF license, extract fields, to later collect serial number and bundle name (product)
# nothing is installed
# force a delete
# V2 and V1 formats
# product is required for delete
# if serial number is not present in the NLF, we could use a module parameter
# Add / Update licenses.
# delete
# add or update
# execute create
# not able to detect that a new license is required until we try to install it.
# delete actions
# initiator_group_type (protocol) cannot be changed after create
# use the new structure, a list of dict with name/comment as keys.
# sanitize WWNs and convert to lowercase for idempotency
# keep a list of names as it is convenient for add and remove computations
# we may need to remove existing initiators
# we may need to remove existing igroups
# for initiators, there is no uuid, so we're using name as the key
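The add and remove computations over names that the comments above refer to come down to a set difference; a minimal sketch with a hypothetical helper name:

```python
def compute_add_remove(current_names, desired_names):
    """Return (to_add, to_remove) between current and desired members.

    Names are used as keys, as noted above for initiators which have
    no uuid.  Sorting keeps the result deterministic for comparisons.
    """
    current, desired = set(current_names), set(desired_names)
    return sorted(desired - current), sorted(current - desired)
```

For example, `compute_add_remove(['a', 'b'], ['b', 'c'])` yields `(['c'], ['a'])`.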
# place holder, not used for ZAPI
# don't add if initiator_names/igroups is empty string
# we may have an empty list, ignore it
# for python 2.6
# we need to remove everything first
# comments are not supported in ZAPI, we already checked for that in validate changes
# validate_modify ensured modify is empty with ZAPI
# a change in name is handled in rename for ZAPI, but REST can use modify
# Run the task for all ports in the list of 'ports'
# create by renaming existing snapshot, if it exists
# unlike REST, ipspace is not a mandatory field for ZAPI.
# check if broadcast_domain exists
# all ports should be removed to delete broadcast domain in rest.
# update current ipspace as it is required when modifying ports later.
# split already handled ipspace and ports.
# if we want to remove all ports, simply delete the broadcast domain.
# rename broadcast domain.
# if desired ports with uuid are present, then return only the ports to add or move.
# either create new domain or split domain, also ipspace can be modified.
# check for exact match of ports only if from_name present.
# rename with no change in ports.
# create new broadcast domain with desired ports (REST will move them over from the other domain if necessary)
# REST API is required
# check version
# vserver is empty for cluster
# sourcery skip: dict-comprehension
# error code 4 is empty table
# TODO: add CIFS options, and S3
# ONTAP documentation uses C.UTF-8, but it is actually stored as c.utf_8.
# with REST, to force synchronous operations
# with REST, to know which protocols to look for
# root volume not supported with rest api
# python 2.6 does not support dict comprehension with k: v
# using old semantics, anything not present is disallowed
# so that we can compare UUIDs while using a more friendly name in the user interface
# force an entry to enable modify
# ignore name, as only certificate UUID is supported in svm/svms/uuid/web
# REST returns allowed: True/False with recent versions, and a list of protocols in allowed_protocols for older versions
# protocols are not present when the vserver is stopped
# earlier ONTAP versions
# certificate is available starting with 9.7 and is deprecated with 9.10.1.
# we don't use certificate with 9.7 as name is only supported with 9.8 in /security/certificates
# only collect the info if the user wants to configure the web service, and ONTAP supports it
# 9.6 to 9.8 do not support max_volumes for svm/svms, using private/cli
# add allowed-protocols, aggr-list, max_volume after creation
# since vserver-create doesn't allow these attributes during creation
# python 2.6 does not support dict comprehension {k: v for ...}
# admin_state is only supported in modify
# Ansible sets unset suboptions to None
# REST does not allow modifying this directly
# allowed is not supported in earlier REST versions
# if allowed is not set, retrieve current value
# use REST CLI for older versions of ONTAP
# add max_volumes and update allowed protocols after creation for older ONTAP versions
# REST reports an error if we modify the name and something else at the same time
# certificate is a deprecated field for 9.10.1, only use it for 9.8 and 9.9
# use REST CLI for max_volumes and allowed protocols with older ONTAP versions
# we don't know if the service is already started or not, link will tell us
# API only accepts a UUID
# create by renaming existing SVM
# If rename is True, cd_action is None, but modify could be true or false.
# check if query returns the expected export-policy
# Since there is no modify or delete, we will return no change
# create only
# self.debug['got'] = 'empty'     # uncomment to enable collecting data
# use_exact_size defaults to true, but is not supported with REST. To get around this, we ignore the variable in REST.
# ASA r2 is only supported from ONTAP releases 9.16.0x onwards
# set default value for ZAPI only supported options.
# REST API for application/applications if needed
# {'luns': [{'path': '/vol/ansibleLUN/ansibleLUN_1', ...
# default value for create, may be overridden below
# os_type and qos are not supported in 9.7 for the SAN application_component
# only one of them can be present at most
# Should we support new_igroups?
# It may raise idempotency issues if the REST call fails if the igroup already exists.
# And we already have na_ontap_igroups.
# we expect value to be a dict, but maybe an empty dict
# only used for create
# not supported for modify operation, but required at application component level for create
# these cannot be present when using PATCH
# dummy modify, so that we don't fill in the body
# Error 9042 denotes the new LUN size being the same as the
# old LUN size. This happens when there's barely any difference
# in the two sizes. For example, from 8388608 bytes to
# 8194304 bytes. This should go away if/when the default size
# requested/reported to/from the controller is changed to a
# larger unit (MB/GB/TB).
# The same ZAPI is used for both QOS attributes
# fix total_size attribute, report error if total_size is missing (or size is missing)
# fix total_size attribute
# can't change total_size, let's ignore it
# * 100 to get a percentage, and .0 to force float conversion
# we can't increase, but we can't say it is a problem, as the size is already bigger!
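A percentage-based size comparison like the one described above can be sketched as follows; the 10% threshold is illustrative, not a value taken from the module:

```python
def size_change_ratio(current_size, desired_size):
    """Return the relative size difference as a percentage.

    * 100 turns the ratio into a percentage, and using 100.0 forces
    float division (relevant on Python 2, harmless on Python 3).
    """
    return abs(desired_size - current_size) * 100.0 / current_size

def needs_resize(current_size, desired_size, threshold_percent=10):
    # ignore differences below the threshold, as the comments describe
    return size_change_ratio(current_size, desired_size) >= threshold_percent
```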
# TODO: Check that path and name are the same in Rest
# if it doesn't start with a slash, we will use the flexvol name and/or qtree name to build the path
# find and validate app changes
# save application name, as it is overridden in the flattening operation
# there is an issue with total_size not reflecting the real total_size, and some additional overhead
# will be updated below as it is mutable
# fixed copy
# flatten
# app template
# app component
# if component name does not match, assume a change at LUN level
# restore app name
# ready to compare, except for a quirk in size handling
# preserve change state before calling modify in case an ignorable total_size change is the only change
# check if target volume already exists
# volume already exists, but not as part of this application
# default name already in use, ask user to clarify intent
# actions at LUN level
# For LUNs created using a SAN application, we're getting lun paths from the backing storage
# create by renaming existing LUN, if it exists
# reset warning as we found a match
# we already handled rename if required
# ignore LUN not found, as name can be a group name
# modify at LUN level, as app modify does not set some LUN level options (eg space_reserve)
# no application template, fall back to LUN only
# space_reserve will be set to True
# To match input parameters, lun_modify is recomputed.
# Ensure that size was actually changed. Please
# read notes in 'resize_lun' function for details.
# size may not have changed
# flatten {'account': {'name': 'some_name'}} into {'account': 'some_name'} to match input parameters
# not supported in 2.6
# if not present, REST API reports 502 Server Error: Proxy Error for url
# always create, by keeping current as None
# force an entry as REST does not return anything if no comment was set
# there is exactly 1 record for delete
# and 2 or more records for delete_all
# there is exactly 1 record for modify
# if disk add operation is in progress, cannot offline aggregate, retry few times.
# boolean to str
# round off time_out
# online aggregate first, so disk can be added after online.
# modify tags
# add disk before taking aggregate offline.
# offline aggregate after adding additional disks.
# make sure we found a match
# let's see if we need to add disks
# we expect a list of tuples (disk_name, plex_name), if there is a mirror, we should have 2 plexes
# let's get a list of disks for each plex
# find who is who
# Now that we know what is which, find what needs to be removed (error), and what needs to be added
# finally, what's to be added
# create by renaming existing aggregate
# Interestingly, REST expects True/False in body, but 'true'/'false' in query
# I guess it's because we're using json in the body
# query = {'disk_size': disk_size} if disk_size else None
# report an error if disk_size_with_unit is not valid
# additional validations that are done at runtime
# offline aggregate after create.
# cached value to limit number of API calls.
# REST supports both netmask and cidr for ipv4 but cidr only for ipv6.
# ZAPI supports only netmask.
# none is an allowed value with ZAPI
# in precluster mode, network APIs are not available!
# since our queries included a '*', we expect multiple records
# an exact match is <home_node>_<name> or <name>.
# is there an exact match on name only?
# now matching with home_port as a prefix
# look for all known nodes
# fix name, otherwise we'll attempt a rename :(
# ONTAP renames cluster interfaces, use a * to find them
# Note: broadcast_domain is CreateOnly
# home_node/home_port not present for FC on ONTAP 9.7.
# if interface_attributes.get_child_by_name('failover-group'):
# service_policy is not supported for FC interfaces
# ignore role for FC interface
# We normally create using home_port, and migrate to current.
# But for FC, home_port is not supported on 9.7 or earlier!
# LOCATION
# IP
# home_node/current_node supported only in ip interfaces.
# don't add node location when port structure is already present
# REST only supports DATA SVMs
# if role is intercluster, protocol cannot be specified
# validate if mandatory parameters are present for create or modify
# python 2.6 syntax
# running validation twice, as interface_type dictates the second set of requirements
# force the value of fail_if_subnet_conflicts as it is writeOnly
# home_port is not supported with 9.7
# msg: "Error Creating interface ansible_interface: NetApp API failed. Reason - 17:A LIF with the same name already exists"
# only for fc interfaces disable is required before delete.
# curiously, we sometimes need to send the request twice (well, always in my experience)
# current_node and current_port don't exist in modify, only in migrate, so we need to remove them from the list
# if home node has been changed we need to migrate the interface
# like with REST, the migration may not be completed on the first try!
# just blindly do it twice.
# create by renaming existing interface
# self.parameters['interface_name'] may be overridden in self.get_interface, so save a copy
# fc interface supports only home_port and port in POST/PATCH.
# add home_port and current_port in modify for home_node and current_node respectively to form home_port/port.
# the above will modify home_node of the fc interface; after modify, if current_node also requires an update, it will error out for the fc interface.
# migrate not supported for fc interface.
# if trying to modify both home_port and current_port of an FC interface and they are equal, set migrate_body to None
# build the payloads even in check_mode, to perform validations
# needed for migrate after creation
# interface type returned in REST but not in ZAPI.
# for 9.7 or earlier, allow modify current node/port for fc interface.
# sid is a str or a number; a string is returned back unless you pass a number, in which case an int is returned
# If the path is passed as vol/vol1/lun1 it will be converted to lun1 for asa r2 systems.
# proxy_url may contain a password: user:password@url
# present or absent requires modifying state to enabled or disabled
# zapi invoke successful
# change in password, it can be a false positive as password is replaced with ********* by ONTAP
# password was found in proxy_url, sanitize it, use something different than ZAPI *********
# allow for passing in any additional rest api fields
# If the API doesn't exist (using an older system), we don't want to fail the task.
# if Aggr recommender can't make a recommendation, it will fail with the following error code, don't fail the task.
# Do not fail on error
# Fail the module if error occurs from REST APIs call
# Use 'DACL - ACE' as a marker for the start of the list of DACLS in the descriptor.
# The '-' marker is the start of the DACL, the '-0x' marker is the end of the DACL.
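The marker-driven scan these two comments describe can be sketched as below; the exact descriptor text format is an assumption inferred from the comments, so real CLI output may differ:

```python
def extract_dacls(descriptor_lines):
    """Collect DACL entries from security-descriptor CLI output.

    'DACL - ACE' marks the start of the DACL list; within it, a line
    starting with '-' begins an entry and a line starting with '-0x'
    ends it (format assumed from the comments above).
    """
    dacls, in_list, current = [], False, []
    for line in descriptor_lines:
        line = line.strip()
        if 'DACL - ACE' in line:
            in_list = True
        elif not in_list:
            continue
        elif line.startswith('-0x'):
            # end-of-DACL marker: close out the current entry
            current.append(line)
            dacls.append(' '.join(current))
            current = []
        elif line.startswith('-'):
            # start-of-DACL marker: begin a new entry
            current = [line]
        elif current:
            current.append(line)
    return dacls
```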
# TODO: Handle errors that are not errors
# Add REST API names as their info version, and make sure we don't add a duplicate
# creates a new list of dicts
# mutates the existing dicts
# Verify that a supported subset was passed
# Get the full set of records if a next link is found in subset_info for the specified subset
# Update the subset info for the specified subset
# Defining gather_subset and appropriate api_call
# If all in subset list, get the information of all subsets
# If multiple fields specified to return, convert list to string
# Restrict gather subsets to one subset if fields section is list_of_fields
# If a user requests a subset that their version of ONTAP does not support give them a warning (but don't fail)
# remove subset so info doesn't fail for a bad subset
# Delete throws a 'retry after some time' error during the first run by default, hence retry after a delay.
# No other fields can be specified when enabled is specified for modify
# This method will be called to modify fields other than enabled
# validate that list options do not contain '' with REST.
# file_ext_to_include cannot be empty in either ZAPI or REST.
# map filters options to rest equivalent options.
# set default value for is_scan_mandatory and scan_files_with_no_ext if not set.
# form filters from REST options only_execute_access and scan_readonly_volumes.
# '' is an invalid value.
# enable/disable policy handled in single modify api with REST.
# policy cannot be deleted unless its disabled, so disable it before delete.
# by default a newly created policy is in disabled state; enable it if policy_status is set in ZAPI.
# REST enables the policy on create itself.
# if rest and use_rest: auto and name is present, revert to zapi
# if rest and use_rest: always and name is present, throw error.
# if rest and ports is not present, throw error as ports is a required field with REST
# if ports are in different LAGs and state is absent, return None
# for a LAG, rename is equivalent to adding/removing ports from an existing LAG.
# if we could not find a lag, or only a lag with a partial match, do a new query using from_lag_ports.
# if we have a partial match with an existing LAG, we will update the ports.
# with rest, current will have the port details
# While using REST, fetch the name of the created LAG and return as response in result
# TODO: Remove this method and import snapshot module and
# call get after re-factoring __init__ across all the modules
# we aren't importing now, since __init__ does a lot of Ansible setup
# the existing current index or from_index to be swapped
# different directions can have same index
# Cannot swap entries which have hostname or address configured.
# Delete and recreate the new entry at the specified position.
# Throws error when trying to swap with non existing index
# pattern and replacement are required when creating name mappings.
# if policy_type is 'scheduled'
# if duration not set for a policy, ZAPI returns "-", whereas REST returns 0.
# "-" is an invalid value in REST, set to 0 if REST.
# REST API should be used for ONTAP 9.6 or higher
# make sure app entries are not duplicated
# REST prefers certificate to cert
# REST get always returns 'second_authentication_method'
# keep them sorted for comparison with current
# actual conversion
# replace "none" values with None for comparison
# new read-only attributes in 9.14 onwards, breaks idempotency when present
# Error 16034 denotes a user not being found.
# Error 16043 denotes the user existing, but the application missing.
# we can't change sec_method in place, a tuple is not mutable
# with 'auto' we ignore existing apps that were not asked for
# with auto, only a single method is supported
# find if there is an error for service processor application value
# update value as per ONTAP version support
# post again and throw first error in case of an error
# non-sp errors thrown or initial sp errors
# if the password is reused, assume idempotency but show a warning
# self.server.set_vserver(self.parameters['vserver'])
# if the user give the same password, instead of returning an error, return ok
# to change roles, we need at least one app
# REST does not return locked if password is not set
# lock/unlock actions require password to be set
# Cluster vserver and data vserver use different REST API.
# REST API should be used for ONTAP 9.6 or higher, ZAPI for lower version
# with 9.13, using scope=cluster with POST on 'name-services/dns' does not work:
# "svm.uuid" is a required field
# scope requires 9.9, so revert to cluster API
# omit scope as vserver may be a cluster vserver
# There is a chance we are working at the cluster level
# 15661 is object not found
# default value for force is false in ZAPI.
# default value for capacity_shared is False in REST.
# if block_size is not to be modified then remove it from the params
# to avoid error with block_size option during modification of other adaptive qos options
# one of the fixed throughput options is required to create a qos_policy.
# error if neither fixed_qos_options nor adaptive_qos_options is present when creating a qos policy.
# create policy by renaming an existing one
# Setup REST API.
# don't add if ports is None
# if type is not set, assign current type
# to avoid an incompatible network interface error when modifying a portset.
# add current lifs' type and uuid to self.lifs for modify and delete purposes.
# Default value is False if 'force' not in parameters.
# get type and uuid of lifs which are not present in current.
# REST handles create and add ports in create api call itself.
# Error 13013 denotes fcp service already started.
# There can only be 1 FCP per vserver. If true, one is set up, else one isn't set up
# this is a mess i don't want to touch...
# return cluster image version
# return empty dict on error to satisfy package delete upon image update
# return cluster image update progress details
# Error 18408 denotes Package image with the same name already exists
# return cluster image download progress details
# return cluster validation report
# set comprehension not supported on 2.6
# mixed set, need to update
# only update if versions differ
# delete package once update is completed
# assume in_progress if dict is empty
# record can be empty or these keys may not be present when validation is still in progress
# fetch validation results
# success: delete and return
# report error
# job not found
# connection errors
# module keys to REST keys
# With ONTAP 9.8, the job persists until the node is rebooted
# With ONTAP 9.9, the job returns quickly
# floor division
# TODO: cluster image update only works for HA configurations.
# check if node image update can be used for other cases.
# check if broadcast domain exists
# execute delete
# REST returns an error if the ipsec ca-certificate doesn't exist.
# config doesn't exist
# need to recreate as protocol can't be changed
# response will contain the job ID created by the post.
# check if query returns the expected subnet
# required parameters
# creating new subnet by renaming
# patch takes care of renaming subnet too.
# handled in rename
# If rename is True, cd_action is None but modify could be true
# volume UUID after quota rule creation, used for on or off quota status
# a valid qtree name for ZAPI is /vol/vol_name/qtree_name, while for REST it is qtree_name.
# converted blank parameter to * as shown in vsim
# Bypass a potential issue in ZAPI when policy is not set in the query
# https://github.com/ansible-collections/netapp.ontap/issues/4
# BURT1076601 Loop detected in next() for table quota_rules_zapi
# if quota-target is '*', the query treats it as a wildcard. But a blank entry is represented as '*'.
# Hence the need to loop through all records to find a match.
# ignore error on quota-on, as all rules have been deleted
# set qtree name in query for type user and group if not ''.
# If type is user, the get quota rules API returns users whose names start with the input target user names.
# Example of users list in a record:
# users: [{'name': 'quota_user'}], users: [{'name': 'quota_user'}, {'name': 'quota'}]
# along with user/group, qtree should also match to get current quota.
# for type user/group, if qtree is not set in create, it's not returned in GET; make desired qtree None if ''.
# for type tree, desired quota_target should match current tree.
# REST allows resetting quota limits using '-'; convert None to '-' to avoid an idempotency issue.
# code: 5308568 requires quota to be disabled/enabled to take effect.
# code: 5308571 - rule created, but to make it active reinitialize quota.
# reinitialize will disable/enable quota.
# fetch volume uuid as response will be None if above code error occurs.
# skip fetching volume uuid from response if volume_uuid already populated.
# delete operation succeeded, but reinitialize is required.
# code: 5308569 requires quota to be disabled/enabled to take effect.
# code: 5308572 error occurs when trying to delete last rule.
# limits are modified but internal error, require reinitialize quota.
# if 'set_quota_status' == True in create, sometimes there is a delay in status update from 'initializing' -> 'on'.
# if quota_status == 'on' and options (set_quota_status == True and activate_quota_on_change == 'resize'),
# sometimes there is a delay in status update from 'resizing' -> 'on'
# status switch interval
# if there is a warn message and quota is not reinitialized, throw a warning to reinitialize in REST.
# force kb as the default unit for REST
# conversion to KB
# conversion to Bytes
# Rounding off the converted bytes
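The conversions and rounding mentioned above can be sketched with a multiplier table; the helper name and the binary (1024-based) multipliers are assumptions, not taken from the module:

```python
UNIT_MULTIPLIERS = {'b': 1, 'kb': 1024, 'mb': 1024 ** 2, 'gb': 1024 ** 3, 'tb': 1024 ** 4}

def size_in_kb(size, unit):
    """Convert a size with a unit to whole kilobytes.

    The size is first converted to bytes and rounded off, then floored
    to KB, since KB is forced as the default unit for REST.
    """
    size_bytes = int(round(size * UNIT_MULTIPLIERS[unit.lower()]))
    return size_bytes // 1024
```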
# to support very infrequent form-data format
# we exhausted the dictionary
# TODO, log usage
# uuid should be provided in the API
# If the path is passed as vol/vol1/lun_map it will be converted to lun_map for asa r2 systems.
# build the lun query
# find lun using query
# why do we do this? it is never used in the module, and has nothing to do with lun_map (it probably should be in
# the lun module)
# REST API for application/applications if needed - will report an error when REST is not supported
# consistency checks
# tiering policy is duplicated, make sure values are matching
# aggregate_name will force a move if present
# flatten component list
# extract values from volume record
# The keys are used to index a result dictionary, values are read from a ZAPI object indexed by key_list.
# If required is True, an error is reported if a key in key_list is not found.
# We may have observed cases where the record is incomplete as the volume is being created, so it may be better to ignore missing keys
# I'm not sure there is much value in omitnone, but it preserves backward compatibility
# If omitnone is absent or False, a None value is recorded, if True, the key is not set
# style is not present if the volume is still offline or of type: dp
# snapshot_auto_delete options
# 1 is the maximum value for nas
# scale_out should be absent or set to True for FlexCache
# we expect value to be a list of dicts, with maybe some empty entries
# get_volume may receive incomplete data as the volume is being created
# MDV volume will fail on a move, but will work using the REST CLI pass through
# vol move start -volume MDV_CRS_d6b0b313ff5611e9837100a098544e51_A -destination-aggregate data_a3 -vserver wmc66-a
# if REST isn't available fail with the original error
# if REST exists let's try moving using the passthrough CLI
# We have 5 states that can be returned.
# warning and healthy are states where the move is still going, so we don't need to do anything for those.
# success - volume move is completed in REST.
# ZAPI returns failed or alert, REST returns failed or aborted.
# REST returns running or initializing, ZAPI returns running if encryption in progress.
# If encryprion is completed, REST do have encryption status message.
# reset fail count to 0
# Desired state is online, set up ZAPI APIs accordingly
# Desired state is offline, set up ZAPI APIs accordingly
# Unmount before offline
# retrieve existing in parent, or create a new one
# Volume-attributes is split into 25 sub-categories
# volume-inode-attributes
# volume-space-attributes
# volume-snapshot-attributes
# volume-export-attributes
# volume-security-attributes
# volume-performance-attributes
# volume-qos-attributes
# volume-comp-aggr-attributes
# volume-state-attributes
# volume-dr-protection-attributes
# volume-id-attributes
# End of Volume-attributes sub attributes
# handle error if modify space, policy, or unix-permissions parameter fails
# snaplock requires volume in unmount state.
# Rest didn't support snapshot_auto_delete prior to ONTAP 9.13.1; for supported ONTAP versions,
# modification for this parameter is handled by calling volume_modify_attributes function.
# don't mount or unmount when offline
# keep it last, as it may take some time
# handle change in encryption as part of the move
# allow for encrypt/decrypt only if encrypt present in attributes.
# Not found
# If running as cluster admin, the job is owned by cluster vserver
# rather than the target vserver.
# ZAPI requires compression to be set for inline-compression
# Error 40043 denotes an Operation has already been enabled.
# Don't error out if efficiency settings cannot be read.  We'll fail if they need to be set.
# ignore a less than XX% difference
# ignore change in size immediately after a create:
# inodes are not set in create
# prechecks when computing modify
# prechecks before computing modify
# verify type is the only option when not enabling snaplock compliance or enterprise
# verify type is not used before 9.10.1, or allow non_snaplock as this is the default
# don't change if the values are the same
# can't change permissions if not online
# snapshot_auto_delete's value is a dict, get_modified_attributes function doesn't support dict as value.
# ignore small changes in volume size or inode maximum by adjusting self.parameters['size'] or self.parameters['max_files']
# offline volume last
# volume-rename-async and volume-rename are the same in rest
# With ZAPI you had to give both the old and the new name to rename a volume.
# With REST you need only the old UUID and the new name
# Rest does not have force_restore or preserve_lun_id
# Zapi's space-guarantee and space-reserve are the same thing in Rest
# TODO: Check to see if there is a difference in Rest between flexgroup or not; might need to throw an error
# only one of these 2 options for QOS policy can be defined at most
# not too sure why we don't always set them
# one good reason are fields that are not supported on all releases
# type is not allowed in patch, and we already prevented any change in type
# volume-encryption-conversion-start
# Set the "encryption.enabled" field to "true" to start the encryption conversion operation.
# For variables that have been merged together we should fail before we do anything
# TODO FIX THIS!!!! ZAPI would only return a single aggr, REST can return more than 1.
# For now I'm going to hard-code this, but we need a way to show all aggrs
# if analytics.state is initializing it will be ON once completed.
# this might need some additional logic
# Rest returns an int while Zapi returns a string; force Rest to be a string
# The default setting for access_time_enabled and snapshot_directory_access_enabled is true
# report an error if the vserver does not exist (it can be also be a cluster or node vserver with REST)
# create by renaming
# create by rehosting
# update by restoring
# let's allow restoring after a rename or rehost
# Ignoring modify after a rehost, as we can't read the volume properties on the remote volume
# or maybe we could, using a cluster ZAPI, but since ZAPI is going away, is it worth it?
# ZAPI decrypts volume using volume move api and aggregate name is required.
# restore current change state, as we ignore this
# rehost, snapshot_restore and modify actions require the volume state to be online.
# ignore options that require the volume to be online.
# rename can be done if the volume is offline.
# always online the volume first, before other changes.
# rehost, snapshot_restore and modify require the volume to be in the online state.
# when moving to online, include parameters that get does not return when volume is offline
# REST DOES NOT have a volume-rehost equivalent
# if we create using ZAPI and modify only options are set (snapdir_access or atime_update), we need to run a modify.
# The modify also takes care of efficiency (sis) parameters and snapshot_auto_delete.
# If we create using REST application, some options are not available, we may need to run a modify.
# If we create using REST and modify only options are set (snapdir_access or atime_update or snapshot_auto_delete), we need to run a modify.
# For modify only options to be set after creation wait_for_completion needs to be set.
# volume should be online for modify.
# restore this, as set_modify_dict could set it to False
# Get LDAP configuration details
# Try to modify LDAP
# Rest API objects
# else for target peer.
# return vserver peer details
# required for delete and accept
# if a remote peer exists, get the remote cluster name, else the local cluster name
# peer-vserver -> remote (source vserver is provided)
# vserver -> local (destination vserver is provided)
# ignore RBAC issue with FSx - BURT1467620 (fixed in 9.11.0) - GitHub #45
# peer cluster may have multiple peer relationships
# filter by the created relationship uuid
# required local_peer_vserver_uuid to delete the peer relationship
# accept only if the peer relationship is on a remote cluster
# we only have the svm name, we need to get the uuid for the svm
# an API path has '/' in it; validate it is present for earlier ONTAP versions.
# Error 16031 denotes a role not being found.
# Error 16039 denotes command directory not found.
# there is no direct modify for role.
# if the path is not in privilege then it needs to be added
# removing one of relevant commands will also remove all other commands in group.
# skip if entry does not exist error occurs.
# if the desired state specifies an empty-quote query and the current query is None, set the desired query to None.
# otherwise na_helper.get_modified_attributes will detect a change.
# for REST, query is part of a tuple in privileges list.
# 'delete': ['access', 'access_control', 'ignore_paths', 'propagation_mode'],
# warnings will be added to the info results, if any
# thanks to coreywan (https://github.com/ansible/ansible/pull/47016)
# for starting this
# min_version identifies the ontapi version which supports this ZAPI
# use 0 if it is supported since 9.1
# supported in ONTAP 9.3 and onwards
# supported in ONTAP 9.4 and onwards
# Alpha Order
# the preferred key is node_name:aggregate_name
# but node is not present with MCC
# preferred key is <vserver>:<domain>:<cifs-server>
# alternate key is <vserver>:<domain-workgroup>:<cifs-server>
# don't use key_fields, as we cannot build a key with optional key_fields
# without a key, we'll get a list of dictionaries
# use vserver tunneling if vserver is present (not None)
# Can val be nested?
# for missing_vserver_api_error, the API is already in error_message
# flatten the list as only 1 element is expected
# aggregate a list of dictionaries into a single dict
# make sure we only have dicts and no key duplication
# abort if we don't see a dict - not sure this can happen with ZAPI
# no duplicates!
# the keys may have been converted, or not
# dictionaries are mutable
# don't summarize errors
# https://stackoverflow.com/questions/14962485/finding-a-key-recursively-in-a-dictionary
# allows for a key not to be present
# origins[0]
# ignored with REST
# sanitize the dictionary, as Ansible fills everything with None values
# this is an artificial key, to match the REST list of dict structure
# name cannot be set, though swagger example shows it
# asynchronous call, assuming success!
# There may be a bug in ONTAP.  If return_timeout is >= 15, the call fails with uuid not found!
# With 5, a job is queued, and completes with success.  With a big enough value, no job is
# queued, and the API returns in around 15 seconds with a not found error.
# ignore modify operation if the only key to modify is 'name'
# force a prepopulate action
# mount first, as this is required for prepopulate to succeed (or fail for unmount)
# Copyright (c) 2021, Laurent Nicolas <laurentn@netapp.com>
# without return_timeout, REST returns immediately with a 202 and a job link
# with return_timeout, REST returns quickly with a 200 and a job link
# see delete_async for async and sync operations and status codes
# limit the polling interval to something between 5 seconds and 60 seconds
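The clamp described above takes a couple of lines (the function name is illustrative, not the module's actual helper):

```python
def clamp_polling_interval(seconds):
    # keep the polling interval between 5 and 60 seconds, as noted above
    return max(5, min(60, seconds))
```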
# cluster does not use uuid or name, and query based PATCH does not use UUID (for restit)
# query based DELETE does not use UUID (for restit)
# Copyright (c) 2020, Laurent Nicolas <laurentn@netapp.com>
# vserver aggr-list can be empty by default
# allowed-protocols is not empty for data SVM, but is for node SVM
# Copyright (c) 2017, Sumit Kumar <sumit4@netapp.com>
# Copyright (c) 2017, Michael Price <michael.price@netapp.com>
# Copyright (c) 2017-2025, NetApp, Inc
# Management GUI displays 1024 ** 3 as 1.1 GB, thus use 1000.
# This is used for Zapi only Modules.
# This is used for Zapi + REST Modules.
# This is used for REST only Modules.
# get rid of default values, as we'll use source values
# when true, fail if response.content in not empty and is not valid json
# when true, append ZAPI and REST requests/responses to /tmp/ontap_zapi.txt
# when true, headers are not redacted in send requests
# when true, auth_args are not redacted in send requests
# use ZAPI wrapper to send Authorization header
# unicode values, 8 is backspace
# for better error reporting
# ONTAP bug if too big?
# for SVM, which protocols can be allowed
# percentage of increase/decrease required to trigger a modify action
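A minimal sketch of such a percentage-based tolerance check (name and exact formula are assumptions; the module may compute it differently):

```python
def within_tolerance(current, desired, pct):
    # treat a change smaller than pct percent of the current value as no change
    if current == 0:
        return desired == 0
    return abs(desired - current) * 100.0 / current < pct
```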
# when True, don't attempt to find cserver and don't send cserver EMS
# defaults to cert authentication if both basic and client certificate authentication parameters are given
# use same value as source if no value is given for dest
# default is HTTP
# HACK to bypass certificate verification
# set up zapi
# override NaServer in netapp-lib to enable certificate authentication
# SSL certificate authentication requires SSL
# override NaServer in netapp-lib to add Authorization header preemptively
# use wrapper to handle parse error (mostly for na_ontap_command)
# legacy netapp-lib
# netapp-lib message may contain a tuple or a str!
# python 2.7 does not know about ConnectionError
# Do not fail if we can't connect to the server.
# The module will report a better error when trying to get some data from ONTAP.
# raise on other errors, as it may be a bug in calling the ZAPI
# very unlikely to fail, but don't take any chance
# exit if there is an error or no data
# cluster admin
# assume vserver admin
# python 2.x syntax, but works for python 3 as well
# some ONTAP CLI commands return BEL on error
# And 9.1 uses \r\n rather than \n !
# And 9.7 may send backspaces
# python 3
# very unlikely, noop
# ignore a second exception, we'll report the first one
# report first exception, but include full response
# in case the response is very badly formatted, ignore it
# as is from latest version of netapp-lib
# ConnectionRefusedError is not defined in python 2.7
# either username/password or a certificate with/without a key are used for authentication
# The module only supports REST, so make it required
# accept is used to turn on/off HAL linking
# vserver tunneling using vserver name and/or UUID
# with requests, there is no challenge, eg no 401.
# OPTIONS provides the list of supported verbs
# a job looks like this
# if the job has failed, return message as error
# Would like to post a message to user (not sure how)
# Expecting job to be in the following format
# {'job':
# Will run every <increment> seconds for <timeout> seconds
# ignore error if status is provided in the job
# using GET rather than HEAD because the error messages are different,
# and we need the version as some REST options are not available in earlier versions
# in precluster mode, version is not available :(
# force ZAPI if requested
# Check if ONTAP version is already known
# If a variable is on a list we need to move it to a dict for this check to work correctly.
# ignore error, it will show up later when calling another REST API
# we're now using 'auto'
# force ZAPI if some parameter requires it
# if the ontap version is lower than the partially_supported_rest_properties version, force ZAPI, but only if the parameter is used
# we can't trust REST support on 9.5, and not at all on 9.4
# Copyright (c) 2020-2022, Laurent Nicolas <laurentn@netapp.com>
# assume a single component
# we expect two types of response
# and it's possible to expect both when 'return_timeout' > 0
# when using a query instead of a UUID, REST returns jobs (a list of jobs) rather than a single job
# only restit can send a query; all other calls use a UUID.
# Copyright (c) 2018, Laurent Nicolas <laurentn@netapp.com>
# we can call this with module set to self or self.module
# self is a NetApp module, while self.module is the AnsibleModule object
# When using self or self.module, this gives access to:
# When using self, this gives access to:
# if self.netapp_module:
# error, do nothing
# cannot rename a non-existent resource
# source is not None and target is None:
# rename is in order
# target is not None, so do nothing as the destination exists
# if source is None, maybe we already renamed
# if source is not None, maybe a new resource was created after being renamed
# we've exhausted the keys, good!
# preserve original values
# error, key or index not found
# error, we were expecting a dict or NAElement
# keep a copy, as the list is mutated
# allow empty dict or list if allow_empty_list_or_dict is set.
# skip empty dict or list otherwise
# skip None value
# one more caller in the stack
# python 2.7 does not have named attributes for frames
# one more caller to account for this function
# ONTAP will throw an 'invalid field' error if the length is not 9 or 12.
# if the length is 12, the first three characters set the userid ('s'), groupid ('s') and sticky ('t') attributes
# if the length is 9, start from 0, else start from 3.
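The 9/12-character rule described above can be sketched as follows (an illustrative helper, not the module's actual code):

```python
def mode_to_octal(mode):
    # ONTAP rejects anything that is not 9 or 12 characters long
    if len(mode) not in (9, 12):
        raise ValueError("expected a 9 or 12 character mode string")
    # with 12 characters, the first three carry the s/s/t flags; skip them
    start = 0 if len(mode) == 9 else 3
    octal = ""
    for i in range(start, start + 9, 3):
        r, w, x = mode[i:i + 3]
        octal += str((r == "r") * 4 + (w == "w") * 2 + (x == "x") * 1)
    return octal
```

A 9-character string such as `rwxr-xr--` maps directly; a 12-character string is read from position 3 onward.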
# python 2.7 requires unicode format
# return compressed value for IPv6 and value in . notation for IPv4
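The stdlib `ipaddress` module provides exactly this behaviour — compressed form for IPv6 and dotted notation for IPv4 (a sketch; the module may normalize differently):

```python
import ipaddress

def normalize_ip(value):
    # str() of an ip_address object is the compressed form for IPv6
    # and the usual dotted notation for IPv4
    return str(ipaddress.ip_address(value))
```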
# Copyright: (c) 2019, NetApp Ansible Team <ng-ansibleteam@netapp.com>
# Documentation fragment for StorageGRID
# (c) 2020, NetApp Inc
# Calling generic SG rest_api class
# Check the parameters passed and create a new parameters list
# Get list of groups
# Return a mapping of uniqueName to ids if found, or None
# Use the unique name to check if the user exists
# let's see if we need to update parameters
# If a password has been set
# Only update the password if update_password is always, or a create activity has occurred
# (c) 2025, NetApp Inc
# if alert 'type' exists, return it, else none
# (c) 2021, NetApp Inc
# Get API version
# Create body for creation request (POST with state present)
# Create timestamp for logs
# Log events
# Check if profile exists
# Return info if found, or None
# if ILM pool with 'name' exists, return it, else none
# (c) 2020, NetApp, Inc
# allow for passing in any additional rest api parameters
# required_if=[("state", "present", ["state", "name", "protocol"])],
# Check if tenant account exists
# Return tenant account info if found, or None
# Check if policy tag exists
# if ILM policy tag with 'name' exists, return it, else none
# Only add the parameter if value is True, as JSON response does not include non-true objects
# Use the unique name to check if the group exists
# for a federated group, the displayName parameter needs to be specified
# and must match the existing displayName
# (c) 2021-2025, NetApp Inc
# if a password has been specified we need to update it
# if identity federation is already in a disabled state
# if no error, connection test successful
# Remove passphrases from usm_users for both current_snmp and self.data to make it idempotent
# Append "management" to the capability list only if parameter is True
# Set marker to last element
# If a tenant password has been set
# Only update the tenant password if update_password is always
# On a create action, the tenant password is set directly by the POST /grid/accounts method
# if EC profile with 'name' exists, return it, else none
# only change something if the EC profile is not already inactive
# (c) 2022, NetApp Inc
# Check if Traffic Classification Policy exists
# Return policy ID if found, or None
# self.module.fail_json(msg=self.data)
# if vlan interface with 'vlan_id' exists, return it, else none
# Arguments for Creating Gateway Port
# Arguments for setting Gateway Virtual Server
# Parameters for creating a new gateway port configuration
# Parameters for setting a gateway virtual server configuration for a gateway port
# Parameters for allowing Management Interfaces for a gateway port
# Get only a list of used ports
# if port already exists then get gateway ID and get the gateway port server configs
# Get list of all gateway port configurations
# If certificate private key supplied, update
# remove metadata because we can't compare that
# compare current and desired state
# gateway config cannot be modified until StorageGRID 11.5
# Check if certificate with name exists
# Return certificate ID if found, or None
# required_if=[("state", "present", ["display_name"])],
# (c) 2020-2025, NetApp Inc
# Get list of admin groups
# Check the parameters passed and create a new parameters list.
# if it is set and true
# Check if HA Group exists
# Return HA Group info if found, or None
# check if we are in check mode
# this deprecated parameter is required for PUT operations in case "bucketFilter" is set. Don't ask me.
# Check if rule exists
# if rule with 'name' exists, return it, else none
# Add REST API names as their info version; also make sure we don't add a duplicate.
# Defining gather_subset and appropriate api_call.
# If all in subset list, get the information of all subsets.
# Verify that the supplied subset is supported.
# Check if domain name is not present in the current domain name list
# sleep for 5 seconds to allow the upload to get refreshed
# Check for any validation errors after updating the node queue
# Check if the node update is in progress
# sleep for 5 seconds before checking again
# get the hotfix details again after uploading the file
# get the nodes details after starting the hotfix
# sleep for 5 seconds before checking the status
# (c) 2024, NetApp Inc
# Check if policy exists
# if policy with 'name' exists, return it, else none
# Copyright (c) 2020, NetApp Ansible Team <ng-ansibleteam@netapp.com>
# key only exists in d1
# keys exist in both, but values are different
# both values can be compared
# recursion! dict or list inside here
# ensures that an answer="false" is not overwritten with answer="true". Once the answer is false, it should stay false.
# Do not return at this point, as we are in a loop and have to check all elements.
# values are different
# case if list elements are "simple": strings, integers, bools, none
# case if list elements are "complex": recursion! dict or list inside here
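The loop the comments above describe — recursing into nested dicts and never flipping a false answer back to true — might look roughly like this (names are illustrative):

```python
def dicts_match(desired, current):
    # once the answer is false it stays false; we still walk every element
    answer = True
    for key, value in desired.items():
        if key not in current:
            answer = False  # key only exists in desired
        elif isinstance(value, dict) and isinstance(current[key], dict):
            answer = dicts_match(value, current[key]) and answer  # recursion
        elif value != current[key]:
            answer = False  # values are different
    return answer
```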
# check if response is binary file
# Add status_code to the json_dict
# cannot rename a non-existent resource
# alternatively we could create B
# idempotency (or) new_name_is_already_in_use
# alternatively we could delete B and rename A to B
# do nothing, maybe the rename was already done
# Common options for ansible_collections.microsoft.ad.plugins.plugin_utils._module_with_reboot
# Common options for ansible_collections.microsoft.ad.plugins.module_utils._ADObject
# associated with plugin_utils._ldap.create_ldap_connection
# Parsing the string value variant as regex is too complicated due to the
# myriad of rules and escaping so it is done manually.
# We only count the spaces in the middle of the string so we need to
# keep track of how many have been found until the next character.
# We can add any spaces we are still tentatively collecting as there's
# a real value after it.
# FILETIME is 100s of nanoseconds since 1601-01-01. As Python does not
# support nanoseconds the delta is number of microseconds.
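The conversion described above — 100-nanosecond ticks since 1601-01-01, truncated to microseconds — can be sketched as:

```python
from datetime import datetime, timedelta, timezone

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime):
    # timedelta only resolves microseconds, so drop the last digit (100ns -> us)
    return FILETIME_EPOCH + timedelta(microseconds=filetime // 10)
```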
# Starting char cannot be ' ' or #
# Ending char cannot be ' '
# Any of these chars need to be escaped
# These are documented in RFC 4514
# These are extra chars MS says to escape, it must be done using
# the hex syntax
# https://learn.microsoft.com/en-us/previous-versions/windows/desktop/ldap/distinguished-names
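The escaping rules listed above reduce to something like this (a sketch of the RFC 4514 characters only; the extra MS characters that need hex escapes are left out):

```python
# RFC 4514 special characters that must be backslash-escaped
SPECIAL = '"+,;<>\\'

def escape_dn_value(value):
    out = "".join("\\" + c if c in SPECIAL else c for c in value)
    if out[:1] in ("#", " "):
        out = "\\" + out        # starting char cannot be ' ' or '#'
    if out[-1:] == " ":
        out = out[:-1] + "\\ "  # ending char cannot be ' '
    return out
```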
# This behaviour is defined in RFC 4514 and while not defined in that RFC
# this will also remove any extra spaces before and after , = and +.
# This operates on bytes for 2 reasons:
# surrogateescape is used for all conversions to ensure non-unicode bytes
# are preserved using the escape behaviour in UTF-8.
# If ended with + we want to continue parsing the AVA values
# FOR INTERNAL COLLECTION USE ONLY
# The interfaces in this file are meant for use within this collection
# and may not remain stable to outside uses. Changes may be made in ANY release, even a bugfix release.
# See also: https://github.com/ansible/community/issues/539#issuecomment-780839686
# Please open an issue if you have questions about this.
# This is not ideal but the psrp connection plugin doesn't catch all these exceptions as an AnsibleConnectionFailure.
# Until we can guarantee we are using a version of psrp that handles all this we try to handle those issues.
# PowerShell has 5 characters it uses as a single quote, we need to double up on all of them.
# https://github.com/PowerShell/PowerShell/blob/b7cb335f03fe2992d0cbd61699de9d9aafa1d7c1/src/System.Management.Automation/engine/parser/CharTraits.cs#L18-L21
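The doubling rule for the five quote-like characters can be sketched like this (constant and function name are illustrative):

```python
# The five characters PowerShell's parser treats as a single quote:
# ' U+0027, U+2018, U+2019, U+201A, U+201B
SINGLE_QUOTES = "'\u2018\u2019\u201a\u201b"

def quote_pwsh(value):
    # double every quote-like character, then wrap in plain single quotes
    escaped = "".join(c * 2 if c in SINGLE_QUOTES else c for c in value)
    return "'{0}'".format(escaped)
```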
# Get current boot time. A lot of tasks that require a reboot leave the WSMan stack in a bad place. Will try to
# get the initial boot time 3 times before giving up.
# Report the failure based on the last exception received.
# This command may be wrapped in other shells or commands, making it hard to detect what shutdown.exe actually
# returned. We use this hackery to return json that contains the stdout/stderr/rc as a structured object for our
# code to parse and detect if something went wrong.
# We cannot have an expected result if the command is user defined
# It turns out that LogonUI will create this registry key if it does not exist when it's about to show the
# logon prompt. Normally this is a volatile key but if someone has explicitly created it that might no longer
# be the case. We ensure it is not present on a reboot so we can wait until LogonUI creates it to determine
# the host is actually online and ready, e.g. no configurations/updates still to be applied.
# We echo a known successful statement to catch issues with powershell failing to start but the rc mysteriously
# being 0 causing it to consider a successful reboot too early (seen on ssh connections).
# Keep on trying to run the last boot time check until it is successful or the timeout is raised
# Reset the connection plugin connection timeout back to the original
# Run the test command until it is successful or a timeout occurs
# override connection timeout from defaults to custom value
# Keep on trying the reset until it succeeds.
# The error may be due to a connection problem, just reset the connection just in case
# Need to wrap the command in our PowerShell encoded wrapper. This is done to align the command input to a
# common shell and to allow the psrp connection plugin to report the correct exit code without manually setting
# $LASTEXITCODE for just that plugin.
# The psrp connection plugin should be doing this but until we can guarantee it does we just convert it here
# to ensure AnsibleConnectionFailure refers to actual connection errors.
# While the reboot command should output json it may have failed for some other reason. We continue
# reporting with that output instead
# Test for "A system shutdown has already been scheduled. (1190)" and handle it gracefully
# Try to abort (this may fail if it was already aborted)
# While reset() should probably better handle this some connection plugins don't clear the existing connection on
# reset() leaving resources still in use on the target (WSMan shells). Instead we try to manually close the
# connection then call reset. If it fails once we want to skip closing to avoid a perpetual loop and just hope
# reset() brings us back into a good state. If it's successful we still want to try it again.
# For some connection plugins (ssh) reset actually does something more than close so we also class that
# Not all connection plugins have reset so we just ignore those, close should have done our job.
# Not all connection plugins implement this, just ignore the setting if it doesn't work
# https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_quoting_rules?view=powershell-5.1
# We should always quote values in PowerShell as it has conflicting rules where strings can and can't be quoted.
# This means we quote the entire arg with single quotes and just double up on the single quote equivalent chars.
# Regardless of the module result this needs to be True as a
# reboot happened.
# Make sure the invocation details from the module are still present.
# Assume that on a rerun it will not have failed and that it
# ran successful.
# Catches any other module option not needed here
# If the client establishes the SSL/TLS-protected connection by means of connecting on a protected LDAPS port,
# then the connection is considered to be immediately authenticated (bound) as the credentials represented
# by the client certificate. An EXTERNAL bind is not allowed, and the bind will be rejected with an error.
# https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/8e73932f-70cf-46d6-88b1-8d9f86235e81
# Set socket into blocking mode
# The socket has already been shutdown for some other reason
# dnspython is used for dynamic server lookups
# krb5 is used to retrieve the default realm for dynamic server lookups.
# Sorts the array by lowest priority then highest weight.
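The ordering in the comment above — lowest priority first, then highest weight — is a one-line sort key (the record shape is an assumption):

```python
def order_srv_records(records):
    # lower priority wins; within a priority, higher weight comes first
    return sorted(records, key=lambda r: (r["priority"], -r["weight"]))
```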
# Cryptography is used for the TLS channel binding data and to convert PKCS8/12
# certs to a PEM file required by OpenSSL.
# If the cert signature algorithm is unknown, md5, or sha1 then use sha256
# otherwise use the signature algorithm of the cert itself.
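The fallback rule above reduces to a small decision (names are illustrative; the real code works on a `cryptography` certificate object rather than a string):

```python
def channel_binding_hash(signature_algorithm):
    # unknown (None), md5 and sha1 all fall back to sha256;
    # anything else uses the cert's own algorithm
    if signature_algorithm is None or signature_algorithm in ("md5", "sha1"):
        return "sha256"
    return signature_algorithm
```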
# It is important the caller does not delete the dir because the
# lookup happens during the handshake and not now.
# cafile only works for PEM encoded certs, whereas cadata can
# load DER encoded certs.
# Specifying an empty b"" stops OpenSSL from prompting for the password if
# it was required. Instead it will just fail to load.
# load_cert_chain does not expose a way to load a certificate/key from
# memory so a temporary directory is used in the cases where a pfx or
# string supplied cert is used.
# Check is in __init__.py
# SearchResultReference is ignored for now.
# The certs are provided in the TLS handshake, the SASL EXTERNAL mech
# just tells the server to check those for the bind.
# MS AD rejects any authentication that provides integrity or
# confidentiality if the connection is already protected by TLS.
# As the GSS-SPNEGO SASL relies on the context attributes to
# negotiate whether signing/encryption and Kerberos by default
# always uses the integrity attributes we need to tell it
# explicitly not to. The no_integrity flag does that for us.
# When not operating over TLS request integrity and confidentiality
# so that we can encrypt the traffic.
# If inventory_hostname was defined in compose, set it in the custom
# attributes so we can set the hostname before processing the rest of
# compose entries.
# These options are in ../doc_fragments/ldap_connection.py
# This is a stub to get the docs working, the implementation is an action plugin
# cred attrs added in krb5 0.5.0
# Copyright (c) 2021 T-Systems Multimedia Solutions GmbH
# This module is free software: you can redistribute it and/or modify
# This software is distributed in the hope that it will be useful,
# along with this software.  If not, see <http://www.gnu.org/licenses/>.
# Documentation for common options that are always the same
# default for deprecated monitoring module
# call base method to ensure properties are available for use with other helper methods
# Read the inventory YAML file
# Store the options from the YAML file
# Add variables created by the user's Jinja2 expressions to the host
# The following two methods combine the provided variables dictionary with the latest host variables
# Using these methods after _set_composite_vars() allows groups to be created with the composed variables
# Module execution.
# use the predefined argument spec for url
# add our own arguments
# Define the main module
# Copyright (c) 2020 T-Systems Multimedia Solutions GmbH
# Icinga2 API class
# icinga also returns normal objects when querying templates,
# we need to filter these
# Copyright (c) 2025 Deutsche Telekom MMS GmbH
# Swap parameter name in case of a parent host variable
# Swap parameter name in case of a parent service variable
# Icinga expects 'y' or 'n' instead of booleans for option "with_services"
# Icinga expects 'yes' or 'no' instead of booleans for option "fixed"
# When deleting objects, only the name is necessary, so we cannot use
# required=True in the argument_spec. Instead we define here what is
# necessary when state is present and we do not append to an existing object
# We cannot use "required_if" here, because we rely on module.params.
# These are defined at the same time we'd define the required_if arguments.
# Copyright (c) 2023 T-Systems Multimedia Solutions GmbH
# get the current deployment status
# if there is no existing deployment (e.g. on a new instance), there is no config object
# execute the deployment
# the deployment is asynchronous and I don't know of a way to check if it is finished,
# so we need some sleep here. 2 seconds was a wild guess and the default; now it is a variable
# get the new deployment status
# when the old checksum, the checksum to be created and the new checksum are the same, nothing changed
# when the current and new deployment are the same, but the checksum to be created is different, the deployment failed
# in other cases the deployment succeeded and changed something
# `arguments` is of type dict, default should be {}
# however, the director api returns [] when no `arguments` are set, making the diff seem changed
# therefore set the default to [] as well to get a clean diff output
# set url parameters when service is assigned to a host
# set url parameters when service is assigned to a serviceset
# Only one of "service_set" or "host" needs to be present at the
# same time so we cannot use required=True in the argument_spec.
# Instead we define here that only one of the two parameters is defined at the same time.
# type of arguments is actually dict, i.e. without specification = {}
# the director API returns an empty array if nothing is defined
# therefore overwrite so that the diff works better
# handle 400 errors
# handle other errors
# if nothing is modified when trying to change objects, fetch_url
# returns only the 304 status but no body.
# if that happens we set the content to an empty json object.
# else we serialize the response as a json object.
# workaround for type confusion in API, remove when https://github.com/telekom-mms/ansible-collection-icinga-director/issues/285 is solved
# Copyright: (c) 2024, Infinidat <info@infinidat.com>
# pylint: disable=invalid-name,use-dict-literal,line-too-long,wrong-import-position
# RETURN = r''' # '''
# Handled by HAS_INFINISDK from module_utils
# pylint: disable=too-many-locals
# pylint: disable=invalid-name,use-dict-literal,line-too-long,wrong-import-position,too-many-branches
# Default value of ssd_cache is True. Disable ssd caching if False
# Default value of compression is True. Disable compression if False
# Round up the capacity to mimic Infinibox behaviour
# print('fields: {0}'.format(fields))
# pylint: disable=invalid-name,use-dict-literal,too-many-branches,too-many-locals,line-too-long,wrong-import-position
# Quiet pylint
# pylint: disable=too-many-statements
# Could check metadata value size < 32k
# Create json data
# Put
# Variable 'changed' not returned by design
# No vol therefore no metadata to delete
# No fs therefore no metadata to delete
# No host therefore no metadata to delete
# No cluster therefore no metadata to delete
# No fssnap therefore no metadata to delete
# No pool therefore no metadata to delete
# No volsnap therefore no metadata to delete
#.get_result()
# Add object name to result
# TODO - support pagination
# Assemble rest path
# Check system object_type
# Check object_name is None
# Handle special system metadata keys
# Convert bool string to bool
# Convert integer string to int
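The bool/int string conversion described in the two comments above could look like this minimal sketch (the function name is hypothetical):

```python
def coerce_metadata_value(value):
    """Convert a metadata string to bool or int where possible (sketch)."""
    # Convert bool string to bool
    if value.lower() in ("true", "false"):
        return value.lower() == "true"
    # Convert integer string to int
    try:
        return int(value)
    except ValueError:
        return value  # leave anything else as a plain string
```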
# pylint: disable=invalid-name,use-dict-literal,line-too-long,wrong-import-position,too-many-locals
# Example of line searched for:
# <input type="hidden" name="csrfmiddlewaretoken" value="VUe6...m5Nl7y">'
# Get a token
# Use POST to provide the token
# Use GET to get a token
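Extracting the `csrfmiddlewaretoken` from the hidden-input line shown in the example above can be sketched with a regex like this (the pattern and helper name are illustrative, not the module's actual code):

```python
import re

# Matches the hidden-input line shown above, e.g.:
# <input type="hidden" name="csrfmiddlewaretoken" value="VUe6...m5Nl7y">
TOKEN_RE = re.compile(r'name="csrfmiddlewaretoken"\s+value="([^"]+)"')

def find_csrf_token(html):
    """Pull the CSRF token out of a page fetched with GET (sketch)."""
    match = TOKEN_RE.search(html)
    return match.group(1) if match else None
```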
# Check that the IBOX was added or was previously added.
# Search for one of:
# Just added now
# Previously added
# Check that the IBOX was removed or was previously removed
# In response.return_code, search for 200
# pylint: disable=too-many-branches
# This module does not use infinibox_argument_spec() from infinibox.py
# pylint: disable=invalid-name,use-list-literal,use-dict-literal,line-too-long,wrong-import-position,multiple-statements
# search_order
# Add type specific fields to data dict
# LDAP
# Rename id to repository_id
# pylint: disable=too-many-return-statements,multiple-statements
# ad_domain_name = module.params["ad_domain_name"]
# bind_password = module.params["bind_password"]
# bind_username = module.params["bind_username"]
# ldap_servers = module.params["ldap_servers"]
# ldap_port = module.params["ldap_port"]
# schema_group_memberof_attribute = module.params["schema_group_memberof_attribute"]
# schema_group_name_attribute = module.params["schema_group_name_attribute"]
# schema_groups_basedn = module.params["schema_groups_basedn"]
# schema_user_class = module.params["schema_user_class"]
# schema_username_attribute = module.params["schema_username_attribute"]
# schema_users_basedn = module.params["schema_users_basedn"]
# Creating an LDAP
# if changed:
# check_mode
# May raise APICommandFailed if mapped, etc.
# Copyright: (c) 2024, Infinidat(info@infinidat.com)
# msg = "client_list: {0}, type: {1}".format(client_list, type(client_list))
# module.fail_json(msg=msg)
# from_cache=True, raw_value=True)
# Used when hacking
# Check params
# Restore fs from snapshot
# Use default for master fs
# Use default for snapshot
# pylint: disable=invalid-name,use-dict-literal,line-too-long,wrong-import-position, wrong-import-order
# Update client
# If access_mode and/or no_root_squash not passed as arguments to the module,
# use access_mode with RW value and set no_root_squash to False
# Create client
# Found client
# Set the user's role
# Add the user to the pool's owners
# Pylint cannot find Infinibox.pools, but Python can.
# User is not a pool owner
# e.g. of one host dict found in the module.params['cluster_hosts'] list:
# Need to add host to cluster?
# Need to remove host from cluster?
# Check host has required keys
# _ = host[valid_key]
# Check host has no unknown keys
# Add address_long key to init whose value is the address with colons inserted.
# print("\nrequest:", request, "initiator_type:", initiator_type)
# Only return initiators of the desired type.
# print("host_initiators_by_type:", host_initiators_by_type)
# print()
# Only include certain keys in the returned initiators
# print("initiator targets:", initiator['targets'])
# Restore volume from snapshot
# isvalue.lower() not in values:
# pylint: disable=invalid-name,use-list-literal,use-dict-literal,line-too-long,wrong-import-position
# Create network space
# product_id = system.api.get('system/product_id')
# Ignore
# Create space
# Find space's ID
# Add IPs to space
# Trailing space by design
# Find IPs from space
# Disable and delete IPs from space
# Must be disabled and deleted last
# Delete space
# Target ID for sending to recipients
# Rename id to rule_id
# pylint: disable=use-list-literal,use-dict-literal,line-too-long,wrong-import-position,broad-exception-caught,invalid-name
# except (ImportError, ModuleNotFoundError):
# Should never get to this line but it quiets pylint inconsistent-return-statements
# Some GET rest calls do not support paging, e.g. /api/rest/metadata/{id}.
# Try first with paging, then if there is an UNKNOWN_PARAMETER error,
# try without paging.
# Bad request, e.g. METADATA_NOT_FOUND
# GET does not support paging
# Reached end of the pagination.
# If no pages_total key in metadata, then it is not a list.
# Return the single result.
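The try-with-paging-then-fall-back flow above can be sketched as follows. Here `get` and the `ValueError` stand in for the real SDK call and its UNKNOWN_PARAMETER error; the page size and key names are assumptions.

```python
def get_with_optional_paging(get, path):
    """Fetch a GET endpoint that may or may not support paging (sketch)."""
    page = 1
    results = []
    while True:
        try:
            reply = get(path, params={"page": page, "page_size": 50})
        except ValueError:
            # GET does not support paging: retry without paging params.
            return get(path, params=None)["result"]
        metadata = reply.get("metadata", {})
        if "pages_total" not in metadata:
            # Not a paged list: return the single result as-is.
            return reply["result"]
        results.extend(reply["result"])
        if page >= metadata["pages_total"]:
            return results  # reached the end of the pagination
        page += 1
```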
# Remove existing file to ensure the creation time is updated
# Create system
# For metadata
# Used by metadata module
# Then user has specified wish to lock snap
# Check for lock in the past
# Check for lock later than max lock, i.e. too far in future.
# 30 days in minutes
# Lock earlier than current lock
# Lock already set to correct time
# Set lock
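The lock-time checks above (reject a lock in the past, reject a lock beyond the 30-day maximum) can be sketched like this; the function name and error handling are illustrative, not the module's actual code.

```python
import datetime

MAX_LOCK_MINUTES = 30 * 24 * 60  # 30 days in minutes

def validate_lock_expiry(lock_at, now=None):
    """Validate a requested snapshot lock expiry time (sketch)."""
    now = now or datetime.datetime.now()
    if lock_at <= now:
        # Check for lock in the past
        raise ValueError("lock expiry is in the past")
    if lock_at > now + datetime.timedelta(minutes=MAX_LOCK_MINUTES):
        # Check for lock later than the maximum, i.e. too far in the future
        raise ValueError("lock expiry is more than 30 days in the future")
    return lock_at
```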
# (c) 2017, Tennis Smith, https://github.com/gamename
# Make coding more python3-ish
# define start time
# http://bytes.com/topic/python/answers/635958-handy-short-cut-formatting-elapsed-time-floating-point-seconds
# Align summary report header with other callback plugin summary
# Print the timings starting with the largest one
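The elapsed-time shortcut linked above formats floating-point seconds as `H:MM:SS.mmm` via repeated `divmod`; a recreation of it might look like this (reconstructed from the linked post, not copied from the plugin):

```python
import functools

def seconds_to_str(t):
    """Format floating-point seconds as H:MM:SS.mmm (sketch)."""
    def rediv(ll, b):
        # Peel one unit (ms -> s -> min -> h) off the running total.
        return list(divmod(ll[0], b)) + ll[1:]
    return "%d:%02d:%02d.%03d" % tuple(
        functools.reduce(rediv, [[t * 1000], 1000, 60, 60])
    )
```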
# (c) 2016, Matt Martz <matt@sivel.net>
# (C) 2016, Joel, https://github.com/jjshoe
# (C) 2015, Tom Paine, <github@aioue.net>
# (C) 2014, Jharrod LaFon, @JharrodLaFon
# (C) 2012-2013, Michael DeHaan, <michael.dehaan@gmail.com>
# (C) 2017 Ansible Project
# Record the start time of the current task
# stats[TASK_UUID]:
# Sort the tasks by the specified sort
# Display the number of tasks specified or the default of 20
# Print the timings
# (c) 2018 Matt Martz <matt@sivel.net>
# RECORD SEPARATOR
# LINE FEED
# pylint: disable=non-parent-init-called
# We may be profiling after the playbook has ended
# pylint: disable=too-few-public-methods,no-init
# Enable JSON indentation
# All the -u options must be first, so we process them first
# don't quote the cmd if it's an empty string, because this will break pipelining mode
# The following test is fish-compliant.
# comparison (or or and) that sets the rc value.  Every check is run so
# the last check in the series to fail will be the rc that is
# returned.
# (UFS filesystem?) which is not what the rest of the ansible code
# expects
# If all of the available hashing methods fail we fail with an rc of
# 0.  This logic is added to the end of the cmd at the bottom of this
# NOQA  Python > 2.4 (including python3)
# NOQA  Python == 2.4
# Copyright: (c) 2013, Adam Miller <maxamillion@fedoraproject.org>
# The import errors are handled via FirewallTransaction, don't need to
# duplicate that here
# Even though it shouldn't happen, it's actually possible that
# the same interface is in several zone XML files
# remove from old
# add to new
# Convert the rule string to standard format
# before checking whether it is present
# Sanity checks
# `offline`, `immediate`, and `permanent` have a weird twisty relationship.
# specifying offline without permanent makes no sense
# offline overrides immediate to false if firewalld is offline
# immediate defaults to true if permanent is not enabled
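The "weird twisty relationship" between `offline`, `immediate`, and `permanent` described above can be captured in a small resolver; the function name and return shape are illustrative, not the module's actual code.

```python
def resolve_scope(offline, permanent, immediate, firewalld_running):
    """Resolve the offline/immediate/permanent interplay (sketch).

    - specifying offline without permanent makes no sense
    - offline overrides immediate to false when firewalld is offline
    - immediate defaults to true when permanent is not enabled
    """
    if offline and not permanent:
        raise ValueError("offline=True requires permanent=True")
    if offline and not firewalld_running:
        immediate = False
    if not permanent and immediate is None:
        immediate = True
    return permanent, bool(immediate)
```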
# Verify required params are provided
# mode == 'get'
# prevents absolute path warnings and removes headers
# use nfs4_getfacl instead of getfacl if use_nfsv4_acls is True
# To check the ACL changes, use the output of setfacl or nfs4_setfacl with '--test'.
# FreeBSD does not have a --test flag, so by default it is safer to always report "true".
# lists are mutable, so cmd would be overwritten without this copy
# if use_nfsv4_acls and entry is listed
# The current 'nfs4_setfacl --test' output lists the new entry,
# so if the entry has already been registered, it should be found twice.
# trim last line only when it is empty
# Copyright: Red Hat Inc.
# Verify that the platform is an rpm-ostree based system
# (c) 2012, David "DaviXX" CHANIAL <david.chanial@gmail.com>
# We have to use LANG=C because we are capturing STDERR of sysctl to detect
# success or failure.
# current token value in proc fs
# current token value in file
# all lines in the file
# dict of token values
# will change occur
# does sysctl need to set value
# does the sysctl file need to be reloaded
# Whitespace is bad
# get the current proc fs value
# get the current sysctl file value
# update file contents with desired token/value
# what do we need to do now?
# with reload=yes we should check if the current system values are
# correct, so that we know if we should reload
# use the sysctl command or not?
# Do the work
# sysctl can fail to set a value even if it returns an exit status 0
# (https://bugzilla.redhat.com/show_bug.cgi?id=1264080). That's why we
# also have to check stderr for errors. For now we will only fail on
# specific errors defined by the regex below.
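Because sysctl can exit 0 even when setting a value failed (see the Red Hat bug linked above), stderr has to be checked against known error patterns as well. A sketch, with an illustrative regex rather than the module's actual one:

```python
import re

# Only treat specific stderr patterns as fatal; the exact patterns here
# are illustrative, not the module's real regex.
SYSCTL_ERROR_RE = re.compile(r"(permission denied|invalid argument)", re.IGNORECASE)

def sysctl_set_failed(rc, stderr):
    """Decide whether a sysctl set operation failed (sketch)."""
    if rc != 0:
        return True
    # rc == 0 is not enough: also scan stderr for known errors.
    return bool(SYSCTL_ERROR_RE.search(stderr))
```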
# Use the sysctl command to find the current value
# openbsd doesn't support -e, just drop it
# Use the sysctl command to set the current value
# openbsd doesn't accept -w, but since it's not needed, just drop it
# freebsd doesn't accept -w, but since it's not needed, just drop it
# Run sysctl -p
# freebsd doesn't support -p, so reload the sysctl service
# openbsd doesn't support -p and doesn't have a sysctl service,
# so we have to set every value with its own sysctl call
# FIXME this check is probably not needed as set_token_value would fail_json if rc != 0
# set_token_value would have called fail_json in case of failure
# so return here and do not continue to the error processing below
# https://github.com/ansible/ansible/issues/58158
# system supports reloading via the -p flag to sysctl, so we'll use that
# Get the token value from the sysctl file
# don't split empty lines or comments or line without equal sign
# Fix the value in the sysctl file content
# Completely rewrite the sysctl file
# open a tmp file
# replace the real one
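The tmp-file-then-replace rewrite described above is the standard atomic-update pattern; a minimal sketch (the function name is hypothetical):

```python
import os
import tempfile

def rewrite_sysctl_file(path, lines):
    """Completely rewrite a sysctl file atomically (sketch).

    Open a tmp file in the same directory, write the new contents,
    then rename it over the real one.
    """
    dirname = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.writelines(lines)
        os.replace(tmp_path, path)  # atomic rename on POSIX
    except Exception:
        os.unlink(tmp_path)
        raise
```

Writing the temp file in the same directory matters: `os.replace` is only atomic when source and destination live on the same filesystem.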
# defining module
# In case of in-line params
# Copyright: (c) 2012, Red Hat, inc
# Written by Seth Vidal
# based on the mount modules from salt and puppet
# Append a newline if the line in fstab does not end with one.
# Check if we got a valid line for splitting
# (on Linux the 5th and the 6th field is optional)
# The last two fields are optional on Linux so we fill in default values
# Fill in the rest of the available fields
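The fstab-line splitting described above, with the optional 5th and 6th fields defaulted on Linux, can be sketched like this (the function name and return shape are illustrative):

```python
def parse_fstab_line(line):
    """Split one fstab line into its six fields (sketch).

    On Linux the dump and passno fields are optional, so fill in the
    '0' defaults when they are missing.
    """
    if line.lstrip().startswith("#") or not line.strip():
        return None  # comment or blank: not a valid entry
    fields = line.split()
    if len(fields) < 4:
        return None  # not enough fields for a valid line
    while len(fields) < 6:
        fields.append("0")  # default the optional dump/passno fields
    src, name, fstype, opts, dump, passno = fields[:6]
    return {"src": src, "name": name, "fstype": fstype,
            "opts": opts, "dump": dump, "passno": passno}
```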
# Check if we found the correct line
# In the case of swap, check the src instead
# If we got here we found a match - let's check if there is any
# difference
# If we got here we found a match - continue and mark changed
# Set fstype switch according to platform. SunOS/Solaris use -F
# Even if '-o remount' is already set, specifying multiple -o is valid
# Use module.params['fstab'] here as args['fstab'] has been set to the
# Multiplatform remount opts
# Note: Forcing BSDs to do umount/mount due to BSD remount not
# working as expected (suspect bug in the BSD mount command)
# Interested contributor could rework this to use mount options on
# the CLI instead of relying on fstab
# https://github.com/ansible/ansible-modules-core/issues/5591
# Note: this does not affect ephemeral state as all options
# are set on the CLI and fstab is expected to be ignored.
# Note if we wanted to put this into module_utils we'd have to get permission
# from @jupeter -- https://github.com/ansible/ansible-modules-core/pull/2923
# @jtyr -- https://github.com/ansible/ansible-modules-core/issues/4439
# and @abadger to relicense from GPLv3+
# That's for unmounted/absent
# Omit the parent's root in the child's root
# == Example:
# 140 136 253:2 /rootfs / rw - ext4 /dev/sdb2 rw
# 141 140 253:2 /rootfs/tmp/aaa /tmp/bbb rw - ext4 /dev/sdb2 rw
# == Expected result:
# src=/tmp/aaa
# Prepend the parent's dst to the child's root
# 42 60 0:35 / /tmp rw - tmpfs tmpfs rw
# 78 42 0:35 /aaa /tmp/bbb rw - tmpfs tmpfs rw
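The two mountinfo adjustments worked through above (omit the parent's root from the child's root; prepend the parent's dst to the child's root) can be sketched as a pair of helpers. The names are illustrative; the examples mirror the ones in the comments.

```python
def strip_parent_root(parent_root, child_root):
    """Omit the parent's root in the child's root (sketch).

    e.g. parent root /rootfs, child root /rootfs/tmp/aaa -> /tmp/aaa
    """
    if parent_root != "/" and child_root.startswith(parent_root):
        return child_root[len(parent_root):] or "/"
    return child_root

def prepend_parent_dst(parent_dst, child_root):
    """Prepend the parent's dst to the child's root (sketch).

    e.g. parent dst /tmp, child root /aaa -> /tmp/aaa
    """
    if parent_dst != "/":
        return parent_dst.rstrip("/") + child_root
    return child_root
```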
# If the provided mountpoint is not a mountpoint, don't waste time
# Treat Linux bind mounts
# For Linux bind mounts only: the mount command does not return
# the actual source for bind mounts, but the device of the source.
# is_bind_mounted() called with the 'src' parameter will return True if
# the mountpoint is a bind mount AND the source FS is the same as 'src'.
# is_bind_mounted() is not reliable on Solaris, NetBSD and OpenBSD.
# But we can rely on 'mount -v' on all other platforms, and Linux non-bind mounts.
# mount with the -v parameter behaves similarly on Linux, *BSD and SunOS
# Requires -v with SunOS. Without -v, source and destination are reversed
# Output format differs from one system to another, but fields[0:3] are consistent: [src, 'on', dest]
# solaris args:
# linux args:
# Note: Do not modify module.params['fstab'] as we need to know if the user
# explicitly specified it in mount() and remount()
# FreeBSD doesn't have any 'default' so set 'rw' instead
# Cache all mounts here in order we have consistent results if we need to
# call is_bind_mounted() multiple times
# Override defaults with user specified params
# Linux, FreeBSD, NetBSD and OpenBSD have 'noauto' as mount option to
# handle mount on boot. To avoid mount option conflicts, if 'noauto' is
# specified in 'opts', the mount module will ignore 'boot'.
# If fstab file does not exist, we first need to create it. This mainly
# happens when fstab option is passed to the module.
# If state is 'ephemeral', we do not need fstab file
# absent:
# unmounted:
# present:
# mounted:
# ephemeral:
# Something like mkdir -p but with the possibility to undo.
# Based on some copy-paste from the "file" module.
# ephemeral: completely ignore fstab
# When 'state' == 'ephemeral', we don't know what is in fstab, and 'changed' is always False
# If state == 'ephemeral', check if the mountpoint src == module.params['src']
# If it doesn't, fail to prevent unwanted unmount or unwanted mountpoint override
# If not already mounted, mount it
# Not restoring fstab after a failed mount was reported as a bug,
# ansible/ansible#59183
# A non-working fstab entry may break the system at the reboot,
# so undo all the changes if possible.
# If the managed node is Solaris, convert the boot value type to Boolean
# Copyright: (c) 2012, Luis Alberto Perez Lazaro <luisperlazaro@gmail.com>
# Copyright: (c) 2015, Jakub Jirutka <jakub@jirutka.cz>
# Older versions of FreeBSD, OpenBSD and NetBSD support the --check option only.
# NB: for 'backup' parameter, semantics is slightly different from standard
# Create type object as namespace for module params
# patch need an absolute file name
# Copyright: (c) 2012, Derek Carter<goozbach@friocorte.com>
# getter subroutines
# inconsistent config - return None to force update
# setter subroutines
# SELINUX=permissive
# edit config file with state value
# SELINUXTYPE=targeted
# global vars
# enabled means 'enforcing' or 'permissive'
# check to see if policy is set if state is not 'disabled'
# check changed values and run changes
# cannot change runtime policy
# Temporarily set state to permissive
# Only report changes if the file is changed.
# This prevents the task from reporting changes every time the task is run.
# Update kernel enabled/disabled config only when setting is consistent
# across all kernels AND the requested state differs from the current state
# The following method implements what setsebool.c does to change
# a boolean and make it persist after reboot.
# Only respawn the module if both libraries are missing.
# If only one is available, then usage of the "wrong" (i.e. not the system one)
# python interpreter is likely not the problem.
# selinux_boolean_sub allows sites to rename a boolean and alias the old name
# Feature only available in selinux library since 2012.
# Copyright: (c) 2021, Hideki Saito <saito@fgrep.org>
# This function supports python-firewall 0.9.0 (or later).
# Exit with failure message if requirements modules are not installed.
# If you want to show warning messages in the task running process,
# you can append the message to the 'warn' list.
# Gather general information of firewalld.
# Gather information for zones.
# Gather settings for each zone based on the output of
# 'firewall-cmd --info-zone=<ZONE>' command.
# The 'forward' parameter is supported on python-firewall 0.9.0 (or later).
# Copyright: (c) 2012, Brad Olson <brado@movedbylight.com>
# Makes sure the public key line is present or absent in the user's .ssh/authorized_keys.
# see example in examples/playbooks
# http://stackoverflow.com/questions/2328235/pythonextend-the-dict-class
# touches file so we can set ownership and perms
# ordered dict
# the following regex will split on commas while
# ignoring those commas that fall within quotes
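A regex that splits on commas while ignoring commas inside double quotes uses a lookahead requiring an even number of quotes after the comma. A sketch (the exact pattern used by the module may differ):

```python
import re

# A comma is a separator only if an even number of double quotes follows
# it, i.e. the comma is not inside a quoted option value.
OPTIONS_RE = re.compile(r',(?=(?:[^"]*"[^"]*")*[^"]*$)')

def split_options(options):
    """Split an authorized_keys option string on unquoted commas (sketch)."""
    return OPTIONS_RE.split(options)
```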
# connection options
# encrypted key string
# type of ssh key
# index of keytype in key string|list
# remove comment yaml escapes
# split key safely
# keep comment hashes
# comment line, invalid line, etc.
# check for options
# parse the options (if any)
# get key after the type index
# set comment to everything after the key
# use key as identifier
# for an invalid line, just set the line
# dict key to the line so it will be re-output later
# order the new_keys by their original ordering, via the rank item in the tuple
# comment line or invalid line, just leave it
# if the key is a url or file, request it and use it as key source
# resp.read gives bytes on python3, convert to native string type
# if the key is an absolute path, check for existence and use it as a key source
# extract individual keys into an array, skipping blank lines and comments
# check current state -- just get the filename, don't create file
# Add a placeholder for keys that should exist in the state=present and
# exclusive=true case
# we will order any non exclusive new keys higher than all the existing keys,
# resulting in the new keys being written to the key file after existing keys, but
# in the order of new_keys
# Check our new keys, if any of them exist we'll continue.
# rank here is the rank in the provided new keys, which may be unrelated to rank in existing_keys
# Then we check if everything (except the rank at index 4) matches, including
# the key type and options. If not, we append this
# existing key to the non-matching list
# We only want it to match everything when the state
# is present
# handle idempotent state=present
# new key that didn't exist before. Where should it go in the ordering?
# We want the new key to be after existing keys if not exclusive (rank > max_rank_of_existing_keys)
# replace existing key tuple with new parsed key with its total rank
# remove all other keys to honor exclusive
# for 'exclusive', make sure keys are written in the order the new keys were
# Copyright: (c) 2012-2013, Timothy Appnel <tim@appnel.com>
# pylint: disable=disallowed-name
# the default of these params depends on the value of archive
# https://github.com/ansible/ansible/issues/15907
# if the user has not supplied an --rsh option go ahead and add ours
# If the user specified a port value
# Note:  The action plugin takes care of setting this to a port from
# inventory if the user didn't specify an explicit dest_port
# verbose required because rsync does not believe that adding a
# hardlink is actually a change
# If we are using password authentication, write the password into the pipe
# a leading period indicates no change
# Copyright: (c) 2014, Richard Isaacson <richard.c.isaacson@gmail.com>
# Get list of job numbers for the user.
# Read script_file into a string.
# Loop through the jobs.
# Return the list.
# If given a command, transform it into a script_file
# if absent, remove the existing job and return
# if unique and the job already exists, return unchanged
# (c) 2012-2013, Timothy Appnel <tim@appnel.com>
# Check if we have a local relative path and do not process
# * remote paths (some.server.domain:/some/remote/path/...)
# * URLs (rsync://...)
# * local absolute paths (/some/local/path/...)
# make sure the dwim'd path ends in a trailing "/"
# if the original path did
# If using docker or buildah, do not add user information
# preserve formatting of remote paths if host or user@host is explicitly defined in the path
# If we're connecting to a remote host or we're delegating to another
# host or we're connecting to a different ssh instance on the
# localhost then we have to format the path as a remote rsync path
# If we're delegating to non-localhost but the
# inventory_hostname host is localhost then we need the module to
# fix up the rsync path to use the controller's public DNS/IP
# instead of "localhost"
# Clear the current definition of these variables as they came from the
# connection to the remote host
# Add the definitions from localhost
# When modifying this function be aware of the tricky convolutions
# your thoughts have to go through:
# In normal ansible, we connect from controller to inventory_hostname
# (playbook's hosts: field) or controller to delegate_to host and run
# a module on one of those hosts.
# So things that are directly related to the core of ansible are in
# terms of that sort of connection that always originate on the
# controller.
# In synchronize we use ansible to connect to either the controller or
# to the delegate_to host and then run rsync which makes its own
# connection from controller to inventory_hostname or delegate_to to
# inventory_hostname.
# That means synchronize needs to have some knowledge of the
# controller to inventory_host/delegate host that ansible typically
# establishes and use those to construct a command line for rsync to
# connect from the inventory_host to the controller/delegate.  The
# challenge for coders is remembering which leg of the trip is
# associated with the conditions that you're checking at any one time.
# We make a copy of the args here because we may fail and be asked to
# retry. If that happens we don't want to pass the munged args through
# to our next invocation. Munged args are single use only.
# Store remote connection type
# Handle docker connection options
# self._connection accounts for delegate_to so
# remote_transport is the transport ansible thought it would need
# between the controller and the delegate_to host or the controller
# and the remote_host if delegate_to isn't set.
# ssh paramiko docker buildah and local are fully supported transports.  Anything
# else only works with delegate_to
# Parameter name needed by the ansible module
# rsync thinks that one end of the connection is localhost and the
# other is the host we're running the task for  (Note: We use
# ansible's delegate_to mechanism to determine which host rsync is
# running on so localhost could be a non-controller machine if
# delegate_to is used)
# dest_is_local tells us if the host rsync runs on is the same as the
# host rsync puts the files on.  This is about *rsync's connection*,
# not about the ansible connection to run the module.
# CHECK FOR NON-DEFAULT SSH PORT
# Set use_delegate if we are going to run rsync on a delegated host
# instead of localhost
# edge case: explicit delegate and dest_host are the same
# so we run rsync on the remote machine targeting its localhost
# (itself)
# If we're delegating to a remote host then we need to use the
# delegate_to settings
# Delegate to localhost as the source of the rsync unless we've been
# told (via delegate_to) that a different host is the source of the
# rsync
# Unlike port, there can be only one shell
# Unlike port, there can be only one executable
# Needed for ansible-core < 2.15
# Override _remote_is_local as an instance attribute specifically for the synchronize use case
# ensuring we set local tmpdir correctly
# SWITCH SRC AND DEST HOST PER MODE
# MUNGE SRC AND DEST PER REMOTE_HOST INFO
# Determine if we need a user@ and a password
# Src and dest rsync "path" handling
# Private key handling
# Use the private_key parameter if passed else use context private_key_file
# use the mode to define src and dest's url
# src is a remote path: <user>@<host>, dest is a local path
# src is a local path, dest is a remote path: <user>@<host>
# Still need to munge paths (to account for roles) even if we aren't
# copying files between hosts
# Allow custom rsync path argument
# backup original become as we are probably about to unset it
# don't escalate for docker. doing --rsync-path with docker exec fails
# and we can switch directly to the user via docker arguments
# If no rsync_path is set, become was originally set, and dest is
# remote then add privilege escalation here.
# TODO: have to add in the rest of the become methods here
# We cannot use privilege escalation on the machine running the
# module.  Instead we run it on the machine rsync is connecting
# to.
# If launching synchronize against docker container
# use rsync_opts to support container to override rsh options
# Replicate what we do in the module argumentspec handling for lists
# run the module and store the result
# (c) 2013-2018, Adam Miller (maxamillion@fedoraproject.org)
# Firewalld is not currently running, permanent-only operations
# Import other required parts of the firewalld API
# In firewalld version 0.7.0 this behavior changed
# type: (firewall.client, tuple, str, bool, bool, bool)
# List of messages that we'll call module.fail_json or module.exit_json
# with.
# Allow for custom messages to be added for certain subclass transaction
# exception handling
# If there are any commonly known errors that we should provide more
# context for to help the users diagnose what's wrong. Handle that here
# Pre-run version checking
# Check for firewalld running
# Copyright (c) 2023 Maxwell G <maxwell@gtmx.me>
# This particular file snippet, and this file snippet only, is based on
# Lib/posixpath.py of cpython
# It is licensed under the PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
# 1. This LICENSE AGREEMENT is between the Python Software Foundation
# ("PSF"), and the Individual or Organization ("Licensee") accessing and
# otherwise using this software ("Python") in source or binary form and
# its associated documentation.
# 2. Subject to the terms and conditions of this License Agreement, PSF hereby
# grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
# analyze, test, perform and/or display publicly, prepare derivative works,
# distribute, and otherwise use Python alone or in any derivative version,
# provided, however, that PSF's License Agreement and PSF's notice of copyright,
# i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
# 2011, 2012, 2013, 2014, 2015 Python Software Foundation; All Rights Reserved"
# are retained in Python alone or in any derivative version prepared by Licensee.
# 3. In the event Licensee prepares a derivative work that is based on
# or incorporates Python or any part thereof, and wants to make
# the derivative work available to others as provided herein, then
# Licensee hereby agrees to include in any such work a brief summary of
# the changes made to Python.
# 4. PSF is making Python available to Licensee on an "AS IS"
# basis.  PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
# IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
# DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
# FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
# INFRINGE ANY THIRD PARTY RIGHTS.
# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
# FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
# A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
# OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
# 6. This License Agreement will automatically terminate upon a material
# breach of its terms and conditions.
# 7. Nothing in this License Agreement shall be deemed to create any
# relationship of agency, partnership, or joint venture between PSF and
# Licensee.  This License Agreement does not grant permission to use PSF
# trademarks or trade name in a trademark sense to endorse or promote
# products or services of Licensee, or any third party.
# 8. By copying, installing or otherwise using Python, Licensee
# agrees to be bound by the terms and conditions of this License
# Agreement.
# path/.. on a different device as path
# path/.. is the same i-node as path
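The two checks named above are the classic `posixpath.ismount` logic this PSF-licensed snippet is based on: a path is a mount point if `path/..` lives on a different device, or if `path/..` is the same i-node as `path` (a filesystem root). A self-contained sketch:

```python
import os
import os.path

def is_mountpoint(path):
    """Detect a mount point the way posixpath.ismount does (sketch)."""
    try:
        s1 = os.lstat(path)
        s2 = os.lstat(os.path.join(path, ".."))
    except OSError:
        return False  # the path doesn't exist or isn't accessible
    if s1.st_dev != s2.st_dev:
        return True  # path/.. on a different device as path
    if s1.st_ino == s2.st_ino:
        return True  # path/.. is the same i-node as path
    return False
```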
# Copyright (c) 2020 Ansible Project
# Common options for ansible_collections.ansible.windows.plugins.module_utils.WebRequest
# Quoting a dict assumes you are quoting a KEY=value pair for MSI arguments. The key can only be
# '[A-Z0-9_\.]' so we don't attempt to quote the key, only the value. If another format is desired then it
# needs to be done manually, i.e. '/KEY:{{ value | ansible.windows.quote }}'.
# The interfaces in this file are meant for use within the ansible.windows collection
# type: (text_type) -> text_type
# https://docs.microsoft.com/en-us/archive/blogs/twistylittlepassagesallalike/everyone-quotes-command-line-arguments-the-wrong-way
# Replace any double quotes in an argument with '\"'.
# We need to double up on any '\' chars that preceded a double quote (now '\"').
# Double up '\' at the end of the argument so it doesn't escape out end quote.
# Finally wrap the entire argument in double quotes now we've escaped the double quotes within.
# https://docs.microsoft.com/en-us/archive/blogs/twistylittlepassagesallalike/everyone-quotes-command-line-arguments-the-wrong-way#a-better-method-of-quoting
# 'file &whoami.exe' would result in 'whoami.exe' being executed and then that output being used as the argument
# instead of the literal string.
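The escaping rules spelled out in the comments above (escape `"` as `\"`, double up backslash runs that precede a quote, double up trailing backslashes, then wrap in quotes) can be sketched like this; the function name is illustrative and this is not the collection's actual implementation.

```python
import re

def quote_windows_arg(s):
    """Quote one argument per the MSVCRT command-line rules (sketch)."""
    if s and not re.search(r'[\s"]', s):
        return s  # nothing that needs quoting
    # Double up backslashes that precede a double quote, then escape the quote.
    s = re.sub(r'(\\*)"', lambda m: m.group(1) * 2 + '\\"', s)
    # Double up backslashes at the end so they don't escape the closing quote.
    s = re.sub(r'(\\+)\Z', lambda m: m.group(1) * 2, s)
    # Finally wrap the whole argument in double quotes.
    return '"%s"' % s
```

Note this covers only argv-style quoting; as the comments warn, cmd.exe metacharacters like `&` need separate treatment.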
# Copyright: (c) 2016, Ansible, inc
# Copyright: (c) 2017, Andrew Saraceni <andrew.saraceni@gmail.com>
# Copyright: (c) 2014, Paul Durivage <paul.durivage@rackspace.com>
# Copyright: (c) 2019, Carson Anderson <rcanderson23@gmail.com>
# Copyright: (c) 2020, Brian Scholer (@briantist)
# Copyright: (c) 2015, Jon Hawkesworth (@jhawkesworth) <figs@unity.demon.co.uk>
# Copyright: (c) 2022, DataDope (@datadope-io)
# Copyright: (c) 2015, Hans-Joachim Kliemeck <git@kliemeck.de>
# Copyright: (c) 2017, Noah Sparks <nsparks@outlook.com>
# Copyright: (c) 2014, Chris Hoffman <choffman@chathamfinancial.com>
# Copyright: (c) 2018, Ripon Banik (@riponbanik)
# Copyright: (c) 2017, Dag Wieers <dag@wieers.com>
# Copyright: (c) 2020 VMware, Inc. All Rights Reserved.
# SPDX-License-Identifier: GPL-3.0-only
# Copyright: (c) 2021 Sebastian Gruber ,dacoso GmbH All Rights Reserved.
# Copyright: (c) 2019, Hitachi ID Systems, Inc.
# Copyright: (c) 2015, Matt Davis <mdavis_ansible@rolpdog.com>
# Copyright: (c) 2014, Trond Hindenes <trond@hindenes.com>, and others
# Copyright: (c) 2015, Adam Keech <akeech@chathamfinancial.com>
# Copyright: (c) 2015, Josh Ludwig <jludwig@chathamfinancial.com>
# Copyright: (c) 2017, Michael Eaton <meaton@iforium.com>
# Copyright: (c) 2016, Ansible Project
# Copyright: (c) 2019, RusoSova
# Copyright: (c) 2015, Phil Schwartz <schwartzmx@gmail.com>
# Copyright: (c) 2015, Trond Hindenes
# Copyright: (c) 2020, Håkon Heggernes Lerring <hakon@lerring.no>
# Copyright: (c) 2023, Jordan Pitlor <jordan@pitlor.dev>
# Copyright: (c) 2015, Trond Hindenes <trond@hindenes.com>, and others
# Copyright: (c) 2017, Red Hat, Inc.
# Copyright: (c) 2016, Red Hat | Ansible
# Copyright: (c) 2014, Paul Durivage <paul.durivage@rackspace.com>, and others
# Copyright: (c) 2015, Corwin Brown <corwin@corwinbrown.com>
# Copyright: (c) 2014, Matt Martz <matt@sivel.net>, and others
# Copyright: (c) 2022, Oleg Galushko (@inorangestylee)
# Copyright: (c) 2019, Prasoon Karunan V (@prasoonkarunan)
# Copyright: (c) 2025, Red Hat, Inc.
# Copyright: (c) 2019, Micah Hunsberger (@mhunsber)
# Copyright: (c) 2018, Micah Hunsberger (@mhunsber)
# Copyright: (c) 2018, Matt Davis <mdavis@ansible.com>
# Defaults are applied in reboot_action so skip adding to kwargs if the input wasn't set (None)
# Setting a lower value can kill PowerShell when sending the shutdown command. Just use the defaults
# if this is the case.
# Satisfy Python 2 which doesn't have typing.
# type: (int) -> str
# The code exists here so we don't have to transfer all this data when kicking off some updates.
# Raw information on all the updates found in the current result. The key is the update_id (GUID).
# Key for the following is the update_id.
# Updates that passed the selection criteria
# Updates that were filtered and the reasons why they were filtered
# Updates that were downloaded and the result
# Updates that were installed and the result
# When running in async the module itself waits for the result and formats the results.
# Build the final results to return to the caller.
# Remove _wait in the invocation args to avoid confusing users
# type: (Dict, Dict, bool, int) -> Dict
# In case we are running with become we need to make sure the module uses the correct dir
# Check that at least 1 update has not already been installed. This
# is to detect an update that may have been rolled back in the last
# reboot or whether WUA failed to report that the update failed.
# A failure could indicate a reboot was required from a previous install or just a faulty WUA. When
# reboot=True we should at least attempt to reboot once before considering it a failure.
# Clear the previous failure flag as the last update was successful.
# If reboot=False, in check mode, not installing, and no further updates were found on the last round,
# then break from the loop as we are done.
# type: (Dict, Dict) -> UpdateResult
# Try our best to cancel the update task on an unknown failure.
# The last thing we should be doing is to cancel the task. This
# stops a process that might still be running and is also used in
# a normal operation to signal the exit status has been received.
# type: (Dict, Optional[Dict], str, Optional[Dict], bool) -> Dict
# WinRM based connections are problematic when the host is under load.
# When polling the output, ignore connection/timeout failures and try
# again in case it was a temporary error. We can only do this during
# the polling stage as that's the only time we can ignore a dropped
# progress message.
# First run through we want to update the invocation value in the final results
# type: (str) -> Dict
# type: (str, Dict, Optional[str]) -> None
# This is very chatty, only display every 25% of the total completion - rest goes to debug.
# FUTURE: Always display once we can do host specific progress updates
# Reset for the install phase
# HRESULT values returned from pwsh are signed, we compare with unsigned ints in Python.
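The signed/unsigned HRESULT mismatch described above can be sketched in a few lines of Python; the helper name and the example value are illustrative, not taken from the module:

```python
def hresult_to_unsigned(value):
    """Convert a signed 32-bit HRESULT (as returned by pwsh) into the
    unsigned form used for comparisons on the Python side."""
    return value & 0xFFFFFFFF

# A Windows Update HRESULT such as 0x80240024 arrives as a negative int
assert hresult_to_unsigned(-2145124316) == 0x80240024
```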
# try the 2.19+ version that can preserve user-set `ansible_managed` first
# accept the extra arg at the call-site and silently discard for older core releases
# strings
# force templar to use AnsibleEnvironment to prevent issues with native types
# https://github.com/ansible/ansible/issues/46169
# Copyright: (c) 2025, Ansible Project
# Replace the script argument with the contents of the local script.
# Restores the invocation back to the original state.
# This was a circular symlink.  So add it as
# encoding the file/dir name with base64 so Windows can unzip a unicode
# filename and get the right name, Windows doesn't handle unicode names
# very well
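The base64 trick for unicode-safe file names can be sketched as a round-trip, assuming the names are UTF-8 encoded; the helper names are hypothetical:

```python
import base64

def encode_zip_name(name):
    """Base64-encode a file/dir name so the Windows side can recover the
    original unicode name after Shell.Application unzips the archive."""
    return base64.b64encode(name.encode("utf-8")).decode("ascii")

def decode_zip_name(encoded):
    """Inverse transform, run on the remote end after extraction."""
    return base64.b64decode(encoded).decode("utf-8")

assert decode_zip_name(encode_zip_name("ünïcode.txt")) == "ünïcode.txt"
```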
# copy the file across to the server
# create local zip file containing all the files and directories that
# need to be copied to the server
# send zip file to remote, file must end in .zip so
# Com Shell.Application works
# run the explode operation of win_copy on remote
# If content is defined make a temp file and write the content into it
# if content comes to us as a dict it should be decoded json.
# We need to encode it back into a string and write it out
# all actions should occur on the remote server, run win_copy module
# find_needle returns a path that may not have a trailing slash on a
# directory so we need to find that out first and append at the end
# explicitly mark it so (note - win_copy module relies on this).
# Source is a file, add details to source_files dict
# check if dest ends with / or \ and append source filename to dest
# replace \\ with / so we can use os.path to get the filename or dirname
# find out the files/directories/symlinks that we need to copy to the server
# src is not required for query, will fail path validation if src has unix allowed chars
# we only need to copy 1 file, don't mess around with zips
# either multiple files or directories need to be copied, compress
# to a zip and 'explode' the zip on the server
# TODO: handle symlinks
# no operations need to occur
# remove the content tmp file and remote tmp file if it was created
# in this case, we'll make the filters return error messages (see bottom)
# IP addresses and networks
# This filter is designed to do ipv6 conversion in required format
# Copyright 2023 Red Hat
# spanning_cidr needs at least two values
# Need to install python's netaddr for these filters to work
# normalize value and test variables into an ipaddr
# get first and last addresses as integers to compare value and test; this also catches the case when value is a /32
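The first/last integer comparison described here can be sketched with the stdlib ipaddress module (the real filters use netaddr; the helper name is hypothetical):

```python
import ipaddress

def value_in_subnet(value, test):
    """Compare value and test by their first and last addresses as
    integers; a /32 collapses to first == last and works the same way."""
    v = ipaddress.ip_network(value, strict=False)
    t = ipaddress.ip_network(test, strict=False)
    return (int(t.network_address) <= int(v.network_address)
            and int(v.broadcast_address) <= int(t.broadcast_address))

assert value_in_subnet("192.168.1.5/32", "192.168.1.0/24") is True
assert value_in_subnet("192.168.2.0/24", "192.168.1.0/24") is False
```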
# ip.version == 4:
# This filter is designed to fetch first or last bits of IPV6 address
# normalize network variable into an ipaddr
# create an empty list to fill and return
# normalize address variables into an ipaddr
# subnet index out of range
# This filter is designed to do simple IP math/arithmetic
# Note, this file can only be used on the control node
# where ansible is installed
# limit imports to filter and lookup plugins
# JinjaPluginIntercept.get() raises an exception instead of returning None
# in ansible-core 2.15+
# for k in list(data.keys()):
# check if plugin configuration option passed as kwargs
# valid for lookup, filter, test plugins or pass through
# variables if supported by the module.
# check if plugin configuration option passed in task vars eg.
# check if plugin configuration option was passed as environment eg.
# env:
# - name: ANSIBLE_VALIDATE_JSONSCHEMA_DRAFT
# ---- IP address and network query helpers ----
# We don't have any query to process, so just check what type the user
# expects, and return the IP address in a correct format
# /31 networks in netaddr have no broadcast address
# For the first IPv6 address in a network, netaddr will return it as a network address, despite it being a valid host address.
# Does it make sense to raise an error
# fallback to support netaddr < 1.0.0
# attempt to emulate IPAddress.is_global() if it's not available
# note that there still might be some behavior differences (e.g. exceptions)
# Administrative Multicast
# Multicast test network
# 6to4 anycast relays (RFC 3068)
# deprecate
# 'ip_wildcard': _ip_wildcard_query, built then could not think of use case
# Check if value is a list and parse each element
# TODO: Remove this check in a major version release of collection with porting guide
# TODO: and raise exception commented out below
# Check if value is a number and convert it to an IP address
# We don't know what IP version to assume, so let's check IPv4 first,
# then IPv6
# IPv4 didn't work the first time, so it definitely has to be IPv6
# The value is too big for IPv6. Are you a nanobot?
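The IPv4-first, IPv6-second fallback described above can be sketched with the stdlib ipaddress module (the real filters use netaddr; this is only an illustration):

```python
import ipaddress

def int_to_ip(value):
    """Interpret a bare integer as an IP address: try IPv4 first, then
    IPv6, and give up when the number is too big even for IPv6."""
    try:
        return str(ipaddress.IPv4Address(value))
    except ipaddress.AddressValueError:
        pass
    try:
        return str(ipaddress.IPv6Address(value))
    except ipaddress.AddressValueError:
        return None  # too big for IPv6

assert int_to_ip(3232235777) == "192.168.1.1"
assert int_to_ip(2 ** 128) is None
```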
# We got an IP address, let's mark it as such
# value has not been recognized, check if it's a valid IP string
# value is a valid IP string, check if user specified
# CIDR prefix or just an IP address, this will indicate default
# output format
# value hasn't been recognized, maybe it's a numerical CIDR?
# It's not numerical CIDR, give up
# It is something, so let's try and build a CIDR from the parts
# It's not a valid IPv4 CIDR
# It's not a valid IPv6 CIDR. Give up.
# We have a valid CIDR, so let's write it in correct format
# We have a query string but it's not in the known query types. Check if
# that string is a valid subnet, if so, we can check later if given IP
# address/network is inside that specific subnet
# ?? 6to4 and link-local were True here before.  Should they still?
# This code checks if value matches the IP version the user wants, i.e. if
# it's any version ("ipaddr()"), IPv4 ("ipv4()") or IPv6 ("ipv6()")
# If version does not match, return False
# ---- HWaddr query helpers ----
# ---- HWaddr / MAC address filters ----
# the nxos | xml includes an odd garbage line at the end, so remove it
# PY2 compatibility for JSONDecodeError
# All available schema versions with the format_check and validator class names.
# Older jsonschema version
# Either no draft was specified or specified draft has no validator class
# in installed jsonschema version. Do autodetection instead.
# TODO: Remove when Python 3.6 support is dropped.
# On jsonschema<4.5, there is no connection between a validator and the correct format checker.
# So we iterate through our known list of validators and if one matches the current class
# we use the format_checker from that validator.
# use C version if possible for speedup
# IEEE EUI-48 upper and lower, common unix
# Cisco triple hextet
# Bare
# Not all parsers use a template, in the case a parser provides
# an extension, provide it the template path
# Not all parsers require the template contents
# when true, provide the template contents
# ensure the response returned to the controller
# contains only native types, nothing unique to the parser
# found a '.', move to the next character
# found a '[', pop until ']' and then get the next
# make numbers numbers
# or strip the quotes
# pop'ed past the end of the queue
# so add the final field
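The tokenizer these comments describe (dots split fields, brackets pull out an index, numbers become numbers, quotes get stripped) can be sketched as follows; the helper name is hypothetical:

```python
def parse_key_path(path):
    """Split a key path like 'a.b[0].c' into parts, turning numeric
    bracket indices into ints and stripping quotes from quoted keys."""
    fields = []
    buf = ""
    chars = list(path)
    while chars:
        ch = chars.pop(0)
        if ch == ".":
            # found a '.', flush the current field and move on
            if buf:
                fields.append(buf)
                buf = ""
        elif ch == "[":
            # found a '[', pop until ']' and then get the next
            if buf:
                fields.append(buf)
                buf = ""
            inner = ""
            while chars and chars[0] != "]":
                inner += chars.pop(0)
            if chars:
                chars.pop(0)  # discard the ']'
            if inner.isdigit():
                fields.append(int(inner))          # make numbers numbers
            else:
                fields.append(inner.strip("'\""))  # or strip the quotes
        else:
            buf += ch
    if buf:
        # pop'ed past the end of the queue, so add the final field
        fields.append(buf)
    return fields

assert parse_key_path("a.b[0].c") == ["a", "b", 0, "c"]
assert parse_key_path("x['key']") == ["x", "key"]
```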
# All keys should be identical
# TODO: Update this to point to functionality being exposed in 2.11
# ansible-base 2.11 should expose argspec validation outside of the
# ansiblemodule class
# TODO: Support extends_documentation_fragment
# Copyright: (c) 2019 Ansible, Inc
# Parse ansible_ssh_common_args, specifically looking for ProxyCommand
# _split_ssh_args split ProxyCommand from the command itself
# ProxyCommand and the command itself are a single string
# this abruptly closes the connection when
# scp.get fails only when the file is not there
# it works fine if the file is actually present
# 2017 Red Hat Inc.
# only close the connection if it's connected.
# Only insert "ciphers" kwarg for ansible-core versions >= 2.14.0.
# Emit warning when "ansible_httpapi_ciphers" is set but not supported
# Avoid modifying passed-in headers
# The default behavior, retry indefinitely until timeout.
# Try to assign a new auth token if one is given
# If network_os is not specified then set the network os to auto
# This will be used to trigger the use of guess_network_os when connecting.
# to_ele operates on native strings
# TO-DO: Add logic to scan ssh_* args to read ProxyCommand
# Try to guess the network_os if the network_os is set to auto
# If we have tried to detect the network_os but were unable to, i.e. network_os is still 'auto',
# then use default as the network_os
# Network os not discovered. Set it to default
# sock is only supported by ncclient >= 0.6.10, and will error if
# included on older versions. We check the version in
# _get_proxy_command, so if this returns a value, the version is
# fine and we have something to send. Otherwise, don't even send
# the option to support older versions of ncclient
# Managing prompt context
# Support autodetection of supported library
# NOTE: This MUST be paramiko or things will break
# To maintain backward compatibility
# not-accessible-code alert; can be taken out around 01-01-2027,
# when connection local is removed
# Retain old look_for_keys behaviour, but only if not set
# This actually can't be overridden yet without changes in ansible-core
# TODO: Uncomment when appropriate
# self.queue_message(
# TODO: This works, but is not really ideal. We would rather use
# set cli prompt context at the start of new task run only
# if data is still received on channel it indicates the prompt string
# is wrongly matched in between response chunks, continue to read
# remaining response.
# restart command_timeout timer
# reset socket timeout to global timeout
# when a channel stream is closed, received data will be empty
# check again even when handled, if same prompt repeats in next window
# (like in the case of a wrong enable password, etc) indicates
# value of answer is wrong, report this as error.
# We can't exit here, as we need to drain the buffer in case
# the error isn't fatal, and will be using the buffer again
# TODO: Should be ConnectionError when pylibssh drops Python 2 support
# Socket has closed
# set terminal regex values for command prompt and errors in response
# try cache first
# invalidate the existing cache
# populate cache
# if prompt_retry_check is enabled to check if same prompt is
# repeated don't send answer again.
# Force a fresh connect if for some reason we have connected before.
# TO-DO: support jsonfile or other modes of caching with
# AnsiblePlugin base class in Ansible 2.9 does not have has_option() method.
# TO-DO: use has_option() when we drop 2.9 support.
# (c) 2022 Ansible Project
# noqa: F401  # pylint: disable=unused-import
# Copyright (c) 2018 Cisco and/or its affiliates.
# enable is implemented inside the network connection plugins
# Needed to satisfy PluginLoader's required_base_class
# Sort and remove duplicates
# Single VLAN
# Run of 2 VLANs
# Run of 3 or more VLANs
# First line (" switchport trunk allowed vlan ")
# Subsequent lines (" switchport trunk allowed vlan add ")
# Remove trailing orphan commas
# Sometimes text wraps to next line, but there are no remaining VLANs
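The VLAN run-length rules above (single VLANs stay as-is, runs of two are listed individually, runs of three or more collapse to a range) can be sketched as follows; the helper name is hypothetical:

```python
def vlan_ranges(vlans):
    """Collapse a list of VLAN IDs into IOS-style range tokens."""
    vlans = sorted(set(vlans))  # sort and remove duplicates
    tokens = []
    i = 0
    while i < len(vlans):
        j = i
        while j + 1 < len(vlans) and vlans[j + 1] == vlans[j] + 1:
            j += 1
        run = j - i + 1
        if run == 1:
            tokens.append(str(vlans[i]))                    # single VLAN
        elif run == 2:
            tokens.extend([str(vlans[i]), str(vlans[j])])   # run of 2 VLANs
        else:
            tokens.append("%d-%d" % (vlans[i], vlans[j]))   # run of 3 or more
        i = j + 1
    return ",".join(tokens)

assert vlan_ranges([1, 2, 3, 5, 7, 8]) == "1-3,5,7,8"
```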
# for source and destination address
# forces all criteria to match
# holds final acl data after removal of aces
# holds removed acl information
# ["acls"]
# ipv4 or v6
# filter by acl_name ignores whole acl entries i.e. all aces
# iterate on ace entries
# check matching criteria and remove from final dict
# removes one ace entry per acl
# for remove all
# store filtered aces
# store removed aces
# check if xpath ends with attribute.
# If yes set attribute key/value dict to param value in case attribute matches
# else if it is a normal xpath assign matched element text value.
# Vendored copy of https://github.com/python/cpython/blob/62d55a4d11fe25e8981e27e68ba080ab47c3f590/Lib/telnetlib.py
# Python Software Foundation License 2.0 (see LICENSES/PSF-2.0.txt or https://opensource.org/licenses/Python-2.0)
# SPDX-License-Identifier: PSF-2.0
# Imported modules
# Tunable parameters
# Telnet protocol defaults
# Telnet protocol characters (don't change)
# "Interpret As Command"
# Telnet protocol options code (don't change)
# These ones all come from arpa/telnet.h
# prepare to reconnect
# approximate message size
# give status
# timing mark
# remote controlled transmission and echo
# negotiate about output line width
# negotiate about output page size
# negotiate about CR disposition
# negotiate about horizontal tabstops
# negotiate about horizontal tab disposition
# negotiate about formfeed disposition
# negotiate about vertical tab stops
# negotiate about vertical tab disposition
# negotiate about output LF disposition
# extended ascii character set
# force logout
# byte macro
# data entry terminal
# supdup protocol
# supdup output
# send location
# terminal type
# end or record
# TACACS user identification
# output marking
# terminal location number
# 3270 regime
# X.3 PAD
# window size
# terminal speed
# remote flow control
# Linemode option
# X Display Location
# Old - Environment variables
# Authenticate
# Encryption option
# New - Environment variables
# the following ones come from
# http://www.iana.org/assignments/telnet-options
# Unfortunately, that document does not assign identifiers
# to all of them, so we are making them up
# TN3270E
# XAUTH
# CHARSET
# Telnet Remote Serial Port
# Com Port Control Option
# Telnet Suppress Local Echo
# Telnet Start TLS
# KERMIT
# SEND-URL
# FORWARD_X
# TELOPT PRAGMA LOGON
# TELOPT SSPI LOGON
# TELOPT PRAGMA HEARTBEAT
# Extended-Options-List
# Buffer for IAC sequence.
# flag for SB and SE sequence.
# 'IAC: IAC CMD [OPTION only for WILL/WONT/DO/DONT]'
# SB ... SE start.
# Callback is supposed to look into
# the sbdataq
# We can't offer automatic processing of
# suboptions. Alas, we should not get any
# unless we did a WILL/DO before.
# raised by self.rawq_getchar()
# Reset on EOF
# The buffer size should be fairly small so as to avoid quadratic
# behavior in process_rawq() above
# res = self._check_reqs()
# if res.get("errors"):
# this is a virtual module that is entirely implemented server side
# (c) 2024, Ansible by Red Hat, inc
# Method not found
# (c) 2018, Ansible by Red Hat, inc
# (c) 2016, Leandro Lisboa Penz <lpenz at lpenz.org>
# to maintain backward compatibility for ansible 2.9 which
# defaults to "subtree" filter type
# identify target datastore
# Netconf server capability validation against input options
# lock is requested (always/if-support) and supported => lets do it
# lock is requested (always/if-supported) but not supported => issue warning
# check for format of type json/xml/xpath
# assume the content already contains an XML payload
# trying if content contains dict
# explicit close-session is not allowed, as this would make the next
# NETCONF operation to the same host fail
# If source is None, NETCONF <get> operation is issued, reading config/state data
# from the running datastore. The python expression "(source or 'running')" results
# in the value of source (if not None) or the value 'running' (if source is None).
# FIXME, default to play_context?
# attempt to run using dexec
# find and load the module
# not using AnsibleModule, return to normal run (eg eos_bgp)
# patch and update the module
# execute the module, collect result
# Do not overwrite the destination if the contents match.
# Create a template search path in the following order:
# [working_path, self_role_path, dependent_role_paths, dirname(source)]
# log early about dexec
# disable dexec when not PY3
# disable dexec when running async
# 2.10
# 2.9
# build an AnsibleModule that doesn't load params
# update the task args w/ all the magic vars
# set the params of the ansible module cause we're not using stdin
# use a copy so the module doesn't change the original task args
# give the module our revised AnsibleModule
# preserve stdout/stderr to swap and restore later, create private buffers to capture
# temporarily swap stdout/stderr with private buffers
# allow unhandled module exceptions to fly; handled by TE to preserve as much error detail as possible
# module exited cleanly
# restore stdout/stderr
# if 2.19.1+, anything using fail_json/exit_json from the patched module should have recorded _raw_result
# if _raw_result is not available, results should be on stdout/stderr
# parse the response
# Clean up the response like action _execute_module
# split stdout/stderr into lines if needed
# Copyright 2018 Red Hat Inc.
# Copyright 2021 Red Hat Inc.
# get list of supported resource modules for given os_name
# get resource module facts for given host
# push resource module configuration
# handle short name redirection not working for ansible-2.9
# parse module docs to check for 'config' and 'state' options to identify it as resource module
# (c) 2018, Ansible Inc,
# It is supported only with network_cli
# Get destination file if specified
# Get proto
# Get mode if set
# Now that src has resolved, write the file to disk in the current directory for scp
# Cleanup tmp file expanded with ansible vars
# IOSXR sometimes closes socket prematurely after completion
# of file transfer
# Simplified BSD License (see LICENSES/BSD-2-Clause.txt or https://opensource.org/licenses/BSD-2-Clause)
# SPDX-License-Identifier: BSD-2-Clause
# (c) 2022 Red Hat, Inc.
# (c) 2018 Red Hat, Inc.
# remove attributes
# Copyright (c) 2015 Peter Sprygada, <psprygada@ansible.com>
# No socket_path, connection most likely local.
# handle top level commands
# handle sub level commands
# block extracted from other does not have all parents
# but the last one. In case of multiple parents we need
# to add additional parents.
# generate a list of ConfigLines that aren't in other
# If parent of current line not added in expanded list flag it
# to be added later on
# check if parent of current line is already added, if added don't
# add again
# global config command
# handle ignore lines
# add parent to config
# add child objects
# check if child already exists
# Note: Workaround for ncclient 0.5.3
# Networking tools for network modules only
# forward compatibility for Python 3
# Get fallback if defined, and valid
# Get default if defined, otherwise set to None
# Coerce authorize to provider if a string has somehow snuck in.
# wipe out the commands list to avoid issues if additional
# commands are executed later
# default subset should always be returned with legacy facts subsets
# (c) 2021 Red Hat Inc.
# pylint: disable=R0902
# Error out if empty config is passed for following states
# TODO: Remove this import when we no longer support ansible < 2.11
# backward compatibility for modules, in which, module is not passed
# to the NetworkTemplate
# GCP doc fragment.
# support GCP_* env variables for some parameters
# Extract name from the first index
# there was an error listing parameter versions
# set version to the latest version because
# we can't be sure that "latest" is always going
# to be set if secret versions get disabled
# see https://issuetracker.google.com/issues/286489671
# generic server error
# generic client request error
# all other possible errors
# there was an error listing secret versions
# Copyright (c) 2025 Red Hat
# GNU General Public License v3.0+ https://www.gnu.org/licenses/gpl-3.0.txt
# I had to duplicate (almost?) all of the documentation found in the
# ansible.plugins.connection.ssh plugin, because ansible-doc and ansible-test sanity
# work by looking at the lexical structure of the code. I had initially done:
# 1. load ssh.py DOCUMENTATION string into a yaml object
# 2. modify the yaml object to change defaults / add my options
# 3. set this plugin's DOCUMENTATION to the yaml.dump() of the modified object
# but that doesn't work with how the AST evaluation is done.
# here are the changes from upstream:
# 1. Changed default private_key_file default to ~/.ssh/google_compute_engine
# 2. Make host_key_checking default to False
# 3. Added known_hosts_file option pointing to ~/.ssh/google_compute_known_hosts
# 4. Added missing scp_if_ssh option to fix compatibility issues
# start-iap-tunnel prints 2 lines:
# - Picking local unused port [$PORT].
# - Testing if tunnel connection works.
# and only when the terminal is a pty, a 3rd line:
# - Listening on port [$PORT].
# The last line only displayed after the tunnel has been tested,
# that's why we use a PTY for the subprocess
# pty is closed
# no need to monitor if already up
# wait up to 5 seconds to terminate IAP
# joining thread back should be quick
# If the gcloud binary isn't found/configured, bail out immediately
# this shouldn't happen, but still.
# override port with the random IAP port
# read path to the supplied known hosts file
# have to trick SSH to connect to localhost instead of the instances
# trick
# avoid multiple entries
# as defined in opts
# prepend our generated ssh config to all ssh_args if not already present
# Terminate IAP
# remove ssh config
# (c) 2019, Eric Anderson <eric.sysmin@gmail.com>
# Usage:
# Mocking a module to reuse module_utils
# If no metadata, 'items' will be blank.
# We want the metadata hash overridden anyway for consistency.
# Get public IP if exists
# Fallback: Get private IP
# For multiple queries, all queries should have ()
# If not found, return nothing.
# If no content, return nothing.
# example k would be a zone or region name
# example v would be { "disks" : [], "otherkey" : "..." }
# If zones specified, only store those zones' data
# setup parameters as expected by 'fake module class' to reuse module_utils w/o changing the API
# Cache logic
# Fetch all instances
# Copyright (C) 2017 Google
# TODO: capacity_scaler does some value normalization
# server-side, so there needs to be a way to do proper
# value comparison.
# Remove all output-only from response.
# req = GcpRequest(request_vals)
# res = GcpRequest(response_vals)
# import epdb; epdb.serve()
# Remove unnecessary properties from the response.
# This is for doing comparisons with Ansible's current parameters.
# Transform module list of instances (dicts of instance responses) into a list of selfLinks.
# Find all instances to add and add them
# Find all instances to remove and remove them
# Transform instance list into a list of selfLinks for diffing with module parameters
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt
# or https://www.gnu.org/licenses/gpl-3.0.txt)
# for decoding and validating parameters
# build the payload
# validate create
# filter by only enabled parameter version
# probably a code error
# base64 decode the value
# if parameter not exist
# doesn't exist, must create
# create a new parameter
# specified present and version is provided but value is not provided
# specified present and version is not provided
# that no parameter could be created without a version
# specified present but no value
# that no parameter version could be created without a value to encrypt
# that no parameter could be created without a value to encrypt
# parameter and parameter version both exist
# check if the value is the same
# if not, delete the version and create new one
# if the value is the same, do nothing
# Delete existing version and create new one
# pop value data if return_value == false
# Short names are given (and expected) by the API
# but are returned as full names.
# Sets this version as default.
# Need to set triggerHttps to {} if boolean true.
# If file exists, we're doing a no-op or deleting the key.
# If file exists and we should delete the file, delete it.
# Create the file if present state and no current file.
# Not returning any information about the key because that information should
# end up in logs.
# return_if_object not used in any context where 404 means error.
# SQL only: return on 403 if not exist
# shared_secret is returned with stars instead of the
# actual secret
# transport can be either the request or response objects
# Google Container Engine API has its own layout for the create method,
# defined like this:
# Format the request to match the expected input by the API
# Deletes the default node pool on default creation.
# GcpRequest handles unicode text handling
# `config` is a useful parameter for declarative syntax, but
# is not a part of the GCP API
# for decoding and validating secrets
# if this occurs, there are no available secret versions
# handle the corner case that we tried to delete
# a secret version that doesn't exist
# create secret is a create call + an add version call
# filter by only enabled secrets
# technically we're destroying the version
# delete secret does not take "latest" as a default version
# get the latest version if it doesn't exist in the request
# limited support for parameters described in the "Secret" resource
# in order to simplify and deploy primary use cases
# expectation is customers needing to support additional capabilities
# in the SecretPayload will do so outside of Ansible.
# ref: https://cloud.google.com/secret-manager/docs/reference/rest/v1/projects.secrets#Secret
# nothing came back, so the secret doesn't exist
# create a new secret
# fail, let the user know
# that no secret could be created without a value to encrypt
# secret is absent, success
# delete the secret version (latest if no version is specified)
# delete the secret
# check to see if the values are the same, and update if needed
# Update secret
# pop value data if return_value == false
# Fetch current SOA. We need the last SOA so we can increment its serial
# Create a clone of the SOA record so we can update it
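A minimal sketch of cloning an SOA record and bumping its serial, assuming a standard SOA rrdata string layout where the serial is the third whitespace-separated field; the helper name is hypothetical:

```python
def increment_soa_serial(rrdata):
    """Return a copy of an SOA rrdata string with its serial (the third
    whitespace-separated field) incremented by one."""
    parts = rrdata.split()
    parts[2] = str(int(parts[2]) + 1)
    return " ".join(parts)

soa = "ns1.example.com. admin.example.com. 5 21600 3600 259200 300"
assert increment_soa_serial(soa).split()[2] == "6"
```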
# TODO(nelsonjr): Merge and delete this code once async operation
# declared in api.yaml
# Check if files exist.
# Upload
# Mask the fact healthChecks array is actually a single object of type
# HttpHealthCheck.
# Google Compute Engine API defines healthChecks as a list but it can only
# take [0, 1] elements. To make it simpler to declare we'll map that to a
# single object and encode/decode as appropriate.
# Mask healthChecks into a single element.
# @see encode_request for details
# Map healthChecks[0] => healthCheck
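The encode/decode masking described above (a one-element healthChecks list exposed as a single healthCheck) can be sketched as follows; the helper names are hypothetical:

```python
def encode_health_check(request):
    """Wrap the single healthCheck object back into the one-element
    healthChecks list the API expects."""
    request = dict(request)
    if "healthCheck" in request:
        request["healthChecks"] = [request.pop("healthCheck")]
    return request

def decode_health_check(response):
    """Mask healthChecks[0] => healthCheck for simpler module params."""
    response = dict(response)
    checks = response.pop("healthChecks", None)
    if checks:
        response["healthCheck"] = checks[0]
    return response

assert encode_health_check({"healthCheck": "hc1"}) == {"healthChecks": ["hc1"]}
assert decode_health_check({"healthChecks": ["hc1"]}) == {"healthCheck": "hc1"}
```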
# catches an edge case specific to IAM roles where the role not
# existing returns 400.
# fetch all non-boot (i.e. additional) disks to attach
# but discard local disks (if defined) because they can
# only be attached to instances at creation time anyway
# TODO(alexstephen): Implement updating metadata on existing resources.
# Expose instance 'metadata' as a simple name/value pair hash. However the API
# defines metadata as a NestedObject with the following layout:
# metadata {
# Map metadata.items[]{key:,value:} => metadata[key]=value
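The items[]{key:, value:} => metadata[key]=value mapping can be sketched as a pair of transforms; the helper names are hypothetical:

```python
def metadata_decode(metadata):
    """Flatten the GCE API's metadata NestedObject
    ({'items': [{'key': k, 'value': v}, ...]}) into a plain dict."""
    return {item["key"]: item["value"] for item in metadata.get("items", [])}

def metadata_encode(simple):
    """Inverse transform: plain dict back into the API's items[] layout."""
    return {"items": [{"key": k, "value": v} for k, v in simple.items()]}

api_form = {"items": [{"key": "startup-script", "value": "echo hi"}]}
assert metadata_decode(api_form) == {"startup-script": "echo hi"}
assert metadata_encode({"a": "1"}) == {"items": [{"key": "a", "value": "1"}]}
```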
# The create response includes the certificate, but it's not usable until
# the operation completes. The create response is also the only place the
# private key is available, so return the newly created resource directly.
# Copyright (c), Google Inc, 2017
# Blank dictionaries should return None or GCP API may complain.
# Handles the replacement of dicts with values -> the needed value for GCP API
# Handles all authentication and HTTP sessions for GCP API calls.
# The following methods fully mimic the requests API and should be used.
# Only log the message to avoid logging any sensitive info.
# This class does difference checking according to a set of GCP-specific rules.
# This will be primarily used for checking dictionaries.
# In an equivalence check, the left-hand dictionary will be the request and
# the right-hand side will be the response.
# Rules:
# Extra keys in response will be ignored.
# Ordering of lists does not matter.
# Returns the difference between a request + response.
# While this is used under the hood for __eq__ and __ne__,
# it is useful for debugging.
# Remove all empty values from difference.
# Takes in two lists and compares them.
# All things in the list should be identical (even if a dictionary)
# Have to convert each thing over to unicode.
# Python doesn't handle equality checks between unicode + non-unicode well.
# We have to compare each thing in the request to every other thing
# in the response.
# This is because the request value will be a subset of the response value.
# The assumption is that these lists will be small enough that it won't
# be a performance burden.
# Looking for a None value here.
# Compare two values of arbitrary types.
# If a None is found, a difference does not exist.
# Only differing values matter.
# Can assume non-None types at this point.
# Always use to_text values to avoid unicode issues.
# to_text may throw UnicodeErrors.
# These errors shouldn't crash Ansible and should be hidden.
# Compare two boolean values.
# Both True
# Value1 True, resp_value 'true'
# Both False
# Value1 False, resp_value 'false'
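The four boolean cases above (True vs 'true', False vs 'false') can be sketched as a small diff helper that returns None when no difference exists; the helper name is hypothetical:

```python
def compare_boolean(req_value, resp_value):
    """Return None when the values agree (treating 'true'/'false' strings
    as their boolean equivalents), otherwise return the differing value."""
    if req_value and resp_value in (True, "true"):
        return None  # both True
    if not req_value and resp_value in (False, "false"):
        return None  # both False
    return resp_value  # only differing values matter

assert compare_boolean(True, "true") is None
assert compare_boolean(False, "false") is None
assert compare_boolean(True, False) is False
```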
# Python (2 esp.) doesn't do comparisons between unicode + non-unicode well.
# This leads to a lot of false positives when diffing values.
# The Ansible to_text() function is meant to get all strings
# into a standard format.
# Register an HTTP function with the Functions Framework
# Your code here
# Return an HTTP response
# Copyright: (c) 2017, Simon Dodsley <simon@purestorage.com>
# Standard Pure Storage documentation fragment
# Documentation fragment for FlashArray
# (c) 2019, Simon Dodsley (simon@purestorage.com)
# These are legacy values. Return 0 for backwards compatibility
# Add additional SMI-S section to help with formatting
# issues caused by `-` in the dict name.
# there are currently no rules for autodir policies
# Provide system as this matches the old naming convention
# Backwards compatibility
# resource_reference
# (c) 2021, Simon Dodsley (simon@purestorage.com)
# Copyright: (c) 2017, Simon Dodsley (simon@purestorage.com)
# (c) 2020, Simon Dodsley (simon@purestorage.com)
# (c) 2018, Simon Dodsley (simon@purestorage.com)
# (c) 2024, Simon Dodsley (simon@purestorage.com)
# Check for special case of deleting a system-defined role.
# Here we have to just blank out the group and group_base fields
# check for system-defined role and update it instead of creating it
# FlashBlade API tokens start with "T-" so use that to differentiate
# fleet member platform type
# Copyright: (c) 2020, Simon Dodsley (simon@purestorage.com)
# (c) 2022, Simon Dodsley (simon@purestorage.com)
# Time periods in micro-seconds
# Initialize at default value
# api_version = array.get_rest_version()
# Until get_controller has context_names we can check against a target system
# so CBS can't be supported for Fusion until 6.8.4?
# if LooseVersion(CONTEXT_API_VERSION) <= LooseVersion(api_version):
# current_realm = ""
# if "::" in module.params["name"]:
# Initialize res
# If no context is provided set the context to the local array name
# set banner if empty value or value differs
# clear banner if it has a value
# quota
# Set a new keep-for after recovery if requested
# (c) 2023, Simon Dodsley (simon@purestorage.com)
# Initialize for pylint
# As we may be on a single controller device, only check for the ct0 version of the interface
# Modify FC Interface settings
# Modify ETH Interface settings
# CBS can't be supported for Fusion
# Until get_arrays_ntp_test has context_names we can check against a target system
# test_ntp is not supported with context
# Must be a HEX string greater than 20 characters
# Fail if context is set as not supported
# 2018, Simon Dodsley (simon@purestorage.com)
# Currently only 1 SMTP server is configurable
# (c) 2025, Simon Dodsley (simon@purestorage.com)
# Start the workload calculation for the preset being used
# Wait for the workload calculation to complete
# Replace any defined placement with the result from the recommendation
# Update preset name with fleet prefix
# (c) 2017, Simon Dodsley (simon@purestorage.com)
# We have a host PG
# We have a hostgroup PG
# First check if there are any volumes in the host groups
# Second check for host specific volumes
# We have a volume PG
# Copyright: (c) 2024, Simon Dodsley (simon@purestorage.com)
# Copyright (c) 2024 Simon Dodsley, <simon@purestorage.com>
# Copyright (c), Simon Dodsley <simon@purestorage.com>,2017
# Documentation fragment for FlashBlade
# TODO: Check SMB mode.
# If mode is SMB adapter only allow nfs
# Only allow cifs or HOST if SMB mode is native
# REST 1 does not support fan-out for replication
# REST 2 has a limit which we can check
# Demotion only allowed on filesystems in a replica-link
# Need to create the policy with its first rule
# Create a new rule for the policy
# Only currently allowed option is:
# GET, PUT, HEAD, POST, DELETE
# (c) 2024, Simon Dodsley (simon@purestorage.com)
# Remove duplicates
# To make the bucket truly public we have to create a bucket access policy
# and rule
# From REST 2.12 classic is no longer the default mode
# Special case as we have changed 'target' to be a string not a list of one string
# so this provides backwards compatibility
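That compatibility shim can be sketched as a small normalizer (the function name is illustrative):

```python
def normalize_target(target):
    # 'target' used to be a list containing a single string; accept
    # both the old one-element-list form and the new plain-string form.
    if isinstance(target, list):
        if len(target) != 1:
            raise ValueError("only a single target is supported")
        return target[0]
    return target
```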
# User Create adds the pure:policy/full-access policy by default
# If we are specifying a list then remove this default value
# Now that we have enabled the DS, let's make sure there aren't any new updates...
# GNU General Public License v3.0+ (see COPYING or
# https://www.gnu.org/licenses/gpl-3.0.txt)
# This section is just for REST 2.x features
# skip processing buckets marked as destroyed
# Calls for data only available from Purity//FB 3.2 and higher
# for Python < 2.7
# Could not find configuration-set in reply, perhaps device does not support it?
# this is done to filter out `delete ...` statements which map to
# nothing in the config as that will cause an exception to be raised
# update operations
# deprecated replace in Ansible 2.3
# config operations
# confirm a previous commit
# if display format is not mentioned in command, add the display format
# from the modules params
# (c) 2019, Ansible by Red Hat, inc
# if key does exist, do a type check on it to validate it
# We need to pass in the path to the ssh_config file when guessing
# the network_os so that a jumphost is correctly used if defined
# Due to issue in ncclient commit() method for Juniper (https://github.com/ncclient/ncclient/issues/238)
# below commit() is a workaround which builds a raw `commit-configuration` xml with required tags and uses
# ncclient generic rpc() method to execute rpc on remote host.
# Remove below method after the issue in ncclient is fixed.
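The raw `commit-configuration` payload in that workaround can be sketched with the stdlib XML builder; with ncclient it would then be sent through the generic `rpc()` call rather than the broken `commit()` helper (tag names follow the Junos RPC, the function itself is illustrative):

```python
import xml.etree.ElementTree as ET

def build_commit_rpc(confirmed=False, confirm_timeout=None, comment=None):
    # Build the <commit-configuration> element with the required tags.
    rpc = ET.Element("commit-configuration")
    if confirmed:
        ET.SubElement(rpc, "confirmed")
        if confirm_timeout is not None:
            ET.SubElement(rpc, "confirm-timeout").text = str(confirm_timeout)
    if comment:
        ET.SubElement(rpc, "log").text = comment
    return ET.tostring(rpc)
```

The returned bytes would then go to the remote host via ncclient's generic `rpc()` method.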
# TODO: Remove below code after ansible minimal is cut out
# for legacy reasons provider value is required for junos_facts(optional) and junos_package
# modules as it uses junos_eznc library to connect to remote host
# (c) 2017 Red Hat, Inc.
# if warning is received from device diff is empty.
# build xml subtree
# operation 'delete' is added as element attribute
# only if it is key or leaf only node
# convert param value to device specific value
# eg: top = 'system/syslog/file'
# <file>
# </file>
# if value of tag_only node is false, delete the node
# Add value of leaf node if required while deleting.
# in some cases if value is present while deleting, it
# can result in error, hence the check
# set replace attribute at parent node
# set active/inactive at parent node
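The subtree-building steps above (split a path like `system/syslog/file` into nested elements, then mark leaf/key nodes with the `operation="delete"` attribute) can be sketched as follows; the helper is illustrative, not the module's real code:

```python
import xml.etree.ElementTree as ET

def build_subtree(top, leaves=None, delete=False):
    # top is a '/'-separated path, e.g. top = 'system/syslog/file'
    parts = top.split("/")
    root = node = ET.Element(parts[0])
    for name in parts[1:]:
        node = ET.SubElement(node, name)
    for tag, value in (leaves or {}).items():
        leaf = ET.SubElement(node, tag)
        if value is not None:
            leaf.text = str(value)
        if delete:
            # operation 'delete' is added as an element attribute,
            # and only on key or leaf nodes.
            leaf.set("operation", "delete")
    return ET.tostring(root)
```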
# fetch old style facts only when explicitly mentioned in gather_subset option
# Parse facts for security policies
# parse name of policy
# parse match criteria of security policy
# end of match criteria parsing
# parse match action of security policy
# end of match action parsing
# parse description of security policy
# parse scheduler name of security policy
# Copyright (C) 2020  Red Hat, Inc.
# Parse facts for BGP address-family global node
# Read arp node
# Read client_lists node
# Read communities node
# Read routing-instance-access
# Read contact
# Read description
# Read customization
# Read engine-id
# Read filter-duplicates
# Read filter-interfaces
# Read health_monitor
# Read if-count-with-filter-interfaces
# Read interfaces
# Read location
# Read logical-system-trap-filter
# Read name
# Read nonvolatile
# Read rmon
# Read subagent node
# Read traceoptions node
# Read trap-group node
# Read trap-options node
# Read snmp-v3
# Read view
# Parse facts for security zones
# Parse facts for hostname node
# Parse facts for routing instances node
# Parse attribute name
# Parse attribute dynamic-db
# Parse attribute address-prefix
# Read allow-duplicates node
# Read boot-server node
# Read broadcast node
# Read broadcast-client node
# Read interval-range node
# Read multicast-client node
# Read peer node
# Read server node
# Read source-address node
# Read threshold node
# read trusted-keys node
# Read archive node
# Read console node
# Read file node
# Read host node
# Read log-rotate-frequency node
# Read routing-instance node
# Read user node
# Read time-format
# Read child attributes
# Read any node
# Read file name node
# Read contents
# Read archives
# Read explicit priority
# Read match
# Read match-strings
# Read structured-data
# Read exclude-hostname node
# Read facility-override node
# Read log-prefix node
# Read port node
# Layer 2 is configured on interface
# if lacp config is not present for interface return empty dict
# Parse BGP group address-family config node
# Parse neighbors af node
# nh_af_dict["name"] = neighbor.get("neighbor_address")
# nh_af_dict["name"] = neighbor.get("name")
# TBD wrap route-target'
# Parse NLRI Parameters
# Parse accepted-prefix-limit
# populate accepted_prefix_limit
# Parse add-path
# Parse aggregate-label
# populate aggregate-label
# Parse aigp
# populate aigp
# Parse and populate damping
# Parse defer-initial-multipath-build
# populate defer_initial_multipath_build
# Parse delay-route-advertisements
# populate delay_route_advertisements
# Parse entropy-label
# populate entropy-label
# Parse explicit-null
# populate explicit-null
# Parse extended-nexthop
# Parse extended-nexthop-color
# Parse forwarding-state-bit
# Parse legacy-redirect-ip-action
# populate legacy_redirect_ip_action
# Parse local-ipv4-address
# Parse loops
# Parse no-install
# Parse no-validate
# Parse output-queue-priority
# Parse per-group-label
# Parse per-prefix-label
# Parse resolve-vpn
# Parse prefix-limit
# Parse rib
# Parse rib-group
# Parse route-refresh-priority
# Parse secondary-independent-resolution
# Parse strip-nexthop
# Parse topology
# populate topology
# Parse traffic-statistics
# Parse withdraw-priority
# if lag interfaces config is not present return empty dict
# Parse routing instances
# read instance name
# read connection-id-advertise
# read description
# read instance role
# read instance type
# read interfaces
# read l2vpn-id
# read no-irb-layer2-copy
# read no_local_switching
# read no-vrf-advertise
# read no_vrf_propagate_ttl
# read qualified_bum_pruning_mode
# read route-distinguisher
# read vrf imports
# read vrf exports
# read bridge domains
# Included for compatibility, remove after 2025-07-01
# Set ASN value into facts
# Read group
# parse neighbors
# Parse neighbors in the group list
# Read accept-remote-nexthop value
# Read add-path-display-ipv4-address value
# Parse advertise-bgp-static dictionary
# Parse advertise-external dictionary
# Read advertise-from-main-vpn-tables value
# Read advertise-inactive value
# Read advertise-peer-as value
# Read authentication-algorithm value
# Read authentication-key value
# Read authentication-key-chain value
# Parse bfd-liveness-detection dictionary
# Parse authentication dictionary
# Parse detection-time dictionary
# Parse transmit-interval dictionary
# Read holddown-interval value
# Read minimum-receive-interval value
# Read minimum-interval value
# Read multiplier value
# Read no-adaptation value
# Read session-mode value
# Read version value
# write the bfd_liveness_detection to bgp global config dictionary
# Parse bgp-error-tolerance dictionary
# Parse bmp dictionary
# Read none attribute value
# Read post-policy attribute value
# Read pre-policy attribute value
# Read monitor value
# write the bmp to bgp global config dictionary
# Read cluster value
# Read damping value
# Read description value
# Read disable value
# Read egress-te value
# Read egress-te-backup-paths
# We have list of templates
# Read egress-te-set-segment
# Read egress-te-sid-stats value
# Read enforce-first-as value
# Read export value
# Read forwarding-context value
# Read graceful-restart
# read forwarding-state-bit
# read long-lived
# read advertise_to_non_llgr_neighbor
# read restart-time
# read stale-routes-time
# Read hold-time value
# Read holddown-all-stale-labels value
# Read idle-after-switch-over
# Read import value
# Read include-mp-next-hop value
# Read ipsec-sa value
# Read keep value
# Read local-address value
# Read local_as value
# Read local-interface value
# Read local-preference value
# Read log-updown value
# Read metric-out
# metric value
# read igp
# read minimum-igp
# Read mtu-discovery value
# Read multihop value
# Read multipath
# Read no-advertise-peer-as value
# Read no-aggregator-id value
# Read no-client-reflect value
# Read no-precision-timers value
# Read out-delay value
# Read outbound-route-filter
# read outbound-route-filter
# read prefix-based
# read accept node attributes
# Read output-queue-priority value
# read defaults
# read high
# read low
# read medium
# read expedited
# read priority
# Read passive value
# Read path-selection value
# read med-plus-igp
# Read peer-as value
# Read precision-timers value
# Read preference value
# Read remove-private value
# Read rfc6514-compliant-safi129 value
# Read route-server-client value
# Read send-addpath-optimization value
# Read snmp-options value
# Read sr-preference-override value
# Read stale-labels-holddown-period value
# Read tcp-aggressive-transmission value
# Read tcp-mss value
# Read traceoptions value
# read file
# read flag
# Read traffic-statistics-labeled-path
# read interval
# Read ttl value
# Read unconfigured-peer-graceful-restart value
# Read vpn-apply-export value
# Read group name value
# Read as-override value
# Read allow
# Read optimal-route-reflection
# Read group type value
# Read autonomous-system
# Read router-id
# Parse facts for security policies global settings
# Copyright (C) 2019  Red Hat, Inc.
# add match criteria node
# add action node
# add zone-pair policies
# add global policies
# add arp node
# add node client list
# add name node
# add routing_instance_access
# add communities node
# add contact node
# add customization node
# add description node
# add engine_id
# add filter_duplicates node
# add filter_interfaces node
# add health_monitor node
# add if_count_with_filter_interfaces
# add interfaces node
# add location
# add logical_system_trap_filter
# add name
# add nonvolatile
# add rmon
# subagent
# traceoptions
# trap_options
# snmp_v3
# target_parameters
# usm
# add hostname node
# replace interface config with data in want
# delete interface config if interface in have not present in want
# delete lldp interfaces attribute from all the existing interface
# generate node: prefix-list
# generate node: name
# generate node: prefix-list-item
# generate name node
# generate node: dynamic_db
# get the instances in running config
# form existing instance list
# Delete target routing-instance
# Delete all the routing-instance
# add authentication-keys node
# add type node
# add value node
# add boot_server node
# add broadcast node
# add key node
# add routing-instance-name node
# add ttl node
# add version node
# add broadcast_client node
# add interval_range node
# add multicast_client node
# add peers node
# add prefer node
# add server node
# add routing-instance node
# add source_address node
# add threshold node
# add trusted key
# Look deeply into have to match replace correctly
# if ace["name"] not in ace_names:
# add allow-duplicates node
# add archive node
# add console node
# add any level node
# add file node
# add contents
# add explicit-priority
# add match node
# add match-strings node
# add structured-data
# add host node
# add exclude-hostname node
# add facility_override node
# add log_prefix node
# add port node
# add routing_instance node
# add log_rotate_frequency node
# add time_format
# add user node
# add binary-data node
# add files node
# add no-binary-data node
# add size node
# add world-readable node
# add no-world-readable node
# delete l2 interfaces attribute from all the existing interface having l2 config
# delete lag interfaces attribute for all the interface
# Generate xml node for autonomous-system
# render global address family attribute commands
# render commands for group address family attribute commands
# render neighbor address-family commands
# Add the nlri node
# Read nlri_types list
# Add the node for nlri type
# build node for accepted-prefix-limit
# Add node for maximum
# Add node for teardown
# add node for limit-threshold
# Add node for teardown idle_timeout
# add node for timeout
# add forever node
# build node for add_path
# add node for receive
# add node for send
# add node for path_count
# add node for include_backup_path
# add node for path_selection_mode
# add node for all_paths
# add node for equal_cost_paths
# add node for prefix_policy
# build node for aggregate_label
# add node community
# build node for aigp
# build node for defer_initial_multipath_build
# add node maximum_delay
# build node for delay_route_advertisements
# add maximum delay node
# add node route-age
# add node routing-uptime
# add minimum delay node
# add node inbound-convergence
# build node for entropy_label
# add node import
# add node no_next_hop_validation
# add node connected-only
# build node for explicit_null
# add node extended-nexthop
# add node extended-nexthop-color
# add node forwarding-state-bit
# add node local-ipv4-address
# add node legacy_redirect_ip_action
# add node loops
# add node no-validate
# add node output-queue-priority
# node for output-queue-priority
# add node expedited
# add node priority
# build node for prefix-limit
# add resolve-vpn
# add node inet.3
# add rib-group
# add node route-refresh-priority
# node for route-refresh-priority
# add secondary-independent-resolution
# add node withdraw-priority
# node for withdraw-priority
# add strip-nexthop
# add topology
# add traffic-statistics
# add node interval
# add node labeled-path
# Delete root address family
# Delete group address-family
# if lag interface not already configured fail module.
# delete lag configuration from member interfaces
# add node routing table-name
# add node connector-id-advertise
# add node description
# add node instance-role
# add node instance-type
# add child node interface
# add child node bridge-domains
# add node l2vpn-id TODO
# add node no-irb-layer2-copy
# add node no-local-switching
# add node no-vrf-advertise
# add node no-vrf-propagate-ttl
# add node qualified-bum-pruning-mode
# add node route-distinguisher
# add node vrf-import
# add node vrf-export
# delete base interfaces attribute from all the existing interface
# Add node for loops
# Add node for asdot_notation
# Generate commands for bgp node
# Generate commands for groups
# Generate commands for each group in group list
# Parse the boolean value attributes
# Parse the non-boolean leaf attributes
# Generate commands for nodes with child attributes
# Generate commands for each neighbors
# Generate commands for each neighbor in neighbors list
# Generate config commands for advertise-bgp-static
# Generate config commands for advertise-external
# Generate config commands for bfd-liveness-detection
# Add node for authentication
# Add node for algorithm
# Add node for key-chain
# Add node for loose-check
# Add node for detection-time
# Add node for threshold
# Add node for transmit-interval
# Add node for minimum-interval
# Add node for holddown-interval
# Add node for minimum-receive-interval
# Add node for multiplier
# Add node for no-adaptation
# Add node for session-mode
# Add node for version
# Generate config commands for bgp-error-tolerance
# Add node for malformed-route-limit
# Add node for malformed-update-log-interval
# Generate config commands for no-malformed-route-limit
# Generate config commands for bmp
# Add node for monitor
# Add node for route-monitoring
# Add node for none
# Add node for post-policy
# Generate config commands for egress-te
# Generate config commands for egress-te-backup-paths
# generate commands for templates
# add peers
# add remote-nexthop
# add ip-forward
# Generate config commands for allow
# Generate config commands for optimal-route-reflection
# return [tostring(xml) for xml in self.root.getchildren()]
# -*- coding: utf-8 -*-
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# pylint: disable=W0621
# Copyright: (c) 2024, Nir Argaman, <nargaman@redhat.com>
# Azure doc fragment
# Copyright: (c) 2016 Matt Davis, <mdavis@ansible.com>
# Copyright: (c) 2016 Chris Houseknecht, <house@redhat.com>
# Copyright: (c) 2016, Matt Davis, <mdavis@ansible.com>
# Copyright: (c) 2016, Chris Houseknecht, <house@redhat.com>
# Copyright (c) 2022 Hai Cao, <t-haicao@microsoft.com>, Marcin Slowikowski (@msl0)
# (c) 2018 Yunge Zhu, <yungez@microsoft.com>
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Old version incorrectly used resource providers instead of resource type.
# Will continue to support to avoid breaking backwards compatibility.
# Get the Access Details to connect to Arc Connectivity platform from the HybridConnectivity RP
# If for some reason the request for Service Configuration fails,
# we will still attempt to get relay information and connect. If the service configuration
# is not setup correctly, the connection will fail.
# The more likely scenario is that the request failed with a "Authorization Error",
# in case the user isn't an owner/contributor.
# relay has retry delay after relay connection is lost
# must sleep for at least as long as the delay
# otherwise the ssh connection will fail
# Downloads client side proxy to connect to Arc Connectivity Platform
# Only download new proxy if it doesn't exist already
# if directory exists, delete any older versions of the proxy
# Construct status_code
# Construct parameters
# Construct headers
# Python <= 2.5
# pylint: disable=unspecified-encoding
# pylint: disable=too-many-instance-attributes
# Overwrite relay_info if it already exists in that folder.
# FUTURE: do we need a set of sane default filters, separate from the user-defineable ones?
# eg, powerstate==running, provisioning_state==succeeded
# FUTURE: use API profiles with defaults
# display.debug("azure_rm inventory filename must end with 'azure_rm.yml' or 'azure_rm.yaml'")
# Load results from Cache if requested
# cache may be True or False at this point to indicate if the inventory is being refreshed
# get the user's cache option too to see if we should save the cache if it is changing
# read if the user has caching enabled and the cache isn't being refreshed
# update if the user has caching enabled and the cache is being refreshed;
# update this value to True if the cache has expired below
# attempt to read the cache if inventory isn't being refreshed and the user has caching enabled
# This occurs if the cache_key is not in the cache or if the cache_key
# expired, so the cache needs to be updated
# parse the provided inventory source
# FUTURE: track hostnames to warn if a hostname is repeated (can happen for legacy and for composed inventory_hostname)
# FUTURE: configurable default IP list? can already do this via hostvar_expressions
# FUTURE: configurable hostvar prefix? Makes docs harder...
# constructable delegation
# FUTURE: fix underlying inventory stuff to allow us to quickly access known groupvars from reconciled host
# FUTURE: should warn/fail if conditional doesn't return True or False
# FUTURE: add direct VM filtering by tag here (performance optimization)?
# Stack HCI instances look close enough to regular VMs that we can share the handler impl...
# FUTURE: add direct VMSS filtering by tag here (performance optimization)?
# Since Flexible instance is a standalone VM we are processing them as regular VM.
# VMSS instances look close enough to regular VMs that we can share the handler impl...
# use the undocumented /batch endpoint to bulk-send up to 500 requests in a single round-trip
# FUTURE: error-tolerant operation mode (eg, permissions)
# FUTURE: store/handle errors from individual handlers
# 429: Too many requests Error, Backoff and Retry
# Remove already processed requests
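The throttling behaviour described above — back off on HTTP 429, then resend only the requests that were not yet processed — can be sketched like this; `send` and the tuple it returns are assumptions standing in for the real batch call:

```python
import time

def send_with_backoff(send, requests, max_tries=5):
    # send(req) -> (status, headers, body); on 429 honor Retry-After
    # and retry only the throttled requests on the next pass.
    done, pending = [], list(requests)
    for _ in range(max_tries):
        throttled = []
        for req in pending:
            status, headers, body = send(req)
            if status == 429:
                throttled.append((req, int(headers.get("Retry-After", "1"))))
            else:
                done.append(body)
        if not throttled:
            return done
        time.sleep(max(delay for _, delay in throttled))
        # Remove already processed requests before retrying.
        pending = [req for req, _ in throttled]
    raise RuntimeError("requests still throttled after %d tries" % max_tries)
```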
# VM list (all, N resource groups): VM -> InstanceView, N NICs, N PublicIPAddress)
# VMSS VMs (all SS, N specific SS, N resource groups?): SS -> VM -> InstanceView, N NICs, N PublicIPAddress)
# 'Connected'
# Azure often doesn't provide a globally-unique filename, so use resource name + a chunk of ID hash
# 'Running'
# single-nic instances don't set primary, so figure it out...
# osType unavailable with disabled guest agent
# hci specific
# Set the attribute information related to the Uniform VMSS instance
# Set os compute name, os name, os version and hyper V generation
# Set Uniform VMSS instance's nic-related values
# set nic-related values from the primary NIC first
# and from the primary IP config per NIC first
# set image and os_disk
# Convert pandas object to dict
# If no tags are present use an empty dict
# Update row with updated tags
# Copyright (c) 2020 Fred-Sun, (@Fred-Sun)
# This is handled in azure_rm_common
# Set default location
# Copyright (c) 2018 Hai Cao, <t-haicao@microsoft.com>
# Copyright (c) 2017 Obezimnaka Boms, <t-ozboms@microsoft.com>
# FUTURE: add missing record types from https://github.com/Azure/azure-sdk-for-python/blob/master/azure-mgmt-dns/azure/mgmt/dns/models/record_set.py
# define user inputs into argument
# store the results of the module operation
# create conditionals to catch errors when calling record facts
# list the conditions for what to return based on input
# if there is a name listed, they want only facts about that specific Record Set itself
# else, they just want all the record sets of a specific type
# if there is a zone name listed, then they want all the record sets in a zone
# try to get information for specific Record Set
# Copyright (c) 2020 Sakar Mehra (@sakar97), Nikhil Patne (@nikhilpatne)
# Copyright (c) 2019 Zim Kalinowski, (@zikalino)
# Copyright (c) 2018 Yunge Zhu, <yungez@microsoft.com>
# update in create_or_update as parameters
# Managed Identity
# site config, e.g app settings, ssl
# app service plan
# siteSourceControl
# site, used at level creation, or update. e.g windows/linux, client_affinity etc first level args
# property for internal usage, not used for sdk
# set site_config value from kwargs
# updatable_properties
# set location
# get existing web app
# get app service plan
# java is mutually exclusive with other frameworks
# Use given name as is if it starts with allowed values of multi-container application
# init site
# check if the web app already present in the resource group
# service plan is required for creation
# no existing service plan, create one
# if linux, setup startup_file
# set app setting
# existing web app, do update
# check if root level property changed
# check if site_config changed
# check if linux_fx_version changed
# purge existing app_settings:
# check if app settings changed
# compare existing web app with input, determine whether it's an update operation
# compare xxx_version
# check if startup file changed
# comparing existing app setting with input, determine whether it's changed
# comparing deployment source with input, determine whether it's changed
# Newer SDK versions (0.40.0+) seem to return None if it doesn't exist instead of raising error
# normalize sku
# Copyright (c) 2017 Zim Kalinowski, <zikalino@microsoft.com>
# Copyright (c) 2024 xuzhang3 (@xuzhang3), Fred-sun (@Fred-sun)
# Copyright (c) 2020 Cole Neubauer, (@coleneubauer)
# either create or update
# check if the backup policy exists
# log that we're doing an update
# If backup policy doesn't exist, that's the desired state.
# need to represent the run time as a date_time
# year, month, day has no impact on run time but is more consistent to see it as the time of creation rather than hardcoded value
# azure requires this as a list but at this time doesn't support multiple run times
# should easily be converted at this step if they support it in the future
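The run-time handling above can be sketched as a small helper — a hypothetical name, shown only to illustrate the "full datetime, but only the time matters, wrapped in a one-element list" shape:

```python
import datetime

def schedule_run_times(hour, minute=0):
    # The API needs a complete datetime even though only the time of day
    # matters; take the date parts from "now" (time of creation) rather
    # than a hardcoded value.
    run = datetime.datetime.utcnow().replace(
        hour=hour, minute=minute, second=0, microsecond=0)
    # Azure requires a list, even though only one run time is supported;
    # multiple entries would slot in here if that ever changes.
    return [run]
```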
# basic parameter checking. try to provide a better description of faults than azure does at this time
# azure forces instant_recovery_snapshot_retention to be 5 when schedule type is Weekly
# create a schedule policy based on schedule_run_frequency
# Daily backups can have a daily retention or weekly but Weekly backups cannot have a daily retention
# This assignment exists exclusively to deal with the following line being too long otherwise
# This function will create and update resource on the api management service.
# Copyright (c) 2019 Matti Ranta, (@techknowlogick)
# Copyright: (c) 2020, Paul Aiton <@paultaiton>
# Copyright: (c) 2016, Bruno Medina Bolanos Cacho <bruno.medina@microsoft.com>
# handled in azure_rm_common
# Copyright (c) 2018 Yuwei Zhou, <yuwzho@microsoft.com>
# Create the resource instance
# Copyright (c) 2022 xuzhang3 (@xuzhang3), Fred-sun (@Fred-sun)
# Copyright (c) 2018 Yunge Zhu <yungez@microsoft.com>
# get management client
# Copyright (c) 2018 Hai Cao, <t-haicao@microsoft.com>, Yunge Zhu <yungez@microsoft.com>
# Copyright (c) 2020 Praveen Ghuge (@praveenghuge), Karl Dasan (@karldas30)
# turn notification hub object into a dictionary (serialization)
# Copyright (c) 2020 Suyeb Ansari (@suyeb786), Pallavi Chaudhari(@PallaviC2510)
# self.log('Enabling/Updating protection for the Azure Virtual Machine {0}'.format(self.))
# self.log('Stop protection and retain existing data{0}'.format(self.))
# self.log('Stop protection and delete data{0}'.format(self.))
# self.log('Trigger an on-demand backup for a protected Azure VM{0}'.format(self.))
# The return value is None, which only triggers the backup. Backups also take some time to complete.
# Python SDK Reference: https://learn.microsoft.com/en-us/python/api/azure-mgmt-cdn/azure.mgmt.cdn.operations.rulesetsoperations?view=azure-python
# Copyright (c) 2018 Fred-sun, <xiuxi.sun@qq.com>
# Copyright (c) 2017 Yuwei Zhou, <yuwzho@microsoft.com>
# Python SDK Reference: https://learn.microsoft.com/en-us/python/api/azure-mgmt-cdn/azure.mgmt.cdn.operations.routesoperations?view=azure-python
# Copyright (c) 2018 Zim Kalinowski, (@zikalino)
# Copyright (c) 2019 Yuwei Zhou, <yuwzho@microsoft.com>
# Copyright (c) 2024 Bill Peck, <bpeck@redhat.com>
# translate Ansible input to SDK-formatted dict in self.parameters
# only one cert, not a chain
# populate key into jwk
# need to strip L in python 2.x
# v7.2-preview and v7.2 will change the upload operation from Sync to Async
# due to service defects, it returns 'Succeeded' before the change and 'Success' after the change
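A defensive check for that status-string change might look like this (names are illustrative):

```python
SUCCESS_STATES = frozenset({"Succeeded", "Success"})

def upload_succeeded(status):
    # The service reported 'Succeeded' before the v7.2 change and
    # 'Success' after it, so accept either spelling as success.
    return status in SUCCESS_STATES
```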
# Copyright: (c) 2016, Thomas Stringer <tomstr@microsoft.com>
# set linux fx version
# This currently doesn't work as there is a bug in SDK / Service
# Copyright (c) 2025 xuzhang3 (@xuzhang3), Fred-sun (@Fred-sun)
# update is not supported except for tags
# the image does not exist and create a new one
# create from virtual machine
# finally make the change if not check mode
# delete image
# the delete does not actually return anything. if no exception, then we'll assume it worked.
# blob URI can only be given by str
# not a disk or snapshots
# source can be name of snapshot or disk
# self.resource can be a vm (id/name/dict), or not a vm. return the vm iff it is an existing vm.
# Return None iff the resource is not found
# Copyright (c) 2021 Aparna Patil(@techcon65)
# define user inputs from playbook
# retrieve resource group to make sure it exists
# serialize object into a dictionary
# create or update Virtual network link
# delete virtual network link
# create the virtual network link
# delete the virtual network link
# Copyright (c) 2021 Cole Neubauer, (@coleneubauer)
# this ensures service principals are returned
# see https://learn.microsoft.com/en-us/graph/api/group-list-members?view=graph-rest-1.0&tabs=http
# Update, changed
# Get the updated versions of the users to return
# the update method has no return value, so it needs to be explicitly returned in a call
# Create, changed
# Delete, changed
# Do nothing unchanged
# the type doesn't get more specific. Could check the error message but no guarantees that message doesn't change in the future
# more stable to try again assuming the first error came from the attribute being a list
# run a filter based on user input to return based on any given attribute/query
# User was not found
# Defaults for variables
# Get current container registry token
# Create dict from input, without None values
# Filter out all None values
# Create/Update if state==present
# The container registry already exists, try to update
# Dict for update is the union of existing object over written by input data
# Check mode, skipping actual creation
# The container registry scope map does not exist, so create it
# Check mode, Skipping actual creation
# Do not delete in check mode
# Python SDK Reference: https://learn.microsoft.com/en-us/python/api/azure-mgmt-cdn/azure.mgmt.cdn.operations.afdoriginsoperations?view=azure-python
# enforce_certificate_check = origin.enforce_certificate_check, # Not fully implemented yet
# enforce_certification_name_check=dict(type='bool'),
# Copyright (c) 2020 Nikhil Patne (@nikhilpatne) and Sakar Mehra (@sakar97)
# prepare url
# Copyright (c) 2018 Hai Cao, <t-haicao@microsoft.com> Yunge Zhu <yungez@microsoft.com>
# Copyright (c) 2016 Matt Davis, <mdavis@ansible.com>
# capitalize the sku and allocation_method. standard => Standard, Standard => Standard.
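The normalization described above (`standard => Standard`, `Standard => Standard`) is a straightforward case fix-up; a sketch under the assumption that only the first letter's casing matters:

```python
# Sketch: normalize user-supplied values to the casing Azure expects,
# e.g. 'standard' -> 'Standard'; already-correct input passes through unchanged.
def normalize(value):
    return value.capitalize() if value else value

sku = normalize("standard")              # 'Standard'
allocation_method = normalize("Static")  # 'Static'
```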
# Delete returns nada. If we get here, assume that all is well.
# Copyright (c) 2018 Zim Kalinowski, <zikalino@microsoft.com>
# Copyright (C) 2019 Junyi Yi (@JunyiYi)
# get location from the lab as it has to be the same and has to be specified (why??)
# currently artifacts can be only specified when vm is created
# and in addition we don't have detailed information, just a number of "total artifacts"
# Copyright (c) 2023 xuzhang3 (@xuzhang3), Fred-sun (@Fred-sun)
# Copyright (c) 2021 Praveen Ghuge (@praveenghuge), Karl Dasan (@ikarldasan)
# don't change anything if creating an existing zone, but change if deleting it
# the DDoS protection plan does not exist so create it
# you can't delete what is not there
# return the results if you are only gathering information
# delete DDoS protection plan
# turn DDoS protection plan object into a dictionary (serialization)
# Copyright (c) 2024 Bill Peck  <bpeck@redhat.com>
# Copyright (c) 2022 Aubin Bikouo (@abikouo)
# duplicated in azure_rm_manageddisk_facts
# create or update disk
# Attach the disk to multiple VMs
# Detach disks from all VMs attaching them
# Detach the disk from list of VMs
# Delete existing disks
# Detach the disk from all VMs attached to it
# attach all disks to the virtual machine
# prepare the data disk
# pylint: disable=missing-kwoa
# This method accounts for the difference in structure between the
# Azure retrieved disk and the parameters for the new disk to be created.
# Check how to implement tags
# Copyright (c) 2020 Paul Aiton, < @paultaiton >
# If the name matches, return result regardless of anything else.
# If name is not defined and either state is Enabled or all is true, and tags match, return result.
# Create resource group
# Update resource group
# The delete operation doesn't return anything.
# If we got here, assume all is good
# check if it's workernode
# Copyright (c) 2020 Suyeb Ansari (@suyeb786)
# self.log('Get Recovery Service Vault Details {0}'.format(self.))
# Copyright (c) 2021 Aparna Patil(@aparna-patil)
# define user inputs variables
# list the conditions and results to return based on user input
# if firewall policy name is provided, then return facts about that specific firewall policy
# all the firewall policies listed in specific resource group
# all the firewall policies in a subscription
# get specific Firewall policy
# serialize result
# Copyright (c) 2025 Klaas Demter (@Klaas-)
# it seems the exception is thrown when iterating through data_collection_rules, not when setting data_collection_rules
# Copyright (c) 2022 Ross Bender (@l3ender)
# Copyright (c) 2024 xuzhang3 (@xuzhang3), Fred-Sun (@Fred-Sun)
# changed = True
# self.fail("The zone_redundant is an immutable property")
# Copyright (c) 2020 Jose Angel Munoz, <josea.munoz@gmail.com>
# create a new zone variable in case the 'try' doesn't find a zone
# the zone does not exist so create it
# return the results if you are only gathering information
# delete zone
# the delete does not actually return anything. if no exception, then we'll assume
# it worked.
# create or update the new Zone object we created
# delete the Zone
# turn Zone object into a dictionary (serialization)
# Copyright (c) 2018 Yunge Zhu, (@yungezz)
# get existing role definition
# check if the role definition exists
# existing role definition, do update
# compare if role definition changed
# build scope
# check update
# Copyright (c) 2019 Jose Angel Munoz, <josea.munoz@gmail.com>
# The API will raise exception if the secret is disabled.
# Exception as (Forbidden) Operation get is not allowed on a disabled secret.
# create or update a dedicated host group
# delete a host group
# create the host group
# delete the host group
# https://learn.microsoft.com/en-us/python/api/azure-mgmt-monitor/azure.mgmt.monitor.v2020_10_01.models.MetricAlertresource?view=azure-python
# https://github.com/ansible/ansible/issues/74001
# Can't properly define criteria in arg spec
# Get current metric alert if it exists
# Create dict from input, without None values
# https://learn.microsoft.com/en-us/python/api/azure-mgmt-monitor/azure.mgmt.monitor.v2018_03_01.models.metricalertresource?view=azure-python
# tags separately because of update_tags behavior
# metric alert does not exist, create
# On creation default to 'Global' unless otherwise noted in input variables
# On creation input == what we send to api
# Needs to be extended by tags if set
# metric alert already exists, updating it
# Dict for update is the union of existing object overwritten by input data
# Enhanced with tags (special behaviour because of append_tags possibility)
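The comments above describe the update payload as the existing object overlaid with input data, with tags merged separately because of `append_tags`. A sketch of that merge (the helper `build_update` is hypothetical):

```python
# Sketch: update payload = existing object overlaid with provided input;
# tags handled separately because append_tags decides whether old tags survive.
def build_update(existing, params, append_tags=True):
    merged = {**existing, **{k: v for k, v in params.items() if v is not None}}
    if append_tags:
        merged["tags"] = {**(existing.get("tags") or {}), **(params.get("tags") or {})}
    else:
        merged["tags"] = params.get("tags") or {}
    return merged
```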
# Check if we need to update the metric alert
# Need to create/update the metric alert; changed -> True
# When object was not updated or when running in check mode
# assume metric_alert_update is resulting object
# otherwise take resulting new object from response of create call
# Delete metric alert if state is absent and it exists
# if it doesn't exist, it's already absent
# do not delete in check mode
# Copyright (c) 2019 Yunge Zhu, <yungez@microsoft.com>
# Copyright (c) 2020 Nikhil Patne (@nikhilpatne), Sakar Mehra (@sakar97)
# if not old_response:
# make sure instance is actually deleted, for some Azure resources, instance is hanging around.
# Creating / Updating the ApiManagementService instance.
# Deleting the ApiManagementService instance.
# Checking if the ApiManagementService instance is present
# Copyright (c) 2020  haiyuazhang <haiyzhan@micosoft.com>
# self.create_compare_modifiers(self.module_arg_spec, '', modifiers)
# self.results['modifiers'] = modifiers
# self.results['compare'] = []
# if 'workProfiles' in self.body['properties']:
# if not self.default_compare(modifiers, self.body, old_response, '', self.results):
# make sure instance is actually deleted, for some Azure resources, instance is hanging around
# for some time after deletion -- this should be really fixed in Azure
# self.log('Deleting the OpenShiftManagedCluster instance {0}'.format(self.))
# self.log('Checking if the OpenShiftManagedCluster instance {0} is present'.format(self.))
# self.log("OpenShiftManagedCluster instance : {0} found".format(response.name))
# Added per Mangirdas Judeikis (RED HAT INC) to fix first letter of cluster domain beginning with digit ; currently not supported
# hard-code the ingress profile name as default, so users don't need to specify it
# if domain is not set in cluster profile or it is set to an empty string or null value then generate a random domain
# self.log('Updating the Gallery instance {0}'.format(self.))
# self.log('Creating the Gallery instance {0}'.format(self.))
# self.log('Deleting the Gallery instance {0}'.format(self.))
# self.log('Checking if the Gallery instance {0} is present'.format(self.))
# self.log("AzureFirewall instance : {0} found".format(response.name))
# Copyright (c) 2020 Praveen Ghuge (@praveenghuge), Karl Dasan (@ikarldasan), Sakar Mehra (@sakar97)
# all the express routes listed in that specific resource group
# turn express route object into a dictionary (serialization)
# structure of parameters for update must be changed
# If endpoint doesn't exist and no start/stop operation specified, create endpoint.
# Fail the module when the user tries to start/stop a non-existent endpoint
# Copyright (c) 2021 Praveen Ghuge (@praveenghuge), Karl Dasan (@karldas30)
# don't change anything if creating an existing namespace, but change if deleting it
# the notification hub does not exist so create it
# delete Notification Hub
# turn notification hub namespace object into a dictionary (serialization)
# Copyright (c) 2021 Praveen Ghuge(@praveenghuge) Karl Dasan(@karldas30) Saurabh Malpani (@saurabh3796)
# turn event hub object into a dictionary (serialization)
# important properties from the output; they do not match the input arguments.
# curated site_config
# java container setting
# linux_fx_version
# curated app_settings
# curated deployment_slot
# ftp_publish_url
# curated publish credentials
# curated auth settings
# This is a workaround because PATCH does not support sending just the values you want to update
# https://github.com/Azure/azure-rest-api-specs/issues/34530
# This may need to be moved to a deep merge if future options are not flat
# Not properly returned by sdk nor API
# https://github.com/Azure/azure-rest-api-specs/issues/34530#issuecomment-2862950321
# self.results['location'] = vault_config['location']
# get specific host group
# create the dedicated host
# update the dedicated host
# restart the dedicated host
# delete the dedicated host
# list the conditions and what to return based on user input
# if there is a name, facts about that specific zone
# all the zones listed in that specific resource group
# all the zones in a subscription
# get specific zone
# Python SDK Reference: https://learn.microsoft.com/en-us/python/api/azure-mgmt-cdn/azure.mgmt.cdn.operations.afdrulesetsoperations?view=azure-python
# compare sku
# compare event hub property
# compare endpoint
# find the total length
# If already changed, no need to compare any more
# compare routes
# compare IP filter
# compare identity
# only tags changed
# Copyright (c) 2022 xuzhang3 (@xuzhang3)
# site, used at creation or update
# get web app
# get slot
# set is_linux
# set auto_swap_slot_name
# check if the slot already present in the webapp
# clone slot
# existing slot, do update
# compare site config
# comparing deployment source with input, determine whether it's changed
# Copyright (c) 2020 Haiyuan Zhang, <haiyzhan@microsoft.com>
# Copyright (c) 2019 Zim Kalinowski, <zikalino@microsoft.com>
# subnet overrides for virtual network and subnet created by default
# if not self.default_compare({}, self.policy, response['policy'], '', dict(compare=[])):
# Create a new one
# SAS policy
# This currently doesn't work as there is a bug in the SDK / Service
# Copyright (c) 2021 Ross Bender (@l3ender)
# if there is a link name provided, return facts about that specific virtual network link
# all the virtual network links in specified private DNS zone
# get specific virtual network link
# self.log('Creating Recovery Service Vault Name {0}'.format(self.))
# self.log('Deleting Recovery Service Vault {0}'.format(self.))
# self.log('Get Recovery Service Vault Name {0}'.format(self.))
# Some operations of Recovery Service Vault can take a while to
# Copyright (c) 2020 David Duque Hernández, (@next-davidduquehernandez)
# get storage account id
# get existing log profile
# if the profile does not exist, create a new one
# log profile exists already, do update
# check if update
# Copyright (c) 2019 Zim Kalinowski (@zikalino)
# delete when it does not exist
# FUTURE: add `choices` support once choices supports lists of values
# default to key vault's tenant, since that's all that's currently supported anyway
# FUTURE: this list isn't really order-dependent- we should be set-ifying the rules list for order-independent comparison
# Copyright (c) 2021 Paul Aiton, < @paultaiton >
# The parameter to SDK's management_groups.get(group_id) is not correct,
# it only works with a bare name value, and not the fqid.
# default to response of an empty list
# list method cannot return children, so we must iterate over root management groups to
# get each one individually.
# If group has no children, then property will be set to None type.
# We want an empty list so that it can be used in loops without issue.
# In theory if the Azure API is updated to include another child type of management groups,
# the code here will prevent an exception. But there should be logic added in an update to take
# care of a new child type of management groups.
# https://learn.microsoft.com/en-us/python/api/azure-mgmt-monitor/azure.mgmt.monitor.v2023_01_01.models.actiongroupresource?view=azure-python
# Get current action group if it exists
# action group does not exist, create
# On creation default to location of resource group unless otherwise noted in input variables
# action group already exists, updating it
# assume action_group_update is resulting object
# Delete action group if state is absent and it exists
# Copyright (c) 2019 Liu Qingyi, (@smile37773)
# self.log('Response : {0}'.format(response))
# Copyright (c) 2020 XiuxiSun, (@Fred-sun)
# Copyright (c) 2020 GuopengLin, (@t-glin)
# command (list of str)
# ports (list of ContainerPort)
# environment_variables (list of EnvironmentVariable)
# resources (ResourceRequirements)
# volume mounts (list of VolumeMount)
# instance_view (ContainerPropertiesInstanceView)
# events (list of ContainerEvent)
# since this client hasn't been upgraded to expose models directly off the OperationClass, fish them out
# Format identities
# get list of ports
# Copyright (c) 2018 James E. King, III (@jeking3) <jking@apache.org>
# HACK: ditch this once python SDK supports get by URI
# make sure options are lower case
# convert elements to ints
# Verify parameters and resolve any defaults
# Try to determine if the VM needs to be updated
# If type set to None, and VM has no current identities, nothing to do
# If type is not None, and VM has no current identities, update identities
# If type in module args different from type of vm_dict, update identities
# If type in module args contains 'UserAssigned'
# Create sets with current user identities and module args identities
# If new identities have to be appended to VM
# and the union of identities is longer
# update identities
# If new identities have to overwrite current identities
# Check if module args identities are different from the current ones
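The identity comparison sketched in the comments above can be expressed with sets: in append mode the VM only changes if the union adds identities; otherwise any difference counts. A minimal sketch (the helper `identities_changed` is hypothetical):

```python
# Sketch: decide whether the VM's user-assigned identities need updating.
# append=True: change only if the union grows; append=False: any difference.
def identities_changed(current_ids, wanted_ids, append):
    current, wanted = set(current_ids), set(wanted_ids)
    if append:
        return len(current | wanted) > len(current)
    return current != wanted
```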
# Defaults for boot diagnostics
# Adding boot diagnostics can create a default storage account after initial creation
# this means we might also need to update the _own_sa_ tag
# Create the VM
# Validate parameters
# Get defaults
# os disk
# disk caching
# do this before creating vm_resource as it can modify tags
# If UserAssigned in module args and ids specified
# Append identities to the model
# If UserAssigned in module args, but ids are not specified
# In any other case ('SystemAssigned' or 'None') apply the configuration to the model
# Azure SDK (erroneously?) wants native string type for this
# data disk
# Before creating VM accept terms of plan if `accept_terms` is True
# Update the VM based on detected config differences
# pass if the proximity Placement Group
# pass if the availability set is not set
# You can't change a vm zone
# If 'append' is set to True save current user assigned managed identities to use later
# Nothing to append to
# 'append' is False or unset
# If there are identities in 'id' and 'UserAssigned' in type
# If there are identities to append, merge the dicts
# Save the identity
# If there are no identities in 'id' and 'UserAssigned' in type
# Fail if append is False
# If append is true, user is changing from 'UserAssigned' to 'SystemAssigned, UserAssigned'
# Save current identities
# Set 'SystemAssigned' or 'None'
# storageUri is undefined if boot diagnostics is disabled
# Add custom_data, if provided
# Add admin password, if one provided
# Add Windows configuration, if applicable
# Add linux configuration, if applicable
# Make sure we leave the machine in requested power state
# Attempt to power on the machine
# Attempt to power off the machine
# delete the VM
# until we sort out how we want to do this globally
# Expand network interfaces to include config properties
# Expand public IPs to include config properties
# store the attached vhd info so we can nuke it after the VM is gone
# FUTURE enable diff mode, move these there...
# store the attached nic info so we can nuke them after the VM is gone
# also store each nic's attached public IPs and delete after the NIC is gone
# wait for the poller to finish
# TODO: parallelize nic, vhd, and public ip deletions with begin_deleting
# TODO: best-effort to keep deleting other linked resources if we encounter an error
# Delete doesn't return anything. If we get this far, assume success
# FUTURE: figure out a cloud_env independent way to delete these
# We previously created one in the same invocation
# We previously created one in a previous invocation
# We must be updating, like adding boot diagnostics
# Attempt to find a valid storage account name
# Find a virtual network
# Copyright (c) 2019 Zim Kalinowski, (@zikalino), Jurijs Fadejevs (@needgithubid)
# self.log('Creating / Updating the AzureFirewall instance {0}'.format(self.))
# self.log('Deleting the AzureFirewall instance {0}'.format(self.))
# self.log('Checking if the AzureFirewall instance {0} is present'.format(self.))
# handle the intelligence pack
# create new route
# Python SDK Reference: https://learn.microsoft.com/en-us/python/api/azure-mgmt-cdn/azure.mgmt.cdn.operations.rulesoperations?view=azure-python
# Models for Actions:
# ModifyRequestHeader
# ModifyResponseHeader
# RouteConfigurationOverride
# UrlRedirect
# UrlRewrite
# Models for Conditions:
# ClientPort
# Cookies
# HostName
# HttpVersion
# IsDevice
# PostArgs
# QueryString
# RemoteAddress
# RequestBody
# RequestHeader
# RequestMethod
# RequestScheme
# RequestUri
# ServerPort
# SocketAddr
# SslProtocol
# UrlFileExtension
# UrlFileName
# UrlPath
# They both exist so check if the values match
# Catch the edge case where we have an empty list and a None object
# Catch where a default of False matches the None object
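The tolerant comparison described in the three comments above can be sketched as follows (the helper `values_match` is hypothetical, not the module's actual function):

```python
# Sketch: compare values, treating None as equal to an empty list, and a None
# on one side as matching an explicit default of False on the other.
def values_match(a, b):
    if a == b:
        return True
    if (a is None and b == []) or (b is None and a == []):
        return True
    if (a is None and b is False) or (b is None and a is False):
        return True
    return False
```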
# Copyright: (c) 2017, Sertac Ozercan <seozerca@microsoft.com>
# Copyright: (c) 2016, Julien Stroheker <juliens@microsoft.com>
# Copyright (c) 2018
# Gustavo Muniz do Carmo <gustavo@esign.com.br>
# Zim Kalinowski <zikalino@microsoft.com>
# create or update ip group
# delete ip group
# create ip group
# comparing IP addresses list
# Copyright (c) 2017 Julien Stroheker, <juliens@microsoft.com>
# Check if the AS already present in the RG
# go through dependent resources
# append if not in list
# convert dictionary to list
# create or update disk encryption set
# delete disk encryption set
# create the disk encryption set
# delete the disk encryption set
# Default option for type is "None"
# remove for comparison as value not returned in old_response
# use all existing config
# duplicated in azure_rm_autoscale_facts
# create new
# check changed
# construct the instance that will be sent to the create_or_update API
# results should be the dict of the instance
# this returns as a list, since we parse multiple pages
# paginated response can be quite large
# https://learn.microsoft.com/en-us/python/api/azure-mgmt-monitor/azure.mgmt.monitor.v2020_10_01.models.activitylogalertresource?view=azure-python
# Get current activity log alert if it exists
# https://learn.microsoft.com/en-us/python/api/azure-mgmt-monitor/azure.mgmt.monitor.v2020_10_01.models.activitylogalertresource?view=azure-python
# activity log alert does not exist, create
# activity log alert already exists, updating it
# Check if we need to update the activity log alert
# Need to create/update the activity log alert; changed -> True
# assume activity_log_alert_update is resulting object
# Delete activity log alert if state is absent and it exists
# if a set name is provided, return facts about that specific disk encryption set
# all the disk encryption sets listed in specific resource group
# all the disk encryption sets in a subscription
# get specific disk encryption set
# The container registry token already exists, try to update
# The container registry token does not exist, create it
# Gets the properties of the specified token
# Creates a token for a container registry with the specified parameters
# Updates a token with the specified parameters
# Deletes a token from a container registry
# Copyright (c) 2019 Hai Cao, <t-haicao@microsoft.com>
# Create KeyVaultClient
# Key exists and will be deleted
# Key doesn't exist
# Create key
# Delete key
# Copyright (c) 2022 Andrea Decorte, <adecorte@redhat.com>
# Python SDK Reference: https://learn.microsoft.com/en-us/python/api/azure-mgmt-cdn/azure.mgmt.cdn.operations.afdendpointsoperations?view=azure-python
# Copyright (c) 2021 Praveen Ghuge (@praveenghuge), Karl Dasan (@ikarldasan), Sakar Mehra (@sakar97)
# the express route does not exist so create it
# delete express route
# Copyright (c) 2018 Sertac Ozercan, <seozerca@microsoft.com>
# Check if the AKS instance already present in the RG
# Default to SystemAssigned if service_principal is not specified
# Cannot Update the SSH Key for now // Let service to handle it
# self.module.warn("linux_profile.ssh_key cannot be updated")
# self.log("linux_profile response : {0}".format(response['linux_profile'].get('admin_username')))
# self.log("linux_profile self : {0}".format(self.linux_profile[0].get('admin_username')))
# Cannot Update the Username for now // Let service to handle it
# self.module.warn("linux_profile.admin_username cannot be updated")
# Cannot have more than one agent pool profile for now
# self.module.warn("windows_profile.admin_username cannot be updated")
# Only service_principal or identity can be specified, but default to SystemAssigned if none specified.
# self.log("service_principal_profile : {0}".format(parameters.service_principal_profile))
# self.log("linux_profile : {0}".format(parameters.linux_profile))
# self.log("ssh from yaml : {0}".format(results.get('linux_profile')[0]))
# self.log("ssh : {0}".format(parameters.linux_profile.ssh))
# self.log("agent_pool_profiles : {0}".format(parameters.agent_pool_profiles))
# AKS only supports a single UserAssigned Identity
# If type set to SystemAssigned, and Resource has SystemAssigned, nothing to do
# If type set to SystemAssigned, and Resource has current identity, remove UserAssigned identity
# Copyright (c) 2021
# Maxence Ardouin <max@23.tf>
# it seems the exception is thrown when iterating through activity_log_alerts, not when setting activity_log_alerts
# it seems the exception is thrown when iterating through action_groups, not when setting action_groups
# Create KeyVault Client
# Secret exists and will be deleted
# Secret doesn't exist
# Create secret
# Delete secret
# it seems the exception is thrown when iterating through metric_alerts, not when setting metric_alerts
# duplicate with azure_rm_publicipaddress
# self.log("Snapshot instance : {0} found".format(response.name))
# Copyright (c) 2020 Guopeng Lin, <linguopeng1998@google.com>
# Copyright (c) 2016 Sertac Ozercan, <seozerca@microsoft.com>
# default virtual_network_resource_group to resource_group
# if self.virtual_network_name:
# Currently no support for a VMSS that contains more than one load balancer
# Create the VMSS
# self.log('Creating VM Backup Policy {0}'.format(self.))
# self.log('Deleting Backup Policy {0}'.format(self.))
# self.log('Fetch Backup Policy Details {0}'.format(self.))
# Copyright (c) 2020 Aparna Patil(@aparna-patil)
# this property should not be changed
# FUTURE: ensure all record types are supported (see https://github.com/Azure/azure-sdk-for-python/tree/master/azure-mgmt-dns/azure/mgmt/dns/models)
# we're doing two-pass arg validation, sample and store the args internally to allow this
# first-pass arg validation so we can get the record type- skip exec_module
# look up the right subspec and metadata
# patch the right record shape onto the argspec
# rerun validation and actually run the module this time
# FUTURE: fail on anything other than ResourceNotFound
# FUTURE: implement diff mode
# convert the input records to SDK objects
# and use it to get the type-specific records
# compare the input records to the server records
# also check top-level recordset properties
# delete record set
# delete the record set
# ensure we're always comparing a list, even for the single-valued types
# only a difference if the server set is missing something from the input set
# non-append mode; any difference in the sets is a change
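The three comments above describe the record comparison: always compare lists, and in append mode only records missing from the server count as a change. A sketch (the helper `records_differ` is hypothetical):

```python
# Sketch: normalize both sides to lists, then compare as sets.
# append=True: only input records missing on the server count as a change;
# append=False: any difference between the two sets is a change.
def records_differ(input_records, server_records, append):
    inp = input_records if isinstance(input_records, list) else [input_records]
    srv = server_records if isinstance(server_records, list) else [server_records]
    if append:
        return bool(set(inp) - set(srv))
    return set(inp) != set(srv)
```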
# check add or update metadata
# check remove
# response = self.update_local_network_gateway_tags(new_tags)
# return [self.format_item(x) for x in results['value']] if results['value'] else []
# d = {
# return d
# for compatibility, keep this field
# if the security group name is not set, use the NIC name by default
# if application security groups set, convert to resource id format
# If ip_configurations is not specified then provide the default
# private interface
# check for update
# We need to ensure that dns_servers are list like
# parse the virtual network resource group and name
# check the ip_configuration is changed
# construct two sets with the same structure and then compare
# the list should contain:
# name, private_ip_address, public_ip_address_name, private_ip_allocation_method, subnet_name
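The comparison described above can be sketched by reducing each ip_configuration to a tuple of the listed fields and comparing the collections as sets (helper names are hypothetical):

```python
# Sketch: reduce each ip_configuration to a tuple of the fields listed above,
# then compare the two collections as sets (order-independent).
FIELDS = ("name", "private_ip_address", "public_ip_address_name",
          "private_ip_allocation_method", "subnet_name")

def config_set(configs):
    return {tuple(c.get(f) for f in FIELDS) for c in configs}

def ip_configs_changed(wanted, current):
    return config_set(wanted) != config_set(current)
```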
# duplicated in azure_rm_autoscale
# from azure.core.serialization import NULL as AzureCoreNull
# Ignore query_parameters when query_string_caching_behavior is "IgnoreQueryString" or "UseQueryString"
# Get the existing resource
# Get the Origin Group ID
# Get a list of all the Custom Domain IDs
# Populate the rule_set_ids
# Test for custom_domain equality
# cache_configuration = AzureCoreNull # Reported as issue to azure-mgmt-cdn: https://github.com/Azure/azure-sdk-for-python/issues/35801
# x = CdnManagementClient()
# x.routes.begin_update()
# if there is a group name provided, return facts about that specific proximity placement group
# all the proximity placement groups listed in specific resource group
# all the proximity placement groups in a subscription
# get specific proximity placement group
# self.log('Creating / Updating the ManagementGroup instance {0}'.format(self.))
# self.log('Deleting the ManagementGroup instance {0}'.format(self.))
# self.log('Checking if the ManagementGroup instance {0} is present'.format(self.))
# self.log("ManagementGroup instance : {0} found".format(response.name))
# Copyright (c) 2021 Andrii Bilorus <andrii.bilorus@gmail.com>
# Copyright (c) 2017 Yawei Wang, <yaweiw@microsoft.com>
# Copyright (c) 2022 Mandar Kulkarni, < @mandar242 >
# Different return info is gathered using 2 different clients
# 1. All except "user" section of the return value uses azure.mgmt.subscription.operations.subscriptionoperations
# 2. "user" section of the return value uses different client (GraphServiceClient)
# Get
# "homeTenantId": "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
# "id": "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
# "isDefault": true,                                    <- WIP on getting this param
# "managedByTenants": [
# "name": "Pay-As-You-Go",
# "state": "Enabled",
# "tenantId": "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
# Makes use of azure.mgmt.subscription.operations.subscriptionoperations
# https://docs.microsoft.com/en-us/python/api/azure-mgmt-subscription/azure.mgmt.subscription.operations.subscriptionsoperations?view=azure-python#methods
# Create GraphServiceClient for getting
# "user": {
# Makes use of azure MSGraph
# https://learn.microsoft.com/en-us/graph/api/user-get?view=graph-rest-1.0&tabs=http
# Update associated NAT Gateway
# Disassociate NAT Gateway
# the subnet does not exist
# create new subnet
# update subnet
# delete subnet
# construct scope id
# if api_version was not specified, get latest one
# extract provider and resource type
# if there's no provider in API version, assume Microsoft.Resources
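The parsing described in the comments above (extract provider and resource type, defaulting to Microsoft.Resources) can be sketched like this (the helper `provider_and_type` and its fallback are assumptions, not the module's exact code):

```python
# Sketch: pull the provider namespace and resource type out of a resource id;
# when no 'providers' segment is present, assume 'Microsoft.Resources'.
def provider_and_type(resource_id):
    parts = resource_id.strip("/").split("/")
    if "providers" in parts:
        i = parts.index("providers")
        return parts[i + 1], parts[i + 2]
    return "Microsoft.Resources", parts[-2] if len(parts) >= 2 else None
```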
# Copyright (c) 2021 Cole Neubauer, (@coleneubauer), xuzhang3 (@xuzhang3)
# run a filter based on user input
# Platform telemetry options are currently same as windows_firewall_logs
# Can't properly define this in arg spec
# Get current data collection rule if it exists
# https://learn.microsoft.com/en-us/python/api/azure-mgmt-monitor/azure.mgmt.monitor.v2021_04_01.models.datacollectionruleresource?view=azure-python
# Data collection rule does not exist, create
# Data collection rule already exists, updating it
# Check if we need to update the data collection rule
# Need to create/update the Data collection rule; changed -> True
# assume data_collection_rule_update is resulting object
# Delete data collection rule if state is absent and it exists
# the event hub does not exist so create it
# update the managed identity for first creation
# delete Event Hub
# turn event hub namespace object into a dictionary (serialization)
# Copyright (c) 2024
# Nir Argaman <nargaman@redhat.com>
# Handle exceptions
# Copyright (c) 2017 Bruno Medina Bolanos Cacho <bruno.medina@microsoft.com>
# need to create or update
# Mount the disk to multiple VM
# unmount from the old virtual machine and mount to the new virtual machine
# find the lun
# TODO: Add support for EncryptionSettings, DiskIOPSReadWrite, DiskMBpsReadWrite
# overrides deprecated 'ip_range_filter' parameter
# Copyright (c) 2019 Yunge Zhu (@yungezz)
# parse virtual_network
# get vnet peering
# parse remote virtual_network
# check vnet id not changed
# check remote vnet id not changed
# not exists, create new vnet peering
# check if vnet exists
# create or update proximity placement group
# delete proximity placement group
# create the placement group
# delete the placement group
# self.results['location'] = vault_config.location
# network overhead part
# Copyright (c) 2017 Sertac Ozercan <seozerca@microsoft.com>
# Copyright (c) 2021 Andrii Bilorus, <andrii.bilorus@gmail.com>
# check if the firewall rule exists
# if firewall rule not exists
# redis exists already, do update
# Copyright (c) 2017 Fred-sun, <xiuxi.sun@qq.com>
# self.log('Creating / Updating the GalleryImage instance {0}'.format(self.))
# self.log('Deleting the GalleryImage instance {0}'.format(self.))
# self.log('Checking if the GalleryImage instance {0} is present'.format(self.))
# if not existing
# existing app service plan, do update
# check if sku changed
# check if number_of_workers changed
# if there is a host group name provided, return facts about that dedicated host group
# all the host groups listed in specific resource group
# all the host groups in a subscription
# create new load balancer structure early, so it can be easily compared
# construct the new instance; if a parameter is None, keep the remote one
# self.results['gallery_images'] = self.format_item(self.get())
# self.results['gallery_images'] = self.format_item(self.listbygallery())
# Python SDK Reference: https://learn.microsoft.com/en-us/python/api/azure-mgmt-cdn/azure.mgmt.cdn.operations.afdorigingroupsoperations?view=azure-python
# if an IP group name is provided, return facts about that specific IP group
# all the IP groups listed in specific resource group
# all the IP groups in a subscription
# get specific IP group
# define redis_configuration properties
# check subnet exists
# get existing Azure Cache for Redis
# if redis not exists
# sku can be null, define a default value
# Parameter changed
# Parameter omitted while it was specified before
# if there is a host name provided, return facts about that dedicated host
# all the hosts listed in the specific host group
# Several errors are passed as generic HTTP errors. Catch the error and pass back the basic account information
# if the account doesn't support replication or replication stats are not available.
# get the following try catch from CLI
# create new route table
# delete unspecified properties for clean comparison
# keep backward compatibility
# self.log('Creating / Updating the GalleryImageVersion instance {0}'.format(self.))
# self.log('Deleting the GalleryImageVersion instance {0}'.format(self.))
# self.log('Checking if the GalleryImageVersion instance {0} is present'.format(self.))
# Copyright (c) 2020 Gu Fred-Sun, (@Fred-Sun)
# self.results['galleries'] = self.format_item(self.get())
# self.results['galleries'] = self.format_item(self.listbyresourcegroup())
# self.results['galleries'] = [self.format_item(self.list())]
# add the missing prefixes
# replace existing address prefixes with requested set
# create a new virtual network
# update existing virtual network
# This will hold the resource ID of the user-assigned identity
# change the account type
# Perform the update. The API only allows changing one attribute per call.
# No blob storage available?
# Copyright (c) 2019 Yunge Zhu, (@yungezz)
# create or update firewall policy
# delete firewall policy
# create a firewall policy
# delete a firewall policy
# comparing input values with existing values for given parameter
# check add or update
# Copyright (c) 2023 Patrick Uiterwijk <@puiterwijk>
# Copyright (c) 2020 Haiyuan Zhang <haiyzhan@micosoft.com>
# from ansible_collections.azure.azcollection.plugins.module_utils.azure_rm_common import AzureRMModuleBase
# class AzureRMSqlManagedInstance(AzureRMModuleBase):
# sql_managed_instance = self.update_sql_managed_instance(self.body)
# all the DDoS protection plans listed in that specific resource group
# all the DDoS protection plans listed in the subscription
# Copyright (c) 2020 Suyeb Ansari(@suyeb786), Pallavi Chaudhari(@PallaviC2510)
# self.log('Fetching protection details for the Azure Virtual Machine {0}'.format(self.))
# Copyright (c) 2020 Guopeng Lin, <linguopeng1998@gmail.com>
# allow_guests_sign_in=self.allow_guests_sign_in,
# self.log("Did not find the graph instance {0} - {1}".format(self.app_id, str(ge)))
# return False
# allow_guests_sign_in=object.allow_guests_sign_in,
# value -> secret_text
# value -> key
# UTF-16 is used by the AAD portal. Do not change it to another encoding
# unless you know what you are doing.
# value -> additional_data
# Argument validation
# check the subspec and metadata
# rerun validation and module
# use recordset to get the type-specific records
# Compare the input records to the server records
# check the top-level recordset properties
# create record set
# create the record set
# when source(destination)_application_security_groups is set, remove the default value '*' of source(destination)_address_prefix
# if the new one is in the old list, check whether it is updated
# keep this rule
# one rule is removed
# Check whether the new list and the old list are the same; here we only compare names
# type: azure.mgmt.network.models
# tighten up poll interval for security groups; default 30s is an eternity
# this value is still overridden by the response Retry-After header (which is set on the initial operation response to 10s)
# self.network_client.config.long_running_operation_timeout = 3
# TODO: actually check for ResourceMissingError
# update the security group
# create the security group
# If the "path" is not defined, it's cwd.
# If kubeconfig file already exists, compare it with the new file
# If equal, do nothing, otherwise, override.
# No need to close the temp file as it's closed by filecmp.cmp.
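The compare-then-override step above can be sketched with `filecmp.cmp`; the function name and paths here are illustrative, not the module's real variables.

```python
import filecmp
import os
import shutil

def install_kubeconfig(new_path, dest_path):
    """Overwrite dest_path with new_path only when the contents differ.

    Returns True when the destination was (re)written.
    """
    if os.path.exists(dest_path) and filecmp.cmp(new_path, dest_path, shallow=False):
        return False  # contents are equal: do nothing
    shutil.copyfile(new_path, dest_path)  # otherwise, override
    return True
```

Using `shallow=False` forces a byte-by-byte comparison rather than trusting `os.stat` metadata.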
# add file path validation
# create the container
# update container attributes
# create, update or download blob
# Delete blob
# when a blob is present, then tags are assigned at the blob level
# dest is a directory
# does path exist without basename
# dest already exists and we're not forcing
# TODO: Name check the URL
# Location is not currently implemented in begin_update()
# private_ip_address=dict(type='str'),
# Copyright (c) 2016 Thomas Stringer, <tomstr@microsoft.com>
# Copyright (c) 2020 Paul Aiton, (@paultaiton)
# check if the role assignment exists
# If assignment doesn't exist, that's the desired state.
# Hen Yaish <hyaish@redhat.com>
# Copyright (c) 2017 Fred Sun, <xiuxi.sun@qq.com>
# If role_definition_id is set, we only want results matching that id.
# the atScope filter limits results to the exact scope plus parent scopes. Without it, all children would be returned too.
# If assignee is set we only want results matching that assignee.
# If strict_scope_match is true we only want results matching exact scope.
# self.log('Creating / Updating the Snapshot instance {0}'.format(self.))
# self.log('Deleting the Snapshot instance {0}'.format(self.))
# self.log('Checking if the Snapshot instance {0} is present'.format(self.))
# Copyright (c) Ansible Project
# resource group does not exist
# fetch the RG directly (instead of using the base helper) since we don't want to exit if it's missing
# Blocking wait till the delete is finished
# time.sleep(15) # there is a race condition between when we ask for deployment status and when the
# If we fail here, the original error gets lost and the user receives the wrong error message/stacktrace
# Define error_map with common http error codes
# If set to None we default to the max which is 1 hour
# little endian: b'\x01\x00'
# big endian: b'\x00\x01'
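The two byte orders quoted above can be reproduced with `int.to_bytes`:

```python
# little endian puts the least significant byte first; big endian the reverse
little = (1).to_bytes(2, "little")  # b'\x01\x00'
big = (1).to_bytes(2, "big")        # b'\x00\x01'
```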
# FUTURE: this should come from the SDK or an external location.
# For now, we have to copy from azure-cli
# This passes the sanity import test, but does not provide a user friendly error message.
# Doing so would require catching Exception for all imports of Azure dependencies in modules and module_utils.
# FUTURE: either get this from the requirements file (if we can be sure it's always available at runtime)
# or generate the requirements files from this so we only have one source of truth to maintain...
# self.debug = self.module.params.get('debug')
# delegate auth to AzureRMAuth class (shared with all plugin types)
# common parameter validation
# Ensure Azure modules are at least 2.0.0rc5.
# can't get at the module version for some reason, just fail silently...
# resource group object fits this model
# Open default ports based on OS type
# add an inbound SSH rule
# for windows add inbound RDP and WinRM rules
# Open custom ports
# wrap basic strings in a dict that just defines the default
# graphrbac has been deprecated; migrate to msgraph
# def get_graphrbac_client(self, tenant_id):
# most things are resource_manager, don't make everyone specify
# https://github.com/Azure/msrestazure-for-python/pull/169
# China's base_url doesn't end in a trailing slash, though others do,
# and we need a trailing slash when generating credential_scopes below.
# Some management clients do not take a subscription ID as parameters.
# unversioned clients won't accept profile; only send it if necessary
# clients without a version specified in the profile will use the default
# If the client doesn't accept api_version, it's unversioned.
# If it does, favor explicitly-specified api_version, fall back to api_profile
# remove profile; only pass API version if specified
# FUTURE: remove this once everything exposes models directly (eg, containerinstance)
# Add user agent for Ansible
# Add user agent when running from Cloud Shell
# Add user agent when running from VSCode extension
# passthru methods to AzureAuth instance for backcompat
# authenticate
# cert validation mode precedence: module-arg, credential profile, env, "validate"
# Disable instance discovery: module-arg, credential profile, env, "False"
# if cloud_environment specified, look up/build Cloud object
# SDK default
# try to look up "well-known" values via the name attribute on azure_cloud members
# get authentication authority
# for ADFS, the user may or may not pass in an authority.
# for others, use default authority from cloud environment
# MSI Credentials
# AzureCLI credentials
# Get object `cloud_environment` from string `_cloud_environment`
# use the first subscription of the MSI
# Get authentication credentials.
# auto, precedence: module parameters -> environment variables -> default profile in ~/.azure/credentials -> azure cli
# try module params
# try environment
# try the default profile from ~/.azure/credentials
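The precedence chain above (module parameters, then environment variables, then the `~/.azure/credentials` profile, then azure cli) amounts to taking the first non-empty source; a minimal sketch, with hypothetical argument names:

```python
def resolve_credentials(module_params, env_vars, profile, cli):
    """Return the first non-empty credential source, in precedence order:
    module params -> environment -> ~/.azure/credentials profile -> azure cli."""
    for source in (module_params, env_vars, profile, cli):
        if source:
            return source
    return None  # nothing found: caller decides how to fail
```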
# Use only during module development
# if self.debug:
# This schema should be used when users can add more than one user assigned identity
# Also this schema allows the option to append identities in update
# This schema should be used when users can add only one user assigned identity
# first check if option was passed
# check if pattern needs to be used
# should fail if level is > 0?
# check if any extra values passed
# format url
# Converting from single_managed_identity_spec to managed_identity_spec
# If type set to None, and Resource has None, nothing to do
# If type set to None, and Resource has current identities, remove UserAssigned identities
# Or
# If type in module args different from current type, update identities
# Update User Assigned identities to the model
# Set identity to None to remove it
# Set identity to user_assigned to add it
# Support http errors by status code
# Construct and send request
# raise common http errors
# Copyright (c) 2018-2019 Red Hat, Inc.
# Copyright (c) 2020 Infoblox, Inc.
# maybe using network.prefixlen+1 as default
# check for ip version 4 or 6 else die
# check for valid subnetting cidr
# Plugin interface (2)
# restart is a grid function, so we need to properly format
# the arguments before sending the command
# Copyright © 2020 Infoblox Inc
# from infoblox documentation
# Fields List
# Field         Type            Req     R/O     Base    Search
# comment               String          N       N       Y       : = ~
# extattrs              Extattr         N       N       N       ext
# external_primaries    [struct]        N       N       N       N/A
# external_secondaries  [struct]        N       N       N       N/A
# grid_primary          [struct]        N       N       N       N/A
# grid_secondaries      [struct]        N       N       N       N/A
# is_grid_default       Bool            N       N       N       N/A
# is_multimaster        Bool            N       Y       N       N/A
# name                  String          Y       N       Y       : = ~
# use_external_primary  Bool            N       N       N       N/A
# cleanup tsig fields
# Error checking that only one member type was defined
# A member node was passed in. Ensure the correct type and struct
# A FO association was passed in. Ensure the correct type is set
# MS server was passed in. Ensure the correct type and struct
# one of name or num is required; enforced by the function options()
# This is what gets posted to the WAPI API
# to check for vendor specific dhcp option
# Module entry point
# removing the container key from post arguments
# to get the argument ipaddr
# options-router-templates, broadcast-address-offset, dhcp6.name-servers don't have any associated number
# to modify argument based on ipaddr type i.e. IPV4/IPV6
# defining nios constants
# apply default values from NIOS_PROVIDER_SPEC since we cannot just
# assume the provider values are coming from AnsibleModule
# override any values with env variables unless they were
# explicitly set
# Check if the key is empty (None, empty string, empty list, etc.)
# Add more empty checks if needed
# If the key doesn't exist in current_object, mark it for removal
# Remove the identified keys from proposed_object
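The cleanup described above can be sketched as one small helper; the function and parameter names are assumptions, not the module's real ones.

```python
def strip_empty_missing(proposed_object, current_object):
    """Drop keys from proposed_object that are empty (None, '', [], {})
    and do not exist in current_object, so they don't produce a spurious diff."""
    empties = (None, "", [], {})
    for key in [k for k, v in proposed_object.items()
                if v in empties and k not in current_object]:
        del proposed_object[key]
    return proposed_object
```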
# get object reference
# When a range update is defined, check for a range that matches the target range definition as well
# to allow for idempotence
# If configure_by_dns is set to False and view is 'default', then delete the default dns
# To check for existing A_record with same name with input A_record by IP
# To check for existing Host_record with same name with input Host_record by IP
# Else set the current_object with input value
# checks if the object type is member to normalize the attributes being passed
# The WAPI API will never return the "create_token" field that causes a difference
# with the defaults of the module. To prevent this we remove the "create_token" option
# if it has not been set to true.
# Iterate over each option and remove the 'num' key
# remove use_options false from proposed_object
# remove use_options false from current_object
# Convert 'default_value' to string in both proposed_object and current_object if it exists
# checks if the 'text' field has to be updated for the TXT Record
# checks if the name's field has been updated
# this check is for idempotency: if the same ip address is passed, the
# add param will be removed, and the same holds true for the remove case as well.
# Checks if 'new_ipv4addr' param exists in ipv4addr args
# Removes keys from the proposed_object that are empty and do not exist in current_object.
# Fix the issue to update the optional fields of the object with default empty values
# Checks if nios_next_ip param is passed in ipv4addrs/ipv4addr args
# Check if NIOS_MEMBER and the flag to call function create_token is set
# the function creates a token that can be used by a pre-provisioned member to join the grid
# popping 'view' key as update of 'view' is not supported with respect to a:record/aaaa:record/srv:record/ptr:record/naptr:record
# popping 'zone_format' key as update of 'zone_format' is not supported with respect to zone_auth
# Remove 'use_for_ea_inheritance' from each dictionary in 'ipv4addrs'
# WAPI always resets the use_for_ea_inheritance for each update operation
# Handle use_for_ea_inheritance flag changes for IPv4addr in a host record
# Fetch the updated reference of host to avoid drift.
# Create a dictionary for quick lookups
# Normalize MAC address for comparison
# if proposed has a key that current doesn't, then the objects are
# not equal and False will be immediately returned
# If the lists are of a different length, the objects cannot be
# equal, and False will be returned before comparing the list items
# this part of the code handles member assignment
# Validate the Sequence of the List data
# Skip non-dict items
# Host ipv4addrs won't contain use_nextserver and nextserver
# if DHCP is false.
# Compare the items of the dict to see if they are equal. A
# difference stops the comparison and returns false. If they
# are equal, move on to the next item
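The comparison rules spelled out above (an extra key in proposed, a length mismatch, or any differing item means "not equal") can be sketched as a small recursive helper; this is an illustration, not the module's actual comparison function.

```python
def objects_equal(proposed, current):
    """Return False at the first difference, True if every proposed item matches."""
    for key, value in proposed.items():
        if key not in current:
            return False  # proposed has a key that current doesn't
        other = current[key]
        if isinstance(value, list) and isinstance(other, list):
            if len(value) != len(other):
                return False  # different lengths cannot be equal
            for a, b in zip(value, other):
                if isinstance(a, dict) and isinstance(b, dict):
                    if not objects_equal(a, b):
                        return False
                elif a != b:
                    return False
        elif isinstance(value, dict) and isinstance(other, dict):
            if not objects_equal(value, other):
                return False
        elif value != other:
            return False  # a difference stops the comparison
    return True
```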
# Checks if extattrs existing in proposed object
# gets and returns the current object based on name/old_name passed
# check if network_view allows searching and updating with camelCase
# to check only by old_name if dns bypassing is set
# if there are multiple records with the same name and different ip
# get the object reference
# to fix the sanity issue
# to check only by name if dns bypassing is set
# resolves issue where a_record with uppercase name was returning null and was failing
# resolves issue where multiple a_records with same name and different IP address
# resolve issue if nios_next_ip exists which is not searchable attribute
# resolves issue where multiple txt_records with same name and different text
# removing Port param from get params for NIOS_DTC_MONITOR_TCP
# check if test_obj_filter is empty copy passed obj_filter
# prevents creation of a new A record with 'new_ipv4addr' when A record with a particular 'old_ipv4addr' is not found
# prevents creation of a new TXT record with 'new_text' when TXT record with a particular 'old_text' is not found
# del key 'restart_if_needed' as nios_zone get_object fails with the key present
# reinstate restart_if_needed if ib_obj is none, meaning there's no existing nios_zone ref
# gets and returns current_object as per old_name/host_name passed
# reinstate 'create_token' key
# del key 'create_token' as nios_member get_object fails with the key present
# del key 'template' as nios_network get_object fails with the key present
# del key 'members' as nios_network get_object fails with the key present
# Don't reinstate the field after as it is not valid for network containers
# reinstate the 'template' and 'members' key
# Delete the update keys to find the original range object
# Restore the keys to the object.
# throws an exception if start_addr and end_addr don't exist when updating a range
# Shims for resource module support
# This method is ONLY here to support resource modules. Therefore most
# arguments are unsupported and not present.
# to close session gracefully execute abort in top level session prompt.
# prepare candidate configuration
# running configuration
# This requires enable mode to run
# re.compile(br"^% \w+", re.M),
# Strings like this regarding VLANs are not errors
# returned in response to 'channel-group <name> mode <mode>'
# switching operational vrfs here
# need to add the desired state as well
# For EAPI we need to construct a dict with cmd/input
# key/values for the banner
##############################################
# gather all comma separated interfaces
# handle domain_list items to be removed
# handle domain_list items to be added
# handle lookup_source items to be removed
# handle lookup_source items to be added
# handle name_servers items to be removed.  Order does matter here
# since name servers can only be in one vrf at a time
# handle name_servers items to be added
# { interface: <str>, vrf: <str> }
# { server: <str>, vrf: <str> }
# Refuse to diff_against: session if sessions are disabled
# recreate the object in order to process diff_ignore_lines
# compare against image version 4.20.10
# the eos cli prevents this by rule so capture it and display
# a nice failure message
# Emulate '| json' from CLI
# parse native config using the Vrf_global template
# parse native config using the Ospfv3 template
# parse native config using the Ntp_global template
# new vrf
# new afi and not the first updating all prev configs
# To check the format of the dest
# For a new dest that is not the first dest
# acls_list.append(name_dict)
# populate the facts from the configuration
# TODO: listify?
# remove global configs from bgp_address_family
# parse native config using the Bgp_af template
# This module was verified on an IOS device since vEOS does not support
# acl_interfaces configuration. In IOS, an ipv6 acl is configured as a
# traffic-filter, and in EOS it is an access-group
# access_group_v6 = utils.parse_conf_arg(line, 'ipv6 traffic-filter')
# remaining options are all e.g., 10half or 40gfull
# remove address_family configs from bgp_global
# Skip other protocol configs like router ospf etc
# parse native config using the Bgp_global template
# instance_list.append(ospf_params_dict)
# config.update({"ospf_version": "v2", "ospf_processes": instance_list})
# Handle vlans not already in config
# remove superfluous config for overridden
# overridden
# if neither the old nor the new format matches, fail!
# passing the dict without vrf, in order to avoid a 'no router ospfv3' command
# It doesn't match the new format or the old format, fail here
# move negate commands to beginning
# new interfaces in want
# Removing non-secondary removes all other interfaces
# Flatten the configurations for comparison
# Start with a copy of `want`
# Check if the whole ACL is already present
# Check sequence-specific entries
# Match sequence number and full content
# Entry already exists, skip
# Generate commands for any remaining entries in `config`
# Add ACL definition
# Add ACL entry
# gives the ace to be updated in case of merge update.
# if its a valid port number, then convert it to service name
# eg: 8082 -> us-cli
# if socket.getservbyport is unable to resolve the port name then directly use the port number
# eg: 50702
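The fallback described above (resolve a numeric port to a service name when possible, otherwise keep the number) can be sketched with `socket.getservbyport`; the helper name is an assumption.

```python
import socket

def port_to_service(port):
    """Return the service name for a port (e.g. 80 -> 'http' where the
    system knows it); fall back to the port itself as a string when
    getservbyport cannot resolve it."""
    try:
        return socket.getservbyport(int(port))
    except (OSError, ValueError):
        return str(port)
```

Resolution depends on the system's services database, so unknown high ports simply come back unchanged.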
# gives the diff of the aces passed.
# convert list of dicts to dicts of dicts
# Removing the alias key
# skip superfluous configs for replaced
# check if want has vrf or not
# if want does not have a vrf, the device's vrf config will not
# be touched.
# Clearing all args, send empty dictionary
# merged and deleted are likely to emit duplicate tlv-select commands
# a_cmd = "traffic-filter" if w['afi'] == 'ipv6' else "access-group"
# duplex is handled with speed
# switching from default (layer2) mode to layer3
# setting to default (layer 2) mode
# Don't try to also remove the same key, if present in to_remove
# had to do this to escape flake8 and black errors
# This file is auto generated by ansible.content_builder. Changes should be made
# in the documentation in the module file; then re-run
# ansible.content_builder, commenting out
# the path to the external 'docstring' in build.yaml.
# from EOS 4.23, 'bgp listen limit' is replaced by 'dynamic peer max'.
# disable no-self-use
# pylint: disable=R0201
# pylint: disable=W0642
# pylint: disable=no-self-argument
# Copyright: (c) 2019, Or Soffer <orso@checkpoint.com>
# Case of read-only
# we only replace gaia_ip/ with web_api/gaia-api/ if target is set and path contains gaia_ip/
# Ansible module to manage Check Point Firewall (c) 2019
# Ansible module to manage CheckPoint Firewall (c) 2019
# Create lsm-cluster name
# check_fields_for_rule_action_module(module_args)
# Add to the THIS list for the value which needs to be excluded
# from HAVE params when compared to WANT param like 'ID' can be
# part of HAVE param but may not be part of your WANT param
# parse failure message with code and response
# send the request to checkpoint
# get the payload from the user parameters
# build the payload from the parameters which has value (not None), and they are parameter of checkpoint API as well
# special handle for this param in order to avoid two params called "version"
# message & syslog_facility are internally used by Ansible, so need to avoid param duplicity
# wait for task
# As long as there is a task in progress
# Check the status of the task
# Count the number of tasks that are not in-progress
# Are we done? check if all tasks are completed
# Wait for two seconds
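The polling loop described above can be sketched as follows; `get_statuses` is a hypothetical callable standing in for the real per-task status check, and the two-second interval matches the comment.

```python
import time

def wait_for_tasks(get_statuses, interval=2.0, timeout=60.0):
    """Poll until no task is still in progress, sleeping between polls."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        statuses = get_statuses()
        # are we done? check if all tasks have left the in-progress state
        if all(s != "in progress" for s in statuses):
            return statuses
        time.sleep(interval)  # wait before polling again
    raise TimeoutError("tasks still in progress after %ss" % timeout)
```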
# Getting a status description and comments of task failure details
# if a failure occurred, in some cases we want to discard changes before exiting. We also notify the user about the `discard`
# Read-only mode without UID
# handle publish command, and wait for it to end if the user asked so
# if the user provides a specific version, we add it to the url
# if code is 400 (bad request) or 500 (internal error) - fail
# handle call
# handle a command
# handle api call facts
# if there isn't an identifier param, the API command will be in plural form (e.g. show-hosts instead of show-host)
# handle delete
# else equals_code is 404 and there is no need to delete because the object doesn't exist
# handle the call and set the result with 'changed' and the response
# handle api call
# else objects are equals and there is no need for set request
# returns a generator of the entire rulebase. show_rulebase_identifier_payload can be either package or layer
# in case there are empty sections after the last rule, we need them to appear in the reply and the limit might
# cut them out
# get 'to' or 'from' of given section
# return the total amount of rules in the rulebase of the given layer
# if 'above' in relative_position then get_number_and_section_from_relative_position returns the previous section
# so there isn't a need to further search for the relative section
# if relative position is a rule then get_number_and_section_from_relative_position has already entered the section
# (if exists) that the relative rule is in
# cases relevant for relative-position=section
# if the entire section isn't present in rulebase, the 'from' value of the section might not be
# the first position in the section, which is why we use get_edge_position_in_section
# section exists in rulebase
# we update this only after the 'above' case since the section that should be returned in that case isn't
# the one we are currently iterating over (but the one beforehand)
# if the entire section isn't present in rulebase, the 'to' value of the section might not be the
# last position in the section, which is why we use get_edge_position_in_section
# meaning the entire section is present in rulebase
# checks whether the rule is already at the bottom of the section; this can be inferred only if the entire
# section is present in the rulebase
# else: need to keep searching the rulebase, so position=None is returned
# setting a rule 'below' a section is equivalent to setting the rule at the top of that section
# is the rule already at the top of the section
# if search_entire_rulebase=True: even if rules['rulebase'] is cut (due to query limit) this will
# eventually be updated to the correct value in further calls
# cases relevant for relative-position=rule
# None, None, False/True, x>=1, None
# get the position in integer format and the section it is.
# is a number so we need to get the section (if exists) of the rule in that position
# section = None
# checks whether the rule whose position number we're getting is above the rule it is relatively positioned to
# no from-to in empty sections so can't infer the position from them -> need to keep track of the position
# before the empty relative section
# need to keep track of the previous section in case the iteration starts with a new section and
# we want to set the rule above a section - so the section the rule should be at is the previous one
# build the show rulebase payload
# remove from payload unrecognized params (used for cases where add payload differs from that of a set)
# extract first rule from given rulebase response and the section it is in.
# skip empty sections (possible when offset=0)
# returns the show rulebase payload with the relevant required identifiers params
# mobile-access-x apis don't have an identifier in show rulebase command
# returns the show section/rule payload with the relevant required identifying package/layer
# checks whether the position param (if the user provided it) is equal between the object and the user input, as well as the section the rule is in
# In this case one of the following has occurred:
# 1) There is no position param, then it's equals in vacuous truth
# 2) search_entire_rulebase = False so it's possible the relative rule wasn't found in the default limit or maybe doesn't even exist
# 3) search_entire_rulebase = True and the relative rule/section doesn't exist
# if the names of the existing rule and the user input rule are equal, as well as the section they're in, then it
# means that their positions are equal, so True is returned; there is no way another rule has this
# name because otherwise the 'equals' command would fail
# get copy of the payload without some of the params
# get copy of the payload with only some of the params
# checks equality with all the params including action and position
# here the action is equal, so check the position param
# handle api call for rule
# if the user provided the 'position' param and the 'set' command is needed, change the param name to 'new-position'
# check if call is in plural form
# handle api call facts for rule
# if there is no layer, the API command will be in plural form (e.g. show-https-rulebase instead of show-https-rule)
# The code from here till EOF will be deprecated when Rikis' modules are deprecated
# checkpoint_argument_spec = dict(
# the REST data that doesn't follow the deepsec_* naming convention
# if module.params['wait_for_task_timeout'] is not None and module.params['wait_for_task_timeout'] >= 0:
# handle the call and set the result with 'changed' and the response
# else equals_code is 404 and no need to delete because object doesn't exist
# This fn. will return both code and response, once all of the available modules
# Copyright: (c) 2017, Wayne Witzel III <wayne@riotousliving.com>
# Ansible Tower documentation fragment
# Automation Platform Controller documentation fragment
# Copyright: (c) 2020, Ansible by Red Hat, Inc
# plugin constructor
# All frequencies can use a start date
# If we are a none frequency we don't need anything else
# All non-none frequencies can have an end_on option
# A week-based frequency can also take the on_days parameter
# A month-based frequency can also deal with month_day_number and on_the options
# All frequencies can use a timezone but rrule can't support the format that AWX uses.
# So we will do a string manip here if we need to
# rrule puts a \n in the rule instead of a space and can't handle timezones
# AWX requires an interval. rrule will not add interval if it's set to 1
# Defer processing of params to logic shared with the modules
# Create our module
# We are going to tolerate multiple types of input here:
# something: 1 - A single integer
# something: "1" - A single str
# something: "1,2,3" - A comma separated string of ints
# something: "1, 2,3" - A comma separated string of ints (with spaces)
# something: ["1", "2", "3"] - A list of strings
# something: [1,2,3] - A list of ints
# If they give us a single int, lets make it a list of ints
# If its not a list, we need to split it into a list
# If they have a list of strs, we want to strip each str in case it's space-delimited
# If value happens to be an int (from a list of ints) we need to coerce it into a str for the re.match
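The tolerant parsing enumerated above can be sketched in a few lines; the function name is an assumption, and the regex mirrors the coerce-to-str-then-match step the comments describe.

```python
import re

def normalize_ids(value):
    """Accept 1, "1", "1,2,3", "1, 2,3", ["1", "2", "3"], or [1, 2, 3];
    return a list of ints."""
    if isinstance(value, int):
        value = [value]  # single int -> list of ints
    if not isinstance(value, list):
        value = str(value).split(",")  # split a comma-separated string
    ids = []
    for item in value:
        item = str(item).strip()  # coerce ints to str, drop spaces
        if not re.match(r"^\d+$", item):
            raise ValueError("invalid id: %r" % item)
        ids.append(int(item))
    return ids
```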
# Validate the start date
# 366 for leap years
# rrule puts a \n in the rule instead of a space and can't handle timezones
# Only the first rule needs the dtstart in a ruleset so remaining rules we can split at \n
# If we are an exclude rule we need to flip from an rrule to an ex rule
# For a sanity check lets make sure our rule can parse. Not sure how we can test this though
# return self.get_rrule(frequency, kwargs)
# REPLACE
# Stays backward compatible with the inventory script.
# If the user supplies '@controller_inventory' as path, the plugin will read from environment variables.
# validate type of inventory_id because we allow two types as special case
# To start with, create all the groups.
# Then, create all hosts and add the host vars.
# Lastly, create to group-host and group-group relationships, and set group vars.
# First add hosts to groups
# Then add the parent-children group relationships.
# add the child group to groups; if it's already there it will just throw a warning
# Set the group vars. Note we should set group var for 'all', but not '_meta'.
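The three passes above (create groups, add hosts with hostvars, then wire the group-host and group-group relationships) can be sketched as below; the data shapes are assumptions, not the plugin's real structures.

```python
def build_inventory(group_vars, hostvars, host_groups, group_children):
    """group_vars: {group: vars}, hostvars: {host: vars},
    host_groups: [(host, group)], group_children: [(parent, child)]."""
    # to start with, create all the groups
    inventory = {g: {"hosts": [], "children": [], "vars": v}
                 for g, v in group_vars.items()}
    # first add hosts to groups
    for host, group in host_groups:
        inventory[group]["hosts"].append(host)
    # then add the parent-children group relationships
    for parent, child in group_children:
        if child not in inventory:  # create the child on demand
            inventory[child] = {"hosts": [], "children": [], "vars": {}}
        inventory[parent]["children"].append(child)
    return inventory, hostvars
```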
# Fetch extra variables if told to do so
# Clean up the inventory.
# (c) 2020, John Westcott IV <john.westcott.iv@redhat.com>
# Any additional arguments that are not fields of the item can be added here
# Create a module for ourselves
# Extract our parameters
# Attempt to look up the related items the user specified (these will fail the module if not found)
# Attempt to look up an existing item based on the provided data
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
# Set lookup data to use
# Create the data that gets sent for create and update
# In the case of a new object, the utils need to know it is a node
# If the state was present and we can let the module build or update the existing item, this will return on its own
# Create approval node unified template or update existing
# Set Approval Fields
# Extract Parameters
# Find created workflow node ID
# Since we are unable to look up workflow_approval_templates, find the existing item in another place
# (c) 2017, Wayne Witzel III <wayne@riotousliving.com>
# Attempt to look up the object based on the provided name and inventory ID
# Preserve existing objects
# If the state was present we can let the module build or update the existing group, this will return on its own
# (c) 2020,Geoffrey Bachelot <bachelotg@gmail.com>
# Attempt to look up application based on the provided name and org ID
# Attempt to look up the job based on the provided name
# (c) 2021, Sean Sullivan
# Attempt to look up jobs based on the status
# The current running job for the update is in last_request['summary_fields']['current_update']['id']
# Get parameters that were not passed in
# Invoke wait function
# Set changed to the correct value depending on whether the hash changed; also output the refspec comparison
# Alias for manual projects
# Attempt to look up project based on the provided name and org ID
# Attempt to look up credential to copy based on the provided name
# when copying, a new item is created and returned as the existing item.
# Attempt to look up associated field items the user specified.
# this is resolved earlier, so save an API call and don't do it again in the loop above
# Respect local_path if scm_type is manual type or not specified
# If we are doing a not manual project, register our on_change method
# An on_change function, if registered, will fire after an post_endpoint or update_if_needed completes successfully
# If the state was present and we can let the module build or update the existing project, this will return on its own
# Attempt to look up inventory based on the provided name and org ID
# We need to perform a check to make sure you are not trying to convert a regular inventory into a smart one.
# If the state was present, we can let the module build or update the existing inventory; this will return on its own
# Resolve name to ID for related resources
# Do not resolve name for "jobs" suboptions, for optimization
# Launch the jobs
# This is for backwards compatibility
# Attempt to look up the command based on the provided name
# If we are past our timeout, fail with a message
# Account for Legacy messages
# Put the process to sleep for our interval
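The timeout/sleep comments above describe a simple polling loop. A minimal stand-alone sketch (the function and parameter names here are illustrative, not the module's real API) could look like:

```python
import time

# Hypothetical sketch of the wait loop described above: poll a status
# callback until the job finishes or the timeout expires.
def wait_for_finish(fetch_status, timeout, interval=2.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll fetch_status() until it returns a finished status or timeout elapses."""
    start = clock()
    while True:
        status = fetch_status()
        if status in ("successful", "failed", "error", "canceled"):
            return status
        # If we are past our timeout, fail with a message
        if clock() - start > timeout:
            raise TimeoutError("Monitoring aborted due to timeout")
        # Put the process to sleep for our interval
        sleep(interval)
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting.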
# Attempt to look up command based on the provided id
# (c) 2018, Nikhil Jain <nikjain@redhat.com>
# If our value is already None we can just return directly
# If we were given a name/value pair we will just make settings out of that and proceed normally
# Load the existing settings
# Begin a json response
# Check any of the settings to see if anything needs to be updated
# At least one thing is different so we need to patch
# If nothing needs an update we can simply exit with the response (as not changed)
# Make the call to update the settings
# Set the changed response to True
# To deal with the old style values we need to return 'value' in the response
# If we were using a name we will just add a value of a string, otherwise we will return an array in values
# (c) 2019, John Westcott IV <john.westcott.iv@redhat.com>
# If the state was absent we can delete the endpoint and exit.
# Check if Tower is already licensed
# Determine if we will install the license
# Handle check mode
# Do the actual install, if we need to
# (c) 2020, Ansible by Red Hat, Inc
# get rid of None type values
# peer item can be an id or address
# if address, get the id
# (c) 2020, Bianca Henderson <bianca@redhat.com>
# Sync the inventory source(s)
# Delete the hosts
# (c) 2020, Shane McDonald <shanemcd@redhat.com>
# NOTE: Default for pull differs from API (which is blank by default)
# (c) 2017, John Westcott IV <john.westcott.iv@redhat.com>
# These two lines are not needed if awxkit changes to do programmatic notifications on issues
# In this module we don't use EXPORTABLE_RESOURCES, we just want to validate that our installed awxkit has import/export
# noqa: F401; pylint: disable=unused-import
# Currently the import process does not return anything on error
# It simply just logs to Python's logger
# Set up a log gobbler to get error messages from import_assets
# Run the import process
# Finally, consume the logs in case there were any errors and die if there were
# Attempt to look up workflow job based on the provided id
# Special treatment of tags parameters
# Create a datastructure to pass into our job launch
# Attempt to look up job_template based on the provided name
# The API will allow you to submit values to a job launch that are not prompt on launch.
# Therefore, we will test to see if anything is set which is not prompt on launch and fail.
# Check if either ask_variables_on_launch or survey_enabled is enabled for use of extra vars.
# Launch the job
# Attempt to look up project based on the provided name or id
# Update the project
# Credentials will be a str instead of a list for backwards compatibility
# job_timeout is special because it's actually timeout, but we already had a timeout variable
# We need to clear out the name from the search fields so we can use name_or_id in the following searches
# We need to clear out the organization from the search fields; the searches for labels and instance_groups don't support it and it won't be needed anymore
# (c) 2018, Samuel Carpentier <samuelcarpentier0@gmail.ca>
# We are not going to raise an error here because the __init__ method of ControllerAWXKitModule will do that for us
# The export process will never change the AWX system
# The exporter code currently works like the following:
# Here we are going to setup a dict of values to export
# If we are exporting everything or we got the keyword "all" we pass in an empty string for this asset type
# Otherwise we take either the string or None (if the parameter was not passed) to get one or no items
# Currently the export process does not return anything on error
# Set up a log gobbler to get error messages from export_assets
# Run the export process
# Deal with legacy parameters
# Change workflows to its endpoint name.
# Lookup actor data
# separate actors from resources
# Lookup Resources
# Attempt to look up project based on the provided name, ID, or named URL and lookup data
# build association agenda
# perform associations
# (c) 2018, Adrien Fleury <fleu42@gmail.com>
# These will be passed into the create/updates
# Attempt to look up credential_type based on the provided name
# If the state was present, we can let the module build or update the existing credential type; this will return on its own
# Not sure how to make this actually return a non-200 to test what to dump in the response
# Deal with legacy credential and vault_credential
# Special treatment of extra_vars parameter
# A token is special because you can never get the actual token ID back from the API.
# So the default module return would give you an ID but then the token would forever be masked on you.
# This method will return the entire token object we got back so that a user has access to the token
# If we are state absent make sure one of existing_token or existing_token_id are present
# Attempt to look up job based on the provided id
# Attempt to look up host based on the provided name and inventory ID
# If the state was present, we can let the module build or update the existing host; this will return on its own
# Attempt to look up team based on the provided name and org ID
# If the state was present, we can let the module build or update the existing team; this will return on its own
# Lookup Job Template ID
# Lookup Values for other fields
# Two lookup methods are used based on a fix added in 21.11.0, and the awx export model
# Set Search fields
# Determine if state is present or absent.
# Start Approval Node creation process
# Attempt to look up an existing item just created
# Get id's for association fields
# Extract out information if it exists
# Test if it is defined, else move to next association.
# Search for existing nodes.
# Loop through found fields
# Extract schema parameters
# Get Workflow information in case one was just created.
# Destroy current nodes if selected.
# Work through and look up the value for schema fields
# Create Schema Nodes
# Create Schema Associations
# Copyright: (c) 2018, Adrien Fleury <fleu42@gmail.com>
# How do we handle manual and file? The controller does not seem to be able to activate them
# Layer in all remaining optional information
# Attempt to JSON encode source vars
# Sanity check on arguments
# If the state was present we can let the module build or update the existing inventory_source_object, this will return on its own
# Attempt to look up the object based on the provided name, credential type and optional organization
# Create a copy of lookup data for copying without org.
# If we don't already have a credential (and we are creating one) we can add user/team
# The API does not appear to do anything with these after creation anyway
# NOTE: We can't just add these on a modification because they are never returned from a GET so it would always cause a changed=True
# Copyright: (c) 2020, Tom Page <tpage@redhat.com>
# Attempt to look up the object based on the target credential and input field
# Attempt to look up organization based on the provided name
# If the state was present, we can let the module build or update the existing organization; this will return on its own
# Create a datastructure to pass into our command launch
# extra_var can receive a dict or a string; if a dict, convert it to a string
# Launch the ad hoc command
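The extra_var conversion above can be sketched in a couple of lines; the helper name is an assumption, not the module's actual function:

```python
import json

# Sketch of the extra_vars handling described above: the parameter may arrive
# as a dict or a string; a dict is serialized to a JSON string before launch.
def normalize_extra_vars(extra_vars):
    """Return extra_vars as a string, converting dicts via json.dumps."""
    if isinstance(extra_vars, dict):
        return json.dumps(extra_vars)
    return extra_vars
```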
# Die if we don't have AWX_KIT installed
# Establish our connection object
# Associations of these types are ordered and have special consideration in the modified associations function
# Parameters specified on command line will override settings in any config
# Perform magic depending on whether controller_oauthtoken is a string or a dict
# Perform some basic validation
# Try to parse the hostname as a url
# Store URL prefix for later use in build_url
# Remove ipv6 square brackets
# Try to resolve the hostname
# Make sure we start with /api/vX
# Update the URL path with the endpoint
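The hostname-validation steps above (parse as URL, strip IPv6 brackets, keep the path prefix for later URL building) can be approximated with stdlib parsing; the function name and return shape are illustrative:

```python
from urllib.parse import urlparse

# Illustrative sketch of the host-validation steps above; names are assumptions.
def normalize_host(hostname):
    """Parse a controller hostname into (scheme, host, url_prefix)."""
    # Try to parse the hostname as a URL; default to https if no scheme given
    if "://" not in hostname:
        hostname = "https://" + hostname
    parsed = urlparse(hostname)
    # .hostname already removes IPv6 square brackets for us
    host = parsed.hostname or ""
    # Store the URL path prefix for later use when building endpoint URLs
    prefix = parsed.path.rstrip("/")
    return parsed.scheme, host, prefix
```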
# Load configs like TowerCLI would have, from least important to most
# If we have a specified tower config, load it
# TODO: warn if there are conflicts with other params
# Since we were told specifically to load this we want it to fail if we have an error
# Only throw a formatting error if the file exists and is not a directory
# Validate the config file is an actual file
# Read in the file contents:
# First try to yaml load the content (which will also load json)
# If this is an actual ini file, yaml will return the whole thing as a string instead of a dict
# TowerCLI used to support a config file with a missing [general] section by prepending it if missing
# py2 ConfigParser has readfp, which has been deprecated in favor of read_file in py3
# This "if" removes the deprecation warning
# If we made it here then we have values from reading the ini file, so let's pull them out into a dict
# If we made it here, we have a dict which has values in it from our config, any final settings logic can be performed here
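The config-file fallback described above (try a structured load first; if that yields a string, treat the content as INI and prepend a missing [general] section as TowerCLI did) can be sketched with the stdlib alone. Here json stands in for the YAML loader the collection actually uses, and the function name is an assumption:

```python
import configparser
import json

# Stdlib-only sketch of the config-file fallback: structured parse first,
# then INI with a prepended [general] section when it is missing.
def load_config_text(text):
    """Return a dict of settings parsed from JSON-style or INI content."""
    try:
        data = json.loads(text)
        if isinstance(data, dict):
            return data
    except ValueError:
        pass
    # Treat the content as an INI file; prepend [general] if it is missing
    if not text.lstrip().startswith("["):
        text = "[general]\n" + text
    parser = configparser.ConfigParser()
    # read_string/read_file replaced the deprecated readfp
    parser.read_string(text)
    return dict(parser["general"])
```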
# Verify SSL must be a boolean
# This method is intended to be overridden
# Try to log out if we are authenticated
# TODO: Move the collection version check into controller_module.py
# This gets set by the make process so whatever is in here is irrelevant
# This maps the collections type (awx/tower) to the values returned by the API
# Those values can be found in awx/api/generics.py line 204
# A named URL is pretty unique so if we have a ++ in the name then let's start by looking for that
# This also needs to go first because if there was data passed in kwargs and we do the next lookup first there may be results
# Maybe someone gave us a named URL so let's see if we get anything from that.
# We found a named item but we expect to deal with a list view so mock that up
# Since we didn't have a named URL, let's try and find it with a general search
# If we get a value error, then we didn't have an integer so we can just pass and fall down to the fail
# Since we did a name or ID search and got > 1 return something if the id matches
# We got > 1 and either didn't find something by ID (which means multiple names)
# Or we weren't running with an OR search and just got back too many to begin with.
# truncate to not include the base URL
# In case someone is calling us directly; make sure we were given a method, let's not just assume a GET
# Extract the headers, this will be used in a couple of places
# Authenticate to AWX (if we don't have a token and if not already done so)
# This method will set a cookie in the cookie jar for us and also an oauth_token
# If we have a oauth token, we just use a bearer header
# Important, if content type is not JSON, this should not be dict type
# Sanity check: Did the server send back some kind of internal error?
# Sanity check: Did we fail to authenticate properly?  If so, fail out now; this is always a failure.
# Sanity check: Did we get a forbidden response, which means that the user isn't allowed to do this? Report that.
# Sanity check: Did we get a 404 response?
# Requests with primary keys will return a 404 if there is no response, and we want to consistently trap these.
# Sanity check: Did we get a 405 response?
# A 405 means we used a method that isn't allowed. Usually this is a bad request, but it requires special treatment because the
# API sends it as a logic error in a few situations (e.g. trying to cancel a job that isn't running).
# Sanity check: Did we get some other kind of error?  If so, write an appropriate error message.
# We are going to return a 400 so the module can decide what to do with it
# A 204 is a normal response for a delete function
# In PY3 we get back an HTTPResponse object, but PY2 returns an addinfourl
# First try to get the headers in PY3 format and then drop down to PY2.
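The PY3-then-PY2 header fallback above amounts to a getattr probe; a minimal sketch (the function name is illustrative) could be:

```python
# Minimal sketch of the header-extraction fallback above: PY3's HTTPResponse
# exposes .headers, while PY2's addinfourl used .info(). getattr keeps one
# code path working for both shapes.
def response_headers(response):
    """Return a mapping of response headers for either response object shape."""
    headers = getattr(response, "headers", None)
    if headers is None:
        # PY2 addinfourl fallback
        headers = response.info()
    return headers
```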
# Attempt to get a token from /api/v2/tokens/ by giving it our username/password combo
# If we have a username and password, we need to get a session cookie
# Preserve URL prefix
# Post to the tokens endpoint with basic auth to try and get a token
# If we have neither of these, then we can try un-authenticated access
# This will exit from the module on its own.
# If the method successfully deletes an item and on_delete param is defined,
# This will return one of two things:
# Note: common error codes from the AWX API can cause the module to fail
# If we have an item, we can try to delete it
# This is from a project delete (if there is an active job against it)
# if we got None instead of [] we are not modifying the association_list
# First get the existing associations
# Some associations can be ordered (like galaxy credentials)
# If the current associations EXACTLY match the desired associations then we can return
# because of ordering, we have to remove everything
# re-add everything back in-order
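The ordered-association logic above (no-op on None, no-op on an exact ordered match, otherwise remove everything and re-add in order) can be sketched as follows; `associate`/`disassociate` stand in for the module's API calls:

```python
# Hedged sketch of the ordered-association reconciliation described above.
def sync_ordered_association(existing_ids, desired_ids, associate, disassociate):
    """Reconcile an order-sensitive association list; return True if changed."""
    if desired_ids is None:
        # None (as opposed to []) means "leave the association untouched"
        return False
    if existing_ids == desired_ids:
        # Current associations EXACTLY match the desired ones
        return False
    # Because ordering matters, remove everything...
    for item_id in existing_ids:
        disassociate(item_id)
    # ...then re-add everything back in order
    for item_id in desired_ids:
        associate(item_id)
    return True
```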
# Lookup existing item to copy from
# Fail if the copy_from_lookup is empty
# Do checks for copy permissions if warranted
# Because checks have passed
# This will exit from the module on its own
# If the method successfully creates an item and on_create param is defined,
# If we don't have an existing_item, we can try to create it
# We have to rely on item_type being passed in since we don't have an existing item that declares its type
# We will pull the item_name out from the new_item, if it exists
# 200 is response from approval node creation on tower 3.7.3 or awx 15.0.0 or earlier.
# Process any associations with this item
# If we have an on_create method and we actually changed something we can call on_create
# all sub-fields are either equal or could be equal
# Something doesn't match, or something might not match
# case of 'field not in new' - user password write-only field that API will not display
# If the method successfully updates an item and on_update param is defined,
# If we have an item, we can see if it needs an update
# Check to see if anything within the item requires the item to be updated
# If we decided the item needs to be updated, update it
# compare apples-to-apples, old API data to new API data
# but do so considering the fields given in parameters
# If we change something and have an on_change call it
# Remove boolean values of certain specific types
# this is needed so that boolean fields will not get a false value when not provided
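The "apples-to-apples" comparison above, restricted to the fields the user actually supplied and skipping write-only fields the API never returns, can be sketched like this (a toy version; the real module's comparison is more involved):

```python
# Toy sketch of comparing old API data to new desired data, considering only
# the fields given in parameters. Write-only fields (e.g. passwords) never
# come back from a GET, so they are skipped rather than forcing an update.
def needs_update(existing, desired):
    """Return True when any user-supplied field differs from the API data."""
    for field, new_value in desired.items():
        if field not in existing:
            # e.g. a write-only password field the API will not display;
            # we cannot compare it, so don't treat it as a change by itself
            continue
        if existing[field] != new_value:
            return True
    return False
```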
# Attempt to delete our current token from /api/v2/tokens/
# in error cases, fail_json exits before exception handling
# Grab our start time to compare against for the timeout
# If the job has failed, we want to raise a task failure for that so we get a non-zero response.
# Approval jobs have no elapsed time so return
# Remove time so far from timeout.
# Now that Job has been found, wait for it to finish
# survey_enabled=True, survey_spec=survey_spec
# test preservation of error NTs when success NTs are added
# test removal to empty list
# to test org scoping
# FIXME: should this be identifier instead
# Create an insights credential
# Make a credential type which will be used by the credential
# create the ssh credential type
# Example from docs
# https://github.com/ansible/ansible/issues/61324
# Shuffle them up and try again to make sure a new order is honored
# Test CyberArk AIM credential source
# Test CyberArk Conjur credential source
# Test Hashicorp Vault secret credential source
# Test Hashicorp Vault signed ssh credential source
# Test Azure Key Vault credential source
# Test Changing Credential Source
# Test Centrify Vault secret credential source
# Create a new instance in the DB
# Set the new instance group only to the one instance
# Test with a valid start date (no time) (also tests none frequency and count)
# Test with a valid start date and time
# Test end_on as count (also integration test)
# Test end_on as date
# Test on_days as a single day
# Test on_days as multiple days (with some whitespaces)
# Test valid month_day_number
# Test a valid on_the
# Test a valid timezone
# Try to run our generated rrule through the awx validator
# This will raise its own exception on failure
# Test end_on as junk
# Test on_days as junk
# Test combo of both month_day_number and on_the
# Test month_day_number as not an integer
# Test month_day_number < 1
# Test month_day_number > 31
# Test on_the as junk
# Test on_the with invalid occurrence
# Test on_the with invalid weekday
# Test an invalid timezone
# 2nd modification, should cause no change
# https://github.com/ansible/ansible/issues/46803
# idempotency assertion
# Compare 1.0.0 to 1.2.3 (major matches)
# Compare 1.2.0 to 1.2.3 (major matches minor does not count)
# Compare 1.2.0 to 1.2.3 (major/minor matches)
# Compare 1.0.0 to 1.2.3 (major/minor fail to match)
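The four comparison cases above boil down to matching the major (and optionally minor) components of two version strings; a small stand-in helper (the name is an assumption) could be:

```python
# Toy illustration of the version checks above: two versions are considered
# compatible when their major (and optionally minor) components match.
def versions_match(a, b, check_minor=False):
    """Return True when the major (and optionally minor) parts of a and b match."""
    a_parts = a.split(".")
    b_parts = b.split(".")
    if a_parts[0] != b_parts[0]:
        return False
    return not check_minor or a_parts[1] == b_parts[1]
```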
# imports done here because of PATH issues unique to this test suite
# must be saved as encrypted
# Test no-op, this is impossible if the notification_configuration is given
# because we cannot determine if password fields changed
# Test a change in the configuration
# Analysis variables
# -----------------------------------------------------------------------------------------------------------
# Read-only endpoints are dynamically created by an options page with no POST section.
# Normally a read-only endpoint should not have a module (i.e. /api/v2/me) but sometimes we reuse a name
# For example, we have a role module but /api/v2/roles is a read only endpoint.
# This list indicates which read-only endpoints have associated modules with them.
# If a module should not be created for an endpoint and the endpoint is not read-only add it here
# THINK HARD ABOUT DOING THIS
# This is a view for inventory with kind=constructed
# Some modules work on the related fields of an endpoint. These modules will not have an auto-associated endpoint
# Subscription deals with config/subscriptions
# Add modules with endpoints that are not at /api/v2
# Global module parameters we can ignore
# Some modules take additional parameters that do not appear in the API
# Add the module name as the key with the value being the list of params to ignore
# The wait is for whether or not to wait for a project update on change
# existing_token and existing_token_id are for working with existing tokens
# survey_spec is now how we handle associations
# We take an organization here to help with the lookups only
# Organization is how we are looking up job templates, Approval node is for workflow_approval_templates,
# lookup_organization is for specifying the organization for the unified job template lookup
# Survey is how we handle associations
# organization is how we lookup unified job templates
# ad hoc commands support interval and timeout since it's more like job_launch
# group parameters to preserve hosts and children.
# new_username parameter to rename a user and organization allows for org admin user creation
# workflow_approval parameters that do not apply when approving an approval node.
# bulk
# When this tool was created we were not feature complete. Adding something in here indicates a module
# that needs to be developed. If the module is found on the file system it will auto-detect that the
# work is being done and will bypass this check. At some point this module should be removed from this list.
# This is a hierarchical list of things that are ok/failures based on conditions
# If we know this module needs development this is a non-blocking failure
# If the module is a read only endpoint:
# There may be some cases where a read only endpoint has a module
# If the endpoint is listed as not needing a module and we don't have one we are ok
# If module is listed as not needing an endpoint and we don't have one we are ok
# All of the endpoint/module conditionals are done, so if we don't have a module or endpoint we have a problem
# Now perform parameter checks
# First, if the parameter is in the ignore_parameters list we are ok
# If the api option and the module option are both either objects or none
# If the API option is None and the parameter is in the no_api_parameter list we are ok
# If we know this parameter needs development and we don't have a module option we are non-blocking
# Check for deprecated in the node, if its deprecated and has no api option we are ok, otherwise we have a problem
# If we don't have a corresponding API option but we are a list then we are likely a relation
# TODO, at some point try and check the object model to confirm its actually a relation
# We made it through all of the checks so we are ok
# Load a list of existing module files from disk
# must begin with a letter a-z, and end in .py
# Module names are singular and endpoints are plural so we need to convert to singular
# If we don't have a module for this endpoint then we can create an empty one
# Add in our endpoint and an empty api_options
# Get out the endpoint, load and parse its options page
# Parse through our data to get string lengths to make a pretty report
# Print out some headers
# Print out all of our data
# This handles cases where we got no params from the options page nor from the modules
# Credential is not required for ec2 source, because of IAM roles
# make another inventory by same name in another org
# Tests related to source-specific parameters
# We want to let the API return issues with "this doesn't support that", etc.
# GUI OPTIONS:
# - - - - - - - manual:	file:	scm:	ec2:	gce	azure_rm	vmware	sat	openstack	rhv	tower	custom
# credential		?	?	o	o	r	r		r	r		r		r	r	o
# source_project	?	?	r	-	-	-		-	-		-		-	-	-
# source_path		?	?	r	-	-	-		-	-		-		-	-	-
# scm_branch		?	?	r	-	-	-		-	-		-		-	-	-
# verbosity			?	?	o	o	o	o		o	o		o		o	o	o
# overwrite			?	?	o	o	o	o		o	o		o		o	o	o
# overwrite_vars	?	?	o	o	o	o		o	o		o		o	o	o
# update_on_launch	?	?	o	o	o	o		o	o		o		o	o	o
# UoPL          	?	?	o	-	-	-		-	-		-		-	-	-
# source_vars*		?	?	-	o	-	o		o	o		o		-	-	-
# environment vars*	?	?	o	-	-	-		-	-		-		-	-	o
# source_script		?	?	-	-	-	-		-	-		-		-	-	r
# UoPL - update_on_project_launch
# * - source_vars are labeled environment_vars on project and custom sources
# not actually desired, but assert for sanity
# create JT and labels
# sanity: no-op without labels involved
# first time adding labels, this should make the label list equal to what was specified
# shuffling the labels should not result in any change
# not specifying labels should not change labels
# should be able to remove only some labels
# native JSON types, no problem
# translation proxies often not string but stringlike
# query params for GET are handled a bit differently by
# tower-cli and python requests as opposed to REST framework APIRequestFactory
# make request
# requests library response object is different from the Django response, but they are the same concept
# this converts the Django response object into a requests response object for consumption
# Requires a specific PYTHONPATH, see docs
# Note that a proper Ansiballz explosion of the modules will have an import path like:
# ansible_collections.awx.awx.plugins.modules.{}
# We should consider supporting that in the future
# Ansible params can be passed as an invocation argument or over stdin
# this short circuits within the AnsibleModule interface
# Call the test utility (like a mock server) instead of issuing HTTP requests
# Ansible modules return data to the mothership over stdout
# A system exit indicates successful execution
# dump the stdout back to console for debugging
# A module exception should never be a test expectation
# has_unpartitioned_events determines if there are any events still
# left in the old, unpartitioned job events table. In order to work,
# this method looks up when the partition migration occurred. When
# Django's unit tests run, however, there will be no record of the migration.
# We mock this out to circumvent the migration query.
# Copyright (c) 2021 René Moser <mail@renemoser.net>
# Copyright (c) jasites <jsites@vultr.com>
# flake8: noqa: E402
# Copyright (c) 2021, René Moser <mail@renemoser.net>
# Copyright (c) 2018, Yanis Guenane <yanis+ansible@guenane.org>
# Copyright (c) 2023, René Moser <mail@renemoser.net>
# Copyright (c) 2022, René Moser <mail@renemoser.net>
# Handle power status
# Copyright (c) 2019, Nate River <vitikc@gmail.com>
# Copyright (c) 2020, Simon Baerlocher <s.baerlocher@sbaerlocher.ch>
# Set firewall group id to resource path, ensures firewall group exists
# Upload by URL has a different endpoint
# Reset endpoint
# Set loadbalancer ID for source
# Warn about port only affects TCP and UDP protocol
# exit and show result if no attach/detach needed.
# Query details information about block type
# Empty string ID means detach instance
# URL encode label
# Filter instances by label
# Skip IP with different type
# Skip IP in different region
# Attach instance
# Refresh
# Detach instance
# Attach instance or change attached instance
# Copyright (c) 2024, René Moser <mail@renemoser.net>
# Use region to distinguish labels between regions
# Password is required in create mode.
# Password is never returned and we cannot compare.
# That is why we update it only if forced
# TODO: Workaround to get the description field into the list if missing
# Cloud instance features
# Bare metal features
# Common features
# VPCs
# sshkey_id is a list of ids
# attach_vpc is a list of ids used while creating
# attach_vpc2 is a list of ids used while creating
# VPC1
# detach_vpc is a list of ids to be detached
# VPC2
# detach_vpc2 is a list of ids to be detached
# The API resource path, e.g. ssh_key
# The API result data key, e.g. ssh_keys
# The API resource path, e.g. /ssh-keys
# The name key of the resource, usually 'name'
# The ID key of the resource, usually 'id'
# Some resources need an additional GET request to get all attributes
# List of params used to create the resource
# List of params used to update the resource
# Some resources have PUT, many have PATCH
# Hook custom configurations
# Check for:
# 429 Too Many Requests
# 500 Internal Server Error
# 504 Gateway Time-out
# Vultr rate-limits requests per second, so try to be polite
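The retry conditions above (429, 500, 504, plus a polite pause between attempts) suggest a small backoff loop; `do_request` and the parameters are illustrative stand-ins, not the collection's real interface:

```python
import time

# Polite-retry sketch matching the comments above: back off and retry on
# 429 Too Many Requests, 500 Internal Server Error, 504 Gateway Time-out.
RETRYABLE = {429, 500, 504}

def request_with_retry(do_request, retries=5, backoff=1.0, sleep=time.sleep):
    """Call do_request() until a non-retryable status comes back or retries run out."""
    status, body = do_request()
    for attempt in range(retries):
        if status not in RETRYABLE:
            break
        # Rate limited or transient server error: wait before trying again
        sleep(backoff * (attempt + 1))
        status, body = do_request()
    return status, body
```

The linearly growing sleep keeps the client under a per-second rate limit without hammering the API.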
# Success with content
# Success without content
# In case the resource has a region, distinguish between regions
# This allows identical identifiers (e.g. names) per region
# Returns a single dict representing the resource queried by name
# Defaults
# Returns a single dict representing the resource
# Copyright 2024 Red Hat, Inc.
# Based on the kubernetes.core.k8s_auth_options doc fragment
# Apache License 2.0 (see LICENSE or http://www.apache.org/licenses/LICENSE-2.0)
# Copyright 2023 Red Hat, Inc.
# Based on the kubernetes.core.k8s inventory
# Handle import errors of python kubernetes client.
# Set HAS_K8S_MODULE_HELPER and k8s_import exception accordingly to
# potentially print a warning to the user if the client is missing.
# Handle import errors of trust_as_template.
# It is only available on ansible-core >=2.19.
# Copy values from config_data and set defaults for keys not present
# Used to convert camel case variable names into snake case
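The camel-case to snake-case conversion mentioned above is typically a one-line regex; a minimal sketch (the function name is an assumption):

```python
import re

# Minimal sketch of converting camelCase variable names into snake_case:
# insert an underscore before each upper-case letter, then lower-case all.
def camel_to_snake(name):
    """Convert camelCase names into snake_case."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()
```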
# LoadBalancer services can return a hostname or an IP address
# NodePort services use the node name as host
# LoadBalancer services use the port attribute
# NodePort services use the nodePort attribute
# Copy the single connections entry into the top level
# Continue if no VMs and VMIs were found to avoid adding empty groups.
# If resource not found return None
# Continue if service is not of type LoadBalancer or NodePort
# Continue if ports are not defined, there are more than one port mapping
# or the target port is not port 22 (ssh) or port 5985 or 5986 (winrm).
# Only add the service to the list if the domain selector is present
# Return early if no VMs and VMIs were found to avoid adding empty groups.
# Add found VMs and optionally enhance with VMI data
# Add remaining VMIs without VM
# Use first interface
# Find interface by its name
# If interface is not found or IP address is not reported skip this VMI
# Set up the connection
# Add hostvars from metadata
# Create label groups and add vm to it if enabled
# Add hostvars from status
# Add host to each label_value group
# Set ansible_host to the kubesecondarydns derived host name if enabled
# See https://github.com/kubevirt/kubesecondarydns#parameters
# Set ansible_host and ansible_port to the host and port from the LoadBalancer
# or NodePort service exposing SSH
# Default to the IP address of the interface if ansible_host was not set prior
# Based on the kubernetes.core.k8s_info module
# Monkey patch service.diff_objects to temporarily fix the changed logic
# Set kind to query for VirtualMachineInstances
# Set wait_condition to allow waiting for the ready state of the VirtualMachineInstance
# Based on the kubernetes.core.k8s module
# Set resource_definition to our constructed VM
# Set wait_condition to allow waiting for the ready state of the VirtualMachine
# Set kind to query for VirtualMachines
# Set wait_condition to allow waiting for the ready state of the
# VirtualMachine based on the running parameter.
# Copyright 2025 Red Hat, Inc.
# Copied from
# https://github.com/ansible-collections/kubernetes.core/blob/d329e7ee42799ae9d86b54cf2c7dfc8059103504/plugins/module_utils/k8s/service.py#L493
# Remove this once the fix is merged into kubernetes.core.
# -*- coding:utf-8 -*-
# Copyright(C) 2023 IEIT Inc. All Rights Reserved.
# result = dict()
# if module.check_mode:
# The following metadata allows Python runners and nox to install the required
# dependencies for running this Python script:
# /// script
# dependencies = ["nox>=2025.02.09", "antsibull-nox"]
# ///
# We try to import antsibull-nox, and if that doesn't work, provide a more useful
# error message to the user.
# Allow to run the noxfile with `python noxfile.py`, `pipx run noxfile.py`, or similar.
# Requires nox >= 2025.02.09
#!/usr/bin/python3
# TODO: Must have full path, should add relative path support.
# Setting vault password.
# TODO: do something with the returned output?
# TODO: log file location is currently in the same folder
# Documentation: We only support attached storage domains in the var generator.
# The reason we go over each active Host in the DC is that there might
# be a Host which fails to connect to a certain device but is still active.
# Locate the service that manages the storage domains that are attached
# to the data centers:
# Get default location of the yml var file.
# Close the connections.
# TODO: Remove once vnic profile is validated.
# TODO: 'dc_name' might be referenced before assignment.
# TODO: Add check whether the data center exists in the setup
# TODO: Add data center also
# Check for profile + network name duplicates in primary
# Check for profile + network name duplicates in secondary
# We fetch the source map as the target host,
# since in failback we do the reverse operation.
# We fetch the target host as the target of the source mapping for failback,
# since we do the reverse operation.
# TODO: Use regex
# Add dry run
# "/tmp/dr_ovirt-ansible/mapping_vars.yml"
# Copyright: (c) 2016, Red Hat, Inc.
# Standard oVirt documentation fragment
# info standard oVirt documentation fragment
# (c) 2015, Filipe Niero Felisbino <filipenf@gmail.com>
# Hack to handle Ansible Unsafe text, AnsibleMapping and AnsibleSequence
# See issue: https://github.com/ansible-collections/community.general/issues/320
# For older jmespath, we can get ValueError and TypeError without much info.
# Not only visible to ansible-doc, it also 'declares' the options the plugin
# requires and how to configure them.
# TODO Fix DOCUMENTATION to pass the ansible-test validate-modules
# only needed if you ship it and don't want to enable by default
# make sure the expected objects are present, calling the base's
# Copyright: (c) 2018, Red Hat, Inc.
# ovirt-hosted-engine-setup -- ovirt hosted engine setup
# Copyright (C) 2018 Red Hat, Inc.
# Copyright (c) 2016 Red Hat, Inc.
# If the host is in a non-responsive state after upgrade/install,
# wait a few seconds and re-check the state:
# In case host is in INSTALL_FAILED status, we can reinstall it:
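The wait-and-recheck pattern described above can be sketched as a small polling helper. The names here are hypothetical; the real module queries host state through the oVirt SDK's hosts service:

```python
import time

def wait_for_host_state(get_state, desired="up", timeout=30, poll=2):
    """Poll until the host reports the desired state or the timeout expires.

    `get_state` is a hypothetical callable returning the current state
    string; the real module queries the oVirt hosts service instead.
    """
    deadline = time.monotonic() + timeout
    while True:
        state = get_state()
        if state == desired:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)

# Usage: simulate a host that becomes 'up' on the third poll.
states = iter(["non_responsive", "non_responsive", "up"])
reached = wait_for_host_state(lambda: next(states), poll=0)
```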
# Uncomment when 4.1 is EOL, and remove the cond:
# if host.name in event.description
# search='type=885 and host.name=%s' % host.name,
# Set to False, because upgrade_check isn't 'changing' action:
# Finished upgrade:
# 841: HOST_UPGRADE_FAILED
# 842: HOST_UPGRADE_FINISHED
# 888: HOST_UPGRADE_FINISHED_AND_WILL_BE_REBOOTED
# Fail upgrade if migration fails:
# 17: Failed to switch Host to Maintenance mode
# 65, 140: Migration failed
# 166: No available host was found to migrate VM
# Deactivate host if not in maintenance:
# Reinstall host:
# Activate host after reinstall:
# VM + MANAGEMENT is part of root network
# Try to import network
# Update clusters networks:
# removing keys which are not defined
# Nothing to do when both are empty.
# We cannot really tell here whether it was actually updated, because
# we cannot inspect the key value to check if it changed. So
# to be safe we always update here.
# Wait must be set to false if state == absent
# Copyright (c) 2016, 2018 Red Hat, Inc.
# Check if there is any change in address assignments and
# update it if needed:
# Check if bond configuration should be updated:
# Check if labels need to be updated on interface/bond:
# If any labels the user passed aren't assigned, relabel the interface:
# Check if networks attachments configuration should be updated:
# If the attachment doesn't exist, we need to create it:
# Remove networks which are attached to a different interface than the user wants:
# Append attachment ID to network if needs update:
# Check if we have to break some bonds:
# Assign the networks:
# Remove unmanaged networks:
# Need to check if there are any labels to be removed, as the backend fails
# if we try to remove a non-existing label; for bonds and attachments it's OK:
# Copyright (c) 2017 Red Hat, Inc.
# Find the unregistered VM we want to register:
# Close the connection, but don't revoke token
# Copyright (c) 2022 Red Hat, Inc.
# After adding a new transfer for the disk, the transfer's status will be INITIALIZING.
# Wait until the init phase is over. The actual transfer can start when its status is "Transferring".
# Assign tags:
# Assign the tag:
# Detach the tag:
# Unassign tags:
# Remove existing bonds
# Create new bond
# Attach NICs to instance type, if specified:
# Remove all graphical consoles if there are any:
# If there are no graphical consoles, add any that should be added:
# Update consoles:
# If more groups are found, filter them by namespace and authz name:
# (filtering here, as oVirt/RHV backend doesn't support it)
# Remove all previous network filter parameters
# Create all network filters specified by the user
# Locate the service that manages the virtual machines and use it to
# search for the NIC:
# Locate the VM, where we will manage NICs:
# TODO: We have to modify the search_by_name function to accept raise_error=True/False,
# Find vNIC id of the network interface (if any):
# When no vNIC is specified, use ovirtmgmt/ovirtmgmt
# Handle appropriate action:
# If template isn't specified and VM is about to be created specify default template:
# Mark if entity exists before touching it:
# After creation of the VM, attach disks and NICs:
# Forcibly stop the VM, if it's not in DOWN state:
# In case VM is preparing to be UP, wait to be up, to migrate it:
# Stateless snapshot may be already removed:
# If disk ID is not specified, find disk by name:
# Attach disk to VM:
# Remove all existing virtual numa nodes before adding new ones
# Attach NICs to VM, if specified:
# Wait for the event with code 1152 for our VM:
# Result state is SUSPENDED, we should wait to be suspended:
# Invalid states:
# If VM is powering down, wait to be DOWN or UP.
# VM can end in UP state in case there is no GA
# or ACPI on the VM or shutdown operation crashed:
# Allow migrating the VM when state is present.
# Migrate before update
# In case of wait=false and state=running, waits for VM to be created
# In case the VM doesn't exist, wait for the VM DOWN state;
# otherwise don't wait for any state, just update the VM:
# If VM is going to be created and check_mode is on, return now:
# Run the VM if it was just created, else don't run it:
# Start action kwargs:
# Apply next run configuration, if needed:
# Find the storage domain with unregistered VM:
# Register the vm into the system:
# Fetch vm to initialize return.
# Typically, "ConnectionRefusedError" or "socket.gaierror".
# format=raw uses the NBD backend, enabling:
# - Transfer raw guest data, regardless of the disk format.
# - Automatic format conversion to remote disk format. For example,
# - Collapsed qcow2 chains to single raw file.
# - Extents reporting for qcow2 images and raw images on file storage,
# The system has removed the disk and the transfer.
# The system will remove the disk and the transfer soon.
# Old engine (< 4.4.7): since the transfer was already deleted from
# the database, we can assume that the disk status is already
# updated, so we can check it only once.
# Disk verification failed and the system removed the disk.
# We don't support move&copy for non file based storages:
# Initiate move:
# If `vm_id` isn't specified, find VM by name:
# Fail when host is specified with the LUN id. LUN id is needed to identify
# an existing disk if already available in the environment.
# If the disk isn't among the VM's disks but is still found, it was found
# on a template with the same name as the VM, so we should force-create the VM disk.
# First take care of creating the VM, if needed:
# Always activate disk when it is created.
# We need to pass ID to the module, so in case we want to detach/attach disk
# we have this ID specified to attach/detach method:
# Upload disk image in case it is a new disk or force parameter is passed:
# Download disk image in case the file doesn't exist or force parameter is passed:
# Disk sparsify, only if disk is of image type:
# Export disk as image to glance domain
# Wait for disk to appear in system:
# If VM was passed attach/detach disks to/from the VM:
# When the host parameter is specified and the disk is not being
# removed, refresh the information about the LUN.
# remove all
# add passed permits
# Wait for all VM pool VMs to be created:
# Get data center object of the storage domain:
# Search by the data_center name; if it does not exist, try to search by GUID.
# Find the DC, where the storage resides:
# Detach the storage domain:
# Wait until storage domain is detached:
# In case the user chose to destroy the storage domain there is no need to
# move it to maintenance or detach it, it should simply be removed from the DB.
# Also, if the storage domain is already unattached, skip this step.
# Before removing storage domain we need to put it into maintenance state:
# Before removing storage domain we need to detach it from data center:
# If storage domain isn't attached, attach it:
# Wait until storage domain is in maintenance:
# In the case of no status returned, it's an attached storage domain.
# Redetermine the corresponding service and entity:
# Pick random available host when host parameter is missing
# Find the unregistered Template we want to register:
# The order of these conditions is necessary.
# If both network_filter and pass_through were specified, it would try to create the network_filter and fail on the engine.
# Likewise, if both qos and pass_through were specified, it would try to create the qos and fail on the engine.
# The reason we can't use the equal method is that _get_network_filter_id and _get_qos_id return None when passed an empty string,
# and when the first parameter of the equal method is None it returns true.
# When the vNIC already exists, update it; otherwise create it
# These are hardcoded IDs, once there is API, please fix this.
# legacy - 00000000-0000-0000-0000-000000000000
# minimal downtime - 80554327-0569-496b-bdeb-fcbbf52b827b
# suspend workload if needed - 80554327-0569-496b-bdeb-fcbbf52b827c
# post copy - a7aeedb2-8d66-4e51-bb22-32595027ce71
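The hardcoded IDs listed above can be kept in a single lookup table so that the planned API-based lookup only has to replace one place. `get_policy_id` is a hypothetical helper name, not the module's actual function:

```python
# Hardcoded migration-policy IDs copied from the comments above; once
# the engine exposes an API for these, replace this table with a lookup.
MIGRATION_POLICIES = {
    "legacy": "00000000-0000-0000-0000-000000000000",
    "minimal downtime": "80554327-0569-496b-bdeb-fcbbf52b827b",
    "suspend workload if needed": "80554327-0569-496b-bdeb-fcbbf52b827c",
    "post copy": "a7aeedb2-8d66-4e51-bb22-32595027ce71",
}

def get_policy_id(name):
    """Resolve a migration-policy name to its hardcoded UUID."""
    return MIGRATION_POLICIES[name]
```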
# Get Host
# Login
# Get LUNs exposed from the specified target
# -- FIXME --
# Note that here we always remove all cluster/storage limits, because
# it's not currently possible to update them, and then we re-create the
# limits appropriately. This shouldn't have any side effects, but it's
# not considered a correct approach.
# This feature is tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1398576
# Manage cluster limits:
# Manage storage limits:
# The API returns an <action> element instead of a VM element, so we
# need to work around this issue for oVirt/RHV versions having this bug:
# These attributes are supported since 4.1:
# The following attribute is supported since 4.1,
# so return if it doesn't exist:
# The following is supported since 4.1:
# Check if unsupported parameters were passed:
# Fetch VM ids which should be assigned to affinity group:
# Fetch host ids which should be assigned to affinity group:
# The base template is the template with the lowest version_number.
# Not necessarily version 1
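The lowest-version selection above can be sketched with a `min()` over the version numbers. Templates are plain dicts here for illustration; the real module reads the version number from SDK template objects:

```python
def get_base_template(templates):
    """Return the template with the lowest version number.

    Templates are represented as simple dicts for this sketch; the real
    module inspects oVirt SDK template objects instead.
    """
    return min(templates, key=lambda t: t["version_number"])

# Usage: the base version here is 2, not 1 (version 1 may no longer exist).
templates = [
    {"name": "web", "version_number": 3},
    {"name": "web", "version_number": 2},
    {"name": "web", "version_number": 5},
]
base = get_base_template(templates)
```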
# Wait until event with code 1158 for our template:
# When the user specifies a version number which does not exist
# When the user wants to create a new template subversion, we must make sure
# the template is force-created: it already exists, but a new version should be created.
# We need to refresh storage domain to get list of images:
# Wait for template to appear in system:
# Find the storage domain with unregistered template:
# Register the template into the system:
# Fetch template to initialize return.
# Filtering by cluster version returns only those which have same cluster version as input
# Save the entity, so we know if Agent already existed
# Enable Power Management, if it's not enabled:
# enum is a ovirtsdk4 requirement
# Fetch nested values of struct:
# Get rid of whitespaces:
# Convert to bytes:
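The whitespace-stripping and byte conversion mentioned above can be sketched as follows. The suffix table is a hypothetical subset; the real helper may support more units:

```python
# Hypothetical suffix table; the real helper may support more units.
SUFFIXES = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def convert_to_bytes(value):
    """Strip whitespace and convert a size like '10 GiB' to bytes."""
    if value is None:
        return None
    value = value.replace(" ", "")  # get rid of whitespaces
    for suffix, factor in SUFFIXES.items():
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * factor
    return int(value)  # plain number of bytes

size = convert_to_bytes("10 GiB")
```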
# Check if 'list' method support search(look for search parameter):
# There must be double quotes around the name, because some oVirt resources can be created with spaces in their names.
# We can get here 404, we should ignore it, in case
# of removing entity for example.
# Wait until the desired state of the entity:
# Exit if the condition of entity is valid:
# Sleep for `poll_interval` seconds if none of the conditions apply:
# Left for third-party module compatibility
# Entity exists, so update it:
# Update diffs only if the user specified the --diff parameter,
# so we don't needlessly overload the API:
# Entity doesn't exist, so create it:
# Wait for the entity to be created and to be in the defined state:
# This is a list of error codes to retry.
# HTTP status: Conflict
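The retry-on-conflict behavior can be sketched as a small wrapper. `ApiError`, `RETRY_CODES`, and `call_with_retry` are hypothetical names standing in for the SDK's error type and the module's retry logic:

```python
import time

# Hypothetical retryable status codes; the comments above mention at
# least 409 (Conflict).
RETRY_CODES = {409}

class ApiError(Exception):
    """Minimal stand-in for the SDK's HTTP error type."""
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def call_with_retry(func, retries=3, delay=0):
    """Retry `func` while it raises an error with a retryable code."""
    for attempt in range(retries):
        try:
            return func()
        except ApiError as exc:
            if exc.code not in RETRY_CODES or attempt == retries - 1:
                raise
            time.sleep(delay)

# Usage: the first call conflicts, the second succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ApiError(409)
    return "ok"

result = call_with_retry(flaky)
```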
# (c) 2016 Allen Sanabria, <asanabria@linuxdynasty.org>
# This is the base class of the exception.
# AWS Example botocore.exceptions.ClientError
# pylint: disable=isinstance-second-argument-not-valid-type
# Return original exception if exception is not a ClientError
# true decorator
# Copyright (c) 2021, Cisco Systems
# Compare normally if it exceeds expected size * 2 (len_list1==len_list2)
# Fail fast if elem not in list, thanks to any and generators
# 'has a differing element' translates to list1 != list2, i.e. the lists are not equal
# print("dnac_compare_equality", current_value, requested_value)
# This substitution is for the import file operation
# Copyright (c) 2023, Cisco Systems
# temp_spec is the specification for the expected structure of configuration parameters
# Validate playbook params against the specification (temp_spec)
# Extract various network-related details from the response
# Prepare the network details for Cisco DNA Center configuration
# If the Global Pool doesn't exist and a previous name is provided
# Else try using the previous name
# Check if the Reserved Pool exists in Cisco DNA Center
# based on the provided name and site name
# If the Reserved Pool doesn't exist and a previous name is provided
# If the previous name doesn't exist in Cisco DNA Center, return with error
# If the reserved pool exists, convert ipv6AddressSpace to the required format (boolean)
# Initialize the desired Global Pool configuration
# Converting to the required format based on the existing Global Pool
# Copy existing Global Pool information if the desired configuration is not provided
# Check for missing mandatory parameters in the playbook
# If there are no existing Reserved Pool details, validate and set defaults
# Check whether the pool exists; if not, create it and return
# Pool exists; check whether an update is required
# Pool Exists
# Check update is required
# Check update is required or not
# Check the execution status
# Update result information
# Define the specification for module arguments
# Create an AnsibleModule object with argument specifications
# Copyright (c) 2024, Cisco Systems
# Validate swim params
# Find the intersection of device IDs between the responses of the get_membership API and the get_device_list API with the provided filters
# check if given site exists, store siteid
# if not then use global site
# For global site, use -1 as siteId
# check if given device family name exists, store identifier value
# check if image for distribution is available
# check if image for activation is available
# Check if any device parameters are provided
# Format only the parameters that are present
# Fetch once
# CCO import
# Code to check if the image(s) already exist in Catalyst Center
# Monitor the task progress
# Check if all roles are tagged as Golden
# Check task status sequentially
# Check activation status sequentially
# Building message parts
# Final single-line message formation
# Copyright (c) 2022, Cisco Systems
# Validate device params
# Recursive call should impact the update_needed flag
# Check whether the given url is valid or not
# Collect the Instance ID of the syslog destination
# Add other filter parameters if present
# Ensure that there is input for 'domainsSubdomains'
# Need to take all required/optional parameters from Cisco Catalyst Center
# Collect the Instance ID of the webhook destination
# Collect the Instance ID of the email destination
# Prepare the parameters for the update operation
# Create/Update Rest Webhook destination in Cisco Catalyst Center
# ensure the URL starts with https://
# domain name
# OR IPv4 address
# OR IPv6 address
# port and path
# query string
# fragment locator
# Check if the input string matches the pattern
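The piece-by-piece validation described above (https scheme, then domain name or IPv4 or IPv6, then optional port, path, query string, and fragment locator) can be sketched as one regular expression. This is an illustrative pattern under those assumptions, not necessarily the module's exact one:

```python
import re

URL_PATTERN = re.compile(
    r"^https://"                          # ensure the URL starts with https://
    r"(?:"
    r"(?:[A-Za-z0-9-]+\.)+[A-Za-z]{2,}"   # domain name
    r"|(?:\d{1,3}\.){3}\d{1,3}"           # OR IPv4 address
    r"|\[[0-9A-Fa-f:]+\]"                 # OR bracketed IPv6 address
    r")"
    r"(?::\d+)?(?:/[^\s?#]*)?"            # optional port and path
    r"(?:\?[^\s#]*)?"                     # optional query string
    r"(?:#\S*)?$"                         # optional fragment locator
)

def is_valid_url(url):
    """Check if the input string matches the pattern."""
    return bool(URL_PATTERN.match(url))
```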
# Need to add snmp destination in Cisco Catalyst Center with given playbook params
# Check destination needs update and if yes then update SNMP Destination
# Update the syslog destination with given
# Create/Update Email destination in Cisco Catalyst Center
# Need to add email destination in Cisco Catalyst Center with given playbook params
# Check destination needs update and if yes then update Email Destination
# Update the email destination with given details in the playbook
# Create/Update Syslog destination in Cisco Catalyst Center
# We need to add the Syslog Destination in the Catalyst Center
# Check destination needs update and if yes then update Syslog Destination
# Create/Update snmp destination in Cisco Catalyst Center
# Create/Update ITSM Integration Settings in Cisco Catalyst Center
# collect the ITSM id with given name
# Check whether the URL exists and, if so, whether it is valid
# Update the ITSM integration settings with given details in the playbook
# Create Rest Webhook Events Subscription Notification in Cisco Catalyst Center
# Need to create webhook event notification in Cisco Catalyst Center
# Check whether the webhook event notification needs any update.
# Update the webhook notification with given playbook parameters
# Create Email Events Subscription Notification in Cisco Catalyst Center
# Need to create email event notification in Cisco Catalyst Center
# Check whether the email event notification needs any update.
# Update the email notification with given playbook parameters
# Create Syslog Events Subscription Notification in Cisco Catalyst Center
# Need to create syslog event notification in Cisco Catalyst Center
# Check whether the syslog event notification needs any update.
# Update the syslog notification with given playbook parameters
# Delete ITSM Integration setting from Cisco Catalyst Center
# Check whether the given ITSM integration is present in Catalyst Center
# Delete Webhook Events Subscription Notification from Cisco Catalyst Center
# Delete Email Events Subscription Notification from Cisco Catalyst Center
# Delete Syslog Events Subscription Notification from Cisco Catalyst Center
# Invoke the API to check the status and log the output of each destination and notification on the console
# Check if IP address list or hostname is provided
# Validate if valid ip_addresses in the ip_address_list
# Validate either ip_address_list OR site_name is present
# Validate if a network compliance operation is present
# Validate the categories if provided
# Initializing empty dicts/lists
# Create run_compliance_params
# Check for devices with Compliance Status of "IN_PROGRESS" and update parameters accordingly
# Iterate through the response to identify devices with 'IN_PROGRESS' status
# Update run_compliance_params to exclude devices with 'IN_PROGRESS' status
# Initialize an empty dictionary to store the mapped parameters
# Update params with current offset and limit
# Query Cisco Catalyst Center for device information using the parameters
# Check if a valid response is received
# Exit loop if no devices are returned
# Iterate over the devices in the response
# Check if the device is reachable and managed
# Skip Unified AP devices
# Check if the response size is less than the limit
# Increment offset for next batch
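The offset/limit pagination loop described above can be sketched as follows. `query(offset, limit)` is a hypothetical stand-in for the real device-list API call; the 1-based offset is an assumption based on common Catalyst Center API behavior:

```python
def fetch_all_devices(query, limit=500):
    """Page through an offset/limit API until a short or empty page.

    `query(offset, limit)` is a hypothetical stand-in for the real
    device-list call.
    """
    devices = []
    offset = 1
    while True:
        page = query(offset, limit)
        if not page:
            break                  # exit loop if no devices are returned
        devices.extend(page)
        if len(page) < limit:      # response smaller than the limit: done
            break
        offset += limit            # increment offset for the next batch
    return devices

# Usage: a fake backend holding 1200 records.
data = list(range(1200))
result = fetch_all_devices(lambda off, lim: data[off - 1:off - 1 + lim])
```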
# Check if the IP from get_device_list_params is in mgmt_ip_to_instance_id_map
# Log the total number of devices processed and skipped
# Log an error message if any exception occurs during the process
# Log an error if no reachable devices are found
# Initialize a dictionary to store management IP addresses and their corresponding device IDs
# Split the IP address list into batches of 200
# Calculate total number of batches
# Get device list parameters for the current batch
# Retrieve device IDs for the current batch
# Update the main map with the results from the current batch
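The batching-and-merging flow above can be sketched with a small chunking helper. The IPs and the `"uuid-"` lookup below are placeholders for the real per-batch API call:

```python
def chunked(items, size):
    """Yield successive fixed-size batches of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Usage: 450 hypothetical IPs split into batches of 200, with a result
# map merged across batches (the dict lookup stands in for the API call).
ips = ["10.0.0.{0}".format(i) for i in range(450)]
mgmt_ip_to_instance_id_map = {}
for batch in chunked(ips, 200):
    mgmt_ip_to_instance_id_map.update({ip: "uuid-" + ip for ip in batch})
```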
# Check if both site name and IP address list are provided
# Log an error message if mgmt_ip_to_instance_id_map is empty
# Validate if sync is required
# Categorize the devices based on status - "COMPLIANT", "NON_COMPLIANT", "OTHER"(status other than COMPLIANT and NON_COMPLIANT)
# Validate if all devices are "COMPLIANT" - then sync not required
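The categorization-and-skip logic above can be sketched as follows; the function name, bucket layout, and sample statuses are illustrative, not the module's actual API:

```python
def categorize_devices(status_by_ip):
    """Bucket devices into COMPLIANT, NON_COMPLIANT, and OTHER."""
    buckets = {"COMPLIANT": [], "NON_COMPLIANT": [], "OTHER": []}
    for ip, status in status_by_ip.items():
        key = status if status in ("COMPLIANT", "NON_COMPLIANT") else "OTHER"
        buckets[key].append(ip)
    return buckets

# Usage with hypothetical device statuses.
buckets = categorize_devices({
    "10.0.0.1": "COMPLIANT",
    "10.0.0.2": "NON_COMPLIANT",
    "10.0.0.3": "IN_PROGRESS",
})
# Sync is only skipped when every device is already COMPLIANT.
sync_required = bool(buckets["NON_COMPLIANT"] or buckets["OTHER"])
```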
# Initialize the lists/dicts
# Iterate through each device UUID in the run compliance parameters
# Find the corresponding device IP for the given device UUID
# Add the device IP to the device list
# Initialize the response list for the device IP if not already present
# Check if categories are specified and fetch details for each category of the device
# Fetch compliance details for the device without specific category
# If no compliance details were found, update the result with an error message
# Execute the compliance check operation
# Make an API call to synchronize device configuration
# If failure reason is provided, include it in the error message
# If no failure reason is provided, generate a generic error message
# Update the result with failure status and log the error message
# Get task status for the current batch
# Store the result for the current batch
# Append the current batch result to the batches_result list
# Iterate over each batch in the batches_status list
# Check if the task status is successful
# Check if the batch has already been retried with batch size of 1
# Re-run the compliance check for the failed batch with batch size of 1
# Recursively validate the batch results and append the successful device IDs
# Lists to store compliant and non-compliant devices
# Iterate over each device's compliance report
# Assume the device is compliant unless a non-compliant status is found
# Check each compliance type's status
# Classify the device based on its compliance status
# Log the results
# Reverse the mgmt_ip_to_instance_id_map to map device IDs to IPs
# Determine unsuccessful devices
# "success_devices": successful_ips,
# Determine the final operation result
# Retrieve the parameters for sync device config
# Extract the list of device IDs from sync_device_config_params
# Create device_ip_list by mapping the device IDs back to their corresponding IP addresses
# Retrieve and return the task status using the provided task ID
# Get compliance details before running sync_device_config
# Get compliance details after running sync_device_config
# Get the device IDs to check
# Initialize the status lists
# Iterate over the device IDs and check their compliance status
# Find the corresponding IP address from the mgmt_ip_to_instance_id_map
# Get the status before
# Get the status after
# Check if all statuses changed from "NON_COMPLIANT" to "COMPLIANT"
# Initialize parameters
# Store input parameters
# Remove Duplicates from list
# Validate the provided configuration parameters
# Retrieve device ID list
# Run Compliance Parameters
# Sync Device Configuration Parameters
# Construct the "want" dictionary containing the desired state parameters
# Action map for different network compliance operations
# Check if all action_map keys are missing in self.want
# Iterate through the action map and execute specified actions
# Execute the action and check its status
# Define the specification for the module's arguments
# Initialize the Ansible module with the provided argument specifications
# Initialize the NetworkCompliance object with the module
# Get the state parameter from the provided parameters
# Check if the state is valid
# Validate the input parameters and check the return status
# Get the config_verify parameter from the provided parameters
# Iterate over the validated configuration parameters
# Exit with the result obtained from the NetworkCompliance object
# Check if the parameter value is a string
# update the role if role exists
# Create the role
# update the user if the user exists
# Create the user
# Determine if default entries should be added based on role_operation
# Process each assurance rule
# Process each network analytics rule
# Process each network design rule
# Process each network provision rule
# Handle nested inventory_management
# Process each network service rule
# Process each platform rule
# Process each security rule
# Process each system rule
# Process each utilities rule
# Extract role name and description from the configuration
# List of functions to process each section of role configuration
# Process each section and check for errors
# Construct the final payload
# Compare and update first name
# Create the updated dictionary
# Compare and update last name
# Compare and update username
# Compare and update email
# Compare and update role list
# Code to validate ccc config for merged state
# Basic Ansible type check or assign default.
# A common approach when a module relies on optional dependencies that are not available during the validation process.
# Check if Pyzipper is installed
# Check if Pathlib is installed
# Check if the config is provided in the playbook
# Define the specification for device configuration backup parameters
# Validate device_configs_backup params
# Check if there are any invalid parameters
# If validation is successful, update the result
# Mapping from input parameter names to API Specific parameter names
# Iterate over the parameters and add them to the result dictionary if present in the config
# If the parameter is serial_number_list, modify each serial number
# Handle case where serial numbers are provided as a single comma-separated string
# Split if there are multiple serial numbers in one string
# Add wildcard prefix and suffix
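The split-and-wrap handling above can be sketched as follows. The `*` prefix/suffix follows the comment; the exact wildcard syntax expected by the API is an assumption, as is the helper name:

```python
def format_serial_numbers(raw_values):
    """Split comma-separated serial numbers and wrap each in wildcards.

    The '*' prefix/suffix mirrors the comments above; the wildcard
    syntax the API actually expects is an assumption in this sketch.
    """
    serials = []
    for value in raw_values:
        # Handle serial numbers provided as a single comma-separated string
        for part in value.split(","):
            part = part.strip()
            if part:
                serials.append("*{0}*".format(part))
    return serials

result = format_serial_numbers(["FCW2140L0A1, FCW2140L0B2", "FOC1234X0Z9"])
```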
# Define device families to skip
# Initialize the mapping dictionary
# Check if site_list is provided in the config
# Use a set to ensure unique sites
# Retrieve device IDs for each site in the unique_sites set
# Get additional device list parameters excluding site_list
# If no site_list is provided, use other parameters to get device IDs
# Iterate through each IP address in the list to validate
# Check if the IP address is a valid IPv4 address
# Log a success message indicating all IP addresses are valid
# Define the regex pattern for a valid password
# Log the user-defined password for debugging purposes
# Check if the password matches the defined pattern
# Define the set of punctuation characters allowed in the password
# Combine allowed characters: punctuation, letters, and digits
# Create a list ensuring the password meets the criteria
# Form the password
# Log the password generation event
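The criteria-meeting password construction above can be sketched as follows. The punctuation set and helper name are hypothetical; the real module defines its own allowed set and pattern:

```python
import random
import string

# Hypothetical allowed punctuation set; the real module defines its own.
ALLOWED_PUNCTUATION = "-=;,.~!@#$%^&*()_+{}[]|:?"

def generate_password(length=12):
    """Build a password guaranteed to contain at least one letter,
    one digit, and one allowed punctuation character."""
    pool = string.ascii_letters + string.digits + ALLOWED_PUNCTUATION
    # Seed one character from each required class, then fill the rest.
    chars = [
        random.choice(string.ascii_letters),
        random.choice(string.digits),
        random.choice(ALLOWED_PUNCTUATION),
    ]
    chars += [random.choice(pool) for _ in range(length - len(chars))]
    random.shuffle(chars)
    return "".join(chars)

pw = generate_password()
```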
# Construct the parameters dictionary for exporting device configurations
# Log the download URL for debugging purposes
# Check if response returned
# Attempt the first function call
# Log and attempt the second function call if the first fails
# Handle final failure case
# Create the directory path if it does not exist
# Generate a timestamp and set the zipped file path
# Convert the binary file data to a BytesIO object for processing
# Unzip the file using the provided file password
# Perform additional tasks after breaking the loop
# Download the file using the additional status URL
# Unzip the downloaded file
# Append password information if unzipping is not required
# Retrieve and log configuration parameters
# Default to 'backup' if not provided
# Validate conflict if 'ALL' is selected with other types
# If 'ALL' is selected, expand to all supported file types
# Validate the IP address list if provided
# Define the time window in seconds to check for recently modified files
# List of files modified within the specified time window
# Check if there are any files modified within the window
# Validate discovery params
# Code to validate Cisco Catalyst Center config for merged state
# Code to validate Cisco Catalyst Center config for deleted state
# Copyright (c) 2025, Cisco Systems
# Defer this feature due to an API issue; once it is fixed we will address it in the upcoming release iac2.0
# Retrieve device IPs from the configuration
# If device IPs are not available, check hostnames
# If hostnames are not available, check serial numbers
# If serial numbers are not available, check MAC addresses
# If no information is available, return an empty list
# With this File ID call the Download File by FileID API and process the response
# Create a PyZipper object with the password
# Assuming there is a single file in the zip archive
# Extract the content of the file with the provided password
# Now 'file_content_binary' contains the binary content of the decrypted file
# Since the content is text, we can decode it
# Now 'file_content_text' contains the text content of the decrypted file
# Parse the CSV-like string into a list of dictionaries
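The CSV-to-dicts step above can be sketched with the standard library's `csv.DictReader`. The column names below are hypothetical; the real export's headers differ:

```python
import csv
import io

def parse_device_csv(text):
    """Parse decoded CSV content into a list of dicts, one per row."""
    return list(csv.DictReader(io.StringIO(text)))

# Usage with hypothetical column names; the real export's headers differ.
content = "hostname,serial\nsw1,FCW2140L0A1\nsw2,FCW2140L0B2\n"
devices = parse_device_csv(content)
```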
# Now that all device UUIDs are collected, call the export device list API
# Export the device data in batches of 500 devices at a time by default
# Write the data to a CSV file
# Code that triggers the resync operation using the retrieved device IDs and the force sync parameter.
# Resync the devices in batches of 200 at a time in inventory by default
# Get and store the apEthernetMacAddress of given devices
# Now call the Reboot Access Point API
# Already provisioned
# Error in provisioning
# Check if device reaches managed state
# Handle final provisioning results
# Collect site and device information
# Collect the device parameters from the playbook to perform wireless provisioning
# This resync retry interval is in seconds; the device status is checked at this interval
# Check until the device comes into the managed state
# Now we have provisioning_param so we can do wireless provisioning
# Not returning from here, as some devices may hit an exception
# while others get provisioned successfully, or some devices may already be provisioned
# Check if all the devices are already provisioned; if so, return from here
# Get the list of device that are present in Cisco Catalyst Center
# Split the payload into batches of 500 devices (by default) to match the device credentials
# Call the Get interface details by device IP API and fetch the interface Id
# Check if interface_details is None or does not contain the 'id' key.
# Now we call the update interface details API with the required parameters
# Check whether the interface needs an update
# First check whether the device is present in Cisco Catalyst Center
# If no update flagged, check for completed maintenance to schedule new maintenance
# Add the validation for the recurrence end time and recurrence interval
# To add the devices in inventory
# Update the role of devices having the role source as Manual
# If the device already has the same role in DNAC, there is no need to change the state
# Update device details and credentials
# If the user-defined field (UDF) is not present, create it and add multiple UDFs to a specific device or a list of devices
# Check if the global user-defined field exists; if not, create it with the given field name
# Create the Global UDF
# Get device Id based on config priority
# Now add the global UDF to the device with the given ID
# Once the wired device is added, we assign it to a site and provision it
# Once the wireless device is added, we assign it to a site and provision it
# Find out which devices already have maintenance scheduled and which do not yet
# Handle Global User Defined Fields (UDF) Deletion
# Loop over devices to delete them
# Execute API call to delete UDF
# Check for task ID in the response and monitor its progress
# If the task is successful, update status and log the result
# If there's an error, log and handle it
# Validate provision params
# Validate template params
# Check if project exists.
# DNAC returns project details even if the substring matches.
# Hence check the projectName retrieved from DNAC.
# Check if specified template in playbook is available
# Get available templates which are committed under the project
# This check will fail if the specified template exists but is not committed in DNAC
# There are committed templates in the project but the
# one specified in the playbook may not be committed
# ProjectId is required for creating a new template.
# Store it with other template parameters.
# Mandate fields required for creating a new template.
# Update language, deviceTypes and softwareType if not provided for an existing template.
# Template does not need update
# Template needs to be versioned
# _import_project = _import.get("project")
# "payload": "{0}".format(payload)
# WIRED_CLIENT and # UNIFIED_AP and # WIRELESS_CLIENT # ROUTER
# WIRELESS_CLIENT
# UNIFIED_AP
# SWITCH_AND_HUB and # ROUTER and # UNIFIED_AP and # WIRELESS_CONTROLLER
# WIRELESS_CONTROLLER
# SWITCH_AND_HUB
# SWITCH_AND_HUB and # ROUTER
# SWITCH_AND_HUB and # ROUTER and # WIRELESS_CONTROLLER
# ROUTER and # WIRELESS_CONTROLLER
# SWITCH_AND_HUB and # ROUTER and # WIRELESS_CONTROLLER and # UNIFIED_AP
# Define validation rules for KPI names and device families
# Skip unknown KPI names
# Validate threshold values based on KPI name and device family
# Exit early on failure
# Defer this feature due to an API issue; once it is fixed we will address it in an upcoming release
# Validate 'queuing_profile'
# Validate 'application_set_details'
# Validate 'application'
# Validate 'application_policy'
# Validate the input configuration
# Fetching application data
# Check if update is required for site
# Check if the site IDs match
# Compare the site_ids and current_site_ids
# Application data (replace with actual data or mock data)
# Populate lists based on relevance
# Process current application sets
# Handle known relevance types
# Track application sets
# Determine if update is required
# Exit loop early
# Check the task response status
# Handle successful update
# Handle failed update
# Check for application_set_id
# Now conditionally add fields to the payload
# Construct the full payload
# Only add to payload if the value exists
# Add specific fields for 'server_name', 'url', or 'server_ip'
# Check if 'dscp' exists in instance_ids and is not None
# Compare and merge `want_bandwidth_settings` and `have_bandwidth_settings`
# Normalize key to uppercase with underscores
# Current DSCP settings from the current profiles
# Construct the payload for DSCP customization
# Loop through the current_profiles to extract the instanceId for each speed
# Store the instanceId in the dictionary with interfaceSpeed as the key
# Loop through the speeds and bandwidth settings to create the clauses dynamically
# Loop through the speeds and bandwidth settings for this profile
# Compare with the current profile description
# If the description is the same as the current one, retain it
# No new description provided, retain current one
# Check if 'is_common_between_all_interface_speeds' is false
# Check if 'is_common_between_all_interface_speeds' exists in 'bandwidth_settings'
# To track which application sets were deleted from which policies
# To track missing application sets for policies
# Loop through each policy in the config
# Fetch current application policy details
# Store the IDs of application sets or policies to be deleted
# List to track valid application sets
# List of application sets not found
# If application set name is provided, check if they exist or are already deleted
# Add the application set's ID
# Identify any application sets that are missing in the policy
# Proceed with valid application sets even if some are not available
# If application_set_name is not in the config, delete the entire policy
# Sending the list of application set or policy IDs for deletion
# Pass the collected IDs for deletion
# Proceed only if the status is successful
# If specific application sets were provided for deletion
# If no application sets were specified, the whole policy is deleted
# Track the success message
# Track the failed message
# Reporting application set deletions first
# Reporting missing or already deleted application sets with policy names in the required format
# Now collect all missing sets and group by policy
# Ensure only policies with missing sets are reported
# Reporting policy deletions
# Join all the messages together
# Determine final operation result
# === APPLICATIONS ===
# === APPLICATION POLICIES ===
# === QUEUING PROFILES ===
# Fetch application details
# If 'timeout' is not provided, use 'default_timeout'
# Determine the role based on whether the auth server exists and if the role is specified
# Use the role from 'auth_policy_server' if available, otherwise default to "secondary"
# Use the role from 'auth_server_details'
# sleep time after checking the status of the response from the API
# Check whether the Authentication and Policy Server exists; if not, create it and return
# Authentication and Policy Server exists; check whether an update is required
# Edit API not working, remove this
# Authentication and Policy Server exists
# Check the task status
# Validate configuration against the specification
# Increment offset for pagination
# Direct API call as SDK is not available yet
# This if condition will be removed once the CLI unassign template API upgrade is released
# The if condition below is for creating the switch profile
# Code to check if the image already exists in Catalyst Center
# Fetch image_id for the imported image for further use
# Cisco Catalyst Center returns project details even if the substring matches.
# Hence check the projectName retrieved from Cisco Catalyst Center.
# This check will fail if the specified template is not committed in Cisco Catalyst Center
# Fetch existing project details based on the name
# Compare values of the current key
# If no mismatches are found, configurations match
# Fetch the project ID using the project name
# If a valid project ID is found, proceed to delete the project
# Indicate that the project was deleted
# Get the existing project info
# Prepare update parameters
# Log the update parameters
# Ensure the response structure in self.result is initialized properly
# Initialize as a list with one empty dictionary
# Add the template name to the committed list
# Check whether the above template is committed or not
# Extract site details
# Check if the template is available but not yet committed
# Commit or version the given template in the Catalyst Center
# Get the latest version template ID
# Validate runtime scope type
# If the scope is not RUNTIME, we take the value directly from the resource_param dictionary
# Check if the elapsed time exceeds the timeout
# Wait for the specified poll interval before the next check
# Get the deployment id of the template if it gets deployed successfully on the devices
# Filter devices based on the device family or device role
# Filter devices based on the device tag given to the devices
# Get the device ids associated with the given tag for given site
# No need to proceed when there are no elements in the list
# No need to check for the duplicates when there is only one element in the list
# IP_BASED_TRANSIT cannot be updated
# For SDA transit types, compare sdaTransitSettings
# Check whether the transit exists; if not, create it and continue
# Handle fabric transit update evaluation
# Transit exists
# Check for 'dhcp_server_ips'
# Check for 'dns_server_ips'
# If the reserved pool has only IPv4, pool_info_length will be 1.
# If the reserved pool has both IPv4 and IPv6, pool_info_length will be 2.
# If the ipv6 flag is set in the second element, ipv4_index will be 0 and ipv6_index will be 1.
# If the ipv6 flag is set in the first element, ipv4_index will be 1 and ipv6_index will be 0.
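The index selection spelled out above can be sketched as a small helper. The `pool_info` shape (a list of dicts carrying an `ipv6` flag) follows the comments; the helper name itself is hypothetical:

```python
def get_address_indexes(pool_info):
    """Determine the ipv4/ipv6 positions in a reserved pool's address list.

    Hypothetical helper: assumes each entry carries an 'ipv6' boolean flag,
    as the surrounding comments describe.
    """
    if len(pool_info) == 1:
        # Only IPv4 is reserved
        return 0, None
    # Two entries: whichever carries the ipv6 flag is the IPv6 entry
    if pool_info[1].get("ipv6"):
        return 0, 1
    return 1, 0
```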
# Extract DHCP details
# Extract DNS details
# Extract telemetry details
# Extract NTP server details
# Extract time zone details
# Extract banner (Message of the Day) details
# Extract AAA network and client/endpoint settings
# Prepare the network details for Cisco Catalyst Center configuration
# Handle the second AAA server network_aaa2
# Handle the second client AAA server client_and_endpoint_aaa2
# If there are no DNS servers, provide an empty list or handle it accordingly.
# To collect all error messages
# If there are errors, return a failure status with all messages
# Call the Catalyst Center API using family/function
# Check if the API call was successful
# Store response in 'have'
# Handle any exceptions that occur during the API call
# Check if the Reserved Pool exists in Cisco Catalyst Center
# If the previous name doesn't exist in Cisco Catalyst Center, return with error
# Retrieve device controllability details from the config
# Check if the CCC version is 2.3.7.9 or later
# Process results for older platform versions
# Process results for newer platform versions
# Search by name
# Search by CIDR
# Search for the global pool by subnet in nested addressSpace
# Initialize the desired Global Pool
# Valid pool types enforced by API
# Process each pool in the global_ippool
# Normalize pool_type: first letter capitalized, rest lowercase
# Check for missing required parameters in the playbook
# Validate total_hosts
# Calculate host bits (add 2 for network + broadcast in IPv4)
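The host-bit calculation mentioned above can be illustrated like this; `host_bits_for` is a hypothetical helper name, and the "+2 for network and broadcast" rule comes from the comment:

```python
import math

def host_bits_for(total_hosts, ip_version=4):
    """Smallest number of host bits that can hold total_hosts usable hosts.

    For IPv4 we add 2 to account for the network and broadcast addresses.
    """
    needed = total_hosts + 2 if ip_version == 4 else total_hosts
    return math.ceil(math.log2(needed))

# 500 usable hosts need 502 addresses -> 9 host bits (a /23 in IPv4)
assert host_bits_for(500) == 9
```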
# Process each reserved pool
# Prepare the IPv4 address space dictionary
# Conditionally add gateway and slaac_support
# Conditionally add DHCP and DNS servers
# Conditionally add address statistics
# Process IPv6 details if enabled
# Ensure it's a list with at least one non-empty value
# Use pop to avoid KeyError if key doesn't exist
# Check and update configure_dnac_ip
# Check and update ip_addresses
# If snmpServer is not defined in item, use have_network_details
# Retrieve existing syslog details
# Update configure_dnac_ip if provided, else fallback to existing value
# Update ip_addresses if provided, else fallback to existing value
# Use have value if syslogServer is not defined in the item
# Set to None if no value exists in item or have
# Handle collectorType
# Check if collector_type is None and assign from 'have' if so
# Ensure mandatory fields for TelemetryBrokerOrUDPDirector
# Attempt to retrieve values from `have`
# Log the values after attempting to assign from `have`
# If still missing, log failure and set status
# Add address and port
# Invalid collector_type
# Handle enableOnWiredAccessDevices (optional boolean field)
# Basic validation
# Build desired configuration dictionary
# Process Device Controllability details (only for Catalyst Center version >= 2.3.7.9)
# Check create_global_pool; if yes, create the global pool in batches
# Define batch size
# Check update_global_pool; if yes, update the global pool in batches
# Remove unnecessary keys
# Classify global pools into create and update lists
# Avoid index errors if `want_global_pool` has fewer items
# Check create_global_pool; if yes, create the global pool
# Check if the pool exists and matches the current or previous name
# Update the 'msg' field
# Update the 'response' field with your payload
# field_details = self.config[0].get('add_user_defined_field')
# Export the device data in a batch of 500 devices at a time
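The 500-device batching can be sketched with a simple chunking generator; the batch size of 500 comes from the comment, everything else is illustrative:

```python
def chunked(items, batch_size=500):
    """Yield successive batches of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

device_ids = list(range(1250))
# 1250 devices export as batches of 500, 500 and 250
assert [len(b) for b in chunked(device_ids)] == [500, 500, 250]
```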
# Check the provisioning status of device
# Update list of interface details on specific or list of devices.
# Define the expected specification for RMA parameters
# Remove None values from valid_param
# Perform config validation
# Check if 'want' dictionary is valid
# Iterate through identifier keys to find valid device combinations
# Check if faulty device exists
# Check if replacement device exists
# Check if any valid identifier combination was not found
# Validate device names
# Validate IP addresses
# Validate serial numbers
# Monitor the task status using check_rma_task_status
# Attempt to unmark the device
# Combine both error messages
# If task is initiated successfully, monitor the replacement status
# If we've exhausted all retries without a definitive result
# Basic Ansible type check and assigning defaults.
# Loop through each ICAP settings batch
# Update to uppercase
# Update ota_band value
# Remove 'GHz', strip spaces, and convert to float or int
# Convert to int if it's a whole number (like 5.0)
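A minimal sketch of the `ota_band` normalization described above; the exact input strings the module accepts are an assumption:

```python
def normalize_ota_band(value):
    """Turn a band string like ' 5 GHz ' into 5, or '2.4GHz' into 2.4."""
    # Remove 'GHz', strip spaces, and convert to a number
    number = float(value.upper().replace("GHZ", "").strip())
    # Whole numbers (like 5.0) become ints
    return int(number) if number.is_integer() else number

assert normalize_ota_band(" 5 GHz ") == 5
assert normalize_ota_band("2.4GHz") == 2.4
```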
# Normalize and validate assurance_icap_download capture type
# Process WLC Name
# Process AP Name
# Using hostname in the API call
# Assuming the device list response contains a list of devices
# Extract parameters
# Check ap_mac requirement for certain capture types
# Build request params
# Handle time filtering if both are provided
# Log the parameters used for retrieving the PCAP file ID
# Execute API call
# Check if response is an empty list
# Extract response dictionary
# Extract the first file ID
# Convert to epoch milliseconds
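The epoch-millisecond conversion might look like this; the `'%Y-%m-%d %H:%M:%S'` input format and UTC interpretation are assumptions:

```python
from datetime import datetime, timezone

def to_epoch_millis(timestamp_str):
    """Convert a timestamp string to epoch milliseconds (assumed UTC)."""
    dt = datetime.strptime(timestamp_str, "%Y-%m-%d %H:%M:%S")
    return int(dt.replace(tzinfo=timezone.utc).timestamp() * 1000)

assert to_epoch_millis("1970-01-01 00:00:01") == 1000
```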
# Extract max duration across all capture jobs
# If response contains binary data, save it as a .pcap file
# Ensure the target directory exists
# Write the binary data to the .pcap file
# Store task ID
# Validate response
# Check if timeout has been reached
# Log before sleeping and retry
# Polling loop until we get a valid response or timeout
# Check if valid response is received
# Exit polling loop if a valid response is found
# Wait before retry
# After loop, process the final response
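The polling pattern above (retry until a valid response or timeout, sleeping between attempts) can be sketched generically; `fetch` and `is_valid` stand in for the real task-status call:

```python
import time

def poll_until_valid(fetch, is_valid, timeout=60, poll_interval=2):
    """Poll fetch() until is_valid(response) or the timeout elapses."""
    start = time.time()
    response = None
    while time.time() - start < timeout:   # check if timeout has been reached
        response = fetch()
        if is_valid(response):
            return response                # exit polling loop on a valid response
        time.sleep(poll_interval)          # wait before retry
    return response  # caller processes the final (possibly invalid) response
```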
# Validate task details
# Implementation to retrieve CLI commands
# Check if an ICAP configuration with the same parameters is already in progress
# Check if an ICAP with the same config is already in progress
# Update keys in assurance_icap_details
# Iterate through each dictionary in the list and pop the specified keys
# Generate the CLI commands that will be applied to the device.
# to view the CLIs that will be applied to the device
# Proceed with deployment if successful
# Check if response is valid
# Log before sleeping
# Wait before retrying
# Validate assurance_icap_settings if provided
# If none of the deployments were successful
# Validate assurance_icap_download if provided
# Check files modified within the last 10 seconds
# Regex pattern for the virtual network name: only letters, numbers and underscores, 1-16 characters long.
# Regex pattern for the fabric VLAN name: alphanumeric characters, underscores and hyphens, 1-32 characters long.
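The two name rules described above translate directly into regexes; the character classes follow the comments:

```python
import re

# VN names: letters, digits and underscores, 1-16 characters
VN_NAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{1,16}$")
# Fabric VLAN names: alphanumerics, underscores and hyphens, 1-32 characters
FABRIC_VLAN_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

assert VN_NAME_PATTERN.match("Corp_VN_01")
assert not VN_NAME_PATTERN.match("has-hyphens")   # hyphens not allowed in VN names
assert FABRIC_VLAN_PATTERN.match("data-vlan_100")
```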
# Validate the given traffic type for Vlan/VN/Anycast configuration.
# Validate the correct fabric_type given in the playbook
# Validate the Fabric Vlan name against the regex
# Validate the VN name against the regex
# Validate the fabric_type given in the playbook
# Collect the gateway id with combination of vn_name, ip_pool_name and fabric id
# Check whether the fabric VLAN needs an update (only for a fabric site)
# The given VN is already present in Cisco Catalyst Center; check whether it needs an update.
# The given virtual network does not need any update
# Already present in Cisco Catalyst Center; check whether an update is needed.
# The given Anycast gateway details are not present in the system; create them
# If site details are given in the input, operate on the site only and do not delete the VN
# We cannot remove the anchored site id while subscriber sites exist
# Call the update API to remove the fabric sites from the given virtual network
# Create/Update fabric Vlan in Cisco Catalyst Center
# Create/Update virtual network in Cisco Catalyst Center
# Create/Update Anycast gateway in Cisco Catalyst Center with fabric id, ip pool and vn name
# Verify the deletion of layer2 Fabric Vlan from the Cisco Catalyst Center
# Need ID of the anycast gateway to delete the anycast gateway
# Delete layer3 Virtual network from the Cisco Catalyst Center
# Verify the creation/update of the fabric Vlan in the Cisco Catalyst Center
# Verify the creation/update of the layer3 Virtual Network in the Cisco Catalyst Center
# Verify the creation/update of the Anycast gateway in the Cisco Catalyst Center with fabric id, ip pool and vn name
# Verify the deletion of layer3 Virtual Network from the Cisco Catalyst Center
# Verify the deletion of Anycast gateway from the Cisco Catalyst Center
# Initialize the Virtual Network object
# Validate the provided state
# Validate input parameters
# Process each configuration
# Invoke the API to check status and log the output of each fabric VLAN, virtual network, and
# anycast gateways update on the console.
# Exit the module with the results
# Check if configuration is available
# Expected schema for configuration parameters
# Validate params
# Set the validated configuration and update the result with success status
# Initialize an empty list to store the fabric IDs
# Iterate over each site's information in the site details
# Check for provider_virtual_network
# Check for subscriber_virtual_networks
# Initialize the parameters dictionary with basic required parameters
# Check if 'fabric_sites' are provided and site details are available
# Create a dictionary with the extranet policy ID
# Initialize an empty dictionary to store site details
# Iterate over each site in the provided fabric sites list
# Validate if the site exists and retrieve its ID
# Call the SDA 'get_fabric_sites' API with the provided site ID
# Process the response if available
# Log an error message and fail if an exception occurs
# Get the fabric ID using the site name and site ID
# Execute the API call to get extranet policies
# Initialize variables to default values
# Check if the policy details were retrieved successfully
# Iterate over each key in the extranet policy details and compare the details
# Compare lists regardless of order
# Compare values directly
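The key-by-key comparison with order-insensitive list handling might look like this (a sketch that assumes list elements are sortable):

```python
def details_match(requested, current):
    """Compare two detail dicts key by key; lists match regardless of order."""
    for key, want in requested.items():
        have = current.get(key)
        if isinstance(want, list) and isinstance(have, list):
            if sorted(want) != sorted(have):   # compare lists regardless of order
                return False
        elif want != have:                     # compare values directly
            return False
    return True

assert details_match({"fabric_ids": ["b", "a"]}, {"fabric_ids": ["a", "b"]})
assert not details_match({"name": "p1"}, {"name": "p2"})
```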
# Wrap the parameters in a payload dictionary
# Make the API call to add the extranet policy and return the task ID
# Get the name of the extranet policy from the input parameters
# Make the API call to update the extranet policy and return the task ID
# Make the API call to delete the extranet policy and return the task ID
# Check if the given extranet policy exists; if so, store the current extranet policy info
# Initialize want
# Identify if policy already exists or needs to be created
# playbook CLI Credential details
# All CLI details from Cisco Catalyst Center
# Cisco Catalyst Center details for the CLI Credential given in the playbook
# Playbook snmp_v2c_read Credential details
# All snmp_v2c_read details from the Cisco Catalyst Center
# Cisco Catalyst Center details for the snmp_v2c_read Credential given in the playbook
# Playbook snmp_v2c_write Credential details
# All snmp_v2c_write details from the Cisco Catalyst Center
# Cisco Catalyst Center details for the snmp_v2c_write Credential given in the playbook
# Playbook https_read Credential details
# All https_read details from the Cisco Catalyst Center
# Cisco Catalyst Center details for the https_read Credential given in the playbook
# Playbook https_write Credential details
# All https_write details from the Cisco Catalyst Center
# Cisco Catalyst Center details for the https_write Credential given in the playbook
# Playbook snmp_v3 Credential details
# All snmp_v3 details from the Cisco Catalyst Center
# Cisco Catalyst Center details for the snmp_v3 Credential given in the playbook
# All CLI details from the Cisco Catalyst Center
# All httpRead details from the Cisco Catalyst Center
# All httpWrite details from the Cisco Catalyst Center
# Get the result global credential and want_update from the current object
# If no credentials to update, update the result and return
# Explicitly return the empty dictionary if input_value is {}
# If input_value is None, fall back to global_value (or return {} if global_value is None)
# Otherwise, return the input_value as-is
# Reset for each iteration
# List of credential types to handle
# Process each credential type
# If not provided in the input
# Fetch from global credentials
# Add site ID to parameters
# Append final response for logging
# Store original template
# Skip if credential_params is empty
# Validate pnp params
# Iterate through each device dictionary in the input list
# The "device_info" key contains a list, so we loop through it
# If we've seen this serial number before, it's a duplicate
# If this is the first time, add it to our set of seen serials
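The seen-set duplicate detection can be sketched as follows; the nested `device_info` shape follows the comments, and the helper name is hypothetical:

```python
def find_duplicate_serials(devices):
    """Return serial numbers that appear more than once across device_info lists."""
    seen, duplicates = set(), []
    for device in devices:
        # The "device_info" key contains a list, so loop through it
        for info in device.get("device_info", []):
            serial = info.get("serial_number")
            if serial in seen:        # seen before -> duplicate
                duplicates.append(serial)
            else:
                seen.add(serial)      # first sighting: record it
    return duplicates
```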
# Claiming is only allowed for single addition of devices
# Check if the given device exists in the PnP inventory; if so, store the device id
# Check if the given image exists; if so, store the image_id
# Check whether the project has templates
# Check if the given site exists; if so, store the current site info
# Check the device already added and claimed for idempotent or import devices
# Check if the response contains the expected data
# Update the instance attribute
# If no device found, raise an error
# If the response is empty, log a warning
# Validate each telemetry entry
# Retrieve the current Cisco Catalyst Center version for comparison
# Check if provisioning should be handled based on DNAC version:
# - If DNAC version is ≤ 2.3.5.3, always proceed with provisioning logic.
# - If DNAC version is ≥ 2.3.7.6 AND the device is wireless, follow wireless provisioning logic.
# Fetch device details from validated config
# "enable" or "disable"
# Enable telemetry
# Disable telemetry
# Ensure device_id exists before proceeding
# Combine messages and set result flags
# "identity_psk": {"type": "bool"},
# "device_type": {"type": "str"},
# Validate params against the expected schema
# Check if any invalid parameters were found
# Determine required parameters based on state
# Check for missing required parameters
# Validate the length of ssid_name if it is present
# Normalize ssid_type to lowercase for consistent validation
# Define required parameters based on normalized ssid_type
# Validate normalized ssid_type
# Check if the site exists
# Define valid radio bands
# Validate radio_bands if present in radio_policy
# Convert float to int for comparison
# Check if radio_bands_set is a subset of valid_radio_bands
# Validate 2_dot_4_ghz_band_policy
# Validate band_select
# Validate 6_ghz_client_steering
# Validate Quality of Service parameters
# Validate egress QoS
# Validate ingress QoS
# Validate WPA encryption settings
# Validate authentication key management settings
# Validate l3_security settings
# Extract necessary information from l3_security
# Validate l3_auth_type
# Validate auth_server when l3_auth_type is WEB_AUTH
# Validate web_auth_url for specific auth_server types
# Validate sleeping_client_timeout
# Validate AAA settings
# Extract necessary information from aaa
# aaa_override = aaa.get("aaa_override", False)
# mac_filtering = aaa.get("mac_filtering", False)
# Validate AAA for Guest SSID with Central Web Authentication
# Validate enable_posture and pre_auth_acl_name for Enterprise SSID
# Validate mfp_client_protection value against allowed options
# Validate mfp_client_protection for 6 GHz radio bands
# Validate if the value is one of the valid options
# Extract relevant information from wlan_timeouts
# Validate session_timeout if session timeout is enabled
# Ensure session_timeout is within the valid range
# Ensure session_timeout is not provided when session timeout is disabled
# Validate client_exclusion_timeout if client exclusion is enabled
# Ensure client_exclusion_timeout is within the valid range
# Ensure client_exclusion_timeout is not provided when client exclusion is disabled
# Extract necessary information from bss_transition_support
# Validate bss_idle_client_timeout and bss_max_idle_service
# Ensure that bss_max_idle_service is enabled if bss_idle_client_timeout is set
# Check if bss_idle_client_timeout is within the valid range
# Check the length of nas_id
# Define valid range for client_rate_limit
# Validate client_rate_limit if provided
# Check if the client_rate_limit is within the valid range
# Check if the client_rate_limit is a multiple of 500
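The two rate-limit checks combine into one helper; the actual bounds come from the module spec, so they are parameters here, and the sample bounds in the usage are assumptions:

```python
def validate_client_rate_limit(value, lower, upper):
    """A client rate limit must lie in [lower, upper] and be a multiple of 500."""
    if not lower <= value <= upper:   # within the valid range
        return False
    return value % 500 == 0           # multiple of 500

assert validate_client_rate_limit(8500, 8000, 100000000)
assert not validate_client_rate_limit(8600, 8000, 100000000)  # not a multiple of 500
```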
# Define allowed parameters for site-specific overrides
# Iterate over each site-specific override
# Validate parameters in the site override
# Handle nested l2_security validation
# Handle nested aaa validation
# Handle 'deleted' state separately
# Exit after handling the 'deleted' state
# Iterate through each SSID for validation
# Validate required parameters for the SSID
# Validate SSID type parameters
# Validate radio policy parameters
# Validate quality of service parameters
# Validate L2 security and related parameters
# Validate L3 security and AAA configuration parameters
# Validate MFP Client Protection parameters
# Validate Protected Management Frame (802.11w) parameters
# Validate WLAN timeouts parameters
# Validate 11v BSS Transition Support parameters
# Validate NAS ID
# Validate Client Rate Limit
# Validate site-specific override settings parameters
# Iterate over each interface dictionary
# Validate interface_name
# Validate vlan_id if state is "merged"
# Define required parameters based on state
# Define valid choices for various parameters
# Iterate through each power profile for validation
# Normalize values to uppercase for case-insensitive validation
# Additional validation for USB interface
# Additional validation for ETHERNET interface
# Validate interface_id
# Validate parameter_type
# Validate parameter_value
# Define a set of default or weak passwords to check against
# Check if the password matches any default or weak passwords
# Check if the password contains repeated characters
# Check if the password contains simple sequences
# If all checks pass, the password is considered valid
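The three password checks above could be sketched like this; the concrete weak-password set and sequence strings are placeholders, not the module's actual values:

```python
def is_weak_password(password):
    """Flag weak passwords: known defaults, repeated characters, or simple sequences."""
    defaults = {"password", "cisco", "admin", "12345678"}  # placeholder list
    if password.lower() in defaults:
        return True
    if len(set(password)) == 1:                            # e.g. 'aaaaaaaa'
        return True
    sequences = ("0123456789", "abcdefghijklmnopqrstuvwxyz")
    return any(password.lower() in seq for seq in sequences)

assert is_weak_password("Cisco")
assert is_weak_password("aaaaaaaa")
assert not is_weak_password("V4l1d#Pass")
```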
# Validate access_point_authentication choice
# Validate minimum_rssi
# Check if the minimum_rssi value is within the valid range
# Validate transient_interval
# Check if the transient_interval value is within the valid range
# Validate report_interval
# Check if the report_interval value is within the valid range
# Define valid choices for parameters
# Validate range
# Validate rap_downlink_backhaul
# Validate ghz_5_backhaul_data_rates
# Validate ghz_2_4_backhaul_data_rates
# Validate the bridge_group_name length
# Check if power settings are provided
# Check if calendar power profiles are provided
# Iterate over each calendar power profile
# Check if 'ap_power_profile_name' is provided
# Check if 'scheduler_type' is provided and valid
# Validate fields based on scheduler_type
# Validate the format of scheduler_start_time and scheduler_end_time
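Format validation for the scheduler times might use `strptime`; the `%H:%M` format is an assumption about what the module expects:

```python
from datetime import datetime

def valid_schedule_time(value, fmt="%H:%M"):
    """Validate a scheduler start/end time string against a format."""
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

assert valid_schedule_time("08:30")
assert not valid_schedule_time("25:00")  # hour out of range
```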
# Check if 'access_point_profile_name' is provided
# Check if the length of 'access_point_profile_name' is within the valid range
# Validate various aspects of the access point profile
# Check if the length of 'access_point_profile_description' is within the valid range
# Check if the provided 'country_code' is valid
# Check if the provided 'time_zone' is valid
# Check if the provided 'time_zone_offset_hour' is within the valid range
# Check if the provided 'time_zone_offset_minutes' is within the valid range
# Check if the provided 'maximum_client_limit' is within the valid range
# Check if all values are within the set of allowed values
# Ensure the correct interpretation of the range
# Check if the value is within the specified range
# Check if the number of mandatory data rates exceeds the allowed limit
# Check if all mandatory data rates are a subset of the supported data rates
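Both data-rate checks reduce to a length test plus a subset test; the limit of 2 used below is illustrative, not the module's actual maximum:

```python
def validate_data_rates(mandatory, supported, max_mandatory=2):
    """Mandatory rates must not exceed the limit and must be drawn from the
    supported rates."""
    if len(mandatory) > max_mandatory:          # exceeds the allowed limit
        return False
    return set(mandatory).issubset(supported)   # subset of supported rates

assert validate_data_rates([6, 12], [6, 9, 12, 18, 24])
assert not validate_data_rates([6, 7], [6, 9, 12])  # 7 is not supported
```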
# Variable to track the default RF profile
# Iterate over each RF profile
# If the profile is set as default, check for conflicts
# Set a global parameter for default RF profile
# Define validation rules for different radio frequency profile parameters
# Iterate over each profile in the list
# Check if the profile name exceeds the maximum allowed length
# Ensure required parameters are present
# Validate radio_bands
# Validate common parameters
# Validate band-specific parameters
# Perform validation for a single default RF profile only if the state is "merged"
# Check if the value is missing or invalid
# Check if the parameter value is within the allowed list
# Validate set constraints
# Validate range constraints
# Validate mandatory data rates
# Validate dict constraints
# Define regex patterns for valid MAC address formats
# Format: 00:11:22:33:44:55 or 00-11-22-33-44-55
# Format: 0a0b.0c01.0211
# Format: 0a0b0c010211
# Check if the MAC address matches any of the valid patterns
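The three accepted MAC layouts named above map to three regexes:

```python
import re

# The three accepted layouts from the comments above
MAC_PATTERNS = [
    re.compile(r"^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$"),  # 00:11:22:33:44:55 / 00-11-22-33-44-55
    re.compile(r"^([0-9A-Fa-f]{4}\.){2}[0-9A-Fa-f]{4}$"),    # 0a0b.0c01.0211
    re.compile(r"^[0-9A-Fa-f]{12}$"),                         # 0a0b0c010211
]

def is_valid_mac(mac):
    """True if the MAC matches any of the valid patterns."""
    return any(p.match(mac) for p in MAC_PATTERNS)

assert is_valid_mac("00:11:22:33:44:55")
assert is_valid_mac("0a0b.0c01.0211")
assert not is_valid_mac("00:11:22:33:44")  # too short
```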
# Define a mapping of configuration keys to their corresponding validation functions
# Iterate over each configuration component and validate if present
# Perform validation and log the process
# Initialize pagination variables
# Start the loop for paginated API calls
# Update parameters for pagination
# Execute the API call
# Retry with integer offset/limit for specific cases
# Extend the results list with the response data
# Increment the offset for the next iteration
# Return the list of retrieved data
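The offset/limit pagination loop can be sketched generically; `call_api(offset, limit)` is a stand-in for the SDK call, and the 1-based starting offset is an assumption:

```python
def get_paginated(call_api, limit=500):
    """Accumulate results from an offset/limit paginated API."""
    results, offset = [], 1
    while True:
        page = call_api(offset, limit) or []
        results.extend(page)         # extend results with the response data
        if len(page) < limit:        # a short page means the last page
            break
        offset += limit              # increment the offset for the next call
    return results

data = list(range(7))
fetched = get_paginated(lambda off, lim: data[off - 1:off - 1 + lim], limit=3)
assert fetched == data
```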
# Initialize an empty dictionary to hold the parameters for the API call
# Map the site ID to the expected API parameter
# Map the user-provided SSID name to the expected API parameter
# Map the user-provided SSID type to the expected API parameter
# Map the user-provided Layer 2 authentication type to the expected API parameter
# Map the user-provided Layer 3 authentication type to the expected API parameter
# Return the constructed parameters dictionary
# Add the site ID to the parameters
# get_ssids_params["site_id"] = site_id
# self.log("Added 'site_id' to parameters: {0}".format(site_id), "DEBUG")
# Execute the paginated API call to retrieve SSIDs
# Initialize modified SSID dictionary
# Mapping of user-facing WLAN config keys to corresponding API payload field names
# Apply basic mappings directly from ssid_settings
# Apply mappings
# mpsk_settings keys and entry keys
# Handle multiPSKSettings separately
# Auth Key Management settings
# Radio Bands and Policies
# Encryption settings
# L3 Security settings
# Log the initial state of updated_ssid and requested_ssid
# WPA Encryption parameters
# Authentication key management parameters
# Log the start of the reset process for the given parameter type
# If reset_all is True or the parameter is not in requested_ssid, reset it to False
# Reset WPA encryption and authentication key management parameters
# Reset all WPA encryption parameters to False
# Reset all authentication key management parameters to False
# Reset WPA encryption parameters only if they exist in requested_ssid
# Reset authentication key management parameters only if they exist in requested_ssid
# Reset passphrase, multiPSKSettings, and openSsid based on auth_type
# Initialize flags and result variables
# Extract the name and type from the requested SSID
# Iterate over the list of existing SSIDs
# Check if there is an SSID with the same name and type
# Start with a copy of the existing SSID
# Iterate over the parameters of the requested SSID
# Ignore 'sites_specific_override_settings', 'site_id', and 'id'
# Check if the parameter differs in the existing SSID
# Update the parameter in the updated SSID
# Add site_id and id if necessary
# Determine auth_type based on requested_ssid or existing_ssid
# Reset encryption and auth params based on authType
# Exit the loop after handling the match
# Return whether the SSID exists, if an update is required, the updated SSID parameters, and the SSID ID
# Initialize flags and result dictionary
# Compare each parameter in the requested SSID with the existing SSID
# Exit loop on first mismatch
# If an update is required, prepare the updated SSID
# Copy the requested SSID
# Copy the ID from the existing SSID
# Add site_id
# Exit the loop once the matching SSID is found
# Return whether the SSID exists, if an update is required, and the updated SSID parameters
# Assign parameters to ssid_entry
# Add site_id and ssid_id to ssid_params
# Set the SSID name in ssid_entry
# Set the wlanType in ssid_entry
# Remove "ssid" and "wlanType" from ssid_params
# Append the entry to the operation list
# Initialize lists to track SSIDs for creation, update, and no update
# Get Global Site ID and name
# Retrieve all existing SSIDs in the Global site
# Iterate over each requested SSID to determine the operation required
# Prepare structures for create, update, and no-update operations
# Retrieve and log SSID parameters
# Update request and log modified parameters
# Verify existence and need for update
# Determine operation based on existence and update requirement
# Handle site-specific overrides
# Determine site-specific operation
# Handle site-specific overrides for creation
# Append the results to the respective lists
# Initialize the list to hold SSIDs scheduled for deletion
# Iterate over each SSID to verify deletion requirements
# Check for global SSID deletion
# Find the SSID to delete from the global SSIDs
# Check for site-specific SSID deletions
# Validate the site existence and retrieve the site ID
# Find the SSID to delete from the site-specific SSIDs
# Return the list of SSIDs that need to be deleted
# Execute the API call to create the SSID and return the task ID
# Execute the API call to update the SSID and return the task ID
# Execute the API call to update or override the SSID and return the task ID
# Execute the API call to delete the SSID and return the task ID
# Construct the message for successful task completion
# Initialize lists to track successful and failed SSIDs
# Initialize a dictionary to store operation messages
# Iterate over each SSID parameter set for processing
# Handle global SSID operation
# Execute SSID operation (create or update) and get task status
# Handle site-specific SSID operation
# Check if SSID ID needs to be retrieved
# Execute SSID operation (update or override) and get task status
# Track success or failure for the SSID
# Store the message dictionary in the class
# Determine the final operation result based on success and failure lists
# Return the instance for method chaining or further processing
# Define the task name for creating SSIDs
# Call the common processing function to add SSIDs
# Define the task name for updating SSIDs
# Call the common processing function to update SSIDs
# Define the task name for deletion operations
# Initialize lists to track successful and failed SSID deletions
# Iterate over each SSID parameter set for deletion
# Each item in the list is a dictionary with a single key-value pair
# Perform the deletion operation and retrieve the task ID
# Check the status of the deletion task
# Categorize the SSID based on the task status
# Set the final message for successful operations
# Set the final message for failed operations
# Extract global site details
# Extract existing SSID names for comparison
# Initialize lists to track created and failed SSIDs
# Iterate over the global SSIDs in add_ssids_params to verify creation
# Check if the SSID was successfully created
# Return lists of created and failed SSIDs
# Retrieve global site details
# Retrieve all existing SSIDs in the global site
# Function to compare SSID parameters
# Compare each setting in multiPSKSettings while ignoring the passphrase
# Find matching setting by keys other than passphrase
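The passphrase-insensitive comparison of `multiPSKSettings` entries described above can be sketched as follows. The key names (`passphrase`, `priority`, `passphraseType`) are assumptions for illustration, not the module's exact schema:

```python
def psk_settings_match(requested, existing, ignore_keys=("passphrase",)):
    """Compare two multiPSKSettings entries, ignoring sensitive keys.

    Keys present in either dict (except the ignored ones) must agree.
    """
    keys = (set(requested) | set(existing)) - set(ignore_keys)
    return all(requested.get(k) == existing.get(k) for k in keys)
```

A matching entry is then found by scanning the existing list for the first entry where `psk_settings_match` returns True.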
# Lists to track failed verifications
# Iterate over the SSIDs in the update parameters
# Check if SSID is in the existing global SSIDs
# Verify site-specific SSID updates
# Retrieve existing SSIDs for the site
# Initialize lists to track results of deletions
# Iterate over the delete SSIDs parameters
# Only verify deletion for global site SSIDs
# Map the user-provided interface name to the expected API parameter
# Map the user-provided VLAN ID to the expected API parameter
# Execute the paginated API call to retrieve interfaces
# Execute the API call to create the interface and return the task ID
# Execute the API call to update the interface and return the task ID
# Execute the API call to delete the interface and return the task ID
# Retrieve all existing interfaces
# Initialize lists to store interfaces that need to be created, updated, or not changed
# Convert the requested interfaces to a dictionary for quick lookup by interface name
# Iterate over existing interfaces to find matches and differences
# If the interface exists in both, compare fields
# Check for differences
# Add the requested interface with the ID from the existing interface
# If there's no difference, add to no_update_interfaces
# Remove the processed interface from the dictionary
# Remaining items in requested_interfaces_dict are new interfaces to be created
# Calculate total interfaces processed and check against requested interfaces
# Return the categorized interfaces
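The categorization steps above (dict lookup by interface name, field-by-field diff, leftovers become creations) can be sketched like this. The key names `interfaceName` and `id` are assumptions for illustration:

```python
def categorize(requested, existing, key="interfaceName"):
    """Split requested items into create/update/no-update lists by
    comparing against existing items keyed on `key`."""
    to_update, no_update = [], []
    requested_by_name = {item[key]: item for item in requested}
    for current in existing:
        # Pop so the processed interface is removed from the dictionary
        wanted = requested_by_name.pop(current[key], None)
        if wanted is None:
            continue
        # Carry the ID over from the existing interface
        merged = dict(wanted, id=current["id"])
        if any(current.get(k) != v for k, v in wanted.items()):
            to_update.append(merged)
        else:
            no_update.append(merged)
    # Remaining requested items were not found: they need to be created
    to_create = list(requested_by_name.values())
    return to_create, to_update, no_update
```

The total of the three lists can then be checked against the number of requested interfaces as a sanity check.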
# Initialize the list to hold interfaces scheduled for deletion
# Convert existing interfaces to a dictionary for quick lookup by interface name
# Iterate over the requested interfaces
# Check if the interface exists in the existing interfaces
# Return the list of interfaces that need to be deleted
# Initialize an empty list to store the mapped interfaces
# Check if the interfaces list is empty and return the empty mapped list
# Iterate over each interface to perform mappings
# Map each parameter to the required format
# Retain the id if it exists in the interface
# Add the mapped interface to the list of mapped interfaces
# Return the list of mapped interfaces
# Initialize lists to track successful and failed interface operations
# Iterate over each interface parameter set for processing
# Prepare parameters for the operation
# For delete operation, extract only the id
# For create or update operations, use the entire interface data
# Execute the operation and retrieve the task ID
# Check the status of the operation
# Determine if the operation was successful and categorize accordingly
# Define the task name for creating interfaces
# Call the common processing function to add interfaces
# Define the task name for updating interfaces
# Call the common processing function to update interfaces
# Define the task name for deleting interfaces
# Call the common processing function to delete interfaces
# Create a set of existing interface names for quick lookup
# Initialize a list to track interfaces that failed to add
# Iterate over the requested interfaces to verify addition
# Check if the interface exists in the current state
# Create a dictionary of existing interfaces for quick lookup by interface name and VLAN ID
# Initialize a list to track interfaces that failed to update
# Iterate over the requested interfaces to verify updates
# Check if the interface with the specified VLAN ID exists in the current state
# Initialize a list to track interfaces that failed deletion
# Iterate over the requested interfaces to verify deletion
# Check if the interface still exists in the current state
# Map the user-provided power profile name to the expected API parameter
# Execute the paginated API call to retrieve power profiles
# Iterate over each power profile to update rules with defaults
# Iterate over each rule in the power profile
# Assign default values based on the interface type if only one parameter is present
# Return the updated power profiles with default values applied
# Update requested profiles with default values where needed
# Retrieve all existing power profiles from the system
# Initialize lists to store profiles that need to be created, updated, or not changed
# Create a dictionary of existing profiles for quick lookup using the profile name
# Iterate over the updated requested power profiles
# Check if the profile already exists
# Flag to determine if an update is needed
# Compare description
# Compare rules, considering both parameter changes and order changes
# Add the requested profile with the ID from the existing profile
# If there's no difference, add to no_update_profiles
# If the profile does not exist, mark it for creation
# Calculate total profiles processed and check against requested profiles
# Return the categorized profiles
# Initialize the list to hold profiles scheduled for deletion
# Retrieve all existing power profiles
# Convert existing power profiles to a dictionary for quick lookup by power profile name
# Iterate over the requested power profiles
# Check if the power profile exists in the existing power profiles
# Add the requested power profile with the ID from the existing power profile
# Return the list of profiles that need to be deleted
# Initialize an empty list to hold the mapped power profiles
# Check if the power profiles list is empty and return the empty mapped list
# Iterate over each power profile to perform mappings
# Map the rules if they exist in the profile
# Add the mapped rule to the list of mapped rules
# Assign the mapped rules to the profile
# Add the mapped profile to the list of mapped power profiles
# Return the list of mapped power profiles
# Execute the API call to create the power profile and return the task ID
# Execute the API call to update the power profile and return the task ID
# Execute the API call to delete the power profile and return the task ID
# Initialize lists to track successful and failed profile operations
# Iterate over each profile parameter set for processing
# For create or update operations, use the entire profile
# Determine if the operation was successful
# Define the task name for creating power profiles
# Call the common processing function to add power profiles
# Define the task name for updating power profiles
# Call the common processing function to update power profiles
# Define the task name for deleting power profiles
# Call the common processing function to delete power profiles
# Initialize lists to track successful and failed profile additions
# Convert existing power profiles to a set for quick lookup by profile name
# Iterate over the requested power profiles to verify creation
# Check if the profile exists in the existing profiles
# Initialize lists to track successful and failed profile updates
# Convert existing power profiles to a dictionary for quick lookup by profile name
# Iterate over the requested power profiles to verify updates
# Compare rules, ignoring the sequence parameter
# Convert existing profiles to a set for quick lookup
# Initialize a list to track profiles that failed deletion
# Iterate over the requested power profiles to verify deletion
# Check if the profile still exists in the existing profiles
# If it exists, the deletion failed
# Map the user-provided access point profile name to the expected API parameter
# Execute the paginated API call to retrieve access point profiles
# Initialize an empty list to store the mapped access point profiles
# Check if the access point profiles list is empty and return the empty mapped list
# Define a mapping from country names to country codes
# Country name to code mappings
# Iterate over each access point profile
# Map the ID if it exists
# Define mappings for basic profile attributes
# Apply basic mappings
# Map the country code if provided
# Define mappings for management settings
# Map the management settings if they exist
# Define mappings for rogue detection settings
# Map the rogue detection settings if they exist
# Define mappings for mesh settings
# Map the mesh settings if they exist
# Map calendar power profiles if they exist
# Initialize the API-compatible structure for calendar power profiles
# Iterate over each calendar power profile in the provided settings
# Map the main calendar power profile fields
# Map the scheduler fields
# Add the mapped calendar profile to the list
# Add the mapped profile to the list
# Return the list of mapped access point profiles
# Use regex to match the time string pattern
# Extract hour, minute, and period from the matched groups
# Ensure two digits for the hour
# Return the original time string if no match is found
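The time-normalization logic above (regex match, zero-pad the hour, fall back to the original string) might look like this minimal sketch; the exact pattern accepted by the module is an assumption:

```python
import re

def normalize_time(time_str):
    """Normalize e.g. '9:05 AM' to '09:05 AM'.

    Returns the input unchanged when it does not match the
    expected H:MM AM/PM pattern.
    """
    match = re.match(r"^(\d{1,2}):(\d{2})\s*([APap][Mm])$", time_str.strip())
    if not match:
        return time_str
    hour, minute, period = match.groups()
    # Ensure two digits for the hour
    return "{:02d}:{} {}".format(int(hour), minute, period.upper())
```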
# Compare dictionaries key by key
# Compare lists by sorting and comparing elements
# Check if one list is empty while the other is not
# Compare lists element by element
# Normalize and compare time strings
# Direct comparison for other types
# If the requested list is empty, replace the existing list
# Handle list of dictionaries
# Directly update the value
# Retrieve all existing access point profiles from the system
# Iterate over the updated requested access point profiles
# Iterate over each parameter in the requested profile
# Compare requested and existing values
# Handle the specific case for time_zone
# Ensure timeZoneOffsetHour and timeZoneOffsetMinutes are set to zero
# Handle the specific case for calendarPowerProfiles
# Ensure the 'duration' field exists
# Update fields based on the schedulerType
# Copy the existing profile and update it with the requested values
# No changes needed for this profile
# The profile does not exist and needs to be created
# Validate that the total number of processed profiles matches the number of requested profiles
# Retrieve all existing access point profiles
# Convert existing access point profiles to a dictionary for quick lookup by profile name
# Iterate over the requested access point profiles
# Check if the access point profile exists in the existing access point profiles
# Add the requested access point profile with the ID from the existing profile
# Execute the API call to create the access point profile and return the task ID
# Execute the API call to update the access point profile and return the task ID
# Execute the API call to delete the access point profile and return the task ID
# For delete operations, only the ID is needed
# Construct operation message
# Check the status of the operation using the task ID
# Define the task name for logging and operation tracking
# Call the common processing function with the create operation function
# Call the common processing function with the update operation function
# Call the common processing function with the delete operation function
# Retrieve all existing access point profiles to verify against
# Convert existing access point profiles to a set for quick lookup by profile name
# Iterate over the requested access point profiles to verify their creation
# Check if the profile now exists in the existing profiles
# Profile exists, add to successful list
# Profile does not exist, add to failed list
# Iterate over the requested access point profiles to verify updates
# Flag to determine if the update was successful
# Iterate over each requested parameter to verify if the update was applied
# Special handling for management_settings
# Skip verification for sensitive keys within management_settings
# Use the compare_values method to compare the requested and existing values for other keys
# Iterate over the requested access point profiles to verify deletion
# Map the user-provided radio frequency profile name to the expected API parameter
# Execute the paginated API call to retrieve radio frequency profiles
# Find the existing default RF profile
# Unset the default RF profile
# Verify the update operation
# Check the status of the task using the task ID
# Update requested profiles with API-compatible values
# Retrieve all existing radio frequency profiles from the system
# Check if a default RF profile is marked in the configuration
# Iterate over the updated requested radio frequency profiles
# Iterate over each top-level parameter in the requested profile
# Check if the value is a dictionary containing specific properties
# Special handling for comma-separated string of numbers
# Standard comparison for other sub-keys
# Compare requested and existing values using compare_values
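The special case for comma-separated strings of numbers (e.g. channel lists such as "1,6,11" in an RF profile) could be handled with an order-insensitive comparison like this sketch:

```python
def values_equal(requested, existing):
    """Compare two values, treating comma-separated number strings
    as equal regardless of element order."""
    if isinstance(requested, str) and isinstance(existing, str):
        req_parts = [p.strip() for p in requested.split(",")]
        exi_parts = [p.strip() for p in existing.split(",")]
        if all(p.isdigit() for p in req_parts + exi_parts):
            return sorted(map(int, req_parts)) == sorted(map(int, exi_parts))
    # Standard comparison for everything else
    return requested == existing
```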
# Initialize an empty list to store the profiles that need to be deleted
# Retrieve all existing radio frequency profiles
# Convert existing radio frequency profiles to a dictionary for quick lookup by profile name
# Iterate over the requested radio frequency profiles
# Check if the radio frequency profile exists in the existing radio frequency profiles
# Add the requested radio frequency profile with the ID from the existing profile
# Call the API to create an RF profile and return the task ID
# Call the API to update the RF profile and return the task ID
# Call the API to delete the RF profile and return the task ID
# Initialize an empty list to store the mapped profiles
# Iterate over each radio frequency profile in the input list
# Extract radio bands from the current profile
# Create a new dictionary with mapped profile parameters
# If band settings are not provided, return None
# Define the mapping from band settings keys to target keys
# Initialize the mapped dictionary
# Iterate over each band setting and map them if present
# Special handling for lists that need to be joined into strings
# Define mappings for nested structures
# Process spatial reuse settings
# Process coverage hole detection settings
# Process multi-bssid settings
# Additional mappings directly under multi_bssid
# Map settings for 5GHz band if present
# Check and map flexible radio assignment settings for 5GHz band
# Map settings for 2.4GHz band if present
# Map settings for 6GHz band if present
# Check and map flexible radio assignment settings for 6GHz band
# Append the mapped profile to the list of mapped profiles
# Return the final list of mapped profiles
# Determine the profile name based on the operation type
# Define the task name for creating radio frequency profiles
# Call the common processing function with the appropriate creation function and task name
# Define the task name for updating radio frequency profiles
# Call the common processing function with the appropriate update function and task name
# Define the task name for deleting radio frequency profiles
# Call the common processing function with the appropriate deletion function and task name
# Retrieve all existing radio frequency profiles to verify against
# Convert existing radio frequency profiles to a set for quick lookup by profile name
# Iterate over the requested radio frequency profiles to verify their creation
# Convert existing profiles to a dictionary for quick lookup by profile name
# Iterate over the requested radio frequency profiles to verify updates
# Compare the requested and existing values
# Create a set of existing profile names for quick lookup
# Iterate over the requested radio frequency profiles to verify their deletion
# Execute the GET request to retrieve anchor groups
# Handle the case where api_response is None
# Extract the 'response' part of the API response
# Return the list of anchor groups if successfully retrieved
# Map the parameters to the API-supported format
# Retrieve all existing anchor groups
# Initialize lists to track anchor groups for creation, update, and no update needed
# Convert existing anchor groups to a dictionary for quick lookup by group name
# Iterate over the mapped requested anchor groups
# Check if the group exists in the existing groups
# Function to normalize and sort anchors for comparison
# Determine if an update is needed by comparing mobility anchors
# Add the requested group with the ID from the existing group for update
# If there's no difference, add to no_update_groups
# If the group does not exist, mark it for creation
# Calculate total groups processed and check against requested groups
# Return the categorized groups
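The "normalize and sort anchors for comparison" step mentioned above can be done by canonicalizing each mobility-anchor dict (e.g. a sorted-keys JSON dump) and comparing the sorted lists, so anchor order does not trigger a spurious update:

```python
import json

def anchors_equal(requested, existing):
    """Order-insensitive comparison of two lists of mobility-anchor
    dicts, using a canonical JSON dump of each anchor as the key."""
    def canon(anchors):
        return sorted(json.dumps(a, sort_keys=True) for a in anchors)
    return canon(requested) == canon(existing)
```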
# Initialize a list to track anchor groups that need deletion
# Iterate over the requested anchor groups
# Check if the anchor group exists in the existing anchor groups
# Add the requested anchor group with the ID from the existing group to the deletion list
# Return the list of groups that need to be deleted
# Perform the API call to create the anchor group and return the task ID
# Perform the API call to update the anchor group and return the task ID
# Perform the API call to delete the anchor group and return the task ID
# Check if the anchor_groups list is empty
# Define priority mapping from integer to string representation
# Iterate over each anchor group to map its parameters
# Map top-level parameters for the anchor group
# Initialize the list for mobility anchors in the mapped group
# Iterate over each mobility anchor to map its parameters
# Define the mapping of anchor parameters to API parameters
# Apply mappings for each parameter in the anchor
# Map device_priority to anchorPriority using the defined priority mapping
# Add the mapped anchor to the mobility anchors list
# Append the fully mapped group to the list of mapped anchor groups
# Initialize lists to track successful and failed group operations
# Iterate over each group parameter set for processing
# For create or update operations, use the entire group
# Retrieve all existing anchor groups to verify against
# Initialize lists to track successful and failed group additions
# Convert existing anchor groups to a set for quick lookup by group name
# Iterate over the requested anchor groups to verify their creation
# Check if the group now exists in the existing groups
# Iterate over the requested anchor groups to verify updates
# Compare mobility anchors, ignoring the order
# Initialize a list to track groups that failed deletion
# Iterate over the requested anchor groups to verify deletion
# Check if the group still exists in the existing groups
# Convert the list of statuses to a set to identify unique statuses
# Determine the final status and change flag based on the unique statuses
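Collapsing the per-operation statuses into one final result and changed flag might look like this sketch; the status strings ("ok", "changed", "failed") are assumptions, not the module's exact values:

```python
def final_status(statuses):
    """Reduce a list of per-operation statuses to (status, changed)."""
    unique = set(statuses)
    if "failed" in unique:
        return "failed", False
    # Any non-'ok' status means at least one operation made a change
    changed = bool(unique - {"ok"})
    return "success", changed
```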
# Define operations for each state
# Process operations based on the state
# Define a list of operations for adding and updating configurations
# Iterate over operations and process them
# Handle the case where no operations are required
# Process the final result
# Define a list of operations to delete
# Iterate over operations and process deletions
# Handle the case where no deletions are required
# Define a list of operations with their parameter keys, descriptive names, and corresponding functions
# Iterate over operations and perform verification
# Retrieve the parameters for the current operation
# Define a list of operations to verify
# All CLI details from Cisco DNA Center
# Cisco DNA Center details for the CLI Credential given in the playbook
# Playbook snmpV2cRead Credential details
# All snmpV2cRead details from the Cisco DNA Center
# Cisco DNA Center details for the snmpV2cRead Credential given in the playbook
# Playbook snmpV2cWrite Credential details
# All snmpV2cWrite details from the Cisco DNA Center
# Cisco DNA Center details for the snmpV2cWrite Credential given in the playbook
# Playbook httpsRead Credential details
# All httpsRead details from the Cisco DNA Center
# Cisco DNA Center details for the httpsRead Credential given in the playbook
# Playbook httpsWrite Credential details
# All httpsWrite details from the Cisco DNA Center
# Cisco DNA Center details for the httpsWrite Credential given in the playbook
# Playbook snmpV3 Credential details
# All snmpV3 details from the Cisco DNA Center
# Cisco DNA Center details for the snmpV3 Credential given in the playbook
# All CLI details from the Cisco DNA Center
# All httpRead details from the Cisco DNA Center
# All httpWrite details from the Cisco DNA Center
# Flag to track if any valid site type was found
# Check if both the requested and existing values are None or falsy
# Check if only one of the values is None or falsy
# Process floor sites if all sites were successfully created
# Can be either 'fabric_site' or 'fabric_zone'
# If the SDK returns no response, then the transit does not exist
# If the SDK returns no response, then the device does not exist
# If the SDK returns no response, then the virtual network does not exist
# Check for maximum timeout, default value is 1200 seconds
# Find the reserved pool with the given name in the list of reserved pools
# If the status is 'failed', then the site is not a fabric
# If the status is 'failed', then the zone is not a fabric
# If the response returned from the SDK is None, then the device is not provisioned to the site.
# Formatted payload for the SDK 'Add fabric devices', 'Update fabric devices'
# Call the SDK with incremental offset until we find the L2 Handoff with given
# interface name and internal vlan id
# Call the SDK with incremental offset until we find the SDA L3 Handoff
# Check if the transit name is valid or not
# If yes, return the transit ID. Else, return a failure message.
# Call the SDK with incremental offset till we find the IP L3 Handoff with given virtual network name
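The incremental-offset scan described in the comments above (page through the SDK's list endpoint until the matching handoff is found, or the pages run out) follows this general shape. `fetch_page` is a stand-in for the real SDK call, and the 1-based offset is an assumption based on typical Catalyst Center list APIs:

```python
def find_with_offset(fetch_page, matches, limit=500):
    """Page through a list endpoint with an incrementing offset until
    an item satisfying `matches` is found; return None if exhausted."""
    offset = 1
    while True:
        page = fetch_page(offset=offset, limit=limit)
        if not page:
            # No more results: the item does not exist
            return None
        for item in page:
            if matches(item):
                return item
        offset += limit
```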
# If the SDK returns an empty response, then the fabric device is not available
# Update the existence, details and the id of the fabric device
# Transit network name is mandatory
# Transit name, interface name and virtual network name are mandatory
# Fabric name is mandatory for this workflow
# device_config contains the list of devices on which the operations should be performed
# Fabric device IP is mandatory for this workflow
# The border settings are active only when the device has the border node role
# Split the input into two parts
# Validate the range for both parts
# If the user did not provide the mandatory information and it can be
# retrieved from the Cisco Catalyst Center, we will use it
# Device IP and the Fabric name is mandatory and cannot be fetched from the Cisco Catalyst Center
# WIRELESS_CONTROLLER_NODE is added by the backend and can't be passed to the API if not present in the backend.
# Get the border settings details from the Cisco Catalyst Center, if available
# Get the border settings from the Cisco Catalyst Center
# Default value of border priority is 10
# We can set the border priority from 0 to 9
# Default value of prepend autonomous system count is 0
# We can set the prepend autonomous system count from 1 to 10
# Internal vlan id can be from 2 to 4094
# Except 1002, 1003, 1004, 1005, 2046, 4094
# External vlan id can be from 2 to 4094
# If the SDK returns no response, then the transit does not exist with type 'SDA_LISP_PUB_SUB_TRANSIT'
# Affinity id prime value should be from 0 to 2147483647
# Affinity id decider value should be from 0 to 2147483647
# vlan id can be from 2 to 4094
# TCP mss adjustment should be from 500 to 1440
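The range checks spelled out in the comments above (VLAN IDs 2-4094 excluding the reserved values, TCP MSS 500-1440) reduce to simple validators:

```python
# Reserved VLAN IDs, per the constraints stated above
RESERVED_VLANS = {1002, 1003, 1004, 1005, 2046, 4094}

def validate_vlan_id(vlan_id):
    """VLAN ID must be in 2-4094 and not a reserved value."""
    return 2 <= vlan_id <= 4094 and vlan_id not in RESERVED_VLANS

def validate_tcp_mss(mss):
    """TCP MSS adjustment must be in 500-1440."""
    return 500 <= mss <= 1440
```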
# If the fabric device is available, then fetch the local and remote IP addresses
# If external_connectivity_ip_pool_name is given, the local and remote addresses will be ignored
# If external_connectivity_ip_pool_name is not given, then the local and remote addresses are mandatory
# Check if the reserved pool is available in the Cisco Catalyst Center or not
# WIRELESS_CONTROLLER_NODE may not always be present in the device roles.
# In get_device_params we remove it to avoid issues in the Update API.
# Check if both are empty lists
# PRIMARY SCOPE
# SECONDARY SCOPE
# ROLLING AP UPGRADE
# COMPILE FINAL OUTPUT
# L2 Handoffs do not support update,
# so find which L2 handoffs need to be created and which need to be ignored
# Check if the SDA L3 Handoff exists or not
# SDA L3 Handoff Exists
# Check if the SDA L3 Handoff requires an update or not
# SDA L3 Handoff requires an update or not
# Check for the IP L3 Handoff existence
# If both lists are empty, then no update is required
# L3 IP Handoffs to be created
# L3 IP Handoffs to be updated
# Retrieve desired device details to check if embedded wireless controller capabilities are present
# Validate device roles for wireless controller settings
# Retrieve desired wireless controller settings
# Check if managed AP locations require update
# Check if wireless controller settings require update
# Handle device reload if wireless controller is disabled
# Construct payload
# Check fabric device exists, if not add it
# Check if the update is required or not
# Device Details Exists
# To update wireless controller settings, the request should be updated with the fabric ID in case of new fabric creation.
# Check if the L2 Handoff exists or not
# Check which IP L3 Handoff exists and which does not
# If 'sda_l3_handoff_details' and 'l3_sda_handoff' and 'l2_handoff' are not provided
# We need to delete the device as well along with the settings
# Verifying whether the IP L3 Handoff is applied to the Cisco Catalyst Center or not
# Verifying whether the SDA L3 Handoff is applied to the Cisco Catalyst Center or not
# Verifying whether the L2 Handoff is applied to the Cisco Catalyst Center or not
# Verifying the absence of IP L3 Handoff
# Verifying the absence of SDA L3 Handoff
# Verifying the absence of L2 Handoff
# Verifying the absence of the device
# Retrieve the type (mac, hostname, or IP)
# Validate Controller Names
# Validate controller IP Addresses
# Validate Dual Radio Mode
# The following lines relate to the Access Point reboot / factory reset function.
# Device is not reachable
# Skip collection status check
# Check collection status
# Unacceptable collection status
# Check if both IP address and hostname are not provided
# Check if an IP address is provided but it is not valid
# Check if device exists and is reachable in Catalyst Center
# Check if either interface_name or connected_device_type is not provided
# List of valid connected device types
# Check if the connected device type is valid
# Log a success message indicating the connected device type is valid
# List of valid authentication template names
# Check if the authentication template name is valid
# Log a success message indicating the authentication template name is valid
# Retrieve specific parameters from the port_assignment dictionary
# Check if authentication_template_name is set and not equal to 'No Authentication'
# Check if any parameters provided in the port_assignment dictionary are not from the valid parameters
# Check if the authentication_template_name is not "Closed Authentication"
# Check if security_group_name is provided and authentication_template_name is not "No Authentication"
# Check if the required parameter is present in the port_assignment dictionary for an ACCESS_POINT
# Retrieve required parameters from the port_assignment dictionary
# Validate authentication_template_name if it is provided
# Call the validation method for trunking device parameters
# Call the validation method for user device parameters
# Call the validation method for access point parameters
# Check for missing parameters by comparing required_params with the keys in port_channel
# Check if the connected_device_type is provided and not in the list of valid types
# Valid protocols for each connected device type
# Check if the protocol is present and is not a boolean
# Check if protocol is valid for the connected device type
# Define protocol-specific interface limits
# Check if the protocol has a defined interface limit
# Check if the number of interfaces exceeds the protocol-specific limit
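The protocol-specific interface limit check could be sketched as below. The particular limits are assumptions based on common port-channel behavior (LACP bundles up to 16 members, PAgP and static "ON" up to 8), not values confirmed by this module:

```python
# Assumed per-protocol member limits (illustrative, not authoritative)
PROTOCOL_INTERFACE_LIMITS = {"ON": 8, "LACP": 16, "PAGP": 8}

def validate_member_count(protocol, interfaces):
    """Return True if the interface count is within the protocol's limit."""
    limit = PROTOCOL_INTERFACE_LIMITS.get(protocol)
    if limit is None:
        # No defined limit for this protocol
        return True
    return len(interfaces) <= limit
```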
# Define the required parameter
# Check if the required parameter is in the provided interface dictionary
# If the required parameter is present, log a success message
# Check if the required parameter is in the provided port channel dictionary
# Check if SSID details exist
# Check if each SSID has a name
# Validate parameters for add/update in port assignments
# Validate parameters for add/update in port channels
# Validate parameters for deletion in port assignments
# Validate parameters for deletion in port channels
# Return a dictionary with 'management_ip_address' if ip_address is provided
# Return a dictionary with 'hostname' if hostname is provided
# Return an empty dictionary if neither is provided
# Initialize the dictionary to map management IP to instance ID
# Get the device information from the response
# Check if the device is reachable, not a Unified AP, and in a managed state
# Log an error if no valid device is found
# Attempt to retrieve site information from Catalyst Center
# Call the SDA 'get_fabric_zones' API with the provided site ID
# Get Device IP Address and Id (networkDeviceId required)
# Get siteId of the Site the device is part of
# Try to get fabricId using get_fabric_sites
# Try to get fabricId using get_fabric_zones
# Create dictionary with required parameters
# Update offset and limit in the parameters
# Execute the API call to get port assignments
# Convert the requested_port_assignment_details to a dictionary for quick lookup
# Iterate over existing ports to find matches and differences
# Check for differences using the new function
# Add the requested port with the id and relevant metadata from the existing port
# Copy the ID from existing port
# If there's no difference, add to no_update_port_assignments
# Remove the requested port from the dictionary so we know it's processed
# Remaining items in requested_ports_dict are new ports to be created
# Log details of port assignments to be created, updated, and not updated
# Calculate total ports processed and check against requested port assignments
# Return the categorized port assignments
# Create a dictionary with the required parameters
# Execute the API call to get port channels
# if interface.get("connected_device_type") == "TRUNKING_DEVICE" and not interface.get("authentication_template_name"):
# Directly iterate over the keys of delete_param
# Handle the case where there are no existing port channels
# Define the comparison fields within the function
# Compare sets of interface names
# Partial match
# Handle protocol conditions
# Raise an error if protocol is being changed
# Handle connected device type conditions
# Handle description specific conditions
# Ensure all necessary fields are included in the updated_channel dictionary
# Add logging for created, updated, and no-update port channels
# Check total ports processed
# Return the categorized port channels
# Default protocol for TRUNK -> "ON"
# Default protocol for EXTENDED_NODE -> "PAGP"
# Retrieve the list of port channels to be created from the current configuration
# Construct the parameters for each port channel
# Add description if available
# Create the final payload for adding port channels
# Create the final payload for updating port channels
# Check if existing port channels is None
# If no port_channel_details are provided, prepare to delete all existing port channels
# delete_required = False
# delete_required = True
# "delete_required": delete_required,
# Stop after finding the first match
# Execute the API call to get vlans and ssids mapped to the vlan
# Initialize dictionaries for VLANs/SSIDs that need to be created, updated, or don't need updates.
# Retrieve existing VLANs and SSIDs mapped to VLANs from the fabric site.
# Create a copy of the existing details to be modified.
# Create a dictionary for quick lookup of existing VLANs and their SSIDs.
# Iterate through the provided SSID details.
# Check if the VLAN exists in the existing details.
# Check if the SSID details need to be updated.
# Update needed
# No update needed
# New SSID needs to be added
# If the VLAN does not exist, add it to the copy.
# Log the updated VLANs and SSIDs details.
# Prepare payload with existing VLANs and empty SSID details
# Retrieve the parameters for create/update vlans and ssids mapped to vlans
# Check if port assignments exist for the given parameters
# Determine if deletion is required based on the existence of port assignments
# Retrieve the parameters for adding port assignments
# Retrieve the parameters for update port assignments
# Set the final message
# Check if no operations were performed
# Execute the task and get the status
# Check if the operation status matches self.status
# Fetch existing port channels
# Log the fetched port channels
# Compare interface names and collect created port channel names
# Update the message
# Retrieve the parameters for updating port channels
# Retrieve the task status using the provided task ID and check the return status
# Initialize dictionary for VLANs/SSIDs that need to be deleted.
# If no wireless_ssids_details are provided, mark all for deletion
# Iterate through the provided SSID details to identify deletions.
# No specific SSID details provided, remove the entire VLAN
# SSID exists and needs to be deleted
# Remove SSID from the updated existing details
# Check if all add_interface_names are in current_interface_names
# Compare the update_port_assignments_params with the current port_assignments
# Log the result of verification
# Compare the update_port_channels_params with the current port_channels
# Retrieve expected create and update mappings
# Get the current state of VLANs and SSIDs
# Verify creations
# Verify updates
# Retrieve expected deletions
# Verify deletions
# VLAN still exists, so check SSIDs
# nonlocal ip_address
# Compare and categorize port assignments
# Compare and categorize port channels
# Generate and verify parameters for deleting port assignments
# Generate and verify parameters for deleting port channels
# Generate and verify parameters for deleting
# Handle the case where no specific port assignment details are provided
# Handle the case where no specific port channel details are provided
# Store the constructed current state in the instance attribute
# Set parameters for adding port assignments
# Set parameters for updating port assignments
# Set parameters for adding port channels
# Set parameters for updating port channels
# Set parameters for deleting port assignments
# Set parameters for deleting port channels
# DELETE ALL condition
# Process deletion of port assignments if required
# Process deletion of port channels if required
# Process deletion of VLANs and SSIDs mapped to VLANs
# Retrieve parameters for add and update operations from the desired state (self.want)
# Verifying ADD Port Assignments operation
# Verifying UPDATE Port Assignments operation
# Verifying ADD Port Channels operation
# Verifying UPDATE Port Channels operation
# Verifying ADD/UPDATE VLANs and SSIDs mapped to VLANs operation
# Verifying DELETE Port Assignments operation
# Verifying DELETE Port Channels operation
# Verifying DELETE VLANs and SSIDs mapped to VLANs operation
# Validate site params
# check if the given site exists and/or needs to be updated/created.
# Existing Site requires update
# Site does not need update
# Creating New Site
# Get the site id of the newly created site.
# Check if the site has children; if so, fetch them using the get membership API, sort them
# in reverse order, and start deleting from bottom to top
# Sorting the response in reverse order based on hierarchy levels
# Deleting each level in reverse order until the topmost parent site
# Delete the final parent site
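The bottom-up deletion ordering described above can be sketched as follows. This is a minimal illustration, not the module's actual helper; the function name and the assumption that site hierarchies are slash-separated paths (e.g. `Global/US/SanJose/Bldg1`) are hypothetical:

```python
def order_sites_for_deletion(site_hierarchies):
    """Sort site hierarchy paths so the deepest sites come first.

    Depth is taken as the number of '/' separators, so children are
    deleted before their parents (bottom-up deletion order).
    """
    return sorted(site_hierarchies, key=lambda name: name.count("/"), reverse=True)
```

Sorting in reverse by depth means no site is removed while a descendant still exists.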
# Code to validate dnac config for merged state
# Code to validate dnac config for delete state
# Invoke the API to check the status and log the output of each site on the console
# Formatted payload for the SDK 'Add multicast virtual networks', 'Update multicast virtual networks'
# Formatted payload for the SDK 'Update multicast'
# If the SDK returns an empty response, then the fabric multicast details are not available
# Update the existence, details, and ID of the fabric multicast configuration
# Build RP details
# Validate network device IPs
# Get device IDs
# Process IPv4 and IPv6 ASM ranges
# For 'merged' state, at least one of the ASM group config parameters must be provided.
# For 'deleted' state, these parameters are optional and depend on the use case.
# Process IPv4 ASM ranges
# Process IPv6 ASM ranges
# Ensure `item` is a dictionary
# Validate external rp device IPs
# Add external IPv4 and Process IPv4 ASM ranges
# Add external IPv6 and Process IPv6 ASM ranges
# Validate input
# If both external IPv4 and IPv6 RPs are present, default is allowed
# Check if rp_device_location is valid
# Process based on RP device location
# Add RP device location
# Append the processed RP to the list
# Log the final RP details
# If the response returned from the SDK is None, then the Layer 3 VN is not present in the Cisco Catalyst Center.
# Deleted Case: ip_pool_name is not mandatory when deleting entire multicast.
# Merged Case
# Prepare API payload and task details
# Validate input payload
# Prepare API payload
# Check for conflicting replication mode within the same fabric
# Process each multicast configuration
# Multicast configuration needs an update
# Replication mode needs an update
# Deduplicate entries
# Check if updates are needed
# Checking if IPv4 ASM Ranges needs update
# Checking if IPv6 ASM Ranges needs update
# RP should only be removed when both IPv4 and IPv6 ASM ranges are empty for Fabric RP
# Checking if network device ID needs update
# Remove matching entry
# Retrieve details
# Key is missing in 'have'
# Skip IP address comparison for FABRIC RP when want value is None
# Each want item must match at least one have item
# Ensure inputs are valid
# Retrieve fabric multicast details from the config
# Verifying whether the multicast configuration of the L3 VN on a fabric site is applied or not
# Verifying whether the replication mode of the fabric site is applied or not
# nothing else to validate
# Checking if IPv4 ASM Ranges need update
# Checking if IPv6 ASM Ranges need update
# Checking if network device IDs need update
# No update requested → mark for removal
# Retrieve fabric multicast configurations from the playbook
# Iterate over each fabric multicast configuration
# Verifying the absence of the SDA multicast configurations for a L3 VN under a fabric site
# Create/Update Fabric sites/zones in Cisco Catalyst Center
# Convert dictionary to a frozenset - immutable set
# Updating/customising the default parameters for authentication profile template
# With the given site id collect the fabric site/zone id
# Validate the playbook input parameter for updating the authentication profile
# Collect the authentication profile parameters for the update operation
# Delete Fabric sites/zones from the Cisco Catalyst Center
# Preserve the order of input while deduplicating
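Order-preserving deduplication, as described in the comment above, can be sketched in one line for hashable entries (function name hypothetical, not the module's actual helper):

```python
def dedup_preserve_order(items):
    """Remove duplicates while keeping first-seen order.

    dict preserves insertion order (Python 3.7+), so building a dict
    keyed by the items keeps the first occurrence of each value.
    """
    return list(dict.fromkeys(items))
```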
# Check whether fabric site is present in Cisco Catalyst Center.
# Check whether fabric zone is present in Cisco Catalyst Center.
# Delete the fabric zone from the Cisco Catalyst Center
# Invoke the API to check the status and log the output of each site/zone and authentication profile update on console.
# Facility and mnemonic mappings for severities 3, 4, 5, and 6
# Store 'state' inside the class
# Specification for validation
# Severity 3 facilities and mnemonics
# Severity 4 facilities and mnemonics
# Severity 5 facilities and mnemonics
# Severity 6 facilities and mnemonics
# for rule in issue_setting.get("rules", []):
# Loop through rules list
# Check if the field is missing
# Check if severity is not in severity_mapping
# Convert severity to string and check if it's a valid label
# Iterate through the list of issue settings
# Check if the issue setting has the specified name
# Validate the 'threshold_value' if it exists
# Input Validation to Ensure Correct Range for Each Input Field
# Loop through issueEnabled values to fetch both enabled and disabled issues
# Logging the API response for debugging purposes
# Combining both responses (enabled and disabled issues) into a single list
# total_response = total_response[0] + total_response[1]
# Only one response available
# No valid responses
# Handle the case where no system issues are found
# Safer to avoid KeyError
# Create a copy to avoid modifying the original
# Initialize containers for categorizing issues
# Process assurance issues
# If the issue exists, add it to the update list and skip further processing
# Check for duplicates
# If the issue is new, add it to the create list
# Move duplicates to update list
# Use the deduplicated list
# Check if prev_name exists, otherwise fallback to checking name
# Ensure item is a dictionary
# Extract the 'name' key
# Count occurrences and decide whether to keep the item
# Final filtering: Remove first occurrences of duplicates
# Deduplicate config before get_have
# Deduplicate AFTER validation
# Validate Cisco Catalyst Center (CCC) Version Support
# Invoke the API to check the status and log the output of each assurance issue on the console
# Check if there are no additional configurations and profile names match
# Default case for creation
# Skip the rest of the loop if template doesn't exist
# If no previous templates, we can directly attach
# Continue to the next template
# If template already exists in previous templates, skip it
# Skip the rest of the loop if template already exists in previous_templates
# Otherwise, attach the template
# Default to False
# Validate Destination Device Management IP Address
# Default is False
# Define the specification for the module's arguments
# Changing to upper case for comparison
# Choices
# Matches valid IPv4 addresses
# Matches valid hostnames
# Matches valid MAC addresses
# Matches valid serial numbers (8-12 alphanumeric characters)
# Process tags
# Process tag memberships
# Device rule_names
# Port rule_names
# Convert Mbps to kbps (UI expects Mbps, API expects kbps)
# Sort based on the `name` order and then by `value` within the same `name`
# Group leaf nodes by 'name'
# Helper function to limit items to two per group and branch
# Build the hierarchical structure for grouped nodes
# Create an OR operation for nodes with the same name
# Single node remains as is
# Combine all grouped conditions with AND
# Sorting it so that it's uniform and easier to compare with future updates.
# Checking if rule_descriptions exist because, in case of update, only one of scope/rules can be given.
# Sorting it so that it's easier to compare.
# Check if the response is empty
# Skips the operation when this specific error occurs
# Convert dictionary to a tuple of sorted items (temporary hashable representation)
# Append the original dict (not modified)
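The two comments above describe deduplicating a list of dicts by converting each dict to a sorted-items tuple as a temporary hashable key while appending the original, unmodified dict. A minimal sketch under those assumptions (name hypothetical; flat dicts with string keys assumed):

```python
def dedup_dicts(dict_list):
    """Deduplicate a list of flat dicts, keeping first occurrences."""
    seen = set()
    unique = []
    for entry in dict_list:
        # Temporary hashable representation; the original dict is not modified.
        key = tuple(sorted(entry.items()))
        try:
            if key in seen:
                continue
            seen.add(key)
        except TypeError:
            # Unhashable values (e.g. nested lists): skip deduplication for this entry.
            pass
        unique.append(entry)
    return unique
```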
# Tag not updated/deleted for interface
# Tag not updated/deleted for device
# If no port names, add only device details and continue
# Process port details if device exists
# Calculate total batches
# Sorted existing List
# Merge while preserving order
# Check if new_dict is already in existing_list
# Delete elements in new_list from existing_list while preserving order
# Check if there's a difference
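The merge/delete-while-preserving-order steps above can be sketched as two small helpers. These are illustrative only (names hypothetical); dict equality makes the `in` checks work without hashing:

```python
def merge_preserving_order(existing_list, new_list):
    """Append entries of new_list not already in existing_list, keeping order."""
    merged = list(existing_list)
    for new_dict in new_list:
        # Check if new_dict is already present before appending.
        if new_dict not in merged:
            merged.append(new_dict)
    return merged

def remove_preserving_order(existing_list, new_list):
    """Remove entries of new_list from existing_list, keeping order."""
    return [d for d in existing_list if d not in new_list]
```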
# Scope Description in Cisco Catalyst Center can't be None, else port_rule won't exist in the first place.
# In this case user wants to delete all the scope members, so returning empty updated_scope_description
# Check if the current dictionary has 'items' (indicating nested conditions)
# Recursively process each item
# If no 'items', it's a leaf node
# Nothing to update case
# Updating it with the new rules
# Nothing to delete case
# Both are Absent
# One is Absent, as nothing to merge, So No update required
# One is Absent, Existing No port rules so nothing to delete
# One is Absent, No new port rules in playbook, so nothing to delete
# Handle None cases upfront
# One is Absent
# Any one is absent, device_rules is none so, nothing to delete
# Any one is absent, device_rules_in_ccc is None, so nothing to delete
# These are extracted from CCC so they are already formatted.
# Fetch Network Devices and Interfaces Separately
# Combine both lists
# Process Network Devices
# Process Interfaces
# Remove dynamic rules
# Remove tag memberships
# Delete tag
# Update the tag name in the config
# Simple Tag Deletion Case
# Create a new path trace if no flow analysis ID exists
# Retrieve path trace details if flow analysis id exists
# If path trace creation failed, log the error
# for config in ccc_path_trace.validated_config:
# GNU General Public License v3.0+ (see LICENSE or
# Get common arguments specification
# Add arguments specific for this module
# NOTE: Does not have a get by name method, using get all
# NOTE: Does not have a get by id method or it is in another action
# Method 1. Params present in request (Ansible) obj are the same as the current (ISE) params
# If any does not have eq params, it requires update
# Checks the supplied parameters against the argument spec for this module
# Get common arguments specification
# NOTE: Does not have a get all method or it is in another action
# NOTE: Does not have a get by name and get all
# NOTE: Does not have a get by name method or it is in another action
# Method 1. Params present in request (Ansible) obj are the same as the current (DNAC) params
# NOTE: Does not have get all
# new_object_params['type'] = self.new_object.get('type')
# response = obj.update()
# NOTE: Does not have delete method. What do we do?
# NOTE: Does not have update method. What do we do?
# Transform payload if it's a list of dictionaries
# Check if we need to merge (format from Ansible playbook with each param as separate item)
# Filter duplicate site ids from site response
# Ensure the response is valid
# First, check if the key exists in the dictionary at the top level
# Then, recursively check values (nested dictionaries/lists)
# Ensure the item is a dictionary
# Dictionary to store multiple versions for easy maintenance and scalability
# To add a new version, simply update the 'dnac_versions' dictionary with the new version string as the key
# and the corresponding version number as the value.
# Add new versions here, e.g., "2.4.0.0": 2400
# Dynamically create variables based on dictionary keys
# If dnac_log is False, return an empty logger
# Split version strings into parts and convert to integers
# Compare each part of the version numbers
# If versions are of unequal lengths, check remaining parts
# Versions are equal
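The version-comparison steps above (split into integer parts, compare part by part, then handle unequal lengths) can be sketched as follows; the function name is hypothetical:

```python
def compare_versions(v1, v2):
    """Return -1, 0, or 1 as v1 is lower than, equal to, or higher than v2."""
    # Split version strings into parts and convert to integers
    parts1 = [int(p) for p in v1.split(".")]
    parts2 = [int(p) for p in v2.split(".")]
    # Compare each part of the version numbers
    for a, b in zip(parts1, parts2):
        if a != b:
            return -1 if a < b else 1
    # If versions are of unequal lengths, check the remaining parts
    if len(parts1) != len(parts2):
        longer = parts1 if len(parts1) > len(parts2) else parts2
        extra = longer[min(len(parts1), len(parts2)):]
        if any(extra):
            return 1 if longer is parts1 else -1
    # Versions are equal
    return 0
```

Trailing zeros are treated as insignificant, so `"1.0"` and `"1.0.0"` compare equal.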
# Implement logic to merge the resource configuration
# Implement logic to delete the resource
# Implement logic to replace the resource
# Implement logic to overwrite the resource
# Implement logic to gather data about the resource
# Implement logic to render a configuration template
# Implement logic to parse a configuration file
# Implement logic to verify the merged resource configuration
# Implement logic to verify the deleted resource
# Implement logic to verify the replaced resource
# Implement logic to verify the overwritten resource
# Implement logic to verify the gathered data about the resource
# Implement logic to verify the rendered configuration template
# Implement logic to verify the parsed configuration file
# formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(module)s: %(funcName)s: %(lineno)d --- %(message)s', datefmt='%m-%d-%Y %H:%M:%S')
# Convert the path to absolute if it's relative
# Validate if the directory exists
# of.write("---- %s ---- %s@%s ---- %s \n" % (d, info.lineno, info.function, msg))
# message = "Module: " + self.__class__.__name__ + ", " + message
# self.log("status: {0}, msg:{1}".format(self.status, self.msg), frameIncrement=1)
# Define the regex pattern for a valid email address
# Use re.match to see if the email matches the pattern
# If site_id is not provided, retrieve it based on the site_name
# Determine API based on dnac_version
# Retrieve device IDs from the specified site
# Iterate through each device ID to retrieve its details
# Execute GET API call to retrieve device details
# Append the retrieved device details to the list
# Retrieve the list of device details from the specified site
# Iterate through each device's details
# Exclude Unified AP devices
# Now we check the status of API Events for configuring destination and notifications
# Check if the address is a valid IPv4 or IPv6 address
# Define the regex for a valid hostname
# Start of the string
# Domain name (e.g., example.com)
# Localhost
# Custom IPv4-like format (e.g., 2.2.3.31.3.4.4)
# IPv6 address (e.g., 2f8:192:3::40:41:41:42)
# End of the string
# Check if the address is a valid hostname
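The address checks above (valid IPv4/IPv6, then fall back to a hostname regex) can be sketched with the standard-library `ipaddress` module. This is a simplified illustration: the hostname pattern below is an assumption and does not cover the custom IPv4-like format (e.g. `2.2.3.31.3.4.4`) mentioned above:

```python
import ipaddress
import re

# Matches 'localhost' and dotted domain names (simplified pattern).
HOSTNAME_RE = re.compile(
    r"^(localhost"
    r"|([A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?\.)+[A-Za-z]{2,}"
    r")$"
)

def is_valid_address(address):
    """True if address is a valid IPv4/IPv6 address or hostname."""
    # Check if the address is a valid IPv4 or IPv6 address
    try:
        ipaddress.ip_address(address)
        return True
    except ValueError:
        pass
    # Otherwise check if the address is a valid hostname
    return bool(HOSTNAME_RE.match(address))
```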
# Need to handle exception
# Update the result attributes with the provided values
# Log the message at the specified log level
# Capture the full traceback
# Log the traceback
# If the elapsed time exceeds the timeout period
# Log the response received
# Check if the response is None, an empty string, or an empty dictionary
# Check if the 'response' key is present and empty
# Update result and extract task ID
# Check if response is returned
# Check if the task has completed (either success or failure)
# Retrieve task details by task ID
# Check if there is an error in the task response
# Extract data, progress, and end time from the response
# Validate task data or progress if validation keys are provided
# Used only for comparison
# Keep original dict
# Skip unhashable dictionaries
# Log the lengths of the input lists
# Convert dicts to JSON strings with sorted keys for consistent comparison
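Serializing each dict to a JSON string with sorted keys, as the comment above describes, gives a canonical form that ignores key order; sorting the serialized lists also ignores list order. A minimal sketch (function name hypothetical):

```python
import json

def lists_of_dicts_equal(list1, list2):
    """Compare two lists of dicts, ignoring key order and list order."""
    # Sorted keys make equivalent dicts serialize to identical strings.
    canon1 = sorted(json.dumps(d, sort_keys=True) for d in list1)
    canon2 = sorted(json.dumps(d, sort_keys=True) for d in list2)
    return canon1 == canon2
```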
# self.log("Invalid IPv6 address: {}".format(ipv6))
# Return as-is if it's not a valid IPv6 address
# Possible IPv6 addresses
# Handle the case when spec becomes empty but param list is still there
# Copyright: (c) 2018, Dag Wieers (@dagwieers) <dag@wieers.com>
# Copyright: (c) 2020, Lionel Hercot (@lhercot) <lhercot@cisco.com>
# Copyright: (c) 2020, Cindy Zhao (@cizhao) <cizhao@cisco.com>
# Copyright: (c) 2023, Akini Ross (@akinross) <akinross@cisco.com>
# ipaddress.ip_address(list_of_hosts[0])
# Perform login request
# Handle MSO response
# Perform some very basic path input validation.
# Handle possible MSO error information
# Expose RAW output for troubleshooting
# TODO: Replace response by -
# In Py2 there is nothing that needs to be done; Py2 does this for us
# Copyright: (c) 2025, Akini Ross (@akinross) <akinross@cisco.com>
# Query a specific object
# update mso.proposed with interface details that are not included in the interface payload and node details
# When the state is present
# When the state is absent
# When the state is present/absent with check mode
# Copyright: (c) 2021, Anvitha Jain (@anvitha-jain) <anvjain@cisco.com>
# This parameter is not required for querying all objects
# Get schema objects
# Get template
# Get external EPGs
# clean anpRef when anpRef is null
# clean contractRef to fix api issue
# Copyright: (c) 2025, Sabari Jaganathan (@sajagana) <sajagana@cisco.com>
# Query all objects
# When the filter EPG is specified in the configuration, the filter L3Out will be removed.
# L3Out and EPG cannot be configured simultaneously.
# When the filter L3Out is specified in the configuration, the filter EPG will be removed.
# Validate based on access_path_type
# Adding the object reference name to use the update_config_with_template_and_references function
# Copyright: (c) 2020, Jorge Gomez (@jgomezve) <jgomezve@cisco.com> (based on mso_dhcp_relay_policy module)
# Query for existing object(s)
# If we found an existing object, continue with it
# Get schema id
# Copyright: (c) 2019, Dag Wieers (@dagwieers) <dag@wieers.com>
# Copyright: (c) 2025, Gaspard Micol (@gmicol) <gmicol@cisco.com>
# Get site
# Get site_idx
# Schema-access uses indexes
# Path-based access uses site_id-template
# Get ANP
# Get anp index at template level
# If anp not at site level but exists at template level
# Get anp index at site level
# Get EPG
# If anp exists at site level
# If anp already at site level AND if epg not at site level (or) anp not at site level?
# If EPG not at template level - Fail
# EPG at template level but not at site level. Create payload at site level for EPG
# If anp is not in the payload, anp already exists at site level; the new payload will only have the new EPG payload
# If anp in payload, anp exists at site level. Update payload with EPG payload
# Get index of EPG at site level
# Workaround due to inconsistency in attributes REQUEST/RESPONSE API
# FIX for MSO Error 400: Bad Request: (0)(0)(0)(0)/deploymentImmediacy error.path.missing
# Get L3out
# Copyright: (c) 2024, Akini Ross (@akinross) <akinross@cisco.com>
# Copyright: (c) 2020, Shreyas Srish (@shrsr) <ssrish@cisco.com>
# Get tenant_id and site_id
# To ignore the object not found issue for the lookup methods
# Set tenant and port paths
# Get Leaf
# FIXME: Changes based on index are DANGEROUS
# Get schema
# Get VRF
# Get Contract
# Get Subnet
# Copyright: (c) 2024, Sabari Jaganathan (@sajagana) <sajagana@cisco.com>
# When the state is query
# Returns No Contract - 204
# Get cloud type
# Get selectors
# If anp at site level and epg is at site level
# Get expressions
# if payload is empty, anp and epg already exist at site level
# if payload exist
# if anp already exists at site level
# if site-template does not exist, create it
# If vrf not at site level but exists at template level
# Update vrf index at site level
# Get Region
# Get CIDR
# Get Selector
# Cache used to reduce the number of schema queries done by MSOSchema function.
# Check if schema is already in cache, if not create a new MSOSchema object and add it to the cache.
# Copyright: (c) 2021, 2023, Anvitha Jain (@anvitha-jain) <anvjain@cisco.com>
# Get template External EPG
# Get Site External EPG
# Get external EPGs type from template level and verify template_external_epg type.
# Get filters
# There was no filter to begin with
# There was no entry to begin with
# There is only one entry, remove filter
# Filter does not exist, so we have to create it
# Entry does not exist, so we have to add it
# Entry exists, we have to update it
# Copyright: (c) 2023, Anvitha Jain (@anvitha-jain) <anvjain@cisco.com>
# Add vrfRef from the details to ensure idempotency succeeds
# Get External EPG
# Add the Priority 1 fixed value 128 to the PTP settings during initialization
# Copyright: (c) 2025, Samita Bhattacharjee (@samiib) <samitab@cisco.com>
# The object dictionary is used as a cache store for schema & template data.
# This is done to limit the amount of API calls when UUID is not specified for member scope references.
# Convert labels
# Query for mso.existing object(s)
# Checking if site is managed by MSO
# Copyright: (c) 2022, Shreyas Srish (@shrsr) <ssrish@cisco.com>
# Get Service Graph
# Application Load Balancer - Consumer Interface, Consumer Connector Type,
# Provider Interface, Provider Connector Type - not supported
# Network Load Balancer - Consumer Interface, Provider Interface - not supported
# Third-Party Load Balancer - Consumer Connector Type,
# Provider Connector Type - not supported
# (FW) Third-Party Firewall - Consumer Interface, Consumer Connector Type,
# Provider Interface, Provider Connector Type - supported
# The site service graph reference will be added automatically when the site is associated with the template
# So the add(create) part will not be used for the NDO v4.2
# Copyright: (c) 2024, Anvitha Jain (@anvjain) <anvjain@cisco.com>
# updating macsec_keys modifies the existing list with the new list
# remove macsec_keys if the list is empty
# Copyright: (c) 2023, Anvitha Jain (@anvjain) <anvjain@cisco.com>
# Convert sites and users
# Ensure displayName is not undefined
# BFD MultiHop Settings
# BFD Settings
# OSPF Interface Settings
# Query a specific Annotation
# Query all
# Copyright: (c) 2024, Gaspard Micol (@gmicol) <gmicol@cisco.com>
# Copyright: (c) 2024, Shreyas Srish (@shrsr) <ssrish@cisco.com>
# Get Region object
# Get hub network
# peer_prefix is None or empty dict => True
# BGP Controls
# Peer Controls
# Address Type Controls
# Private AS Controls
# Copyright: (c) 2022, Akini Ross (@akinross) <akinross@cisco.com>
# Deprecated input: contract_filter_type is deduced from filter_type
# Deprecated input: contract_filter_type is deduced from filter_type.
# contract_filter_type = module.params.get('contract_filter_type')
# Initialize variables
# Set path defaults, when object (contract or filter) is found append /{name} to base paths
# Get schema information.
# Get template by unique identifier "name".
# Get contract by unique identifier "name".
# Get filter by unique identifier "filterRef".
# If filter name is not provided, provide overview of all filter objects for the filter type.
# Contracts need at least one filter left; remove the contract if removal would leave 0 filters remaining.
# Initialize "present" state filter variables
# Avoid validation error: "Bad Request: (0)(1)(0) 'directives' is undefined on object
# If the contract exists, the operation should be set to replace; otherwise it is add, to create a new contract.
# Conditional statement 'description == ""' is needed to allow setting the description back to empty string.
# Conditional statement is needed to determine if "prio" exist in contract object.
# An object can be created in 3.3 higher version without prio via the API.
# In the GUI a default is set to "unspecified" and thus prio is always configured via GUI.
# We can't set a default of "unspecified" because prior to version 3.3 qos_level is not supported,
# If the filter exists, the operation should be set to replace; otherwise it is add, to create a new filter.
# If contract_scope is not provided default to context to match GUI behaviour on create new contract.
# Update existing with filter (mso.sent) and contract information.
# Conditional statement to check qos_level is defined or is present in the contract object.
# qos_level is not supported prior to 3.3 thus this check in place, GUI uses default of "unspecified" from 3.3.
# When default of "unspecified" is set, conditional statement can be simplified since "prio" always present.
# Query for mso.existing object
# Copyright: (c) 2024, Samita Bhattacharjee (@samiib) <samita@cisco.com>
# Copyright: (c) 2021, Shreyas Srish (@shrsr) <ssrish@cisco.com>
# Get BD
# Check if DHCP policy already exists
# Check if DHCP option policy already exists
# Get DHCP policies
# Map choices
# Get BDs
# Static settings 'version' and 'mcastRtMapDestVersion' set to 0 in the UI seem not to be required, so they are excluded
# If in future this is required then uncomment the below line
# route_map_filter = {"version": 0, "mcastRtMapDestVersion": 0}
# When updating an existing BD, replace operation for each attribute to avoid existing configuration being replaced
# This case is specifically important for subnet and dhcp policy which can be configured as a child module
# Only tenant type templates contain the correct route map policies
# Retrieves the list of templates that contain the same tenant, because only route map policies for the tenant assigned to the schema are options to be chosen
# NDO restricts route map policies in the same tenant to have the same name thus we can loop through the route map policies to find the correct uuid
# Error handling for when N amount of route map policies are not found
# Copyright: (c) 2024, Akini Ross (@akinross) <akinross@cisco.com>
# Copyright: (c) 2025, Shreyas Srish (@shrsr) <ssrish@cisco.com>
# Set path defaults for create logic, if object (contract or filter) is found replace the "-" for specific value
# Get contract
# Get service graph if it exists in contract
# Validation to check if amount of service graph nodes provided is matching the service graph template.
# The API allows providing more or fewer service graph nodes, but the GUI does not.
# Consumer and provider share connector details (so provider/consumer could have separate details in future)
# If the service graph exists, the operation should be set to "replace"; otherwise it is "add", to create a new one
# Copyright: (c) 2020, Anvitha Jain (@anvitha-jain) <anvjain@cisco.com>
# Get schema_id
# removes API error for extra space
# Destination Port supports UUIDs, but we have no module to get them
# True if all elements are None, False otherwise.
# Clear the node routing policy when node_routing_policy is empty string
# Get site id
# Parent object present check
# Query all listeners under a contract service node
# The below condition was never false if the service graph was created properly, but condition is required to avoid error
# Query all listeners under a contract does not require service_node_index, so the below condition is required
# Query all listeners under a contract
# Query a listener under a contract service node
# Parent object creation logic begins
# Parent object creation logic ends
# Listener object creation logic begins
# Rules object creation logic
# Update an existing listener
# Create a new listener
# Create a new listener with parent object
# Combine physical and L3 domains into a single dictionary
# FIXME: Missing functionality
# uSegAttrs=[],
# Clean contractRef to fix api issue
# Get VRF at site level
# Optional, only used for YAML validation
# Validate content/payload
# Validate YAML/JSON string
# Perform request
# Report success
# Copyright: (c) 2023, Lionel Hercot (@lhercot) <lhercot@cisco.com>
# Copyright: (c) 2023, Sabari Jaganathan (@sajagana) <sajagana@cisco.com>
# TODO: What possible options do we have ?
# active=True,
# remote=True,
# NOTE: Since MSO always returns '******' as password, we need to assume a change
# Get Service Graphs
# Get service nodes
# Copyright: (c) 2019, Nirav Katarmal (@nkatarmal-crest) <nirav.katarmal@crestdatasys.com>
# Copyright: (c) 2023, Mabille Florent (@fmabille09) <florent.mabille@smals.be>
# Update anp index at template level
# Update anp index at site level
# Update index of EPG at site level
# Get Domains
# Keeping for backwards compatibility
# rename of deploymentImmediacy
# If payload is empty, anp and EPG already exist at site level
# If payload exists
# If anp already exists at site level...(AND payload != epg as well?)
# Changing in existing object to set correctly in proposed/current output
# When more than one site can be provided, the sites can be added and/or removed with PATCH operations
# This is done to avoid config loss within sites during a single replace operation on the sites container
# Set template specific payload in functions to increase readability due to the amount of different templates with small differences
# Sort in reverse to remove from the end of the list first, this way the indexes are not shifting during removal operations
# Copyright: (c) 2020, Jorge Gomez Velasquez <jgomezve@cisco.com>
# Verifies that ANP and EPG exist at template level
# Verifies if ANP exists at site level
# If epg not at site level (or) anp not at site level payload
# If anp and epg exists at site level
# Note: path has to be different in each leaf for every static port in the list.
# Select port path for fex if fex param is used
# validate and append staticports to staticport_list if path variable is different
# If anp already exists at site level
# Remove existing site
# Add new site
# Check if source and destination template are named differently if in same schema
# Get source schema id and destination schema id
# Get destination schema details before change
# Get destination tenant id
# Get source schema details
# Only for NDO less than or equal to 3.7
# Schema dict is used as a cache store for schema id lookups
# This is done to limit the amount of schema id lookups when schema is not specified for multiple contracts
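The schema-id cache described above can be sketched like this (assumed names, not the module's actual implementation): the lookup function is only invoked on a cache miss, so many contracts referencing the same schema cost a single lookup.

```python
# Minimal sketch of caching schema id lookups (hypothetical helper names).
schema_cache = {}

def get_schema_id(name, lookup):
    """Return the schema id for a name, consulting the cache first."""
    if name not in schema_cache:
        # lookup() is only called on a cache miss
        schema_cache[name] = lookup(name)
    return schema_cache[name]
```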
# The list index should not shift when removing contracts from the list
# By sorting the indexes found in reverse order, we ensure that the highest index is removed first by the NDO backend
# This logic is to avoid removing the wrong contract
# Only add the operation list if the contract is not already present
# This is to avoid adding the same contract multiple times
# Replace operation is not required because there are no attributes that can be changed except the contract itself
# Schema exists
# There was no schema to begin with
# There is only one tenant, remove schema
# Remove existing template
# Schema does not exist, so we have to create it
# Template exists, so we have to update it
# Template does not exist, so we have to add it
# Get l3out
# sec
# initially ignores None and "" (empty string)
# enabled = true
# disabled = false
# "" = remove the  "BGP Best Path Control" settings from the object
# Copyright: (c) 2024, Samita Bhattacharjee (@samiib) <samitab@cisco.com>
# Enforcing that a user must specify a name or uuid when
# adding, updating or removing even though there is only one policy per template.
# Query all policies
# Query a specific policy
# Get service node id
# The UI uses 'blocks' but we chose 'pods' since it is more descriptive
# TODO: leverage update_config_with_template_and_references when podPolicyGroup type is supported by objects API endpoint
# "mso/api/v1/templates/objects?type=podPolicyGroup&uuid=0809339e-65f7-4000-a23a-a39241865db8"
# reference_dict = {
# if pod_profile and pod_profile.get("policy"):
# Workaround template looping until podPolicyGroup type is supported by objects API endpoint
# Check if a policy is set in the pod profile, this is the UUID of the pod settings policy
# Check if the pod settings UUID is already in the cache
# Retrieve a summary of all fabric_policy templates from NDO and loop through all templates to find the pod settings policy
# Only retrieve the template details if the fabric_policy template has a pod settings policy defined
# A list that tracks all levels to be removed during the PATCH operation when qos_levels is being updated
# "unspecified" is removed as it is not a valid value for qos_levels.level
# UTC Timezone implementation as datetime.timezone is not supported in Python 2.7
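The Python 2.7 workaround mentioned above is typically a fixed-offset `tzinfo` subclass, since `datetime.timezone` only exists from Python 3.2 onward. A sketch of the usual pattern (class name assumed):

```python
from datetime import timedelta, tzinfo

# Fixed-offset UTC tzinfo class: the common substitute for
# datetime.timezone.utc on Python 2.7 (illustrative, assumed name).
class UTC(tzinfo):
    def utcoffset(self, dt):
        return timedelta(0)

    def tzname(self, dt):
        return "UTC"

    def dst(self, dt):
        return timedelta(0)
```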
# check on name because refs are handled differently between versions
# certain unique identifiers are present in the source on NDO versions above 4.0 which need to be deleted from source_data prior to POST
# Check if source and destination schema are named differently
# Copyright: (c) 2024, Noppanut Ploywong (@noppanut15) <noppanut.connect@gmail.com>
# The state absent requires at least the pod, leaf and path to be provided
# The state present requires all the key_list to be provided
# Create missing site anp and site epg if not present
# Logic is only needed for NDO versions below 4.x, when the validate false flag was still available
# This did not trigger the auto creation of site anp and site epg during template anp and epg creation or attaching a site to a template
# Coverage misses these two conditionals when testing on 4.x and above
# The list index should not shift when removing static ports from the list
# This logic is to avoid removing the wrong static ports
# The vlan is not required when the state is absent
# Get the service graph reference schema id if the service graph schema does not match the schema
# When site type is on-premise, both node_relationship and tenant are required
# 'external_epg': {'id': 'externalEpg', 'connector_type': 'route-peering'}
# templateType in payload
# templateType container in payload
# tenant required
# 1 = 1 site, 2 = multiple sites
# configuration is set in template container in payload
# Checking if the template with id exists to avoid error: MSO Error 400: Template ID 665da24b95400f375928f195 invalid
# Remove unwanted keys from existing object for better output and diff compares
# Sometimes the attribute returned by api might be None
# If search_list is None, iterating over it will throw an error
# Thus we need to return a match of None and no existing values
# Set template ID and template name if available
# Update config data with reference names if reference_collections is provided
# Both objects are the same object
# Both objects are identical
# Both objects have a different type
# Ignore empty values
# Item from subset is missing from superset
# Item has different types in subset and superset
# Compare if item values are subset
# NOTE: Fails for lists of dicts
# Fall back to exact comparison for lists of dicts
# Only connectorType bd with value "general" is supported for now thus fixed in code
# msec
# Copied from ansible's module uri.py (url): https://github.com/ansible/ansible/blob/cdf62edc65f564fff6b7e575e084026fa7faa409/lib/ansible/modules/uri.py
# create a tempfile with some test content
# normal output
# mso_rest output
# info output
# debug output
# on-premise or cloud
# aws or azure or gcp
# Ensure protocol is set
# Set base_uri
# Perform password-based authentication, log on using password
# first check if we are redirected to a file download
# In place of Content-Disposition, NDO get_remote_file_io_stream returns content-disposition.
# if we are redirected, update the url with the location header and update dest with the new url filename
# there was no content, but the error read() may have been stored in the info as 'body'
# Get change status from HTTP headers
# 200: OK, 201: Created, 202: Accepted, 204: No Content
# 400: Bad Request, 401: Unauthorized, 403: Forbidden,
# 405: Method Not Allowed, 406: Not Acceptable
# 500: Internal Server Error, 501: Not Implemented
# If we PATCH with empty operations, return
# if method in ['PATCH', 'PUT']:
# 200: OK, 201: Created, 202: Accepted
# 204: No Content
# 404: Not Found
# Connection error
# When the last interface from a node is deleted the node configuration must also be removed
# When the response matches an error string from the ignored errors, we need to remove the node configuration
# Ensure tenant has at least admin user
# To handle the issue in ND 4.0 related to querying local and remote users, new API endpoints have been introduced.
# These endpoints should be removed once the official ND API endpoints become operational.
# Handling a list of user objects is a temporary workaround that should be removed
# once the ND official local and remote user API endpoints are operational.
# Support the contract argspec
# Removes entry from payload
# Remove References
# Remove unwanted keys
# Clean up self.sent
# Always retain 'id'
# Remove unspecified values
# Remove identical values
# Add everything else
# Update self.proposed
# TODO: investigate combining this method with the sanitize method above
# FIXME: Modified header only works for PATCH
# Return the gory details when we need it
# Conditional statement 'description == ""' is needed to set empty string.
# Same reason as described in mso_schema_template_contract_filter.py.
# Workaround function due to inconsistency in attributes REQUEST/RESPONSE API
# Fix for MSO Error 400: Bad Request: (0)(0)(0)(0)/deploymentImmediacy error.path.missing
# Workaround function to remove null/None fields returned by API RESPONSE
# Temporarily introduced method to handle nd specific query without introducing a dependency on the nd collection in code
# Copied method from the nd collection: https://github.com/CiscoDevNet/ansible-nd/blob/master/plugins/module_utils/nd.py#L221
# TODO: Refactor the code for bundled nd collection
# To ensure consistency between the API response data and the input data by converting the node to a string
# Update the existing configuration
# Clear the existing configuration
# (c) 2017 Cisco Systems Inc.
# Cisco Intersight doc fragment
# Remove all whitespace
# Pattern to match: single VLAN (123) or VLAN range (123-456)
# Full pattern: comma-separated list of single VLANs or ranges
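The pattern described above can be sketched as follows (a hypothetical regex, assumed to mirror the comments: a comma-separated list where each element is a single VLAN id or a VLAN range, after whitespace has been stripped):

```python
import re

# Each element is a single VLAN (123) or a range (123-456); the full
# value is one or more elements separated by commas.
SINGLE_OR_RANGE = r"\d+(?:-\d+)?"
ALLOWED_VLANS = re.compile(r"^%s(?:,%s)*$" % (SINGLE_OR_RANGE, SINGLE_OR_RANGE))

def is_valid_allowed_vlans(value):
    """Validate a cleaned (whitespace-free) allowed_vlans string."""
    return bool(ALLOWED_VLANS.match(value))
```

Note this only checks the shape; range ordering and numeric bounds would still be validated element by element, as the following comment suggests.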
# Parse and validate individual VLAN IDs and ranges
# Validate single vlan
# When qinq_enabled is true, qinq_vlan is required
# When qinq_enabled is false, allowed_vlans is required
# Validate allowed_vlans format if provided and clean it
# Store the cleaned version (whitespace removed) back to module.params
# Verify that placing default works with required_if
# Resource path used to configure policy
# Define API body used in compares or create
# Build VlanSettings based on qinq_enabled
# Regular VLAN mode
# Create a custom default
# Code below should be common across all policy modules
# Resource path used to fetch info
# Virtual drive policy specification
# Validate UseJbodForVdCreation vs DefaultDriveMode compatibility
# UseJbodForVdCreation must be disabled if DefaultDriveMode is JBOD
# UnusedDisksState must be NoChange if DefaultDriveMode is JBOD or RAID0
# Validate M.2 virtual drive configuration
# Validate name format
# Validate drive groups configuration
# Validate drive group name
# Validate dedicated hot spares for RAID0
# Validate span groups requirements based on RAID level
# Validate span group count based on RAID level
# Validate virtual drives
# Validate virtual drive name
# Validate size requirement
# Set basic storage configuration
# Set secure JBODs if specified
# Set M.2 virtual drive configuration
# Set RAID0 drive configuration
# Set virtual drive policy using the common function
# Set NVMe slot configurations
# Save the storage policy response
# Process drive groups if provided
# Build drive group API body
# Manual drive group
# Build ManualDriveGroup
# Add dedicated hot spares if specified (not for RAID0)
# Build span groups (required field)
# Transform lowercase 'slots' to uppercase 'Slots' for API
# Build virtual drives
# Add size if specified and not expanding to available
# Create the drive group
# Save the drive group response
# Combine storage policy and drive groups in the main response
# Build filter for VLANs associated with this policy
# Add VLAN name filter if provided (exact match only)
# Reset api_response before the API call to avoid previous responses
# Get VLANs for this policy
# Capture the response immediately before it gets overwritten
# Ensure we return a list
# Resource path used to fetch policy info
# Get query parameters for policies
# Get VLAN policies
# Build structured response
# Ensure policies is always a list for iteration, even if a single dict is returned
# Create structured response for this policy
# Create final response structure
# Set final response based on number of policies found
# Single policy - return as dict
# Multiple policies - return as list
# No policies found - return empty structure
# Use intersight.result and update api_response directly
# Validate IQN allocation type requirements
# Validate MAC address configuration
# Validate required FIAttached fields
# Validate advanced placement configuration
# Skip advanced placement validation if auto placement is enabled
# Validate slot ID configuration
# Validate PCI link configuration
# Validate PCI link when using Custom mode
# Validate required Standalone fields
# Start with common policies
# Platform-specific policies
# Add fabric network policies
# MAC pool for FIAttached when using pool type
# iSCSI boot policy
# standalone server
# Base vNIC configuration for Standalone
# Add common CDN configuration
# Resolve and add policy MOIDs
# Add common connection type settings
# Base vNIC configuration for FIAttached
# Map lowercase user input to API format
# Handle placement based on auto_vnic_placement_enabled
# Full placement control
# Add slot ID if auto_slot_id is disabled
# Add PCI link configuration if auto_pci_link is disabled
# Add static MAC address if using static type
# Validate FI-Attached specific requirements
# Validate vNIC configurations
# Only validate present vNICs - absent vNICs only need name
# Validate common required fields
# Validate target platform specific fields
# Validate CDN configuration
# Validate connection type specific settings
# Define vNIC options
# Add connection settings argument specs
# Add FIAttached-specific parameters
# Resolve IQN pool MOID if specified
# Save the LAN connectivity policy response
# Process vNICs
# Cache for policy MOIDs to avoid redundant API calls
# Only build API body for present vNICs
# Build vNIC API body using helper function
# Configure the vNIC (create/update/delete)
# Save the vNIC response only if it's present
# Combine LAN connectivity policy and vNICs in the main response
# Fetch vNICs for each LAN connectivity policy
# Fetch vNICs for this policy using LanConnectivityPolicy.Moid filter
# Create a temporary intersight instance for vNICs query
# Add vNICs to the policy
# Single policy case
# Validate UDLD configuration
# Argument spec above, resource path, and API body should be the only code changed in each policy module
# Validate that required parameters were passed. We don't mark them as required in order to support absent.
# iscsi and pxe options
# local disk options
# bootloader options
# pxe only options
# sd card options
# lun for pch, san, sd_card
# usb options
# virtual media options
# Resource path used to fetch vNIC Template info
# Get query parameters for templates
# Get vNIC Templates
# Ensure templates is always a list for checking length, even if a single dict is returned
# Set final response based on number of templates found
# Single template - return as dict
# Multiple templates - return as list
# No templates found - return empty dict
# determine requested operation (config, delete, or neither (get resource only))
# api_body implies resource configuration through post/patch
# no api_body implies a get operation
# no query_params will try to create the resource without getting the current state
# state == 'absent'
# state == 'absent' with no query_params is not permitted to avoid accidental deletion
# get the current state of the resource
# resource exists and moid was returned
# request_delete
# Constraint 1: Cannot enable RDMA over Converged Ethernet and NVGRE simultaneously
# Constraint 2: Cannot enable 'GENEVE Offload' and 'Accelerated Receive Flow Steering (ARFS)' simultaneously
# Constraint 3: Cannot enable ether channel pinning with transmit queue count less than 2
# Validate ranges
# RoCE specific validations
# VXLAN, NVGRE, ARFS, PTP settings
# Advanced features
# RoCE settings
# Interrupt settings
# Queue settings
# Completion
# TCP offload settings
# RSS settings
# Build the API body with all the settings
# Add RoCE-specific settings only when RoCE is enabled
# Prevent a case where idempotency fails because of NtpServers (None vs [])
# Validate port range (1-65535)
# Validate LUN ID (>= 0)
# Add fields for present state
# Add IP address based on protocol type
# Configure the policy
# Add fabric network policies (vNIC templates are FI-attached only)
# Add MAC pool (always required for templates)
# iSCSI boot policy is optional - only add if specified
# Validate required fields for vNIC Template creation
# Resource path used to configure vNIC Template
# Add vNIC Template specific parameters
# Add CDN configuration
# Add connection type specific settings
# get Organization Moid
# always_update_password
# Create api body used to check current state
# update existing resource and purge any existing users
# configure the top-level policy resource
# EndPointUser local_users list config
# check for existing user in this organization
# create user if it doesn't exist
# GET EndPointRole Moid
# EndPointUserRole config
# When adding new policy parameters, update this dict with their respective resource path
# get expected policy moid from 1st list element
# check any current profiles and delete if needed
# get actual moid from 1st list element
# delete the actual policy
# post profile to the expected policy
# Argument spec above, resource path, and API body should be the only code changed in this module
# Get assigned server information (if defined)
# Configure the profile
# Chassis-specific parameters
# Server-specific parameters (Standalone and FI-Attached)
# FI-Attached specific parameters
# Validate that at least one of ipv4_blocks/ipv6_blocks was passed. We don't mark them as required in order to support absent.
# Validate that when enable_block_level_subnet_config is true, ipv4_blocks/ipv6_blocks contains ipv4_config/ipv6_config.
# Validate that when enable_block_level_subnet_config is false, ipv4_blocks/ipv6_blocks has a global ipv4_config/ipv6_config.
# Fetch drive groups for each storage policy
# Fetch drive groups for this policy using StoragePolicy.Moid filter
# Create a temporary intersight instance for drive groups query
# Add drive groups to the policy
# Validate that mac_blocks was passed. We don't mark it as required in order to support absent.
# If SNMP is disabled, we don't need to validate other parameters
# If SNMP is enabled, at least one version must be enabled
# Check if both sys_contact and sys_location are provided when SNMP is enabled
# SNMPv3 users can only be specified when SNMPv3 is enabled
# SNMPv3 only specific validations
# For SNMPv3, community access is always Disabled
# Validate SNMP users
# Validate SNMP traps
# V3 traps only support 'Trap' type, not 'Inform'
# Only add privacy type and password if security level is AuthPriv
# Base API body
# Only include additional parameters if SNMP is enabled
# Add version-specific parameters
# Add SNMP port
# Add system contact and location
# Add community access
# Add SNMPv2c specific parameters
# Add SNMPv3 specific parameters
# Add SNMP users
# Add SNMP traps
# When SNMP is disabled, send empty arrays
# Add optional parameters
# Get the argument specification
# Create the module
# Validate SNMP configuration
# Initialize Intersight module
# Build API body
# Exit with results
# Validate that iqn_suffix_blocks was passed. We don't mark it as required in order to support absent.
# Validate maximum_sessions range
# Validate remote_port range
# Validate KVM policy specific parameters
# Only add KVM-specific settings if enabled is True
# Get iSCSI Static Target policies
# Aggregate port syntax: "49/2"
# Validate aggregate port sub-port range (1-4)
# Regular port: just a number
# No specific restrictions for UCS-FI-6454 in the original implementation
# 1Gbps only for ports 89-96
# 40Gbps and 100Gbps for ports 97-108
# 1Gbps only for ports 9-10
# No breakout support
# 40Gbps and 100Gbps for ports 1-24
# 40Gbps and 100Gbps for ports 41-64
# Only FC ports support breakout
# 1Gbps only for ports 7-8
# Auto is always allowed
# Basic port ID validation
# Device-specific validations
# Separate lists for regular ports and aggregate ports
# Regular port IDs (e.g., 49)
# Aggregate port strings (e.g., "port49/2")
# Base ports used for aggregation (e.g., 49)
# Track the base port used for aggregation
# Add all port role configurations
# Add port channel member ports (can be regular or aggregate)
# TODO: note to self - do not remove this or act on it for now; port channel handling for aggregation on specific hardware still needs to be implemented.
# Legacy syntax: aggregate_port_id specified separately
# port_id can be "49/2" format or regular
# Add all port channel types
# Add breakout port ranges (these create the base ports for aggregation)
# Breakout ports become aggregate base ports, not regular ports
# Check for duplicates in both port lists
# Check for conflicts between regular ports and aggregate base ports
# Report conflicts
# For UCS-FI-6536 and UCSX-S9108-100G, FC is only available through breakout ports
# Warn user if they specify fc_port_mode for these models
# Override the parameter to prevent it from being processed
# If FC mode is configured, validate FC port IDs are within range
# Validate FC port mode range against device capabilities
# For most models (except UCS-FI-6536), the minimum FC port must always be included in the range
# Device-specific validation for maximum FC port
# For UCS-FI-6536, port 36 must always be included in FC range
# For these models, port_id_end must be one of the standard values
# Validate that FC port mode range is within device FC capabilities
# For aggregate ports, check the base port; for regular ports, check the port itself
# Special validation for FC breakout ports
# FC roles on aggregate ports (FC breakout) are only supported on specific devices
# Validate against FC port mode range
# Validate against device FC capabilities
# Validate admin speed for FC ports
# Validate both FC port types
# Parse port ID to check if it's an aggregate port
# Device model specific validation
# Manual numbering cannot be used with aggregate ports
# If manual_numbering is true, device type and device ID are required
# Default to 'Chassis' if not specified when manual numbering is enabled
# If preferred_device_type is specified, preferred_device_id is required
# If preferred_device_id is specified, preferred_device_type is required
# Validate admin speed for the device model and port
# List of all port channel parameter names
# PcId must be less than 257
# PcId must be positive
# Validate admin speed if present
# Validate port ID uniqueness across all port types
# Validate FC port constraints
# Validate server port constraints
# Validate appliance port constraints
# Validate ethernet uplink ports
# Validate fcoe uplink ports
# Validate port channel constraints
# Validate breakout port configurations
# Validate uplink port channel configurations
# Always filter by PortPolicy - required for all port policy resources
# Add PcId filter for port channels (most specific identifier)
# Add PortId filter for individual ports
# Add AggregatePortId filter for aggregate ports (makes port unique)
# Add PortIdStart/PortIdEnd for breakout ports and FC port modes
# Build the appropriate filter for this resource type
# Use the enhanced configure_secondary_resource with custom filter
# Build API body using the provided build function (needed for both create and delete to build filter)
# Configure the port channel using port policy specific handler
# Configure the port using port policy specific handler
# Collect the response for this port
# Parse port ID to handle aggregate ports
# Add aggregate port ID if this is an aggregate port
# Add preferred device configuration if specified
# Resolve Ethernet Network Group Policy MOIDs
# Resolve optional policies using generic helper
# Resolve policies using generic helper
# Build filter for the port channel
# Query the port channel
# Build filter for the port role
# Query the port role
# Determine the pin group type and target interface type
# Look up the MOID for the uplink port channel
# First, try to get from current run's port channels
# If not found in current run, try to fetch from API based on pin group type
# For LAN pin groups, try Ethernet uplink, FCoE uplink, and appliance port channels
# san
# For SAN pin groups, try FCoE uplink port channels first, then others
# Set ObjectType based on pin group type
# target_type == 'port'
# Look up the MOID for the uplink port
# Resolve single policies using generic helper
# Build API body for breakout port (needed for both create and delete to build filter)
# Configure the breakout port using field-based filtering
# Collect the response for this breakout port
# Build API body for server port (needed for both create and delete to build filter)
# Configure the server port using field-based filtering
# Collect the response for this server port
# Build API body for pin group (needed for both create and delete for consistency)
# Determine resource path based on pin group type
# Configure the pin group
# Build API body for FC port mode (needed for both create and delete to build filter)
# Configure the FC port mode using field-based filtering
# Define FC port mode options
# Define breakout port options
# Define server port options
# Define port options for port channels
# Define Ethernet uplink port channel options
# Define FC uplink port channel options
# Define FCoE uplink port channel options
# Define Appliance port channel options
# Define pin group options (both LAN and SAN)
# Define FC Uplink port options
# Define FC Storage port options
# Define Appliance port options
# Define Ethernet Uplink port options
# Define FCoE Uplink port options
# Configure the port policy
# Save the port policy response
# Process secondary resources if port policy is present
# Configure FC port mode
# Configure breakout ports
# Configure server ports
# Configure FC Uplink ports
# Configure FC Storage ports
# Configure Appliance ports
# Configure Ethernet Uplink ports
# Configure FCoE Uplink ports
# Configure Ethernet uplink port channels and get their MOIDs
# Configure FC uplink port channels
# Configure FCoE uplink port channels
# Configure Appliance port channels
# Configure pin groups (both LAN and SAN, must be after all port channels)
# Combine port policy and secondary resources in the main response
# Build pattern dynamically based on field sizes
# Validate UUID prefix format (xxxx-xxxx-xxxxxxxx)
# Validate UUID suffix blocks (xxxx-xxxxxxxxxxxx)
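The two formats named above can be sketched as regexes (illustrative, assuming each `x` is a hex digit as the comments imply): prefix `xxxx-xxxx-xxxxxxxx` and suffix `xxxx-xxxxxxxxxxxx`.

```python
import re

# Assumed patterns for the UUID prefix and suffix block formats.
UUID_PREFIX = re.compile(r"^[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{8}$")
UUID_SUFFIX = re.compile(r"^[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$")
```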
# Resource path used to configure pool
# Parse string input
# Check for comma-separated values and reject them
# Range format: "100-110"
# Single ID
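The parsing steps above (reject comma-separated input, accept a range like "100-110" or a single id) can be sketched like this (a hypothetical helper, not the module's actual code):

```python
# Parse an id spec that is either a single id ("100") or a range
# ("100-110"); comma-separated lists are rejected.
def parse_id_spec(spec):
    spec = spec.strip()
    if "," in spec:
        raise ValueError("comma-separated values are not supported")
    if "-" in spec:
        start, end = (int(part) for part in spec.split("-", 1))
        if start > end:
            raise ValueError("range start must not exceed range end")
        return list(range(start, end + 1))
    return [int(spec)]
```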
# Initialize structured response
# Initialize list to track changed states from each API call
# Store the VLAN policy response
# Save the changed state and reset for next operation
# Process VLANs if provided
# Cache for multicast policy MOIDs to avoid redundant API calls
# Parse VLAN ID range to get list of individual VLAN IDs
# Validate each VLAN ID
# Extract configuration
# Process each VLAN ID in the range
# Check if we exceed the maximum VLAN limit (only for VLANs being created)
# Generate VLAN name: prefix_vlan_id
# Build base VLAN API body
# Handle sharing configuration
# If Isolated or Community, primary_vlan_id is required
# No sharing, use multicast policy
# Get multicast policy name from vlan config
# Check if multicast policy MOID is already cached
# Fetch multicast policy MOID and cache it
# Create the VLAN
# Store the VLAN response
# Set the final structured response
# Set final changed state based on whether any operation resulted in a change
# Store the VSAN policy response
# Process VSANs if provided
# Validate VSAN configuration
# If VSAN state is present, require vsan_id, fcoe_vlan_id, and vsan_scope
# Only build API body and validate if VSAN state is present
# Validate FCoE VLAN ID range
# Build VSAN API body
# Create or delete the VSAN
# Store the VSAN response
# Combine VSAN policy and VSANs in the main response
# Add fields for static DNS servers if not using DHCP
# Add IPv6 fields if enabled
# Add fields for static DNS servers if not using DHCP for IPv6
# Add Dynamic DNS fields if enabled and domain is provided
# GET Organization Moid
# GET IP Pool Moid
# remove read-only Organization key
# Organization must be set, but can't be changed after initial POST
# Resource path for System QoS Policies
# Check if it's a valid hex string
# Check if it has even number of characters
# Check if it doesn't exceed 40 characters
# Validate encryption key format
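The three checks listed above can be sketched as one validator (assumed name and return convention, mirroring the comments: valid hex, even character count, at most 40 characters):

```python
# Validate an encryption key string: hex digits only, even length,
# and no more than 40 characters.
def validate_hex_key(key):
    if len(key) > 40 or len(key) % 2 != 0:
        return False
    try:
        int(key, 16)  # raises ValueError for non-hex input
    except ValueError:
        return False
    return True
```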
# In case state is absent we don't want to populate the api_body with any values
# Add encryption key if provided
# Validate ModelBundleCombo entries
# Set ModelBundleCombo
# Set exclude list
# Build filter for VSANs associated with this policy
# Add VSAN name filter if provided (exact match only)
# Get VSANs for this policy
# Get VSAN policies
# Embed VSANs in the policy response
# Check if device already exists in target list
# Send claim request if device id not already claimed
# Check if target exists
# Initialize all resource keys to ensure consistent structure
# Define resource mappings based on actual endpoints used in intersight_port_policy.py
# Port Modes (FC port mode and Breakout ports)
# Port Channels
# Individual Ports
# Pin Groups
# Build query parameters
# Get the resources
# Special handling for PortModes to separate FC port mode and breakout ports
# Make a deep copy to avoid modifying original data
# Breakout ports have CustomMode specified
# Change PortIdStart and PortIdEnd to PortId to align with the main module.
# Log the error but continue with other resources
# Build query parameters using the standard method
# Get Port Policies
# Ensure port_policies is a list
# For each port policy, get all associated secondary resources
# Get all secondary resources for this policy
# Merge secondary resources directly into the policy data
# Update the api_response with the enhanced policy data
# Set count of policies found
# one API call returning all requested servers
# Define API body used in compares or create
# Validate that snooping cannot be disabled with querier enabled
# Add querier IP address if querier is enabled
# Add peer IP address if provided
# Apply restrictions for FC class
# Apply restrictions for Best Effort class
# Validate CoS values (Best Effort is handled above)
# Validate weight values
# Validate MTU values
# Return default configuration with all 6 classes
# Always false, not user-configurable
# Process provided classes
# Add missing classes with defaults
# Format QoS classes with validation and defaults
# Add CDN value if specified
# Special handling for FabricEthNetworkGroupPolicy which needs to be an array for the API
# (c) 2020 Cisco Systems Inc.
# Intersight REST API Module
# Author: Matthew Garrett
# Contributors: David Soper, Chris Gascoigne, John McDonough
# mismatch if list lengths aren't equal
# if compare_values returns False, stop the loop and return
# loop complete with all items matching
# do not compare any password related attributes or attributes that are not in the actual resource
# Handle Intersight object references: when expected is a MOID string and actual is an object reference
# Compare the MOID string against the Moid field in the object reference
# Handle case where expected is a string and actual is an object reference
# if expected and actual != expected:
# This will fix the format of a v3 secret key, if needed.
# Some v3 keys PEM files are incorrectly formatted with
# "BEGIN EC PRIVATE KEY" instead of "BEGIN PRIVATE KEY"
# Verify an accepted HTTP verb was chosen
# Verify the resource path isn't empty & is a valid <str> object
# Verify the query parameters aren't empty & are a valid <dict> object
# Verify the MOID is not null & of proper length
# Check for query_params, encode, and concatenate onto URL
# Handle PATCH/DELETE by Object "name" instead of "moid"
# Check for moid and concatenate onto URL
# Check for GET request to properly form body
# Concatenate URLs for headers
# Get the current GMT Date/Time
# Generate the body digest
# Generate the authorization header
# Generate the HTTP requests header
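The date and digest steps above can be sketched generically (a hedged illustration of building the Date and Digest headers for a signed request, not Intersight's exact signing code; `signing_headers` is a hypothetical helper and the signature computation itself is omitted):

```python
import base64
import hashlib
from email.utils import formatdate

def signing_headers(body):
    """Build the Date (GMT) and SHA-256 Digest headers for a request body."""
    digest = base64.b64encode(hashlib.sha256(body).digest()).decode("ascii")
    return {
        "Date": formatdate(usegmt=True),       # current GMT date/time
        "Digest": "SHA-256=%s" % digest,       # body digest
    }
```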
# return the 1st list element
# Clear api_response when no results found to prevent returning stale data
# update the resource - user has to specify all the props they want updated
# return the 1st element in the results list
# PATCH returned the updated object directly (not in Results array)
# create the resource
# POSTs may not return any data.
# Get the current state of the resource if query_params.
# delete resource and create empty api_response
# Configure (create, update, or delete) the policy or profile
# Get the current state of the resource
# Configure (create, update, or delete) resources
# This method is used to configure secondary resources that are part of a policy or profile (e.g. VLANs)
# GET Moid of the resource
# add default timeout not default: 1 to support above or operation
# check archive state
# commit confirm specific attributes
# first item: macro command
# Ensure we are not in config mode
# re.compile(rb"^% \w+", re.M),
# initialize to false for default IOS execution
# support to SD-WAN mode
# fails as length required for handling prompt
# Note: python-3.5 cannot combine u"" and r"" together.  Thus make
# an r string and use to_text to ensure it's text on both py2 and py3.
# only catch the macro configuration command,
# not negated 'no' variation.
# send the configuration commands to the device and merge
# them with the current running config
# add before commands as dictionary type to config lines
# Copyright 2025 Red Hat
# IOS will accept a MD5 fingerprint of the public key
# and is easier to configure in a single line
# we calculate this fingerprint here
# ssh-rsa AAA...== comment
# just the key, assume rsa type
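The fingerprint calculation described above, sketched for an OpenSSH-style public key line; the function name is hypothetical, and the two input shapes mirror the comments ("ssh-rsa AAA...== comment" or just the key material):

```python
import base64
import hashlib

def pubkey_md5_fingerprint(pubkey: str) -> str:
    """MD5 fingerprint of an OpenSSH public key, as an upper-case hex string."""
    parts = pubkey.strip().split()
    if len(parts) >= 2:
        b64 = parts[1]   # "ssh-rsa AAA...== comment"
    else:
        b64 = parts[0]   # just the key, assume rsa type
    return hashlib.md5(base64.b64decode(b64)).hexdigest().upper()
```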
# Some ios devices don't understand `| section foo`
# parse native config using the Vlan_configurations template
# Determine if we need to collect vlan data or config data
# Appending MTU value to the retrieved dictionary
# Appending Remote Span value to related VLAN
# Appending private vlan information to related VLAN
# Sanitize and structure everything
# Define the secondary VLAN's type
# Assemble and merge data for primary private VLANs
# Associate with the proper VLAN in final_objs
# if any vlan configuration data is pending add it to facts
# check for index where state starts
# break range
# parse native config using the ServiceTemplate
# convert some of the dicts to lists
# parse native config using the Hsrp_interfaces template
# Default priority to 100 if not specified, idempotent behavior
# Default to version 1 if not specified, idempotent behavior
# parse native config using the Evpn_global template
# defaults to version 3 data
# gathers v3 user data
# parse snmpv3_user data using the get_snmpv3_user_facts method
# add snmpv3_user data to the objs dictionary
# parse native config using the Evpn_ethernet template
# parse native config using the Static_routes template
# parse native config using the l3_interfaces template
# sorting the dict by interface name
# pylint: skip-file
# Removed the show access-list
# Removed the show running-config | include ip(v6)* access-list|remark
# this information is required to scoop out the access lists that have no aces
# this would update empty acls to the full acls entry
# parse main information
# parse just names to update empty acls
# for extended acl
# each remark is one list item with a sequence number
# every ace remark is preserved and ordered
# at the end of each sequence it is flushed into an ace entry
# i here denotes an ace, which would be populated with remarks entries
# pending remarks
# there can be an ace entry with just remarks and no actual ace
# 10 remarks I am a remark
# 20 ..... and so on
# this handles the classic set of remarks at the end, which is not tied to
# any sequence number
# handling remarks for each ace entry
# parse native config using the L2_interfaces template
# parse native config using the Bgp_address_family template
# parse native config using the Lag_interfaces template
# parse native config using the Interfaces template
# parse native config using the Evpn_evi template
# Parse native config using the Vrf_interfaces template
# Ensure previous facts are removed to avoid duplication
# Update the ansible_facts dictionary with the VRF interface facts
# for older ios versions the default is autonomous, where the operating mode classification is not present
# Convert dict to list
# converts areas, interfaces in each process to list
# parse native config using the Vxlan_vtep template
# parse native config using the Vrf_address_family template
# if state is overridden, remove excluded vlans
# Exclude 'mtu' from comparison if deleting VLANs as it is not supported
# handles deleted state where want is blank and only
# negates the 'no shutdown' config
# Convert each config list to dict
# description gets merged with existing description, so explicit delete is required
# replaced and overridden state
# asplain helper func
# convert to asplain for correct sorting
# sort the list to ensure idempotency
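The asplain conversion hinted at above (so that asdot AS numbers like "1.10" sort numerically alongside plain ones) might look like this; the `as_number` key in the sort helper is an assumption for illustration:

```python
def asplain(as_number: str) -> int:
    # asdot "1.10" -> 1 * 65536 + 10; plain numbers pass through
    if "." in as_number:
        high, low = as_number.split(".")
        return int(high) * 65536 + int(low)
    return int(as_number)

def sort_by_asplain(items):
    # sort a list of dicts by AS number to ensure idempotency
    return sorted(items, key=lambda entry: asplain(str(entry["as_number"])))
```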
# if state is merged, merge want onto have
# if state is deleted, limit the have to anything in want & set want to nothing
# Compare Exports
# Compare Imports
# Copyright 2019 Red Hat Inc.
# if state is replaced
# if state is deleted, limit the have to anything in want
# set want to nothing
# delete processes first so we do run into "more than one" errors
# this is to preserve the order in which ipv6 addresses are applied to the device
# Removal of unnecessary configs in have_standby_group
# remove superfluous config for deleted
# Take into account the traps config mpls_vpn, which is DEPRECATED and replaced by mpls.vpn
# Remove the duplicate interface call
# We didn't find a matching desired state, which means we can
# pretend we received an empty desired state.
# Set the interface config based on the want and have config
# Get the diff b/w want and have
# Delete the interface config based on the want and have config
# compare sequences
# remove remaining entries from have prefix list
# mind the order of the parsers
# other list attrs
# access_group
# if self.state in ["overridden", "deleted"]:
# or not want.get(addf)
# remove remaining items in have for replaced state
# remove superfluous config
# New interface (doesn't use fact file)
# entry is set as primary
# entry is set as secondary
# hacl is set as primary, if wacls has no other primary entry we must keep
# this entry as primary (so we'll compare entry to hacl and not
# generate commands)
# another primary is in wacls
# these can be subnets that are no longer used
# or secondaries that have moved to primary
# or primary that has moved to secondary
# deprecated attribute
# if state is deleted, empty out wantd and set haved to want
# If ACLs type is different between existing and wanted ACL, we need first remove it
# We remove ACEs because we have previously added a command to suppress the ACL completely
# to determine the index for acl command
# handle aces
# remove remaining acls lists
# case 1 - loop on want and compare with have data here
# if there is have information with same sequence
# the protocol options are processed here
# if want and have is different
# separate remarks from have in an ace entry
# separate remarks from want in an ace entry
# have aces processing starts here
# if merged then don't update anything and fail
# i.e. if not merged
# remove all remarks for a ace if want and have don't match
# as we cannot maintain order if we randomly add aces, we have to
# add all of them again for that ace
# and we empty our have as we would add back
# all our remarks for that ace anyway
# remove ace if not in want
# we might think why not update it directly,
# if we try to update without negating the entry appliance
# reports % Duplicate sequence number
# once an ace is negated intentionally emptying out have so that
# the remarks are repopulated, as the remarks and ace behavior is sticky
# if an ace is taken out, all the remarks are removed automatically.
# add remark if not in have
# but delete all remarks before to protect order
# add ace if not in have
# if the ace entry just has sequence then do nothing
# add normal ace entries from want
# case 2 - loop over remaining have and remove them
# remove all remarks in that
# deal with the rest of ace entry
# ipv4 and ipv6 acl
# check each acl for aces
# remarks if defined in an ace
# each ace turned to dict
# failing for mutually exclusive standard acl key
# index aces inside each acl; don't cluster them all
# en_name = str(acl.get("name")) + "remark"
# temp_rem.extend(ace.pop("remarks"))
# if temp_rem:  # add remarks to the temp ace
# update acl dict with req info
# if no acl type then here eg: ipv6
# NOTE - "514": "syslog" duplicate value device renders "cmd"
# handle aliased hostname as host
# find vlans to create wrt have
# remove vlan all as want blank
# remove excess vlans for replaced overridden with vlan entries
# add configuration needed
# generic
# bgp
# snmp
# redistribute
# to clear off all afs
# not deleted state
# adds router bgp AS_NUMB command
# for everything else
# for neighbors
# for networks
# for aggregate_addresses
# for distribution of ospfv2 and ospfv3 routes
# add af command
# always negate the command in the
# appropriate states before applying
# handles route_maps, prefix_lists
# address/ tag/ ipv6_address to neighbor_address
# prefix_list and prefix_lists
# deprecated made list
# route_map and route_maps
# slow_peer to slow_peer_options
# only one slow_peer is allowed
# we can skip deprecating these by handling the int to str conversion here
# int to float is not considered, given the size of AS numbers
# make neighbors a dict
# make networks a dict
# make aggregate_addresses a dict
# keep single dict to compare redistribute
# establish elif sections for future protocols
# if necessary
# Start handle deprecates
# map deprecated nssa_external type to new option
# deprecated external and nssa_external are boolean
# so both types mapped to true
# End handle deprecates
# make distinct address family entries
# Set the lldp global config based on the want and have config
# Delete the lldp global config based on the want and have config
# can only have layer2 as switchport; no show cli
# deleted, clean up global params
# delete as_number takes down whole bgp config
# if state is merged, merge want with have and then compare
# holds command length to add as_number
# for clean bgp global setup
# for dict type attributes
# for list type attributes
# for redistribute
# add as_number in the beginning of commands set if commands generated
# entry = self._handle_deprecated(entry)
# convert list to dict for areas
# list to dict for distribute_list
# list to dict for passive_interfaces
# list to dict for network
# remove exiting config of the member VNI
# remove remaining configs in have for replaced state
# bgp starts
# bgp ends
# neighbor remote-as starts
# neighbor remote-as ends
# neighbor peer-group starts
# neighbor peer-group ends
# redistribute starts
# redistribute ends
# contain sub list attr
# only traps
# for config_data in config_data["address_family"]:
# "compval": "afi",
# if config.get("segment_routing").get("enable"):
# only applicable for switches
# member vni starts
# member vni ends
# additive must be set last
# neighbor starts
# neighbors end
# Initialize cmd to avoid potential unbound error
# To delete the passed config
# To set the passed config
# To verify the valid ipv6 address
# recursive function to convert input dict to set for comparison
# Generate dict with have dict value which is None in want dict
# as multiple IPv6 addresses can be configured on the same
# interface, an in-place update for replaced state would
# actually create a new entry, which isn't what's expected
# for replaced state; so in the case of IPv6 addresses,
# always delete the existing IPv6 config first and
# then apply the new change
# the condition checks below verify whether a
# secondary IP is configured; if so, delete the
# already configured IP when the want and have IPs
# differ, else if they are the same there is no need to delete
# Remove duplicate interface from commands
# convert netmask to cidr and returns the cidr notation
# for IPv6
# for IPv4
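The netmask-to-CIDR conversion described above can be done with the stdlib ipaddress module; a sketch (function name assumed, IPv6 addresses are expected to arrive in prefix form already):

```python
import ipaddress

def netmask_to_cidr(address: str, netmask: str) -> str:
    # "192.168.1.10" + "255.255.255.0" -> "192.168.1.10/24"
    prefix = ipaddress.ip_network(f"0.0.0.0/{netmask}").prefixlen
    return f"{address}/{prefix}"
```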
# Cisco UCS doc fragment
# UCSModule creation above verifies ucsmsdk is present and exits on failure, so additional imports are done below.
# mo must exist but all properties do not have to match
# check top-level mo props
# create if mo does not already exist
# UCSModule verifies ucsmsdk is present and exits on failure.  Imports are below ucs object creation.
# dn is <org_dn>/mac-pool-<name>
# top-level props match, check next level mo/props
# mac address block specified, check properties
# no MAC address block specified, but top-level props matched
# set default params.  Done here to set values for lists which can't be done in the argument_spec
# dn is <org_dn>/san-conn-templ-<name>
# single resource specified, create list from the current params
# dn is <org_dn>/wwn-pool-<name> for WWNN or WWPN
# append purpose param with suffix used by UCSM
# UCSModule creation above verifies ucsmsdk is present and exits on failure.  Additional imports are done below.
# service profiles are of type 'instance'
# top-level props match
# no power state provided, use existing state as match
# logical server distinguished name is <org>/ls-<name> and physical node dn appends 'pn' or 'pn-req'
# check if logical server is assigned and associated
# check the current pool
# create if mo does not already exist in desired state
# profiles from templates will add a server pool, so remove and add the server again
# TODO Add ranges for cos, weight and mtu
# dn is <org_dn>/profile-<name>
# check local lun props
# mo must exist, but all properties do not have to match
# check mo props
# remove parent info and passwords because those aren't presented in the actual props
# explicit deep copy of child object since traverse_objects may modify parent mo information
# note that all objects specified in the object list report a single result (including a single changed).
# single commit for object and any children
# generic Exception because UCSM can throw a variety of exceptions
# UCSModule verifies ucsmsdk is present and exits on failure.
# Imports are below for UCS object creation.
# The Class(es) this module is managing
# Manage Attributes
# Determine state change
# Object exists, if it should exist has anything changed?
# Do some or all Object properties not match, that is a change
# Object does not exist but should, that is a change
# Object exists but should not, that is a change
# Apply state if not check_mode
# dn = fabric/lan/net-group-VLANGROUP
# Check for VLAN Group
# dn = fabric/lan/net-VLANNAME
# Check for VLAN
# dn = fabric/lan/net-group-VLANGROUP/net-VLANNAME
# Check for VLAN within VLAN Group
# Configuration MOs. Couldn't really get this to work off the DNs, so I built additional objects.
# Error out if the VLAN Group is missing
# Error out if VLAN is missing
# ipv4 block specified, check properties
# ipv6 block specified, check properties
# dn is <org_dn>/ip-pool-<name>
# only check ipv6 props if the top-level and ipv4 props matched
# dn is <org_dn>/uuid-pool-<name>
# uuid address block specified, check properties
# no UUID address block specified, but top-level props matched
# update mo, timezone mo always exists
# graphics_card_policy_name=module.params['graphics_card_policy'],
# Storage profile
# Management Interface
# LAN/SAN connectivity policy
# IQN pool
# power state
# server pool
# generic Exception handling because SDK can throw a variety of exceptions
# no storage profile mo or desired state
# no mo and no desired state
# no pn-req object and no server pool name provided
# kwargs['graphics_card_policy_name'] = module.params['graphics_card_policy']
# code below should discontinue checks once any mismatch is found
# check storage profile 1st
# UCSModule creation above verifies ucsmsdk is present and exits on failure.
# Additional imports are done below or in called functions.
# dn is <org_dn>/ls-<name>
# state == 'present'
# configuration_mode == 'manual'
# generic Exception handling because SDK can throw a variety
# dn is <org_dn>/disk-group-config-<name>
# check vnicEther props
# mo_1 did not exist
# check vnicIScsiLCP props
# check vlan
# mo_1 props did not match
# check vnic 1st
# dn is <org_dn>/lan-conn-pol-<name>
# dn is <org_dn>/san-conn-pol-<name>
# vnicFcNode object
# check vnicFc props
# dn is fabric/san/net-<name> for common vsans or fabric/san/[A or B]/net-<name> for A or B
# dn is fabric/lan/net-<name> for common vlans or fabric/lan/[A or B]/net-<name> for A or B
# dn is <org_dn>/lan-conn-templ-<name>
# set default params for lists which can't be done in the argument_spec
# for target 'adapter', change to internal UCS Manager spelling 'adaptor'
# do not check shared props if this is a secondary template
# check vlan props
# secondary template only sets non shared props
# update/add mo
# (c) 2019 Cisco Systems Inc.
# import done here to provide common import check for all modules
# use_proxy=yes (default) and proxy=None (default) should be using the system defined proxy
# use_proxy=yes (default) and proxy=value should use the provided proxy
# use_proxy=no (user) should not be using a proxy
# force no proxy to be used.  Note that proxy=None in UcsHandle will
# use the system proxy so we must set to something else
# Before any commit happened, we can get a real configuration
# diff from the device and make it available by the iosxr_config module.
# This information can be useful either in check mode or normal mode.
# In some cases even a normal commit, i.e., !replace,
# throws a prompt and we need to handle it before
# proceeding further
# Generated by the protocol buffer compiler.  DO NOT EDIT!
# source: ems_grpc.proto
# @@protoc_insertion_point(imports)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.ConfigGetArgs)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.ConfigGetReply)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.GetOperArgs)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.GetOperReply)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.ConfigArgs)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.ConfigReply)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.CliConfigArgs)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.CliConfigReply)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.CommitReplaceArgs)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.CommitReplaceReply)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.CommitMsg)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.CommitArgs)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.CommitReply)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.DiscardChangesArgs)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.DiscardChangesReply)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.ShowCmdArgs)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.ShowCmdTextReply)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.ShowCmdJSONReply)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.CreateSubsArgs)
# @@protoc_insertion_point(class_scope:IOSXRExtensibleManagabilityService.CreateSubsReply)
# @@protoc_insertion_point(module_scope)
# this argument is deprecated in favor of setting match: none
# it will be removed in a future version
# Commenting the below cliconf deprecation support call for Ansible 2.9 as it'll be continued to be supported
# (c) 2017, Ansible by Red Hat, Inc
# Add iosxr 7.0 > specific parameters
# (c) 2017 Kedar Kekan (kkekan@redhat.com)
# TODO: change .xml to .data_xml, when ncclient supports data_xml on all platforms
# Copyright (c) 2017 Red Hat Inc.
# Objects defined in Route-policy Language guide of IOS_XR.
# Reconfiguring these objects replace existing configurations.
# Hence these objects should be played directly from candidate
# configurations
# ignore rpc-reply root node and diff from data element onwards
# Note: Does not cache config in favour of latest config on every get operation.
# FIXME: check for platform behaviour and restore this
# conn.lock(target = 'candidate')
# conn.discard_changes()
# conn.unlock(target = 'candidate')
# Overwrite the default diff by the IOS XR commit diff.
# See plugins/cliconf/iosxr.py for this key set: show_commit_config_diff
# A list of commands like {end-set, end-policy, ...} are part of configuration
# block like { prefix-set, as-path-set , ... } but they are not indented properly
# to be included with their parent. sanitize_config will add indentation to
# end-* commands so they are included with their parents
# This might be end-set of another regex
# otherwise we would be having new start
# Add the last block
# gets policy names as there is no good way to get all policy data at once,
# other than slicing running-config
# for states like parsed to work
# generate a list of policy names
# only for states like parsed
# we enumerate the split data as the name and policy details are on the same sequence
# we send the name of the policy and the policy data is fetched for each name, one costly operation
# the list of policy facts is created as individual route-policy information is converted to facts
# handles empty config
# Get the configured IPV4 details
# Get the configured IPV6 details
# parse native config using the Bgp_templates template
# We iterate through the data and create a list of ACLs
# where each ACL is a dictionary in the following format:
# {'afi': 'ipv4', 'name': 'acl_1', 'aces': ['10 permit 172.16.0.0 0.0.255.255', '20 deny 192.168.34.0 0.0.0.255']}
# Here we group the ACLs based on AFI
# Now that we have the ACLs in a fairly structured format,
# we pass it on to render_config to convert it to model spec
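The AFI grouping described above could be sketched as below; the dict shape is taken from the comments' own example, the function name is an assumption:

```python
def group_acls_by_afi(acls):
    """Group parsed ACL dicts like
    {'afi': 'ipv4', 'name': 'acl_1', 'aces': [...]} by address family."""
    grouped = {"ipv4": [], "ipv6": []}
    for acl in acls:
        grouped.setdefault(acl["afi"], []).append(acl)
    return grouped
```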
# We deepcopy the actual queue and iterate through the
# copied queue. However, we pop off the elements from
# the actual queue. Then, in every pass we update the copied
# queue with the current state of the original queue.
# This is done because a queue cannot be mutated during iteration.
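The copy-then-pop pattern described above might look like this minimal sketch (function names are hypothetical; only the "iterate a copy, mutate the original, refresh the copy each pass" mechanic comes from the comments):

```python
from collections import deque
from copy import deepcopy

def consume(queue: deque):
    """Consume every element of `queue` without mutating the sequence
    being iterated: each pass reads a fresh copy while elements are
    popped off the real queue."""
    parsed = []
    copied = deepcopy(queue)
    while copied:
        element = copied[0]
        queue.popleft()           # mutate the original, not the copy
        parsed.append(element)
        copied = deepcopy(queue)  # refresh the copy with current state
    return parsed
```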
# `dscp` can be followed by either the dscp value itself or
# the same thing can be represented using "dscp eq <dscp_value>".
# In both cases, it would show up as {'dscp': {'eq': "dscp_value"}}.
# Create a queue with each word in the ace
# We parse each element and pop it off the queue
# An ACE will always have a sequence number, even if
# it is not explicitly provided while configuring
# If the entry is a non-remark entry, the third element
# will always be the protocol specified. By default, it's
# the AFI.
# Populate source dictionary
# Populate port_protocol key in source dictionary
# Populate destination dictionary
# Populate port_protocol key in destination dictionary
# Populate protocol_options dictionary
# Populate remaining match options' dictionaries
# At this stage the queue should be empty
# If the queue is not empty, it means that
# we haven't been able to parse the entire ACE
# In this case, we add the whole unprocessed ACE
# to a key called `line` and send it back
# default output format has more spaces with default indentation
# reset the spaces with replace
# parse native config using the Vrf_interfaces template
# move global vals to their correct position in facts tree
# this is only needed for keys that are common between both global
# and VRF contexts
# transform vrfs into a list
# for purged state
# for all other states
# clean anything that is surplus; if state is purged, clean all of have if want is empty
# to maintain the sanity of how commands are generated
# iterate on the list to preserve sequence
# loop over want's condition section
# if want clauses and have clauses are not same
# commands cannot be added in an ad hoc manner as that replaces the whole config
# required to generate conditional statements
# handle elseif conditions
# adds else only once if there is else block
# condition commands added here
# as apply is a list
# apply config added here
# route-policy configs added here
# if we want to add any condition we have to start with if
# same as above
# add endif if there was a nested else
# add endif if there was a condition in the top level config
# if route-policy then end-policy
# the name of the route-policy
# wanted to do recursion but the overall performance is better this way
# if state is merged, merge want into have and then compare
# if state is overridden, first remove processes that are in have but not in want
# handling complex parsers for replaced and overridden state
# aligning cmd with negate cmd first, followed by config cmd
# if state is deleted, empty out wantd
# remove superfluous config
# Delete all the top-level keys (VRFs/Global Route Entry) that are
# not specified in want.
# convert the next_hops list of dictionaries to dictionary of
# dictionaries with (`dest_vrf`, `forward_router_address`, `interface`) tuple
# being the key for each dictionary.
# a combination of these 3 attributes uniquely identifies a route entry.
# in case `dest_vrf` is not specified, `forward_router_address` and `interface`
# become the unique identifier
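The rotation described above, keying each next_hop by its unique-identifier tuple, could be sketched as follows (the function name is hypothetical; the three key attributes come from the comments):

```python
def rotate_next_hops(next_hops):
    """Convert a list of next_hop dicts into a dict keyed by the
    (dest_vrf, forward_router_address, interface) tuple that uniquely
    identifies a route's exit point; the value holds the remaining
    attributes so dict_diff can be run on them."""
    rotated = {}
    for hop in next_hops:
        key = (
            hop.get("dest_vrf"),
            hop.get("forward_router_address"),
            hop.get("interface"),
        )
        rotated[key] = {
            k: v for k, v in hop.items()
            if k not in ("dest_vrf", "forward_router_address", "interface")
        }
    return rotated
```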
# for every dict in the want next_hops dictionaries, if the key
# is present in `rotated_have_next_hops`, we set `existing` to True,
# which means the given want exit point exists and we run dict_diff
# on `value` which is basically all the other attributes of the exit point
# if the key is not present, it means that this is a new exit point
# dict_merge() is necessary to make sure that we
# don't end up overriding the entry and also to
# allow incremental updates
# Only add 'vrf' if in VRF context and not in normal context
# Add the want interface that's not already configured in have interface
# To handle L3 IPV4 configuration
# To handle L3 IPV6 configuration
# Instead of passing entire want and have
# list of dictionaries to the respective
# _state_* methods we are passing the want
# and have dictionaries per AFI
# Remove extraneous AFI that are present in config but not
# specified in `want`
# First we remove the extraneous ACLs from the AFIs that
# are present both in `want` and in `have` and then
# we call `_state_replaced` to update the ACEs within those ACLs
# `dict_diff` doesn't work properly for `protocol_options` diff,
# so we need to handle it separately
# Convert prefixes to "address wildcard bits" format for IPv4 addresses
# Not valid for IPv6 addresses because those can only be specified as prefixes
# and are always rendered in running-config as prefixes too
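The IPv4 prefix-to-"address wildcard bits" rendering described above, sketched with the stdlib ipaddress module (function name assumed; IPv6 passes through unchanged, as the comments note those are only valid in prefix form):

```python
import ipaddress

def prefix_to_wildcard(prefix: str) -> str:
    """Render an IPv4 prefix as "address wildcard-bits"; return IPv6
    prefixes unchanged since they are only accepted as prefixes."""
    network = ipaddress.ip_network(prefix, strict=False)
    if network.version == 6:
        return prefix
    return f"{network.network_address} {network.hostmask}"
```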
# For merging with already configured l2protocol
# Fetch the area info as that would be common to all the attributes per interface
# if state is deleted, clean up global params
# compare af under vrf separately to ensure correct generation of commands
# for deleted and overridden state
# and have dictionaries per interface
# if state is deleted, empty out wantd and set haved to elements to delete
# for deleted and replaced state
# if there are peers already configured
# we need to remove those before we pass
# the new ones otherwise the device appends
# them to the existing ones
# cleanup remaining VRFs
# but do not negate it entirely
# instead remove only those attributes
# that this module manages
# like "$param1"
# Checks if want doesn't have secondary IP but have has secondary IP set
# Copyright: (c) 2017, Swetha Chunduri (@schunduri)
# Sorted functionality for visual aid only, will result in 1/25, 1/3, 1/31
# If full sort is needed leverage natsort package (https://github.com/SethMMorton/natsort)
# Copyright: (c) 2017, Ramses Smeyers <rsmeyers@cisco.com>
# Copyright: (c) 2023, Shreyas Srish <ssrish@cisco.com>
# Copyright: (c) 2024, Akini Ross <akinross@cisco.com>
# This function takes a dictionary and a series of keys,
# and returns a list of dictionaries using recursive helper function 'listify_worker'
# This function walks through a dictionary 'd', depth-first,
# using the keys provided, and generates a new dictionary for each key:value pair encountered
# The prefix in the code is used to store the path of keys traversed in the nested dictionary,
# which helps to generate unique keys for each value when flattening the dictionary.
# The cache in this code is a temporary storage that holds key-value pairs as the function navigates through the nested dictionary.
# It helps to generate the final output by remembering the traversed path in each recursive call.
# If we're at the deepest level of keys
# Conditional to support nested dictionaries, which are detected by checking whether the item is a string
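The listify walk described above, sketched under the assumption that leaf values are dicts to be merged with the traversed path (function names mirror the comments; the key/data names in the test are illustrative only):

```python
def listify(d, *keys):
    """Walk nested dict `d` depth-first along `keys` and return one
    flat dict per leaf, recording the traversed path of key names."""
    return list(_listify_worker(d, keys, {}))

def _listify_worker(d, keys, cache):
    current, rest = keys[0], keys[1:]
    for name, value in d.items():
        entry = {**cache, current: name}     # remember the path so far
        if rest:                             # descend to the next key level
            yield from _listify_worker(value, rest, entry)
        else:                                # deepest level: emit the leaf
            yield {**entry, **value}
```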
# Copyright (c) 2020 Cisco and/or its affiliates.
# Optional, only used for APIC signature-based authentication
# Signature-based authentication using cryptography
# Login function is executed until connection to a host is established or until all the hosts in the list are exhausted
# One API call is made via each call to send_request from aci.py in module_utils
# As long as a host is active in the list the API call will go through
# recurse through function for retrying the request
# return statement executed upon each successful response from the request function
# Built-in-function
# Response check to remain consistent with fetch_url's response
# Response check to trigger key error if response_data is invalid
# base class verifies that file exists and is readable by current user
# this method will parse 'common format' inventory sources and
# update any options declared in DOCUMENTATION as needed
# Avoid argument spec validation warnings for plugin options
# Options are retrieved from the constructed plugin docs, see below links for more information:
# parse data and create inventory objects:
# replace the reserved Ansible inventory keys from host topSystem attributes
# Create user-defined groups using variables and Jinja2 conditionals
# Copyright: (c) 2020, nkatarmal-crest <nirav.katarmal@crestdatasys.com>
# Copyright: (c) 2020, Cindy Zhao <cizhao@cisco.com>
# FIXME: didn't find the flow_log in UI
# flow_log=dict(type='str'),
# Copyright: (c) 2023, Akini Ross <akinross@cisco.com>
# Not required for querying all objects
# Copyright: (c) 2021, Manuel Widmer <mawidmer@cisco.com>
# vmmProvP is not allowed to execute a query with rsp-subtree set in the filter_string
# due to complicated url construction logic which should be refactored; creating a temporary fix inside the module
# TODO refactor url construction logic if more occurrences of the rsp-subtree-not-supported problem appear
# check if the url is pointing towards the vmmProvP class and rsp-subtree is set in the filter_string
# Create new child configs payload
# Validate if existing, and remove the child object when it does not match the provided configuration
# Copyright: (c) 2018, Dag Wieers (dagwieers) <dag@wieers.com>
# Copyright: (c) 2022, Mark Ciecior (@markciecior)
# Query scenario without any filters forcing class query on the subject_label_class
# Copyright: (c) 2022, Sabari Jaganathan <sajagana@cisco.com>
# Collect proper class and mo information based on pool_type
# Validate range_end and range_start are valid for its respective encap type
# Validate range_start is less than range_end
# Reset range managed object to None for aci util to properly handle query
# Vxlan does not support setting the allocation mode
# ACI Pool URL requires the allocation mode for vlan and vsan pools (ex: uni/infra/vlanns-[poolname]-static)
# Copyright: (c) 2022, Sabari Jaganathan (@sajagana) <sajagana@cisco.com>
# Copyright: (c) 2023, Gaspard Micol (@gmicol) <gmicol@cisco.com>
# Copyright: (c) 2023, Tim Cragg (@timcragg) <tcragg@cisco.com>
# Copyright: (c) 2023, Akini Ross (@akinross) <akinross@cisco.com>
# Report when vm_provider is set when type is not virtual
# Vxlan pools do not support allocation modes
# Compile the full domain for URL building
# Ensure that querying all objects works when only domain_type is provided
# Filter out module params with null values
# Generate config diff which will be used as POST request body
# Submit changes if module not in check_mode and the proposed is different than existing
# Copyright: (c) 2023, Eric Girard <@netgirard>
# Defining the interface profile tDn for clarity
# Copyright: (c) 2020, Shreyas Srish <ssrish@cisco.com>
# Copyright: (c) 2025, Tim Cragg (@timcragg)
# Copyright: (c) 2025, Shreyas Srish (@shrsr)
# Copyright: (c) 2024, Faiz Mohammad (@Ziaf007) <faizmoh@cisco.com>
# No default provided on purpose
# Copyright: (c) 2017, Bruno Calogero <brunocalogero@hotmail.com>
# NOTE: Keyword 'from' is a reserved word in python, so we need it as a string
# Add infraNodeBlk only when leaf_node_blk was defined
# Add infraRsAccNodePGrp only when policy_group was defined
# NOTE: normal rn: leaves-{name}-typ-{type}, hence here hardcoded to range for purposes of module
# NOTE: infraNodeBlk is not made into a subclass because there is a 1-1 mapping between node block and leaf selector name
# Copyright: (c) 2023, Tim Cragg (@timcragg)
# Copyright: (c) 2023, Akini Ross (@akinross)
# Copyright: (c) 2019, Tim Knipper <tim.knipper@gmail.com>
# Copyright: (c) 2023, Samita Bhattacharjee (@samitab) <samitab@cisco.com>
# Validate if existing and remove child objects when they do not match the provided configuration
# Appending to child_config list not possible because of APIC Error 103: child (Rn) of class spanRsSrcGrpToFilterGrp is already attached.
# A separate delete request to the dn of the spanRsSrcGrpToFilterGrp is needed to remove the object prior to adding to child_configs.
# The failing child_config is displayed below:
# child_configs.append(
# Only wrap the payload in parent class if cloud_tenant is set to avoid apic error
# Ensure that querying all objects works when only domain is provided
# Copyright: (c) 2020, nkatarmal-crest (@nirav.katarmal)
# Copyright: (c) 2021, Cindy Zhao (@cizhao)
# Copyright: (c) 2025, Akini Ross (@akinross)
# Copyright: (c) 2017, Jacob McGill (@jmcgill298)
# For simplicity of code, assume that when a valid IPv4 address is not provided, the intent is IPv6.
# Copyright: (c) 2023, Gaspard MICOL (@gmicol) <gmicol@cisco.com>
# TODO: split checks between IPv4 and IPv6 Addresses
# ACI Pool URL requires the pool_allocation mode for vlan and vsan pools (ex: uni/infra/vlanns-[poolname]-static)
# Vxlan pools do not support pool allocation modes
# Filter out module parameters with null values
# Via the UI, vSwitch Policy can only be added for VMware and Microsoft VMM domains;
# behavior for other domains is currently untested.
# enhanced_lag_spec = dict(
# netflow_spec = dict(
# Remove nutanix from VM_PROVIDER_MAPPING as it is not supported
# Copyright: (c) 2024, Samita Bhattacharjee (@samitab) <samitab@cisco.com>
# Copyright: (c) 2020, Sudhakar Shet Kudtarkar (@kudtarkar1)
# Copyright: (c) 2020, Anvitha Jain(@anvitha-jain) <anvjain@cisco.com>
# Build ctrl value for request
# See comments in aci_static_binding_to_epg module.
# child_configs = []
# child_configs=child_configs
# Copyright: (c) 2023, Shreyas Srish (@shrsr) <ssrish@cisco.com>
# Copyright: (c) 2022, Tim Cragg (@timcragg)
# Copyright: (c) 2023, Christian Kolrep <christian.kolrep@dataport.de>
# Mapping dicts are used to normalize the proposed data to what the APIC expects, which will keep diffs accurate
# Not required for querying all policies
# ESG Tag Selector key name
# ESG Tag Selector operator type
# ESG Tag Selector match value
# Copyright: (c) 2024, Samita Bhattacharjee <samitab@cisco.com>
# in aws cloud apic
# in azure cloud apic
# Copyright: (c) 2022, Jason Juenger (@jasonjuenger) <jasonjuenger@gmail.com>
# Copyright: (c) 2020, Lionel Hercot <lhercot@cisco.com>
# Copyright: (c) 2019, Simon Metzger <smnmtzgr@gmail.com>
# Not required for querying all objects and deleting sub port blocks
# NOTE: normal rn: hports-{name}-typ-{type}, hence here hardcoded to range for purposes of module
# Copyright: (c) 2024, Samita Bhattacharjee (@samitab) <samitab@cisco.com>
# Copyright: (c) 2017, Jacob McGill (@jmcgill298)
# Collect proper mo information
# Validate block_end and block_start are valid for its respective encap type
# Validate block_start is less than block_end
# ACI Pool URL requires the allocation mode (ex: uni/infra/vlanns-[poolname]-static)
# Validation rule to only allow aggregate choice when there is a match in scope choice
# Copyright: (c) 2018, Simon Metzger <smnmtzgr@gmail.com>
# Copyright: (c) 2020, Zak Lantz (@manofcolombia) <zakodewald@gmail.com>
# Copyright: (c) 2023, Gaspard Micol <gmicol@cisco.com>
# NOTE: (This problem is also present in the APIC GUI)
# NOTE: When specifying a C(role) the new Fabric Node Member will be created, but the Role in the GUI will be "unknown"; this does not appear to be a module problem
# NOTE: Originally we were sending 'rn', but now we need 'dn' for idempotency
# Copyright: (c) 2023, Anvitha Jain <anvjain@cisco.com>
# Appending to child_config list not possible because of APIC Error 103: child (Rn) of class spanRsSrcToCtx is already attached.
# Appending to child_config list not possible because of APIC Error 103: child (Rn) of class spanRsSrcToBD is already attached.
# Copyright: (c) 2023, Sabari Jaganathan <sajagana@cisco.com>
# Copyright: (c) 2023, Tim Cragg (@timcragg) <timcragg@cisco.com>
# alias=dict(type="str"), not implemented because of different (api/alias/mo/uni/) api endpoint
# Copyright: (c) 2020, Cindy Zhao (cizhao) <cizhao@cisco.com>
# Copyright: (c) 2021, Oleksandr Kreshchenko (@alexkross)
# Optional, only used for XML payload
# Recursively add annotation to children
# Use APICs built-in idempotency
# Report missing file
# Find request type
# Ensure we always return a status
# We include the payload as it may be templated
# TODO: Would be nice to template this, requires action-plugin
# Validate payload
# Validate inline YAML/JSON
# Validate XML string
# Perform actual request using auth cookie (Same as aci.request(), but also supports XML)
# NOTE By setting aci.path we ensure that Ansible displays accurate URL info when the plugin and the aci_rest module are used.
# Report failure
# APIC error
# NOTE A case when aci_rest is used with check mode and the apic host is used directly from the inventory
# Set changed to true so check_mode changed result is behaving similar to non aci_rest modules
# Only set proposed if we have a payload and thus also only allow output_path if we have a payload
# DELETE and GET do not have a payload
# Copyright: (c) 2025, Sabari Jaganathan <sajagana@cisco.com>
# Copyright: (c) 2025, Dev Sinha (@DevSinha13) <devsinh@cisco.com>
# Process leaves, and support dash-delimited leaves
# Users are likely to use integers for leaf IDs, which would raise an exception when using the join method
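The comment above (integer leaf IDs raising an exception in the join method) can be handled by coercing every ID to a string first; the helper name below is illustrative, not the module's own code:

```python
def join_leaves(leaves):
    # "-".join() raises TypeError on non-string items, so coerce each
    # leaf ID to str before building the dash-delimited form
    return "-".join(str(leaf) for leaf in leaves)
```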
# rspathL2OutAtt-[topology/pod-1/protpaths-101-102/pathep-[L2o2_n7]]
# Copyright: (c) 2020, Tim Cragg <tcragg@cisco.com>
# TODO EXAMPLES
# Remove duplicate subnets
# Validate if existing and remove subnet objects when the config does not match the provided config.
# Create a new Snapshot
# Query for job information and add to results
# Prefix the proper url to export_policy
# Build POST request to used to remove Snapshot
# Mark Snapshot for Deletion
# WARNING: interface_selector accepts non-existent interface_profile names, and they appear in the APIC GUI with a state of "missing-target"
# Not required for querying all contracts
# NOTE: Since this module needs to include both infra:AccBndlGrp (for PC and VPC) and infra:AccPortGrp (for leaf access port policy group):
# NOTE: The user(s) can make the choice between (link(PC), node(VPC), leaf(leaf-access port policy group))
# Reset for target_filter
# Add infraRsattEntP binding only when aep is defined
# Add infraRsSynceEthIfPol/infraRsSynceEthIfPolBndlGrp binding only when sync_e_interface_policy is defined
# Add the children only when lag_type == leaf (Leaf Interface specific policies).
# Add infraRsOpticsIfPol binding only when transceiver_policy was defined
# Build child_configs dynamically
# Add infraRsAccBaseGrp only when policy_group was defined
# Copyright: (c) 2020, Dag Wieers (@dagwieers)
# Copyright: (c) 2020, sig9org (@sig9org)
# TODO: change 'deploy_immediacy' to 'resolution_immediacy' (as seen in aci_epg_to_domain)?
# Process leafs, and support dash-delimited leafs
# Process extpaths, and support dash-delimited extpaths
# Users are likely to use integers for extpaths IDs, which would raise an exception when using the join method
# Excluding below classes from the module:
# fvProtoAttr:
# fvUsegBDCont:
# Copyright: (c) 2022, Akini Ross (@akinross)
# Validate if existing and remove child objects when the config does not match the provided config.
# Copyright: (c) 2021, Marcel Zehnder (@maercu)
# Copyright: (c) 2025, Eric Girard @netgirard
# Delete existing aaaPwdStrengthProfile if enable is set to false and it exists
# This is done for setting the correct output for changed state
# Copyright: (c) 2025, Faiz Mohammad (@Ziaf007) <faizmoh@cisco.com>
# Required for querying, as the provider classes for INB and OOB are different
# validate that dst_port is not passed with dst_port_end or dst_port_start
# validate that source_port is not passed with source_port_end or source_port_start
# Sub Port ID - 0 is default value
# To handle the existing object property
# l3extRsProvLblDef, bgpDomainIdAllocator are auto-generated classes, added for query output
# Inbound route-map is removed when input is different or an empty string, otherwise ignored.
# default to both because of backwards compatibility and for determining which config to push
# named directives instead of log/directive for readability of code; aliases and the input "none" are kept for backwards compatibility
# "none" is kept because of backwards compatibility; it could be deleted to keep only None
# start logic to be consistent with GUI to only allow both direction or a one-way connection
# end logic to be consistent with GUI to only allow both direction or a one-way connection
# dict unpacking with **base_subject_dict raises a syntax error in Python 2.7, thus the dict lookup
# filter the output of current/previous to tnVzFilterName only, since existing consists of the full vzInTerm/vzOutTerm
# pass function to
# Copyright: (c) 2023, Dag Wieers (@dagwieers)
# This dict contains the name of the child classes as well as the corresponding attribute input (and attribute name if the input is a string)
# this dict is deviating from normal child classes list structure in order to determine which child classes should be created, modified, deleted or ignored.
# This condition deals with child classes which do not exist in APIC version 4.2 and prior.
# This condition enables the user to keep their previous configuration if they are not passing anything in the payload.
# This condition checks if the attribute input is a dict and checks if all of its values are None (stored as a boolean in only_none).
# This condition checks if the child object needs to be deleted depending on the type of the corresponding attribute input (bool, str, dict).
# This condition checks if the child object needs to be modified or created depending on the type of the corresponding attribute input.
# ESG Admin State
# ESG VRF name
# Intra ESG Isolation
# Preferred Group Member
# VRF Selection - fvRsScope
# Copyright: (c) 2021, Shreyas Srish <ssrish@cisco.com>
# Order of control string must match ACI return value for idempotency
# start logic to be consistent with GUI to only allow both direction or one-way
# end logic to be consistent with GUI to only allow both direction or one-way
# Copyright: (c) 2021, Cindy Zhao <cizhao@cisco.com>
# Copyright: (c) 2023, Abraham Mughal (@abmughal) abmughal@cisco.com
# Post configuration on infraGeneric (subclass_1) level instead of on
# infraRsFuncToEpg (subclass_2) level.
# The reason being that the MO "gen-default" (of class infraGeneric) does not
# exist until the first EPG to AEP association is created.
# Optional, only used for rollback preview
# Generate rollback comparison
# Handle APIC response
# Appending to child_config list not possible because of APIC Error 103: child (Rn) of class vnsRsLIfCtxToBD is already attached.
# A separate delete request to the dn of the vnsRsLIfCtxToBD is needed to remove the object prior to adding to child_configs.
# Copyright: (c) 2024, Akini Ross (@akinross)
# Copyright: (c) 2019, Vasily Prokopov (@vasilyprokopov)
# Copyright: (c) 2021, Tim Cragg (@timcragg)
# Copyright: (c) 2025, Dev Sinha (@DevSinha13)
# Copyright: (c) 2022, Lukas Holub (@lukasholub)
# Number of devices it will run against concurrently
# The number of minutes a process will be able to run (unlimited or dd:hh:mm:ss)
# The date the process will run YYYY-MM-DDTHH:MM:SS
# ethernet type is represented as regular, but that is not clear to the users
# the ACI default setting is an empty string, but that is not a good input value
# Only valid for APIC version 5.2+
# BGP Peer Prefix Policy is only configurable on the Infra BGP Peer Profile
# Only add bgp_password if it is set to handle changed status properly because password is not part of existing config
# NOTE: Since this module needs to include both infra:FcAccPortGrp (for FC) and infra:FcAccBndlGrp (for FC PC):
# NOTE: The user(s) can make a choice between (port(FC), port_channel(FC PC))
# Add infraRsFcLagPol binding only when port_channel_policy is defined
# Appending to child_config list not possible because of APIC Error 182: Multiple fallback routes not allowed in one group.
# A separate delete request to the dn of the fvFBRoute is needed to remove the object prior to adding to child_configs.
# Copyright: (c) 2024, David Neilan (@dneilan-intel) <david.neilan@intel.com>
# encap is set to unknown when not provided to ensure change is executed and detected properly on update
# tDn is set to "" when not provided to ensure change is executed and detected properly on update
# Commented out the validation code to avoid making an additional API request; this case is handled by the APIC error
# Keeping for informational purposes
# Validate drop_packets are set on parent correctly
# if aci.api_call("GET", "{0}/rssrcGrpToFilterGrp.json".format(source_group_path)) != [] and drop_packets:
# Appending to child_config list not possible because of APIC Error 103: child (Rn) of class spanRsSrcToEpg is already attached.
# Appending to child_config list not possible because of APIC Error 103: child (Rn) of class spanRsSrcToL3extOut is already attached.
# Copyright: (c) 2020, Jacob McGill (@jmcgill298)
# check with child classes added on all versions
# mapping dicts are used to normalize the proposed data to what the APIC expects, which will keep diffs accurate
# ICMPv4 Types Mapping
# ICMPv6 Types Mapping
# This code is part of Ansible, but is an independent component
# Copyright: (c) 2019, Rob Huelga (@RobW3LGA)
# Beware, this is not the same as client_key !
# Beware, this is not the same as client_cert !
# error output
# aci_rest output
# get no verify flag
# get suppress previous flag
# Set Connection plugin
# Get the current working directory
# Perform signature-based authentication, no need to log on separately
# When we expect the value to be of type=bool
# If all else fails, escalate back to user
# Set protocol for further use
# Retain cookie for later use
# NOTE: ACI documentation incorrectly uses complete URL
# Check if we got a private key. This allows the use of vaulting the private key.
# NOTE: ACI documentation incorrectly adds a space between method and path
# Extract JSON API output
# Handle possible APIC error information
# NOTE: The XML-to-JSON conversion is using the "Cobra" convention
# Reformat as ACI does for JSON API output
# TODO: This could be designed to update existing keys
# TODO: This could be designed to accept multiple obj_classes and keys
# State is 'query'
# Query for a specific object in the module's class
# Append child_classes to filter_string if filter string is empty
# State is absent or present
# Query for all objects of the module's class (filter by properties)
# Query for all objects of the module's class
# mo is known
# Query for all objects of the module's class that match the provided ID value
# parent_obj is known
# Query for all object's of the module's class that belong to a specific parent object
# Query for specific object in the module's class
# Query for all objects of the module's class matching the provided ID value of the object
# Query for all objects of the module's class that belong to any parent class
# matching the provided ID value for the parent object
# root_obj is known
# Query for all objects of the module's class that belong to a specific root object
# NOTE: No need to select by root_filter
# self.update_qs({'query-target-filter': self.build_filter(root_class, root_filter)})
# mo and parent_obj are known
# matching the provided ID values for both object and parent object
# mo and root_obj are known
# Query for all objects of the module's class that match the provided ID value and belong to a specific root object
# TODO: Filter by parent_filter and obj_filter
# root_obj and parent_obj are known
# Query for all objects of the module's class that belong to a specific parent object
# NOTE: No need to select by parent_filter
# self.update_qs({'query-target-filter': self.build_filter(parent_class, parent_filter)})
# Query for a specific object of the module's class
# TODO: Add all missing cases
# TODO: Filter by sec_filter, parent and obj_filter
# NOTE: No need to select by sec_filter
# self.update_qs({'query-target-filter': self.build_filter(sec_class, sec_filter)})
# TODO: Filter by ter_filter, parent and obj_filter
# NOTE: No need to select by ter_filter
# self.update_qs({'query-target-filter': self.build_filter(ter_class, ter_filter)})
# TODO: Filter by quad_filter, parent and obj_filter
# NOTE: No need to select by quad_filter
# self.update_qs({'query-target-filter': self.build_filter(quad_class, quad_filter)})
# TODO: Filter by quin_filter, parent and obj_filter
# self.update_qs({'query-target-filter': self.build_filter(quin_class, quin_filter)})
# Sign and encode request as to APIC's wishes
# values are strings, so any diff between proposed and existing can be a straight replace
# add name back to config only if the configs do not match
# TODO: If URLs are built with the object's name, then we should be able to leave off adding the name back
# check for updates to child configs and update new config dictionary
# Loop through proposed child configs and compare against existing child configuration
# Update list of updated child configs only if the child config is different than what exists
# existing already equals the previous
# FIXME: Design causes issues for repeated child_classes
# get existing dictionary from the list of existing to use for comparison
# NOTE: This is an ugly fix
# Return the one that is a subset match
# add child objects to proposed
# self.result['path'] = self.path  # Adding 'path' in result causes state: absent in output
# if self.module._diff and self.original != self.existing:
# Return error information, if we have it
# Code generated by release_script GitHub action; DO NOT EDIT MANUALLY.
# Copyright: (c) 2018, Kevin Breit (@kbreit) <kevin.breit@kevinbreit.net>
# Standard files for documentation fragment
# common_keys between val1 and val2
# Only compare common keys between ls1 and ls2
# Ensure ls1[i] is also a dictionary
# If elements are not dictionaries, compare them directly
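The comments above about comparing only the common keys between two structures can be sketched as follows; the function name is illustrative, not the collection's actual helper:

```python
def compare_common_keys(current, requested):
    # Compare only the keys both dicts share; keys present on only one
    # side are ignored for the equality check
    common = set(current) & set(requested)
    return all(current[key] == requested[key] for key in common)
```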
# print("meraki_compare_equality", current_value, requested_value)
# print("requested_value is not None and current_value is None", False)
# print("requested_value is None", True)
# print("current_value", True)
# print("current_value == requested_value", current_value == requested_value)
# self.validate_response_schema = params.get("validate_response_schema")
# if params.get("meraki_debug") and LOGGING_IN_STANDARD:
# if not self.validate_response_schema and op_modifies:
# Add the host to the keyed groups
# Copyright: (c) 2019 Kevin Breit (@kbreit) <kevin.breit@kevinbreit.net>
# the AnsibleModule object will be our abstraction working with Ansible
# this includes instantiation, a couple of common attr would be the
# args/params passed to the execution, as well as if the module
# supports check mode
# Organization param check
# Network param check
# Assemble payload
# Create payload for organization
# Create payload for network
# Query settings for organization
# Query settings for network
# Set configuration for organization
# Set configuration for network
# in the event of a successful module execution, you will want to
# simple AnsibleModule.exit_json(), passing the key/value results
# sorted_servers = list()
# return sorted_servers
# Change all choices to lowercase for comparison
# if the user is working with this module in only check mode we do not
# want to make any changes to the environment, just return the current
# state with no modifications
# manipulate or modify the state as needed (this is going to be the
# part where your module will do what it needs to do)
# Validate roles
# Convert port numbers to string for idempotency checks
# Roles must be sorted since out of order responses may break idempotency check
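The two normalization steps above (stringified ports, sorted roles) can be sketched as one helper; the key names "ports" and "roles" and the function name are assumptions for illustration:

```python
def normalize_for_idempotency(config):
    # The API returns port numbers as strings and may return role lists
    # out of order, so normalize both before the idempotency comparison
    normalized = dict(config)
    if "ports" in normalized:
        normalized["ports"] = [str(port) for port in normalized["ports"]]
    if "roles" in normalized:
        normalized["roles"] = sorted(normalized["roles"])
    return normalized
```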
# Need to check to make sure servers are actually defined
# Sanitize roles for comparison
# TODO: Allow method to return actual item if True to reduce number of calls needed
# if meraki.params['dhcp_handling']:
# Create new VLAN
# Update existing VLAN
# Copyright: (c) 2020, Kevin Breit (@kbreit) <kevin.breit@kevinbreit.net>
# This may need to be reworked to avoid overwriting
# TODO: Does this code below have a purpose?
# elif snake == 'md5_authentication_key':
# execute checks for argument completeness
# Copyright: (c) 2018, 2019 Kevin Breit (@kbreit) <kevin.breit@kevinbreit.net>
# Network needs to be created
# Network exists, make changes
# Modify VLANs configuration
# Copyright: (c) 2022
# Marcin Woźniak (@y0rune) <y0rune@aol.com>
# Route exists, assign route_id
# Quick and simple check to avoid more processing
# Remove default rule for comparison
# There is only a single rule
# Append the default rule
# Ansible is setting the first item to be "None" so we need to clear this
# This happens when an empty list is provided to clear emails
# This happens when an empty list is provided to clear server IDs
# All data should be resubmitted, otherwise it will clear the alert
# Also, the order matters so it should go in the same order as current
# for alert in meraki.params["alerts"]:
# Copyright: (c) 2021, Tyler Christiansen (@supertylerc) <code@tylerc.me>
# Ansible spec for the 'five_ghz_settings' param, based on Meraki API.
# Ansible spec for the 'two_four_ghz_settings' param, based on Meraki API.
# Copyright: (c) 2019, Kevin Breit (@kbreit) <kevin.breit@kevinbreit.net>
# Name should be used to lookup number
# This will return True as long as there's an unclaimed SSID number!
# There are no available SSIDs or SSID numbers
# Copyright: (c) 2022, Kevin Breit (@kbreit) <kevin.breit@kevinbreit.net>
# meraki.fail_json(msg="Compare", have=have, payload=payload)
# Copyright: (c) 2022 Kevin Breit (@kbreit) <kevin.breit@kevinbreit.net>
# Check for argument completeness
# assign and lookup stack_id
# meraki.fail_json(msg=comparable)
# meraki.fail_json(msg=rules)
# This code is disgusting, rework it at some point
# Query a single webhook
# meraki.fail_json(msg=meraki.result)
# New webhook needs to be created
# Need to update
# Not all fields are included, so this needs to be checked to avoid comparing the same thing
# Make sure it is downloaded
# Test to see if it exists
# check for argument completeness
# Output only applications
# Detect if no rules are given, special case
# Conditionally wrapping parameters in rules makes them comparable
# meraki.fail_json(msg=str(org_id), data=data, oid0=data[0]['id'], oid1=data[1]['id'])
# meraki.fail_json(msg='o', data=o['id'], type=str(type(o['id'])))
# Query by organization name
# Query all organizations, no matter what
# Cloning
# Create new organization
# Update an existing organization
# Corner case for resetting
# Copyright: (c) 2021, Kevin Breit (@kbreit) <kevin.breit@kevinbreit.net>
# No payload is specified for an update
# Get all Action Batches
# Query one Action Batch job
# Create a new Action Batch job
# Cannot update the body once a job is submitted
# Job needs to be modified
# Idempotent response
# profile = get_profile(meraki, profiles, meraki.params['name'])
# Create a new RF profile
# Copyright: (c) 2022, Joshua Coronado (@joshuajcoronado) <joshua@coronado.io>
# if meraki.params['switch_profile_ports'] is not None:
# switch_profile_ports_args = dict(profile=dict(type='str'),
# switch_profile_ports=dict(type='list', default=None, elements='dict', options=switch_profile_ports_args),
# Lag ID is valid
# Need to convert to int for comparison later
# Delete the default rule for comparison
# Create copy of default rule
# meraki.fail_json(msg=data)
# Query one interface
# Query all interfaces
# Create a new interface
# Claim a device to an organization
# A device is assumed to be in an organization
# Device is in network, update
# Claim device into network
# return snake_dict_to_camel_dict(new_struct)
# config_template_id=dict(type='str', aliases=['id']),
# Bind template
# Network is already bound, being explicit
# Include to be explicit
# Delete template
# Unbind template
# No network is bound, nothing to do
# Copyright: (c) 2023 Kevin Breit (@kbreit) <kevin.breit@kevinbreit.net>
# Create new admin
# Update existing admin
# Return all admins for org
# Return a single admin for org
# if meraki.params['enabled'] is not None:
# adaptive_policy_group_id=dict(type=str),
# peer_sgt_capable=dict(type=bool),
# Backdoor way to set default without conflicting on access
# meraki.fail_json(msg='payload', payload=payload)
# Use a set to remove duplicate items
# Convert from list to comma separated
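The two comments above (set deduplication, then comma-separated output) amount to a one-liner; the sort is an added assumption to keep the output deterministic:

```python
def to_unique_csv(items):
    # Deduplicate with a set, then join into a comma-separated string;
    # sorting keeps the result stable across runs
    return ",".join(sorted(set(items)))
```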
# Exceptions need to be made for idempotency check based on how Meraki returns
# VLAN needs to be specified in access ports, but can't default to it
# Check voiceVlan to see if state is absent to remove the vlan.
# Evaluate the Sticky Limit, whether it was passed in or is currently configured and returned in the GET call.
# API shouldn't include voice VLAN on a trunk port
# meraki.fail_json(msg='Compare', original=original, payload=payload)
# Need to add service as it's not in payload
# ("serial", "serial"),
# review it
# if isinstance(items, dict):
# result = get_dict_result(items, 'name', name)
# ("networkId", "networkId"),
# Validate if this
# rate limiting statistics
# If URLs need to be modified or added for specific purposes, use .update() on the url_catalog dictionary
# Used to retrieve only one item
# Module should add URLs which are required by the module
# TODO: This should be removed as org_name isn't always required
# self.module.mutually_exclusive = [('org_id', 'org_name'),
# new = {snake_k: data[k]}
# self.fail_json(msg="Keys", ignored_keys=self.ignored_keys, optional=optional_ignore)
# Normally for passing a list instead of a dict
# self.fail_json(msg='ogs', orgs=orgs)
# self.fail_json(msg=i['id'])
# This should return just the URL for <url>
# retry-after isn't returned for over 10 concurrent connections per IP
# raise RateLimitException(e)
# Needs to reset in case of future retries
# Gather the body (resp) and header (info)
# This needs to be refactored as it's not very clean
# Looping process for pagination
# Non-pagination
# Best effort messages: some API output keys may not exist on some platforms
# Order should not matter but some versions of NX-OS software fail
# to process the payload properly if 'input' gets serialized before
# 'type' and the payload of 'input' contains the word 'type'.
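The serialization-order workaround described above can be sketched like this; the function name is hypothetical, and the sketch relies on Python 3.7+ dicts preserving insertion order when serialized by `json.dumps`:

```python
import json

def build_nxapi_input(command_type, command_input):
    # Insert 'type' before 'input' so the serialized payload keeps that
    # order; some NX-OS versions mis-handle payloads where 'input' is
    # serialized first and contains the word 'type'
    payload = {"type": command_type, "input": command_input}
    return json.dumps(payload)
```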
# update platform default stderr regexes to include modules specific ones
# not all NX-OS versions support `config replace`
# we let the device throw the invalid command error
# always reset terminal regexes to platform default
# set error regex for copy command
# do not change the ordering of this list
# set stdout regex for copy command to handle optional user prompts
# based on different match conditions
# always reset terminal regexes to default
# Match prompts ending in )# except those with (maint-mode)#
# if already at privilege level 15 return
# Idempotent case
# output = execute_show_command(command, self.module)[0].split("\n")
# Illegal first character. Name must start with a letter
# Step 0.0: Validate syntax of name and pwwn
# Step 0.1: Check DA status
# Step 1: Process distribute
# playbook has distribute as True(enabled)
# but switch distribute is disabled(false), so set it to
# true(enabled)
# playbook has distribute as False(disabled)
# but switch distribute is enabled(true), so set it to
# false(disabled)
# Check mode implemented at the da_add/da_remove stage
# Step 2: Process mode
# playbook has mode as basic
# but switch mode is enhanced, so set it to basic
# playbook has mode as enhanced
# but switch mode is basic, so set it to enhanced
# Check mode implemented at the end
# Step 3: Process da
# Step 5: Process rename
# Step END: check for 'check' mode
# If vni is already configured on vrf, unconfigure it first.
# Allow delay/retry for VRF changes
# For now we support only pwwn and device-alias under zone
# Ideally should use 'supported_choices'....but maybe next
# Step1: execute show zone status and get
# Process zone default zone options
# Process zone mode options
# Process zone smart-zone options
# Process zone member options
# TODO: Obviously this needs to be cleaned up properly, as there are a lot of if/else statements, which is bad
# Will take it up later because of time constraints
# Process zoneset member options
# Process zoneset activate options
# ensure that group-timeout command is configured last
# group-timeout will fail if igmp snooping is disabled
# returns en or dis
# the next block of code is used to retrieve anything with:
# ip igmp static-oif *** i.e.. could be route-map ROUTEMAP
# or PREFIX source <ip>, etc.
# add new prefix/sources
# remove stale prefix/sources
# not json serializable
# delta check for all params except oif_ps
# destination is reqd; src & vrf are optional
# delta is missing this param because it's idempotent;
# however another pkl command has changed; therefore
# explicitly add it to delta so that the cli retains it.
# retain existing pkl commands even if not in playbook
# 'default' is a reserved word for vrf
# Copyright 2019 Cisco and/or its affiliates.
# sparse is long-running on some platforms, process it last
# returns a dict
# returns a list
# bfd is a tri-state string: enable, disable, default
# certain configurable features do not
# show up in the output of "show feature"
# but appear in running-config when set
# On N35 A8 images, some features return a yes/no prompt
# on enablement or disablement. Bypass using terminal dont-ask
# frp = full_remote_path, flp = full_local_path
# Build copy command components that will be used to initiate copy from the nxos device.
# Create local file directory under NX-OS filesystem if
# local_file_directory playbook parameter is set.
# Note: This is the local file directory on the remote nxos device.
# { command: <str>, output: <str>, prompt: <str>, response: <str> }
# the nxos cli prevents this by rule, so capture it and display it
# check if the provided hashed password is in fact a hash
# This is reload smu/patch rpm
# This is smu/patch rpm
# <value> may be 'n.n.n.n/s', 'none', or 'default'
# Remove rsvd SSM range
# no cmd needs a value but the actual value does not matter
# SSM syntax check
# Output options are 'text' or 'json'
# Check for server errors
# Check for errors and exit if found.
# Check for potentially transient conditions
# Check for messages indicating a successful upgrade.
# We get these messages when the upgrade is non-disruptive and
# we lose connection during the switchover, but are far enough along that
# we can be confident the upgrade succeeded.
# Begin normal parsing.
# Check to see if upgrade will be disruptive or non-disruptive and
# build dictionary of individual modules and their status.
# Sample Line:
# Module  bootable      Impact  Install-type  Reason
# ------  --------  ----------  ------------  ------
# Check to see if switch needs an upgrade and build a dictionary
# of individual modules and their individual upgrade status.
# Module  Image  Running-Version(pri:alt)    New-Version  Upg-Required
# ------  -----  ----------------------------------------  ------------
# 8       lcn9k                7.0(3)F3(2)    7.0(3)F2(2)           yes
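The table described above (per-module upgrade status from the sample rows) can be parsed with a minimal sketch; the function name and dict layout are illustrative, and the sketch assumes a single version string per column (no `pri:alt` pairs):

```python
def parse_module_upgrade_table(output):
    # Turn rows like
    #   8  lcn9k  7.0(3)F3(2)  7.0(3)F2(2)  yes
    # into a dict keyed by module number; header and separator rows are
    # skipped because their first column is not numeric
    modules = {}
    for line in output.splitlines():
        parts = line.split()
        if len(parts) == 5 and parts[0].isdigit():
            module, image, running, new, required = parts
            modules[module] = {
                "image": image,
                "running": running,
                "new": new,
                "upgrade_needed": required.lower() == "yes",
            }
    return modules
```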
# Transport cli returns a list containing one result item.
# Transport nxapi returns a list containing two items.  The second item
# contains the data we are interested in.
# Further processing may be needed for result_data
# We encountered a backend processing error for nxapi
# Different NX-OS platforms behave differently for
# disruptive and non-disruptive upgrade paths.
# 1) Combined kickstart/system image:
# 2) Separate kickstart + system images.
# The force option is not available for the impact command.
# Call parse_show_data on empty string to create the default upgrade
# data structure dictionary
# Process System Image
# Process Kickstart Image
# If an error is encountered when issu is 'desired' then try again
# but set issu to 'no'
# The system may be busy from the previous call to check_mode so loop
# until it's done.
# We encountered an unrecoverable error in the attempt to get upgrade
# impact data from the 'show install all impact' command.
# Fallback to legacy method.
# If we are upgrading from a device running a separate kickstart and
# system image the impact command will fail.
# Check mode set in the playbook so just return the impact data.
# Check mode discovered an error so return with this info.
# The switch is already upgraded.  Nothing more to do.
# If we get here, check_mode returned no errors and the switch
# needs to be upgraded.
# Check mode indicated that ISSU is not possible so issue the
# upgrade command without the non-disruptive flag unless the
# playbook specified issu: yes/required.
# The system may be busy from the call to check_mode so loop until
# it's done.
# Not all platforms support the 'force' keyword.  Check for this
# condition and re-try without the 'force' keyword if needed.
# Special case:  If we encounter a server error at this stage
# it means the command was sent and the upgrade was started but
# we will need to use the impact data instead of the current install
# Get system_image_file(sif), kickstart_image_file(kif) and
# issu settings from module params.
# This will enforce better practice with md5 and hsrp version.
# validate IP
# { name: <str>, vrf: <str> }
# {name: <str>, vrf: <str> }
# ignore default group-list
# if vtp_password is not set, some devices return '\\' or the string 'None'
# 4094/4079 vsan is always present
# Negative case:
# For fcip,port-channel,vfc-port-channel need to remove the
# extra space to compare
# platform CLI needs the keywords in the following order
# There are two possible outcomes when nxapi is disabled on nxos platforms.
# 1. Nothing is displayed in the running config.
# 2. The 'no feature nxapi' command is displayed in the running config.
# when file_pull is enabled, the file_pull_timeout and connect_ssh_port options
# will override persistent_command_timeout and port
# this has been kept for backwards compatibility till these options are removed
# if file_pull_timeout is explicitly set, use that
# if file_pull_timeout is not set and command_timeout < 300s, bump to 300s.
# Copyright: (c) 2017, Red Hat Inc.
# Create a list of supported commands based on ref keys
# TBD: add this method logic to get_capabilities() after those methods
# Supported Platforms: N3K,N5K,N6K,N7K,N9K,N3K-F,N9K-F
# Fretta Platform
# Remove excluded commands (no platform support for command)
# Update platform-specific settings for each item in ref
# CLI may be feature disabled
# tuple to list
# Handle config strings that nvgen with the 'no' prefix.
# Example match behavior:
# When pattern is: '(no )*foo *(\S+)*$' AND
# Process any additional context that this property might require.
# 1) Global context from NxosCmdRef _template.
# 2) Context passed in using context arg.
# Last key in context is the resource key
# Add context to proposed if state is present
# Add additional command context if needed.
# We need to remove the last item in context for state absent case.
# Walk each cmd in ref, use cmd pattern to discover existing cmds
# The getval pattern should contain regex named group keys that
# match up with the setval named placeholder keys; e.g.
# Resource module builder packs playvals under 'config' key
# Normalize each value
# The setval pattern should contain placeholder keys that
# match up with the getval regex named group keys; e.g.
# Commands may require parent commands for proper context.
# Global _template context is replaced by parameter context
# '_proposed' may be empty list or contain initializations; e.g. ['feature foo']
# Create a list of commands that have playbook values
# Compare against current state
# Create playval copy to avoid RuntimeError
# Multiple Instances:
# Remove values set to string 'None' from dvalue
# Single Instance:
# Remove any duplicate commands before returning.
# pylint: disable=unnecessary-lambda
# intf 'switchport' cli is not present so use the user-system-default
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#!/usr/bin/python
# **TBD**
# N7K EOL/legacy image 6.2 does not support show vlan | json output.
# If support is still required for this image then:
# - Wrap the json calls below in a try/except
# - On exception, use a helper method to parse the run_cfg_output,
# Use structured for most of the vlan parameter states.
# This data is consistent across the supported nxos platforms.
# Not all devices support | json-pretty, but it is a workaround for
# libssh issue https://github.com/ansible/pylibssh/issues/208
# When json-pretty is not supported, we fall back to | json
# Raw cli config is needed for mapped_vni, which is not included in structured.
# Create a single dictionary from all data sources
# name: 'VLAN000x' (default name) or custom name
# mode: 'ce-vlan' or 'fabricpath-vlan'
# enabled: shutdown, noshutdown
# state: active, suspend
# non-structured data
# device output may be string, convert to list
# SAMPLE: {"TABLE_vlanbriefid": {"ROW_vlanbriefid": {
# SAMPLE: "TABLE_mtuinfoid": {"ROW_mtuinfoid": {
# vlanbrief is not a list when only one vlan is found.
# split out any per-vlan cli config
# Create a list of vlan dicts where each dict contains vlanbrief,
# mtuinfo, and non-structured running-config data for one vlan.
# Sample match lines
# 202\n  name Production-Segment-100101\n  vn-segment 100101
# 5\n  state suspend\n  shutdown\n  name test-changeme\n  vn-segment 942
# parse native config using the Fc_interfaces template
# - populate only fc interfaces
# - populate "analytics" value based on the presence or absence of "analytics_nvme" or "analytics_scsi" keys
# - dummy keys "m" and "p" are added for sorting and removed after sorting
# match only fc interface
# and re.search(r'lldp', resource):
# for parsed state only
# pre-sort lists of dictionaries
# Some of the bfd attributes
# 'bfd'/'bfd echo' do not nvgen when enabled thus set to 'enable' when None.
# 'bfd' is not supported on some platforms
# parse native config using the L3_interfaces template
# pop top-level keys and assign values to them
# a template peer <> line has to precede the AF line
# it could be a.b.c.d or a.b.c.d/x or a.b.c.d/32
# or 'host' in option:
# pre-sort list of dictionaries
# Process allowed_vlans
# For fact gathering, module state should be 'present' when using
# NxosCmdRef to query state
# Get Telemetry Global Data
# Get Telemetry Destination Group Data
# Get Telemetry Sensorgroup Group Data
# Get Telemetry Subscription Data
# params = utils.validate_config(self.argument_spec, {'config': objs})
# Walk the argspec and cmd_ref objects and build out config dict.
# remove neighbor AF entries
# sort list of dictionaries
# we entered a neighbor context which
# also has address-family lines
# get the word after 'lldp'
# get the nested-dict/value for that param
# nested dicts
# config[tlv_select][system][name]=False
# config[tlv_select][dcbxp]=False
# config[tlv_select][management_address][v6]=False
# config[reinit]=4
# parse native config using the Bgp_neighbor_address_family template
# this is the "router bgp <asn>" line
# reattempt with | json
# Gets the interface data
# Remove null key
# No neighbors found as the TABLE_nbor key is missing and return empty dict
# in some N7Ks the key has been renamed
# if there are no neighbors the show command returns
# ERROR: No neighbour information
# facts from nxos_facts 2.1
# Some virtual images don't actually report faninfo. In this case, move on and
# just return an empty list.
# {TABLE,ROW}_psinfo keys have been renamed to
# {TABLE,ROW}_ps_info in later NX-OS releases
# this is only needed for keys that are valid for both global
# Telemetry Command Reference File
# Diff the want and have
# Remove merge items from diff; what's left will be used to
# remove states not specified in the playbook
# Remove default states from resulting diff
# merged_cmds: 'want' cmds to update 'have' states that don't match
# replaced_cmds: remaining 'have' cmds that need to be reset to default
# Remaining diff items are used to reset states to default
# overridden behavior is the same as replaced except for scope.
# Add wanted vlans that don't exist on the device yet
# all 'no ' commands must be executed first to avoid NXOS command incompatibility errors
# remove superfluous entries from have
# need to explicitly remove existing entry to correctly apply new one
# default CLI behaviour is to 'merge' with existing entry
# negate existing config so that want is not appended
# in case of replaced or overridden
# compare non list attributes directly
# compare list attributes directly
# remove extra ip or track
# adds hsrp [number]
# remove group via numbers
# handle the deprecated attribute, only applicable to want
# this ensures that if user sets `enable: True` for a trap
# all suboptions for that trap are set to True
# remove superfluous items
# pop aliased keys to preserve idempotence
# remove existing config else it gets appended
# self.commands = afi_cmds + self.commands
# exclude_params = []
# Some platforms do not support the 'bfd' interface keyword;
# remove the 'bfd' key from each want/have interface.
# Clean up bfd attrs for any interfaces not listed in the play
# Let the 'want' loop handle all vals for this interface
# Update any want attrs if needed. The overridden state considers
# the play as the source of truth for the entire device, therefore
# set any unspecified attrs to their default state.
# 'bfd' and 'bfd echo' are enabled by default so the handling is
# counter-intuitive; we are enabling them to remove them. The end result
# is that they are removed from the interface config on the device.
# Set dict None values to default states
# "unpack" send_community
# attach top-level keys with their values
# port_pro_num = list(protocol.keys())
# convert number to name
# key could be eq,gt,lt,neq or range
# creates new ACL in replaced state
# acl in want exists in have
# no acls given in want, so delete all have acls
# want_afi is not present in have
# if afi is not in want
# and have != want:
# if want['afi'] not in have, ignore
# only sequence number is specified to be deleted
# when want['ace'] does not have seq number
# only name given
# 'only afi is given'
# case 1 --> sequence number not given in want --> new ace
# case 2 --> new sequence number in want --> new ace
# case 3 --> existing sequence number given --> update rule (only for merged state)
# case 1
# case 2
# case 3
# merged will never negate commands
# 'have' has ACL defined without any ACE
# empty out want (in case something was specified)
# some items are populated later on for correct removal
# pre-process `event.x.y` keys
# set have to True to mimic default state
# this allows negate commands to be issued
# if want is missing and have is negated
# set want to True in order to revert to default state
# this ensures that updates are done
# with correct `state`
# set default states for keys that appear in negated form
# remove have config for hosts
# else want gets appended
# Convert VLAN lists to sets for easier comparison
# VLANs to be added (present in want, not in have)
# VLANs to be removed (present in have, not in want or not in add list)
# if want is none, remove all vlans
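The set arithmetic described in the comments above can be sketched as follows. This is a minimal standalone sketch; `vlan_changes` is a hypothetical helper, not the module's actual function:

```python
def vlan_changes(want, have):
    """Compute VLAN IDs to add and remove using set arithmetic."""
    want_set, have_set = set(want or []), set(have)
    # VLANs to be added (present in want, not in have)
    to_add = want_set - have_set
    # VLANs to be removed (present in have, not in want);
    # if want is None/empty, remove all existing VLANs
    to_remove = have_set - want_set if want else have_set
    return sorted(to_add), sorted(to_remove)
```

Converting both lists to sets makes the add/remove computation a pair of O(n) difference operations instead of nested list scans.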
# When state is 'deleted', the module_params should not contain data
# under the 'config' key
# Normalize interface name.
# The deleted case is very simple since we purge all telemetry config
# and does not require any processing using NxosCmdRef objects.
# Save off module params
# Build Telemetry Global NxosCmdRef Object
# Only build the NxosCmdRef object for the td['name'] module parameters.
# Sensor group path setting can contain optional values.
# Call get_setval_path helper function to process any
# optional setval keys.
# Build Telemetry Destination Group NxosCmdRef Objects
# Build Telemetry Sensor Group NxosCmdRef Objects
# Build Telemetry Subscription NxosCmdRef Objects
# Order matters for state replaced.
# First remove all subscriptions, followed by sensor-groups and destination-groups.
# Second add all destination-groups, followed by sensor-groups and subscriptions
# Process Telemetry Global Want and Have Values
# Possible states:
# - want and have are (set) (equal: no action, not equal: replace with want)
# - want (set) have (not set) (add want)
# - want (not set) have (set) (delete have)
# - want (not set) have (not set) (no action)
# global_ctx = ref['tms_global']._ref['_template']['context']
# property_ctx = ref['tms_global']._ref['certificate'].get('context')
# setval = ref['tms_global']._ref['certificate']['setval']
# If all destination profile commands are being removed then just
# remove the config context instead.
# Process Telemetry destination_group, sensor_group and subscription Want and Have Values
# want not and have not set so delete have
# want and have are set.
# process wants:
# Want resource key not in have resource key so add it
# Want resource key exists in have resource keys but we need to
# inspect the individual items under the resource key
# for differences
# item wanted but does not exist so add it
# process haves:
# Want resource key is not in have resource keys so remove it
# Have resource key exists in want resource keys but we need to
# have item not wanted so remove it
# this key needs to be compared separately and
# popped from `authentication` dict to
# preserve idempotence for other keys in this dict
# this ensures that the "no" form of "ip ospf passive-interface"
# command is executed even when there is no existing config
# compare top-level `multi_areas` config
# remove superfluous top-level `multi_areas` config
# compare config->address_family->processes
# add and update config->address_family->processes
# compare config->address_family->processes->area
# compare config->address_family->processes->multi_areas
# remove superfluous processes->multi_areas config
# remove superfluous config->address_family->processes
# remove config->address_family->processes->area
# if state is overridden or deleted, remove superfluous config
# handle vrf->af
# add VRF command at correct position once
# afi should always be present
# safi and vrf are optional
# a combination of these 3 uniquely
# identifies an AF context
# transform parameters which are
# list of dicts to dict of dicts
# transform all entries under
# config->address_family to dict of dicts
# group AFs by VRFs
# vrf_ denotes global AFs
# populate global AFs
# populate VRF AFs
# final structure: https://gist.github.com/NilashishC/628dae5fe39a4908e87c9e833bfbe57d
# first conditional is for deleted with config provided
# second conditional is for overridden
# third condition is for deleted with empty config
# mode = on will not be displayed in running-config
# force does not appear in config
# will only be applied for a net new member
# merge
# for merged, replaced vals need to be checked to add 'no'
# swap `both` and `set` for idempotence
# remove remaining items in have
# no need to fetch facts for rendered
# no need to fetch facts for parsed
# whatever in have is not in want
# only name is given to delete
# and 'access_groups' in have_name.keys():
# For parsed/rendered state, we assume defaults
# Handle the 'enabled' state separately
# Handle the 'mode' state separately
# Layer 2/3 mode defaults
# Interface enabled state defaults
# Filter out mgmt0 interface
# VRF parsers = 29
# end VRF parsers
# we fail early if state is merged or
# replaced and want ASN != have ASN
# neighbors have separate contexts in NX-OS
# cleanup remaining neighbors
# Check if both 'local_as' and 'local_as_config' are in the dictionary
# Move 'as_number' from 'local_as' to 'local_as_config'
# handle deprecated local_as with local_as_config
# we are inspecting neighbor within a VRF
# if the given neighbor has AF we return True
# we are inspecting VRF as a whole
# if there is at least one neighbor
# with AF or VRF has AF itself return True
# we are inspecting top level neighbors
# start neighbor parsers
# end neighbor parsers
# VRF only
# start template AF parsers
# in some cases, the `logging level` command
# has an extra space at the end
# match ip multicast source 192.1.2.0/24 group-range 239.0.0.1 to 239.255.255.255 rp 209.165.201.0/27 rp-type Bidir
# "setval": "analytics type {{ analytics|string }}",
# cannot use dict comprehension yet
# since we still test with Python 2.6
# Remove None values
# Remove any duplicate commands.
# Massage non global into a data structure that is indexed by id and
# normalized for destination_groups, sensor_groups and subscriptions.
# Copyright (c) 2019, René Moser <mail@renemoser.net>
# Copyright: (c) 2018, Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
# Get list of servers from cloudscale.ch API
# Merge servers with the same name
# Add servers to inventory
# Two servers with the same name exist, create a group
# with this name and add the servers by UUID
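The duplicate-name handling described above can be sketched like this. `group_by_name` is a hypothetical helper, assuming each API server dict carries `name` and `uuid` keys:

```python
from collections import defaultdict

def group_by_name(servers):
    """Group server UUIDs by name so duplicate names can be
    addressed unambiguously by UUID."""
    groups = defaultdict(list)
    for server in servers:
        groups[server["name"]].append(server["uuid"])
    return groups
```

Any name mapping to more than one UUID indicates a duplicate, for which an inventory group named after the server can be created.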
# Set variables
# Set composed variables
# Add host to composed groups
# Add host to keyed groups
# Copyright (c) 2020, René Moser <mail@renemoser.net>
# Copyright: (c) 2023, Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
# Copyright: (c) 2023, Kenneth Joss <kenneth.joss@cloudscale.ch>
# Either search by given health monitor's UUID or
# search the health monitor by its associated pool UUID (1:1)
# Fail on more than one resource with identical name
# Refresh if resource was updated in live mode
# Copyright: (c) 2025, Ciril Troxler <ciril.troxler@cloudscale.ch>
# Copyright (c) 2020, René Moser <rene.moser@cloudscale.ch>
# Skip networks in other zones
# We might have found more than one network with identical name
# For consistency, take a minimal network stub, but also include zone
# Resets to default values by the API
# No need to reset if user set the param anyway.
# Query by UUID
# network id case
# Query by name
# Resource has no name field, we use a defined tag as name
# Skip the resource if constraints are not given, e.g. for floating_ip the ip_version differs
# Copyright (c) 2017, Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
# Fail when missing params for creation
# Copyright: (c) 2017, Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
# Copyright: (c) 2019, René Moser <mail@renemoser.net>
# Initialize server dictionary
# Timeout succeeded
# Set the diff output
# Either the server is stopped or change is forced
# Response is 204: No Content
# State changes to "changing" after update, waiting for stopped/running
# Remember the names found
# Names are not unique, verify if name already found in previous iterations
# The API doesn't support to update server groups.
# Show a warning to the user if the desired state does not match.
# Remove interface properties that were not filled out by the user
# Compare the interfaces as specified by the user, with the interfaces
# as received by the API. The structures are somewhat different, so
# they need to be evaluated in detail
# If the target state is stopped, stop before a potential update so that force is not required
# First, find the interface that belongs to the spec
# If we have a public network, only look for the right type
# If we have a private network, check the network's UUID
# If we only have an addresses block, match all subnet UUIDs
# looped through everything without match
# Fail if any of the addresses don't match
# Unspecified, skip
# If the wanted address is an empty list, but the actual list is
# not, the user wants to remove automatically set addresses
# If there is any interface that does not match, clearly not all
# wanted interfaces are present
# Copyright (c) 2018, Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
# TODO remove in version 3.0.0
# Copyright (c) 2021, Ciril Troxler <ciril.troxler@cloudscale.ch>
# Create a stub image from the import
# Even failed image imports are reported as present. This then
# represents a failed import resource.
# These fields are not present on the import, assume they are
# unchanged from the module parameters
# This method can be replaced by calling AnsibleCloudscaleBase._get form
# AnsibleCloudscaleCustomImage._get once the API bug is fixed.
# Return None to be compatible with AnsibleCloudscaleBase._get
# Workaround a bug in the cloudscale.ch API which wrongly returns
# 500 instead of 404
# Split api_call into components
# If the api_call does not contain the API URL
# Fetch image(s) from the regular API endpoint
# Additionally fetch image(s) from the image import API endpoint
# No image was found
# Convert single image responses (call with UUID) into a list
# Transform lists into UUID keyed dicts
# Filter the import list so that successful and in_progress imports
# shadow failed imports
# Only add failed imports if no import with the same name exists
# Only add the last failed import in the list (there is no timestamp on
# imports)
# Merge import list into image list
# Merge additional fields only present on the import
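The UUID-keyed transform and merge described above can be sketched as follows. `by_uuid` and `merge_imports` are hypothetical helpers; the real module additionally filters failed imports before merging:

```python
def by_uuid(resources):
    """Transform a list of API resource dicts into a dict keyed by UUID."""
    return {r["uuid"]: r for r in resources}

def merge_imports(images, imports):
    """Merge the import list into the image list; fields only present
    on an import are copied onto the matching image, and imports with
    no matching image become stub entries."""
    merged = by_uuid(images)
    for uuid, imp in by_uuid(imports).items():
        merged.setdefault(uuid, {}).update(imp)
    return merged
```

Keying both lists by UUID first makes the merge a dict update per entry instead of a quadratic list search.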
# Only new image imports are supported, no direct POST call to image
# resources are supported by the API
# Custom image imports use a different endpoint
# If the module passes the firmware_type argument,
# and the module argument and API response are not the same for
# argument firmware_type.
# Custom error if the module tries to change the firmware_type.
# If this is a failed upload and the URL changed or the "force_retry"
# parameter is used, create a new image import.
# This helps with tags when we have the full API resource href to update.
# Sanitize data dictionary
# Deepcopy: Duplicate the data object for iteration, because
# iterating an object and changing it at the same time is insecure
# api_call might be full href already
# The identifier key of the resource, usually 'uuid'
# The API resource e.g server-group
# Resource has no name field but tags, we use a defined tag as name
# Constraint Keys to match when query by name
# Fail if UUID/ID was provided but the resource was not found on state=present.
# Timeout reached
# If it looks like a stub
# Transform the name tag to a name field
# Get the environment variable
# Construct the path based on flavour
# Run the command with the constructed path
# (c) 2020 CyberArk Software Ltd. All rights reserved.
# pylint: disable=too-many-lines
# ************* REQUEST VALUES *************
# Get the telemetry header
# Normalize line endings
# If cert_content is invalid or missing, fall back to cert_file
# If both cert_content and cert_file are missing or invalid, raise an error
# Load configuration and return as dictionary if file is present on file system
# raise AnsibleError('Conjur configuration file `{conf_path}` was not found on the controlling host')
# Load identity and return as dictionary if file is present on file system
# raise AnsibleError(f'Conjur identity file `{identity_path}` was not found on the controlling host')
# Merge multiple dictionaries by using dict.update mechanism
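A minimal sketch of the dict.update-based merge mentioned above (`merge_dicts` is a hypothetical name; later dicts win on key conflicts, mirroring `dict.update` semantics):

```python
def merge_dicts(*dicts):
    """Merge dictionaries left to right; later values override earlier ones."""
    merged = {}
    for d in dicts:
        merged.update(d)
    return merged
```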
# The `quote` method's default value for `safe` is '/' so it doesn't encode slashes
# into "%2F" which is what the Conjur server expects. Thus, we need to use this
# method with no safe characters. We can't use the method `quote_plus` (which encodes
# slashes correctly) because it encodes spaces into the character '+' instead of "%20"
# as expected by the Conjur server
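A minimal illustration of the encoding difference described above, using only standard-library behavior (not Conjur-specific code):

```python
from urllib.parse import quote, quote_plus

secret_id = "prod/db password"

# quote() defaults to safe='/', leaving slashes unencoded; pass safe=''
# so slashes become %2F and spaces become %20, as the server expects.
encoded = quote(secret_id, safe="")
# quote_plus() would encode the slash correctly but turn spaces into '+'.
wrong = quote_plus(secret_id)
```

Here `encoded` is `"prod%2Fdb%20password"` while `wrong` is `"prod%2Fdb+password"`.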
# Prepare the telemetry header
# Prepare the telemetry value
# Encode to base64
# Use credentials to retrieve temporary authorization token
# Prepare headers
# Retrieve Conjur variable using the temporary token
# Fetch a token from an Azure VM, Function, or App, then authenticate with Conjur for an access token
# Parse JSON response
# pylint: disable=too-many-locals,missing-function-docstring,too-many-branches,too-many-statements
# We should register the variables as LookupModule options.
# Doing this has some nice advantages if we're considering supporting
# a set of Ansible variables that could sometimes replace environment
# variables.
# Registering the variables as options forces them to adhere to the
# behavior described in the DOCUMENTATION variable. An option can have
# both a Ansible variable and environment variable source, which means
# Ansible will do some juggling on our behalf.
# Create the empty dict we'll return later
# This regex separates the string into the CEF header and the extension
# data.  Once we do this, it's easier to use other regexes to parse each
# part.
# Split the header on the "|" char.  Uses a negative lookbehind
# assertion to ensure we don't accidentally split on escaped chars,
# though.
# If the input entry had any blanks in the required headers, that's wrong
# and we should return.  Note we explicitly don't check the last item in the
# split list because the header ends in a '|' which means the last item
# will always be an empty string (it doesn't exist, but the delimiter does).
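The escape-aware split described above can be sketched with a negative-lookbehind pattern. This is an illustrative regex, not necessarily the module's exact one:

```python
import re

header = "CEF:0|Vendor|Product|1.0|100|Pipe \\| inside|5|"

# Split on '|' only when it is not preceded by a backslash, so escaped
# pipes inside field values survive the split intact.
fields = re.split(r"(?<!\\)\|", header)
```

Because the header ends in a `|`, the last element of `fields` is always an empty string, which is why the last item is skipped during validation.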
# Since these values are set by their position in the header, it's
# easy to know which is which.
# The first value is actually the CEF version, formatted like
# "CEF:#".  Ignore anything before that (like a date from a syslog message).
# We then split on the colon and use the second value as the
# version number.
# The ugly, gnarly regex here finds a single key=value pair,
# taking into account multiple whitespaces, escaped '=' and '|'
# chars.  It returns an iterator of tuples.
# Split the tuples and put them into the dictionary
# Process custom field labels
# If the key string ends with Label, replace it in the appropriate
# custom field
# Find the corresponding customfield and replace with the label
# return None if our regex had no output
# Now we're done!
# Syslog event data received, and processed for EDA
# if not CEF, we will try JSON load of the text from first curly brace
# Getting parameters from module
# connection_number = module.params["connection_number"]
# if in check mode it will not perform password changes
# Defining initial values for open_url call
# Logon Action
# Different end_points based on the use of desired method of auth
# The payload will contain username, password
# and optionally use_radius_authentication and new_password
# COMMENT: unclear what this is for; the old API did not appear to have this field
# if connection_number is not None:
# Logoff Action
# Get values from cyberark_session already established
# All of the logoffs use the same endpoint
# Success
# Result token from REST Api uses a different key based
# the use of shared logon authentication
# the new one just returns a token
# if use:
# Preparing result of the module
# Only marks change if new_password was received resulting
# in a password change
# Logoff Action clears cyberark_session
# cyberark and radius -> mutually_exclusive is cyberark and ldap
# ldap and radius
# windows has to be by itself
# Prepare result, end_point, and headers
# Determining whether to add or update properties
# Internal child values
# Updating a property
# Adding a property value
# Processing child operations
# module.params[parameter_name]
# Credential changes
# No result_dct set yet
# too many records found
# Account already exists
# Account does not exist
# Get username from module parameters, and api base url
# along with validate_certs from the cyberark_session established
# Prepare result, payload, and headers
# end_point and payload sets different depending on POST/PUT
# for POST -- create -- payload contains username
# for PUT -- update -- username is part of the endpoint
# With the put in this old format, we can not update the vaultAuthorization
# --- Optionally populate payload based on parameters passed ---
# In API V2 the parameter is called userType, V2 ignores the UserTypeName
# --------------------------------------------------------------
# execute REST action
# Return None if the user does not exist
# Say we have two users: 'someone' and 'someoneelse', a search on someone will return both
# So we will loop over and see if the username returned matches the username we searched for
# If so, and we somehow found more than one raise an error
# If we made it here we had 1 or 0 users, return them
# If the user was not found by username we can return unchanged
# User does not exist
# Say we have two groups: 'groupone' and 'grouptwo', a search on group will return both
# So we will loop over and see if the group name returned matches the group name we searched for
# Get username, and groupname from module parameters, and api base url
# Not needed for new version
# Prepare result, end_point, headers and payload
# If we went "old school" and were provided a group_name instead of a vault_id we need to resolve it
# If we were given a group_name we need to lookup the vault_id
# For some reason the group add uses username instead of id
# User is already member of Group
# User already exists
# User does not exist, proceed to create it
# Add user to group if needed
# Wrap requests imports
# Wrap dateutil imports
# Remove requests.Response typings
# Copyright: (c) 2019, Hetzner Cloud GmbH <info@hetzner-cloud.de>
# Label selector targets have child targets that must be checked
# Report missing health status as unknown
# Copyright (c) 2019 Hetzner Cloud GmbH <info@hetzner-cloud.de>
# The typed dicts are only used to help development and we prefer not requiring
# the additional typing-extensions dependency
# Server Type
# Datacenter
# Network
# Image
# Ansible
# Resolve template string
# Ensure the api token is valid
# Set private_ipv4 if user filtered for one network
# Log a warning that this host cannot be connected to using the
# method specified in 'connect_with'. Users might use 'compose' to
# override the connection method, or implement custom logic, so we
# do not need to abort if nothing matched.
# every server has a name, no need to guard this
# Allow using extra variables arguments as template variables (e.g.
# '--extra-vars my_var=my_value')
# Add a top group
# Add hostvars prefix and suffix for variables coming from the Hetzner Cloud.
# Copyright: (c) 2025, Hetzner Cloud GmbH <info@hetzner-cloud.de>
# Copyright: (c) 2020, Hetzner Cloud GmbH <info@hetzner-cloud.de>
# We must use the hcloud_volume_attachment name instead of hcloud_volume, because
# AnsibleHCloud.get_result does funny things.
# Copyright: (c) 2022, Hetzner Cloud GmbH <info@hetzner-cloud.de>
# Do not use the zone name to prevent a request to the API.
# zone name and id are interchangeable
# The "change" protection prevents us from updating the rrset. To reach the
# state the user provided, we must update the "change" protection:
# - before other updates if the current change protection is enabled,
# - after other updates if the current change protection is disabled.
# When not given, the API will choose the location.
# Action should take 60 to 90 seconds on average, but can be >10m when creating a
# server from a custom image
# 362 retries >= 1802 seconds
# Starting the server or attaching to the network might take a few minutes,
# depending on the current activity in the project.
# This waits up to 30 minutes for each action in series, but in the background
# the actions are mostly running in parallel, so after the first one the other
# actions are usually completed already.
# Return if nothing changed
# Fetch resource if parameter is truthy
# Remove if current is defined
# Return if parameter is falsy
# Assign new
# Check if we should warn for using a deprecated server type
# Check if we should warn for updating to a deprecated server type
# Upgrading a server takes 160 seconds on average, upgrading the disk should
# take more time
# 122 retries >= 602 seconds
# 38 retries >= 182 seconds
# Return if parameter is falsy or resource is disabled
# Removing existing but not wanted networks
# Adding wanted networks that don't exist yet
# Removing existing but not wanted firewalls
# Adding wanted firewalls that don't exist yet
# Only stopped servers can be upgraded
# Only rebuild the server if it already existed.
# When we rebuild the server progress takes some more time.
# 202 retries >= 1002 seconds
# Copyright: (c) 2022, Patrice Le Guyader
# heavily inspired by the work of @LKaemmerling
# Action should take 60 to 90 seconds on average, wait for 5m to
# allow DNS or Let's Encrypt slowdowns.
# 62 retries >= 302 seconds
# Provide typing definitions to the AnsibleModule class
# Total waiting time before timeout is > 117.0
# Warn when the server type is deprecated in the given location
# No location given, only warn when all locations are deprecated
# x-release-please-version
# Copyright: (c) 2023, Hetzner Cloud GmbH <info@hetzner-cloud.de>
# If the param is not a valid ID, prevent an unnecessary call to the API.
# noqa pylint: disable=C0414
# x-releaser-pleaser-version
# Exponential backoff
# Cap backoff
# Add jitter
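The three comments above describe the classic retry-delay pattern: exponential backoff, a cap, and jitter. A minimal sketch of that pattern (the function name and defaults are illustrative, not the module's actual values):

```python
import random


def backoff_interval(attempt, base=1.0, cap=60.0):
    # Exponential backoff: the delay doubles with each attempt
    delay = min(base * (2 ** attempt), cap)  # Cap backoff: never exceed `cap` seconds
    # Add jitter: a random extra fraction avoids synchronized retries
    return delay + random.uniform(0, delay / 2)
```

The jitter keeps many clients that fail at the same moment from retrying in lockstep against the API.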
# Always return true, because the API does not return an action for it. When an error occurs, an HcloudAPIException will be raised
# Ensure that 'id', 'name' and 'type' are always populated.
# pylint: disable=too-many-branches,too-many-locals
# Do not send use_private_ip on remove_target
# type: ignore[var-annotated]
# Use the parent "default" base client.
# The *PageResult tuples MUST have the following structure
# `(result: List[Bound*], meta: Meta)`
# Override and reset hcloud.core.domain.BaseDomain.__repr__ method for bound
# models, as they will generate a lot of API calls trying to print all the fields
# of the model.
# Use the same base client as the resource base client. Allows us to
# choose the base client outside of the ResourceActionsClient.
# Backward compatibility, defaults to the parent ("top level") base client (`_client`).
# Always return true, because the API does not return an action for it. When an error occurs, an APIException will be raised
# 2001:db8::/64 to 2001:db8:: and 64
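The comment above describes splitting a CIDR string such as `2001:db8::/64` into its network address and prefix length. The stdlib `ipaddress` module handles both IPv4 and IPv6 (the helper name is illustrative):

```python
import ipaddress


def split_cidr(cidr):
    # '2001:db8::/64' -> ('2001:db8::', 64)
    network = ipaddress.ip_network(cidr, strict=False)
    return str(network.network_address), network.prefixlen
```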
# Copyright: (c) 2018, Sumit Kumar <sumit4@netapp.com>, chris Archibald <carchi@netapp.com>
# Documentation fragment for E-Series
# BSD-3 Clause (see COPYING or https://opensource.org/licenses/BSD-3-Clause)
# (c) 2024, NetApp, Inc
# pylint: disable=arguments-renamed
# Add all expected group hosts
# Add common_volume_configuration information
# Add/update volume specific information
# This means that there is only one volume and volumes was stripped of its list
# Don't throw exceptions unless you want the run to terminate!!!
# raise AnsibleError("Storage array information not available. Collect facts using na_santricity_facts module.")
# Remove any absent volumes
# Search for volumes that have a specified host or host group initiator
# host initiator is already mapped on the storage system
# target is an existing host group
# target is an existing host in the host group.
# Check whether volume is mapped to the expected host
# Check whether lun option differs from existing lun
# Volume has not been mapped to host initiator
# Check whether lun option has been used
# volume is being remapped with the same lun number
# Find associated group and the group's hosts
# add to group
# add to hosts
# Add host information to expected host
# Determine host type
# Update hosts object
# Add SAS ports
# unassign hosts that should not be part of the hostgroup
# Search for existing host group match
# Determine whether changes are required
# Apply any necessary changes
# (c) 2018, NetApp, Inc
# replace http url path with devmgr/utils/about
# Check whether request needs to be forwarded on to the controller web services rest api.
# When syslog servers should exist, search for them.
# If we get to this point we know that the states differ, and there is no 'err' state,
# so no need to revalidate
# In a suspended state
# A relatively primitive regex to validate that the input is formatted like a valid ip address
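A "relatively primitive" shape check like the one described above can be written as four dot-separated digit groups. This sketch (not the module's actual pattern) deliberately accepts out-of-range octets such as `999.1.1.1`; strict validation would use `ipaddress` instead:

```python
import re

# Four dot-separated groups of 1-3 digits; a format check, not range validation
IP_LIKE = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}$")


def looks_like_ip(value):
    return bool(IP_LIKE.match(value))
```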
# Filter out non-ib-iser interfaces
# Unauthorized
# Fail over and discover any storage system without a set admin password. This will cover newly deployed systems.
# Wait for discover to complete
# Storage systems with embedded web services.
# Storage systems without embedded web services.
# Discover all added storage systems to the proxy.
# (c) 2018, NetApp Inc.
# Update host type with the corresponding index
# Ensure when state==present then host_type_index is defined
# Fix port representation if they are provided with colons
# Determine whether address is 16-byte WWPN and, if so, remove
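The two comments above describe normalizing a colon-delimited WWPN (16 hex digits, e.g. `21:00:00:24:ff:7f:aa:bb`) to its bare form. A sketch of that check, with an illustrative helper name:

```python
import string


def normalize_wwpn(port):
    # Strip colons, then verify the result is exactly 16 hex digits (a WWPN);
    # anything else (e.g. an iSCSI IQN) is returned untouched
    candidate = port.replace(":", "")
    if len(candidate) == 16 and all(c in string.hexdigits for c in candidate):
        return candidate.lower()
    return port
```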
# Determine port reference
# Create dictionary of hosts containing list of port references
# Unassign assigned ports
# Return the value equivalent of no group
# Augment the host objects
# Augment hostSidePorts with their ID (this is an omission in the API)
# Only check 'other' hosts
# Check if the port label is found in the port dict list of each host
# Remove ports that need reassigning from their current host.
# needs_reassignment = False
# (c) 2016, NetApp, Inc
# Filter out all not nvme-nvmeof hostside interfaces.
# legacy systems without embedded web services.
# Nothing to do for legacy systems without embedded web services.
# Determine whether user's password needs to be changed
# This ensures that login test functions correctly. The query onlycheck=true does not work.
# elif self.is_embedded_available():
# endpoint did not exist, old proxy version
# Check return codes to determine whether a change is required
# Update proxy's local users
# Update password using the password endpoints, this will also update the storaged password
# Update embedded local users
# Update embedded admin password via proxy passwords endpoint to include updating proxy/unified manager
# Update embedded non-admin passwords via proxy forward endpoint.
# # mutually_exclusive did not work with suboptions. Comment out this for now.
# mutually_exclusive = [["host", "script"],
# Build request body
# Check whether changes are required.
# Disable asupEnable if asup is disabled.
# Apply required changes.
# Add maintenance information to the key-value store
# Remove maintenance information from the key-value store
# This is going to catch cases like a connection failure
# Define a new domain based on the user input
# This is the current list of configurations
# Older versions of the NetApp E-Series REST API do not possess an API to remove all existing configs
# Create dictionary containing host/cluster references mapped to their names
# Verify there is no ambiguity between target's type (ie host and group have the same name)
# Build current mapping object
# Verify that when a lun is specified that it does not match an existing lun value unless it is associated with
# the specified volume (ie for an update)
# Verify volume and target exist if needed for expected state.
# Find matching volume reference
# Determine if lun mapping is attached to target with the
# Remove existing lun mapping for volume and target
# This likely isn't an iSCSI-enabled system
# If the CHAP secret was provided, we trigger an update.
# If no secret was provided, then we disable chap
# system is a serial number
# system is a dictionary of system details
# Structure meta tags for Web Services
# Update default request headers
# Add all newly discovered systems. This ignores any supplied systems to prevent duplicates.
# Update controller_addresses
# Remove any undiscovered system from the systems list
# self.systems.remove(system)
# Mark systems for adding or removing
# Mark systems for removing
# Leave existing but undiscovered storage systems alone and throw a warning.
# successful login without password
# unauthorized
# Check if management paths should be updated
# Check for expected meta tag count
# Check for expected meta tag key-values
# Check whether CA certificate should be accepted
# Set only if embedded is available and accept_certificates==True
# Skip the password validation.
# Ensure the password is validated
# Determine whether the storage system requires updating
# Remove storage systems
# Add storage systems
# Update storage systems
# Wait for storage systems to be added or updated
# Report module actions
# Report no changes
# TODO: update validation for various selection criteria
# increase up to disk count first, then iteratively add disks until we meet requested capacity
# TODO: perform this calculation in check mode
# TODO: verify parameters against detail for changes
# requested state is absent
# run update here as well, since io_type can't be set on creation
# TODO: include other details about the storage pool (size, type, id, etc)
# Check for IB iSER
# Check for NVMe
# Check SAS, FC, iSCSI
# self.module.fail_json(msg="Invalid port type! Type [%s]. Port [%s]." % (port["type"], port["label"]))
# For older versions of web services
# Compare expected ports with those from other hosts definitions.
# Only check "other" hosts
# ["id"]
# Filter out non-iSCSI interfaces
# We could potentially retry this a few times, but it's probably a rare enough case (unless a playbook
# Handle authentication issues, etc.
# Get the storage array's capabilities and available options
# Get the current cache settings
# Generate payload for Python 2
# search existing configuration for syslog server entry match
# generate body for the http request
# remove specific syslog server configuration
# if no address is specified, remove all syslog server configurations
# make http request(s)
# send syslog test message
# Complete volume definitions.
# Check and convert pit_timestamp to datetime object. volume: snap-vol1
# Check for required arguments
# hostgroup_by_id = {hostgroup["id"]: hostgroup for hostgroup in hostgroups}
# Check if consistency group settings need to be updated.
# Check if base volumes need to be added or removed from consistency group.
# remaining_base_volumes = {base_volumes["name"]: base_volumes for base_volumes in group["base_volumes"]}  # NOT python2.6 compatible
# Check if reserve capacity needs to be expanded or trimmed.
# Check whether there are any snapshot images; if there are, throw an exception indicating that a trim operation
# Collect information about all that needs to be trimmed to meet or exceed required trim percentage.
# Expand after trim if needed.
# Check for existing view (collection of snapshot volumes for a consistency group) within consistency group.
# Determine snapshot volumes associated with view.
# Check snapshot volume needs mapped to host or hostgroup.
# Check snapshot volume needs unmapped to host or hostgroup.
# Check host mapping needs moved
# Check writable mode
# Check reserve capacity.
# Embedded web services should store the pit_image metadata since sending it to the proxy will be written to it instead.
# Determine snapshot volume mappings
# Ensure consistency group rollback priority is set correctly prior to rollback.
# Ensure a preferred_reserve_storage_pool has been selected
# Check storage group information.
# Check host mapping information
# Determine which changes are required.
# Determine whether changes are required.
# Determine if there are any key-value pairs that need to be cleaned up since snapshot pit images were deleted outside of this module.
# Apply any required changes.
# Get the storage array graph
# Get the storage array hardware inventory
# Get storage system specific key-value pairs
# This conditional ignores zero-length strings which indicates that the associated host-specific NVSRAM region has been cleared.
# Add access volume information to volumes when enabled.
# Determine all consistency group base volumes.
# Determine all consistency group pit snapshot images.
# Determine all consistency group pit views.
# Get all host mappings
# Get all host group mappings
# Add all host mappings to respective groups mappings
# Remove duplicate entries
# Select only the host side channels
# Build generic information for each interface entry
# enabled, config_method, address, subnet, gateway
# for expansion if needed
# Determine storage target identifiers
# iSCSI IO interface
# InfiniBand (iSER) protocol
# Get more details from hardware-inventory
# iSCSI protocol
# Fibre Channel IO interface
# NVMe over fibre channel protocol
# Fibre channel protocol
# SAS IO interface
# Infiniband IO interface
# Determine protocol (NVMe over Infiniband, InfiniBand iSER, InfiniBand SRP)
# Determine command protocol information
# Ethernet IO interface
# Gather information from controller->hostInterfaces if available. (This is a deprecated data structure; prefer information from ioInterface.)
# Ignore any issue with this data structure since it's deprecated.
# Add target information
# Only add interface if not already added (i.e. was part of ioInterface structure)
# Create a dictionary of volume lists keyed by host names
# Determine host io interface protocols
# Skip duplicate entries into host_port_information
# Determine workload name if there is one
# Get volume specific metadata tags
# Determine drive count
# Use the base volume to populate related details for snapshot volumes.
# ensure unique
# add the array
# array exists, modify...
# delete the array
# Needed for when a specific interface is not required (ie dns/ntp/ssh changes only)
# Add controller specific information (ssh, dns and ntp)
# Add interface specific information when configuring IP address.
# Check primary DNS address
# Check secondary DNS address
# Check primary NTP address
# Check secondary NTP address
# Build list of available web services rest api urls
# Update url if currently used interface will be modified
# Update management interface
# Validate all changes have been made
# This likely isn't an iSCSI-enabled system
# Very basic validation on email addresses: xx@yy.zz
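The "very basic" `xx@yy.zz` check above amounts to: one `@` separating non-empty parts, with at least one dot in the domain. A sketch (illustrative pattern, not RFC 5322 validation):

```python
import re

# One '@' between non-whitespace parts, and a dot in the domain part
EMAIL_LIKE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def looks_like_email(addr):
    return bool(EMAIL_LIKE.match(addr))
```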
# Create upgrade list, this ensures only the firmware uploaded is applied
# Determine whether upgrade is required
# Add drive references that are supported and differ from current firmware
# Check drive status
# system down or endpoint does not exist
# Wait for controller to be online again.
# Treat file as PEM encoded file.
# Add public certificates to bundle_info.
# Add private key to self.private_key.
# Check for PKCS8 PEM encoding.
# Check whether multiple private keys have been provided and fail if different
# Throw exception when no PEM certificates have been discovered.
# Treat file as DER encoded certificate
# Throw exception when no DER encoded certificates have been discovered.
# Treat file as DER encoded private key
# Determine bundle certificate ordering.
# Determine all remaining issuers.
# Search for the next certificate that is not an issuer of the remaining certificates in certificates_info dictionary.
# Add remaining root certificate if one exists.
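The ordering comments above describe a leaf-first selection loop: repeatedly emit the certificate that is not an issuer of any remaining certificate, leaving the root for last. A minimal sketch, assuming plain subject/issuer strings stand in for real X.509 fields:

```python
def order_bundle(certs):
    # certs maps an identifier to {'subject': ..., 'issuer': ...}
    remaining = dict(certs)
    ordered = []
    while remaining:
        issuers = {info["issuer"] for info in remaining.values()}
        for name, info in list(remaining.items()):
            # A leaf's subject issues nothing; a self-signed root is the
            # only certificate left at the end (len == 1)
            if info["subject"] not in issuers or len(remaining) == 1:
                ordered.append(name)
                del remaining[name]
                break
        else:
            raise ValueError("certificate chain is not linear")
    return ordered
```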
# Determine whether any expected certificates are missing from the storage system's database.
# Create an initial remove_cert list.
# Determine expected certificates
# Determine whether new self-signed certificate needs to be generated.
# Verify there is no ambiguity between target's type (ie host and group have the same name)
# Verify that when target_type is specified then it matches the target's actual type
# Change all sizes to be measured in bytes
# Adjust unused raid level option to reflect documentation
# Parse usable drive string into tray:slot list
# slots must be one-indexed instead of zero-indexed.
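The two comments above describe parsing a usable-drive string into tray:slot pairs, with one-indexed slots. A hypothetical sketch, assuming a comma-separated `tray:slot` spec (the real input format may differ):

```python
def parse_usable_drives(spec):
    # '0:1,0:2,1:1' -> [(0, 1), (0, 2), (1, 1)]
    pairs = []
    for token in spec.split(","):
        tray, slot = (int(part) for part in token.strip().split(":"))
        if slot < 1:
            # Slots are one-indexed; reject a zero-indexed slot early
            raise ValueError("slot must be one-indexed: %s" % token)
        pairs.append((tray, slot))
    return pairs
```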
# Standard minimum is 11 drives but some allow 10 drives. 10 will be the default
# Replace drive selection with required usable drives
# Evaluate candidates for required drive count, collective drive usable capacity and minimum drive size
# determine whether and how much expansion is needed to satisfy the specified criteria
# Determine the appropriate expansion candidate list
# Determine if required drives and capacities are satisfied
# Update drive and storage pool information
# build expandable groupings of traditional raid candidates
# Wait for expansion completion unless it is the last request in the candidate list
# Determine whether changes need to be applied to the storage array
# Evaluate current storage pool for required change.
# Apply changes to storage array
# Expansion needs to occur before raid level migration to account for any sizing needs.
# Append web services proxy forward end point.
# Check if we want to search
# Check if we want to start or stop a copy operation
# Get the current status info
# If we want to start
# If we have already started
# If we need to start
# If we want to stop
# If it has already stopped
# If we need to stop it
# If we want the copy pair to exist we do this stuff
# We need to check if it exists first
# If no volume copy pair is found we need to make it.
# We cannot create one with just a volume_copy_pair_id
# If it does exist we do nothing
# We verify that it exists
# If we want it to not exist we do this
# We delete it by the volume_copy_pair_id
# This will override the NetAppESeriesModule request method timeout.
# Search firmware file for bundle or firmware version
# Check nvsram compatibility
# Determine whether nvsram is required
# Update bundle info
# Determine whether valid and compatible firmware
# Check whether downgrade is being attempted
# Verify controller consistency and get firmware versions
# Retrieve current bundle version
# Determine current NVSRAM version and whether change is required
# Verify firmware compatibility and whether changes are required
# This will upload the firmware files to the web services proxy but not to the controller
# Perform upgrade
# Here we wait for the role reversal to complete
# convert metadata to a list of dictionaries containing the keys "key" and "value" corresponding to
# Search long lived operations for volume
# Check for expansion
# Generate common indexed Ansible workload tag
# evaluate and update storage array when needed
# Determine if core attributes (everything but profileId) are the same
# only perform the required action when check_mode==False
# existing workload tag not found so create new workload tag
# check for invalid modifications
# common thick/thin volume properties
# controller ownership
# thick/thin volume specific properties
# Determine whether changes need to be applied to existing workload tags
# Determine if any changes need to be applied
# volume meta tags
# Must check the property changes first; this ensures the segment size is unchanged before
# the size is used to determine whether volume expansion is needed, which would otherwise
# cause an irrelevant error message to show up.
# Determine whether nvsram upgrade is required
# Determine whether bundle upgrade is required
# Build the modules information for logging purposes
# Stage firmware and nvsram
# Activate firmware
# Determine the last known event
# Log firmware events
# When activation is successful, finish thread
# Wait for system to reflect changes
# Wait for system to be optimal
# Determine whether the current firmware version is the same as the file
# TODO(lorenp): Resolve ignored rc, data
# Check whether HIC speed should be changed.
# Create a dictionary containing supported HIC speeds keyed by simplified value to the complete value
# (ie. {"10g": "speed10gig"})
# Find the correct interface
# Check if api url is using the affected management interface to change itself
# Populate the body of the request and check for changes
# debug information
# We've probably recently changed the interface settings and it's still coming back up: retry.
# Check if the storage array can be contacted
# make the necessary changes to the storage system
# Doing a check after creation because the creation call fails to set the specified warning threshold
# Sort output based on tray and then drawer protection first
# Determine the appropriate candidate list
# Existing LDAP domain
# Request body
# Check whether temporary domain exists
# TODO: Replace hard-coded values with configurable parameters.
# Host name invoking the API.
# ID of event. A user defined event-id, range [0..2^32-2].
# Name of the application invoking the API.
# Version of application invoking the API.
# Application defined category of the event.
# Description of event to log. An application defined message to log.
# for debug purposes
# force ZAPI if requested or if some parameter requires it
# Don't fail here, if the ssid is wrong then it will fail on the next request. Causes issues for
# na_santricity_auth module.
# Contacted using embedded web services
# Standard F5 documentation fragment
# Copyright: (c) 2020, F5 Networks Inc.
# GNU General Public License v3.0 (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Copyright: (c) 2016, F5 Networks Inc.
# names must start with alphabetic character, and can contain hyphens and underscores and numbers
# no special characters are allowed
# Copyright: (c) 2018, F5 Networks Inc.
# because the API filter on ASM is broken for names that contain numbers at the end, we need to work around it
# An IP address was specified
# Assume a hostname was specified
# Copyright: (c) 2017, F5 Networks Inc.
# Copyright: (c) 2013, Matt Hite <mhite@hotmail.com>
# A list of modules currently provisioned on the device.
# This list is used by different fact managers to check to see
# if they should even attempt to gather information. If the module is
# not provisioned, then it is likely that the REST API will not
# return valid data.
# For example, ASM (at the time of this writing 13.x/14.x) will
# raise an exception if you attempt to query its APIs if it is
# not provisioned. An example error message is shown below.
# This list is provided to the specific fact manager by the
# master ModuleManager of this module.
# A list of packages currently installed on the device.
# if they should even attempt to gather information. If the package is
# resource.pop('fullPath', None)
# TODO include: web-scraping,ip-intelligence,session-tracking,
# TODO login-enforcement,data-guard,redirection-protection,vulnerability-assessment, parentPolicyReference
# TODO: add the following: filter, systems, signatureReferences
# Maximum Segment Size Override
# Cast some attributes to integer
# This fact is a combination of the availability_state and enabled_state
# The purpose of the fact is to give a higher-level view of the availability
# of the pool, that can be used in playbooks. If you need further detail,
# consider using the following facts together.
# - availability_state
# - enabled_state
# disabled
# Even though ``profiles`` is a list, there is only ever 1
# for item in facts:
# def read_facts(self):
# Process members directly from the expanded collection
# Collect stats for each pool
# BIG-IP appears to store this as a string. This is a bug, so we handle both
# cases here.
# This removes the timezone portion from the string. This is done
# because Python has awful tz parsing and strptime doesn't work with
# all timezones in %Z; it only uses the timezones found in time.tzname
# Yes, the REST API stores this as a string
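The workaround described above can be sketched as: strip the timezone token before the year, then parse the remainder. This sketch assumes a ctime-like string with a timezone abbreviation present (e.g. `Mon Oct 2 10:35:58 CEST 2023`); the real field format may differ:

```python
import re
from datetime import datetime


def parse_bigip_time(value):
    # Drop the timezone token before the trailing year; strptime's %Z only
    # accepts names in time.tzname, so e.g. 'CEST' would raise ValueError
    stripped = re.sub(r"\s+\S+\s+(\d{4})$", r" \1", value)
    return datetime.strptime(stripped, "%a %b %d %H:%M:%S %Y")
```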
# 'active' or 'passive'
# Remove the partition
# Covers the following examples
# /Common/2700:bc00:1f10:101::6%2.80
# 2700:bc00:1f10:101::6%2.80
# 1.1.1.1%2:80
# /Common/1.1.1.1%2:80
# /Common/2700:bc00:1f10:101::6%2.any
# /Common/Shared/1.1.1.1:80
# Can be a port of "any". This only happens with IPv6
# this will match any IPV4 Address and port, no RD
# match standalone IPV6 address, no port
# match IPV6 address with port
# this will match any alphanumeric Virtual Address and port
# this will match any alphanumeric Virtual Address
# match IPv6 wildcard with port without RD
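One of the shapes enumerated above, an IPv4 address with optional `%route-domain` and a `:port` (e.g. `1.1.1.1%2:80`), can be captured with a named-group regex. The pattern below is an illustrative sketch, not the module's actual regex:

```python
import re

IPV4_RD_PORT = re.compile(
    r"^(?P<ip>\d{1,3}(?:\.\d{1,3}){3})"  # IPv4 address
    r"(?:%(?P<rd>\d+))?"                 # optional route domain
    r":(?P<port>\d+)$"                   # port
)
```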
# import logging
# import tempfile
# Add a logger for debugging to a temp file
# temp_log = tempfile.NamedTemporaryFile(delete=False, mode='a', prefix='f5_bigip_device_info_', suffix='.log')
# logger = logging.getLogger("f5_bigip_device_info")
# logger.setLevel(logging.DEBUG)
# handler = logging.StreamHandler(temp_log)
# formatter = logging.Formatter('[%(asctime)s] %(levelname)s %(message)s')
# handler.setFormatter(formatter)
# if not logger.hasHandlers():
# results = []
# for item in results:
# Prepare a mapping of fullPath to resource
# Run read_stats_from_device concurrently for all fullPaths
# Helper function for concurrent resource param creation
# Run param creation concurrently as well
# Who made this field a "description"!?
# We can't agree on field names...SMH
# Remove the excluded entries from the list of possible facts
# Meta choices
# Non-meta choices
# Negations of meta choices
# Negations of non-meta-choices
# issue originally found and submitted in https://github.com/F5Networks/f5-ansible/pull/1477 by @traittinen
# Look for 'first' from Ansible or REST
# Look for 'all' from Ansible or REST
# Look for 'best' from Ansible or REST
# These are custom strategies. The strategy may include the
# partition, but if it does not, then we add the partition
# that is provided to the module.
# In case rule values are unicode (as they may be coming from the API)
# Because we always need to modify drafts, "creating on the device"
# is actually identical to just updating.
# Bad ICMP Checksum
# Bad ICMP Frame
# Bad IGMP Frame
# IP Option Illegal Length
# Bad IPv6 Hop Count
# Bad IPv6 Version
# Bad SCTP Checksum
# Bad TCP Checksum
# Bad TCP Flags (All Cleared)
# Bad TCP Flags (All Flags Set)
# Bad IP TTL Value
# Bad UDP Checksum
# Bad UDP Header (UDP Length > IP Length or L2 Length)
# Bad IP Version
# ARP Flood
# Single Endpoint Flood
# IGMP Flood
# IGMP Fragment Flood
# Bad Source
# IP Error Checksum
# IP Length > L2 Length
# IP Fragment Error
# IP Fragment Overlap
# IP Fragment Too Small
# IP Uncommon Proto
# IP Unknown Protocol
# IPv4 Mapped IPv6
# IPv6 Atomic Fragment
# Bad IPv6 Addr
# IPv6 Length > L2 Length
# IPv6 Fragment Error
# IPv6 Fragment Overlap
# IPv6 Fragment Too Small
# L2 Length >> IP Length
# No L4 (Extension Headers Go To Or Past The End of Frame)
# LAND Attack
# No L4
# No Listener Match
# Non TCP Connection
# Payload Length < L2 Length
# Routing Header Type 0
# SYN && FIN Set
# TCP BADACK Flood
# TCP Header Length > L2 Length
# TCP Header Length Too Short (Length < 5)
# Header Length > L2 Length
# Header Length Too Short
# IPv6 Extended Headers Wrong order
# IPv6 extension header too large
# IPv6 hop count <= <tunable>
# Host Unreachable
# ICMP Fragment
# ICMP Frame Too Large
# ICMPv4 flood
# ICMPv6 flood
# IP Fragment Flood
# TTL <= <tunable>
# IP Option Frames
# IPv6 Extended Header Frames
# IPv6 Fragment Flood
# Option Present With Illegal Length
# Sweep
# TCP Flags-Bad URG
# TCP Half Open
# TCP Option Overruns TCP Header
# TCP PUSH Flood
# TCP RST Flood
# TCP SYN Flood
# TCP SYN Oversize
# TCP SYN ACK Flood
# TCP Window Size
# TIDCMP
# Too Many Extension Headers
# IPv6 Duplicate Extension Headers
# FIN Only Set
# Ethernet Broadcast Packet
# Ethernet Multicast Packet
# Ethernet MAC Source Address == Destination Address
# UDP Flood
# Unknown Option Type
# Unknown TCP Option Type
# SIP ACK Method
# SIP BYE Method
# SIP CANCEL Method
# SIP INVITE Method
# SIP MESSAGE Method
# SIP NOTIFY Method
# SIP OPTIONS Method
# SIP OTHER Method
# SIP PRACK Method
# SIP PUBLISH Method
# SIP REGISTER Method
# sip-malformed
# SIP SUBSCRIBE Method
# uri-limit
# DNS A Query
# DNS AAAA Query
# DNS ANY Query
# DNS AXFR Query
# DNS CNAME Query
# DNS Malformed
# DNS NXDOMAIN Query
# DNS Response Flood
# DNS Oversize
# DNS IXFR Query
# DNS MX Query
# DNS NS Query
# DNS OTHER Query
# DNS PTR Query
# DNS QDCOUNT LIMIT
# DNS SOA Query
# DNS SRV Query
# DNS TXT Query
# "autoThreshold": "disabled",
# This is a deprecated parameter in 13.1.0. Use threshold_mode instead
# "enforce": "enabled",
# device-config specific settings
# The following are not enabled for device-config because I
# do not know what parameters in TMUI they map to. Additionally,
# they do not appear to have any "help" documentation available
# in ``tmsh help security dos device-config``.
# "allowUpstreamScrubbing": "disabled",
# "attackedDst": "disabled",
# "autoScrubbing": "disabled",
# device-config specific
# Attributes on the DoS profiles that hold the different vectors
# Each of these attributes is a list of dictionaries. Each dictionary
# contains the settings that affect the way the vector works.
# The vectors appear to all have the same attributes even if those
# attributes are not used. There may be cases where this is not true,
# however, and for those vectors we should either include specific
# error detection, or pass the unfiltered values through to mcpd and
# handle any unintuitive error messages that mcpd returns.
# A list of all the vectors queried from the API when reading current info
# from the device. This is used when updating the API as the value that needs
# to be updated is a list of vectors and PATCHing a list would override any
# default settings.
# A disabled vector does not appear in the list of existing vectors
# For non-device-config
# At this point we know the existing vector is not disabled, so we need
# to change it in some way.
# First, if we see that the vector is in the current list of vectors,
# we are going to update it
# else, we are going to add it to the list of vectors
# Since the name attribute is not a parameter tracked in the Parameter
# classes, we will add the name to the list of attributes so that when
# we update the API, it creates the correct vector
# Finally, the disabled state forces us to remove the vector from the
# list. However, items are only removed from the list if the profile
# being configured is not a device-config
# All of the vectors must be re-assembled into a list of dictionaries
# so that when we PATCH the API endpoint, the vectors list is filled
# There are **not** individual API endpoints for the individual vectors.
# Instead, the endpoint includes a list of vectors that is part of the
# DoS profile
# Overwrite specific names because they do not align with DoS Profile names
# The following names (on the right) differ from the functionally equivalent
# names (on the left) found in DoS Profiles. This seems like a bug to me,
# but I do not expect it to be fixed, so this works around it in the meantime.
# Attempt to normalize, else just return the name. This handles the default
# case where the name is actually correct and would not be found in the
# ``name_map`` above.
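The attempt-to-normalize-else-return-the-name fallback described above is essentially `dict.get` with a default. A sketch with placeholder mappings (the real mismatched vector-name pairs are not reproduced here):

```python
# Placeholder mappings; the real table holds the DoS Profile -> device-config pairs
name_map = {
    "hypothetical-dos-name": "hypothetical-device-config-name",
}


def normalize_name(name):
    # Translate the few mismatched names; pass every other name through unchanged
    return name_map.get(name, name)
```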
# in v15 and v16 new dos profiles do not have the vector family set, so we need to "create" the vector container
# on 404 response
# sustained_attack_detection_time=dict(),
# category_detection_time=dict(),
# per_dest_ip_detection_threshold=dict(),
# per_dest_ip_mitigation_threshold=dict(),
# When revoking a license, it should be acceptable to auto-accept the
# license since you accepted it the first time when you activated the
# license you are now revoking.
# Revoking seems to just be another way of saying "get me a new license".
# There appear to be revoke-specific wording in the license and I assume
# some special revoke-like signing is happening, but the process is essentially
# just another form of "create".
# Failures to connect to the licensing server must be passed upstream, not suppressed
# This error occurs when there is a problem with the license server and it
# starts returning invalid XML (like if they upgraded something and the server
# is redirecting improperly).
# There's no way to recover from this error except by notifying F5 that there
# is an issue with the license server.
# Any other exceptions must be raised also
# Sleep a little to let mcpd settle and begin properly
# Per BZ617284, the BIG-IP UI does not raise a warning about this.
# So I do
# elif self._values['vlans_enabled'] is False:
# elif self._values['vlans_disabled'] is True:
# Specifically looking for /all because the vlans return value will be
# an FQDN list. This means that 'all' will be returned as '/partition/all',
# ex, /Common/all.
# We do not want to accidentally match values that would end with the word
# 'all', like 'vlansall'. Therefore we look for the forward slash because this
# is a path delimiter.
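The '/all' check described above can be sketched as follows; the helper name is an assumption for illustration, not the module's actual code.

```python
# Sketch of the path-delimited '/all' check; matching on '/all' (with the
# forward slash) avoids false positives on names that merely end with the
# word 'all', such as 'vlansall'.
def vlans_means_all(vlans):
    return any(v.endswith('/all') for v in vlans)

print(vlans_means_all(['/Common/all']))       # True
print(vlans_means_all(['/Common/vlansall']))  # False
```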
# Copyright: (c) 2019, F5 Networks Inc.
# lgtm [py/similar-function]
# Have to do this in cases where the BIG-IP stores the word
# "management-ip" when you specify the management IP address.
# Otherwise, a difference would be registered.
# Used for changing state
# user-enabled (enabled)
# user-disabled (disabled)
# user-disabled (offline)
# user-down (offline)
# Modifying the state before sending to BIG-IP
# The 'state' must be set to None to exclude the values (accepted by this
# module) from being sent to the BIG-IP because for specific Ansible states,
# BIG-IP will consider those state values invalid.
# State 'offline'
# Offline state will result in the monitors stopping for the node
# Only a valid state can be specified. The module's value is "offline",
# but this is an invalid value for the BIG-IP. Therefore set it to user-down.
# Even user-down will not work when _creating_ a node, so we register another
# want value (that is not sent to the API). This is checked for later to
# determine if we have to PATCH the node to be offline.
# These are being set here because the ``create_on_device`` method
# uses ``self.changes`` (to get formatting of parameters correct)
# but these two parameters here cannot be changed and also it is
# not easy to get the current versions of them for comparison.
# It appears that you cannot create a node in an 'offline' state, so instead
# we update its status to offline after we create it.
# Error only contains the lines that include the error
# Sleep a little to let rebooting take effect
# This can be caused by restjavad restarting.
# For vCMP, because it has to reboot, we also wait for mcpd to become available
# before "moving on", or else the REST API would not be available and subsequent
# Tasks would fail.
# To prevent things from running forever, the hack is to check
# for mprov's status twice. If mprov is finished, then in most
# cases (not ASM) the provisioning is probably ready.
# Sleep a little to let provisioning settle and begin properly
# /usr/libexec/qemu-kvm is added here to prevent vcmp provisioning
# from never allowing the mprov provisioning to succeed.
# It turns out that the 'mprov' string is found when enabling vcmp. The
# qemu-kvm command that is run includes it.
# For example,
# It is possible that the API call can return invalid JSON.
# This invalid JSON appears to be just empty strings.
# Crypto is used specifically for changing the root password via
# tmsh over REST.
# We utilize the crypto library to encrypt the contents of a file
# before we upload it, and then decrypt it on-box to change the
# password.
# To accomplish such a process, we need to be able to encrypt the
# temporary file with the public key found on the box.
# These libraries are used to do the encryption.
# Note that, if these are not available, the ability to change the
# root password is disabled and the user will be notified as such
# by a failure of the module.
# These libraries *should* be available on most Ansible controllers
# by default though as crypto is a dependency of Ansible.
# This function call requires that the public_key be expressed in bytes
# OpenSSL craziness
# Using this padding because it is the only one that works with
# the OpenSSL on BIG-IP at this time.
# OAEP is the recommended padding to use for encrypting, however, two
# things are wrong with it on BIG-IP.
# The first is that one of the parameters required to decrypt the data
# is not supported by the OpenSSL version on BIG-IP. A "parameter setting"
# error is raised when you attempt to use the OAEP parameters to specify
# hashing algorithms.
# This is validated by this thread here
# Were it supported, we could use OAEP, but the second problem is that OAEP
# is not the default mode of the ``openssl`` command. Therefore, we need
# to adjust the command we use to decrypt the encrypted file when it is
# placed on BIG-IP.
# The correct (and recommended if BIG-IP ever upgrades OpenSSL) code is
# shown below.
# Additionally, the code in ``update_on_device()`` would need to be changed
# to pass the correct command line arguments to decrypt the file.
# Decrypting logic
# The following commented out command will **not** work on BIG-IP versions
# utilizing OpenSSL 1.0.1l-fips (15 Jan 2015).
# The reason is because that version of OpenSSL does not support the various
# ``-pkeyopt`` parameters shown below.
# Nevertheless, I am including it here as a possible future enhancement in
# case the method currently in use stops working.
# This command overrides defaults provided by OpenSSL because I am not
# sure how long the defaults will remain the defaults. Probably as long
# as it took OpenSSL to reach 1.0...
# The command we actually use is (while not recommended) also the only one
# that works. It forgoes the usage of OAEP and uses the defaults that come
# with OpenSSL (PKCS1v15)
# See this link for information on the parameters used
# If you change the command below, you will need to additionally change
# how the encryption is done in ``encrypt_password_change_file()``.
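The two decrypt command variants discussed above can be contrasted as a sketch; the file paths and key location below are illustrative placeholders, not the module's actual values.

```python
# Illustrative paths only; the real code uses the module's own locations.
encrypted_file = "/var/config/rest/downloads/password.enc"  # placeholder
private_key = "/config/ssl/ssl.key/default.key"             # placeholder

# The PKCS#1 v1.5 variant actually used: it relies on OpenSSL's defaults,
# so it works with the old on-box OpenSSL build.
pkcs1v15_cmd = (
    f"openssl pkeyutl -decrypt -in {encrypted_file} "
    f"-out /tmp/password -inkey {private_key}"
)

# The recommended OAEP variant, kept for reference: the -pkeyopt
# parameters below are rejected by the on-box OpenSSL version.
oaep_cmd = pkcs1v15_cmd + (
    " -pkeyopt rsa_padding_mode:oaep"
    " -pkeyopt rsa_oaep_md:sha256"
    " -pkeyopt rsa_mgf1_md:sha256"
)
```

If the decrypt command is ever switched to the OAEP form, the encryption side must be changed in lockstep, as the comments note.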
# Copyright: (c) 2023, F5 Networks Inc.
# This can be caused by services restarting, we only want to raise on SSL errors if there are
# any, otherwise there is a good chance device is rebooting or restarting
# we check again to ensure device did not start to restart services in between API calls
# Sometimes when daemons were restarting earlier, we get invalid JSON in the response; this does not
# mean the device is rebooting, so we return False here.
# we do a quick check if another reboot is required
# A list of all the syslogs queried from the API when reading current info
# to be updated is a list of syslogs and PATCHing a list would override any
# An absent syslog does not appear in the list of existing syslogs
# At this point we know the existing syslog is not absent, so we need
# First, if we see that the syslog is in the current list of syslogs,
# else, we are going to add it to the list of syslogs
# Finally, the absent state forces us to remove the syslog from the
# All of the syslogs must be re-assembled into a list of dictionaries
# so that when we PATCH the API endpoint, the syslogs list is filled
# There are **not** individual API endpoints for the individual syslogs.
# Instead, the endpoint includes a list of syslogs that is part of the
# system config
# Handle instances where there already exists many monitors, and the
# user runs the module again specifying that the monitor_type should be
# changed to 'single'
# Update to 'and_list' here because the above checks are all that need
# to be done before we change the value back to what is expected by
# BIG-IP.
# Remember that 'single' is nothing more than a fancy way of saying
# "and_list plus some extra checks"
# Idempotency check - removing monitors from a device where no monitors exist
# when monitors_list is [], remove all the monitors
# monitors is '' in the case of monitor_type 'and_list', and 'min <quorum> of {  }' in the case of monitor_type 'm_of_n'
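Parsing the two monitor string formats noted above can be sketched with a small helper; the regex and function name are assumptions, not the module's actual implementation.

```python
import re

def parse_monitors(monitors):
    """Return (quorum, names). quorum is None for a plain and_list."""
    if not monitors:
        # '' means monitor_type and_list with no monitors configured
        return None, []
    m = re.match(r'min\s+(\d+)\s+of\s+\{\s*(.*?)\s*\}', monitors)
    if m:
        # m_of_n form: 'min <quorum> of { name1 name2 ... }'
        return int(m.group(1)), m.group(2).split()
    # and_list form: monitors joined with ' and '
    return None, monitors.split(' and ')

print(parse_monitors('min 2 of { /Common/http /Common/tcp }'))
# → (2, ['/Common/http', '/Common/tcp'])
```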
# The backup file is created in the bigip_imish_config action plugin. Refer
# to that if you have questions. The key below is removed by the action plugin.
# Add space to command list so that it won't chop the last character from the last command
# setup handler before scheduling signal, to eliminate a race
# Sleep a little to let daemons settle and begin checking if REST interface is available.
# First we check if SSH connection is ready by repeatedly attempting to run a simple command
# Wait for the reboot to happen and then start from the beginning
# of the waiting.
# The first test verifies that the REST API is available; this is done
# by repeatedly trying to login to it.
# The types of exceptions we're handling here are "REST API is not
# ready" exceptions.
# Typically caused by device starting up:
# Typically caused by a device being down
# Typically caused by device still booting
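The wait loop the comments outline can be sketched as below; `try_login` is a stand-in for the "attempt a simple request" probe, and catching only `ConnectionError` is a simplification of the "REST API is not ready" exception classes.

```python
import time

def wait_for_api(try_login, timeout=300, interval=5):
    """Poll until the probe succeeds or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            try_login()
            return True
        except ConnectionError:
            # Device still booting / services restarting: wait and retry
            time.sleep(interval)
    return False
```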
# required to add CLI to choices and ssh_keyfile as per documentation
# Adding items to the end of the list causes the list of rules to match
# what the user specified in the original list.
# this response returns no payload
# Built-in profile global-network cannot disable network log profile
# First we compare whether both lists are equal; if want is bigger or smaller than have, we assume the user made a change
# If the lists are equal, then we compare items to verify a change was made
# First we remove extra keys in have
# Compare each element in the list by position
# Device order is literally derived from the order in the array,
# hence lists with the same elements but in different order cannot be equal, so cmp_simple_list
# function will not work here.
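The order-sensitive comparison described above can be sketched as follows; the function name is illustrative, and `want` keys define which fields are compared after extra keys in `have` are dropped.

```python
def devices_changed(want, have):
    """Order-sensitive comparison: same elements in a different order differ."""
    if len(want) != len(have):
        return True
    for w, h in zip(want, have):
        # Remove extra keys in have before comparing by position
        trimmed = {k: v for k, v in h.items() if k in w}
        if w != trimmed:
            return True
    return False
```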
# A general error when a resource already exists
# Returned when creating a duplicate cli alias
# ['list ltm virtual']
# ['cd /Common; list ltm virtual']
# ['tmsh -c "cd /Common; list ltm virtual"']
# This needs to be removed so that the ComplexList used in to_commands
# will work correctly.
# A regex to match the error IDs used in the F5 v2 logging framework.
# pattern = r'^[0-9A-Fa-f]+:?\d+?:'
# This range lookup is how you do lookups for single IP addresses. Weird.
# Artificial sleeping to wait for remote licensing (on BIG-IP) to complete
# This should be something that BIG-IQ can do natively in 6.1-ish time.
# Return cached copy if we have it
# Otherwise, get copy from image info cache
# self._values['build'] = self.image_info['build']
# Otherwise, get a new copy and store in cache
# Handle all exceptions because if the system is offline (for a
# reboot) the REST client will raise exceptions about
# connections
# We need to delay this slightly in case the volume needs to be
# created first
# Suggests BIG-IP is still in the middle of restarting itself or
# restjavad is restarting.
# At times during reboot BIG-IP will reset or timeout connections so we catch and pass this here.
# Can't use TransactionContextManager here because
# it expects end result code to be 200 or so. 404 causes
# TransactionContextManager to fail.
# if resp.status in [200, 201] or 'code' in response and response['code'] in [200, 201]:
# this is to ensure no duplicates are in the provided collection
# params['name'] = self.want.name
# This needs to be done because of the way that BIG-IP creates certificates.
# The extra params (such as OCSP and issuer stuff) are not available in the
# payload. In a nutshell, the available resource attributes *change* after
# a create so that *more* are available.
# TransactionContextManager cannot be used for reading, for
# whatever reason
# Copyright (c) 2017 F5 Networks Inc.
# Check for valid IPv4 or IPv6 entries
# else fallback to checking reasonably well formatted hostnames
# check if datacenter exists
# Wait no more than half an hour
# Changes Pending:
# Awaiting Initial Sync:
# Not All Devices Synced:
# Converts to common stringiness
# The tuple set "issubset" check that happens in the Difference
# engine does not recognize that a u'foo' and 'foo' are equal "enough"
# to consider them a subset. Therefore, we cast everything here to
# whatever the common stringiness is.
# Names contains the index in which the rule is at.
# 'metadata',
# You cannot have rows without columns
# BIG-IP will inject an 'encrypted' key if you don't provide one.
# If you don't provide one, then we supply the default 'no'.
# This seems to happen only on 12.0.0
# BIG-IP removes empty values entries, so mimic this behavior
# for user-supplied values.
# Metadata needs to be zero'd before the service is removed because
# otherwise, the API will error out saying that "configuration items"
# currently exist.
# In other words, the REST API is not able to delete a service while
# there is existing metadata
# v13
# A timer is required so that the API updates the isModified state to the correct value.
# We need to remove active or apply from params, as the API will raise an error if active or apply is set to yes;
# policies can only be activated or applied via apply-policy task endpoint.
# TODO Include creating ASM policies from custom templates in v13
# BIG-IP's value for "default" is that the key does not
# exist. This conflicts with our purpose of having a key
# not exist (which we equate to "I don't want to change that").
# Therefore, if we load the information from BIG-IP and
# find that there is no 'network' key, that is BIG-IP's
# way of saying that the network value is "default".
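The convention described above reduces to a one-line read; the function name is illustrative.

```python
def network_value(api_payload):
    # An absent 'network' key is BIG-IP's way of saying "default"
    return api_payload.get('network', 'default')
```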
# The values of the 'key' index literally need to be string values.
# If they are not, on BIG-IP 12.1.0 they will raise this REST exception.
# Append any options that might be specified
# Need to re-connect here because the REST framework will be restarting
# and thus be clearing its authorization cache
# Reloading a UCS configuration will cause restjavad to restart,
# aborting the connection.
# Timeouts appear to be able to happen in 12.1.2
# catching some edge cases where API becomes unstable after installation
# This is required for idempotency and updates as we do not compare these properties
# handle expired tokens
# We can provide all of the modules for the removal task without ensuring they were discovered
# The same process used for creating (load) can be used for updating
# BIG-IP can generate checksums of iApps, but the iApp needs to be
# on the box to do this. Additionally, the checksum is MD5, but it
# is not an MD5 of the entire content of the template. Instead, it
# is a hash of some portion of the template that is unknown to me.
# The code below is responsible for uploading the provided template
# under a unique name and creating a checksum for it so that that
# checksum can be compared to the one of the existing template.
# Using this method we can compare the checksums of the existing
# iApp and the iApp that the user is providing to the module.
# Override whatever name may have been provided so that we can
# temporarily create a new template to test checksums with
# Create and remove temporary template
# Set the template name back to what it was originally so that
# any future operations only happen on the real template.
# Handle route domains
# So I raise the error instead.
# starting with v14 options may return as a space delimited string in curly
# braces, eg "{ option1 option2 }", or simply "none" to indicate empty set
# we don't want options.  If we have any, indicate we should remove, else noop
# Copyright (c) 2018 F5 Networks Inc.
# license_state facts
# Yes, this is still called "bigip" even though this is querying the BIG-IQ
# product. This is likely due to BIG-IQ inheriting TMOS.
# CIDRs between 0 and 128 are allowed
# not a number, but that's ok. Further processing necessary
# Create a temporary address to check if the netmask IP is v4 or v6
# Create a more real v4 address using a wildcard, so that we can determine
# the CIDR value from it.
# It's easiest to just check the netmask by comparing dest IPs.
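The wildcard trick described above can be sketched with the stdlib `ipaddress` module (a stand-in for whatever helper the module actually uses): pair the netmask with a zero host address and read back the prefix length.

```python
import ipaddress

def netmask_to_cidr(netmask):
    """Derive the CIDR prefix length from a dotted/colon netmask string."""
    addr = ipaddress.ip_address(netmask)  # raises ValueError on junk input
    # A wildcard host lets ip_network infer the prefix from the mask alone
    wildcard = '0.0.0.0' if addr.version == 4 else '::'
    return ipaddress.ip_network(f'{wildcard}/{netmask}').prefixlen

print(netmask_to_cidr('255.255.255.0'))  # 24
```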
# The 'network' attribute is not updatable
# network 192.168.0.0 prefixlen 16 := "Network3"
# network 2402:9400:1000:0:: prefixlen 64 := "Network4"
# host 172.16.1.1/32 := "Host3"
# host 2001:0db8:85a3:0000:0000:8a2e:0370:7334 := "Host4"
# network 192.168.0.0%11/16 := "Network3"
# network 2402:9400:1000:0::%11/64 := "Network4"
# host 192.168.1.1%11/32 := "Host3"
# host 2001:0db8:85a3:0000:0000:8a2e:0370:7334%11 := "Host4"
# 10.0.0.0%12/8
# 2402:6940::%12/32 := "Network2"
# 192.168.1.1%12/32 := "Host1"
# 2402:9400:1000::%12/128
# 192.168.0.0/16 := "Network3"
# 2402:9400:1000:0::/64 := "Network4"
# 10.0.0.0/8
# 2402:6940::/32 := "Network2"
# 192.168.1.1/32 := "Host1"
# 2402:9400:1000::/128
# network 192.168.0.0%11 prefixlen 16 := "Network3",
# network 2402:9400:1000:0::%11 prefixlen 64 := "Network4",
# network 192.168.0.0 prefixlen 16 := "Network3",
# network 2402:9400:1000:0:: prefixlen 64 := "Network4",
# network 192.168.2.0/24,
# host 172.16.1.1%11/32 := "Host3"
# There is a 98% chance that the user will supply a data group that is < 1MB.
# 99.917% chance it is less than 10 MB. This is well within the range of typical
# memory available on a system.
# If this changes, this may need to be changed to use temporary files instead.
# External data groups are compared by their checksum, not their records. This
# is because the BIG-IP does not store the actual records in the API. It instead
# stores the checksum of the file. External DGs have the possibility of being huge
# and we would never want to do a comparison of such huge files.
# Therefore, comparison is no-op if the DG being worked with is an external DG.
# Remove the remote data group file if asked to
# First we remove extra keys in have for the same elements
# Next we do compare the lists as normal
# response = resp.json()
# Cannot create a partition in a partition, so nullify this
# BIG-IP will kill your management connection when you change the HTTP
# redirect setting. So this catches that and handles it gracefully.
# Wait for BIG-IP web server to settle after changing this
# result += [('address_list', x['address_list'])]
# Update any missing params
# The cli/script API is kinda weird in that it won't let us individually
# PATCH the description. We appear to need to include the content, otherwise
# we get errors about trying to replace procs that are needed by other
# scripts, i.e., the script we're trying to update.
# Reserving the right to add well-known ports
# This timeout name needs to be overridden because 'timeout' is a connection
# parameter and we don't want that to be the value that is always set here.
# These two settings are for IP Encapsulation
# IP Encapsulation related
# There can be only one
# monitor-enabled + checking:
# monitor-enabled + down:
# monitor-enabled + up
# this is necessary as in v12 there is a bug where the returned value has a space at the end
# If the user provides a name for the aggregate element, we use its name with the port; this is
# how F5 BIG-IP behaves as well.
# Read the current list of tunnels so that IP encapsulation
# checking can take place.
# required to remove REST option from choices and set default to CLI to be in line with docs
# At times we time out waiting on the task, as the task can be gone from the async queue after services reboot.
# We add an existence check here to catch the case where the file is created but the async task is removed.
# these two params are mutually exclusive, and so one must be zeroed
# out so that the other can be set. This zeros the non-specified values
# out so that the PATCH can happen
# we need to remove the double quotes from the items on the list so that the comparison engine
# does not report a change
# this method is needed because the API has problems handling spaces, and using just double quotes causes
# the API to complain about quote imbalance
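The normalization described above can be sketched as a pair of helpers; names are illustrative: strip wrapping double quotes before comparing, and re-quote only names containing spaces when sending back to the API.

```python
def normalize(name):
    # Strip wrapping double quotes so comparison does not report a change
    return name.strip('"')

def quote_if_needed(name):
    # Only names with spaces need quoting; quoting everything risks
    # the quote-imbalance complaint from the API
    return f'"{name}"' if ' ' in name else name
```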
# not in the official list
# if path.startswith(prof_path):
# if path.startswith(accprof_path):
# At the moment, BIG-IP wraps the names of log profiles in double-quotes if
# the profile name contains spaces. This is likely due to the REST code being
# too close to actual tmsh code and, at the tmsh level, a space in the profile
# name would cause tmsh to see the 2nd word (and beyond) as "the next parameter".
# This seems like a bug to me.
# match source with RD
# match source without RD
# we need to strip RD because when using Virtual Address names the RD is not needed.
# This is a special case for "all" enabled VLANs
# Regular checks
# For different server types
# Standard supports
# - tcp
# - udp
# - sctp
# - ipsec-ah
# - ipsec esp
# - all protocols
# Perf HTTP supports
# Stateless supports
# DHCP supports no IP protocols
# Internal supports
# Message Routing supports
# This must be changed back to a list to make a valid REST API
# value. The module manipulates this as a normal dictionary
# if self.want.has_message_routing_profiles:
# Sets a default profiles when creating a new standard virtual.
# It appears that if no profiles are deliberately specified, then under
# certain circumstances, the server type will default to ``performance-l4``.
# It's unclear what these circumstances are, but they are met in issue 00093.
# If this block of profile setting code is removed, the virtual server's
# type will change to performance-l4 for some reason.
# result = [fq_name(self.want.partition, x['name']) for x in response['items']]
# The internal type does not support the 'destination' parameter, so it is ignored.
# an FQDN list. This means that "all" will be returned as "/partition/all",
# "all", like "vlansall". Therefore we look for the forward slash because this
# raise F5ModuleError(f"want:{self.want.profiles} Have: {self.have.profiles}")
# when type is missing
# Mark the resource as managed by Ansible, this is default behavior
# Copyright: (c) 2022, F5 Networks Inc.
# The process of updating is a forced re-creation.
# Deleting images involves a short period of inconsistency in the REST
# API due to needing to remove files from disk and update MCPD.
# This should not (realistically) take more than 30 seconds.
# Creating images involves a short period of inconsistency in the REST
# API likely due to having to move files into appropriate places on disk
# and update MCPD with information.
# We want to return some information about the image that was just uploaded
# This must appear after the creation process because the information
# does not exist on the device (has been parsed by BIG-IP) until the
# ISO is uploaded.
# We need to do this because BIG-IP allows / in the names of GTM virtual servers, letting end users create such names incorrectly,
# Despite the fact that GTM server and GTM Virtual Server cannot be created outside the Common partition
# The value of this parameter in the API includes an extra space
# Built-in profiles cannot be removed
# make sure we are in the right CLI context, which should be
# enable mode and not config mode
# provider = load_provider(f5_provider_spec, self._task.args, module=self._task)
# (c) 2017, Red Hat, Inc.
# User requested backup and no error occurred in module.
# NOTE: If there is a parameter error, _backup key may not be in results.
# strip out any keys that have two leading and two trailing
# underscore characters
# - local is a special provider that is baked into the system and
# Fully Qualified name (with partition) for a list
# Handle a specific case of the user specifying ``|default(omit)``
# as the value to the auth_provider.
# In this case, Ansible will inject the omit-placeholder value
# and the module params incorrectly interpret this. This case
# can occur when specifying ``|default(omit)`` for a variable
# value defined in the ``environment`` section of a Play.
# An example of the omit placeholder is shown below.
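Detecting the omit placeholder can be sketched as below; the `__omit_place_holder__` prefix reflects the form of Ansible's placeholder token (prefix plus a per-run hash), and the helper name is an assumption.

```python
def strip_omit(value):
    """Treat Ansible's omit placeholder as if the value were unset."""
    # The placeholder looks like '__omit_place_holder__<hash>'
    if isinstance(value, str) and value.startswith('__omit_place_holder__'):
        return None
    return value
```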
# Adding this here because ``username`` is a connection parameter
# and in cases where it is also an API parameter, we run the risk
# of overriding the specified parameter with the connection parameter.
# Since this is a problem, and since "username" is never a valid
# parameter outside its usage in connection params (where we do not
# use the ApiParameter or ModuleParameters classes) it is safe to
# skip over it if it is provided.
# Handle weird API parameters like `dns.proxy.__iter__` by
# using a map provided by the module developer
# There is a mapped value for the api_map key
# If the mapped value does not have
# an associated setter
# The mapped value has a setter
# If the mapped value is not a @property
# Ensures that properties that weren't defined, and therefore stashed
# in the `_values` dict, will be retrievable.
# Copyright: (c) 2021, F5 Networks Inc.
# This collection version needs to be updated at each release
# Copyright (c) 2017, F5 Networks Inc.
# Set the last_url called
# This is used by the object destructor to erase the token when the
# ModuleManager exits and destroys the iControlRestSession object
# Catch HTTPError delivered from Ansible
# The structure of this object, in Ansible 2.8 is
# HttpError {
# current_bytes = 0
# fileobj.write(response.raw_content)
# If the size is zero, then this is the first time through
# the loop and we don't want to write data because we
# haven't yet figured out the total size of the file.
# Once we've downloaded the entire file, we can break out of
# the loop
# Determine the total number of bytes to read.
# If the file is smaller than the chunk_size, the BigIP
# will return an HTTP 400. Adjust the chunk_size down to
# the total file size...
# ...and pass on the rest of the code.
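The chunk-size adjustment described above can be sketched with a small planner (an illustration, not the module's code): if the remote file is smaller than the chunk size, shrink the chunk to the file size so the first ranged request is not rejected with HTTP 400.

```python
def plan_chunks(total_size, chunk_size):
    """Return the (start, end) byte ranges to request for a download."""
    # Shrink the chunk if the whole file is smaller than one chunk
    chunk = min(chunk_size, total_size)
    ranges, start = [], 0
    while start < total_size:
        end = min(start + chunk, total_size)
        ranges.append((start, end - 1))  # inclusive ranges, HTTP style
        start = end
    return ranges

print(plan_chunks(100, 512))   # [(0, 99)]
print(plan_chunks(1024, 512))  # [(0, 511), (512, 1023)]
```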
# This appears to be the largest chunk size that iControlREST can handle.
# The trade-off in choosing a chunk size is speed versus the size of each
# transmission. A lower chunk size will be slower because a smaller amount of
# data is read from disk and sent via HTTP. Lots of disk reads are slower, and
# there is overhead in sending each request to the BIG-IP.
# Larger chunk sizes are faster because more data is read from disk in one
# go, and therefore more data is transmitted to the BIG-IP in one HTTP request.
# If you are transmitting over a slow link though, it may be more reliable to
# transmit many small chunks than fewer large chunks. It will clearly take
# longer, but it may be more robust.
# Retries are used here to allow the REST API to recover if you kill
# an upload mid-transfer.
# There exists a case where retrying a new upload will result in the
# API returning the POSTed payload (in bytes) with a non-200 response
# Retrying (after seeking back to 0) seems to resolve this problem.
# Data should always be sent using the ``data`` keyword and not the
# ``json`` keyword. This allows bytes to be sent (such as in the case
# of uploading ISO files).
# When this fails, the output is usually the body of whatever you
# POSTed. This is almost always unreadable because it is a series
# of bytes.
# Therefore, we only inform on the returned HTTP error code.
# You must seek back to the beginning of the file upon exception.
# If this is not done, then you risk uploading a partial file.
# Copyright (c) 2020 F5 Networks Inc.
# we need to ensure that any connection errors to TEEM do not cause the module run to fail.
# result can be None if this branch is reached first
# For example, the mgmt/tm/net/trunk/NAME/stats API
# returns counters.bitsIn before anything else.
# Copyright(C) 2023 Kaytus Inc. All Rights Reserved.
# Copyright: (c) 2021, Alina Buzachis <@alinabuzachis>
# Parameters for the Lookup Managed Object Reference (MoID) plugins
# this is an internal flag that indicates if we tried to find the datacenter or not
# if it's true, we stop trying and save some API calls.
# see add_intermediate_path_part_to_filter_spec
# we're at the end of the object path. Either return the object, or return
# all of the objects it contains (for example, the children inside of a folder)
# we're in the middle of an object path; look up the object at this level
# and add it to the filters for the next round of searching
# If we haven't searched for a datacenter yet, this is the first item in the path and it's likely
# the datacenter. If it's not, continue the search as normal and don't search for the datacenter
# again
# Resource pools can only be in the vm filter spec
# Clusters can be used in the vm, host, or resource pool filter specs
# Hosts can be in the filter spec for vms, networks, datastores, or resource pools
# Folders can be used in the filter spec for everything except resource pools
# template: header.j2
# This module is autogenerated using the ansible.content_builder.
# See: https://github.com/ansible-community/ansible.content_builder
# This structure describes the format of the data expected by the end-points
# template: default_module.j2
# 7.0.2+
# 7.0.2 and greater
# TODO: fetch the object
# Nothing has changed
# 7.0.2
# e.g: content_configuration
# template: info_list_and_get_module.j2
# TODO extend the list of filter
# this is a list of id, we fetch the details
# template: info_no_list_module.j2
# The PUT answer does not let us know if the resource has actually been
# modified
# On vSphere 7.0.2, a network configuration change of the appliance raises a systemd error,
# but the change is applied. The problem can be resolved by a yum update.
# This file is maintained in the vmware_rest_code_generator project
# https://github.com/ansible-collections/vmware_rest_code_generator
# TODO: Handle session timeout
# < 7.0.2
# e.g: appliance_infraprofile_configs_info
# e.g: appliance_infraprofile_configs
# NOTE: this is a shortcut/hack. We get this issue if a CDROM already exists
# NOTE: another one for vcenter_host
# 7.0.3, vcenter_ovf_libraryitem returns status 200 on failure
# 7.0.2 <
# Content library returns string {"value": "library_id"}
# The list already comes with all the details
# remove the action=foo from the URL
# workaround for content_library_item_info
# NOTE: This mapping can be extracted from the delete end-point of the
# resource, e.g:
# /rest/vcenter/vm/{vm}/hardware/ethernet/{nic} -> nic
# Also, it sounds like we can use "list_index" instead
# Configuration file for the Sphinx documentation builder.
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- Project information -----------------------------------------------------
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
# Add any paths that contain templates here, relative to this directory.
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# Copyright: (c) 2023, Ansible Cloud Team (@ansible-collections)
# This document fragment serves as a partial base for all vmware lookups. It should be used in addition to the base fragment, vmware.vmware.base_options
# since that contains the actual argument descriptions and defaults. This just defines the environment variables since plugins have something
# like the module spec where that is usually done.
# Copyright: (c) 2016, Charles Paul <cpaul@ansible.com>
# Copyright: (c) 2019, Abhijeet Kasurde <akasurde@redhat.com>
# This document fragment serves as a complement to the vmware.vmware.base documentation fragment for modules
# that use the REST API SDK. You must include the base fragment in addition to this
# This vmware.vmware.additional_rest_options fragment will cover any options returned by rest_compatible_argument_spec()
# that are not included in vmware.vmware.base
# This document fragment serves as a base for all vmware modules. If you are using the REST API SDK in your module,
# you should also include the vmware.vmware.additional_rest_options fragment.
# This vmware.vmware.base_options fragment covers the arg spec provided by the base_argument_spec() function
# This document fragment serves as a base for all vmware deploy_* modules. It should be used in addition to
# the base argument specs for pyvmomi or rest
# This document fragment serves as a partial base for all vmware plugins. It should be used in addition to the base fragment, vmware.vmware.base_options
# First of all populate options,
# this will already take into account env vars and ini config
# an empty list will cause vsphere to include all object types
# if path is one level deep (or less), we will only ever find datacenters or folders.
# if path is more than one level deep, the second level of the path needs to be
# one of the known folder types
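The two depth rules above can be sketched roughly like this; the set of known folder types is an illustrative assumption (vCenter's standard datacenter child folders), not the collection's actual constant:

```python
# Hypothetical sketch of the path-depth check described above.
KNOWN_FOLDER_TYPES = {"vm", "host", "network", "datastore"}

def path_could_resolve(path):
    """Return True if the path could resolve inside vCenter's folder layout."""
    parts = [p for p in path.strip("/").split("/") if p]
    if len(parts) <= 1:
        # one level deep (or less): only datacenters or folders can match
        return True
    # deeper paths: the second level must be one of the known folder types
    return parts[1] in KNOWN_FOLDER_TYPES
```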
# Copyright: (c) 2024, Ansible Cloud Team
# Custom values
# check user-specified directive
# Already handled in base class
# needed by keyed_groups default value
# needed by esxi_host and cluster properties value
# We already looked up the management IP from vcenter this session, so
# reuse that value
# If this is an object created from the cache, we won't be able to access
# vcenter. But we stored the management IP in the properties when we originally
# created the object (before the cache) so use that value
# Finally, try to find the IP from vcenter. It might not exist, in which case we
# return an empty string
# needed to filter out disconnected or unreachable hosts in self.populate_from_vcenter
# needed by keyed_groups default
# Copyright: (c) 2019, Pavan Bidkar <pbidkar@vmware.com>
# User is searching for a library item in any library, we don't need to specify a
# library ID and can save an API call or two
# User specified the library search params or is trying to gather all items, we need
# to lookup all of the library IDs first
# iterate over each host due to a too_many_matches error when looking at all vms on a cluster
# https://github.com/vmware/vsphere-automation-sdk-python/issues/142
# Copyright: (c) 2025, Ansible Cloud Team (@ansible-collections)
# empty string causes vmware to drop the setting altogether
# if user does not set the cpu or mem reservations, we use auto-computing.
# we use auto-computing and the config does not, so there is a diff
# we don't use auto-computing and the config does, so there is a diff
# check the PDL response mode first
# if apd response is not taking any actions, we don't need to check the other options.
# check the rest of the options for apd
# HA VM Monitoring related parameters
# HA Admission Control related parameters
# This module is also sponsored by E.T.A.I. (www.etai.fr)
# Tag already exists and is being updated or removed, so we can use the remote def ID
# Tag didn't exist and we created it, so we have the new ID available
# Caches to help speed up the crawl through the remote tags
# Maps category IDs to a list of tag IDs
# Maps tag names to tag models
# we found all of the categories in the parameters, no need to keep looking
# user specified the category name in the parameters and we found it
# We've never looked this tag up, so we crawl through the category tags until we find it or we run out of tags
# We've also never looked up the tags in this category, so we need to initialize the cache
# Cache tag for future lookups
# whatever folder is in the middle of the path doesn't exist, so we may as well
# start from the beginning, or /datacenter/type, and check the whole path
# since these are vmotion recommendations, there's only ever one action
# get the most recent task for the target VM, which should be the vmotion task we just triggered
# legacy output
# Initialize member variables
# Create template placement specs
# Handled in base class
# Create a library item
# Upload the file content to the file upload URL
# Ignore preview warnings on session
# complete tells vcenter that we are done making changes on our side and the upload can complete.
# Save a dictionary of portgroup details for reuse
# Copyright: (c) 2024, Ansible Cloud Team (@ansible-collections)
# Set result['changed'] immediately because
# shutdown and reboot return None.
# Need Force
# As this is an async task, we create a scheduled task and mark state as changed.
# can't break here, need to check all the others in case there's more than one
# Security
# Proxy
# Ntp
# DNS
# General
# Fetch the rules:
# Update the dictionary with user provided values:
# Update the rules:
# before is always a tag model, after is a dict of parameters
# Category already exists and is being updated or removed, so we can use the remote def ID
# Category didn't exist and we created it, so we have the new ID available
# Cycle through all remote categories and find the ones that match the parameters. Determine
# what changes need to be made with them.
# Determine what to do with the parameters that were not found in the remote categories
# Build the Spec
# the parent is a cluster
# Get the thumbprint of the SSL certificate
# this field has the ESXi host object in it, which can't be output by ansible without manipulation.
# but we don't need it in the output anyway, so just delete it
# User provided the tag ID, so we can just see if that is already associated with the object
# Tag is not associated with the object, so we need to add it
# Tag is already associated with the object, we don't want to remove it as an "extra"
# User provided the tag name, so we need to look up the tag by name and category
# Check if the tag is already managed and present on the object
# If the tag is not managed and present on the object, we need to look it up and then add it
# We cache the object tags lookups to avoid making duplicate API calls
# If a tag on the object has the same name and category as the one from the user,
# we don't want to remove it as an "extra"
# handled in base class
# Note: This utility is considered private, and can only be referenced from inside the vmware.vmware collection.
# this fits the format of a fully qualified path and even though datacenter name was passed in,
# we can attempt to treat it as a fq path and the user will just get an error later on
# the path is too vague to complete without the datacenter name
# Start with the immediate parent, accounting for different parent types
# - The default for most objects is 'parent'
# - VMs in VApps have 'parentVApp'
# Get the next parent in the hierarchy, accounting for different parent types
# - Regular folders have 'parent'
# - VApps can have 'parentFolder' (top-level VApp) or 'parent' (nested VApp)
# facts that may or may not exist
# User does not have read permission for the host system,
# proceed without this value. This value does not contribute or hamper
# provisioning or power management operations.
# dpm host power rate is reversed by the vsphere API, so a 1 in the API is really a 5 in the UI
# docs call one option 'automatic' but the API calls it 'automated'. So we adjust here to match docs
# drs vmotion rate is reversed by the vsphere API, so a 1 in the API is really a 5 in the UI
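Since both the DPM and DRS rates are 1-5 scales flipped end-for-end between the API and the UI, the conversion the comments describe reduces to a single subtraction (a sketch, not the module's actual helper):

```python
def api_rate_to_ui(api_rate):
    # the vSphere API stores the 1-5 rate reversed relative to the UI:
    # API 1 -> UI 5, API 5 -> UI 1, and the mapping is its own inverse
    return 6 - api_rate
```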
# Conversion to JSON
# To match gather_vm_facts output
# TODO remove hasattr checks once all objects implement abstractvsphereobject
# user did not specify this parameter, but ansible "set" it to None so
# we missed the KeyError that would have been raised above.
# Create services with focused dependencies
# Device linked handlers - manages VM device configurations that need to be linked to a controller
# All other handlers - manages VM parameters that are not device linked like resources, metadata, etc.
# Controller handlers are separate from the other handlers because they need to
# be processed and initiated before the disk params are parsed.
# Controller handlers need to be processed and initiated before the disk params are parsed
# some devices are not managed by this module (like VMCI),
# so we should skip them instead of failing to link and removing them
# dvs portgroups are technically standard portgroups, so we need to check for dvs first and
# then fallback to standard portgroups
# hot add is allowed, so we can proceed with the change without power cycling
# this cannot be 0 since it is used as a denominator, but 1 will still work
# Define parameter mappings for change detection
# hot add/remove is allowed, so we can proceed with the change without power cycling
# Network adapters are not easily identified, but vmware always lists them in the same order.
# So we can just link the first one that doesn't have a linked device.
# the device is not linked to anything, and no DeviceLinkError was raised,
# so the module will ignore it
# {bus_number: controller}
# Remove None values from both new and old values
# Controller configurations: (key_range_start, key_range_end)
# requests is required for exception handling of the ConnectionError
# not an ESXi
# Disabling atexit should be used in special cases only.
# Such as IP change of the ESXi host which removes the connection anyway.
# Also removal significantly speeds up the return of the module
# Python < 2.7.9 or RHEL/Centos < 7.4
# Must be ContainerView
# Construct the actual objects by MOID
# Force a property fetch to confirm validity
# Copyright (c) 2018-2021 Fortinet and/or its affiliates.
# If Login worked, then inspect the FortiManager for Workspace Mode, and its system information.
# CHECK FOR WORKSPACE MODE TO SEE IF WE HAVE TO ENABLE ADOM LOCKS
# THE CONNECTION GOT LOST SOMEHOW, REMOVE THE SID AND REPORT BAD LOGIN
# IF WE WERE USING WORKSPACES, THEN CLEAN UP OUR LOCKS IF THEY STILL EXIST
# Don't log sensitive information
# Sending URL and Data in Unicode, per Ansible Specifications for Connection Plugins
# Get Unicode Response - Must convert from StringIO to unicode first so we can do a replace function below
# The FortiManager is running in workspace mode, please set `workspace_locking_adom` in your playbook
# FIXME: by default, users have to know whether their fmg devices are running in workspace mode and
# specify the parameters in the playbook; we will find a better way to notify the users of this error
# XXX: here is a situation where the user can still have no permission to access resources:
# indeed the workspace lock is acquired by the user himself, but the lock is not
# associated with this session.
# XXX: defer the lock acquisition process until after login is done
# it requires that the first task specify the workspace locking adom
# if it's really executed in lock context
##################################
# BEGIN DATABASE LOCK CONTEXT CODE
# Skip this step if user is not permitted to access this resource
# rc=-9: current adom is not in the workspace mode.
# if 'data' is not in the response, the adom is locked by no one
# END DATABASE LOCK CONTEXT CODE
# Copyright 2019-2024 Fortinet, Inc.
# Copyright 2019-2021 Fortinet, Inc.
# (c) 2017-2020 Fortinet, Inc
# BEGIN STATIC DATA / MESSAGES
# BEGIN ERROR EXCEPTIONS
# END ERROR CLASSES
# BEGIN CLASSES
# (c) 2020-2021 Fortinet, Inc
# These params are raw data; we need to decide whether to bypass manually.
# params can be user input or api format data
# var-name -> var_name
# fmgr_message -> message
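A minimal sketch of the two renamings mentioned above (`var-name -> var_name`, `fmgr_message -> message`); `to_ansible_name` and its `module_prefix` parameter are hypothetical names for illustration, not the collection's actual API:

```python
def to_ansible_name(api_name, module_prefix=None):
    # hyphens in API field names become underscores in Ansible params
    name = api_name.replace("-", "_")
    # a leading module prefix (e.g. 'fmgr_') is stripped, hypothetical helper
    if module_prefix and name.startswith(module_prefix + "_"):
        name = name[len(module_prefix) + 1:]
    return name
```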
# otherwise, user_params is str, int, float... return directly.
# Get target URL
# in case the `GET` method returns nothing... see module `fmgr_antivirus_mmschecksum`
# Try to get and compare, and skip update if same.
# Check required parameter.
# extracted required parameter.
# Check required parameter for selector: all
# process specific selector and 'all'
# This function is used for full_crud task only
# adom = "", choose default URL which is for all domains
# If has mvalue and not full crud {pkg_path}, add mvalue
# Won't update if remote = 'var' and local = ['var']
# local_value is not list or dict, maybe int, float or str
# e.g., subnet
# Won't update if remote = ['var'] and local = 'var'
# if system version is not determined, give up version checking
# Empty string means no max version
# If the parameter is not given, ignore that.
# assert blob['fail_action'] == 'quit':
# int, float, str, bool
# Mask sensitive data
# Handle special params here
# ignore ['1.2.3.4', '255.255.255.0'] and '1.2.3.4/24'
# ignore ['1.2.3.4', '255.255.255.0'] and '1.2.3.4 255.255.255.0'
# ignore ['var'] and 'var'
# User set it to empty
# ignore ['1', '2'] and ['2', '1']
# [], "", ...
# If this module doesn't have mvalue, it will be ""
# keys are internal in FMG devices.
# key is found
# Do immediate fix.
# make urls from schema and parameters provided.
# Fix for fmgr_sys_hitcount
# the failing conditions priority: failed_when > rc_failed > rc_succeeded.
# Copyright (c) 2022 Fortinet
# import requests
# _options is defined at https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/__init__.py#L60
# seems to be an ansible bug that the value can be defined in the doc above but not set/get
# token = self._options.get('access_token') if 'access_token' in self._options else None
# Read from session_key as dict in ansible host files, e.g: ansible_httpapi_session_key={"access_token":"XXX"}
# some older fortios version may pass session key through cookies
# pass
# cookies = response.raw.headers.get_all("Set-Cookie")
# self.log('all cookies returned from server: %s' % to_text(cookies))
# self.log("found session_key in cookie: %s" % session_key.group(1))
# self._session_key = session_key.group(1)
# headers['x-csrftoken'] = csrftoken_search.group(1)
# self._ccsrf_token = csrftoken_search.group(1)
# trigger automated login call by httpapi
# Copyright: (c) 2022 Fortinet
# check_mode starts from here
# global object response
# 2. if it exists and the state is 'present' then compare current settings with desired
# for non global modules, mkeyname must exist and it's a new module when mkey is None
# if mkey exists then compare each other
# record exists; check whether they match or not
# pass post processed data to member operations
# no need to do underscore_to_hyphen since do_member_operation handles it by itself
# Copyright 2020 Fortinet, Inc.
# parameter validation will not block the task; a warning will be provided in case of parameter validation failure.
# Checking system status prevents upload.system.vmlicense from uploading a licence to a newly installed machine.
# Logging for fact module could be disabled/enabled.
# Give priority to jsonbody
# Copyright 2020-2021 Fortinet, Inc.
# from urllib.parse import quote
# For Python 3
# For Python 2
# XXX: The plugin level does not accept duplicated url keys, so we only keep one key here.
# some raw results are not lists, so we need to wrap them first in order to use the flatten call below
# Only selector or selectors is provided.
# **selector_obj,
# if param_name not in provided_param_names:
# check for pyFMG lib - DEPRECATING
# ACTIVE BUG WITH OUR DEBUG IMPORT CALL -- BECAUSE IT'S UNDER MODULE_UTILITIES
# WHEN module_common.recursive_finder() runs under the module loader, it looks for this namespace debug import
# and because it's not there, it always fails, regardless of it being under a try/catch here.
# we're going to move it to a different namespace.
# # check for debug lib
# except:
# BEGIN HANDLER CLASSES
# if HAS_FMGR_DEBUG:
# Get the Return code from results
# init a few items
# Get the default values for the said return code.
# ONLY add to overrides if not None -- it is very important that the keys aren't added at this stage
# if they are empty. And there aren't that many, so let's just do a few if-then statements.
# VALIDATION ERROR
# IDENTIFY SUCCESS/FAIL IF NOT DEFINED
# IF NO MESSAGE WAS SUPPLIED, GET IT FROM THE RESULTS, IF THAT DOESN'T WORK, THEN WRITE AN ERROR MESSAGE
# BECAUSE SKIPPED/FAILED WILL OFTEN OCCUR ON CODES THAT DON'T GET INCLUDED, THEY ARE CONSIDERED FAILURES
# HOWEVER, THEY ARE MUTUALLY EXCLUSIVE, SO IF IT IS MARKED SKIPPED OR UNREACHABLE BY THE MODULE LOGIC
# THEN REMOVE THE FAILED FLAG SO IT DOESN'T OVERRIDE THE DESIRED STATUS OF SKIPPED OR UNREACHABLE.
##########################
# BEGIN DEPRECATED METHODS
# SOME OF THIS CODE IS DUPLICATED IN THE PLUGIN, BUT THOSE ARE PLUGIN SPECIFIC. THIS VERSION STILL ALLOWS FOR
# THE USAGE OF PYFMG FOR CUSTOMERS WHO HAVE NOT YET UPGRADED TO ANSIBLE 2.7
# LEGACY PYFMG METHODS START
# USED TO DETERMINE LOCK CONTEXT ON A FORTIMANAGER. A DATABASE LOCKING CONCEPT THAT NEEDS TO BE ACCOUNTED FOR.
# DEPRECATED -- USE PLUGIN INSTEAD
# END DEPRECATED METHODS
# FMGR RETURN CODES
# RECURSIVE FUNCTIONS START
# BEGIN DEPRECATED
# check for pyFG lib
# see mantis #0690570, if the semantic meaning changes, remove choices as well
# also see accept_auth_by_cert of module fortios_system_csf.
# ansible-test now requires choices are present in spec
# try to detect the versioning gaps and mark them as violations:
# even if it's not supported in the earliest version
# Parameter inconsistency here is not covered by Ansible; we gracefully throw a warning
# in case no top-level parameters are given.
# see module: fortios_firewall_policy
# END DEPRECATED
# when attr_params is a list
# attribute terminated
# terminated normally as last level parameter.
# DELETE share same url with GET
# here we get both module arg spec and provided params
# collect attribute metadata.
# validate parameters on attributes path.
# raise AssertionError(
# Handle the 'move' action logic here, as it is only supported in PUT requests.
# Failing to address this will result in an API issue, since action=move will be included
# in the parameters for a GET request.
# load in file_mode
# get config
# set configs in object
# backup if needed
# Commit if not check mode
# Something's wrong (rollback is automatic)
# for pytest to look up the module in the same directory
# print("same value confirmed, continue", reorder_current[key], value)
# print("enter", small, big, isinstance(small, list), isinstance(big, list))
# A hack to track all keys that are known to be present; later it can be used to filter out current keys with default values.
# This helps to remove unwanted keys in the result and make the check and diff mode results more clear.
# Go through the small items first to collect existing configurations and
# then add the new ones to the end of the result list to make the diff mode results more clear
# print('     result after small items:', result)
# print("    not in result", big_item, result)
# raise Exception(f"small: {small}, big before {big}, big after: {big.strip('" ')}, IP_PREFIX: {IP_PREFIX.match(big.strip('" '))}")
# raise Exception(match_applied_ip_address_format(strip_big, small) if IP_PREFIX.match(strip_big) else strip_big)
# print('data type', type(input_data), isinstance(input_data, list))
# print('base case')
# print("proc dict")
# print('proc list')
# Copyright 2019 Fortinet, Inc.
# the input is in the netmask format
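When the input is in netmask format, converting it to a prefix length can be done with the stdlib `ipaddress` module; this helper is a sketch, not the collection's implementation:

```python
import ipaddress

def netmask_to_prefix(netmask):
    # e.g. '255.255.255.0' -> 24; builds a throwaway network just to read prefixlen
    return ipaddress.ip_network("0.0.0.0/{}".format(netmask)).prefixlen
```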
# Copyright (c), RavenDB
# GNU General Public License v3.0 or later (see COPYING or
# RavenDB documentation fragment
# assert s.is_enabled
# Copyright: (c) 2021, Ishan Jain (@ishanjainn)
# Copyright: (c) 2024, téïcée (www.teicee.com)
# check if user exists by the provided login
# if no user has this login, check the email if provided
# try with new password to check if already changed
# Auth is OK, password does not need to be changed
# from here, we begin password change procedure
# Grafana admin API is only accessible with basic auth, not a token
# So we must provide the admin name and password
# Copyright (C) 2020 Inspur Inc. All Rights Reserved.
# Copyright(C) 2020 Inspur Inc. All Rights Reserved.
# Copyright (c), Inspur isib-group, 2020
# Copyright: (c) 2021, Devon Mar (@devon-mar)
# Copyright: (c) 2019. Chris Mills <chris@discreet-its.co.uk>
# Make call to NetBox API and capture any failures
# Copyright (c) 2018 Remy Leone
# get the user's cache option to see if we should save the cache if it is changing
# occurs if the cache_key is not in the cache or if the cache_key expired
# we need to fetch the URL now
# not reading from cache so do fetch
# Prevent inventory from failing completely if the token does not have the proper permissions for specific URLs
# Need to return mock response data that is empty to prevent any failures downstream
# put result in cache if enabled
# Handle pagination
# Make an API call for multiple specific IDs, like /api/ipam/ip-addresses?limit=0&device_id=1&device_id=2&device_id=3
# Drastically cuts down HTTP requests compared to 1 request per host, in the case where we don't want to fetch_all
# Make sure query_values is subscriptable
# Calculate how many queries we can do per API call to stay within max_url_length
# values are always id ints
# Sanity check, for case where max_uri_length < (api_url + length_per_value)
# Issue netbox-community/netbox#3507 was fixed in v2.7.5
# If using NetBox v2.7.0-v2.7.4 will have to manually set max_uri_length to 0,
# but it's probably faster to keep fetch_all: true
# (You should really just upgrade your NetBox install)
# process chunk of size <= chunk_size
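A rough sketch of the chunking arithmetic described above; the function and parameter names are assumptions for illustration, not the plugin's real API:

```python
def chunk_query_values(api_url, query_key, values, max_url_length=2048):
    """Split values into chunks so each generated URL stays under max_url_length."""
    values = list(values)  # make sure values is subscriptable (assumed non-empty)
    # each value appends '&<key>=<id>' to the URL; ids are assumed to be ints
    length_per_value = len("&{}=".format(query_key)) + max(len(str(v)) for v in values)
    # how many values fit per API call; clamp to 1 as a sanity check for the
    # case where max_url_length < len(api_url) + length_per_value
    chunk_size = max((max_url_length - len(api_url)) // length_per_value, 1)
    return [values[i:i + chunk_size] for i in range(0, len(values), chunk_size)]
```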
# List of group_by options and hostvars to extract
# Some keys are different depending on plurals option
# Locations were added in 2.11 replacing rack-groups.
# If plurals is enabled, wrap in a single-element list for backwards compatibility
# Keep looping until the object has no parent
# Won't ever happen - defensively guard against infinite loop
# Get the parent of this object
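The parent-walking loop with its defensive infinite-loop guard might look like this; `parent_chain` and the id-to-parent dict are illustrative, not the plugin's actual structures:

```python
def parent_chain(obj_id, parent_lookup):
    """Walk parent pointers, collecting ancestors; guard against cycles."""
    chain = []
    seen = set()
    current = obj_id
    # keep looping until the object has no parent
    while current is not None:
        if current in seen:
            # won't ever happen in sane data - defensively guard against infinite loop
            break
        seen.add(current)
        chain.append(current)
        # get the parent of this object, None if it has no parent
        current = parent_lookup.get(current)
    return chain
```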
# A host may have a rack. A rack may have a rack_group. A rack_group may have a parent rack_group.
# Produce a list of rack_groups:
# - it will be empty if the device has no rack, or the rack has no rack_group
# - it will have 1 element if the rack's group has no parent
# - it will have multiple elements if the rack's group has a parent group
# Device has no rack
# If prefixes have been pulled, attach prefix list to its assigned site
# Don't wrap in an array if we're about to flatten it to separate host vars
# Check the type of the first element in the "tags" array.
# If a dictionary (NetBox >= 2.9), return an array of tags' slugs.
# If a string (NetBox <= 2.8), return the original "tags" array.
# If tag_zero fails definition (no tags), return the empty array.
# Attach IP Addresses to their interface
# A host may have a site. A site may have a region. A region may have a parent region.
# Produce a list of regions:
# - it will be empty if the device has no site, or the site has no region set
# - it will have 1 element if the site's region has no parent
# - it will have multiple elements if the site's region has a parent region
# Device has no site
# A host may have a site. A site may have a site_group. A site_group may have a parent site_group.
# Produce a list of site_groups:
# - it will be empty if the device has no site, or the site has no site_group set
# - it will have 1 element if the site's site_group has no parent
# - it will have multiple elements if the site's site_group has a parent site_group
# A host may have a location. A location may have a parent location.
# Produce a list of locations:
# - it will be empty if the device has no location
# - it will have 1 element if the device's location has no parent
# - it will have multiple elements if the location has a parent location
# Device has no location
# cluster does not have a slug
# No primary IP assigned
# Don"t assign a host_var for empty dns_name
# Three dictionaries are created here.
# "sites_lookup_slug" only contains the slug. Used by _add_site_groups() when creating inventory groups
# "sites_lookup" contains the full data structure. Most site lookups use this
# "sites_with_prefixes" keeps track of which sites have prefixes assigned. Passed to get_resource_list_chunked()
# The following dictionary is used for host group creation only,
# as the grouping function expects a string as the value of each key
# If the "site_data" option is specified, keep the full data structure presented by the API response.
# The "prefixes" option necessitates this structure as well as it requires the site object to be dict().
# Otherwise, set equal to the "slug only" dictionary
# The following dictionary tracks which sites have prefixes assigned.
# Used by refresh_prefixes()
# Will fail if site does not have a region defined in NetBox
# Dictionary of site id to region id
# Will fail if site does not have a group defined in NetBox
# Dictionary of site id to site_group id
# Will fail if site does not have a time_zone defined in NetBox
# Dictionary of site id to time_zone name (if group by time_zone is used)
# Dictionary of site id to utc_offset name (if group by utc_offset is used)
# Will fail if site does not have a facility defined in NetBox
# Dictionary of site id to facility (if group by facility is used)
# Note: depends on the result of refresh_sites_lookup for self.sites_with_prefixes
# Pull all prefixes defined in NetBox
# We are only concerned with Prefixes that have actually been assigned to sites
# NetBox >=4.2
# NetBox <=4.1
# Remove "site" attribute, as it's redundant when prefixes are assigned to site
# Will fail if region does not have a parent region
# Dictionary of region id to parent region id
# Will fail if site_group does not have a parent site_group
# Dictionary of site_group id to parent site_group id
# Locations were added in v2.11. Return empty lookups for previous versions.
# Will fail if location does not have a parent location
# Locations MUST be assigned to a site
# Dictionary of location id to parent location id
# Location to site lookup
# Locations were added in v2.11 replacing rack groups. Do nothing for 2.11+
# Dictionary of rack group id to parent rack group id
# Will fail if cluster does not have a type (required property so should always be true)
# Will fail if cluster does not have a group (group is optional)
# Query only affected devices and vms and sanitize the list to only contain every ID once
# Construct a dictionary of dictionaries, separately for devices and vms.
# Allows looking up services by device id or vm id
# For a given device id or vm id, get a lookup of interface id to interface
# This is because interfaces may be returned multiple times when querying for virtual chassis parent and child in separate queries
# /dcim/interfaces gives count_ipaddresses per interface. /virtualization/interfaces does not
# Check if device_id is actually a device we've fetched, and was not filtered out by query_filters
# Check if device_id is part of a virtual chassis
# If so, treat its interfaces as actually part of the master
# Keep track of what devices have interfaces with IPs, so if fetch_all is False we can avoid unnecessary queries
# Note: depends on the result of refresh_interfaces for self.devices_with_ips
# Construct a dictionary of lists, to allow looking up ip addresses by interface id
# Note that interface ids share the same namespace for both devices and vms so this is a single dictionary
# Construct a dictionary of the IP addresses themselves
# NetBox v2.9 and onwards
# As of NetBox v2.9 "assigned_object_x" replaces "interface"
# We need to copy the ipaddress entry to preserve the original in case caching is used.
# Remove "assigned_object_X" attributes, as that's redundant when ipaddress is added to an interface
# Remove "interface" attribute, as that's redundant when ipaddress is added to an interface
# IP addresses are needed for either interfaces or dns_name options
# Exceptions that occur in threads by default are printed to stderr, and ignored by the main thread
# They need to be caught, and raised in the main thread to prevent further execution of this plugin
# Save for the main-thread to re-raise
# Also continue to raise on this thread, so the default handler can run to print to stderr
# Wait till we've joined all threads before raising any exceptions
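One common way to surface worker-thread exceptions in the main thread is a `threading.Thread` subclass like the sketch below. This is a simplification of what the comments describe: the real plugin also re-raises on the worker thread so the default stderr handler still runs, and it waits until all threads are joined before raising:

```python
import threading

class ErrorCapturingThread(threading.Thread):
    """Run the target, saving any exception for the main thread to re-raise."""
    exc = None

    def run(self):
        try:
            super(ErrorCapturingThread, self).run()
        except Exception as e:
            # save for the main thread to re-raise after join()
            self.exc = e

    def join(self, timeout=None):
        super(ErrorCapturingThread, self).join(timeout)
        if self.exc is not None:
            raise self.exc
```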
# Avoid retain cycles
# For each element of query_filters, test if it's allowed
# Create a partial function with the device-specific list of query parameters
# Add query_filters to both devices and vms query, if they're valid
# When query_filters is Iterable, and is not empty:
# - If none of the filters are valid for devices, do not fetch any devices
# - If none of the filters are valid for VMs, do not fetch any VMs
# If either device_query_filters or vm_query_filters are set,
# device_query_parameters and vm_query_parameters will have > 1 element so will continue to be requested
# Append the parameters to the URLs
# Exclude config_context if not required
# Allow looking up devices/vms by their ids
# There's nothing that explicitly says if a host is virtual or not - add in a new field
# A host in an Ansible inventory requires a hostname.
# name is a unique but not required attribute for a device in NetBox
# We default to a UUID for the hostname in case the name is not set in NetBox
# Use virtual chassis name if set by the user.
# Check for special case - if group is a boolean, just return grouping name instead
# eg. "is_virtual" - returns true for VMs, should put them in a group named "is_virtual", not "is_virtual_True"
# Don't create the inverse group
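The boolean special case above can be sketched as follows; `group_name` is a hypothetical helper, not the plugin's actual function:

```python
def group_name(grouping, value):
    # booleans produce the bare grouping name (e.g. "is_virtual"),
    # never "is_virtual_True"
    if isinstance(value, bool):
        # don't create the inverse group for False values
        return grouping if value else None
    return "{}_{}".format(grouping, value)
```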
# Special case. Extract name from service, which is a hash.
# Don't handle regions here since no hosts are ever added to region groups
# Sites and locations are also specially handled in the main()
# Make groups_for_host a list if it isn't already
# Group names may be transformed by the ansible TRANSFORM_INVALID_GROUP_CHARS setting
# add_group returns the actual group name used
# Map site id to transformed group names
# "Slug" only. Data not used for grouping
# Add the site group to get its transformed name
# Mapping of region id to group name
# Add site groups as children of region groups
# Mapping of site_group id to group name
# Add site groups as children of site_group groups
# Mapping of location id to group name
# Add location to site groups as children
# Only top level locations should be children of sites
# Mapping of id to group name
# Create groups for each object
# Now that all groups exist, add relationships between them
# Compare with None, not just check for a truth comparison - allow empty arrays, etc to be host vars
# Special case - all group_by options are single strings, but tag is a list of tags
# Keep the groups named singular "tag_sometag", but host attribute should be "tags":["sometag", "someothertag"]
# Flatten the dict into separate host vars, if enabled
# Check if pytz lib is installed, and give an error if not
# Get info about the API - version, allowed query parameters
# Interface, and Service lookup will depend on hosts, if option fetch_all is false
# Looking up IP Addresses depends on the result of interfaces count_ipaddresses field
# - can skip any device/vm without any IPs
# If we're grouping by regions, hosts are not added to region groups
# If we're grouping by locations, hosts may be added to the site or location
# - the site groups are added as sub-groups of regions
# - the location groups are added as sub-groups of sites
# So, we need to make sure we're also grouping by sites if regions or locations are enabled
# Create groups for locations. Will be a part of site groups.
# Create groups for regions, containing the site groups
# Create groups for site_groups, containing the site groups
# Device is part of a virtual chassis, but is not the master
# Special processing for sites and locations as those groups were already created
# Add host to location group when host is assigned to the location
# Add host to site group when host is NOT assigned to a location
# NetBox access
# check if token is new format
# Handle extra "/" from api_endpoint configuration and trim if necessary, see PR#49943
# Filter and group_by options
# Compile regular expressions, if any
# © 2020 Nokia
# Licensed under the GNU General Public License v3.0 only
# Copyright: (c) 2023, Andrii Konts (@andrii-konts) <andrew.konts@uk2group.com>
# Copyright: (c) 2018, Mikhail Yohman (@FragmentedPacket) <mikhail.yohman@gmail.com>
# Copyright: (c) 2019, Benjamin Vergnaud (@bvergnaud)
# Copyright: (c) 2021, Martin Rødvand (@rodvand)
# Copyright: (c) 2023, Martin Rødvand (@rodvand) <martin@rodvand.net>
# Copyright: (c) 2024, Daniel Chiquito (@dchiquito) <daniel.chiquito@gmail.com>
# Copyright: (c) 2022, Martin Rødvand (@rodvand) <martin@rodvand.net>
# Copyright: (c) 2024, Rich Bibby, NetBox Labs (@richbibby)
# Copyright: (c) 2024, Philipp Rintz (@p-rintz) <git@rintz.net>
# Copyright: (c) 2023, Antoine Dunn (@MinDBreaK) <15737031+MinDBreaK@users.noreply.github.com>
# Copyright: (c) 2023, Erwan TONNERRE (@etonnerre) <erwan.tonnerre@thalesgroup.com>
# state choices present, absent, new
# Copyright: (c) 2025, Martin Rødvand (@rodvand) <martin@rodvand.net>
# Copyright: (c) 2021, Andrew Simmons (@andybro19) <andrewkylesimmons@gmail.com>
# Copyright: (c) 2019, Mikhail Yohman (@FragmentedPacket) <mikhail.yohman@gmail.com>
# Copyright: (c) 2020, Pavel Korovin (@pkorovin) <p@tristero.se>
# Copyright: (c) 2019, Mikhail Yohman (@FragmentedPacket)
# Copyright: (c) 2019, Gaelle Mangin (@gmangin)
# Copyright: (c) 2018, David Gomez (@amb1s1) <david.gomez@networktocode.com>
# Copyright: (c) 2024, Martin Rødvand (@rodvand)
# Copyright: (c) 2022, Erwan TONNERRE (@etonnerre) <erwan.tonnerre@thalesgroup.com>
# Copyright: (c) 2019, Amy Liebowitz (@amylieb)
# Copyright: (c) 2019, Kulakov Ilya  (@TawR1024)
# Change port to ports for 2.10+ and convert to a list with the single integer
# Run the normal run() method
# Copyright: (c) 2022, Martin Rødvand (@rodvand)
# Copyright: (c) 2018, Mikhail Yohman (@fragmentedpacket) <mikhail.yohman@gmail.com>
# Used to dynamically set key when returning results
# Used for msg output
# Make color params lowercase
# Handle journal entry
# Do not diff the password field
# The initial response from Netbox obviously doesn't have a password field, so the nb_object also doesn't have a password field.
# Any fields that weren't on the initial response are ignored, so to update the password we must add the password field to the cache.
# Copyright: (c) 2021, Martin Rødvand (@rodvand) <martin@rodvand.net>
# Copyright: (c) 2020, Nokia, Tobias Groß (@toerb) <tobias.gross@nokia.com>
# Handle rack and form_factor
# Attempt to find the exact cable via the interface
# relationship
# This is logic to handle interfaces on a VC
# Import necessary packages
# Used to map endpoints to applications dynamically
# Used to normalize data for the respective query types used to find endpoints
# Specifies keys within data that need to be converted to ID and the endpoint to be used when queried
# Just a placeholder as scope can be several different types including sites.
# This is used to map non-clashing keys to NetBox API compliant keys to prevent bad logic in code for similar keys but different modules
# This is used to dynamically convert name to slug on endpoints requiring a slug
# These should not be required after making connection to NetBox
# Attempt to initiate connection to NetBox
# For NetBox versions without /api/status endpoint
# if self.module.params.get("query_params"):
# These methods will normalize the regular data
# convert to ints
# If major version is higher then return true right off the bat
# If major versions are equal, and minor version is higher, return True
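The two comments above describe a simple (major, minor) version gate. A minimal sketch of that logic, assuming the version arrives as a dotted string (the function name and signature are hypothetical, not the module's own):

```python
# Hypothetical version check mirroring the comments above: True when the
# running version is at least the required (major, minor) pair.
def version_at_least(current, required):
    cur_major, cur_minor = (int(x) for x in current.split(".")[:2])
    req_major, req_minor = required
    if cur_major > req_major:
        # Major version is higher: return True right off the bat
        return True
    # Major versions equal: compare minor versions
    return cur_major == req_major and cur_minor >= req_minor
```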
# Fetch the OpenAPI spec to perform validation against
# Loop over passed in params and add to invalid_query_params and then fail if non-empty
# TODO: Remove this once the lowest supported Netbox version is 3.6 or greater as we can use default logic of CONVERT_KEYS moving forward.
# This will keep the original key for keys in list, but also convert it.
# This is to change the parent key to use the proper ALLOWED_QUERY_PARAMS below for termination searches.
# This is to skip any potential changes using module_data when the user
# provides user_query_params
# Determine endpoint name for scope ID resolution
# If user passes in an integer, add to ID list to id_list as user
# should have passed in a tag ID
# We need to assign the correct type for the assigned object so the user doesn't have to worry about this.
# We determine it by whether or not they pass in a device or virtual_machine
# Ensure idempotency for site on older netbox versions
# Ensure idempotency for virtual machine on older netbox versions
# Ensure idempotency for cable on netbox versions later than 3.3
# Sets each check to None so they are not run in AnsibleModule
# Quick fix to support ansible-core 2.11
# Load the params manually as the self.params already have the defaults set
# Run each check manually providing the params
# Copyright: (c) 2024, Fred De Backer (@freddebacker) <debacker.fred@gmail.com>
# (c) 2015, René Moser <mail@renemoser.net>
# This file is part of Ansible,
# Make a group per zone
# Copyright (c) 2015, René Moser <mail@renemoser.net>
# Standard cloudstack documentation fragment
# Additional Cloudstack Configuration with Environment Variables Mappings
# Copyright (c) 2020, Rafael del valle <rafael@privaz.io>
# TODO: plugin should work as 'cloudstack' only
# The J2 template takes the 'instance' object as returned from ACS and returns
# the 'instance' object as exposed by this inventory plugin.
# The data structure of this inventory has been designed according to the following criteria:
# - do not duplicate/compete with Ansible instance facts
# - do not duplicate/compete with Cloudstack facts modules
# - hide internal ACS structures and identifiers
# - if possible use similar naming to previous inventory script
# - prefer non-existing attributes over null values
# - populate the data required to group and filter instances
# The configuration logic matches modules specification
# is there a value to filter by? we will search with it
# we return all items related to the query involved in the filtering
# if we find the searched value as either an id or a name
# we add the corresponding filter as query argument
# Filtering as supported by ACS goes here
# This is the inventory Config
# We Initialize the query_api
# All Hosts from
# The ansible_host preference
# Retrieve the filtered list of instances
# we normalize the instance data using the embedded J2 template
# Add all available attributes
# set hostname preference
# Copyright (c) 2017, René Moser <mail@renemoser.net>
# Ensure we return a bool
# Copyright (c) 2015, Darren Worrall <darren@iweb.co.uk>
# ipaddress only works with CloudStack >=v4.13
# For the VPC case networkid is irrelevant, special case and we have to ignore it here.
# refresh resource
# Copyright (c) 2016, René Moser <mail@renemoser.net>
# The API as in 4.9 does not return an updated role yet
# Do not pass zoneid, as the instance name must be unique across zones.
# name or keyword is documented but does not work on CloudStack 4.19
# commented out until it works
# 'name': name,
# Query the user data if we need to
# Fails if the keypair given in the param does not exist
# CloudStack 4.5 does return a keypair on the instance for a non-existent key.
# Get the fingerprint for the keypair of the instance, but do not fail if it does not exist.
# Compare fingerprints to ensure the keypair changed
# In check mode, we do not necessarily have an instance
# refresh instance data
# Service offering data
# Instance UserData
# Volume data
# Ensure VM has stopped
# Change service offering
# Update VM
# Reset SSH key
# SSH key data
# Root disk size
# Start VM again if it was running before
# migrate to other host
# in check mode instance may not be instantiated
# reset ip address and query new values
# make an alias, so we can use has_changed()
# Copyright (c) 2018, David Passante <@dpassante>
# Prevent confusion, the api returns a tags key for storage tags.
# Copyright (c) 2017, Netservers Ltd. <support@netservers.co.uk>
# import cloudstack common
# Host state: param state
# Cancel maintenance if target state is enabled/disabled
# Only an enabled host can be put in maintenance
# Only a pool in maintenance can be deleted
# Copyright (c) 2018, Gregor Riepl <onitake@gmail.com>
# based on cs_sshkeypair (c) 2015, René Moser <mail@renemoser.net>
# this data comes from users; we try what we can to parse it...
# debian / ubuntu
# centos 6
# centos 7
# openbsd
# get IP of string "option dhcp-server-identifier 185.19.28.176;"
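The comment above describes pulling the server IP out of a dhclient lease line. A minimal sketch with a regex (the helper name is hypothetical):

```python
import re

# Hypothetical helper: extract the server IP from a lease line such as
# 'option dhcp-server-identifier 185.19.28.176;'
def extract_dhcp_server_ip(line):
    match = re.search(r"dhcp-server-identifier\s+(\d{1,3}(?:\.\d{1,3}){3})", line)
    return match.group(1) if match else None

print(extract_dhcp_server_ip("option dhcp-server-identifier 185.19.28.176;"))
# → 185.19.28.176
```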
# user_security_group and cidr are mutually exclusive, but cidr defaults to 0.0.0.0/0;
# that is why we ignore cidr if we have a user_security_group.
# ingressrule / egressrule
# The API name argument filter also matches substrings, we have to
# iterate over the results to get an exact match
# Fail if the identifier matches more than one VPC
# these values will be cast to int
# API broken in 4.2.1?, workaround using remove/create instead of update
# portforwarding_rule = self.query_api('updatePortForwardingRule', **args)
# Copyright (c) 2017, David Passante (@dpassante)
# A cross zones template has one entry per zone but the same id
# Skip ACL check if the network is not a VPC tier
# Restarting only available for these states
# States only usable by the updateHost API
# In case host in maintenance and target is maintenance
# Set host allocationstate to be disabled/enabled
# last part of the path is the name
# cut off last /*
# Copyright: (c) 2015, René Moser <mail@renemoser.net>
# CloudStack 4.11 use the network cidr for 0.0.0.0/0 in egress
# That is why we need to replace it.
# we need to enable the user to lock it.
# register user api keys
# secretkey has been removed since CloudStack 4.10 from listUsers API
# Prevent confusion, the api returns a "tags" key for storage tags.
# Version < 4.16
# Workaround API does not return cross_zones=true
# if checksum is set, we only look on that.
# fix different return from the API than the request argument given
# API returns a list as result CLOUDSTACK-9205
# Copyright (c) 2019, Patryk D. Cichy <patryk.d.cichy@gmail.com>
# if state=present we get to the state before the service
# offering change.
# we need to enable the account to lock it.
# Tags are comma separated strings in network offerings
# Return a list of comma separated network offering tags
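The two comments above note that network offering tags are stored as one comma-separated string. A minimal sketch of the list/string conversion (helper names are hypothetical):

```python
# Hypothetical helpers: tags are comma-separated strings in network offerings,
# so split into a list for comparison and join back for the API call.
def tags_to_list(tags):
    return [t for t in tags.split(",") if t] if tags else []

def tags_to_string(tags):
    return ",".join(tags)
```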
# Copyright (c) 2017, Marc-Aurèle Brothier @marcaurele
# Do not try to update if no IP address is given
# delete the ssh key with matching name but wrong fingerprint
# delete the ssh key with matching fingerprint but wrong name
# The first match for key retrieval will be the fingerprint.
# We need to make another lookup if there is a key with identical name.
# Query by fingerprint of the public key
# When key has not been found by fingerprint, use the name
# Copyright (c) 2015, Jefferson Girão <jefferson@girao.net>
# Do not filter on DATADISK when state=extracted
# change unit from bytes to giga bytes to compare with args
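The comment above describes converting the API's byte count to gibibytes before comparing with the module argument. A one-line sketch (the helper name is hypothetical):

```python
# Hypothetical conversion: the API reports volume size in bytes, while the
# module argument is in gibibytes, so divide before comparing.
def bytes_to_gib(size_bytes):
    return size_bytes // (1024 ** 3)
```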
# zone is optional as it means that the configuration is aimed at a global setting.
# Save result for RETURNS
# Copyright: (c) 2019, Patryk Cichy @PatTheSilent
# Cloudstack API expects 'provider' but returns 'providername'
# Common returns, will be merged with self.returns
# search_for_key: replace_with_key
# Init returns dict for use in subclasses
# these keys will be compared case sensitive in self.has_changed()
# Helper for VPCs
# Optionally limit by a list of keys
# Skip None values
# ensure we compare the same type
# Test for diff in case insensitive way
# Fail if the identifier matches more than one VPC
# This is an efficient way to query a lot of networks at a time
# ignore any VPC network if vpc param is not given
# Do not add domain filter for disk offering listing.
# this check is theoretically not required, as the module argument specification should take care of it;
# however, due to the deprecated default zone it is left behind, just in case of non-obvious callers.
# Some modules benefit from the check anyway, like those where zone is effectively optional, e.g.
# template registration (local/cross zone) or configuration (zone or global)
# use the first hypervisor if no hypervisor param given
# Bad bad API does not always return an int when it should.
# payload = json.dumps(payload) if payload else '{}'
# https://github.com/ansible/ansible/issues/65816
# https://github.com/PyCQA/pylint/issues/214
# (c) 2018, Adam Miller (admiller@redhat.com)
# map of keys for the splunk REST API that aren't pythonic so we have to
# handle the substitutes
# This is where the splunk_* args are processed
# Create it
# FIXME - adaptive response action association is probably going to need to be a separate module we stitch together in a role
# the data monitor doesn't exist
# Have to custom craft the data here because they overload the saved searches
# endpoint in the rest api and we want to hide the nuance from the user
# FIXME - need to find a reasonable way to deal with action.correlationsearch.enabled
# If this is present, splunk assumes we're trying to create a new one with the same name
# FIXME - adaptive response action association is probably going to need to be a separate module we stitch together in a role
# FIXME  need to figure out how to properly support these, the possible values appear to
# request_post_data['action.notable.param.extract_assets'] = '[\"src\",\"dest\",\"dvc\",\"orig_host\"]'
# request_post_data['action.notable.param.extract_identities'] = [\"src_user\",\"user\"]
# NOTE: version:1 appears to be hard coded when you create this via the splunk web UI
# NOTE: this field appears to be hard coded when you create this via the splunk web UI
# FIXME - need to figure out how to clear the action.notable.param fields from the api endpoint
# need to store 'recommended_actions','extract_artifacts','next_steps' and 'investigation_profiles'
# since merging in the parsed form will eliminate any differences
# responsible for correctly setting certain parameters depending on the state being triggered.
# These parameters are responsible for enabling and disabling notable response actions
# trimming trailing characters
# need to store correlation search details for populating future request payloads
# need to remove 'name', otherwise the API call will try to modify the correlation search
# Since there is no delete operation associated with an action,
# The delete operation will unset the relevant fields
# Compare obtained values with a dict representing values in a 'deleted' state
# if the obtained values are different from 'deleted' state values
# Check if have_conf has extra parameters
# need to store 'recommended_actions','extract_artifacts'
# 'next_steps' and 'investigation_profiles'
# restoring parameters
# config is retrieved as a string; need to deserialise
# need to store 'annotations' and 'throttle_fields_to_group_by'
# This is because these fields are getting converted from strings
# to lists/dictionaries, and so these fields need to be compared
# as such
# need to check for custom annotation frameworks
# setting parameters that enable correlation search
# need to store 'annotations' and 'throttle_group_by_field'
# while creating new correlation search, this is how to set the 'app' field
# API returns back "index", even though it can't be set within /tcp/cooked
# This function is meant to construct the URL and handle GET, POST and DELETE calls
# depending on the context. The URLs constructed and handled are:
# /tcp/raw[/{name}]
# /tcp/cooked[/{name}]
# /tcp/splunktcptoken[/{name}]
# /tcp/ssl[/{name}]
# /udp[/{name}]
# In all cases except "ssl" datatype, creation of objects is handled
# by a POST request to the parent directory. Therefore name shouldn't
# be included in the URL.
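The comment block above spells out the URL rules for these inputs endpoints. A minimal sketch of a builder following those rules; the base REST path and function name are assumptions, not taken from the module:

```python
# Hypothetical URL builder: creation is a POST to the parent collection, so
# 'name' is only appended when addressing an existing object, and the "ssl"
# datatype never takes a name component.
def build_input_url(protocol, datatype=None, name=None):
    url = "/servicesNS/nobody/search/data/inputs/" + protocol  # assumed base path
    if datatype:
        url += "/" + datatype
    if name and datatype != "ssl":
        url += "/" + name
    return url
```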
# if no "name" was provided
# Adding back protocol and datatype fields for better clarity
# If certain parameters are present, Splunk appends the value of those parameters
# to the name. Therefore this causes idempotency to fail. This function looks for
# said parameters and conducts checks to see if the configuration already exists.
# Int values confuse diff
# If "restrictToHost" parameter is set, the value of this parameter is appended
# to the numerical name meant to represent port number
# If datatype is "splunktcptoken", the value "splunktcptoken://" is appended
# to the name
# If the above parameters are present, but the object doesn't exist,
# the value of the parameters shouldn't be prepended to the name.
# Otherwise Splunk returns 400. This check takes advantage of this
# and sets the correct name.
# while creating new conf, we need to only use numerical values
# splunk will later append param value to it.
# SSL response returns a blank "name" parameter, which causes problems
# not returned
# splunk takes "crc-salt" as input parameter, and returns "crcSalt" in output
# therefore we can't directly use mapping
# self._result[self.module_name] = {}
# TODO: There is a ton of code only present to make sure the legacy modules
# work as intended. Once the modules are deprecated and no longer receive
# support, this object needs to be rewritten.
# needs to be dealt with after end of support
# The legacy modules had a partial implementation of keymap, where the data
# passed to 'create_update' would completely be overwritten, and replaced
# by the 'get_data' function. This flag ensures that the modules that hadn't
# yet been updated to use the keymap, can continue to work as originally intended
# check if call being made by legacy module (passes 'module' param)
# The Splunk REST API endpoints often use keys that aren't pythonic so
# we need to handle that with a mapping to allow keys to be proper
# variables in the module argspec
# Select whether payload passed to create update is overridden or not
# the rest data that don't follow the splunk_* naming convention
# when 'self.override' is True, the 'get_data' function replaces 'data'
# in order to make use of keymap
# SPDX-FileCopyrightText: Ansible Project
# Copyright (c) Ansible project
# Common parameters for Proxmox VE modules
# Should be used together with the standard fragment
# Should be used together with the standard fragment and the FACTS fragment
# Derived from ansible/plugins/connection/paramiko_ssh.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Copyright (c) 2024 Nils Stein (@mietzen) <github.nstein@mailbox.org>
# Copyright (c) 2023, Felix Fontein <felix@fontein.de>
# only required for POST/PUT/DELETE methods, which we are not using currently
# 'CSRFPreventionToken': json['data']['CSRFPreventionToken'],
# Clean and format token components
# Build token string without newlines
# Set headers with clean token
# /hosts/:id does not have a 'data' key
# /facts are returned as dict in 'data'
# sort interface by iface name to make selection as stable as possible
# only process interfaces adhering to these rules
# This happens on Windows: even though the qemu agent is running, the IP address
# cannot be fetched, as it is unsupported; a disabled command can also happen.
# fixup disk images as they have no key
# Additional field containing parsed tags as list
# The first field in the agent string tells you whether the agent is enabled
# the rest of the comma separated string is extra config for the agent.
# In some (newer versions of proxmox) instances it can be 'enabled=1'.
# split off strings with commas to a dict
# skip over any keys that cannot be processed
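The comments above describe parsing the Proxmox `agent` config string: the first comma-separated field is the enabled flag (possibly `enabled=1` on newer versions), the rest are key=value extras. A minimal sketch of that parse (the function name is hypothetical):

```python
# Hypothetical parser for the 'agent' config string, e.g. "1,fstrim_cloned_disks=1"
# or "enabled=1". Returns (enabled, extras_dict).
def parse_agent_string(agent):
    parts = agent.split(",")
    first = parts[0]
    if "=" in first:
        # Newer instances use 'enabled=1' as the first field
        first = first.split("=", 1)[1]
    enabled = first == "1"
    extras = {}
    for part in parts[1:]:
        if "=" not in part:
            continue  # skip over any keys that cannot be processed
        key, value = part.split("=", 1)
        extras[key] = value
    return enabled, extras
```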
# get status, config and snapshots if want_facts == True
# ensure the host satisfies filters
# add the host to the inventory
# get more details about the status of the qemu VM
# create common groups
# gather vm's on nodes
# get node IP address
# Setting composite variables
# add LXC/Qemu groups for the node
# get LXC containers and Qemu VMs for this node
# gather vm's in pools
# read and template auth options
# some more cleanup and validation
# read rest of options
# Copyright (c) 2021, Lammert Hellinga (@Kogelvis) <lammert@hellinga.it>
# Convert the current config to a dictionary
# determine the current model nic and mac-address
# build nic config string
# If vmid is not defined then retrieve its value from the vm name,
# Ensure VM id exists
# Copyright Julian Vanden Broeck (@l00ptr) <julian.vandenbroeck at dalibo.com>
# Copyright (c) 2023, Sergei Antipov <greendayonfire at gmail.com>
# Leave in dict only machines that user wants to know about
# Get list of unique node names and loop through it to get info about machines.
# "type" is mandatory and can have only values of "qemu" or "lxc". Seems that use of reflection is safe.
# When user wants to retrieve the VM configuration
# pending = 0, current = 1
# GET /nodes/{node}/qemu/{vmid}/config current=[0/1]
# Copyright (c) 2025, Markus Kötter <koetter@cispa.de>
# SPDX-FileCopyrightText: (c) 2025, Markus Kötter <koetter@cispa.de>
# Copyright Tristan Le Guern (@tleguern) <tleguern at bouledef.eu>
# Convert the proxmox representation of lists, dicts and booleans for easier
# manipulation within ansible.
# Copyright (c) 2025, Florian Paul Azim Hoberg (@gyptazy) <florian.hoberg@credativ.de>
# Create payload for storage creation
# Validate required parameters based on storage type
# Check Mode validation
# Add storage
# Remove storage
# Initialize objects and avoid re-polling the current
# nodes in the cluster in each function call.
# Copyright (c) 2022, Castor Sky (@castorsky) <csky57@gmail.com>
# When empty CD-ROM drive present, the volume part of config string is "none".
# Sanitize parameters dictionary:
# - Remove not defined args
# - Ensure True and False are converted to int.
# - Remove unnecessary parameters
# NOOP
# CREATE
# When 'import_from' option is present in task options.
# disk=<busN>, media=cdrom, iso_image=<ISO_NAME>
# UPDATE
# 'import_from' fails on disk updates
# Begin composing configuration string
# Append all mandatory fields from playbook_config
# Append to playbook_config fields which are constants for disk images
# CD-ROM is special disk device and its disk image is subject to change
# Values in params are numbers, but strings are needed to compare with disk_config
# Now compare old and new config to detect if changes are needed
# Remove not defined args
# Check if the disk is already in the target storage.
# Resize disk API endpoint has changed at v8.0: PUT method become async.
# Proxmox native parameters
# Disk moving relates parameters
# Module related parameters
# Verify disk name has appropriate name
# Ensure VM id exists and retrieve its config
# Do not try to perform actions on missing disk
# Normalize input SIDs to version without : in them
# Compare all keys in the desired state against current state
# Copyright (c) 2021, Andreas Botzner (@paginabianca) <andreas at botzner dot com>
# Copyright (c) 2020, Jeffrey van Pelt (@Thulium-Drake) <jeff@vanpelt.one>
# shutdown container if running
# delete all mountpoints configs
# NOTE: requires auth as `root@pam`, API tokens are not supported
# see https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/lxc/{vmid}/config
# restore original config
# start container (if was running before snap)
# ignore the last snapshot, which is the current state
# sort by age, oldest first
# check if credentials will work
# WARN: it is crucial this check runs here!
# The correct permissions are required only to reconfigure mounts.
# Not checking now would allow removing the configuration BUT
# failing later, leaving the container in a misconfigured state.
# If hostname is set get the VM id from ProxmoxAPI
# Copyright John Berninger (@jberning) <john.berninger at gmail.com>
# Copyright (c) 2025, Jeffrey van Pelt (@Thulium-Drake) <jeff@vanpelt.one>
# Copyright (c) 2025, Kevin Quick <kevin@overwrite.io>
# SPDX-FileCopyrightText: (c) 2025, Jeffrey van Pelt (@Thulium-Drake) <jeff@vanpelt.one>
# SPDX-FileCopyrightText: (c) 2025, Kevin Quick <kevin@overwrite.io>
# Check standard fields
# Check groups (API returns list, we send comma-separated string)
# Translate input to make API happy
# Build update parameters - only include non-None values
# We have no way of testing if the user's password needs to be changed
# so, if it's provided we will update it anyway
# if the user is new, post it to the API
# Convert empty strings to None for proper comparison
# Copyright Ansible Project
# Require one of clone, ostemplate, or update.
# Together with mutually_exclusive this ensures that we either
# clone a container or create a new one from a template file.
# Creating a new container is done either by cloning an existing one, or based on a template.
# check if the container exists already
# Update it if we should
# We're done if it shouldn't be forcefully created
# fetch current config
# create diff between the current and requested config
# if the arg isn't in the current config, it needs to be added
# compare all string values as lists as some of them may be lists separated by commas. order doesn't matter
# if it's not a list (or string) just compare the values
# some types don't match with the API, so force a string comparison
# update the config
# Remove empty values from kwargs
# By default, create a full copy only when the cloned container is not a template.
# Only accept parameters that are compatible with the clone endpoint.
# Cloning a template, so create a full copy instead of a linked copy
# Cloned container is not a template, so we need our 'storage' parameter
# If the volume is not explicitly defined but implied by only passing a key,
# add the "volume=" key prefix for ease of parsing.
# Then create a dictionary from the arguments
# DISCLAIMER:
# There are two things called a "volume":
# 1. The "volume" key which describes the storage volume, device or directory to mount into the container.
# 2. The storage volume of a storage-backed mount point in the PVE storage sub system.
# In this section, we parse the "volume" key and check which type of mount point we are dealing with.
# Pattern matching only available in Python 3.10+
# TODO: Uncomment the following code once only Python 3.10+ is supported
# match match_dict:
# default to GiB
# Handle volume checks/creation
# TODO: Change the code below to pattern matching once only Python 3.10+ is supported
# 1. Check if defined volume exists
# 2. If volume not defined (but storage is), check if it exists
# The node must exist, but not the LXC
# If not, we have proxmox create one using the special syntax
# 3. If we have a host_path, we don't have storage, a volume, or a size
# Then we don't have to do anything, just build and return the vol_string
# requests_toolbelt is used internally by proxmoxer module
# noqa: F401, pylint: disable=unused-import
# download appliance template
# Copyright (c) 2016, Abdoul Bah (@helldorado) <bahabdoul at gmail.com>
# Sanitize kwargs. Remove not defined args and ensure True and False are converted to int.
# Convert all dict in kwargs to elements.
# For hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n]
# Split information by type
# Increase task timeout in case of stopped state to be sure it waits longer than VM stop operation itself
# Wait an extra second as the API can be ahead of the hypervisor
# Available only in PVE 4
# valid clone parameters
# Default args for vm. Note: -args option is for experts only. It allows you to pass arbitrary arguments to kvm.
# The features work only on PVE 4+
# The features work only on PVE 6
# The features work only on PVE 8
# 'sshkeys' param expects an urlencoded string
# If update, don't update disk (virtio, efidisk0, tpmstate0, ide, sata, scsi) and network interface, unless update_unsafe=True
# pool parameter not supported by qemu/<vmid>/config endpoint on "update" (PVE 6.2) - only with "create"
# Check that the bios option is set to ovmf if the efidisk0 option is present
# Flatten efidisk0 option to a string so that it is a string which is what Proxmoxer and the API expect
# Regexp to catch underscores in keys name, to replace them after by hyphens
# If present, the storage definition should be the first argument
# Join other elements from the dict as key=value using commas as separator, replacing any underscore in key
# by hyphens (needed for pre_enrolled_keys to pre-enrolled-keys)
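The comments above describe flattening an option dict like `efidisk0` into the string the API expects: the storage definition first, then comma-joined key=value pairs with underscores in keys replaced by hyphens. A minimal sketch under those assumptions (the helper name and the `storage` key handling are illustrative):

```python
import re

# Hypothetical flattener: storage definition first, then key=value pairs
# joined by commas, with underscores in key names replaced by hyphens
# (pre_enrolled_keys -> pre-enrolled-keys).
def flatten_option(value):
    parts = []
    if "storage" in value:
        # If present, the storage definition is the first argument
        parts.append(str(value["storage"]))
    for key, val in value.items():
        if key == "storage":
            continue
        parts.append("%s=%s" % (re.sub(r"_", "-", key), val))
    return ",".join(parts)
```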
# Flatten tpmstate0 option to a string so that it is a string which is what Proxmoxer and the API expect
# For audio[n], hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n], ipconfig[n], usb[n]
# The API also allows booleans instead of e.g. `enabled=1` for backward-compatibility.
# Not something that Ansible would parse as a boolean.
# Rename numa_enabled to numa, according to the API documentation
# PVE api expects strings for the following params
# VM tags are expected to be valid and presented as a comma/semi-colon delimited string
# -args and skiplock require root@pam user - but can not use api tokens
# not sure why, but templating a container doesn't return a taskid
# the cloned vm name or retrieve the next free VM id from ProxmoxAPI.
# If newid is not defined then retrieve the next free id from ProxmoxAPI
# Ensure source VM name exists when cloning
# Ensure source VM id exists when cloning
# Ensure the chosen VM id doesn't already exist when cloning
# Copyright (c) 2025 Marzieh Raoufnezhad <raoufnezhad@gmail.com>
# Copyright (c) 2025 Maryam Mayabi <mayabi.ahm at gmail.com>
# Check if proxmoxer exist
# Start to connect to proxmox to get backup data
# Copyright Tristan Le Guern <tleguern at bouledef.eu>
# Data representation is not the same depending on API calls
# Copyright (c) 2024 Marzieh Raoufnezhad <raoufnezhad at gmail.com>
# Copyright (c) 2024 Maryam Mayabi <mayabi.ahm at gmail.com>
# Get all backup information
# Get VM information
# Get all backup information by VM ID and VM name
# Get proxmox backup information for a specific VM based on its VM ID or VM name
# Define module args
# Define (init) result value
# Update result value based on what requested (module args)
# Return result value
# Copyright (c) 2024, IamLunchbox <r.grieger@hotmail.com>
# Check for Datastore.AllocateSpace in the permission tree
# Since VM.Backup can be given for each vmid at a time, iterate through all of them
# and check if the permission is set
# Finally, when no check succeeded, fail
# Loop through all cluster storages and get all matching storages
# filter all entries, which did not get a task id from the Cluster
# iterate through the task ids and check their values
# proxmox.api_task_ok does not suffice, since it is only
# true at `stopped` and `ok`
# Finally, reattach ok tasks to show that all nodes were contacted
# ensure only valid post parameters are passed to proxmox
# list of dict items to replace with (new_val, old_val)
# Set mode specific values
# Create a comma-separated list from vmids, as the API expects one
# remove whitespaces from option strings
# convert booleans to 0/1
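The comments above describe normalizing the option dict before sending it to the API: drop unset args, strip whitespace from option strings, and map booleans to 0/1. A minimal sketch of that pass (the function name is hypothetical):

```python
# Hypothetical normalization pass: remove unset arguments, strip whitespace
# from option strings, and convert booleans to the 0/1 integers the API expects.
def normalize_params(params):
    out = {}
    for key, value in params.items():
        if value is None:
            continue  # drop unset arguments
        if isinstance(value, bool):
            out[key] = int(value)  # convert booleans to 0/1
        elif isinstance(value, str):
            out[key] = value.replace(" ", "")  # remove whitespace from option strings
        else:
            out[key] = value
    return out
```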
# stop here, before anything gets changed
# SPDX-FileCopyrightText: (c) 2025, Jeffrey van Pelt (Thulium-Drake) <jeff@vanpelt.one>
# Format the fingerprint as uppercase hex pairs separated by colons to match Proxmox's output
# e.g., "A1:B2:C3:D4:E5:F6:A7:B8:C9:D0:E1:F2:A3:B4:C5:D6:E7:F8:A9:B0"
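The comment above describes the colon-separated uppercase hex format. A minimal sketch, assuming a SHA-256 digest of the raw certificate bytes (the function name and digest choice are assumptions):

```python
import hashlib

# Hypothetical formatter: turn a raw byte string into colon-separated
# uppercase hex pairs, matching Proxmox's fingerprint display.
def format_fingerprint(der_bytes):
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```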
# Copyright (c) 2023, Sergei Antipov (UnderGreen) <greendayonfire@gmail.com>
# Get cluster data
# If we have data, check if we're already member of the desired cluster or a different one
# The Proxmox VE API currently does not support leaving a cluster
# or removing a node from a cluster. Therefore, we only support creating
# and joining a cluster. (https://pve.proxmox.com/pve-docs/api-viewer/#/cluster/config/nodes)
# Copyright (c) 2020, Tristan Le Guern <tleguern at bouledef.eu>
# Test token validity
# This has been vendored from ansible.module_utils.common.file. This code has been removed from there for ansible-core 2.16.
# file wasn't opened, let context manager fail gracefully
# SPDX-FileCopyrightText: 2025 Felix Fontein <felix@fontein.de>
# Copyright (c) 2016, Peter Sagerson <psagers@ignorare.net>
# Copyright (c) 2016, Jiri Tyr <jiri.tyr@gmail.com>
# Copyright (c) 2017-2018 Keller Fuchs (@KellerFuchs) <kellerfuchs@hashbang.sh>
# Standard LDAP documentation fragment
# Copyright (c) 2025 Ansible community
# Use together with the community.general.redfish module utils' REDFISH_COMMON_ARGUMENT_SPEC
# Copyright (c) 2016-2017, Hewlett Packard Enterprise Development LP
# OneView doc fragment
# Copyright (c) 2018, Bojan Vitnik <bvitnik@mainstream.rs>
# Common parameters for XenServer modules
# Copyright (c) 2021, Phillipe Smith <phsmithcc@gmail.com>
# Copyright (c) 2019, Sandeep Kasargod <sandeep@vexata.com>
# Documentation fragment for Vexata VX100 series
# Copyright (c) 2018, Hewlett Packard Enterprise Development LP
# HPE 3PAR doc fragment
# Copyright (c) 2016, Dimension Data
# Authors:
# Dimension Data ("wait-for-completion" parameters) doc fragment
# Copyright (c) 2021, Andreas Botzner <andreas at botzner dot com>
# Common parameters for Redis modules
# Dimension Data doc fragment
# Copyright (c) 2017, Daniel Korn <korndaniel1@gmail.com>
# Standard ManageIQ documentation fragment
# Copyright (c) 2020 FERREIRA Christophe <christophe.ferreira@cnaf.fr>
# Copyright (c) 2018, IBM CORPORATION
# Author(s): Tzur Eliyahu <tzure@il.ibm.com>
# ibm_storage documentation fragment
# Copyright (c) 2018, Johannes Brunswicker <johannes.brunswicker@gmail.com>
# Copyright (c) 2018, Oracle and/or its affiliates.
# DEPRECATED
# This fragment is deprecated and will be removed in community.general 13.0.0
# Copyright (c) 2017, Eike Frost <ei@kefro.st>
# Copyright (c) 2018, Huawei Inc.
# HWC doc fragment.
# Copyright (c) 2017, Simon Dodsley <simon@purestorage.com>
# Copyright (c) 2024, Alexei Znamensky <russoz@gmail.com>
# Common parameters for Consul modules
# Copyright (c) 2017, Ansible Project
# Copyright (c) 2017, Abhijeet Kasurde (akasurde@redhat.com)
# Parameters for influxdb modules
# Copyright (c) 2018, www.privaz.io Valletech AB
# OpenNebula common documentation
# Copyright (c) 2017-present Alibaba Group Holding Limited. He Guimin <heguimin36@163.com>
# Alicloud only documentation fragment
# Copyright (c) 2017-18, Ansible Project
# Copyright (c) 2017-18, Abhijeet Kasurde (akasurde@redhat.com)
# Parameters for FreeIPA/IPA modules
# Copyright (c) 2022, Guillaume MARTINEZ <lunik@tiwabbit.fr>
# Copyright (c) 2021, Florian Dambrine <android.florian@gmail.com>
# Copyright (c) 2023, Ansible Project
# Copyright (C) 2017 Lenovo, Inc.
# Standard Pylxca documentation fragment
# Copyright (c) 2015, Peter Sprygada <psprygada@ansible.com>
# Copyright (c) 2018, Luca Lorenzetto (@remix_tj) <lorenzetto.luca@gmail.com>
# Documentation fragment for VNX (emc_vnx)
# Copyright (c) 2019, Evgeniy Krysanov <evgeniy.krysanov@gmail.com>
# Copyright (c) 2015, Alejandro Guirao <lekumberri@gmail.com>
# Copyright (c) 2012-17 Ansible Project
# In case "file" or "key" are not present
# Search also in the role/files directory and in the playbook directory
# Convert the value read to string
# Copyright (c) 2013, Bradley Young <young.bradley@gmail.com>
# Copyright (c) 2023, Poh Wei Sheng <weisheng-p@hotmail.sg>
# vs pyjwt
# Copyright (c) 2015, Steve Gargan <steve.gargan@gmail.com>
# get options
# responds with a single result map or a list of result maps
# Copyright (c) 2015-2021, Felix Fontein <felix@fontein.de>
# This is available since the Data Tagging PR has been merged
# If we are done, add to result list:
# Evaluate expression in current context
# Copyright (c) 2023, jantari (https://github.com/jantari)
# strip the prefix and grab the last segment, the version number
# Prepare set of params for Bitwarden Secrets Manager CLI
# Color output was not always disabled correctly with the default 'auto' setting so explicitly disable it.
# bws version 0.3.0 introduced a breaking change in the command line syntax:
# pre-0.3.0: verb noun
# 0.3.0 and later: noun verb
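The version-dependent argument ordering can be sketched like this; `build_bws_argv` is a hypothetical name, and versions are compared as integer tuples:

```python
def build_bws_argv(version, noun, verb, *extra):
    # bws 0.3.0 flipped the syntax: pre-0.3.0 used "verb noun",
    # 0.3.0 and later use "noun verb".
    parts = tuple(int(p) for p in version.split("."))
    words = [noun, verb] if parts >= (0, 3, 0) else [verb, noun]
    return ["bws", *words, *extra]
```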
# Copyright (c) 2013, Jan-Piet Mens <jpmens(at)gmail.com>
# (m) 2016, Mihai Moldovanu <mihaim@tfm.ro>
# (m) 2017, Juan Manuel Parrilla <jparrill@redhat.com>
# this can be made configurable, but should not use ansible.cfg
# Made module configurable from playbooks:
# If etcd v2 is running on host 192.168.1.21 on port 2379,
# we can use the following in a playbook to retrieve the /tfm/network/config key
# - ansible.builtin.debug: msg={{lookup('etcd', '/tfm/network/config', url='http://192.168.1.21:2379', version='v2')}}
# Example Output:
# TASK [debug] *******************************************************************
# ok: [localhost] => {
# This function receives the whole etcd tree. If the requested level
# contains directory nodes, the recursion starts: each directory becomes
# a dict entry and is passed back into the recursive function, and so
# on. When a plain value is reached, the function creates a key-value
# pair at that level and the recursion unwinds.
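The recursion described above can be sketched like this, assuming etcd v2's JSON shape where directory nodes carry `"nodes"` and leaf nodes carry `"value"` (`flatten_tree` is a hypothetical name):

```python
def flatten_tree(node):
    # Directories recurse into a dict keyed by the last path segment;
    # leaves return their plain value.
    if "nodes" in node:
        return {child["key"].rsplit("/", 1)[-1]: flatten_tree(child)
                for child in node["nodes"]}
    return node.get("value")
```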
# Version 1 of etcd is not supported for folder parsing
# When etcd is serving only v1
# When we get a usual result from etcd
# Return an error when an unknown entry is encountered
# Copyright (c) 2012, Jan-Piet Mens <jpmens(at)gmail.com>
# setup connection
# connection failed or key not found
# Copyright (c) 2015, Jan-Piet Mens <jpmens(at)gmail.com>
# RRSIG: ['type_covered', 'algorithm', 'labels', 'original_ttl', 'expiration', 'inception', 'key_tag', 'signer', 'signature'],
# dig: Lookup DNS records
# Create Resolver object so that we can set NS if necessary
# e.g. "@10.0.1.2,192.0.2.1" is ok.
# Check if we have a valid IP address. If so, use that, otherwise
# try to resolve name to address using system's resolver. If that
# fails we bail out.
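The IP-or-resolve check can be sketched as below; `nameserver_address` is a hypothetical name, and resolution errors propagate to the caller, which is where the lookup "bails out":

```python
import ipaddress
import socket

def nameserver_address(ns):
    # If ns is already an IP literal, use it as-is; otherwise resolve
    # the name with the system's resolver.
    try:
        return str(ipaddress.ip_address(ns))
    except ValueError:
        return socket.gethostbyname(ns)
```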
# print "--- domain = {domain} qtype={qtype} rdclass={rdclass}"
# Strip outside quotes on TXT rdata
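The quote stripping amounts to removing exactly one pair of surrounding double quotes; `unquote_txt` is a hypothetical name for the sketch:

```python
def unquote_txt(rdata):
    # Strip one matching pair of outer double quotes, if present;
    # inner quotes are left intact.
    if len(rdata) >= 2 and rdata.startswith('"') and rdata.endswith('"'):
        return rdata[1:-1]
    return rdata
```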
# Copyright (c) 2025, Felix Fontein <felix@fontein.de>
# Copyright (c) 2017, Patrick Deelman <patrick@patrickdeelman.nl>
# back-ported (hacked) check_output with input support for python 2.7
# http://stackoverflow.com/questions/10103551/passing-data-to-subprocess-check-output
# note: contains special logic for calling 'pass', so not a drop-in replacement for check_output
# os.EX_CONFIG
# I went with the "traditional" param followed by space-separated KV pairs.
# Waiting for final implementation of lookup parameter parsing.
# See: https://github.com/ansible/ansible/issues/12255
# the first param is the pass-name
# next parse the optional parameters in keyvalue pairs
# check and convert values
# Collect pass environment variables from the plugin's parameters.
# make sure to get errors in English as required by check_output2
# Set PASSWORD_STORE_DIR
# Set PASSWORD_STORE_UMASK if umask is set
# When using real pass, only accept password as found if there is a .gpg file for it (might be a tree node otherwise)
# 'not in password store' is the expected error if a password wasn't found
# generate new password, insert old lines from current result and return new password
# if the target is a subkey, only modify the subkey
# generate new file and insert lookup_pass: Generated by Ansible on {date}
# use pwgen to generate the password and insert values with pass -m
# pass and gopass are commands as well
# parse the input into paramvals
# password file exists
# if "overwrite", always update password
# target is a subkey, this subkey is not in passdict BUT missing == create
# password does not exist
# lookup password again if under write lock
# Copyright (c) 2020, Adam Migus <adam@migus.org>
# DNSTXT: DNS TXT records
# TODO: configurable resolver IPs
# Copyright (c) 2017, Juan Manuel Parrilla <jparrill@redhat.com>
# Copyright (c) 2016, Samuel Boucher <boucher.samuel.c@gmail.com>
# Copyright (c) 2013, Serge van Ginderachter <serge@vanginderachter.be>
# make sure term is not a list of one (list of one..) item
# return the final non list item if so
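The "list of one (list of one..)" unwrapping can be sketched as follows (`unwrap` is a hypothetical name):

```python
def unwrap(term):
    # Collapse nested single-item lists: [[["x"]]] -> "x";
    # anything else is returned unchanged.
    while isinstance(term, list) and len(term) == 1:
        term = term[0]
    return term
```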
# ignore undefined items
# convert a variable to a list
# but avoid converting a plain string to a list of one string
# if it is a list, check recursively for items that are a list
# Copyright (c) 2025, Ansible Project
# Copyright (c) 2016, Andrew Zenk <azenk@umn.edu>
# Copyright (c) 2017-2018, Jan-Piet Mens <jpmens(at)gmail.com>
# strip asterisk
# Copyright (c) 2020, SCC France, Eric Belhomme <ebelhomme@fr.scc.com>
# the 'grpc_options' Etcd3Client() option is currently not supported by the lookup module (maybe in the future?)
# create the etcd3 connection parameters dict to pass to etcd3 class
# etcd3 class expects host and port as connection parameters, so endpoints
# must be mangled a bit to fit in this scheme.
# so here we use a regex to extract server and port
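The endpoint mangling can be sketched with a regex like the one below; `endpoint_to_kwargs` is a hypothetical name, and the exact pattern the plugin uses may differ:

```python
import re

def endpoint_to_kwargs(endpoint, default_port=2379):
    # Split "https://server:2379" into the host/port keyword arguments
    # Etcd3Client() expects; scheme and port are optional.
    match = re.match(r'^(?:https?://)?([^:/]+)(?::(\d+))?/?$', endpoint)
    if match is None:
        raise ValueError("cannot parse endpoint: %r" % endpoint)
    host, port = match.groups()
    return {"host": host, "port": int(port) if port else default_port}
```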
# connect to etcd3 server
# we can pass many keys to lookup
# Copyright (c) 2018, Scott Buchanan <sbuchanan@ri.pn>
# Copyright (c) 2016, Andrew Zenk <azenk@umn.edu> (lastpass.py used as starting point)
# Try to load MANIFEST.json
# Try to load galaxy.yml
# Collection not found
# Copyright (c) 2020, Thales Netherlands
# Copyright (c) 2021, Ansible Project
# consider only own variables
# consider variables of hosts in given groups
# re-add hostvars
# temporarily switch renderer to the context of the current variables
# Render jinja2 templates
# var_type == "list"
# Copyright (c) 2021, Abhijeet Kasurde <akasurde@redhat.com>
# Override all the values
# Get pseudo randomization
# Randomize the order
# Copyright (c) 2022, Jonathan Lung <lungj@heresjono.com>
# Prepare set of params for Bitwarden CLI
# This includes things that matched in different fields.
# Filter to only include results from the right field, if a search is requested by value or field
# if there are no custom fields, then `match` has no key 'fields'
# Filter to only return the ID of a collection with an exactly matching name
# Copyright (c) 2016 Dag Wieers <dag@wieers.com>
# note: the selinux module uses byte strings on python2 and text
# strings on python3
# Skip if relpath was already processed (from another root)
# Copyright (c) 2021, RevBits <info@revbits.com>
# Copyright (c) 2015, Ensighten <infra@ensighten.com>
# Copyright (c) 2016, Josh Bradley <jbradley(at)digitalocean.com>
# setup vars for data bag name and data bag item
# Ensure pychef has been loaded
# Copyright (c) 2018, Scott Buchanan <scott@buchanan.works>
# https://github.com/ansible-collections/community.general/pull/1610:
# check the details dictionary for `field_name` and return it immediately if it exists
# when the entry is a "password" instead of a "login" item, the password field is a key
# in the `details` dictionary:
# when the field is not found above, iterate through the fields list in the object details
# If the field name exists in the section, return that value
# If the field name doesn't exist in the section, match on the value of "label"
# then "id" and return "value"
# Look at the section data and get an identifier. The value of 'id' is either a unique ID
# or a human-readable string. If a 'label' field exists, prefer that since
# it is the value visible in the 1Password UI when both 'id' and 'label' exist.
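The label-then-id matching can be sketched like this; `field_value` is a hypothetical name and the dict keys follow the 1Password field shape described above:

```python
def field_value(section_fields, field_name):
    # Match the desired field on "label" first, then "id", and return
    # its "value"; None when nothing matches.
    for field in section_fields:
        if field.get("label") == field_name:
            return field.get("value")
    for field in section_fields:
        if field.get("id") == field_name:
            return field.get("value")
    return None
```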
# In the correct section. Check "label" then "id" for the desired field_name
# Running 'op account get' if there are no accounts configured on the system drops into
# an interactive prompt. Only run 'op account get' after first listing accounts to see
# if there are any previously configured accounts.
# If the config file exists, assume an initial sign in has taken place and try basic sign in
# A required parameter is missing, or a bad master password was supplied
# so don't bother attempting a full signin
# Attempt a full signin since there appears to be no existing signin
# Copyright (c) 2017, Edward Nunez <edward.nunez@cyberark.com>
# Support for Generic parameters to be able to specify
# FailRequestOnPasswordChange, Queryformat, Reason, etc.
# If no output is specified, return at least the password
# To avoid reference issues/confusion to values, all
# output 'keys' will be in lowercase.
# Known delimiter to split output results
# Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Based on chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# Copyright (c) 2013, Michael Scherer <misc@zarb.org>
# port is unused; connections go through func
# totally ignores privilege escalation
# need to use a tmp dir due to the difference in semantics between
# getfile (which takes a directory as destination) and fetch_file,
# which takes a file directly
# Based on jail.py
# (c) 2013, Michael Scherer <misc@zarb.org>
# (c) 2016, Stephan Lohse <dev-github@ploek.org>
# otherwise p.returncode would not be set
# Based on func.py
# Copyright (c) 2014, Michael Scherer <misc@zarb.org>
# while the name of the product is salt, naming this module salt causes
# trouble with module import
# need to add 'true;' to work around https://github.com/saltstack/salt/issues/28077
# TODO test it
# Copyright (c) 2016 Matt Clay <matt@mystile.com>
# Written by: Kushal Das (https://github.com/kushaldas)
# Default username in Qubes
# For dom0
# Means we have a remote_user value
# Here we are writing the actual command to the remote bash
# if the qubes.VMRootShell service is not supported, fall back to
# qubes.VMShell and hope it has appropriate permissions
# We are running in dom0
# and chroot.py     (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# and jail.py       (c) 2013, Michael Scherer <misc@zarb.org>
# (c) 2015, Dagobert Michelsen <dam@baltic-online.de>
# 1:work:running:/zones/work:3126dc59-9a07-4829-cde9-a816e4c5040e:native:shared
# solaris10vm# zoneadm -z cswbuild list -p
# -:cswbuild:installed:/zones/cswbuild:479f3c4b-d0c6-e97b-cd04-fd58f2c0238e:native:shared
# stdout, stderr = p.communicate()
# NOTE: zlogin invokes a shell (just like ssh does) so we do not pass
# this through /bin/sh -c here.  Instead it goes through the shell
# that zlogin selects.
# Derived from ansible/plugins/connection/proxmox_pct_remote.py (c) 2024 Nils Stein (@mietzen) <github.nstein@mailbox.org>
# Copyright (c) 2025 Rui Lopes (@rgl) <ruilopes.com>
# see https://github.com/python/cpython/blob/3.11/Lib/subprocess.py#L576
# NB the full english error message is:
# (c) 2015, Joerg Thalheim <joerg@higgsboson.tk>
# python2-lxc needs bytes. python3-lxc needs text.
# this is needed in the lxc child process
# to flush internal python buffers
# Based on local.py by Michael DeHaan <michael.dehaan@gmail.com>
# and chroot.py by  Maykel Moya <mmoya@speedyrails.com>
# Copyright (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
# Pipelining may work.  Someone needs to test by setting this to True and
# having pipelining=True in their ansible.cfg
# update HOME since -U does not update the jail environment
# Based on lxd.py (c) 2016, Matt Clay <matt@mystile.com>
# (c) 2023, Stephane Graber <stgraber@stgraber.org>
# Return only the leading part of the FQDN as the instance name
# as Incus instance names cannot be a FQDN.
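The FQDN truncation is a one-liner; `instance_name` is a hypothetical name for the sketch:

```python
def instance_name(host):
    # Incus instance names cannot be FQDNs, so keep only the first
    # DNS label of the inventory hostname.
    return host.split(".")[0]
```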
# su currently has an undiagnosed issue with calculating the file
# checksums (so copy, for instance, doesn't work right)
# Have to look into that before re-enabling this
# do some trivial checks for ensuring 'host' is actually a chroot'able dir
# Want to check for a usable bourne shell inside the chroot.
# is_executable() == True is sufficient.  For symlinks it
# gets really complicated really fast.  So we punt on finding that
# out.  As long as it is a symlink we assume that it will work
# Copyright (c) 2015, Filipe Niero Felisbino <filipenf@gmail.com>
# See issues https://github.com/ansible-collections/community.general/issues/320
# and https://github.com/ansible/ansible/issues/85600.
# Copyright (c) Stanislav Meduna (@numo68)
# pylint: disable=C0103
# contributed by Kelly Brazil <kellyjonbrazil@gmail.com>
# new API (jc v1.18.0 and higher) allows use of plugin parsers
# old API (jc v1.17.7 and lower)
# Copyright (c) 2024 Vladimir Botka <vbotka@gmail.com>
# Copyright (c) 2024 Felix Fontein <felix@fontein.de>
# test parameters
# test and transform target
# Copyright (C) 2020 Stanislav German-Evtushenko (@giner) <ginermail@gmail.com>
# Copyright (c) 2025, Timur Gadiev <tgadiev@gmail.com>
# Map parameter names to their original error message format
# Use the specific error message if available, otherwise use a generic one
# Maintain original error message format
# Direct key match
# Try boolean conversion for 'true'/'false' strings
# Try numeric conversion for string numbers
# No match found
# Check if the data list is not empty
# Store string version of the key
# Also store lowercase version for case-insensitive lookups
# We already validated alignment is a string and a valid value in the main function
# Just apply it here
# === Input validation ===
# Validate list type
# Validate dictionary items if list is not empty
# Get sample dictionary to determine fields - empty if no data
# === Process column order ===
# Handle both positional and keyword column_order
# Check for conflict between args and column_order
# Use positional args if provided
# Validate column_order
# Validate column_order doesn't exceed the number of fields (skip if data is empty)
# === Process headers ===
# Determine field names and ensure they are strings
# Use field names from first dictionary, ensuring all are strings
# Process custom headers
# Validate header_names doesn't exceed the number of fields (skip if data is empty)
# Validate that column_order and header_names have the same size if both provided
# === Process alignments ===
# Get column alignments and validate
# Validate column_alignments is a dictionary
# Validate column_alignments keys and values
# Check that keys are strings
# Check that values are strings
# Check that values are valid alignments
# Validate column_alignments doesn't have more keys than fields (skip if data is empty)
# Check for unknown parameters
# === Build the table ===
# Set the field names for display
# Configure alignments after setting field_names
# Build key maps only if not using explicit column_order and we have data
# Only needed when using original dictionary keys and we have data
# If we have an empty list with no custom parameters, return a simple empty table
# Process each row if we have data
# Try direct mapping first
# Try to find a matching key in the item
# Try case-insensitive lookup as last resort
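The three-step lookup (direct, mapped, case-insensitive) can be sketched as follows; `resolve_value` is a hypothetical name and only the direct and case-insensitive steps are shown:

```python
def resolve_value(row, key):
    # Direct key match first, then a case-insensitive fallback built
    # from a lowercase map of the row's keys.
    if key in row:
        return row[key]
    lowered = {str(k).lower(): v for k, v in row.items()}
    return lowered.get(str(key).lower())
```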
# No need to handle 0
# Copyright (c) 2021, Remy Keil <remy.keil@gmail.com>
# Copyright (c) 2020-2024, Vladimir Botka <vbotka@gmail.com>
# allow the user to do `[list1, list2, ...] | lists_mergeby('index')`
# Generate a random int between 0x1000000000 and 0xFFFFFFFFFF
# Select first n chars to complement input prefix
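The prefix-completion idea can be sketched as below; `complete_mac` is a hypothetical name and `"52:54:00"` is an example prefix, not taken from the source:

```python
import random

def complete_mac(prefix):
    # Fill the remainder of a 6-byte MAC after the given prefix with
    # random hex digits, keeping the colon-separated pair layout.
    need = 12 - len(prefix.replace(":", ""))
    tail = "".join(random.choice("0123456789abcdef") for _ in range(need))
    pairs = [tail[i:i + 2] for i in range(0, need, 2)]
    return ":".join([prefix] + pairs)
```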
# Copyright (c) Contributors to the Ansible project
# This is ansible-core 2.19+
# This only works up to ansible-core 2.18:
# But that's fine, since this code path isn't taken on ansible-core 2.19+ anyway.
# same string as in ansible-core 2.19 by transform_to_native_types()
# Copyright (c) 2021, Andrew Pantuso (@ajpantuso) <ajpantuso@gmail.com>
# Copyright (c) 2018, Dag Wieers (@dagwieers) <dag@wieers.com>
# Copyright (c) Max Gautier <mg@max.gautier.name>
# Handles the case where a single int is not encapsulated in a list or tuple.
# User convenience seems preferable to strict typing in this case
# Also avoids obfuscated error messages related to single invalid inputs
# TODO: expose use_native_type parameter
# Copyright (C) 2021 Eric Lavarde <elavarde@redhat.com>
# Copyright (c) 2023, Steffen Scheib <steffen@scheib.me>
# catching empty dicts
# config.getvalue() returns two \n at the end
# with the below insanity, we remove the very last character of
# the resulting string
# This happens for unhashable values `item`. If this happens,
# convert `seen` to a list and continue.
# Some unused kwargs remain
# This happens for unhashable values,
# use a list instead and redo.
# build the intersection of `a` and `b` backed
# by a list instead of a set and redo.
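The set-then-list fallback can be sketched like this (`intersect` is a hypothetical name):

```python
def intersect(a, b):
    # Fast set-backed intersection first; when elements are unhashable
    # (e.g. dicts), redo the work backed by a plain list.
    try:
        b_set = set(b)
        return [x for x in a if x in b_set]
    except TypeError:
        return [x for x in a if x in b]
```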
# Copyright (c) 2022, Julien Riou <julien@riou.xyz>
# Copyright (c) 2018, Samir Musali <samir.musali@logdna.com>
# Getting MAC Address of system:
# Getting hostname of system:
# Getting IP of system:
# Is it JSON?
# LogDNA Callback Module:
# Copyright (c) 2021, Victor Martinez <VictorMartinezRubio@gmail.com>
# Populate trace metadata attributes
# Support loops
# Record an exception with the task message
# Create the span and log attributes
# This will allow to enrich the service map
# Send logs
# This will avoid populating span attributes to the logs
# Close span always
# the order matters
# See https://github.com/open-telemetry/opentelemetry-specification/issues/740
# ansible.builtin.uri contains the response in the json field
# ansible.builtin.slurp contains the response in the content field
# Copyright (c) Fastly, inc 2016
# Copyright (c) 2025, Max Mitschke <maxmitschke@fastmail.com>
# Copyright (c) 2019, Trevor Highfill <trevor.highfill@outlook.com>
# With Data Tagging, omit is sentinel
# remove arguments that reference a loop var because they cause templating issues in
# callbacks that do not have the loop context (e.g. playbook_on_task_start)
# not implemented as the call to this is not implemented yet
# Copyright (C) 2016 maxn nikolaev.makc@gmail.com
# Copyright (c) 2024, Felix Fontein <felix@fontein.de>
# Copyright (c) 2020, Yevhen Khmelenko <ujenmr@gmail.com>
# deprecated field
# This wraps the json payload in an outer json event needed by Splunk
# Collect task start times
# Copyright (c) 2024, kurokobo <kurokobo@protonmail.com>
# Copyright (c) 2014, Michael DeHaan <michael.dehaan@gmail.com>
# Store whether the zoneinfo module is available
# +1 for leading space
# Replace the banner method of the display object with the custom one
# Store zoneinfo for specified timezone if available
# Inject options into the display object
# Copyright (c) 2018 Remi Verchere <remi@verchere.fr>
# Nagios states
# Critical when failed tasks or unreachable host
# Warning when changed tasks
# Send Critical
# Send Warning
# Send OK
# Copyright (c) 2014-2015, Matt Martz <matt@sivel.net>
# This is a 6-character identifier provided with each message.
# It makes it easier to correlate messages when more than one
# playbook is running simultaneously
# Copyright (c) 2023, Al Bowles <@akatch>
# TODO how else can we display these?
# Only call this once, as early as possible.
# Copyright (c) 2015, Logentries.com, Jimmy Tang <jimmy.tang@logentries.com>
# Error message displayed when an incorrect Token has been detected
# Unicode Line separator character   \u2028
# Replace newlines with Unicode line separator
# for multi-line events
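The newline substitution is a simple replace; `one_line` is a hypothetical name for the sketch:

```python
def one_line(event):
    # The transport is line-oriented: swap newlines for the Unicode
    # line separator (U+2028) so a multi-line event stays one log line.
    return event.replace("\n", "\u2028")
```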
# Send data, reconnect if needed
# for systems without TLS support.
# TODO: allow for alternate posting methods (REST/UDP/agent/etc)
# verify dependencies
# FIXME: make configurable, move to options
# Build authorisation signature for Azure log analytics API call
# Removing args since it can contain sensitive data
# Adding extra vars info
# Prepare the playbook logs in JSON format and send them to Azure Log Analytics
# Copyright (C) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# Copyright (c) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# the 'say' binary is available, but it might be the GNUstep tool, which doesn't support the 'voice' parameter
# the plugin disables itself if 'say' is not present
# ansible will not call any callback if disabled is set to True
# NOTE: in Ansible 1.2 or later general logging is available without
# this plugin, just set ANSIBLE_LOG_PATH as an environment variable
# or log_path in the DEFAULTS section of your ansible configuration
# file.  This callback is an example of per hosts logging for those
# that want it.
# avoid logging extraneous data
# Copyright (c) 2012, Dag Wieers <dag@wieers.com>
# Add subject
# Unrelated exceptions are added to output :-/
# Make playbook name visible (e.g. in Outlook/Gmail condensed view)
# Add task information (as much as possible)
# Add item / message
# Add stdout / stderr / exception / warnings / deprecations
# Pass item information to task failure
# Copyright (c) 2018, Ivan Aragones Muniesa <ivan.aragones.muniesa@gmail.com>
# host and task need to be specified in case 'magic variables' (host vars, group vars, etc)
# need to be loaded as well
# print custom stats
# fallback on constants for inherited plugins missing docs
# from http://stackoverflow.com/a/15423007/115478
# we care more about readability than accuracy, so...
# ...no trailing space
# ...and non-printable characters
# ...tabs prevent blocks from expanding
# ...and odd bits of whitespace
# ...as does trailing space
# import below was added in https://github.com/ansible/ansible/pull/85039,
# first contained in ansible-core 2.19.0b2:
# In case transform_to_native_types cannot be imported, we either have ansible-core 2.19.0b1
# (or some random commit from the devel or stable-2.19 branch after merging the DT changes
# and before transform_to_native_types was added), or we have a version without the DT changes.
# Here we simply assume we have a version without the DT changes, and thus can continue as
# with ansible-core 2.18 and before.
# pylint: disable=inherit-non-class
# Since 2.19.0b7, this should no longer be needed:
# https://github.com/ansible/ansible/issues/85325
# https://github.com/ansible/ansible/pull/85389
# remove exception from screen output
# put changed and skipped into a header line
# indent by a couple of spaces
# Copyright (c) 2016, Dag Wieers <dag@wieers.com>
# Design goals:
# FIXME: Importing constants as C simply does not work, beats me :-/
# from ansible import constants as C
# Taken from Dstat
# From CallbackModule
# Attributes to remove from results for more density
# Initiate data structures
# Start immediately on the first line
# Add a new status in case a failed task is ignored
# Check if we have to update an existing state (when looping over items)
# Store delegated hostname, if needed
# Print progress bar
# Ensure that tasks with changes/failures stay on-screen
# Print task title, if needed
# Remove non-essential attributes
# Remove empty attributes (list, dict, str)
# Remove the exception from the result so it is not shown every time
# Always rewrite the complete line
# Print out each host in its own status-color
# Leave the previous task on screen (as it has changes/errors)
# Reset at the start of each play
# Write the next play on screen IN UPPERCASE, and make it permanent
# Do not clear line, since we want to retain the previous output
# Reset at the start of each task
# Enumerate task if not setup (task names are too long for dense output)
# Write the next task on screen (behind the prompt is the previous output)
# Reset at the start of each handler
# Enumerate handler if not setup (handler names may be too long for dense output)
# TBD
# Old definition in v2.0
# In normal mode screen output should be sufficient, summary is redundant
# When using -vv or higher, simply do the default action
# see https://github.com/ansible-collections/community.general/issues/6932
# See https://github.com/ansible/ansible/issues/81254,
# https://github.com/ansible/ansible/pull/78111
# See https://github.com/ansible-collections/community.general/issues/9977,
# FIXME: more accurate would be: 'doas (%s@' % remote_user
# however become plugins don't have that information currently
# Copyright (c) 2024, Ansible Project
# Prompt handling for ``ksu`` is more complicated, this
# Introduced with Data Tagging (https://github.com/ansible/ansible/pull/84621):
# The following types were introduced with Data Tagging (https://github.com/ansible/ansible/pull/84621):
# Copyright (c) 2021, Cliff Hults <cliff.hlts@gmail.com>
# Icinga2 API Call
# Manipulate returned API data to Ansible inventory spec
# When looking for address for inventory, if missing fallback to object name
# If the address attribute is populated, override ansible_host with the value
# Adds all attributes to a variable 'icinga2_attributes'
# Not currently enabled
# self.cache_key = self.get_cache_key(path)
# self.use_cache = cache and self.get_option('cache')
# Test connection to API
# Call our internal helper to populate the dynamic inventory
# set vars in inventory from hostvars
# create vars from vbox properties
# actually update inventory
# constructed keyed_groups
# needed to possibly set ansible_host
# skip entries that cannot be split
# skip empty
# found host
# some setting strings appear in Name
# try to get network info
# found groups
# found vars, accumulate in hostvars for clean inventory set
# The original implementation of this inventory plugin treated `/` as
# a delimiter to split on and use as Ansible groups.
# Per the VirtualBox documentation, a VM can be part of many groups,
# and it is possible to have nested groups.
# Multiple groups are separated by commas ",", and nested groups use
# slashes "/".
# https://www.virtualbox.org/manual/UserManual.html#gui-vmgroups
# Multi groups: VBoxManage modifyvm "vm01" --groups "/TestGroup,/TestGroup2"
# Nested groups: VBoxManage modifyvm "vm01" --groups "/TestGroup/TestGroup2"
# We could get an empty element due to how split works and how
# VirtualBox may assign groups, e.g. ,/Group1
# This is the "root" group. We get here if the VM was not
# assigned to a particular group. Consider the host to be
# unassigned to a group.
# Similarly to the above, we could get an empty element,
# e.g. //Group1
# "root" group.
# Consider the host to be unassigned
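The group parsing described above can be sketched as follows; `parse_groups` is a hypothetical name:

```python
def parse_groups(raw):
    # "/TestGroup,/TestGroup2" -> ["TestGroup", "TestGroup2"], and a
    # nested "/A/B" -> ["A", "B"]; empty elements from ",/G" or "//G"
    # and the bare root "/" are skipped (host stays ungrouped).
    return [part
            for entry in raw.split(",")
            for part in entry.split("/")
            if part]
```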
# set _options from config data
# start getting data
# Optionally, get the jail names from the properties notes.
# Requires the notes format "t1=v1 t2=v2 ..."
# Copyright (c) 2018, Stefan Heitmueller <stefan.heitmueller@gmx.com>
# Grouping by power state
# Grouping by host
# Grouping by pool
# Grouping VMs with an IP together
# Adding meta
# TODO: Refactor
# Prepare general groups
# Copyright (c) 2021, Frank Dornheim <dornheim@posteo.de>
# e.g. {'type': 'sync',
# e.g. {
# tuple(('instances','metadata/templates')) to get section in branch
# e.g. /1.0/instances/<name>/metadata/templates
# init
# the instance has network interfaces
# generator of interfaces which start with the desired pattern
# get network device configuration and store {network: vlan_id}
# get the network devices of the instance and return them
# "eth0":{ "name":"eth0",
# recursion
# create branch "inventory"
# name or None
# Only consider instances that match the "state" filter, if self.state is not None
# add instance
# add network information
# add os
# add release
# add profile
# add state
# add type
# add location information
# wrong type by lxd 'none' != 'None'
# add VLAN_ID information
# add project
# maybe we just want to expand one group
# Ignore invalid IP addresses returned by lxd
# For compatibility with python 2, map() is not used
# If no data is injected by unittests open socket
# The first version of the inventory only supported containers.
# This will change in the future.
# The following function cleans up the data.
# self.display.vvv(self.save_json_data([os.path.abspath(__file__)]))
# store for inventory-data
# none in config is str()
# Copyright (C) 2020 Orion Poplawski <orion@nwra.com>
# xmlrpc
# If more facts are requested, gather them all from Cobbler
# Create a hierarchy of profile names
# Add default group for this inventory if specified
# Get the FQDN for the host and add it to the right groups
# hostname is often empty for non-static IP hosts
# Add host to profile group
# Add host to groups specified by group_by fields
# Add to group for this inventory
# Add host variables
# Set to first interface or management interface if defined or hostname matches dns_name
# Collect all interface name mappings for adding to group vars
# Add ip_address to host if defined, use first if no management or matched dns_name
# If a server does not have a zone, it means it is archived
# If no filtering is defined, all tags are valid groups
# No suitable hostname was found in the attributes, so the host won't be in the inventory
# update if the user has caching enabled and the cache is being refreshed; update this value to True if the cache has expired below
# This occurs if the cache_key is not in the cache or if the cache_key expired, so the cache needs to be updated
# setup command
# execute
# parse results
# if dns only shows arpa, just use ip instead as hostname
# if no reverse dns exists, just use ip instead as hostname
# update inventory
# if any leftovers
# Copyright (c) 2020, FELDSAM s.r.o. - FeldHost™ <support@feldhost.cz>
# get hosts (VMs)
# iterate over hosts
# filter by label
# Add a top group 'one'
# check for labels
# handle constructable implementation: get composed variables if any
# groups based on jinja conditionals get added to specific groups
# groups based on variables associated with them in the inventory
# Check for None rather than False in order to allow
# for empty sets of cached instances
# Copyright (c) 2023, Vladimir Botka <vbotka@gmail.com>
# Copyright (c) 2022 Western Digital Corporation
# admin credentials used for authentication
# Check that Category is valid
# Check that all commands are valid
# Fail if even one command given is invalid
# Build root URI(s)
# Organize by Categories / Commands
# execute only if we find UpdateService resources
# Copyright (c) 2017-2018 Dell EMC Inc.
# update handle
# manager
# Build root URI
# Build Category list
# one or more categories specified
# Build Command list for each Category
# True if we don't specify a command --> use default
# one or more commands
# Verify that all commands are valid
# Fail if even one category given is invalid
# service-level commands are always available
# execute only if we find a Systems resource
# execute only if we find Chassis resource
# execute only if we find an Account service resource
# execute only if we find SessionService resources
# execute only if we find a Manager service resource
# Return data back
# Copyright (c) 2025, Klention Mali <klention@gmail.com>
# Based on lvol module by Jeroen Hoekx <jeroen.hoekx@dsquare.be>
# Determine parent device if partition exists
# Determine rescan path
# Validate device existence for present state
# Create PV if needed
# Handle resizing
# In check mode, assume resize would change
# Perform device rescan each time if requested
# Generate final message
# Copyright (c) 2021, Kris Budde <kris@budd.ee>
# Added param to set the transactional mode (true/false)
# If transactional mode is requested, start a transaction
# Process the script into batches
# Ignore the Byte Order Mark, if found
# Assume each 'GO' is on its own line but may have leading/trailing whitespace
# and be of mixed-case
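The batching rule described above can be sketched with a regex split. This is an illustrative helper (the function name and pattern are assumptions, not the module's actual code):

```python
import re

def split_sql_batches(script):
    """Split a T-SQL script into batches at lines that contain only 'GO',
    allowing leading/trailing whitespace and mixed case."""
    batches = re.split(r'(?im)^\s*GO\s*$', script)
    # Drop empty batches left by consecutive separators or a trailing GO
    return [b.strip() for b in batches if b.strip()]
```

The `(?im)` flags make the match case-insensitive and anchor `^`/`$` at line boundaries, so an indented `go` still acts as a separator.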
# Catch and exit on any bad query errors
# We know we executed the statement so this error just means we have no resultset
# which is ok (eg UPDATE/INSERT)
# Rollback transaction before failing the module in case of error
# Commit transaction before exiting the module in case of no error
# ensure that the result is json serializable
# Copyright (c) 2016, Werner Dijkerman (ikben@werner-dijkerman.nl)
# Copyright (c) 2024, Stanislav Shamilov <shamilovstas@protonmail.com>
# Scaleway volumes management module
# Copyright (C) 2018 Henryk Konsek Consulting (hekonsek@gmail.com).
# IPA does not allow modifying a Sub CA's subject DN
# So skip it for now.
# Copyright (c) 2018, Simon Weald <ansible@simonweald.com>
# the reload job was submitted but polling failed. Don't return this as an overall task failure.
# this is the first time the API is called; incorrect credentials will
# manifest themselves at this point so we need to ensure the user is
# informed of the reason.
# set changed to true if the reload request was accepted.
# empty msg var as we don't want to return the API's json response twice.
# hand off to the poll function.
# assemble return variables.
# populate the dict with the user-provided vars.
# Copyright (c) 2017, Milan Ilic <milani@nordeus.com>
# fail if there are more services with the same name
# make sure that the values in custom_attrs dict are strings
# if check_mode=true and there would be changes, the service doesn't exist and we cannot get it
# if something has changed, fetch service info again
# Instantiate a service
# The task should be failed when we want to manage a non-existent service identified by its name
# Copyright (c) 2021, Alexei Znamensky (@russoz) <russoz@gmail.com>
# Copyright (c) 2013, Matthias Vogelgesang <matthias.vogelgesang@gmail.com>
# Copyright 2014, Max Riveiro, <kavu13@gmail.com>
# Copyright (c) 2021, quidame <quidame@poivron.org>
# These are the multiplicative suffixes understood (or returned) by dd and
# others (ls, df, lvresize, lsblk...).
# Basically, for a file of 8kB (=8000B), system's block size of 4096 bytes
# is not usable. The smallest integer number of kB to work with 512B blocks
# is 64, the next ones are 128, 192, 256, and so on.
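The arithmetic above (finding a block size that yields an integer count for dd) can be sketched as follows; the helper name is illustrative, not the module's own:

```python
import math

def dd_parameters(total_bytes, preferred_bs):
    """Pick a block size that divides total_bytes evenly, preferring
    preferred_bs (e.g. the filesystem's st_blksize); otherwise fall back
    to the greatest common divisor so dd's count stays an integer."""
    if total_bytes % preferred_bs == 0:
        bs = preferred_bs
    else:
        bs = math.gcd(total_bytes, preferred_bs)
    return bs, total_bytes // bs

# An 8 kB (8000 B) file cannot use a 4096 B block size directly:
# gcd(8000, 4096) = 64, so dd would run with bs=64 count=125.
```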
# For sparse files (create, truncate, grow): write count=0 block.
# Create file
# Truncate file
# Grow file
# dd follows symlinks, and so does this module, while file module doesn't.
# If we call it, this is to manage file's mode, owner and so on, not the
# symlink's ones.
# Copyright (c) 2013, Daniel Jaouen <dcj24@cornell.edu>
# Copyright (c) 2016, Indrajit Raychaudhuri <irc+code@indrajit.com>
# exceptions -------------------------------------------------------------- {{{
# /exceptions ------------------------------------------------------------- }}}
# utils ------------------------------------------------------------------- {{{
# /utils ------------------------------------------------------------------ }}}
# class regexes ------------------------------------------------ {{{
# /class regexes ----------------------------------------------- }}}
# class validations -------------------------------------------- {{{
# /class validations ------------------------------------------- }}}
# class properties --------------------------------------------- {{{
# /class properties -------------------------------------------- }}}
# prep --------------------------------------------------------- {{{
# /prep -------------------------------------------------------- }}}
# checks ------------------------------------------------------- {{{
# The `brew cask` replacements were fully available in 2.6.0 (https://brew.sh/2020/12/01/homebrew-2.6.0/)
# /checks ------------------------------------------------------ }}}
# commands ----------------------------------------------------- {{{
# sudo_password fix ---------------------- {{{
# /sudo_password fix --------------------- }}}
# updated -------------------------------- {{{
# /updated ------------------------------- }}}
# _upgrade_all --------------------------- {{{
# 'brew upgrade --cask' does not output anything if no casks are upgraded
# handle legacy 'brew cask upgrade'
# /_upgrade_all -------------------------- }}}
# installed ------------------------------ {{{
# /installed ----------------------------- }}}
# upgraded ------------------------------- {{{
# /upgraded ------------------------------ }}}
# uninstalled ---------------------------- {{{
# /uninstalled --------------------------- }}}
# /commands ---------------------------------------------------- }}}
# Copyright (c) 2012, Franck Cuny <franck@lumberjaph.net>
# Copyright (c) 2021, Alexei Znamensky <russoz@gmail.com>
# Copyright (c) 2018, Sean Myers <sean.myers@redhat.com>
# Matches release-like values such as 7.2, 5.10, 6Server, 8
# but rejects unlikely values, like 100Server, 1.100, 7server etc.
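A pattern of that shape could look like the following. This is a hedged sketch of the described behavior, not necessarily the module's exact regex:

```python
import re

# Accepts 7.2, 5.10, 6Server, 8; rejects 100Server, 1.100, 7server:
# one- or two-digit major, optional one- or two-digit minor, or an
# "NServer" form with a capital S.
RELEASE_RE = re.compile(r'^(?:\d{1,2}(?:\.\d{1,2})?|\d{1,2}Server)$')

def looks_like_release(value):
    return RELEASE_RE.match(value) is not None
```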
# pass args to s-m release, e.g. _sm_release(module, '--set', '0.1') becomes
# "subscription-manager release --set 0.1"
# delegate nonzero rc handling to run_command
# Get the current release version, or None if release unset
# 0'th index did not exist; no matches
# Set current release version, or unset if release is None
# sanity check: the target release at least looks like a valid release
# Will fail with useful error from s-m if system not subscribed
# If setting the release fails, then a fail_json would have exited with
# the s-m error, e.g. "No releases match '7.20'...".  If not, then the
# current release is now set to the target release (job's done)
# Copyright (c) 2013, Jan-Piet Mens <jpmens () gmail.com>
# IRC module support methods.
# Supported since Python 2.7.13
# The server might send back a shorter nick than we specified (due to NICKLEN),
# Copyright (c) 2018, Martin Migasiewicz <migasiew.nk@gmail.com>
# Check if readPlist is available or not
# readPlist is deprecated in Python 3 and onwards
# writePlist is deprecated in Python 3 and onwards
# Launchctl does not expose functionality to set the RunAtLoad
# attribute of a job definition. So we parse and modify the job
# definition plist file directly for this purpose.
# Update the plist with one of the changes done.
# Set KeepAlive to false in case force_stop is defined to avoid
# that the service gets restarted when stopping was requested.
# From launchctl man page:
# If the number [...] is negative, it represents  the
# negative of the signal which killed the job.  Thus,
# "-15" would indicate that the job was terminated with
# SIGTERM.
# Something strange happened and we have no clue in
# which state the service is now. Therefore we mark
# the service state as UNKNOWN.
# PID seems to be an integer so we assume the service
# is started.
# Exit code is 0 and PID is not available so we assume
# the service is stopped.
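The state-determination logic in the comments above can be sketched like this (hypothetical helper; it assumes pid and the last exit status have already been parsed from launchctl output):

```python
def interpret_job_state(pid, last_exit_code):
    """Derive a service state from launchctl job status.
    Per the launchctl man page, a negative exit code is the negative
    of the signal that killed the job, e.g. -15 means SIGTERM."""
    if pid is not None:
        # PID is an integer, so we assume the service is started
        return 'started'
    if last_exit_code == 0:
        # Exit code 0 and no PID, so we assume the service is stopped
        return 'stopped'
    if last_exit_code is not None and last_exit_code < 0:
        # Killed by signal abs(last_exit_code); not running anymore
        return 'stopped'
    # Something strange happened; mark the state as unknown
    return 'unknown'
```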
# Unfortunately launchd does not wait until the process really started.
# Unfortunately launchd does not wait until the process really stopped.
# TODO: check for rc, out, err
# In case the service is already in started state but the
# job definition was changed we need to unload/load the
# service and start the service again.
# We are in an unknown state, let's try to reload the config
# and start the service again.
# In case the service is stopped and we might later decide
# to start it, we need to reload the job definition by
# forcing an unload and load first.
# Afterwards we need to stop it as it might have been
# started again (KeepAlive or RunAtLoad).
# and stop the service gracefully.
# launchd throws an error if we do an unload on an already
# unloaded service.
# Do nothing, the list functionality is done by the
# base class run method.
# We will tailor the plist file in case one of the options
# (enabled, force_stop) was specified.
# Gather information about the service to be controlled.
# Map the actions to specific tasks
# Run the requested task
# This module is proudly sponsored by CGI (www.cgi.com) and
# KPN (www.kpn.com).
# Template attribute is not allowed in modification
# Copyright (c) 2024, Max Maxopoly <max@dermax.org>
# Set LANG env since we parse stdout
# ttl is not required to change records
# Function to validate file paths exist on disk
# Gets the Jenkins crumb for CSRF protection which is required for API calls
# Cookie is needed to generate API token
# Set the crumb in headers
# Set Content-Type for form data
# Set session cookie for token operations
# Return for test purposes
# Function to clean the data sent via API by removing unwanted keys and None values
# Keys to remove (including those with None values)
# Filter out None values and unwanted keys
# Function to check if credentials/domain exists
# Can't check token
# Function to delete the scope or credential provided
# Function to read the private key for types texts and ssh_key
# Function to build the multipart form-data body and content-type header for file credential upload
# Main function to run the Ansible module
# Scope specifications parameters
# Get the crumb for CSRF protection
# Check if the credential/domain doesn't exist and the user wants to delete
# If updating, we need to delete the existing credential/domain first based on force parameter
# Create a domain in Jenkins
# Build multipart body and content-type
# PEM mode
# Delete
# Check if custom scope exists if adding to a custom scope
# Copyright (c) 2024, Zoran Krleza (zoran.krleza@true-north.hr)
# Based on code:
# Copyright (c) 2019, Guillaume Martinez (lunik@tiwabbit.fr)
# Copyright (c) 2018, Marcus Watkins <marwatk@marcuswatkins.net>
# Copyright (c) 2013, Phillip Gentry <phillip@cx.com>
# Copyright (c) 2013, bleader
# Written by bleader <bleader@ratonland.org>
# Based on pkgin module written by Shaun Zinck <shaun.zinck at gmail.com>
# that was based on pacman module written by Afterburn <https://github.com/afterburn>
# Check to see if a package upgrade is available.
# rc = 0, no updates available or package not installed
# rc = 1, updates available
# Run a 'pkg upgrade', updating all packages.
# Using a for loop so that, in case of error, we can report the package that failed
# Query the package first, to see if we even need to remove
# install/upgrade all named packages with one pkg command
# Do nothing, but count up how many actions
# would be performed so that the changed/msg
# is correct.
# individually verify packages are in requested state
# Annotation does not exist, add it.
# Annotation exists, but value differs
# Annotation exists, nothing to do
# No such tag
# No change in value
# pkg sometimes exits with rc == 1, even though the modification succeeded
# Check the output for a success message
# Split on commas with optional trailing whitespace,
# to support the old style of multiple annotations
# on a single line, rather than YAML list syntax
# Note to future maintainers: A dash (-) in a regex character class ([-+:] below)
# must appear as the first character in the class, or it will be interpreted
# as a range of characters.
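Both notes above can be shown in a couple of lines; the strings and names are illustrative:

```python
import re

# Old single-line style: split on commas with optional trailing whitespace
tags = re.split(r',\s*', 'repo=FreeBSD, note=keep,owner=ops')
# tags == ['repo=FreeBSD', 'note=keep', 'owner=ops']

# The dash comes first in the class below; written as [+-:] it would
# denote the character range '+'..':' instead of three literal characters.
operation_re = re.compile(r'^([-+:])(\w+)')
```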
# as of pkg-1.1.4, PACKAGESITE is deprecated in favor of repository definitions
# in /usr/local/etc/pkg/repos
# If environ_update is specified to be "passed through"
# to module.run_command, then merge its values into pkgng_env
# Operate on all installed packages. Only state: latest makes sense here.
# Operate on named packages
# The documentation used to show multiple packages specified in one line
# with comma or space delimiters. That doesn't result in a YAML list, and
# wrong actions (install vs upgrade) can be reported if those
# comma- or space-delimited strings make it to the pkg command line.
# Copyright (c) 2016, Loic Blot <loic.blot@unix-experience.fr>
# Sponsored by Infopro Digital. http://www.infopro-digital.com/
# Sponsored by E.T.A.I. http://www.etai.fr/
# If host was not found using macaddr, add create message
# Forge update message
# Name cannot be changed
# Copyright (c) 2018 Nicolai Buchwitz <nb@tipi-net.de>
# try to get existing record
# module specific functions
# module logic
# avoid catching this on python 2.4
# Copyright (c) 2016, Ben Doherty <bendohmv@gmail.com>
# Sponsored by Oomph, Inc. http://www.oomphinc.com
# python 3.2+
# older python
# python3 tarfile module allows xz format but for python2 we have to create the tarfile
# in memory and then compress it with lzma.
# Just picking another exception that's also listed below
# The python implementations of gzip, bz2, and lzma do not support restoring compressed files
# to their original names so only file checksum is returned
# Copyright (c) 2021, Christophe Gilles <christophe.gilles54@gmail.com>
# Avoid the dict passed in to be modified
# Obtain access token, initialize API
# convert module parameters to realm representation parameters (if they belong in there)
# Filter and map the parameters names that apply to the role
# See whether the realm already exists in Keycloak
# Build a proposed changeset from parameters given to this module
# Prepare the desired values using the existing values (non-existence results in a dict that is safe to use as a basis)
# Cater for when it doesn't exist (an empty dict)
# Do nothing and exit
# Process a creation
# create it
# Process an update
# doing an update
# We can only compare the current realm with the proposed updates we have
# do the update
# Process a deletion (because state was not 'present')
# delete it
# Copyright (c) 2016-2017 Hewlett Packard Enterprise Development LP
# Copyright (C) 2019 Huawei
# Copyright (c) 2017, Giovanni Sciortino (@giovannisciortino)
# ignore lines that are:
# - empty
# - "+---------[...]" -- i.e. header
# - "    Available Repositories [...]" -- i.e. header
# Update current_repo_list to return it as result variable
# Disable all enabled repos on the system that are not in the task and not
# marked as disabled by the task
# Manually set the cn to the global policy because pwpolicy_find will return an arbitrary
# different policy if cn is `None`
# Copyright (c) 2020, Florent Madiot (scodeman@scode.io)
# Copyright (c) 2019, Markus Bergholz (markuman@gmail.com)
# we need to do this because it was determined this way in a previous version - more or less buggy
# basically it is not necessary and might result in more/other bugs!
# but it is required, and only relevant, for check mode!!
# logic represents state 'present' when not purge. all other can be derived from that
# untouched => equal in both
# updated => name and scope are equal
# added => name and scope do not exist
# refetch and filter
# value, type, and description do not matter on removing variables.
# please mind whenever changing the variables dict to also change module_utils/gitlab.py's
# KNOWN dict in filter_returned_variables or bad things will happen
# check prerequisites and connect to gitlab server
# postprocessing
# Copyright (c) 2016, Adfinis SyGroup AG
# Tobias Rueetschi <tobias.ruetschi@adfinis-sygroup.ch>
# handle some special values
# Copyright (c) 2013, Evgenii Terechkov
# Written by Evgenii Terechkov <evg@altlinux.org>
# Based on urpmi module written by Philippe Makowski <philippem@mageia.org>
# rpm -q returns 0 if the package is installed,
# 1 if it is not installed
# compare installed and candidate version
# if newest version already installed return True
# otherwise return False
# Likely a local RPM file
# apt-rpm always have 0 for exit code if --force is used
# Return total modification status and output of all commands
# Copyright (c) 2022, Ansible Project
# Copyright (c) 2022, VMware, Inc. All Rights Reserved.
# The upper dir exists, so we only add a subdirectory
# For ISO with Rock Ridge 1.09 / 1.10, it won't overwrite the existing file
# So we use a workaround here: delete the existing file and then add the file
# For ISO with UDF, it won't always succeed to overwrite the existing file
# Check the parameters
# Get the potential missing parameters
# Fetch missing role_id
# Fetch missing role_name
# Fetch roles to assign if state present
# Fetch roles to remove if state absent
# Handle double removal
# Assign roles
# Remove mapping of role
# Copyright (c) 2013 Shaun Zinck <shaun.zinck at gmail.com>
# Copyright (c) 2015 Lawrence Leonard Gilbert <larry@L2G.to>
# Copyright (c) 2016 Jasper Lievisse Adriaanse <j at jasper.la>
# Written by Shaun Zinck
# Based on pacman module written by Afterburn <http://github.com/afterburn>
# test whether '-p' (parsable) flag is supported.
# Use "pkgin search" to find the package. The regular expression will
# only match on the complete name.
# rc will not be 0 unless the search was a success
# Search results may contain more than one line (e.g., 'emacs'), so iterate
# through each line to see if we have a match.
# Break up line at spaces.  The first part will be the package with its
# version (e.g. 'gcc47-libs-4.7.2nb4'), and the second will be the state
# of the package:
# Search for package, stripping version
# (results in something like 'gcc47-libs' or 'emacs24-nox11')
# Do not proceed unless we have a match
# Grab matched string
# The package was found; now return its state
# Package found but not installed
# no fall-through
# No packages were matched
# Search failed
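The match-and-strip logic outlined above can be sketched as follows (a hedged sketch; the state characters and regex are assumptions about pkgin's output, and the helper name is hypothetical):

```python
import re

def query_state(line, name):
    """Check one 'pkgin search' result line for an exact package match.
    Returns the state flag from the second field (e.g. '=' installed,
    '<' older installed, '>' newer installed), or None if no match."""
    fields = line.split()
    if len(fields) < 2:
        return None
    # Strip the trailing '-<version>' (e.g. 'gcc47-libs-4.7.2nb4')
    m = re.match(r'(.+)-[0-9][^-]*$', fields[0])
    pkgname = m.group(1) if m else fields[0]
    if pkgname != name:
        return None
    return fields[1]
```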
# Not all commands take a package argument, so cover this up by passing
# an empty string. Some commands (e.g. 'update') will ignore extra
# arguments, however this behaviour cannot be relied on for others.
# There's no indication if 'clean' actually removed anything,
# so assume it did.
# Copyright 2015 WP Engine, Inc. All rights reserved.
# Copyright (c) 2024, Alexander Bakanovskii <skottttt228@gmail.com>
# -2 means "Resources belonging to all users"
# the other two parameters are used for pagination, -1 for both essentially means "return all"
# These params will always be present
# These are optional so firstly check for presence
# and if not present set value to Null
# -1 means that network won't be added to any cluster which happens by default
# 0 = replace the whole template
# Unfortunately it is not easy to detect if the template would have changed, therefore always report a change here.
# if the previous parsed template data is not equal to the updated one, this has changed
# Copyright (c) 2017, Yaacov Zamir <yzamir@redhat.com>
# add the manageiq connection arguments to the arguments
# get the action and resource type
# assign or unassign the profiles
# Copyright (c) 2022, Alexei Znamensky <russoz@gmail.com>
# Scaleway SSH keys management module
# Copyright (C) 2018 Online SAS.
# https://www.scaleway.com
# If key not found create it!
# Recursively remove null values from dictionaries
# Recursively remove null values from lists
# Return the data if it is neither a dictionary nor a list
# Convert keys to camelCase and apply recursively
# Apply camelCase conversion to each item in the list
# Return the data as-is if it is not a dict or list
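The recursive cleanup and key conversion described above could be sketched as follows (illustrative helpers, not the module's actual code):

```python
def remove_nulls(data):
    # Recursively remove null values from dictionaries and lists;
    # return anything else unchanged.
    if isinstance(data, dict):
        return {k: remove_nulls(v) for k, v in data.items() if v is not None}
    if isinstance(data, list):
        return [remove_nulls(v) for v in data if v is not None]
    return data

def snake_to_camel(name):
    head, *rest = name.split('_')
    return head + ''.join(word.capitalize() for word in rest)

def camelize_keys(data):
    # Convert dict keys to camelCase, recursing into nested containers
    if isinstance(data, dict):
        return {snake_to_camel(k): camelize_keys(v) for k, v in data.items()}
    if isinstance(data, list):
        return [camelize_keys(v) for v in data]
    return data
```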
# Initialize the result object. Only "changed" seems to have special
# meaning for Ansible.
# This will include the current state of the realm userprofile if it is already
# present. This is only used for diff-mode.
# Build the changeset with proper JSON serialization for kc_user_profile_config
# Generate a JSON payload for Keycloak Admin API from the module
# parameters.  Parameters that do not belong to the JSON payload (e.g.
# "state" or "auth_keycloak_url") have been filtered away earlier (see
# This loop converts Ansible module parameters (snake-case) into
# Keycloak-compatible format (camel-case). For example provider_id
# becomes providerId. It also handles some special cases, e.g. aliases.
# realm/parent_id parameter
# complex parameters in config suboptions
# special parameter kc_user_profile_config
# rename parameter to be accepted by Keycloak API
# make sure no null values are passed to Keycloak API
# convert aliases to camelCase
# rename validations to be accepted by Keycloak API
# usual camelCase parameters
# Directly use the raw value
# usual parameters
# Make it easier to refer to current module parameters
# Make a deep copy of the changeset. This is used when determining
# changes to the current state.
# Get a list of all Keycloak components that are of userprofile provider type.
# If this component is present get its userprofile ID. Confusingly the userprofile ID is
# also known as the Provider ID.
# Track individual parameter changes
# This tells Ansible whether the userprofile was changed (added, removed, modified)
# Loop through the list of components. If we encounter a component whose
# name matches the value of the name parameter then assume the userprofile is
# already present.
# keycloak returns kc.user.profile.config as a single JSON formatted string, so we have to deserialize it
# Compare top-level parameters
# Compare parameters under the "config" userprofile
# Check all the possible states of the resource and do what is needed to
# converge current state with desired state (create, update or delete
# the userprofile).
# keycloak expects kc.user.profile.config as a single JSON formatted string, so we have to serialize it
# Copyright (c) 2015, Brian Coca <bcoca@ansible.com>
# full_state *may* contain information about the logger:
# "down: /etc/service/service-without-logger: 1s, normally up\n"
# "down: /etc/service/updater: 127s, normally up; run: log: (pid 364) 263439s\n"
# Copyright (c) 2014, Chris Schmidt <chris.schmidt () contrastsecurity.com>
# Built using https://github.com/hamnis/useful-scripts/blob/master/python/download-maven-artifact
# as a reference and starting point.
# This means that the version string is not valid semantic versioning
# example -> (,1.0]
# example -> 1.0
# example -> [1.0]
# example -> [1.2, 1.3]
# example -> [1.2, 1.3)
# example -> [1.5,)
# To deal with repos on Maven that don't have a patch number on the first build (e.g. 3.8 instead of 3.8.0)
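The interval notation listed above can be parsed with a small helper. This is a hedged sketch under the assumption that a range is either a bare version, an exact pin, or a bracketed interval; it does not validate version syntax:

```python
def parse_version_range(spec):
    """Parse a Maven-style version range into
    (low, low_inclusive, high, high_inclusive).
    A bare '1.0' is a soft requirement; '[1.0]' pins exactly;
    '(,1.0]' means x <= 1.0; '[1.5,)' means x >= 1.5."""
    if not spec.startswith(('[', '(')):
        return (spec, True, spec, True)       # soft requirement: 1.0
    low_inc, high_inc = spec[0] == '[', spec[-1] == ']'
    body = spec[1:-1]
    if ',' not in body:
        return (body, True, body, True)       # exact pin: [1.0]
    low, high = (part.strip() or None for part in body.split(',', 1))
    return (low, low_inc, high, high_inc)
```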
# for small files, directly get the full content
# only for HTTP request
# Hack to add parameters in the way that fetch_url expects
# copy to temp file
# if verify_change was set, the previous file would be deleted
# all good, now copy temp file to target
# Check if remote checksum only contains md5/sha1 or md5/sha1 + filename
# remote_checksum is empty so we continue and keep original checksum string
# This should not happen since we check for remote_checksum before
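Handling both checksum-file layouts (a bare digest, or a digest followed by a filename) amounts to keeping the first token; a minimal sketch with an illustrative name:

```python
def extract_checksum(content):
    """A remote .md5/.sha1 file may hold just the hex digest or
    'digest  filename'; keep only the first whitespace-separated token.
    An empty file yields None so the caller can keep the original string."""
    tokens = content.split()
    return tokens[0] if tokens else None
```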
# if the user did not supply unredirected params, we use the default
# More will be added as module features are expanded
# user to add/modify/delete
# System, Manager or Chassis ID to modify
# update options
# Boot override options
# VirtualMedia options
# Etag options
# BIOS Attributes options
# If a password change is required and the user is attempting to
# modify their password, try to proceed.
# execute only if we find a System resource
# Check if more than one led_command is present
# standardize on the Power* commands, but allow the legacy
# GracefulRestart command
# Return data back or fail with proper message
# Copyright (c) 2024, Michael Ilg
# line format is "ip:port,target_portal_group_tag targetname"
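Parsing the line format noted above can be sketched as follows (hypothetical helper; field handling is an assumption based on the stated format):

```python
def parse_target_line(line):
    """Parse 'ip:port,target_portal_group_tag targetname' into its parts."""
    endpoint, target = line.split(' ', 1)
    host_port, tpgt = endpoint.split(',', 1)
    ip, port = host_port.rsplit(':', 1)   # rsplit keeps colons inside the host part intact
    return ip, int(port), int(tpgt), target.strip()
```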
# older versions of iscsiadm don't have nice return codes
# for newer versions see iscsiadm(8); also usr/iscsiadm.c for details
# err can contain [N|n]o records...
# if anyone knows a better way to find out which devicenodes get created for
# a given target...
# exclude partitions
# only add once (multi-path?)
# load ansible module object
# target
# return json dict
# Disable strict return code checking if there are multiple targets
# That allows skipping targets where we have no rights to log in
# check given target is in cache
# give udev some time
# Check if there are multiple targets on a single portal and
# do not mark the task changed if host could not login to one of them
# Copyright (c) 2020, Andrew Klaus <andrewklaus@gmail.com>
# Setup command flags
# Force only applies to snapshots
# release flag
# installurl must be the last argument
# Copyright (c) 2013, Jeroen Hoekx <jeroen.hoekx@dsquare.be>, Alexander Bulimov <lazywolf0@gmail.com>
# make sure we use the C locale when running lvol-related commands
# Determine if the "--yes" option should be used
# First LVM version with the "--yes" option
# Add --test option when running in check-mode
# LVEXTEND(8)/LVREDUCE(8) -l, -L options: Check for relative value for resizing
# LVCREATE(8) does not support [+-]
# LVCREATE(8)/LVEXTEND(8)/LVREDUCE(8) -l --extents option with percentage
# LVCREATE(8)/LVEXTEND(8)/LVREDUCE(8) -L --size option unit
# when no unit, megabytes by default
# Get information on volume group requested
# Get information on logical volume requested
# Check snapshot pre-conditions
# Check thin volume pre-conditions
# Require size argument except for snapshot of thin volumes
# create LV
# remove LV
# Resize LV based on % value
# size_whole == 'FREE':
# According to latest documentation (LVM2-2.03.11) all tools round down
# more than an extent too large
# resize LV based on absolute values
# Copyright (c) 2023, Pedro Nascimento <apecnascimento@gmail.com>
# Copyright (c) 2015, Matt Makai <matthew.makai@gmail.com>
# =======================================
# sendgrid module support methods
# Remove this check when adding Sendgrid API v3 support
# Copyright (c) 2017, Kairo Araujo <kairo@kairo.eti.br>
# Validate if package exists on repository path.
# If package exists on repository path, check if package is installed.
# If package is already installed.
# Check if package is a package and not a fileset, get version
# and add the package into already installed list
# If the package is not a package but a fileset, confirm
# and add the fileset/package into already installed list
# Grab existing users from this org
# Check if the pritunl user already exists
# Compare remote user params with local user_params and trigger update if needed
# When a param is not specified grab existing ones to prevent from changing it with the PUT request
# 'groups' and 'mac_addresses' are list comparison
# otherwise it is either a boolean or a string
# Trigger a PUT on the API to update the current user if settings have changed
# Check if the pritunl user exists, if not, do nothing
# Otherwise remove the org from Pritunl
# zone domain length must be less than 250 chars.
# set changed to true if the operation would cause a change.
# update the zone if the desired TTL is different.
# populate return var with zone info.
# we need to fail out if force was not explicitly set.
# return raw JSON from API in named var and then unset msg var so we aren't returning the same thing twice.
# zone names are not unique, so we cannot safely delete the requested
# zone at this time.
# get the zones and check if the relevant zone exists.
# validate some API-specific limitations.
# Copyright (c) 2023, Gabriele Pongelli (gabriele.pongelli@gmail.com)
# save returns None
# delete returns None
# basically it is not necessary and might result in more/other bugs!
# filter out and enrich before compare
# add defaults when not present
# group label does not have priority, removing for comparison
# remove field only from server
# field present only when it is a project's label
# create raises an exception with the following error message when the label already exists
# re-fetch
# find_project can return None, but the other must exist
# find_group can return None, but the other must exist
# if both are not found, the module must exit
# color is mandatory when creating label, but it is optional when changing name or updating other fields
# Copyright (c) 2020, quidame <quidame@poivron.org>
# Populate a temporary file
# Prepare to copy temporary file to the final destination
# Do it
# The last resort.
# We'll parse iptables-restore stderr
# The issue comes when wanting to restore state from an empty iptables-save
# output... what happens when, say:
# - no table is specified, and iptables-save's output is only nat table;
# - we give filter's ruleset to iptables-restore, which locks us out
# then trying to roll iptables state back to the previous (working) setup
# doesn't override current filter table because no filter table is stored
# in the backup! So we have to ensure tables to be restored have a backup
# in case of rollback.
# Depending on the value of 'table', initref_state may differ from
# initial_state.
# All remaining code is for state=restored
# Due to a bug in iptables-nft-restore --test, we have to validate tables
# one by one (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=960003).
# Leave enough time for the plugin to retrieve the async status of the module
# in case of bad option type/value and the like.
# Would initialize a table, which doesn't exist yet
# Content of some table changes
# The rollback implementation currently needs:
# Here:
# * test existence of the backup file, exit with success if it doesn't exist
# * otherwise, restore iptables from this file and return failure
# Action plugin:
# * try to remove the backup file
# * wait until the async task is finished and retrieve its final status
# * modify it and return the result
# Task:
# * task attribute 'async' set to the same value (or lower) than ansible
# * task attribute 'poll' equals 0
# Here we are: for whatever reason, but probably due to the current ruleset,
# the action plugin (i.e. on the controller) was unable to remove the backup
# cookie, so we restore initial state from it.
# Delay for 1 sec
# Delay for 5 sec to not harass the API
# Connect to OVH API
# Check that the load balancing exists
# Check that no task is pending before going on
# Move the IP and get the created taskId
# Just wait for the given taskId to be completed
# Copyright 2016 Dino Occhialini <dino.occhialini@gmail.com>
# package is not installed locally
# avoid loops by not trying self-upgrade again
# avoid loops by not trying self-update again
# normalize the state parameter
# Copyright (c) 2019-2020, Andrew Klaus <andrewklaus@gmail.com>
# define available arguments/parameters a user can pass to the module
# Set safe defaults for run_flag and check_flag
# Run check command
# Changes pending
# No changes pending
# Workaround syspatch ln bug:
# http://openbsd-archive.7691.n7.nabble.com/Warning-applying-latest-syspatch-td354250.html
# Kernel update applied
# If no stdout, then warn user
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or
# Convenience variables
# Sanitize required actions
# Filter out duplicate required actions
# Get required actions
# Initialize empty lists to hold the required actions that need to be
# registered or updated, plus the original versions of the updated ones
# Loop through the desired required actions and check if they exist in the before required actions
# Loop through the before required actions and check if the aliases match
# Fill in the parameters
# Loop through the keys of the desired and before required actions
# and check if there are any differences between them
# If there are differences, add the before and desired required actions
# to their respective lists for updating
# If the desired required action is not found in the before required actions,
# add it to the list of required actions to register
# Check if name is provided
# Check if provider ID is provided
# Handle diff
# Handle changed
# Register required actions
# Update required actions
# Initialize the final list of required actions
# Iterate over the before_required_actions
# Check if there is an updated_required_action with the same alias
# Merge the two dictionaries, favoring the values from updated_required_action
# Add the merged dictionary to the final list of required actions
# Mark the updated_required_action as found
# Stop looking for updated_required_action
# If no matching updated_required_action was found, add the before_required_action to the final list of required actions
# Append any remaining updated_required_actions that were not merged
# Append newly registered required actions
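The merge walk described above (match by alias, updated values win, then append unmatched updated actions and newly registered ones) could be sketched like this; the helper name and dict keys are illustrative:

```python
def merge_required_actions(before, updated, registered):
    """Build the final list of required actions after an update (sketch).

    Each before-action is merged with the updated action sharing its alias
    (updated values win); updated actions without a match and newly
    registered actions are appended at the end.
    """
    final, merged_aliases = [], set()
    for before_action in before:
        match = next(
            (u for u in updated if u.get("alias") == before_action.get("alias")),
            None,
        )
        if match is not None:
            # Merge the two dictionaries, favoring values from the updated action
            combined = dict(before_action)
            combined.update(match)
            final.append(combined)
            merged_aliases.add(match.get("alias"))
        else:
            # No matching update; keep the before-action as-is
            final.append(before_action)
    # Append any remaining updated actions that were not merged
    final.extend(u for u in updated if u.get("alias") not in merged_aliases)
    # Append newly registered required actions
    final.extend(registered)
    return final
```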
# Handle message and end state
# Filter out the deleted required actions
# Delete required actions
# Copyright (c) 2024, Ryan Cook <rcook@redhat.com>
# Copyright (c) 2013, RSD Services S.A
# try DER-encoded file
# this time it is a real failure
# clean up file before comparing
# Append optional alias
# For Java's nonProxyHosts property, items are separated by '|',
# and patterns have to start with "*".
# The property name is http.nonProxyHosts, there is no
# separate setting for HTTPS.
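A sketch of the conversion those comments describe, turning a list of proxy-exempt hosts into Java's `http.nonProxyHosts` value ('|'-separated, suffix patterns starting with "*"); the helper name and exact pattern handling are illustrative:

```python
def to_non_proxy_hosts(hosts):
    """Convert proxy-exempt hosts to Java's http.nonProxyHosts format (sketch).

    Items are joined with '|'; a leading-dot suffix pattern such as
    '.example.com' becomes the Java wildcard form '*.example.com'.
    """
    items = []
    for host in hosts:
        host = host.strip()
        if host.startswith("."):
            # Java expects wildcard patterns like '*.example.com'
            host = "*" + host
        items.append(host)
    return "|".join(items)
```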
# Fetch SSL certificate from remote host.
# Append optional aliases
# Password of a new keystore must be entered twice, for confirmation
# Use local certificate from local path and import it to a java keystore
# Delete SSL certificate from keystore
# Keystore doesn't exist; we want to create it
# openssl dependency resolution
# dump certificate to enroll in the keystore on disk and compute digest
# The alias exists in the keystore so we must now compare the SHA256 hash of the
# public certificate already in the keystore with the certificate we want to add
# Extracting certificate with openssl
# Extracting the X509 digest is a bit easier. Keytool will print the PEM
# certificate to stdout so we don't need to do any transformations.
# Getting the X509 digest from a URL is the same as from a path, we just have
# to download the cert first
# The certificate in the keystore does not match with the one we want to be present
# The existing certificate must first be deleted before we insert the correct one
# Copyright (c) 2015, Adam Števko <adam.stevko@gmail.com>
# Merge additional attributes with the image manifest.
# Copyright (c) 2024, Lincoln Wallace (locnnil) <lincoln.wallace@canonical.com>
# Copyright (c) 2021, Alexei Znamensky (russoz) <russoz@gmail.com>
# Copyright (c) 2021, Marcus Rickert <marcus.rickert@web.de>
# Copyright (c) 2018, Stanislas Lange (angristan) <angristan@pm.me>
# Copyright (c) 2018, Victor Carceler <vcarceler@iespuigcastellar.xeill.net>
# if state=present there might be file names passed in 'name', in
# which case they must be converted to their actual snap names, which
# is done using the names_from_snaps() method calling 'snap info'.
# This needs to be "\n---" instead of just "---" because otherwise
# if a snap uses "---" in its description then that will incorrectly
# be interpreted as a separator between snaps in the output.
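A minimal sketch of that split: using "\n---" as the separator means a "---" appearing mid-line in a snap's description is not treated as a boundary (helper name is illustrative):

```python
def split_snap_info(output):
    """Split combined 'snap info' output into per-snap chunks (sketch).

    The separator is '\\n---' rather than '---' alone, so '---' embedded
    inside a description line is not mistaken for a snap boundary.
    """
    return output.split("\n---")
```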
# get base cmd parts
# Copyright (c) 2016, William L Thomson Jr
# Copyright (c) 2013, Yap Sok Ann
# Written by Yap Sok Ann <sokann@gmail.com>
# Modified by William L. Thomson Jr. <wlt@o-sinc.com>
# Based on apt module written by Matthew Williams <matthew@flowroute.com>
# Note: In the 3 functions below, package querying is done one-by-one,
# but emerge is done in one go. If that is not desirable, split the
# packages into multiple tasks instead of joining them together with
# comma.
# Check for SSH error with PORTAGE_BINHOST, since rc is still 0 despite the error
# Copyright (c) 2014, Gabe Mulley <gabe.mulley@gmail.com>
# Copyright (c) 2015, David Wittman <dwittman@gmail.com>
# Copyright (c) 2022, Marius Rieder <marius.rieder@scs.com>
# Check if we need to (re)install
# Check if we need to set the preference
# Check if we need to reset to auto
# Check if we need to uninstall
# Path takes precedence over family as it is more specific
# Run `update-alternatives --display <name>` to find existing alternatives
# Copyright (c) 2020, Zainab Alsaffar <Zainab.Alsaffar@mail.rit.edu>
# check if the user exists
# create a user account on PD
# delete a user account from PD
# print out the list of incidents
# get incidents assigned to a user
# add a user to a team/teams
# authenticate with PD API
# remove user
# in case the user does not exist
# add user with the default notification rule and contact info (email)
# get user's id
# add a user to the team/s
# Scaleway Serverless function management module
# Creation doesn't support `redeploy` parameter
# Create function
# Copyright (c) 2016 Dimension Data
# Not currently supported for deletes due to a bug in libcloud (module will error out if "wait" is specified when "state" is not "present").
# Bizarre bug in libcloud when checking status after delete; socket.error is too generic to catch in this context so for now we don't even try.
# Is configured prefix size greater than or less than the actual prefix size?
# Cannot change base address for private IPv4 network.
# Cannot shrink private IPv4 network (by increasing prefix size).
# Copyright (c) 2014, Steve <yo@groks.org>
# Copyright (C) 2018 Huawei
# the link will include Nones if required format parameters are missing
# Copyright (c) 2013, Johan Wiren <johan.wiren.se@gmail.com>
# retrieve all roles from the client scope
# retrieve all roles from the realm
# convert to indexed Dict by name
# update desired
# remove role if present
# no changes
# doing update
# doing delete
# Copyright (c) 2016, Tomas Karasek <tom.to.the.k@gmail.com>
# Copyright (c) 2016, Matt Baldwin <baldwin@stackpointcloud.com>
# Copyright (c) 2016, Thibaud Morel l'Horset <teebes@gmail.com>
# Also include each IP as a key for easier lookup in roles.
# Key names:
# - public_ipv4
# - public_ipv6
# - private_ipv4
# - private_ipv6 (if there is one)
# Packet doesn't give public ipv6 yet, but maybe one
# day they will
# hostname is a list-typed param, so I guess it should return list
# (and it does, in Ansible 2.2.1) but in order to be defensive,
# I keep here the code to convert an eventual string to list
# at this point, hostnames is a list
# states where we might create non-existing specified devices
# First do non-creation actions, it might be faster
# Finally, create missing devices
# def __new__(cls, *args, **kwargs):
# Make sure service_plan argument is defined
# Create network
# Copyright (c) 2017, Alberto Murillo <alberto.murillo.silva@intel.com>
# Fail if swupd is not found
# Initialize return values
# Copyright (c) 2021, Werner Dijkerman (ikben@werner-dijkerman.nl)
# else, there is no hook and we want there to be no hook
# Copyright (c) 2018 Genome Research Ltd.
# Note: although the py-consul implementation implies that using a key with a value of `None` with `put` has a special
# meaning (https://github.com/criteo/py-consul/blob/master/consul/api/kv.py), if not set in the subsequently API call,
# the value just defaults to an empty string (https://www.consul.io/api/kv.html#create-update-key)
# Existing value was not decodable but all values we set are valid utf-8
# To be able to call fail_json
# Shortcuts for the params
# Authentication for non-Jenkins calls
# Crumb
# Authentication for Jenkins calls
# Cookie jar for crumb session
# Get list of installed plugins
# Get the JSON data
# Parse the JSON data
# Compose default messages
# extend error message
# failed on all urls
# Get the URL data
# Check if we got valid data
# Create final list of installed/pined plugins
# Install the plugin (with dependencies)
# Send the installation request
# Fallback to manually downloading the plugin
# Check if the plugin directory exists
# Make the checksum of the currently installed plugin
# Install dependencies
# Take latest version
# Take specific version
# Download the plugin file directly
# Write downloaded plugin into file if checksums don't match
# No previously installed plugin
# Get data for the MD5
# Make new checksum
# If the checksum is different from the currently installed
# plugin, store the new plugin
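The checksum decision above could be sketched as follows; sha256 is used here purely for illustration (the module's actual hash choice may differ), and the helper name is hypothetical:

```python
import hashlib
import os


def plugin_needs_update(installed_path, downloaded_bytes):
    """Decide whether to overwrite an installed plugin file (sketch).

    Returns True when there is no previously installed plugin, or when the
    checksum of the installed file differs from the freshly downloaded data.
    """
    if not os.path.isfile(installed_path):
        # No previously installed plugin
        return True
    with open(installed_path, "rb") as f:
        installed_digest = hashlib.sha256(f.read()).hexdigest()
    new_digest = hashlib.sha256(downloaded_bytes).hexdigest()
    # Store the new plugin only if the checksums differ
    return installed_digest != new_digest
```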
# Check for update from the updates JSON file
# If the latest version changed, download it
# Change file attributes if needed
# Not sure how to run this in the check mode
# See the comment above
# Check if file is saved locally
# Save it to file for next time
# Get dependencies for the specified plugin version
# Download the updates file if needed
# Get the data
# Write the updates file
# Open the updates file
# Read only the second line
# Move the updates file to the right place if we could read it
# Check if we have the plugin data available
# Download the plugin
# Store the plugin into a temp file and then move it
# Move the file onto the right place
# Perform the action
# Check if the plugin is pinned/unpinned
# Module arguments
# Module settings
# Convert timeout to float
# Instantiate the JenkinsPlugin object
# Set version to latest if state is latest
# Set version to latest compatible version if version is latest
# Create some shortcuts
# Initial change state of the task
# Perform action depending on the requested state
# Print status of the change
# Copyright 2014 Peter Oliver <ansible@mavit.org.uk>
# search_after=dict(),
# search_before=dict(),
# Copyright (c) 2014, Steve Smith <ssmith@atlassian.com>
# Atlassian open-source approval reference OSR-76.
# Copyright (c) 2020, Per Abildgaard Toft <per@minfejl.dk> Search and update function
# Copyright (c) 2021, Brandon McNama <brandonmcnama@outlook.com> Issue attachment functionality
# Copyright (c) 2022, Hugo Prudente <hugo.kenshin+oss@gmail.com> Worklog functionality
# Merge in any additional or overridden fields
# if comment_visibility is specified restrict visibility
# Use 'fields' to merge in any additional data
# Find the transition id
# Perform it
# Ideally we'd just use prepare_multipart from ansible.module_utils.urls, but
# unfortunately it does not support specifying the encoding and also defaults to
# base64. Jira doesn't support base64 encoded attachments (and is therefore not
# spec compliant. Go figure). I originally wrote this function as an almost
# exact copypasta of prepare_multipart, but ran into some encoding issues when
# using the noop encoder. Hand rolling the entire message body seemed to work
# out much better.
# https://community.atlassian.com/t5/Jira-questions/Jira-dosen-t-decode-base64-attachment-request-REST-API/qaq-p/916427
# content is expected to be a base64 encoded string since Ansible doesn't
# support passing raw bytes objects.
# NOTE: fetch_url uses a password manager, which follows the
# standard request-then-challenge basic-auth semantics. However as
# JIRA allows some unauthorised operations it doesn't necessarily
# send the challenge, so the request occurs as the anonymous user,
# resulting in unexpected results. To work around this we manually
# inject the auth header up-front to ensure that JIRA treats
# the requests as authorized for this user.
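A sketch of the workaround: building the Basic Authorization header up-front rather than waiting for a 401 challenge that JIRA may never send (helper name is illustrative):

```python
import base64


def basic_auth_header(username, password):
    """Pre-build an Authorization header instead of relying on the
    request-then-challenge flow (sketch).

    Injecting the header up-front ensures the request is never treated
    as anonymous when the server skips the challenge.
    """
    token = base64.b64encode(("%s:%s" % (username, password)).encode("utf-8"))
    return {"Authorization": "Basic " + token.decode("ascii")}
```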
# Fallback print body, if it can't be decoded
# Copyright (c) 2013, Alexander Winkler <mail () winkler-alexander.de>
# based on svr4pkg by
# Find packages in the catalog which are not up to date
# Fail with an explicit error when trying to "install" '*'
# Build list of packages that are actually not installed from the ones requested
# If the package list is empty then all packages are already present
# When using latest for *
# Check for packages that are actually outdated
# If the package list comes up empty, everything is already up to date
# If there are packages to update, just empty the list and run the command without it
# pkgutil logic is to update all when run without package names
# Build list of packages that are either outdated or not installed
# If the package list is empty that means all packages are installed and up to date
# Build list of packages requested for removal that are actually present
# If the list is empty, no packages need to be removed
# pkgutil was not executed because the package was already present/absent/up to date
# Copyright (C) 2018 IBM CORPORATION
# required args
# Copyright (c) 2017-2018, Keller Fuchs <kellerfuchs@hashbang.sh>
# Shortcuts
# Exit early if the password is already valid
# Change the password (or throw an exception)
# Password successfully changed
# If state has changed, update vm_params.
# Module will exit with an error message if no VM is found.
# Set VM power state.
# Copyright (c) 2012, Matt Wright <matt@nobien.net>
# Copy and add to the arguments
# Try easy_install with the virtualenv directory first.
# easy_install should have been found by now.  The final call to
# get_bin_path will trigger fail_json.
# Copyright (c) 2023, Dominik Kukacka <dominik.kukacka@gmail.com>
# OTP password generated by FreeIPA is visible only for host_add command
# so, return directly from here.
# Copyright (c) 2018, Albert Autin
# If no node passed, use the first one (local)
# data will be in format 'command\toutput'
# some commands don't return in kv format
# so we don't want a dict from those.
# if only 1 element found, and not kv, return just the value.
# just checks to see if the version is 4.3 or greater
# if version <4.3 we can't use cluster-stable info cmd
# regex hack to check for versions beginning with 0-3 or
# beginning with 4.0,4.1,4.2
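The version gate could be sketched with the same regex hack the comments describe, rejecting versions beginning with 0-3 or with 4.0/4.1/4.2 (helper name is illustrative; like any prefix regex, it is only as precise as the hack it mirrors):

```python
import re


def supports_cluster_stable(version_string):
    """Return False for server versions below 4.3 (sketch of the regex hack).

    Versions beginning with 0-3, or with 4.0/4.1/4.2, predate the
    cluster-stable info command and are rejected.
    """
    return re.match(r"^([0-3]\.|4\.[0-2])", version_string) is None
```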
# if we are getting more than 1 size, let's say no
# unstable-cluster is returned in form of Exception
# These checks are outside of the while loop because
# we probably want to skip & sleep instead of failing entirely
# print("_has_migs")
# print(skip_reason)
# Get the "id" of the client based on the usually more human-readable
# "clientId"
# Get current state of the Authorization Scope using its name as the search
# filter. This returns False if it is not found.
# Generate a JSON payload for Keycloak Admin API. This is needed for
# "create" and "update" operations.
# Add "id" to payload for modify operations
# Ensure that undefined (null) optional parameters are presented as empty
# strings in the desired state. This makes comparisons with current state
# much easier.
# Do the above for the current state
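A minimal sketch of that normalization, applied to both desired and current state so the comparison becomes plain dict equality (helper and key names are illustrative):

```python
def normalize_optional(params, optional_keys):
    """Present undefined (null) optional parameters as empty strings (sketch).

    Running both the desired and the current state through this makes
    the subsequent comparison a straightforward equality check.
    """
    normalized = dict(params)
    for key in optional_keys:
        if normalized.get(key) is None:
            # Missing or null optional values become empty strings
            normalized[key] = ""
    return normalized
```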
# At this point we know we have to update the object anyways,
# so there's no need to do more work.
# Copyright (c) 2014, Sebastien Rohaut <sebastien.rohaut@gmail.com>
# Backup
# Tempfile
# Remove comment in line
# Found the line
# Change line only if value has changed
# Move tempfile to newfile
# Copyright (c) 2015, Paul Markham <pmarkham@netrefinery.com>
# Copyright (C) 2013, Peter Sprygada <sprygada@gmail.com>
# Copyright (c) 2020, Adam Vaughan (@adamvaughan) avaughan@pagerduty.com
# API documented at https://developer.pagerduty.com/docs/events-api-v2/send-change-events/
# Copyright (c) 2016, James Hogarth <james.hogarth@gmail.com>
# If feature is already in good state, just exit
# RC is not 0 for this already disabled feature, handle it as no change applied
# Copyright (c) 2020, Christian Wollinger <cwollinger@web.de>
# create NAPTR record with the given params
# create SRV record with the given params
# create A record with the given params
# check if the record exists via list on ipwcli
# check what happens if create fails on ipworks
# define result
# Copyright (c) 2019, Adam Goossens <adam.goossens@gmail.com>
# attributes in Keycloak have their values returned as lists
# using the API. attributes is a dict, so we'll transparently convert
# the values to lists.
# See if it already exists in Keycloak
# refresh
# Resolve server
# Attempt to change the server state, only if it is not already there
# or on its way.
# Make sure the server has reached the desired state
# Copyright (c) 2013, André Paramés <git@andreparames.com>
# Based on the Git module by Michael DeHaan <michael.dehaan@gmail.com>
# if there is no bzr configuration, do a branch operation
# else pull and switch the version
# we cloned or pulled
# For some unknown reason, while IPA returns the secret in base64,
# it wants the secret passed in as base32.  This makes it more difficult
# for comparison (does 'current' equal 'new').  Moreover, this may
# cause some subtle issue in a playbook as the output is encoded
# in a different way than if it was passed in as a parameter.  For
# these reasons, have the module standardize on base64 input (as parameter)
# and output (from IPA).
# Used to hold values that will be sanitized from output as no_log.
# For the case where secretkey is not specified at the module, but
# is passed back from IPA.
# Rename the IPA parameters to the more friendly ansible module names for them
# Change the type from IPA's list of string to the appropriate return value type
# based on field.  By default, assume they should be strings.
# For some unknown reason, the returns from IPA put almost all
# values in a list, even though passing them in a list (even of
# length 1) will be rejected.  The module values for all elements
# other than type (totp or hotp) have this happen.
# We stored the secret key in base32 since we had assumed that would need to
# be the format if we were contacting IPA to create it.  However, we are
# now comparing it against what is already set in the IPA server, so convert
# back to base64 for comparison.
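The base32-to-base64 round trip used for that comparison could be sketched as (helper name is illustrative):

```python
import base64


def base32_to_base64(secret_b32):
    """Convert a base32-stored secret back to base64 for comparison (sketch).

    The module standardizes on base64 input and output, so the base32 form
    kept for the IPA call is decoded to raw bytes and re-encoded before
    checking 'current' against 'new'.
    """
    raw = base64.b32decode(secret_b32)
    return base64.b64encode(raw).decode("ascii")
```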
# For the secret key, it is even more specific in that the key is returned
# in a dict, in the list, as the __base64__ entry for the IPA response.
# dict to map from ansible parameter names to attribute names
# used by IPA (which are not so friendly).
# Create inverse dictionary for mapping return values
# Check to see if the new unique id is already in use
# It would not make sense to have a rename after creation, so if the user
# specified a newuniqueid, just replace the uniqueid with the updated one
# before creation
# IPA wants the unique id in the first position and not as a key/value pair.
# Get rid of it from the otptoken dict and just specify it in the name field
# for otptoken_add.
# IPA will reject 'modifications' that do not actually modify anything
# if any of the unmodifiable elements are specified.  Explicitly
# get rid of them here.  They were not different, or else we
# would have failed out in validate_modifications.
# for otptoken_mod.
# Transform the output to use ansible keywords (not the IPA keywords) and
# sanitize any key values in the output.
# some old servers require this, also the sleep following send
# sending to room instead of user, need to join
# Copyright (c) 2019, Jon Ellis (@JonEllis) <ellis.jp@gmail.com>
# Keys consist of a protocol, the key data, and an optional comment.
# Copyright (c) 2015, Sebastian Kornehl <sebastian.kornehl@asideas.de>
# Import Datadog
# Prepare Datadog
# Check if api_key and app_key are correct or not
# if not, then fail here.
# Scaleway Container registry management module
# Create container registry
# if result != "paused":             # api output buggy - accept raw exception for now
# if result != "up":                 # api output buggy - accept raw exception for now
# Copyright (c) 2015, Benjamin Copeland (@bhcopeland) <ben@copeland.me.uk>
# TODO: Add RETURN documentation.
# items not found in the api
# Work out end date/time based on minutes
# start_date
# end_date
# user supplied a user token instead of account api token
# try to import dnsimple >= 2.0.0
# Let's figure out what operation we want to do
# No domain, return a list
# Domain & No record
# domain does not exist
# state is absent
# need the not-None check since record could be an empty string
# delete any records that have the same name and record type
# check if we need to update
# Make sure these record_ids either all exist or none do
# Copyright (c) 2014, Vedit Firat Arig <firatarig@gmail.com>
# Outline and parts are reused from Mark Theunissen's mysql_db module
# Copyright (c) 2017, Dag Wieers <dag@wieers.com>
# Handle errors
# Perform login first
# Store cookie for future requests
# Prepare request data
# Wrap the XML documents in a <root> element
# Handle each XML document separately in the same session
# Add cookie to XML
# Perform actual request
# Merge results with previous results
# Check for any changes
# NOTE: Unfortunately the IMC API always reports status as 'modified'
# Scaleway VPC management module
# search on next page if needed
# private network needs to be updated
# private network needs to be created
# Copyright (c) 2020, Lukas Bestle <project-ansible@lukasbestle.com>
# Copyright (c) 2017, Michael Heap <m@michaelheap.com>
# Initialize data properties
# Populated only if needed
# No error or dry run
# Is the `mas` tool available at all?
# Is the version recent enough?
# Only check this once per execution
# Checking if user is signed-in is disabled due to https://github.com/mas-cli/mas/issues/417
# Format: "123456789 App Name"
# Populate cache if not already done
# Run operations on the given app IDs
# Ensure we are root
# Upgrade all apps if requested
# Clear cache
# Exit with the collected data
# Copyright (c) 2015, Björn Andersson
# See if the identity file exists or not, relative to the config file
# Delete host from the configuration
# Update host in the configuration
# Make sure we set the permission
# Make sure the file is owned by the right user and group
# Copyright (c) 2024 Alexander Bakanovskii <skottttt228@gmail.com>
# Copyright (c) 2013, Andrew Dunham <andrew@du.nham.ca>
# Copyright (c) 2015, Indrajit Raychaudhuri <irc+code@indrajit.com>
# Copyright (c) 2024, Kit Ham <kitizz.devside@gmail.com>
# Stores validated arguments for an instance of an action.
# See DOCUMENTATION string for argument-specific information.
# Stores the state of a Homebrew service.
# type: (HomebrewServiceArgs, AnsibleModule) -> HomebrewServiceState
# type: (HomebrewServiceArgs, AnsibleModule, bool, Optional[str]) -> None
# type: (AnsibleModule) -> HomebrewServiceArgs
# type: (HomebrewServiceArgs, AnsibleModule) -> None
# Nothing to do, return early.
# Pre-validate arguments.
# Choose logic based on the desired state.
# Copyright (c) 2016, Kamil Szczygiel <kamil.szczygiel () intel.com>
# Copyright (c) 2013, Alexander Bulimov <lazywolf0@gmail.com>
# find mountpoint
# wipefs comes with util-linux package (as 'blockdev' & 'findmnt' above)
# that is ported to FreeBSD. The use of dd as a portable fallback is
# not doable here if it needs get_mountpoint() (to prevent corruption of
# a mounted filesystem), since 'findmnt' is not available on FreeBSD,
# even in util-linux port for this OS.
# Depending on the versions, xfs_info is able to get info from the
# device, whether it is mounted or not, or only if unmounted, or
# only if mounted, or not at all. For any version until now, it is
# able to query info from the mountpoint. So try it first, and use
# device as the last resort: it may or may not work.
# v0.20-rc1 use stderr
# v0.20-rc1 doesn't have --force parameter added in following version v3.12
# assume version is greater or equal to 3.12
# Looking for "	F2FS-tools: mkfs.f2fs Ver: 1.10.0 (2018-01-30)"
# mkfs.f2fs displays version since v1.2.0
# Since 1.9.0, mkfs.f2fs checks for overwrite before making the filesystem
# before that version the -f switch wasn't used
# expected: 'Info: sector size = 512'
# expected: 'Info: total FS sectors = 102400 (50 MB)'
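A sketch of parsing those two expected lines to derive the device size in bytes (helper name is illustrative):

```python
import re


def f2fs_device_size(dump_output):
    """Derive device size in bytes from dump.f2fs-style output (sketch).

    Expects lines like 'Info: sector size = 512' and
    'Info: total FS sectors = 102400 (50 MB)', and multiplies the two.
    """
    sector_size = int(re.search(r"sector size = (\d+)", dump_output).group(1))
    total_sectors = int(re.search(r"total FS sectors = (\d+)", dump_output).group(1))
    return sector_size * total_sectors
```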
# There is no "single command" to manipulate filesystems, so we map them all out and their options
# In case blkid/fstyp isn't able to identify an existing filesystem, device
# is considered as empty, then this existing filesystem would be overwritten
# even if force isn't enabled.
# create fs
# wipe fs signatures
# Get id of the client based on client_id
# Get current state of the permission using its name as the search
# Copyright (c) 2018, Dag Wieers (dagwieers) <dag@wieers.com>
# system = conn.get_system(name, token)
# Legacy Python that doesn't verify HTTPS certificates by default
# Handle target environment that doesn't support HTTPS verification
# result['system'] = system
# Turn it into a dictionary of dictionaries
# all_systems = conn.get_systems()
# result['systems'] = { system['name']: system for system in all_systems }
# Return a list of dictionaries
# Update existing entry
# Create a new entry
# Add interface properties
# Only save when the entry was changed
# Copyright (c) 2017, Vitaliy Zhhuta <zhhuta () gmail.com>
# inspired by Kamil Szczygiel <kamil.szczygiel () intel.com> influxdb_database module
# restore previous user
# Fix privileges wording
# check if the current grants are included in the desired ones
# check if the desired grants are included in the current ones
# Copyright (c) 2016, Hiroaki Nakamura <hnakamur@gmail.com>
# Copyright (c) 2020, Frank Dornheim <dornheim@posteo.de>
# ANSIBLE_LXD_DEFAULT_URL is a default value of the lxd endpoint
# PROFILE_STATES is a list of supported states
# CONFIG_PARAMS is a list of config attribute names.
# get node or create one
# merge or copy the sections from the existing profile to 'config'
# merge or copy the sections from the ansible-task to 'config'
# upload config to lxd
# Copyright (c) 2014, Dimitrios Tydeas Mengidis <tydeas.dr@gmail.com>
# get all available options from a composer command using composer help to json
# Get composer command with fallback to default
# Default options
# Composer versions > 1.0.0-alpha9 now use stderr for standard notification messages
# Copyright (c) 2014, Anders Ingemann <aim@secoya.dk>
# File not found, non-fatal
# Convert to json
# Remove obsolete custom params
# Capacity
# Strategy
# Scaling
# Third party integrations
# Compute
# Multai
# Scheduling
# Only put product on group creation
# Handle primitive fields
# Retrieve creds file variables
# End of creds file retrieval
# Remove 'connectionInfo' from comparison, since it is not possible to validate it.
# Copyright (c) 2015, Tim Hoiberg <tim.hoiberg@gmail.com>
# Copyright (c) 2016, Jiangge Zhang <tonyseek@gmail.com>
# Redis module specific support methods.
# The passed client has been connected to the database already
# Replica Command section -----------
# Check if we have all the data
# Only need data if we want to be replica
# Connect and check
# Check if we are already in the mode that we want
# Do the stuff
# (Check check_mode before running commands so the commands aren't
# evaluated unnecessarily)
# flush Command section -----------
# Flush never fails :)
# try to parse the value as if it were the memory size
# Scaleway IP management module
# IP is assigned to a server
# IP is unassigned from a server
# IP is migrated between 2 different servers
# Create IP
# param_value may be of type xmlrpc.client.DateTime,
# which is not simply convertible to str.
# Use 'value' attr to get the str value,
# following an example in xmlrpc.client.DateTime document
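A minimal sketch of that conversion, using the documented `value` attribute of `xmlrpc.client.DateTime` (helper name is illustrative):

```python
from xmlrpc.client import DateTime


def to_str(param_value):
    """Convert a parameter that may be xmlrpc.client.DateTime to str (sketch).

    DateTime is not simply convertible, so its 'value' attribute is used,
    following the example in the xmlrpc.client.DateTime documentation.
    """
    if isinstance(param_value, DateTime):
        # 'value' holds the ISO 8601 timestamp string
        return str(param_value.value)
    return str(param_value)
```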
# We only have one host, so just return its entry
# Copyright (c) 2018, Mikhail Gordeev
# make args list to use in concatenation
# first add to validate repository
# we don't want to return the same thing twice
# Add the objectClass into the list of attributes
# Load attributes
# Instantiate the LdapEntry object
# Get the action function
# Copyright (c) 2016, Adam Števko <adam.stevko@gmail.com>
# On FreeBSD, we exclude currently mounted BE on /, as it is
# special and can be activated even if it is mounted. That is not
# possible with non-root BEs.
# beadm on FreeBSD and Solarish systems differs in delete behaviour in
# that we are not allowed to delete activated BE on FreeBSD while on
# Solarish systems we cannot delete BE if it is mounted. We add mount
# check for both platforms as BE should be explicitly unmounted before
# being deleted. On FreeBSD, we also check if the BE is activated.
# On FreeBSD, beadm is unable to activate mounted BEs, so we add
# an explicit check for that case.
# Resource to modify
# Copyright (c) 2014, Ahti Kitsik <ak@ahtik.com>
# Copyright (c) 2014, Jarno Keskikangas <jarno.keskikangas@gmail.com>
# Copyright (c) 2013, Aleksey Ovcharenko <aleksey.ovcharenko@gmail.com>
# Copyright (c) 2013, James Martin <jmartin@basho.com>
# Mutual exclusivity with `interface` implied by `required_by`.
# Convert version to numbers
# Ensure ufw is available
# Save the pre state and rules in order to recognize changes
# Execute filter
# "active" would also match "inactive", hence the space
# Rules are constructed according to the long format
# ufw [--dry-run] [route] [delete | insert NUM] allow|deny|reject|limit [in|out on INTERFACE] [log|log-all] \
# ufw does not like it when the insert number is larger than the
# maximal rule number for IPv4/IPv6.
# comment is supported only in ufw version after 0.35
# ufw dry-run doesn't send all rules so we have to compare ipv4 or ipv6 rules
# Get the new state
# Copyright (c) 2017, Red Hat Inc.
# Idempotency: it is okay if the file doesn't exist
# Build handler configuration from module arguments
# Load the current config, if there is one, so we can compare
# File either doesn't exist or it is invalid JSON
# Config is the same, let's not change anything
# Validate that directory exists before trying to write to it
# Copyright (c) 2016, Mathieu Bultel <mbultel@redhat.com>
# something went wrong; emit the msg
# Verify that the platform supports atomic command
# Copyright (c) 2023, Alexei Znamensky
# Copyright (c) 2016-2023, Vlad Glagolev <scm@vaygr.net>
# Check if the service is already enabled/disabled
# Scaleway Serverless container registry info module
# @TODO change to int?
# Scaleway Compute management module
# We don't want a public ip
# IP is only attached to the instance and is released as soon as the instance terminates
# We check that the IP we want to attach exists, if so its ID is returned
# A server MUST be stopped to be deleted.
# Only the name attribute is accepted in the Compute query API
# When you are working with a dict, only the ID matters, as we ask the user to put only the resource ID in the playbook
# For other structures, simply compare the two objects' content
# Setting all keys to current values except ID
# Setting ID to the user specified ID
# IP parameters of the desired server depend on the configuration
# Extra logic necessary because vgchange returns error when autoactivation is already set
# If there is a missing pv on the machine, some versions of pvresize indicate failure via rc.
# If there is a missing pv on the machine, pvchange rc indicates failure.
# LVM always uses real paths not symlinks so replace symlinks with actual path
# check given devices
# get pv list
# check pv for devices
# create VG
# create PV
# remove VG
# activate/deactivate existing VG
# reset VG uuid
# resize VG
# add PV to our VG
# remove some PV from our VG
# ============================================
# DNSMadeEasy module specific support methods.
# ["domain_name"] => ID
# ["record_name"] => ID
# ["record_ID"] => <record>
# ["contactList_name"] => ID
# Lookup the domain ID if passed as a domain name vs. ID
# Try to find a single record matching this one.
# How we do this depends on the type of record. For instance, there
# can be several MX records for a single record_name while there can
# only be a single CNAME for a particular record_name. Note also that
# there can be several records with different types for a single name.
# Get all the records if not already cached
# Note that TXT records are surrounded by quotes in the API response.
# @TODO cache this call so it is executed only once per ansible execution
# iterate over e.g. self.getDomains() || self.getRecords()
# e.g. self.domain_map || self.record_map
# e.g. self.domains || self.records
# @TODO update the cache w/ resultant record + id when implemented
# @TODO remove record from the cache when implemented
# Follow Keyword Controlled Behavior
# Fetch existing record + Build new one
# Special handling for mx record
# Special handling for SRV records
# Fetch existing monitor if the A record indicates it should exist and build the new monitor
# Build the new monitor
# The API requires protocol to be a numeric in the range 1-6
# The API requires sensitivity to be a numeric of 8, 5, or 3
# The module accepts either the name or the id of the contact list
# The module option names match the API field names
# Compare new record against existing one
# Remove leading and trailing quote character from values because TXT records
# are surrounded by quotes.
# return the record if no value is specified
# create record and monitor as the record does not exist
# update the record
# return the record (no changes)
# delete the record (and the monitor/failover) if it exists
# record does not exist, return w/o change.
# At minimum we need account and key
# If we have a record return info on that record
# If we have the account only and domain, return records for the domain
# If we have the account only, return domains
# we check error message for a pattern, so we need to make sure that's in C locale
# copy the master args
# One status line may look like one of these two:
# process not in group:
# process in group:
# If there is ':', this process must be in a group.
# from this point onwards, if there are no matching processes, module cannot go on.
# Copyright (c) 2015, Werner Dijkerman (ikben@werner-dijkerman.nl)
# Because we have already called userExists in main()
# add avatar to group
# Filter out None values
# When group/user exists, object will be stored in self.group_object.
# Define default group_path based on group_name
# Copyright (c) 2016, Alain Dejoux <adejoux@djouxtech.net>
# Add echo command when running in check-mode
# check if system commands are available
# Calculate pp size and round it up based on pp size.
# change lv allocation policy
# From here the last remaining action is to resize it; if no size parameter is passed we do nothing.
# Copyright (c) 2017 John Kwiatkoski (@JayKayy) <jkwiat40@gmail.com>
# Copyright (c) 2018 Alexander Bethke (@oolongbrothers) <oolongbrothers@gmx.net>
# pylint: disable=global-variable-not-assigned
# This is a difficult function, since if the user supplies a flatpakref url,
# we have to rely on a naming convention:
# The flatpakref file name needs to match the flatpak name
# Try running flatpak list with columns feature
# Probably flatpak before 1.2
# Probably flatpak >= 1.2
# For guidelines on application IDs, refer to the following resources:
# Flatpak:
# https://docs.flatpak.org/en/latest/conventions.html#application-ids
# Flathub:
# https://docs.flathub.org/docs/for-app-authors/requirements#application-id
# This module supports check mode
# If the binary was not found, fail the operation
# Copyright (c) 2022, Gregory Furlong <gnfzdz@fzdz.io>
# Look through all response pages in search of the deploy key we need
# Check parameters
# Retrieve access token for authorized API requests
# Retrieve existing deploy key (if any)
# Create a new deploy key in case it doesn't exist
# Update deploy key if the old value does not match the new one
# Bitbucket doesn't support update key for the same label,
# so we need to delete the old one first
# Delete deploy key
# Copyright (c) 2016, Ryan Scott Brown <ryansb@redhat.com>
# gather some facts about the deployment
# first delete so clientscopes can change type
# Copyright 2013 Matt Coddington <coddington@gmail.com>
# build list of params
# If we're in check mode, just exit pretending like we succeeded
# Send the data to New Relic
# Build the common request body
# Insert state-specific attributes to body
# Build the deployment object we return
# Send the data to bigpanda
# Ansible Specific Variables
# Module will fail if the response is not 200
# Copyright (c) 2020, Datadog, Inc
# Validate api and app keys
# Copyright (c) 2016, Kenneth D. Evensen <kevensen@redhat.com>
# Copyright (c) 2017, Abhijeet Kasurde <akasurde@redhat.com>
# Copyright (c) 2020, Pavlo Bashynskyi (@levonet) <levonet@gmail.com>
# Copyright 2012 Dag Wieers <dag@wieers.com>
# Suppress warnings from hpilo
# TODO: Count number of CPUs, DIMMs and total memory
# BIOS Information
# System Information
# Embedded NIC MAC Assignment
# HPQ NIC iSCSI MAC Info
# Embedded NIC MAC Assignment (Alternate data format)
# Collect health (RAM/CPU data)
# RAM as reported by iLO 2.10 on ProLiant BL460c Gen8
# reformat into a text friendly format
# Report host state
# clientSecret returned by API when using `get_identity_provider(alias, realm)` is always **********
# to detect changes to the secret, we get the actual cleartext secret from the full realm info
# Filter and map the parameter names that apply to the identity provider.
# special handling of mappers list to allow change detection
# eventually this holds all desired mappers, unchanged, modified and newly added
# ensure idempotency in case module.params.mappers is not sorted by name
# only update existing if there is a change
# Process a deletion
# Remove possible separator from MAC address
# If we don't end up with 12 hexadecimal characters, fail
# Test if it converts to an integer, otherwise fail
# Create payload for magic packet
# Broadcast payload to network
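The steps above (strip separators, validate 12 hex digits, build the 6x 0xFF plus 16x MAC payload, broadcast over UDP) can be sketched as follows. This is a minimal illustration, not the module's code; the function names and the port 9 default are assumptions:

```python
import socket

def build_magic_packet(mac):
    # Remove possible separators from the MAC address
    mac = mac.replace(":", "").replace("-", "").replace(".", "")
    # If we don't end up with 12 hexadecimal characters, fail
    if len(mac) != 12:
        raise ValueError("incorrect MAC address length")
    # Test if it converts to an integer, otherwise fail
    int(mac, 16)
    # Magic packet payload: 6 bytes of 0xFF followed by the MAC repeated 16 times
    return b"\xff" * 6 + bytes.fromhex(mac) * 16

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    # Broadcast the payload to the network over UDP
    payload = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))
```

A well-formed magic packet is always 102 bytes, which makes the builder easy to check in isolation.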
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# along with Ansible. If not, see http://www.gnu.org/licenses/.
# Check if the locale is not found in any of the matches
# locale may be installed but not listed in the file, for example C.UTF-8 in some systems
# Write the modified content back to the file
# Create locale.
# Ubuntu's patched locale-gen automatically adds the new locale to /var/lib/locales/supported.d/local
# Delete locale involves discarding the locale from /var/lib/locales/supported.d/local and regenerating all locales.
# Purge locales and regenerate.
# Please provide a patch if you know how to avoid regenerating the locales to keep!
# PubNub Real-time Cloud-Hosted Push API and Push Notification Client
# Frameworks
# Copyright (C) 2016 PubNub Inc.
# http://www.pubnub.com/
# http://www.pubnub.com/terms
# Import PubNub BLOCKS client.
# Report an error because the block doesn't exist and at the same time
# a start/stop was requested.
# Update block information if required.
# Prepare payload for event handler update.
# Create event handler if required.
# Update event handler if required.
# Authorize user.
# Initialize PubNub account instance.
# Try to fetch the application with which the module should work.
# Try to fetch the keyset with which the module should work.
# Try to fetch the block with which the module should work.
# Check whether block should be removed or not.
# Process event changes to event handlers.
# Update block operation state if required.
# Save current account state.
# Report module execution results.
# Copyright (c) 2014-2015, Epic Games, Inc.
# topics was introduced in gitlab >= 14 and replaces tag_list. We get the current gitlab version
# and check if it is less than 14. If so we use tag_list instead of topics
# add avatar to project
# When project exists, object will be stored in self.project_object.
# Set project_path to project_name if it is empty.
# Gather facts.
# priority can only be an integer from 0 to 999
# data value must be max 250 chars
# record value must be max 250 chars
# relative isn't used for all record types
# if any of the above failed then fail early
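The fail-early validation described above can be sketched like this; the function name, parameter names, and the set of record types that accept `relative` are assumptions for illustration, not the module's actual API:

```python
def validate_args(record_type, priority, data, relative):
    failed = []
    # priority can only be an integer from 0 to 999
    if not isinstance(priority, int) or not 0 <= priority <= 999:
        failed.append("priority must be an integer from 0 to 999")
    # data/record value must be at most 250 characters
    if len(data) > 250:
        failed.append("data must be 250 characters or fewer")
    # relative isn't used for all record types (the set below is assumed)
    if relative is not None and record_type not in ("CNAME", "MX", "NS", "SRV"):
        failed.append("relative is only valid for CNAME/MX/NS/SRV records")
    # if any of the above failed, the caller fails early with the collected messages
    return failed
```

Collecting all failures before exiting lets the user fix every invalid parameter in one pass instead of one per run.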
# assemble the new record.
# if we have any matches, update them.
# record exists, add ID to payload.
# nothing to do; record is already correct so we populate
# the return var with the existing record's details.
# merge dicts ensuring we change any updated values
# return the new record to the user in the returned var.
# empty msg as we don't want to return a boatload of json to the user.
# no record found, so we need to create it
# populate the return var with the new record's details.
# if we have any matches, delete them.
# get a list of all records (as we can't limit records by zone)
# find any matching records
# perform some Memset API-specific validation
# Copyright (c) 2017, Steven Bambling <smbambling@gmail.com>
# Remove keys with None value
# Test if silence exists before clearing
# If check/subscription doesn't exist
# exit with changed state of False
# module.check_mode is inherited from the AnsibleModule class
# Copyright (c) 2020, FERREIRA Christophe <christophe.ferreira@cnaf.fr>
# PROJECTS_STATES is a list for states supported
# no need to call api if merged config is the same
# as old config
# Copyright (c) 2016, 2017 Jasper Lievisse Adriaanse <j@jasper.la>
# Shortcut for the imgadm(1M) command. While imgadm(1M) supports a
# -E option to return any errors in JSON, the generated JSON does not play well
# with the JSON parsers of Python. The returned message contains '\n' as part of
# the stacktrace, which breaks the parsers.
# Since there are a number of (natural) aliases, prevent having to look
# them up every time we operate on `state`.
# Perform basic UUID validation upfront.
# Helper method to massage stderr
# There is no feedback from imgadm(1M) to determine if anything
# was actually changed. So treat this as an 'always-changes' operation.
# Note that 'imgadm -v' produces unparsable JSON...
# Check the various responses.
# Note that trying to add a source with the wrong type is handled
# above as it results in a non-zero status.
# Type is ignored by imgadm(1M) here
# Unconditionally pass '--force', otherwise we're prompted with 'y/N'
# Even if the 'rc' was non-zero (3), we handled the situation
# in order to determine if there was a change.
# This module relies largely on imgadm(1M) to enforce idempotency, which does not
# provide a "noop" (or equivalent) mode to do a dry-run.
# Either manage sources or images.
# Make sure we operate on a single image for the following actions
# Copyright (c) 2015, Marius Gedminas <marius@pov.lt>
# Copyright (c) 2016, Matthew Gamble <git@matthewgamble.net>
# We check error message for a pattern, so we need to make sure the messages appear in the form we're expecting.
# Set the locale to C to ensure consistent messages.
# If the return code is 1, it just means the option hasn't been set yet, which is fine.
# Until this point, the git config was just read and in case no change is needed, the module has already exited.
# Run from root directory to avoid accidentally picking up any local config settings
# include source '-' so that creation-only properties are not removed
# to avoid errors when the dataset already exists and the property is not changed
# this scenario is most likely when the same playbook is run more than once
# Add alias for enhanced sharing properties
# Strip last newline
# Reverse the boolification of zfs properties
# find all alerts to add to the profile
# we do this first to fail early if one is missing.
# build the profile dict to send to the server
# send it to the server
# now that it has been created, we can assign the alerts
# we need to use client.get to query the alert definitions
# figure out which alerts we need to assign / unassign
# alerts listed by the user:
# alerts which currently exist in the profile
# we use get_alert_href to have a direct href to the alert
# no alerts in this profile
# assign / unassign the alerts, if needed
# update other properties
# mode needs to be updated
# check if notes need to be updated
# if we have any updated values
# we need to add or update the alert profile
# a profile with this name doesn't exist yet, let's create it
# a profile with this name exists, we might need to update it
# this alert profile should not exist
# if we have an alert profile with this name, delete it
# This alert profile does not exist in ManageIQ, and that's okay
# Scaleway Serverless container namespace info module
# Copyright (c) 2021, Álvaro Torres Cogollo
# Copyright (c) 2012, Afterburn <https://github.com/afterburn>
# Copyright (c) 2013, Aaron Bull Schaefer <aaron@elasticdog.com>
# Copyright (c) 2015, Jonathan Lestrelin <jonathan.lestrelin@gmail.com>
# get the version installed locally (if any)
# get the version in the repository
# Return True to indicate that the package is installed locally,
# and the result of the version number comparison
# to determine if the package is up-to-date.
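The return convention described above (installed locally, plus a version comparison for up-to-dateness) can be sketched as a pure function; in the real module the two versions come from shelling out to the package tool, and this signature is only illustrative:

```python
def query_package(local_version, repo_version):
    # Return (installed, up_to_date): whether the package is installed
    # locally at all, and whether the installed version matches the
    # version currently in the repository.
    if local_version is None:
        return False, False
    return True, local_version == repo_version
```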
# Preparing prompts answer according to item type
# If the current item is a dict then we expect its key to be the prompt regex and its value to be the answer
# We also expect here that the dict only has ONE key and the first key will be taken
# if the package is installed and state == present
# or state == latest and is up-to-date then skip
# Get pv list.
# Check if pv exists and is free.
# Disk None, looks free.
# Check if PV is not already in use by Oracle ASM.
# Check if PV is already in use for the same vg.
# Command option parameters.
# Validate if PV are not already in use.
# Volume group extension.
# Volume group creation.
# Define pvs_to_remove (list of physical volumes to be removed).
# Remove the VG if no pvs are provided.
# Remark: AIX permits removal only if the VG has no LVs.
# Reduce volume group.
# Copyright (c) 2017, 2018 Kairo Araujo <kairo@kairo.eti.br>
# change attributes on device
# discovery devices (cfgmgr)
# run cfgmgr on specific device
# Remove device
# Copyright (c) 2019, George Rawlinson <george@rawlinson.net.nz>
# obtain binary paths for gpg & pacman-key
# obtain module parameters
# sanitise key ID & check if key exists in the keyring
# check mode
# Copyright (c) 2020, Silvie Chlupova <schlupov@redhat.com>
# Copyright (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# Copyright (c) 2015, Ales Nosek <anosek.nosek () gmail.com>
# deduplicate entries in values
# ini file could be empty
# last line of file may not contain a trailing newline
# append fake section lines to simplify the logic
# At top:
# Fake random section so as not to match any other in the file
# Using commit hash as fake section name
# Insert it at the beginning
# At bottom:
# If no section is defined, fake section is used
# end of section:
# look for another section
# find start and end of section
# Keep track of changed section_lines
# Determine whether to consider using commented out/inactive options or only active ones
# handling multiple instances of option=value when state is 'present' with/without exclusive is a bit complex
# 1. edit all lines where we have an option=value pair with a matching value in values[]
# 2. edit all the remaining lines where we have a matching option
# 3. delete remaining lines where we have a matching option
# 4. insert missing option line(s) at the end of the section
# replace existing option with no value line(s)
# replace existing option=value line(s)
# override an option with no value with an option=value line if allow_no_value is not set
# remove all remaining option occurrences from the rest of the section
# insert missing option line(s) at the end of the section
# search backwards for previous non-blank or non-comment line
# insert option line(s)
# items are added backwards, so traverse the list backwards to not confuse the user
# otherwise some of their options might appear in reverse order for whatever fancy reason ¯\_(ツ)_/¯
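The backwards traversal mentioned above can be shown in isolation: repeatedly inserting at a fixed index prepends each item before the previous one, so iterating the new lines in reverse preserves their order. A minimal illustration (hypothetical helper, not the module's code):

```python
def insert_option_lines(section_lines, index, new_lines):
    # list.insert at a fixed index places each new item before the one
    # inserted previously, so traverse the new lines backwards to keep
    # their original order intact in the result
    for line in reversed(new_lines):
        section_lines.insert(index, line)
    return section_lines
```

Inserting forward at the same index would emit the options in reverse order, which is exactly the surprise the comment above warns about.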
# insert option=value line
# insert option with no value line
# insert option with no value line(s)
# delete all option line(s) with given option and ignore value
# delete specified option=value line(s)
# drop the entire section
# reassemble the ini_lines after manipulation
# remove the fake section line
# Copyright (c) 2018, Stephan Schwarz <stearz@gmx.de>
# This is needed because the bool value only accepts int values in the backend
# Copyright (c) 2021-2022 Hewlett Packard Enterprise, Inc. All rights reserved.
# Copyright (c) 2013, James Martin <jmartin@basho.com>, Drew Kerrigan <dkerrigan@basho.com>
# make sure riak commands are on the path
# here we attempt to load those stats,
# this could take a while; we recommend running in async mode
# Copyright (c) 2018, Juergen Wiebe <wiebe@e-spirit.com>
# Copyright (c) 2018, Milan Ilic <milani@nordeus.com>
# Filter -2 means fetch all images user can Use
# if the specific name is indicated
# Check that the command is valid
# TODO: Disabled RETURN as it is breaking the build for docs. Needs to be fixed.
# TODO: get this info from API
# Check if OS or Image Template is provided (Can't be both, defaults to OS)
# Blank out disks since it will use the template
# Copyright (c) 2016, Dag Wieers (@dagwieers) <dag@wieers.com>
# Add missing entries (backward compatible)
# Make backward compatible
# Beware that records comprise a string representation of the file_type
# Modify existing entry
# Add missing entry
# Modify existing path substitution entry
# Add missing path substitution entry
# Remove existing entry
# Remove existing path substitution entry
# Copyright (c) 2014, GeekChimp - Franck Nijhof <franck@geekchimp.com> (DO NOT CONTACT!)
# Copyright (c) 2019, Ansible project
# Copyright (c) 2019, Abhijeet Kasurde <akasurde@redhat.com>
# exceptions --------------------------------------------------------------- {{{
# /exceptions -------------------------------------------------------------- }}}
# class MacDefaults -------------------------------------------------------- {{{
# init ---------------------------------------------------------------- {{{
# Initial var for storing current defaults value
# Try to find the defaults executable
# Ensure the value is the correct type
# /init --------------------------------------------------------------- }}}
# tools --------------------------------------------------------------- {{{
# Split output of defaults. Every line contains a value
# Remove first and last item, those are not actual values
# Remove spaces at beginning and comma (,) at the end, unquote and unescape double quotes
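The parsing steps above (split per line, drop the surrounding parentheses, strip leading spaces and trailing commas, unquote and unescape) can be sketched as follows; this is a simplified reading of `defaults read` array output, and the helper name is hypothetical:

```python
def parse_defaults_array(output):
    # `defaults read` prints an array roughly as:
    # (
    #     "value one",
    #     "value two"
    # )
    lines = output.strip().splitlines()
    # Remove first and last items; those are the parentheses, not actual values
    lines = lines[1:-1]
    values = []
    for line in lines:
        # Remove spaces at the beginning and the comma at the end,
        # then unquote and unescape double quotes
        value = line.strip().rstrip(",").strip()
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1].replace('\\"', '"')
        values.append(value)
    return values
```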
# /tools -------------------------------------------------------------- }}}
# commands ------------------------------------------------------------ {{{
# First try to find out the type
# If RC is 1, the key does not exist
# If the RC is not 0, then something terrible happened! Ooooh nooo!
# Ok, lets parse the type from output
# Now get the current value
# Strip output
# A non zero RC at this point is kinda strange...
# Convert string to list when type is array
# Store the current_value
# We need to convert some values so the defaults commandline understands it
# When the type is array and array_add is enabled, morph the type :)
# All values should be a list, for easy passing it to the command
# /commands ----------------------------------------------------------- }}}
# run ----------------------------------------------------------------- {{{
# Get the current value from defaults
# Handle absent state
# Check if there is a type mismatch, e.g. given type does not match the type in defaults
# Current value matches the given value. Nothing needs to be done. Arrays need extra care
# Change/Create/Set given key/value for domain in defaults
# /run ---------------------------------------------------------------- }}}
# /class MacDefaults ------------------------------------------------------ }}}
# main -------------------------------------------------------------------- {{{
# /main ------------------------------------------------------------------- }}}
# there might be multiple reasons for a 422
# so we must check if the reason is that the key already exists
# to forcefully modify an existing key, the existing key must be deleted first
# Copyright (c) 2016, Jonathan Mainguy <jon@soh.re>
# basis of code taken from the ansible twilio and nexmo modules
# Hack module params to have the Basic auth params that fetch_url expects
# cn is returned as list even with only a single value.
# client.modify_if_diff does not work as each option must be removed/added by its own
# Copyright (c) 2020, VMware, Inc. All Rights Reserved.
# In standard ISO interchange level 1, file names have a maximum of 8 characters, followed by a required dot,
# followed by a maximum 3 character extension, followed by a semicolon and a version
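The level-1 naming rule described above can be expressed as a simple check. This is a sketch: the allowed character set (A-Z, 0-9, "_", the classic ISO 9660 d-characters) and the requirement of a version suffix are assumptions drawn from the comment, not the module's exact validation:

```python
import re

# ISO 9660 interchange level 1: at most 8 name characters, a required dot,
# at most 3 extension characters, then a semicolon and a version number.
LEVEL1_NAME = re.compile(r"^[A-Z0-9_]{1,8}\.[A-Z0-9_]{0,3};[0-9]+$")

def is_level1_filename(name):
    # Return True when the file name satisfies the level-1 constraints
    return LEVEL1_NAME.match(name) is not None
```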
# will create intermediate dir for new ISO file
# if specify a dir then go through the dir to add files and dirs
# get dir list and file list
# if specify a file then add this file directly to the '/' path in ISO
# Copyright (c) 2019, Nurfet Becirevic <nurfet.becirevic@gmail.com>
# Copyright (c) 2017, Tomas Karasek <tom.to.the.k@gmail.com>
# Don't reattach volume which is attached to a different device.
# Rather fail than force remove a device on state == 'present'.
# started out with AWX's scan_packages module
# query resource id, fail if resource does not exist
# assign or unassign the tags
# Copyright (c) 2014, Justin Lecher <jlec@gentoo.org>
# A repo can be uniquely identified by an alias + url
# exit code 6 is ZYPPER_EXIT_NO_REPOS (no repositories defined)
# look for repos that have matching alias or url to the one searched
# Repo does not exist yet
# Found an existing repo, look for changes
# Found two repos and want to overwrite_multiple
# priority on addrepo available since 1.12.25
# https://github.com/openSUSE/zypper/blob/b9b3cb6db76c47dc4c47e26f6a4d2d4a0d12b06d/package/zypper.changes#L327-L336
# gpgcheck available since 1.6.2
# https://github.com/openSUSE/zypper/blob/b9b3cb6db76c47dc4c47e26f6a4d2d4a0d12b06d/package/zypper.changes#L2446-L2449
# the default changed in the past, so don't assume a default here and show warning for old zypper versions
# rewrite bools in the language that zypper lr -x provides for easier comparison
# Check run-time module parameters
# Download / Open and parse .repo file to ensure idempotency
# No support for .repo file with zero or more than one repository
# Only proceed if at least baseurl is available
# Set alias (name) and url based on values from .repo file
# If gpgkey is part of the .repo file, auto import key
# Map additional values, if available
# Use check_rc = False because
# if no tickets are present, the klist command always returns rc = 1
# miq_expression is a field that needs a special case, because
# it is returned surrounded by a dict named exp even though we don't
# send it with that dict.
# hash expressions must have the following fields
# hash expression supports depends on https://github.com/ManageIQ/manageiq-api/pull/76
# actually miq_expression, but we call it "expression" for backwards-compatibility
# build the alert
# add the actual expression.
# Running on an older version of ManageIQ and trying to create a hash expression
# no change needed - alerts are identical
# make sure that the update was indeed successful by comparing
# the result to the expected result.
# success!
# unexpected result
# Running on an older version of ManageIQ and trying to update a hash expression
# we need to add or update the alert
# an alert with this description doesn't exist yet, let's create it
# an alert with this description exists, we might need to update it
# this alert should not exist
# if we have an alert with this description, delete it
# it doesn't exist, and that's okay
# Copyright (c) 2016 Michael Gruener <michael.gruener@chaosmoon.net>
# Forbidden
# Too many requests
# Method not allowed
# Unsupported Media Type
# Bad Request
# Without a valid/parsed JSON response no more error processing can be done
# strip "page" parameter from call parameters (if there are any)
# necessary because None as value means to override user
# set module value
# there can only be one CNAME per record
# ignoring the value when searching for existing
# CNAME records allows us to update the value if it changes
# in theory this should be impossible as cloudflare does not allow
# the creation of duplicate records, but let's cover it anyway
# As Cloudflare API cannot filter record containing quotes
# CAA records must be compared locally
# record already exists, check if it must be updated
# sanity checks
# perform add, delete or update (only the TTL can be updated) of one or
# more records
# delete all records matching record name + type
# force solo to False, just to be sure
# Copyright (c) 2016, Hugh Ma <Hugh.Ma@flextronics.com>
# Get Initial CSRF
# Make Header Dictionary with initial CSRF
# Endpoint to get final authentication header
# Get Final CSRF and Session ID
# If state is present, but host exists, need force_install flag to put host back into install state
# If state is present, but host exists, and force_install is false, do nothing
# Otherwise, state is present, but host doesn't exist; require more params to add host
# If state is absent, and host exists, let's remove it.
# Copyright (c) 2024, Björn Bösel <bjoernboesel@gmail.com>
# This will include the current state of the component if it is already
# Keycloak-compatible format (camel-case). For example private_key
# becomes privateKey.
# It also converts bool, str and int parameters into lists with a single
# entry of 'str' type. Bool values are also lowercased. This is required
# by Keycloak.
# No need for camelcase in here as these are one word parameters
# Get a list of all Keycloak components that are of keyprovider type.
# If this component is present get its key ID. Confusingly the key ID is
# This tells Ansible whether the key was changed (added, removed, modified)
# name matches the value of the name parameter then assume the key is
# Compare parameters under the "config" key
# the key).
# Copyright (c) 2014, Peter Oliver <ansible@mavit.org.uk>
# pkg(5) FMRIs include a comma before the release number, but
# AnsibleModule will have split this into multiple items for us.
# Try to spot where this has happened and fix it.
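The re-joining heuristic described above can be sketched like this: a list item that starts with a digit, following an item that contains `@` (i.e. an FMRI with a version), is assumed to be the severed release-number fragment. The function name and exact detection rule are illustrative assumptions; the module's own heuristic may differ:

```python
def rejoin_fmris(items):
    # An FMRI like "web/curl@7.64,5.11-0.175" contains a comma, so a
    # comma-split argument list yields a fragment starting with a digit
    # that belongs to the previous item; glue such fragments back on.
    fixed = []
    for item in items:
        if fixed and item and item[0].isdigit() and "@" in fixed[-1]:
            fixed[-1] += "," + item
        else:
            fixed.append(item)
    return fixed
```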
# Verify that the platform is atomic host
# Copyright (c) 2021, Christian Wollinger <cwollinger@web.de>
# Copyright (c) 2013, Darryl Stoflet <stoflet@gmail.com>
# Use only major and minor; even if there are more, these should be enough
# 'validate' always has rc = 1
# force recheck of status every second try
# Copyright (c) 2018, Jan Christian Grünhage <jan.christian@gruenhage.xyz>
# create a client object
# make sure we are in a given room and return a room object for it
# send an HTML-formatted message
# Copyright (c) 2017, Petr Lautrbach <plautrba@redhat.com>
# Based on seport.py module (c) 2014, Dan Keder <dan.keder@gmail.com>
# module.fail_json(msg="%s: %s %s" % (all_logins, login, sestore))
# for local_login in all_logins:
# Parameters definition.
# Creates a NFS file system.
# Creates a LVM file system.
# Command parameters.
# Check if fs is mounted or exists.
# If parameter size was passed, resize fs.
# If fs doesn't exist, create it.
# Check if fs will be a NFS device.
# Create a fs from NFS export.
# Create a fs from
# Create a fs from a previously created lv device.
# Unreachable codeblock
# Copyright (c) 2018, René Moser <mail@renemoser.net>
# Copyright 2016 Tomas Karasek <tom.to.the.k@gmail.com>
# if key string is specified, compare only the key strings
# if key string not specified, all the fields must match
# there is no key matching the fields from module call
# => create the key, label and
# state is 'absent' => delete matching keys
# Copyright 2013 Bruce Pennypacker <bruce@pennypacker.org>
# Build list of params
# v4 API documented at https://airbrake.io/docs/api/#create-deploy-v4
# Build deploy url
# Build header
# Notify Airbrake of deploy
# Set immutable options only on (re)creation
# Exists, but couldn't update. So, delete first
# these keys are not set on update the same way they are on creation
# Copyright (c) 2013, Jeroen Hoekx <jeroen.hoekx@dsquare.be>
# Copyright (c) 2016, Matt Robinson <git@nerdoftheherd.com>
# No default on purpose
# We want to know if the user provided it or not, so we set default here
# When an executable was provided but the binary was not found, warn the user!
# Check if we have to process any files based on existence
# Use 7zip when we have a binary, otherwise try to mount
# Copyright (c) 2024, Florian Apolloner (@apollo13)
# Copyright 2018 www.privaz.io Valletech AB
# TODO: pending setting guidelines on returned values
# TODO: Documentation on valid state transitions is required to properly implement all valid cases
# TODO: To be coherent with CLI this module should also provide "flush" functionality
# handled at module utils
# Pseudo definitions...
# the host is absent (special case defined by this module)
# Get the list of hosts
# manage host state
# apply properties
# returns host ID integer
# if we reach this point we can assume that the host was taken to the desired state
# manipulate or modify the template
# complete the template with specific ansible parameters
# setup the root element so that pyone will generate XML instead of attribute vector
# merge the template, returns host ID integer
# the cluster
# returns cluster id in int
# Copyright (c) 2013, Patrick Pelletier <pp.pelletier@gmail.com>
# Based on pacman (Afterburn) and pkgin (Shaun Zinck) modules
# 12.0.0 function _force() to be removed entirely
# 12.0.0 replace with cmd_runner_fmt.as_optval("--force-")
# Copyright (c) 2013, 2014, Jan-Piet Mens <jpmens () gmail.com>
# MQTT module support methods.
# Specifying `None` on later versions of python seems sufficient to
# instruct python to autonegotiate the SSL/TLS connection. On versions
# 3.5.2 and lower though we need to specify the version.
# Note that this is an alias for PROTOCOL_TLS, but PROTOCOL_TLS was
# not available until 3.5.3.
# Copyright (c) 2016, Roman Belyakovsky <ihryamzik () gmail.com>
# currmap = calloc(1, sizeof *currmap);
# interface not found
# add new option
# if more than one option is found, edit the last one
# Changing method of interface is not an addition
# interface has no options, ident
# Copyright (c) 2020, Lee Goolsbee <lgoolsbee@atlassian.com>
# Copyright (c) 2020, Michal Middleton <mm.404@icloud.com>
# Copyright (c) 2017, Steve Pletcher <steve@steve-pletcher.com>
# Copyright (c) 2015, Stefan Berggren <nsg@nsg.cc>
# Copyright (c) 2014, Ramon de la Fuente <ramon@delafuente.nl>
# Escaping quotes and apostrophes to avoid ending string prematurely in ansible call.
# We do not escape other characters used as Slack metacharacters (e.g. &, <, >).
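The escaping policy above (neutralize quote characters that could terminate the string early, leave Slack's own metacharacters alone) can be sketched as a small helper; the name and the choice to escape backslashes first are assumptions for this illustration:

```python
def escape_quotes(text):
    # Escape backslashes first, then double quotes and apostrophes, so the
    # string cannot end its surrounding quoting prematurely; Slack
    # metacharacters (&, <, >) are deliberately left untouched.
    return text.replace("\\", "\\\\").replace('"', '\\"').replace("'", "\\'")
```

Escaping the backslash before the quotes matters: doing it afterwards would double the backslashes just added for the quotes.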
# With a custom color we have to set the message as attachment, and explicitly turn markdown parsing on for it.
# New style webhook token
# each API requires different handling
# if updating an existing message, we can check if there's anything to update
# if check mode is active, we shouldn't do anything regardless.
# if changed=False, we don't need to do anything, so don't do it.
# Evaluate WebAPI response
# return payload as a string for backwards compatibility
# Exit with plain OK from WebHook, since we don't have more information
# If we get 200 from webhook, the only answer is OK
# Copyright (c) 2016, Deepak Kothandan <deepak.kothandan@outlook.com>
# Copyright (c) 2020, Sebastian Pfahl <eryx@gmx.net>
# @FIXME RV 'results' is meant to be used when 'loop:' was used with the module.
# See https://github.com/ansible/ansible/issues/80258#issuecomment-1477038952 for details.
# We want to make sure that all strings are properly UTF-8 encoded, even if they were not,
# or happened to be byte strings.
# See also https://github.com/ansible-collections/community.general/issues/5704.
# Author: Artūras 'arturaz' Šlajus <x11@arturaz.net>
# Author: Naoya Nakazawa <naoya.n@gmail.com>
# This module is proudly sponsored by iGeolise (www.igeolise.com) and
# Tiny Lab Productions (www.tinylabproductions.com).
# check whether the required parameter was passed or not
# call to create_instance method from footmark
# According to state to modify instance's some special attribute
# password can be modified only when restart instance
# userdata can be modified only when instance is stopped
# Security Group join/leave begin
# Security Group join/leave ends here
# Attach/Detach key pair
# Modify instance attribute
# Modify instance charge type
# This is more complex than you might normally expect because we want to
# open the file with only u+rw set. Also, we use the stat constants
# because ansible still supports python 2.4 and the octal syntax changed
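The technique described above, opening a file with only the owner read/write bits and spelling the mode with stat constants instead of an octal literal, can be sketched as follows (hypothetical helper name):

```python
import os
import stat

def open_owner_rw(path):
    # Open (creating if necessary) with only u+rw set; the stat constants
    # avoid the octal-literal syntax that changed between Python 2 and 3
    # (0600 vs 0o600).
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, stat.S_IRUSR | stat.S_IWUSR)
    return os.fdopen(fd, "w")
```

Note the mode argument to `os.open` only applies when the file is created, and is still filtered through the process umask.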
# The following is not related to Ansible's diff; see https://github.com/ansible-collections/community.general/pull/3980#issuecomment-1005666154
# Check if puppet is disabled here
# rc==1 could be because it is disabled
# rc==1 could also mean there was a compilation failure
# success with changes
# failure
# Copyright (c) 2013-2014, Christian Berendt <berendt@b1-systems.de>
# a2enmod name replacement to apache2ctl -M names
# re expressions to extract subparts of names
# force exists only for a2dismod on debian
# Copyright (c) 2022, Christian Wollinger <@cwollinger>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or SPDX-License-Identifier: GPL-3.0-or-later)
# When an org_name is provided but no organization matches, return an error
# Copyright (c) 2023, Ondrej Zvara (ozvara1@gmail.com)
# Copyright (c) 2021, Lennert Mertens (lennert@nubera.be)
# sorting necessary in order to properly detect changes, as we don't want to get false positive
# results due to differences in ids ordering;
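The order-insensitive comparison described above can be sketched as (hypothetical helper name):

```python
def ids_changed(current_ids, desired_ids):
    # Sort both sides so that a mere reordering of ids is not reported
    # as a change (avoids false positives).
    return sorted(current_ids) != sorted(desired_ids)
```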
# The special case to release the IP from any assignment
# Copyright (c) 2016, Olivier Boukili <boukili.olivier@gmail.com>
# balancer member attributes extraction regexp:
# Apache2 server version extraction regexp:
# This file is largely copied from the Nagios module included in the
# Func project. Original copyright follows:
# func-nagios - Schedule downtime and enables/disable notifications
# Copyright 2011, Red Hat, Inc.
# Tim Bielawa <tbielawa@redhat.com>
# rhel
# debian
# older debian
# bsd, solaris
# groundwork it monitoring
# open monitoring distribution
# icinga on debian/ubuntu
# icinga installed from source (default location)
# Downtime for a host if no svc specified
# host or service downtime?
# toggle the host AND service alerts
# toggle host/svc alerts
# self.action == 'command'
# Copyright (c) 2021, Sergey Mikhaltsov <metanovii@gmail.com>
# get all members in a project
# get single member in a project by user name
# check if the user is a member of the project
# add user to a project
# remove user from a project
# get user's access level
# update user's access level in a project
# project doesn't exist
# only single user given
# list of users given
# user doesn't exist
# check if the user is a member in the project
# add user to the project
# state as absent
# in case that a user is a member
# compare the access level
# update the access level for the user
# remove the user from the project
# if state is present and purge_users is set, delete users who are members with the given access level but are not in gitlab_users
# if a single user was given and an error occurred, return the error; for a list of users, errors will be reported per user
# Copyright (c) 2022, Alexander Hussey <ahussey@redhat.com>
# Only return the line containing the password
# Ditch the index, this can be grabbed from the results
# TTL is in seconds
# Copyright (c) 2017-18, Abhijeet Kasurde <akasurde@redhat.com>
# Copyright (c) 2016, Fabrizio Colonna <colofabrix@tin.it>
# Reference prefixes (International System of Units and IEC)
# "<cylinder>,<head>,<sector>" format
# Normal format: "<number>[<unit>]"
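The two size formats noted above can be distinguished with regexes; this is an illustrative sketch, not the module's actual parser:

```python
import re

# "<cylinder>,<head>,<sector>" (CHS) format, e.g. "1023,254,63"
CHS_FORMAT = re.compile(r'^(\d+),(\d+),(\d+)$')
# Normal format: "<number>[<unit>]", e.g. "10.4GB" or "512"
SIZE_FORMAT = re.compile(r'^([\d.]+)([A-Za-z%]*)$')

def parse_geometry(text):
    m = CHS_FORMAT.match(text)
    if m:
        return ('chs', tuple(int(g) for g in m.groups()))
    m = SIZE_FORMAT.match(text)
    if m:
        return ('size', float(m.group(1)), m.group(2) or None)
    return None
```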
# Generic device info
# The unit is read once, because parted always returns the same unit
# CYL and CHS have an additional line in the output
# CHS uses a different format than BYT but, contrary to what is stated by
# the author, CYL is the same as BYT. I've tested this undocumented
# behaviour down to parted version 1.8.3, which is the first version
# that supports the machine parseable output.
# Shortcut
# Cases where we default to 'compact'
# Find the appropriate multiplier
# Corrections to round up as per IEEE754 standard
# Round and return
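The multiplier lookup and rounding steps above can be sketched as follows; the unit table is an illustrative subset of the SI and IEC prefixes, not parted's full list:

```python
# Reference prefixes: a subset of SI (powers of 10) and IEC (powers of 2)
UNITS = {'B': 1, 'KB': 10**3, 'MB': 10**6, 'GB': 10**9,
         'KiB': 2**10, 'MiB': 2**20, 'GiB': 2**30}

def format_size(size_bytes, unit):
    # Find the appropriate multiplier for the unit, then round; Python's
    # round() implements IEEE 754 round-half-to-even.
    return round(float(size_bytes) / UNITS[unit], 2)
```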
# As per format_disk_size, default to compact, which defaults to megabytes
# If parted complains about missing labels, it means there are no partitions.
# In this case only, use a custom function to fetch information and emulate
# parted formats for the unit.
# Check the version
# Older parted versions return a message in the stdout and RC > 0.
# Sample parted versions (see as well test unit):
# parted (GNU parted) 3.3
# parted (GNU parted) 3.4.5
# parted (GNU parted) 3.3.14-dfc61
# unit <unit> command
# mklabel <label-type> command
# mkpart <part-type> [<fs-type>] <start> <end> command
# name <partition> <name> command
# set <partition> <flag> <state> command
# rm/mkpart command
# resize part
# Data extraction
# Parted executable
# Conditioning
# Read the current disk information
# Assign label if required
# Create partition if required
# Set the unit of the run
# If partition exists, try to resize
# Ensure the new end is different from the current one
# Execute the script and update the data structure.
# This will create the partition for the next steps
# Empty structure for the check-mode
# Assign name to the partition
# The double quotes need to be included in the arg passed to parted
# Manage flags
# Parted infers boot with esp, if you assign esp, boot is set
# and if boot is unset, esp is also unset.
# Compute only the changes in flags status
# Execute the script
# Final status of the device
# Scaleway Serverless container namespace management module
# Create container namespace
# Copyright (c) 2015, Kevin Brebanov <https://github.com/kbrebanov>
# Based on pacman (Afterburn <https://github.com/afterburn>, Aaron Bull Schaefer <aaron@elasticdog.com>)
# and apt (Matthew Williams <matthew@flowroute.com>) modules.
# Import module snippets.
# world contains a list of top-level packages separated by ' ' or \n
# packages may contain repository (@) or version (=<>~) separator characters or start with negation !
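The world-file parsing described above can be sketched like this (hypothetical helper name):

```python
import re

def parse_world(content):
    # Entries are separated by spaces or newlines; strip a leading
    # negation '!' and anything after a repository '@' or a version
    # separator (= < > ~) to recover the bare package names.
    names = []
    for entry in content.split():
        entry = entry.lstrip('!')
        names.append(re.split(r'[@=<>~]', entry, maxsplit=1)[0])
    return names
```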
# Check if virtual package
# Get virtual package dependencies
# Check to see if packages are still present because of dependencies
# ==========================================
# Main control flow.
# add repositories to the APK_PATH
# Copyright (c) 2016, Joe Adams <@sysadmind>
# If there's no distributor specified, we will publish them all
# Ensure that the importer_ssl_* is the content and not a file path
# Check to make sure all the settings are correct
# The importer config gets overwritten on set and not updated, so
# we set the whole config at the same time.
# Copyright (c) 2016, Marcin Skarbek <github@skarbek.name>
# Copyright (c) 2016, Andreas Olsson <andreas@arrakis.se>
# Copyright (c) 2017, Loic Blot <loic.blot@unix-experience.fr>
# This module was ported from https://github.com/mskarbek/ansible-nsupdate
# If the response contains an Answer SOA RR whose name matches the queried name,
# this is the name of the zone in which the record needs to be inserted.
# If the response contains an Authority SOA RR whose name is a subdomain of the queried name,
# this SOA name is the zone in which the record needs to be inserted.
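A sketch of generating the candidate zone names to probe with the SOA queries described above; the actual queries would be made with a DNS library such as dnspython (helper name is hypothetical):

```python
def candidate_zones(record_name):
    # Candidate enclosing zones for a record, most specific first; each
    # would be probed with an SOA query until the zone containing the
    # record is found.
    labels = record_name.rstrip('.').split('.')
    return ['.'.join(labels[i:]) + '.' for i in range(len(labels))]
```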
# When modifying a NS record, Bind9 silently refuses to delete all the NS entries for a zone:
# > 09-May-2022 18:00:50.352 client @0x7fe7dd1f9568 192.168.1.3#45458/key rndc_ddns_ansible:
# > updating zone 'lab/IN': attempt to delete all SOA or NS records ignored
# https://gitlab.isc.org/isc-projects/bind9/-/blob/v9_18/lib/ns/update.c#L3304
# Let's perform dns inserts and updates first, deletes after.
# Check mode and record exists, declared fake change.
# Copyright (c) 2017, Nathan Davison <ndavison85@gmail.com>
# likely unprivileged user, so add empty name & pid
# set variables to default state, in case they are not specified
# netstat distinguishes between tcp6 and tcp
# safety measure, similar to tcp6
# unexpected stdout from ss
# skip headers (-H arg is not present on e.g. Ubuntu 16)
# no process column, e.g. due to unprivileged user
# as we do in netstat logic to be consistent with output
# which ports are listening for connections?
# only display state and foreign_address for include_non_listening.
# Copyright (c) 2014, Red Hat, Inc.
# Copyright (c) 2014, Tim Bielawa <tbielawa@redhat.com>
# Copyright (c) 2014, Magnus Hedemark <mhedemar@redhat.com>
# Note: we can't reasonably support the 'if you need to put both ' and " in a string, concatenate
# strings wrapped by the other delimiter' XPath trick, especially as simple XPath.
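The quoting limitation above amounts to picking one delimiter per literal; a sketch (hypothetical helper name):

```python
def xpath_string_literal(value):
    # Plain XPath string literals support only one delimiter at a time;
    # a value containing BOTH quote characters would need the concat()
    # trick, which this helper deliberately does not attempt.
    if "'" not in value:
        return "'%s'" % value
    if '"' not in value:
        return '"%s"' % value
    raise ValueError('value contains both single and double quotes')
```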
# OK, it found something
# lxml 5.1.1 removed etree._ElementStringResult, so we can no longer simply assume it is there
# (https://github.com/lxml/lxml/commit/eba79343d0e7ad1ce40169f60460cdd4caa29eb3)
# Get the xpath for this result
# Delete an attribute
# Pop this attribute match out of the parent
# node's 'attrib' dict by using this match's
# 'attrname' attribute for the key
# Delete an element
# Create a list of our new children
# xpaths always return matches as a list, so....
# Check if elements differ
# Write it out
# requesting an element to exist
# requesting an element to exist with an inner text
# requesting an attribute to exist
# requesting an attribute to exist with a value
# requesting a change of inner text
# return "{{%s}}%s" % (namespaces[nsname], rawname)
# no namespace name here
# we test again after calling check_or_make_target
# implicitly creating an element
# module.fail_json(msg="now tree=%s" % etree.tostring(tree, pretty_print=True))
# module.fail_json(msg="element=%s subexpr=%s node=%s now tree=%s" %
# module.fail_json(msg="arf %s changing=%s as curval=%s changed tree=%s" %
# NOTE: This checks only the namespaces defined in root element!
# TODO: Implement a more robust check to check for child namespaces' existence
# attribute = "{{%s}}%s" % (namespaces[attr_ns], attr_name)
# NOTE: Modifying a string is not considered a change!
# Check if we have lxml 2.3.0 or newer installed
# Check if the file exists
# Parse and evaluate xpath expression
# Try to parse in the target XML file
# Ensure we have the original copy to compare
# File exists:
# - absent: delete xpath target
# - present: carry on
# children && value both set?: should have already aborted by now
# add_children && set_children both set?: should have already aborted by now
# set_children set?
# add_children set?
# No?: Carry on
# Is the xpath target an attribute selector?
# If an xpath was provided, we need to do something with the data
# Otherwise only reformat the xml data?
# Copyright (c) 2025, Dexter Le <dextersydney2001@gmail.com>
# TODO: Pluralize operation_options in separate PR and remove this helper fmt function
# Copyright (c) 2013, Jimmy Tang <jcftang@gmail.com>
# Based on okpg (Patrick Pelletier <pp.pelletier@gmail.com>), pacman
# (Afterburn) and pkgin (Shaun Zinck) modules
# rc is 1 when nothing to upgrade so check stdout first.
# Using a for loop in case of error, we can report the port that failed
# Query the port first, to see if we even need to remove
# execution state
# check required mounts & mount
# check required mounts
# Identify the targeted filesystem and obtain the current state
# The filesystem must be mounted to obtain the current state (subvolumes, default, etc)
# TODO is failing the module an appropriate outcome in this scenario?
# Prepare unit of work
# No change required
# prepare unit of work
# TODO potentially unmount the subvolume if automount=True ?
# reverse to ensure children are deleted before their parent
# Stage operations to the unit of work
# Execute the unit of work
# the target may have been created earlier in module execution
# Create/cleanup temporary mountpoints
# this check should be redundant
# The subvolume was already mounted, so return the current path
# Format and return results
# Copyright (c) 2017, Alejandro Gomez <alexgomez2202@gmail.com>
# available gunicorn options on module
# temporary files in case no option provided
# remove temp file if exists
# obtain app name and venv
# use venv path if exists
# to daemonize the process
# fill options
# place error log somewhere in case of fail
# add option for pid file if not found on config file
# put args together
# wait for gunicorn to dump to log
# if user defined own error log, check that
# delete tmp log
# Copyright (c) 2019 David Lundgren <dlundgren@syberisle.net>
# OID style names are not supported
# Prior to FreeBSD 11.1 sysrc would write "unknown variable" to stdout and not stderr
# https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229806
# This file is part of Networklore's snmp library for Ansible
# From SNMPv2-MIB
# From IF-MIB
# From IP-MIB
# Verify that we receive a community when using snmp v2
# Use SNMP Version 2
# Use SNMP Version 3 with authNoPriv
# Use SNMP Version 3 with authPriv
# Use p to prefix OIDs with a dot for polling
# Use v without a prefix to use with return values
# Copyright (c) 2025, Marcos Alano <marcoshalano@gmail.com>
# Based on gio_mime module. Copyright (c) 2022, Alexei Znamensky <russoz@gmail.com>
# In memory: This code is dedicated to my late grandmother, Maria Marlene. 1936-2025. Rest in peace, grandma.
# -Marcos Alano-
# TODO: Add support for diff mode
# Copyright (c) 2021 Radek Sprta <mail@radeksprta.eu>
# Copyright (c) 2024 Colin Nolan <cn580@alumni.york.ac.uk>
# absent
# Copyright (c) 2019, Jan Meerkamp <meerkamp@dvv.de>
# Copyright (c) 2025, Tom Paine <github@aioue.net>
# Updateconf attributes documentation: https://docs.opennebula.io/6.10/integration_and_development/system_interfaces/api.html#one-vm-updateconf
# Filter -2 means fetch all templates user can Use
# LCM_STATE is VM's sub-state that is relevant only when STATE is ACTIVE
# 600 -> 110000000
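The conversion noted above (octal permission 600 to the binary string 110000000) can be sketched as (hypothetical helper name):

```python
def perm_octal_to_binary(octal_str):
    # '600' (octal rw-------) -> '110000000': each of the nine
    # permission bits becomes one binary digit.
    return format(int(octal_str, 8), '09b')
```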
# 1: Merge new template with the existing one.
# check if the number of disks is correct
# Check whether the name has the indexed format
# If the name has the indexed format and contains only digits after base_name, it will be matched
# If the name is not indexed, it has to be the same
# Create list of used indexes
# Make list which contains used indexes in format ['000', '001',...]
# Create indexed name
# Update NAME value in the attributes in case there is index
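A sketch of the indexed-name scheme described above, assuming a '<base_name>-<digits>' form and three-digit zero padding ('000', '001', ...); the helper name is hypothetical:

```python
import re

def next_indexed_name(base_name, existing_names):
    # Collect the indexes already used by names of the assumed form
    # '<base_name>-<digits>' and return the next free one, zero-padded.
    pattern = re.compile(r'^%s-(\d+)$' % re.escape(base_name))
    used = set(int(m.group(1))
               for m in (pattern.match(n) for n in existing_names) if m)
    index = 0
    while index in used:
        index += 1
    return '%s-%03d' % (base_name, index)
```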
# Add more VMs
# Delete surplus VMs
# store only the remaining instances
# Firstly, power-off all instances
# Wait for all to be powered off
# Check the format of the name attribute
# wait for VM to leave the hotplug_saveas_poweroff state
# Fetch template
# Fetch datastore
# Deploy an exact count of VMs
# Deploy count VMs
# instances_list - new instances
# tagged_instances_list - all instances with specified `count_attributes` and `count_labels`
# Fetch data of instances, or change their state
# instances - a list of instances info whose state is changed or which are fetched with C(instance_ids) option
# tagged_instances - A list of instances info based on a specific attributes and/or labels that are specified with C(count_attributes) and C(count_labels)
# Copyright (c) 2016, Aleksei Kostiuk <unitoff@gmail.com>
# Copyright (c) 2018, Sebastian Schenzel <sebastian.schenzel@mailbox.org>
# Copyright (c) 2017, Branko Majic <branko@majic.rs>
# Store passed-in arguments and set-up some defaults.
# Try to extract existing D-Bus session address.
# If no existing D-Bus session was detected, check if dbus-run-session is available
# We'll be checking the processes of current user only.
# Go through all the pids for this user, try to extract the D-Bus
# session bus address from environment, and ensure it is possible to
# connect to it.
# This can happen with things like SSH sessions etc.
# Process has disappeared while inspecting it
# Check if dconf binary exists
# It's unset in dconf database, so anything the user is trying to
# set is a change.
# If no change is needed (or won't be done due to check_mode), notify
# caller straight away.
# Set-up command to run. Since DBus is needed for write operation, wrap
# the dconf command with dbus-launch.
# Run the command and fetch standard return code, stdout, and stderr.
# Value was changed.
# Read the current value first.
# No change was needed, key is not set at all, or just notify user if we
# are in check mode.
# Set-up command to run. Since DBus is needed for reset operation, wrap the dconf command with dbus-launch.
# Setup the Ansible module
# Converted to str below after special handling of bool.
# This interpreter can't see the GLib module. To try to fix that, we'll
# look in common locations for system-owned interpreters that can see
# it; if we find one, we'll respawn under it. Otherwise we'll proceed
# with degraded performance, without the ability to parse GVariants.
# Later (in a different PR) we'll actually deprecate this degraded
# performance level and fail with an error if the library can't be
# found.
# This shouldn't be possible; short-circuit early if it happens.
# Found the Python bindings; respawn this module under the
# interpreter where we found them.
# This is the end of the line for this process, it will exit here
# once the respawned module has completed.
# Try to be forgiving about the user specifying a boolean as the value, or
# more accurately about the fact that YAML and Ansible are quite insistent
# about converting strings that look like booleans into booleans. Convert
# the boolean into a string of the type dconf will understand. Any type for
# the value other than boolean is just converted into a string directly.
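The boolean-to-string conversion described above can be sketched as (hypothetical helper name):

```python
def to_dconf_value(value):
    # dconf speaks GVariant syntax: booleans are the lowercase strings
    # 'true'/'false'. Any other type is converted to str directly, so a
    # YAML-coerced boolean never ends up as 'True'/'False'.
    if isinstance(value, bool):
        return 'true' if value else 'false'
    return str(value)
```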
# Create wrapper instance.
# Process based on different states.
# Copyright (c) 2023, Salvatore Mesoraca <s.mesoraca16@gmail.com>
# Copyright (c) 2021, Tyler Gates <tgates81@gmail.com>
# If the user did not define a full path to the restful space in url:
# params, add what we believe it to be.
# Align these with what is defined in OneClick's UI under:
# Locator -> Devices -> By Model Name -> <enter any model> ->
# Build the update URL
# None values should be converted to empty strings
# POST to /model to update the attributes, or fail.
# Load and parse the JSON response and either fail or set results.
# I'm not 100% confident on the expected failure structure so just
# dump all of ['attribute'].
# Should be OK if we get to here, set results.
# If no return attributes were asked for, return Model_Handle.
# Set the XML <rs:requested-attribute id=<id>> tags. If no hex ID
# is found for the name, assume it is already in hex. {name: hex ID}
# Build the complete XML search query for HTTP POST.
# POST to /models and fail on errors.
# Parse through the XML response and fail on any detected errors.
# XML response should be successful. Iterate and set each returned
# attribute ID/name and value for return.
# Note: all values except empty strings (None) are strings only!
# Parent filter tag
# Logically and
# Model Name
# Model Type Name
# Get a list of all requested attribute names/IDs plus Model_Handle and
# use them to query the values currently set. Store findings in a dict.
# Survey attributes currently set and store in a dict.
# Iterate through the requested attributes names/IDs values pair and
# compare with those currently set. If different, attempt to change.
# The API will return None on empty string
# Copyright (c) 2019, Francois Lallart (@fraff)
# Check that the instance exists
# Is monthlyBilling already enabled or pending?
# We should never reach here
# Filter and map the parameters names that apply to the client template
# lists in the Keycloak API are sorted
# We can only compare the current client template with the proposed updates we have
# Copyright (c) 2022, Jean Raby <jean@raby.sh>
# Normalize for old configs
# mandatory, but might be empty
# This happens if an empty list has been provided for name
# URL packages bypass the latest / upgradable_pkgs test
# They go through the dry-run to let pacman decide if they will be installed
# Dry run to build the installation diff
# With Pacman v6.0.1 - libalpm v13.0.1, --upgrade outputs "loading packages..." on stdout. strip that.
# When installing from URLs, pacman can also output a 'nothing to do' message. strip that too.
# This can happen with URL packages if pacman decides there's nothing to do
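A sketch of the chatter-stripping described above; the exact noise strings listed here are assumptions, not an exhaustive reproduction of pacman's output:

```python
def strip_pacman_noise(stdout):
    # Drop progress chatter that pacman mixes into its stdout, keeping
    # only the real package lines.
    noise = ('loading packages...', 'there is nothing to do')
    return [line for line in stdout.splitlines()
            if line.strip() and line.strip() not in noise]
```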
# actually do it
# set reason
# filter out pkgs that are already absent
# There's something to do, set this in advance
# nosave_args conflicts with --print-format. Added later.
# https://github.com/ansible-collections/community.general/issues/4315
# This is a bit of a TOCTOU but it is better than parsing the output of
# pacman -R, which is different depending on the user config (VerbosePkgLists)
# Start by gathering what would be removed
# trailing \n to avoid diff complaints
# there are upgrades, so there will be changes
# Build diff based on inventory first.
# Dump package database to get contents before update
# Always changed when force=true
# Dump package database to get contents after update
# If contents changed, set changed=true
# Expand group members
# Just a regular pkg, either available in the repositories,
# or locally installed, which we need to know for absent state
# Last resort, call out to pacman to extract the info,
# pkg is possibly in the <repo>/<pkgname> format, or a filename or a URL
# Start with <repo>/<pkgname> case
# fallback to filename / URL
# Don't bark for unavailable packages when trying to remove them
# With Pacman v6.0.1 - libalpm v13.0.1, --upgrade outputs " filename_without_extension downloading..." if the URL is unseen.
# In all cases, pacman outputs "loading packages..." on stdout. strip both
# Format of a line: "pacman 6.0.1-2"
# Format of lines:
# Format of a line: "core pacman 6.0.1-2"
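The line formats noted above can be parsed with simple splits (hypothetical helper names):

```python
def parse_query_line(line):
    # Format of a line: "pacman 6.0.1-2" -> (name, version)
    name, version = line.split()
    return name, version

def parse_repo_line(line):
    # Format of a line: "core pacman 6.0.1-2" -> (repo, name, version)
    repo, name, version = line.split()[:3]
    return repo, name, version
```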
# non-zero exit with nothing in stdout -> nothing to upgrade, all good
# stderr can have warnings, so not checked here
# nothing to upgrade
# stuff in stdout but rc!=0, abort
# Split extra_args as the shell would for easier handling later
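The shell-style split mentioned above is what the standard library's shlex does:

```python
import shlex

def split_extra_args(extra_args):
    # Split as the shell would, so quoted arguments containing spaces
    # survive as single tokens.
    return shlex.split(extra_args)
```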
# Validate device existence
# Check destination has enough free space
# Copyright (c) 2019, Saranya Sridharan
# See https://psutil.readthedocs.io/en/latest/#find-process-by-name for more information
# this could just be "return False" but would lead to surprising behavior if both a and b are None
# see https://gitlab.com/gitlab-org/gitlab-foss/-/issues/27355
# results due to differences in ids ordering; see `mr_has_changed()`
# Copyright (c) 2013, Ivan Vanderbyl <ivan@app.io>
# Query the log first, to see if we even need to remove.
# Handle multiple log files
# Copyright (c) 2016, Timothy Vandenbrande <timothy.vandenbrande@gmail.com>
# Create VM
# Set MEMORY and MEMORY POLICY
# Set CPU
# Set CPU SHARE
# Set DISKS
# Set NETWORKS
# Set Delete Protection
# Set Boot Order
# Set VM Host
# Remove VM
# Copyright (c) 2017-2020, Yann Amar <quidame@poivron.org>
# Option --listpackage is needed and comes with 1.15.0
# Used for things not doable with a single dpkg-divert command (such as
# forced renaming of files, and updates to a diversion's 'holder' or 'divert').
# Append options as requested in the task parameters, but ignore some of
# them when removing the diversion.
# Start to populate the returned objects.
# Just try and see
# else... cases of failure with dpkg-divert are:
# - The diversion does not belong to the same package (or LOCAL)
# - The divert filename is not the same (e.g. path.distrib != path.divert)
# - The renaming is forbidden by dpkg-divert (i.e. both the file and the
# There should be no case with 'divert' and 'holder' when creating the
# diversion from none, and they're ignored when removing the diversion.
# So this is all about renaming...
# The situation is that we want to modify the settings (holder or divert)
# of an existing diversion. dpkg-divert does not handle this, and we have
# to remove the existing diversion first, and then set a new one.
# Avoid if possible to orphan files (i.e. to dereference them in diversion
# database but let them in place), but do not make renaming issues fatal.
# BTW, this module is not about state of files involved in the diversion.
# Copyright (c) 2013, berenddeboer
# Written by berenddeboer <berend@pobox.com>
# Based on pkgng module written by bleader <bleader at ratonland.org>
# Assume that if we have pkg_info, we haven't upgraded to pkgng
# TODO: convert run_command() argument to list!
# databases/mysql55-client installs as mysql-client, so try solving
# that the ugly way. Pity FreeBSD doesn't have a foolproof way of checking
# whether some package is installed
# counts the number of packages found
# If pkg_delete not found, we assume pkgng
# If portinstall not found, automagically install
# TODO: check how many match
# Copyright (c) 2018, Jean-Philippe Evrard <jean-philippe@evrard.me>
# seed the result dict in the object
# we primarily care about changed and state
# change is if this module effectively modified the target
# state will include any data that you want your module to pass back
# for consumption, for example, in a subsequent task
# It is possible to set `ca_cert` to verify the server identity without
# setting `client_cert` or `client_key` to authenticate the client
# so required_together is enough
# Due to `required_together=[['client_cert', 'client_key']]`, checking the presence
# of either `client_cert` or `client_key` is enough
# Make the cluster_value[0] a string for string comparisons
# during the execution of the module, if there is an exception or a
# conditional state that effectively causes a failure, run
# AnsibleModule.fail_json() to pass in the message and the result
# Copyright (c) 2018, Juan Manuel Parrilla <jparrill@redhat.com>
# New vault
# Already exists
# Copyright (c) 2019, Tomas Karasek <tom.to.the.k@gmail.com>
# Scaleway Serverless container info module
# do_notify_grove
# Copyright (c) Benjamin Jolivot <bjolivot@gmail.com>
# Inspired by slack module :
# init return dict
# define webhook
# define payload
# http headers
# Nothing is done in check mode
# it'll pass even if your server is down and/or your token is invalid.
# If someone finds a good way to check...
# send request if not in test mode
# something's wrong
# some problem
# Looks good
# Copyright (c) 2025, Tom Hesse <contact@tomhesse.xyz>
# loop devices
# nvme drives
# sata/scsi drives
# config is optional, if not provided we keep the current config as is
# Keep in current state if enabled arg_spec is not given
# Handle job config
# Handle job disable/enable
# Copyright (c) 2017 Marc Sensenich <hello@marc-sensenich.com>
# In check mode, send an empty payload to validate connection
# Scaleway database backups management module
# Copyright (C) 2020 Guillaume Rodriguez (g.rodriguez@opendecide.com).
# python-jenkins includes the internal Jenkins class used for each job
# in its return value; we strip that out because the leading underscore
# (and the fact that it is not documented in the python-jenkins docs)
# indicates that it is not part of the dependable public interface.
# The query operation for the remote needs to be run even in check mode
# Copyright (c) 2019 Dell EMC Inc.
# Manager attributes are supported as part of iDRAC OEM extension
# Attributes are supported only on iDRAC9
# execute only if we find a Manager resource
# Copyright (c) 2014, Jasper N. Brouwer <jasper@nerdsweide.nl>
# destroy the facts
# Copyright (c) 2017, Jasper Lievisse Adriaanse <j@jasper.la>
# While vmadm(1M) supports a -E option to return any errors in JSON, the
# generated JSON does not play well with the JSON parsers of Python.
# The returned message contains '\n' as part of the stacktrace,
# which breaks the parsers.
# Lookup a property for the given VM.
# Returns the property, or None if not found.
# Lookup the uuid that goes with the given alias.
# Returns the uuid or '' if not found.
# If no VM was found matching the given alias, we get back an empty array.
# That is not an error condition as we might be explicitly checking for its
# absence.
# Retrieve the UUIDs for all VMs.
# 'vmadm create' returns all output to stderr...
# Now that the VM is created, ensure it is in the desired state (if not 'running')
# Since the payload may contain sensitive information, fail hard
# if we cannot remove the file so the operator knows about it.
# Create a new VM using the provided payload.
# Check if the VM is already in the desired state.
# Lookup table for the state to be in, and which command to use for that.
# vm_state: [vmadm command, forceable?]
# Create the JSON payload (vmdef) and return the filename.
# Filter out the few options that are not valid VM properties.
# Create the temporary file that contains our payload, and set tight
# permissions, for it may contain sensitive information.
# XXX: When there's a way to get the current ansible temporary directory
# drop the mkstemp call and rely on ANSIBLE_KEEP_REMOTE_FILES to retain
# the payload (thus removing the `save_payload` option).
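A sketch of the tight-permission payload file described above, assuming a JSON vmdef (hypothetical helper name):

```python
import json
import os
import tempfile

def write_payload(vmdef):
    # mkstemp creates the file with mode 0600, so the possibly sensitive
    # payload is never readable by other users.
    fd, fname = tempfile.mkstemp(suffix='.json')
    try:
        with os.fdopen(fd, 'w') as f:
            json.dump(vmdef, f)
    except Exception:
        os.unlink(fname)
        raise
    return fname
```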
# Whether the VM changed state.
# Handle operations for all VMs, which can by definition only
# be state transitions.
# If any of the VMs has a change, the task as a whole has a change.
# First get all VM uuids and for each check their state, and adjust it if needed.
# In order to reduce the clutter and boilerplate for trivial options,
# abstract the vmadm properties and build the dict of arguments later.
# Dict of all options that are simple to define based on their type.
# They're not required and have a default of None.
# Start with the options that are not as trivial as those above.
# Regular strings, however these require additional options.
# Add our 'simple' options to options dict.
# Translate the state parameter into something we can use later on.
# While it is possible to refer to a given VM by its `alias`, it is easier
# to operate on VMs by their UUID. So if we're not given a `uuid`, look it up by alias.
# Bit of a chicken and egg problem here for VMs with state == deleted.
# If they're going to be removed in this play, we have to lookup the
# uuid. If they're already deleted there's nothing to lookup.
# So if state == deleted and get_vm_uuid() returned '', the VM is already
# deleted and there's nothing else to do.
# The general flow is as follows:
# - first the current state of the VM is obtained by its UUID.
# - If the state was not found and the desired state is 'deleted', return.
# - If the state was not found, it means the VM has to be created.
# - Otherwise, it means the VM exists already and we operate on its state.
# In the future it should be possible to query the VM for a particular
# property as a valid state (i.e. queried) so the result can be
# registered.
# Also, VMs should be able to get their properties updated.
# Managing VM snapshots should be part of a standalone module.
# First obtain the VM state to determine what needs to be done with it.
# First handle the case where the VM should be deleted and is not present.
# Shortcut for check mode, if there is no VM yet, it will need to be created.
# Or, if the VM is not in the desired state yet, it needs to transition.
# No VM was found that matched the given ID (alias or uuid), so we create it.
# VM was found, operate on its state directly.
# kc completely removes the parameter `krbPrincipalAttribute` if it is set to `''`; the unset kc parameter is equivalent to `''`;
# to make change detection and diff more accurate we set it again in the kc responses
# kc stores a timestamp of the last sync in `lastSync` to time the periodic sync, it is removed to minimize diff/changes
# Keycloak API expects config parameters to be arrays containing a single string element
# Filter and map the parameters names that apply
# if user federation exists, get associated mappers
# changeset contains all desired mappers: those existing, to update or to create
# to keep unspecified existing mappers we add them to the desired mappers list, unless they're already present
# when creating a user federation, keycloak automatically creates default mappers
# create new mappers or update existing default mappers
# we remove all unwanted default mappers
# we use ids so we don't accidentally remove one of the previously updated default mappers
# exclude bindCredential when checking whether an update is required, therefore
# updating it only if there are other changes
# remove unwanted existing mappers that will not be updated
# Copyright (c) 2019, Sandeep Kasargod (sandeep@vexata.com)
# Copyright (c) 2024, Philippe Duveau <pduvax@gmail.com>
# Copyright (c) 2019, Maciej Delmanowski <drybjed@gmail.com> (ldap_attrs.py)
# Copyright (c) 2017, Alexander Korinek <noles@a3k.net> (ldap_attrs.py)
# Copyright (c) 2016, Peter Sagerson <psagers@ignorare.net> (ldap_attrs.py)
# Copyright (c) 2016, Jiri Tyr <jiri.tyr@gmail.com> (ldap_attrs.py)
# The code of this module is derived from that of ldap_attrs.py
# Instantiate the LdapAttr object
# if the current value (the first arg in inc_legacy) has changed, then the modify will fail
# Account, repository or SSH key pair was not found.
# Retrieve existing ssh key
# Create or update key pair
# Delete key pair
# Consider every action a change (not idempotent yet!)
# Get effective client-level role mappings
# Copyright (c) 2022, Masayoshi Mizuma <msys.mizuma@gmail.com>
# in case command[] has number
# To check this system has dcpmm
# There are no namespaces on this system. Nothing to do.
# Disable and destroy all namespaces
# delete the goal request
# This is probably an ipmctl bug: it shows the socket goal
# which isn't specified by the -socket option.
# Anyway, filter the noise out here:
# First, do dry run ipmctl create command to check the error and warning.
# Run actual creation here
# Copyright (c) 2022, Håkon Lerring
# Copyright (c) 2013, Raul Melo
# Written by Raul Melo <raulmelo@gmail.com>
# Check local version
# Check depot version
# Install new version
# Copyright (c) 2016, Shinichi TAMURA (@tmshn)
# platform.system() returns SunOS, which is too broad. So look at the
# platform version instead. However we have to ensure that we're not
# running in the global zone where changing the timezone has no effect.
# Not supported yet
# `self.value` holds the values of each param for each phase.
# Initially there's only info for the "planned" phase, but the
# `self.check()` function will fill in the rest.
# Validate given timezone
# For key='hwclock'; convert yes/no -> local/UTC
# For key='hwclock'; convert UTC/local -> yes/no
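The two hwclock conversions above can be sketched as a pair of hypothetical helpers (names are assumptions, not the module's actual functions):

```python
def hwclock_to_tzconfig(value):
    # For key='hwclock': convert yes/no -> local/UTC
    return "local" if value == "yes" else "UTC"

def tzconfig_to_hwclock(value):
    # For key='hwclock': convert UTC/local -> yes/no
    return "yes" if value.lower() == "local" else "no"
```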
# To be set in __init__
# It is fine if all three config files don't exist
# `--remove-destination` is needed if /etc/localtime is a symlink so
# that it overwrites it instead of following it.
# Distribution-specific configurations
# Debian/Ubuntu
# RHEL/CentOS/SUSE
# tzdata-update cannot update the timezone if /etc/localtime is
# a symlink so we have to use cp to update the time zone which
# was set above.
# If the config file doesn't exist detect the distribution and set regexps.
# For SUSE
# For RHEL/CentOS
# The key for timezone might be `ZONE` or `TIMEZONE`
# (the former is used in RHEL/CentOS and the latter is used in SUSE linux).
# So check the content of /etc/sysconfig/clock and decide which key to use.
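The key-detection step might look like this (function name and default fallback are assumptions, not the module's API):

```python
import re

def sysconfig_clock_key(content):
    # Decide which key /etc/sysconfig/clock uses: ZONE (RHEL/CentOS)
    # or TIMEZONE (SUSE). Fall back to the RHEL/CentOS spelling.
    if re.search(r'^ZONE\s*=', content, re.MULTILINE):
        return 'ZONE'
    if re.search(r'^TIMEZONE\s*=', content, re.MULTILINE):
        return 'TIMEZONE'
    return 'ZONE'
```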
# In some cases, even if the target file does not exist,
# simply creating it may solve the problem.
# In such cases, we should continue the configuration rather than aborting.
# If the error is not ENOENT ("No such file or directory"),
# (e.g., permission error, etc), we should abort.
# Find all the matched lines
# Remove all matched lines
# ...and insert the value
# Write the changes
# If we cannot find UTC in the config that's fine.
# If we cannot find UTC/LOCAL in /etc/adjtime, that means UTC
# will be used by default.
# In 'before' phase UTC/LOCAL doesn't need to be set in
# the timezone config file, so we ignore this error.
# convert yes/no -> UTC/local
# convert LOCAL -> local
# If the value in the config file is the same as the 'planned'
# value, we need to check /etc/adjtime.
# If the planned value is the same as the one in the config file
# we need to check if /etc/localtime is also set to the 'planned' zone.
# If /etc/localtime is a symlink and is not set to the TZ we 'planned'
# to set, we need to return the TZ which the symlink points to.
# We use readlink() because on some distros zone files are symlinks
# to other zone files, so it is hard to get which TZ is actually set
# if we follow the symlink.
# most Linux distros have it in /usr/share/zoneinfo
# Alpine Linux links under /etc/zoneinfo
# Set current TZ to 'n/a' if the symlink points to a path
# which isn't a zone file.
# Set current TZ to 'n/a' if the symlink to the zone file is broken.
# If /etc/localtime is not a symlink best we can do is compare it with
# the 'planned' zone info file and return 'n/a' if they are different.
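The readlink-based detection described above can be sketched as follows (helper name and prefix list taken from the comments; both are illustrative assumptions):

```python
import os

def current_timezone(localtime="/etc/localtime"):
    # Read the symlink one level only (readlink, not realpath), since
    # zone files may themselves be symlinks to other zone files.
    if not os.path.islink(localtime):
        return "n/a"
    target = os.readlink(localtime)
    for prefix in ("/usr/share/zoneinfo/", "/etc/zoneinfo/"):
        if target.startswith(prefix):
            return target[len(prefix):]
    return "n/a"  # symlink points outside any zoneinfo directory
```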
# sm-set-timezone knows no state and will always set the timezone.
# XXX: https://github.com/joyent/smtools/pull/2
# Lookup the list of supported timezones via `systemsetup -listtimezones`.
# Note: Skip the first line that contains the label 'Time Zones:'
# Strategy 1:
# Strategy 2:
# OSError means "end of symlink chain" or broken link.
# Strategy 3:
# Strategy 4:
# First determine if the requested timezone is valid by looking in
# the zoneinfo directory.
# Now (somewhat) atomically update the symlink by creating a new
# symlink and move it into place. Otherwise we have to remove the
# original symlink and create the new symlink, however that would
# create a race condition in case another process tries to read
# /etc/localtime between removal and creation.
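The create-then-rename trick reads like this in outline (a sketch, assuming a hypothetical helper; the real module wraps this with error handling):

```python
import os

def set_localtime_symlink(zoneinfo_path, localtime="/etc/localtime"):
    # Create the new symlink under a temporary name, then rename() it
    # into place; rename() is atomic on POSIX, so a process reading
    # /etc/localtime never observes it missing.
    tmp = localtime + ".new"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(zoneinfo_path, tmp)
    os.rename(tmp, localtime)
```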
# chtz seems to always return 0 on AIX 7.2, even for invalid timezone values.
# It will only return non-zero if the chtz command itself fails; it does not check for
# invalid timezones. This does mean that we can only support Olson for now. The below commented out regex
# regex_olson = re.compile('^([a-z0-9_\-\+]+\/?)+$', re.IGNORECASE)
# if not regex_olson.match(value):
# First determine if the requested timezone is valid by looking in the zoneinfo directory
# Now set the TZ using chtz
# The best condition check we can do is to check the value of TZ after making the change
# Construct 'module' and 'tz'
# Check the current state
# In check mode, 'planned' state is treated as 'after' state
# Make change
# Examine if the current state matches planned state
# An issue is identified by the project_name, the issue_subject and the issue_type
# The issue does not exist in the project
# This implies a change
# Create the issue
# If it does not exist, do nothing
# The issue exists in the project
# Delete the issue
# More than 1 matching issue
# check MediaTypes
# if ejected, 'Inserted' should be False and 'ImageName' cleared
# read the VirtualMedia resources from systems
# read the VirtualMedia resources from manager
# find the VirtualMedia resource to eject
# try to eject via PATCH if no EjectMedia action found
# if Allow header present and PATCH missing, return error
# POST to the EjectMedia Action
# empty payload for Eject action
# POST to action
# already ejected: return success but changed=False
# return failure (no resources matching image_url found)
# eject specified one media
# eject all inserted media when no image_url specified
# read all the VirtualMedia resources
# eject all inserted media one by one
# no media inserted: return success but changed=False
# see if image already inserted; if so, nothing to do
# find an empty slot to insert the media
# try first with strict media_type matching
# if not found, try without strict media_type matching
# confirm InsertMedia action found
# try to insert via PATCH if no InsertMedia action found
# get the action property
# get ActionInfo or AllowableValues
# construct payload
# get member resource one by one
# check whether resource_uri exists or not
# check validity of keys in request_body
# perform patch
# check whether changed or not
# get action base uri data for further checking
# check resource_uri with the target uri found in action base uri data
# check request_body with parameter name defined by @Redfish.ActionInfo
# perform post
# resource_uri
# request_body
# For virtual media resources located on the Systems service
# For virtual media resources located on the Managers service
# Copyright (c) 2014, Nate Coraor <nate@bx.psu.edu>
# need to add capability
# remove from current cap list if it is already set (but op/flags differ)
# add new cap with correct op/flags
# need to remove capability
# remove from current cap list and then set current list
# If file xattrs are set but no caps are set the output will be:
# If file xattrs are unset the output will be:
# If the file does not exist, the stderr will be (with rc == 0...):
# process output of an older version of libcap
# '/foo (Error Message)'
# otherwise, we have a newer version here
# see original commit message of cap/v0.2.40-18-g177cd41 in libcap.git
# getcap condenses capabilities with the same op/flags into a
# comma-separated list, so we have to parse that
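Parsing that condensed form can be sketched like so (function name is an assumption; real getcap output may carry multiple whitespace-separated clauses):

```python
def parse_caps(cap_string):
    # Expand getcap's condensed "cap_a,cap_b+ep" form into
    # [("cap_a", "+ep"), ("cap_b", "+ep")].
    result = []
    for clause in cap_string.strip().split():
        # split the op/flags suffix (+ep, =ei, ...) off the name list
        for i, ch in enumerate(clause):
            if ch in "+-=":
                names, opflags = clause[:i], clause[i:]
                break
        else:
            names, opflags = clause, ""
        for name in names.split(","):
            result.append((name, opflags))
    return result
```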
# Copyright (c) 2018, Luca 'remix_tj' Lorenzetto <lorenzetto.luca@gmail.com>
# being not attached when using absent is OK
# --- run command ---
# bridge_request is supported on pyghmi 1.5.30 and later
# Copyright (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
# LXC_COMPRESSION_MAP is a map of available compression types when creating
# an archive of a container.
# LXC_COMMAND_MAP is a map of variables that are available to a method based
# on the state the container is in.
# lxc-clone is deprecated in favor of lxc-copy
# LXC_BACKING_STORE is a map of available storage backends and options that
# are incompatible with the given storage backend.
# LXC_LOGGING_LEVELS is a map of available log levels
# LXC_ANSIBLE_STATES is a map of states that contain values of methods used
# when a particular state is invoked.
# This is used to attach to a running container and execute commands from
# within the container on the host.  This will provide local access to a
# container without using SSH.  The template will attempt to work within the
# home directory of the user that was attached to the container and source
# that users environment variables by default.
# Ensure the script is executable.
# Output log file.
# Error log file.
# Execute the script command.
# Close the log files.
# Remove the script file upon completion of execution.
# Remove incompatible storage backend options.
# Look for key in config
# If the sanitized values don't match replace them
# Break the flow as values are written or not at this point
# If the config changed restart the container.
# Ensure that the state of the original container is stopped
# Load logging for the instance when creating it.
# Check for backing_store == overlayfs if so force the use of snapshot
# If overlay fs is used and snapshot is unset the clone command will
# fail with an unsupported type.
# Restore the original state of the origin container if it was
# not in a stopped state.
# Set the logging path to the /var/log/lxc if uid is root. else
# set it to the home folder of the user executing.
# Add the template commands to the end of the command if there are any
# post startup sleep for 1 second.
# Check if the container needs to have an archive created.
# Check if the container is to be cloned
# post destroy attempt sleep for 1 second.
# Perform any configuration updates
# Run container startup
# Return data
# Create LVM Snapshot
# remove trailing / if present.
# This loop is created to support overlayfs archives. This should
# squash all of the layers into a single archive.
# Set the path to the container data
# Run the sync command
# Create a temp dir
# Set the name of the working dir, temp + container_name
# LXC container rootfs
# Test if the container's rootfs is a block device
# Test if the container is using overlayfs
# Set the snapshot name if needed
# Ensure the original container is stopped or frozen
# Sync the container data from the container_path to work_dir
# Take snapshot
# Mount snapshot
# Set the state as changed and set a new fact
# unmount snapshot
# Remove snapshot
# Restore original state of container
# Remove tmpdir
# LXD_ANSIBLE_STATES is a map of states that contain values of methods used when a particular state is invoked.
# ANSIBLE_LXD_STATES is a map of states of lxd containers to the Ansible
# lxc_container module state parameter value.
# CONFIG_CREATION_PARAMS is a list of attribute names that are only applied
# on instance creation.
# The LXD (3.19) REST API provides an instances endpoint; fall back to containers and virtual-machines
# https://documentation.ubuntu.com/lxd/en/latest/rest-api/#instances-containers-and-virtual-machines
# self.old_sections is already filtered for volatile keys if necessary
# preliminary, will be overwritten in _apply_instance_configs() if called
# Copyright (c) 2016, Guillaume Grossetie <ggrossetie@yuzutech.fr>
# Re-attempt with no password to match existing behavior
# when keypass is provided, add -passin
# Preserve properties of the destination file, if any.
# Preserve properties of the source file
# keytool may return 0 even though the keystore has not been created.
# Utility functions
# Copyright (c) 2018, Evert Mulder (base on manageiq_user.py by Daniel Korn <korndaniel1@gmail.com>)
# No parent or parent id supplied we select the root tenant
# check if we need to update (compare_tenant is true if no difference is found)
# try to update tenant
# check for required arguments
# Change the byte values to GB
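That conversion is just integer division by 1024³ (a sketch with a hypothetical helper name):

```python
def bytes_to_gb(value):
    # ManageIQ reports quota values in bytes; compare in whole GB.
    return value // (1024 ** 3)
```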
# The root tenant does not return the ancestry attribute
# tenant should not exist
# if we have a tenant, delete it
# if we do not have a tenant, nothing to do
# tenant should exist
# if we have a tenant, edit it
# if we do not have a tenant, create it
# quotas are supplied and we have a tenant
# group doesn't exist
# if there is a password param, but 'update_password' is 'on_create'
# then discard the password (since we're editing an existing user)
# check if we need to update (compare_user is true if no difference is found)
# try to update user
# try to create a new user
# user should not exist
# if we have a user, delete it
# if we do not have a user, nothing to do
# user should exist
# if we have a user, edit it
# if we do not have a user, create it
# Modules import
# Wait for 5s before continuing
# Based on macports (Jimmy Tang <jcftang@gmail.com>)
# "brew info" can lookup by name, full_name, token, full_token,
# oldnames, old_tokens, or aliases. In addition, any of the
# above names can be prefixed by the tap. Any of these can be
# supplied by the user as the package name.  In case of
# ambiguity, where a given name might match multiple packages,
# formulae are preferred over casks. For all other ambiguities,
# the results are an error.  Note that in the homebrew/core and
# homebrew/cask taps, there are no "other" ambiguities.
# according to brew info
# Issue https://github.com/ansible-collections/community.general/issues/9803:
# name can include the tap as a prefix, in order to disambiguate,
# e.g. casks from identically named formulae.
# Issue https://github.com/ansible-collections/community.general/issues/10012:
# package_detail["tap"] is None if package is no longer available.
# Issue https://github.com/ansible-collections/community.general/issues/10804
# name can be an alias, oldname or old_token, optionally prefixed by the tap
# names so far, with tap prefix added to each
# Finally, identify which of all those package names was the one supplied by the user.
# Then make sure the user-provided name resurfaces.
# There are 3 actions possible here depending on installed and outdated states:
# /uninstalled ----------------------------- }}}
# linked --------------------------------- {{{
# /linked -------------------------------- }}}
# unlinked ------------------------------- {{{
# /unlinked ------------------------------ }}}
# zone doesn't exist
# get endpoint defaults
# build a connection_configuration object for each endpoint
# get role and authtype
# set a connection_configuration
# NOTE: we do not check for diff's between requested and current
# clean nulls, we do not send nulls to the api
# try to update provider
# try to create a new provider
# add the endpoint arguments to the arguments
# provider should not exist
# if we have a provider, delete it
# if we do not have a provider, nothing to do
# provider should exist
# get data user did not explicitly give
# if we do not have a provider_type, use the current provider_type
# check supported_providers types
# build "connection_configurations" objects from user requested endpoints
# "provider" is a required endpoint, if we have it, we have endpoints
# if we have a provider, edit it
# if we do not have a provider, create it
# refresh provider (trigger sync)
# Scaleway user data management module
# First we remove keys that are not defined in the wished user_data
# Then we patch keys that are different
# note: unfortunately public key cannot be updated directly by
# Because we have already called exists_deploy_key in main()
# Copyright (c) 2015, Mathew Davies <thepixeldeveloper@googlemail.com>
# Copyright (c) 2017, Sam Doran <sdoran@redhat.com>
# We first consider the simplest form: pluginname
# We consider the form: username/pluginname
# remove elasticsearch- prefix
# remove es- prefix
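The three name-reduction steps above might be combined like this (a sketch; the function name mirrors the comments' intent but is an assumption):

```python
def parse_plugin_repo(string):
    # Reduce "username/elasticsearch-pluginname" to "pluginname":
    # take the part after "/", then strip elasticsearch-/es- prefixes.
    repo = string.split('/')[-1]
    for prefix in ('elasticsearch-', 'es-'):
        if repo.startswith(prefix):
            return repo[len(prefix):]
    return repo
```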
# Timeout and version are only valid for plugin, not elasticsearch-plugin
# Elasticsearch 8.x
# Older Elasticsearch versions
# Legacy ES 1.x
# Use the plugin_bin that was supplied first before trying other options
# Add the plugin_bin passed into the module to the top of the list of paths to test,
# testing for that binary name first before falling back to the default paths.
# Get separate lists of dirs and binary names from the full paths to the
# plugin binaries.
# Check for the binary names in the default system paths as well as the path
# specified in the module arguments.
# Search provided path and system paths for valid binary
# skip if the state is correct
# Copyright (c) 2017, Joseph Benden <joe@benden.us>
# stringify all values - in the CLI they will all be happy strings anyway
# and by doing this here the rest of the code can be agnostic to it
# use one single type for the entire list
# or complain if lists' lengths are different
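A sketch of that normalization (names are hypothetical; the stringify-everything step keeps the rest of the code agnostic to value types):

```python
def normalize_array(values, previous=None):
    # Stringify all values; when a previous list exists, complain
    # if the lengths differ instead of guessing at an alignment.
    new = [str(v) for v in values]
    if previous is not None and len(previous) != len(new):
        raise ValueError("lists have different lengths: %d != %d"
                         % (len(previous), len(new)))
    return new
```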
# calculate whether it is an array
# Copyright (c) 2017, <meiliu@fusionlayer.com>
# ---------------------------------------------------------------------------
# get_network()
# get_network_id()
# reserve_next_available_ip()
# -------------------------
# release_ip()
# delete_network()
# reserve_network()
# release_network()
# add_network()
# Modifying existing custom policies is not possible
# Filter and map the parameter names that apply to the group
# create it ...
# ... as subgroup of another parent group
# ... as a top-level base group
# updated => title are equal
# added => title does not exist
# group milestone has group_id, while project has project_id
# create raises an exception with the following error message when the milestone already exists
# there is no way to retrieve the details of checks so if a check is present
# in the service it must be re-registered
# check that it registered correctly
# Copyright (c) 2018, Evert Mulder <evertmulder@gmail.com> (base on manageiq_user.py by Daniel Korn <korndaniel1@gmail.com>)
# No tenant name or tenant id supplied
# No role name or role id supplied
# If no updated values are supplied in merge mode, the original values must be returned;
# otherwise the existing tag filters will be removed.
# If no existing tag filters exist, use the user supplied values
# start with norm_current_values's keys and values
# replace res with norm_updated_values's keys and values
# Only compare if filters are supplied
# No existing filters exist, use supplied filters
# try to update group
# Process belongsto managed filter tags
# The input is a dict whose keys are the categories and whose values are the supplied tag string arrays
# ManageIQ (the current_managed) uses an array of arrays, one array per category.
# We normalize the user input from a dict with arrays to a dict of sorted arrays
# We normalize the current ManageIQ array of arrays also to a dict of sorted arrays so we can compare
# Do not add empty categories. ManageIQ will remove all categories that are not supplied
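The normalization just described can be sketched as (helper name hypothetical):

```python
def normalize_filters(filters):
    # Normalize {category: [tags...]} to sorted lists and drop empty
    # categories, so two filter dicts can be compared directly.
    return {cat: sorted(tags) for cat, tags in filters.items() if tags}
```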
# group should not exist
# if we have a group, delete it
# if we do not have a group, nothing to do
# group should exist
# if we have a group, edit it
# if we do not have a group, create it
# If in check mode don't create project, simulate a fake project creation
# If not in check mode, remove the project
# Also allow the user to set values for fetch_url
# Copyright (c) 2013, Michael DeHaan <michael@ansible.com>
# Copyright (c) 2013, Nimbis Services, Inc.
# TODO double check if this hack below is still needed.
# Check file for blank lines in effort to avoid "need more than 1 value to unpack" error.
# If the file gets edited, it returns true, so only edit the file if it has blank lines
# If check mode, create a temporary file
# No preexisting file to remove blank lines from
# needed to make pylint happy
# Safety check.
# Is this an existing running VM?
# Find a SR we can use for VM.copy(). We use SR of the first disk
# if specified or default SR if not specified.
# VM name could be an empty string which is bad.
# Support for Ansible check mode.
# Now we can instantiate VM. We use VM.clone for linked_clone and
# VM.copy for non linked_clone.
# Description is copied over from template so we reset it.
# If template is one of built-in XenServer templates, we have to
# do some additional steps.
# Note: VM.get_is_default_template() is supported from XenServer 7.2
# other_config of built-in XenServer templates have a key called
# 'disks' with the following content:
# This value of other_config is copied to the cloned or copied VM and
# it prevents provisioning of VM because sr is not specified and
# XAPI returns an error. To get around this, we remove the
# 'disks' key and add disks to VM later ourselves.
# At this point we have VM ready for provisioning.
# After provisioning we can prepare vm_params for reconfigure().
# VM is almost ready. We just need to reconfigure it...
# Power on VM if needed.
# If there is no CD present, we have to create one.
# We will try to place cdrom at userdevice position
# 3 (which is default) if it is not already occupied
# else we will place it at first allowed position.
# Unimplemented!
# To change network or MAC, we destroy old
# VIF and then create a new one with changed
# parameters. That's how XenCenter does it.
# Copy all old parameters to new VIF record.
# A user could have manually changed network
# or mac e.g. through XenCenter and then also
# make those changes in playbook manually.
# In that case, module will not detect any
# changes and info in xenstore_data will
# become stale. For that reason we always
# update name and mac in xenstore_data.
# Since we handle name and mac differently,
# we have to remove them from
# network_change_list.
# We first have to remove any existing data
# from xenstore_data because there could be
# some old leftover data from some interface
# that once occupied same device location as
# our new interface.
# We get MAC from VIF itself instead of
# networks.mac because it could be
# autogenerated.
# Gather new params after reconfiguration.
# Make sure that VM is poweredoff before we can destroy it.
# Destroy VM!
# Destroy all VDIs associated with VM!
# This VM could be a template or a snapshot. In that case we fail
# because we can't reconfigure them or it would just be too
# dangerous.
# Let's build a list of parameters that changed.
# Name could only differ if we found an existing VM by uuid.
# Folder parameter is found in other_config.
# Check existence only. Ignore return value.
# Kept for compatibility with older Ansible versions that
# do not support subargument specs.
# We can use VCPUs_at_startup or VCPUs_max parameter. I'd
# say the former is the way to go but this needs
# confirmation and testing.
# For now, we don't support hotplugging so the VM has to be in
# poweredoff state to reconfigure.
# For now, we don't support hotplugging so the VM has to be
# in poweredoff state to reconfigure.
# There are multiple memory parameters:
# memory_target seems like a good candidate but it returns 0 for
# halted VMs so we can't use it.
# I decided to use memory_dynamic_max and memory_static_max
# and use whichever is larger. This strategy needs validation
# and testing.
# XenServer stores memory size in bytes so we need to divide
# it by 1024*1024 = 1048576.
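The memory strategy above amounts to (a sketch under the stated assumption that the larger of the two maxima is the right value; helper name is illustrative):

```python
def memory_mb(memory_dynamic_max, memory_static_max):
    # XenServer reports memory size in bytes; take whichever maximum
    # is larger and convert to MB by dividing by 1024*1024 = 1048576.
    return max(int(memory_dynamic_max), int(memory_static_max)) // 1048576
```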
# Find allowed userdevices.
# Get the list of all disk. Filter out any CDs found.
# The number of disks defined in module params has to be the same as or
# higher than the number of existing disks attached to the VM.
# We don't support removal or detachment of disks.
# Find the highest disk occupied userdevice.
# If this is an existing disk.
# If this is a new disk.
# We need to place a new disk right above the highest
# placed existing disk to maintain relative disk
# positions pairable with disk specifications in
# module params. That place must not be occupied by
# some other device like CD-ROM.
# If no place was found.
# Highest occupied place could be a CD-ROM device
# so we have to include all devices regardless of
# type when calculating out-of-bound position.
# For new disks we only track their position.
# We should append config_changes_disks to config_changes only
# if there is at least one changed disk, else skip.
# Get the list of all CD-ROMs. Filter out any regular disks
# found. If we found no existing CD-ROM, we will create it
# later else take the first one found.
# If no existing CD-ROM is found, we will need to add one.
# We need to check if there is any userdevice allowed.
# If cdrom.iso_name is specified but cdrom.type is not,
# then set cdrom.type to 'iso', unless cdrom.iso_name is
# an empty string, in that case set cdrom.type to 'none'.
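That defaulting rule is small enough to spell out (a sketch; the dict shape and helper name are assumptions):

```python
def default_cdrom_type(cdrom):
    # If cdrom['iso_name'] is given but 'type' is not, default the type
    # to 'iso'; an empty iso_name means 'none'.
    if cdrom.get('type') is None and 'iso_name' in cdrom:
        cdrom['type'] = 'iso' if cdrom['iso_name'] else 'none'
    return cdrom
```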
# If type changed.
# Check if ISO exists.
# Is ISO image changed?
# Find allowed devices.
# The number of VIFs defined in module params has to be the same as or
# higher than the number of existing VIFs attached to the VM.
# We don't support removal of VIFs.
# Find the highest occupied device.
# IPv4 reconfiguration.
# If networks.ip is specified and networks.type is not,
# then set networks.type to 'static'.
# XenServer natively supports only 'none' and 'static'
# type with 'none' being the same as 'dhcp'.
# If any parameter is overridden at this point, update it.
# Gateway can be an empty string (when removing gateway
# configuration) but if it is not, it should be validated.
# IPv6 reconfiguration.
# If networks.ip6 is specified and networks.type6 is not,
# then set networks.type6 to 'static'.
# If this is an existing VIF.
# If this is a new VIF.
# Restart is needed if we are adding new network
# interface with IP/gateway parameters specified
# and custom agent is used.
# We need to place a new network interface right above the
# highest placed existing interface to maintain relative
# positions pairable with network interface specifications
# in module params.
# For new VIFs we only track their position.
# We should append config_changes_networks to config_changes only
# if there is at least one changed network, else skip.
# We only need to track custom param position.
# There should be only a single size spec but we make a list of all size
# specs just in case. Priority is given to 'size' but if not found, we
# check for 'size_tb', 'size_gb', 'size_mb' etc. and use first one
# size_tb, size_gb, size_mb, size_kb, size_b
# We found a float value in the string, let's typecast it.
# If we found a float but the unit is bytes, we take the integer part only.
# We found an int value in the string, let's typecast it.
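A sketch of the size parsing with the priority order and the bytes-truncation rule above (function name, unit table, and regex are assumptions, not the module's code):

```python
import re

UNITS = {'b': 1, 'kb': 1024, 'mb': 1024 ** 2, 'gb': 1024 ** 3, 'tb': 1024 ** 4}

def parse_size(spec):
    # Parse '10gb', '1.5kb', '512mb', ... into bytes. Floats are
    # allowed; for plain bytes only the integer part is kept.
    m = re.match(r'^\s*(\d+(?:\.\d+)?)\s*(b|kb|mb|gb|tb)\s*$', spec.lower())
    if not m:
        raise ValueError("unrecognized size spec: %r" % spec)
    value, unit = float(m.group(1)), m.group(2)
    return int(value) if unit == 'b' else int(value * UNITS[unit])
```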
# Common failure
# TODO: implement support for detecting type host. No server to test
# this on at the moment.
# Find existing VM
# Make new disk and network changes more user friendly
# and informative.
# Look through all the response pages in search of the variable we need
# We are at the end of list
# Retrieve existing pipeline variable (if any)
# Create a new variable in case it doesn't exist
# Update variable if it is secured or the old value does not match the new one
# Delete variable
# Using 'if id:' doesn't work properly when id=0
# Using 'if requested_id:' doesn't work properly when requested_id=0
# It might be already deleted by the time this function is called
# Copyright (c) 2025, Alexei Znamensky <russoz@gmail.com>
# Copyright (c) 2018, Ryan Conway (@rylon)
# Copyright (c) 2018, Scott Buchanan <sbuchanan@ri.pn> (onepassword.py used as starting point)
# Adds the session token to all commands if we're logged in.
# This is actually a document, let's fetch the document data instead!
# This is not a document, let's try to find the requested field
# Some types of 1Password items have a 'password' field directly alongside the 'fields' attribute,
# not inside it, so we need to check there first.
# Otherwise we continue looking inside the 'fields' attribute for the specified field.
# Haven't found it yet, so now let's see if there are any sections defined
# and search through those for the field. If a section was given, we skip
# any non-matching sections, otherwise we search them all until we find the field.
# We will get here if the field could not be found in any section and the item wasn't a document to be downloaded.
# If the config file exists, assume an initial signin has taken place and try basic sign in
# Since we are not currently signed in, master_password is required at a minimum
# Try signing in using the master_password and a subdomain if one is provided
# Attempt a full sign in since there appears to be no existing sign in
# If we already have a result for this key, we have to append this result dictionary
# to the existing one. This is only applicable when there is a single item
# in 1Password which has two different fields, and we want to retrieve both of them.
# If this is the first result for this key, simply set it.
# If the organization already exists
# Otherwise create it
# No organization found
# Otherwise attempt to delete it
# Only accept deletion under specific conditions
# Copyright (c) 2017, Joris Weijters <joris.weijters@gmail.com>
# Import necessary libraries
# end import modules
# start defining the functions
# Check if the entry exists; if not, return False for 'exists' in the return dict,
# if it does, return True and the entry in the return dict
# strip non-readable characters such as \n
# Find commandline strings
# check if the new entry exists
# if action is install or change,
# create new entry string
# If the current entry exists and the fields are different (if the entry does
# not exist, then the entry will be created)
# If the entry does exist then change the entry
# If the entry does not exist create the entry
# If the action is remove and the entry exists then remove the entry
# Copyright (c) 2013, Philippe Makowski
# Written by Philippe Makowski <philippem@mageia.org>
# urpmi always has exit code 0 if --force is used
# Copyright (c) 2013, Scott Anderson <scottanderson42@gmail.com>
# forces --noinput on every command that needs it
# These params are allowed for certain commands only
# These params are automatically added to the command if present
# these params always get tacked on the end of the command
# Python-3.2 or later
# Hack to add basic auth username and password the way fetch_url expects
# Send some audible notification if requested
# Send the message
# Scaleway Serverless function namespace info module
# Copyright (c) 2014, Jakub Jirutka <jakub@jirutka.cz>
# Hack to add params in the form that fetch_url expects
# read Layman configuration
# reload config
# define module
# Based on homebrew (Andrew Dunham <andrew@du.nham.ca>)
# No tap URL provided explicitly, continue with bulk addition
# of all the taps.
# When a tap URL is provided explicitly, we allow adding
# a *single* tap only. Validate and proceed to add the single tap.
# Copyright (c) 2018, Emmanouil Kampitakis <info@kampitakis.de>
# Copyright (c) 2018, William Leemans <willie@elaba.net>
# Set limits
# Ensure that a non-existing quota does not trigger a change
# Copyright (c) 2017, Ted Trask <ttrask01@yahoo.com>
# Copyright (c) 2014, Matt Martz <matt@sivel.net>
# Workaround for: Error communicating with iLO: Problem manipulating EV
# TODO: Verify if image URL exists/works
# Only perform a boot when state is boot_once or boot_always, or in case we want to force a reboot
# module.deprecate(
# Copyright (c) James Laska (jlaska@redhat.com)
# Remove any existing redhat.repo
# Pass supplied **kwargs as parameters to subscription-manager.  Ignore
# non-configuration parameters and replace '_' with '.'.  For example,
# 'server_hostname' becomes '--server.hostname'.
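The '_' to '.' mapping can be sketched as follows (helper name is hypothetical; the real code also filters out non-configuration parameters first):

```python
def kwargs_to_config_args(**kwargs):
    # Turn server_hostname='foo' into ['--server.hostname', 'foo'],
    # mirroring the '_' -> '.' mapping used by subscription-manager.
    args = []
    for key, value in sorted(kwargs.items()):
        args.extend(['--%s' % key.replace('_', '.'), str(value)])
    return args
```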
# When there is nothing to configure, then it is not necessary
# to run config command, because it only returns current
# content of current configuration file
# subscription-manager in any supported Fedora version has the interface.
# Any other distro: assume it is EL;
# the D-Bus interface was added to subscription-manager in RHEL 7.4.
# Technically speaking, subscription-manager uses dbus-python
# as D-Bus library, so this ought to work; better be safe than
# sorry, I guess...
# There is no support for token-based registration in the D-Bus API
# of rhsm, so always use the CLI in that case;
# also, since the specified environments are names, and the D-Bus APIs
# require IDs for the environments, use the CLI also in that case
# Generate command arguments
# Seconds to wait for Registration to complete over DBus;
# 10 minutes should be a pretty generous timeout.
# Stop the rhsm service when using systemd (which means Fedora or
# RHEL 7+): this is because the service may not use new configuration bits
# - with subscription-manager < 1.26.5-1 (in RHEL < 8.2);
# - sporadically: https://bugzilla.redhat.com/show_bug.cgi?id=2049296
# While there is a 'force' option for the registration, it is actually
# not implemented (and thus it does not work)
# - in RHEL 7 and earlier
# - in RHEL 8 before 8.8: https://bugzilla.redhat.com/show_bug.cgi?id=2118486
# - in RHEL 9 before 9.2: https://bugzilla.redhat.com/show_bug.cgi?id=2121350
# Hence, use it only when implemented, manually unregistering otherwise.
# Match it on RHEL, since we know about it; other distributions
# will need their own logic.
# We need to use the 'enable_content' D-Bus option to ensure that
# content is enabled; sadly the option is available depending on the
# version of the distro, and also depending on which API/method is used
# for registration.
# subscription-manager in Fedora >= 41 has the new option.
# Assume EL distros here.
# subscription-manager in any supported Fedora version
# has the new option.
# Check for RHEL 8 >= 8.6, or RHEL >= 9.
# CentOS: similar checks as for RHEL, with one extra bit:
# if the 2nd part of the version is empty, it means it is
# CentOS Stream, and thus we can assume it has the latest
# version of subscription-manager.
# Unknown or old distro: assume it does not support
# the new option.
# The option for the consumer type used to be 'type' in versions
# of RHEL before 9 & in RHEL 9 before 9.2, and then it changed to
# 'consumer_type'; since the Register*() D-Bus functions reject
# unknown options, we have to pass the right option depending on
# the version -- funky.
# Check for RHEL 9 >= 9.2, or RHEL >= 10.
# CentOS: since the change was only done in EL 9, then there is
# only CentOS Stream for 9, and thus we can assume it has the
# latest version of subscription-manager.
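The version-dependent option name selection described above can be condensed into a sketch; `consumer_type_option` and the `(major, minor)` tuple argument are assumptions for illustration only.

```python
def consumer_type_option(distro_id, version):
    """Pick the D-Bus registration option name for the consumer type:
    'consumer_type' on RHEL >= 9.2 (or >= 10), 'type' before that.
    `version` is a (major, minor) tuple; this helper is illustrative."""
    major, minor = version
    if distro_id == 'centos':
        # only CentOS Stream exists for EL 9+, which can be assumed to
        # track the latest subscription-manager
        return 'consumer_type' if major >= 9 else 'type'
    if major >= 10 or (major == 9 and minor >= 2):
        return 'consumer_type'
    return 'type'
```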
# The option for environments used to be 'environment' in versions
# of RHEL before 8.6, and then it changed to 'environments'; since
# the Register*() D-Bus functions reject unknown options, we have
# to pass the right option depending on the version -- funky.
# Wrap it as proper D-Bus dict
# Use the private bus to register the system
# Sometimes we get NoReply but the registration has succeeded.
# Check the registration status before deciding if this is an error.
# Host is not registered so re-raise the error
# Host was registered so continue
# Always shut down the private bus
# Make sure to refresh all the local data: this will fetch all the
# certificates, update redhat.repo, etc.
# There is no support for setting the release via D-Bus, so invoke
# the CLI for this.
# Remove leading+trailing whitespace
# An empty line implies the end of an output group
# If a colon ':' is found, parse
# To unify
# Remember the name for later processing
# Associate value with most recently recorded product
# FIXME - log some warning?
# warnings.warn("Unhandled subscription key/value: %s/%s" % (key,value))
# Update current syspurpose with new values
# When some key is not listed in new syspurpose, then delete it from current syspurpose
# and ignore custom attributes created by user (e.g. "foo": "bar")
# Note: the default values for parameters are:
# 'type': 'str', 'default': None, 'required': False
# So there is no need to repeat these values for each parameter.
# Load RHSM configuration from file
# Ensure system is registered
# Cache the status of the system before the changes
# Register system
# Ensure system is *not* registered
# Copyright (c) 2017, Tim Rightnour <thegarbledone@gmail.com>
# do the logging
# Copyright (c) 2018, Red Hat, Inc.
# Generate a list of VDO volumes, whether they are running or stopped.
# @param module  The AnsibleModule object.
# @param vdocmd  The path of the 'vdo' command.
# @return vdolist  A list of currently created VDO volumes.
# if rc != 0:
# If there is no /etc/vdoconf.yml file, assume there are no
# VDO volumes. Return an empty list of VDO volumes.
# Generate a string containing options to pass to the 'VDO' command.
# Note that a 'create' operation will pass more options than a
# 'modify' operation.
# @param params  A dictionary of parameters, and their values
# @return vdocmdoptions  A string to be used in a 'vdo <action>' command.
# Entering an invalid thread config results in a cryptic
# 'Could not set up device mapper for %s' error from the 'vdo'
# command execution.  The dmsetup module on the system will
# output a more helpful message, but one would have to log
# onto that system to read the error.  For now, heed the thread
# limit warnings in the DOCUMENTATION section above.
# Define the available arguments/parameters that a user can pass to
# the module.
# Defaults for VDO parameters are None, in order to facilitate
# the detection of parameters passed from the playbook.
# Creation param defaults are determined by the creation section.
# Seed the result dictionary in the object.  There will be an
# 'invocation' dictionary added with 'module_args' (arguments
# given).
# Print a pre-run list of VDO volumes in the result object.
# Collect the name of the desired VDO volume, and its state.  These will
# determine what to do.
# Create a desired VDO volume that doesn't exist yet.
# Create a dictionary of the options from the AnsibleModule
# parameters, compile the vdo command options, and run "vdo create"
# with those options.
# Since this is a creation of a new VDO volume, it will contain
# all of the parameters given by the playbook; the rest will
# assume default values.
# Print a post-run list of VDO volumes in the result object.
# Modify the current parameters of a VDO that exists.
# An empty dictionary to contain dictionaries of VDO statistics
# The 'vdo status' keys that are currently modifiable.
# A key translation table from 'vdo status' output to Ansible
# module parameters.  This covers all of the 'vdo status'
# parameter keys that could be modified with the 'vdo' command.
# Build a dictionary of the current VDO status parameters, with
# the keys used by VDO.  (These keys will be converted later.)
# Build a "lookup table" dictionary containing a translation table
# of the parameters that can be modified
# Build a dictionary of current parameters formatted with the
# same keys as the AnsibleModule parameters.
# Check for differences between the playbook parameters and the
# current parameters. This will need a comparison function;
# since AnsibleModule params are all strings, compare them as
# strings (but if it is None; skip).
# Process the size parameters to determine if a growPhysical or
# growLogical operation needs to occur.
# Set a growPhysical threshold to grow only when there is
# guaranteed to be more than 2 slabs worth of unallocated
# space on the device to use.  For now, set to device
# size + 64 GB, since 32 GB is the largest possible
# slab size.
# Note that a disabled VDO volume cannot be started by the
# 'vdo start' command, by design.  To accurately track changed
# status, don't try to start a disabled VDO volume.
# If the playbook contains 'activated: true', assume that
# the activate_vdo() operation succeeded, as 'vdoactivatestatus'
# will have the activated status prior to the activate_vdo()
# call.
# Remove a desired VDO that currently exists.
# fall through
# The state for the desired VDO volume was absent, and it does
# not exist. Print a post-run list of VDO volumes in the result object.
# Set to a default value.
# Filter and map the parameters names that apply to the client scope
# Unfortunately, the ansible argument spec checker introduces variables with null values when
# they are not specified
# remove ids for compare, problematic if desired has no ids set (not required),
# normalize for consentRequired in protocolMappers
# We can only compare the current clientscope with the proposed updates we have
# do the protocolmappers update
# update if protocolmapper exist
# create otherwise
# Copyright (c) 2016, Artem Feofanov <artem.feofanov@gmail.com>
# filling backward compatibility args
# SSL errors, connection problems, etc.
# Copyright (c) 2022, Fynn Chen <ethan.cfchen@gmail.com>
# Create new secret
# Copyright (c) 2014, Michael Warkentin <mwarkentin@gmail.com>
# If path is specified, cd into that path and run the command.
# Named dependency not installed
# Absent
# Cronvar Plugin: The goal of this plugin is to provide an idempotent
# method for setting cron variable values.  It should play well with the
# existing cron module as well as allow for manually added variables.
# Each variable entered will be preceded with a comment describing the
# variable so that it can be found later.  This is required to be
# present in order for this plugin to find/modify the variable
# This module is based on the crontab module.
# using safely quoted shell for now, but this really should be two non-shell calls instead.  FIXME
# quoting shell args for now but really this should be two non-shell calls.  FIXME
# Add the variable to the top of the file.
# Throws if not a var line
# Append.
# ==================================================
# - community.general.cronvar: name="SHELL" value="/bin/bash"
# - name: Set the email
# - name: Get rid of the old new host variable
# SHELL = /bin/bash
# EMAILTO = doug@ansibmod.con.com
# Copyright (c) 2017, Fran Fitzpatrick (francis.x.fitzpatrick@gmail.com)
# Borrowed heavily from other work by Abhijeet Kasurde (akasurde@redhat.com)
# does zone exist
# check for generic zone existence
# Copyright (c) 2021, Raphaël Droz (raphael.droz@gmail.com)
# Copyright (c) 2018, Samy Coenen <samy.coenen@nubera.be>
# Whether to operate on GitLab-instance-wide or project-wide runners
# See https://gitlab.com/gitlab-org/gitlab-ce/issues/60774
# for group runner token access
# python-gitlab 2.2 through at least 2.5 returns a list of dicts for list() instead of a Runner
# object, so we need to handle both
# When runner exists, object will be stored in self.runner_object.
# the paused attribute for runners is available since 14.8
# if `accessor_id` is not supplied we can only create objects and are not idempotent
# SecretID is usually not supplied
# ExpirationTTL is only supported on create, not for update
# it writes to ExpirationTime, so we need to remove that as well
# Scaleway Serverless function info module
# Copyright (c) 2019, Kaarle Ritvanen <kaarle.ritvanen@datakunkku.fi>
# Scaleway Serverless container management module
# Copyright Ansible Team
# login to github
# GitHub's token formats:
# Test if we're actually logged in, but skip this check for some token prefixes
# Copyright (c) 2017, Arie Bregman <abregman@redhat.com>
# add "normal" parameters here, put uncheckable
# params in the dict below
# note: for some attributes like this one the key
# put "uncheckable" params here, this means params
# which the gitlab does accept for setting but does
# not return any information about it
# note: as we unfortunately have some uncheckable parameters
# Assign ssh keys
# Assign group
# When user exists, object will be stored in self.user_object.
# Copyright (c) 2023 Aritra Sen <aretrosen@proton.me>
# Copyright (c) 2017 Chris Hoffman <christopher.hoffman@gmail.com>
# BUG: It will not show correct error sometimes, like when it has
# plain text output intermingled with a {}
# HACK: To fix the above bug, the following hack is implemented
# get the resource type
# return a list of current profiles for this object
# node.js v0.10.22 changed the `npm outdated` module separator
# from "@" to " ". Split on both for backwards compatibility.
# Copyright (c) 2023, Benedikt Braunger (bebr@adm.ku.dk)
# but it is required and only relevant for check mode!!
# Copyright (c) 2014, Dan Keder <dan.keder@gmail.com>
# Linode API currently requires the following:
# It must contain at least two of these four character classes:
# lower case letters - upper case letters - numbers - punctuation
# we play it safe :)
# as of python 2.4, this reseeds the PRNG from urandom
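A generator satisfying the character-class rule above could look like this sketch; `generate_root_password` is an assumed name, and including all four classes comfortably exceeds the "at least two" requirement.

```python
import random
import string

def generate_root_password(length=32):
    """Illustrative generator: include all four character classes
    (lower, upper, digits, punctuation), exceeding the 'at least
    two classes' requirement with room to spare."""
    rng = random.SystemRandom()  # reads from os.urandom
    classes = (string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation)
    chars = [rng.choice(c) for c in classes]  # one from each class
    alphabet = ''.join(classes)
    chars += [rng.choice(alphabet) for _ in range(length - len(chars))]
    rng.shuffle(chars)
    return ''.join(chars)
```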
# Populate with ips
# See if we can match an existing server details with the provided linode_id
# For the moment we only consider linode_id as criteria for match
# Later we can use more (size, name, etc.) and update existing
# Attempt to fetch details about disks and configs only if servers are
# found with linode_id
# Act on the state
# TODO: validate all the plan / distribution / datacenter are valid
# Multi step process/validation:
# Any create step triggers a job that need to be waited for.
# Create linode entity
# Get size of all individually listed disks to subtract from Distribution disk
# Update linode Label to match name
# Update Linode with Ansible configuration options
# Save server
# Add private IP to Linode
# Create disks (1 from distrib, 1 for SWAP)
# Password is required on creation, if not provided generate one
# Create data disk
# Create SWAP disk
# Create individually listed disks at specified size
# If a disk Type is not passed in, default to ext4
# TODO: destroy linode ?
# Check architecture
# Get latest kernel matching arch if kernel_id is not specified
# Get disk list
# Trick to get the 9 items in the list
# Create config
# Start / Ensure servers are running
# Refresh server state
# Ensure existing servers are up and running, boot if necessary
# wait here until the instances are up
# refresh the server details
# status:
# Get a fresh copy of the server details
# From now on we know the task is a success
# Build instance report
# depending on wait flag select the status
# Return the root password if this is a new box and no SSH key
# has been provided
# Ease parsing if only 1 instance
# setup the auth
# Look through all response pages in search of hostname we need
# Retrieve existing known host
# Create a new host in case it doesn't exist
# Delete host
# Copyright (c) 2018, Dusty Mabe <dusty@dustymabe.com>
# Ensure rpm-ostree command exists
# Decide action to perform
# Add the options to the command line
# Additional parameters
# Determine if system needs a reboot to apply change
# A few possible options:
# Copyright (c) 2012, Dag Wieers (@dagwieers) <dag@wieers.com>
# `domain` is only available in Python 3
# NOTE: Backward compatible with old syntax using '|' as delimiter
# address only, w/o phrase
# NOTE: Backward compatibility with old syntax using space as delimiter is not retained
# Copyright (c) 2014, Ravi Bhure <ravibhure@gmail.com>
# Discover backends if none are given
# Run the command for each requested backend
# Fail when backends were not found
# We can assume there will only be 1 element in state because both svname and pxname are always set when we get here
# When using track we get a status like this: MAINT (via pxname/svname) so we need to do substring matching
# check if haproxy version supports DRAIN state (starting with 1.5)
# Get the state before the run
# toggle enable/disable server
# Get the state after the run
# Report change status
# zone domain length must be less than 250 chars
# zone domain already exists, nothing to change.
# we need to create the domain
# unset msg as we don't want to return unnecessary info to the user.
# the zone needs to be unique - this isn't a requirement of Memset's API but it
# makes sense in the context of this module.
# we would need to populate the return values with the API's response
# in several places so it is easier to do it at the end instead.
# Copyright (c) 2019, INSPQ (@elfelip)
# Filter and map the parameters names that apply to the user
# Delete user
# If the force option is set to true
# Delete the existing user
# Add user ID to new representation
# Compare users
# If the new user does not introduce a change to the existing user
# Update the user
# set user groups
# Get the user groups
# Copyright (c) 2012, Jim Richardson <weaselkeeper@gmail.com>
# Copyright (c) 2019, Bernd Arnold <wopfel@gmail.com>
# parse config
# Copyright (c) 2013, Patrick Callahan <pmc@patrickcallahan.com>
# based on
# exit code 104 is ZYPPER_EXIT_INF_CAP_NOT_FOUND (no packages found)
# zypper exit codes
# 0: success
# 106: signature verification failed
# 102: ZYPPER_EXIT_INF_REBOOT_NEEDED - Returned after a successful installation of a patch which requires reboot of computer.
# 103: zypper was upgraded, run same command again
# 107: ZYPPER_EXIT_INF_RPM_SCRIPT_FAILED -  Some of the packages %post install scripts returned an error, but package is installed.
# if this was the first run and it failed with 103
# run zypper again with the same command to complete update
# if this was the first run and it failed with 107 with skip_post_errors flag
# apply simple_errors logic to rc 0,102,103,106,107
# apply simple_errors logic to rc other than 0,102,103,106,107
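The exit-code handling above can be condensed into a sketch; `classify_rc` is an illustrative simplification of the retry/ignore logic, not the module's actual control flow.

```python
# Symbolic names for the zypper exit codes listed above.
ZYPPER_OK = 0
ZYPPER_REBOOT_NEEDED = 102
ZYPPER_RESTART_NEEDED = 103   # zypper itself was upgraded
ZYPPER_CAP_NOT_FOUND = 104
ZYPPER_RPM_SCRIPT_FAILED = 107

def classify_rc(rc, first_run=True, skip_post_errors=False):
    """Illustrative condensation of the rc handling described above."""
    if rc in (ZYPPER_OK, ZYPPER_REBOOT_NEEDED):
        return 'ok'
    if rc == ZYPPER_RESTART_NEEDED and first_run:
        return 'rerun'  # run the same zypper command again
    if rc == ZYPPER_RPM_SCRIPT_FAILED and first_run and skip_post_errors:
        return 'ok'     # packages installed; %post scripts failed
    return 'failed'
```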
# add global options before zypper command
# TODO: if there is only one package, set before/after to version numbers
# add oldpackage flag when a version is given to allow downgrades
# for state=present: filter out already installed packages
# if a version is given leave the package in to let zypper handle the version
# generate lists of packages to install or remove
# nothing to install/remove and nothing to update
# zypper install also updates packages
# pass packages to zypper
# allow for + or - prefixes in install/remove lists
# also add version specifier if given
# do this in one zypper run to allow for dependency-resolution
# for example "-exim postfix" runs without removing packages depending on mailserver
# Get package state
# remove empty strings from package list
# Refresh repositories
# Perform requested action
# Scaleway Load-balancer management module
# Create Load-balancer
# Copyright (c) 2019, John Westcott <john.westcott.iv@redhat.com>
# Try to make a connection with the DSN
# Get the rows out into an 2d array
# Return additional information from the cursor
# Copyright (c) 2021, Roberto Moreda <moreda@allenta.com>
# NEVRA regex.
# Call dnf versionlock using just one full NEVR package-name-spec at a
# time because multiple package-name-specs and globs are not well supported.
# This is a workaround for two alleged bugs in the dnf versionlock plugin:
# * Multiple package-name-spec arguments don't lock correctly
# * Locking a version of a not installed package disallows locking other
# NOTE: This is suboptimal in terms of performance if there are more than a
# few package-name-spec patterns to lock, because there is a command
# execution per each. This will improve by changing the strategy once the
# mentioned alleged bugs in the dnf versionlock plugin are fixed.
# This is equivalent to the _match function of the versionlock plugin.
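The NEVRA split can be sketched with a regex; the shape `name-epoch:version-release.arch` is assumed here, and the versionlock plugin's actual pattern may differ.

```python
import re

# Assumed NEVRA shape: name-epoch:version-release.arch (illustrative)
NEVRA_RE = re.compile(
    r'^(?P<name>.+)-(?P<epoch>\d+):(?P<version>.+)-'
    r'(?P<release>.+)\.(?P<arch>.+)$'
)

def split_nevra(spec):
    """Return the NEVRA fields as a dict, or None if spec doesn't match."""
    m = NEVRA_RE.match(spec)
    return m.groupdict() if m else None
```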
# indexing a match object with [] is a Python 3.6+ construct
# Extract the NEVRA pattern.
# fallback to dnf
# Check module pre-requisites.
# Check incompatible options.
# Add raw patterns as specs to add.
# Get available packages that match the patterns.
# Get installed packages that match the patterns.
# Obtain the list of package specs that require an entry in the
# locklist. This list is composed by:
# Add raw patterns as specs to delete.
# Get patterns that match some line in the locklist.
# Copyright (c) 2015, Chris Long <alcamie@gmail.com> <chlong@redhat.com>
# Options common to multiple connection types.
# IP address options.
# The ovs-interface type can be both ip_conn_type and have a master
# An interface that has a master but is of slave type vrf can have an IP address
# when 'method' is disabled, 'may_fail' makes no sense, but nmcli
# accepts it and keeps it at 'yes'; force-ignore it to preserve idempotency
# Layer 2 options.
# Connections that can have a master.
# Options specific to a connection type.
# priority makes sense when STP is enabled; otherwise nmcli keeps bridge-priority at 32768 regardless of input.
# Convert settings values based on the situation.
# Convert all bool options to yes/no.
# Convert VLAN/VXLAN IDs to text when detecting changes.
# MTU is 'auto' by default when detecting changes.
# Convert lists to strings for nmcli create/modify commands.
# Use connection name as default for interface name on creation.
# VPN doesn't need an interface, but if one is sent it must be a valid interface.
# Constructing the command.
# self.down_connection()
# Aliases such as 'miimon', 'downdelay' are equivalent to the +bond.options 'option=value' syntax.
# We can't just do `if not value` because then if there's a value
# of 0 specified as an integer it'll be interpreted as empty when
# it actually isn't.
# MAC addresses are case insensitive, nmcli always reports them in uppercase
# ensure current_value is also converted to uppercase in case nmcli changes behaviour
# Depending on the version, nmcli adds double-quotes to gsm.apn
# Need to strip them in order to compare both
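The normalization steps above can be collected into one sketch; `normalize_for_compare` and the setting names checked are assumptions for illustration.

```python
def normalize_for_compare(setting, value):
    """Illustrative normalization before comparing current vs. desired
    values: MACs are case-insensitive (nmcli reports uppercase), and
    some nmcli versions wrap gsm.apn in double-quotes."""
    if setting in ('mac', 'cloned-mac-address'):  # assumed setting names
        return value.upper()
    if setting == 'gsm.apn':
        return value.strip('"')
    return value
```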
# parameter does not exist
# compare values between two lists
# The order of IP addresses matters because the first one
# is the default source address for outbound connections.
# Similarly, the order of DNS nameservers and search
# suffixes is important.
# Parsing argument file
# Bond Specific vars
# general usage
# bridge specific vars
# team specific vars
# team active-backup runner specific options
# team lacp runner specific options
# vlan specific vars
# vxlan specific vars
# ip-tunnel specific vars
# ip-tunnel type gre specific vars
# 802-11-wireless* specific vars
# infiniband specific vars
# team checks
# modify connection (note: this function is check mode aware)
# result['Connection']=('Connection %s of Type %s is not being added' % (nmcli.conn_name, nmcli.type))
# Copyright (c) 2013, Patrik Lundin <patrik@sigterm.se>
# Function used for executing commands.
# Break command line into arguments.
# This makes run_command() use shell=False which we need to not cause shell
# expansion of special characters like '*'.
# We set TERM to 'dumb' to keep pkg_add happy if the machine running
# ansible is using a TERM that the managed machine does not know about,
# e.g.: "No progress meter: failed termcap lookup on xterm-kitty".
# Function used to find out if a package is currently installed.
# If the requested package name is just a stem, like "python", we may
# find multiple packages with that name.
# Function used to make sure a package is present.
# It is possible package_present() has been called from package_latest().
# In that case we do not want to operate on the whole list of names,
# only the leftovers.
# Attempt to install the package
# The behaviour of pkg_add is a bit different depending on if a
# specific version is supplied or not.
# When a specific version is supplied the return code will be 0 when
# a package is found and 1 when it is not. If a version is not
# supplied the tool will exit 0 in both cases.
# It is important to note that "version" relates to the
# packages-specs(7) notion of a version. If using the branch syntax
# (like "python%3.5") even though a branch name may look like a
# version string it is not used as one by pkg_add.
# Depend on the return code.
# Depend on stderr instead.
# There is a corner case where having an empty directory in
# installpath prior to the right location will result in a
# "file:/local/package/directory/ is empty" message on stderr
# while still installing the package, so we need to look
# for a message like "packagename-1.0: ok" just in case.
# It turns out we were able to install the package.
# We really did fail, fake the return code.
# Function used to make sure a package is the latest available version.
# Attempt to upgrade the package.
# Look for output looking something like "nmap-6.01->6.25: ok" to see if
# something changed (or would have changed). Use \W to delimit the match
# from progress meter output.
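The version-change match can be sketched like this; `upgrade_succeeded` is an illustrative helper and the module's real pattern may differ.

```python
import re

def upgrade_succeeded(stem, stdout):
    # Look for '<stem>-old->new: ok' in pkg_add output; \W on both
    # sides keeps the match clear of progress-meter noise.
    pattern = r'\W%s-[\w.]+->[\w.]+: ok\W' % re.escape(stem)
    return re.search(pattern, stdout) is not None
```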
# FIXME: This part is problematic. Based on the issues mentioned (and
# handled) in package_present() it is not safe to blindly trust stderr
# as an indicator that the command failed, and in the case with
# empty installpath directories this will break.
# For now keep this safeguard here, but ignore it if we managed to
# parse out a successful update above. This way we will report a
# successful run when we actually modify something but fail
# Note packages that need to be handled by package_present
# If there were any packages that were not installed we call
# package_present() which will handle those.
# Function used to make sure a package is not installed.
# Attempt to remove the package.
# Function used to remove unused dependencies.
# If we run the commands, we set changed to true to let
# the package list change detection code do the actual work.
# Create a minimal pkg_spec entry for '*' to store return values.
# Attempt to remove unused dependencies.
# Function used to parse the package name based on packages-specs(7).
# The general name structure is "stem-version[-flavors]".
# Names containing "%" are a special variation not part of the
# packages-specs(7) syntax. See pkg_add(1) on OpenBSD 6.0 or later for a
# description.
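The name structure above can be sketched with a rough parser; `parse_name` is illustrative and the module's real parsing is stricter.

```python
import re

def parse_name(name):
    """Rough split of 'stem-version[-flavors]' plus the '%branch'
    extension described above. The version part is the first
    dash-delimited field starting with a digit; anything after
    it is flavors (illustrative sketch)."""
    branch = None
    if '%' in name:
        name, branch = name.split('%', 1)
    m = re.match(r'^(?P<stem>.*?)-(?P<version>\d[^-]*)(?:-(?P<flavors>.*))?$', name)
    parts = m.groupdict() if m else {'stem': name, 'version': None, 'flavors': None}
    parts['branch'] = branch
    return parts
```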
# Initialize empty list of package_latest() leftovers.
# Do some initial matches so we can base the more advanced regex on that.
# Stop if someone is giving us a name that both has a version and is
# version-less at the same time.
# All information for a given name is kept in the pkg_spec keyed by that name.
# If name includes a version.
# If name includes no version but is version-less ("--").
# If name includes no version, and is not version-less, it is all a
# stem, possibly with a branch (%branchname) tacked on at the
# end.
# Verify that the managed host is new enough to support branch syntax.
# Sanity check that there are no trailing dashes in flavor.
# Try to stop strange stuff early so we can be strict later.
# Function used for figuring out the port path.
# try for an exact match first
# next, try for a fuzzier match
# error if we don't find exactly 1 match
# there's exactly 1 match, so figure out the subpackage, if any, then return
# Function used for upgrading all installed packages.
# Attempt to upgrade all packages.
# Try to find any occurrence of a package changing version like:
# "bzip2-1.0.6->1.0.6p0: ok".
# It seems we can not trust the return value, so depend on the presence of
# stderr to know if something failed.
# The data structure used to keep track of package information.
# build sqlports if it's not installed yet
# Perform an upgrade of all installed packages.
# Remove unused dependencies.
# Parse package names and put results in the pkg_spec dictionary.
# Not sure how the branch syntax is supposed to play together
# with build mode. Disable it for now.
# Get state for all package names.
# Perform requested action.
# Handle autoremove if requested for non-asterisk packages
# The combined changed status for all requested packages. If anything
# is changed this is set to True.
# The combined failed status for all requested packages. If anything
# failed this is set to True.
# We combine all error messages in this comma separated string, for example:
# "msg": "Can't find nmapp\n, Can't find nmappp\n"
# Loop over all requested package names and check if anything failed or
# changed.
# If combined_error_message contains anything at least some part of the
# list of requested package names failed.
# Copyright (c) 2013, David Stygstra <david.stygstra@gmail.com>
# Copyright (c) 2024, Tobias Zeuch <tobias.zeuch@sap.com>
# Build client configuration from module arguments
# Copyright (c) 2022, James Livulpi
# Cannot run homectl commands if service is not active
# Get user properties if they exist in json
# User exists; now compare the given password with the current hashed password stored in the user metadata.
# Don't need checking on remove user
# Read the user record from standard input.
# Resizing disksize now, so set resize = true
# This is not valid in user record (json) and requires it to be passed on command.
# Build up dictionary to jsonify for homectl commands.
# Get the current user record if not creating a new user record.
# Remove elements that are not meant to be updated from record.
# These are always part of the record when a user exists.
# Let last change Usec be updated by homed when command runs.
# Now only change fields that are called on leaving what's currently in the record intact.
# Cannot update storage unless we're creating a new user.
# See 'Fields in the binding section' at https://systemd.io/USER_RECORD/
# Cannot update homedir unless we're creating a new user.
# Cannot update imagepath unless we're creating a new user.
# convert human readable to bytes
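A human-readable-to-bytes conversion like the one mentioned above can be sketched as follows; `human_to_bytes` is an assumed name and binary (1024-based) multiples are assumed.

```python
def human_to_bytes(size):
    """Illustrative converter from e.g. '2G' or '512M' to bytes,
    assuming binary (1024-based) multiples."""
    units = {'': 1, 'K': 1 << 10, 'M': 1 << 20, 'G': 1 << 30, 'T': 1 << 40}
    size = size.strip().upper()
    suffix = size[-1] if size and size[-1] in units else ''
    number = size[:-1] if suffix else size
    return int(float(number) * units[suffix])
```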
# First we need to make sure homed service is active
# handle removing user
# Handle adding a user
# Run this to see if changed would be True or False which is useful for check_mode
# User gave wrong password fail with message
# Now actually modify the user if changed was set to true at any point.
# If detached mode is active, mark as success; we wouldn't be able to get here if it didn't exist
# This will include the current state of the realm key if it is already
# We only support one component provider type in this module
# As provider_type is not a module parameter we have to add it to the
# changeset explicitly.
# It is not possible to compare current keys to desired keys, because the
# certificate parameter is a base64-encoded binary blob created on the fly
# when a key is added. Moreover, the Keycloak Admin API does not seem to
# return the value of the private key for comparison.  So, in effect, we
# just have to ignore changes to the keys.  However, as the privateKey
# parameter needs be present in the JSON payload, any changes done to any
# other parameters (e.g.  config.priority) will trigger update of the keys
# as a side-effect.
# Sanitize linefeeds for the privateKey. Without this the JSON payload
# will be invalid.
# get user id if the user exists
# get group id if group exists
# get all members in a group
# get single member in a group by user name
# check if the user is a member of the group
# add user to a group
# remove user from a group
# update user's access level in a group
# check if the user is a member in the group
# add user to the group
# remove the user from the group
# Copyright (c) 2019, INSPQ <philippe.gauthier@inspq.qc.ca>
# Get existing executions on the Keycloak server for this alias
# Get flowalias parent if given
# Check if same providerId or displayName name between existing and new execution
# Remove key that doesn't need to be compared with existing_exec
# Compare the executions to see if they need changes
# Remove exec from list in case 2 exec with same name
# Update the existing execution
# add the execution configuration
# remove unwanted key for the next API call
# If copyFrom is defined, create authentication flow from a copy
# Create an empty authentication flow
# If the authentication still not exist on the server, raise an exception.
# Configure the executions for the flow
# Get executions created
# If force option is true
# Delete the actual authentication flow
# Remove backend
# Get properties
# Change weight
# Change probe
# Creates backend
# Copyright 2014 Benjamin Curtis <benjamin.curtis@gmail.com>
# Copyright (c) 2013, Yeukhon Wong <yeukhon@acm.org>
# no more local modification
# before purge, find out if there are any untracked files
# there are some untrackd files
# Assume it is a rev number, tag, or branch
# initial states
# If there is no hgrc file, then assume repo is absent
# and perform clone. Otherwise, perform pull and update.
# no update needed, don't pull
# but force and purge if desired
# get the current state before doing pulling
# can perform force and purge
# Copyright (c) 2018, Florian Paul Azim Hoberg (@gyptazy) <gyptazy@gyptazy.ch>
# on DNF-based distros, yum is a symlink to dnf, so we try to handle their different entry formats.
# If no package can be found this will be written on stdout with rc 0
# Get an overview of all packages that have a version lock
# Ensure versionlock state of packages
# === when check_mode ===
# Clean up old failed deployment
# Copyright (c) 2022, Dušan Marković (@bratwurzt)
# Get effective role mappings
# Copyright (c) 2015, Nate Coraor <nate@coraor.org>
# This module does not return anything other than the standard
# changed/state/msg/stdout
# ansible ensures the else cannot happen here
# If we have a version spec and no source, use the version spec as source
# Scaleway Security Group Rule management module
# Copyright (C) 2018 Antoine Barbare (antoinebarbare@gmail.com).
# Create Security Group Rule
# Copyright (c) 2018 Dell EMC Inc.
# Search for 'key' entry and extract URI from it
# Extract proper URI
# NOTE: Currently overriding the usage of 'data_modification' due to
# how 'resource_id' is processed.  In the case of CreateBiosConfigJob,
# we interact with BOTH systems and managers, so you currently cannot
# specify a single 'resource_id' to make both '_find_systems_resource'
# and '_find_managers_resource' return success.  Since
# CreateBiosConfigJob doesn't use the matched 'resource_id' for a
# system regardless of what's specified, disabling the 'resource_id'
# inspection for the next call allows a specific manager to be
# specified with 'resource_id'.  If we ever need to expand the input
# to inspect a specific system and manager in parallel, this will need to be revisited.
# execute only if we find a Managers resource
# Copyright (c) 2022, Guillaume MARTINEZ (lunik@tiwabbit.fr)
# Heavily influenced from Fran Fitzpatrick <francis.x.fitzpatrick@gmail.com> ipa_config module
# Copyright (c) 2019, Maciej Delmanowski <drybjed@gmail.com>
# Copyright (c) 2017, Alexander Korinek <noles@a3k.net>
# Perform action
# result = {'raw': out}
# Prepend the key with the namespace if defined
# Ansible module to manage rundeck projects
# Copyright (c) Seth Edwards, 2014
# Hack: send parameters the way fetch_url wants them
# Convert bool to string
# kc drops the variable 'authorizationServicesEnabled' if set to false
# to minimize diff/changes we set it to false if not set by kc
# Filter and map the parameters names that apply to the client
# We can only compare the current client with the proposed updates we have
# Copyright (c) 2017, Ryan Scott Brown <ryansb@redhat.com>
# failure to plan
# changes, but successful
# Ignore diff if resource_changes does not exist in tfplan
# only to handle anything unforeseen
# on the top-level we need to pass just the python string with necessary
# terraform string escape sequences
# we aren't sure if this plan will result in changes, so assume yes
# checks out to decide if changes were made during execution
# Restore the Terraform workspace found when running the module
# Scaleway Serverless function namespace management module
# Create function namespace
# Copyright (c) 2019 Gregory Thiemonge <gregory.thiemonge@gmail.com>
# Scaleway Security Group management module
# Help user when check mode is enabled by defining id key
# Create Security Group
# Copyright (c) 2023, Andrew Hyatt <andy@hyatt.xyz>
# Sort for consistent results
# Copyright (c) 2016, Thierno IB. BARRY @barryib
# Sponsored by Polyconseil http://polyconseil.fr.
# BIOS attributes to update
# boot order
# manager nic
# HostInterface config options
# HostInterface instance ID
# Service Identification
# Sessions config options
# Volume deletion options
# Set SecureBoot options
# Volume creation options
# Power Restore Policy
# execute only if we find a Sessions resource
# Copyright (c) 2012, Boyd Adamson <boyd () boydadamson.com>
# Stdout is normally empty but for some packages can be
# very long and is not often useful
# Returncodes as per pkgadd(1m)
# no install nor uninstall, or failed
# rc will be None when the package was already installed and no action took place
# Only return failed=False when the returncode is known to be good as there may be more
# undocumented failure return codes
# Copyright (c) 2018, Bruce Smith <Bruce.Smith.IT@gmail.com>
# Copyright (c) 2017, Kenneth D. Evensen <kdevensen@gmail.com>
# Method to check if a rule matches the type, control and path.
# Validate the rule type
# Validate the rule control
# TODO: Validate path
# PamdService encapsulates an entire service and contains one or more rules.  It seems the best way
# to do this is as a doubly linked list.
# Get a list of rules we want to change
# There are two cases to consider.
# 1. The new rule doesn't exist before the existing rule
# 2. The new rule exists
# Create a new rule
# First we'll get the previous rule.
# Next we may have to loop backwards if the previous line is a comment.  If it
# is, we'll get the previous "rule's" previous.
# Next we'll see if the previous rule matches what we are trying to insert.
# First set the original previous rule's next to the new_rule
# Second, set the new_rule's previous to the original previous
# Third, set the new rule's next to the current rule
# Fourth, set the current rule's previous to the new_rule
# Handle the case where it is the first rule in the list.
# This is the case where the current rule is not only the first rule
# but the first line as well.  So we set the head to the new rule
# This case would occur if the previous line was a comment.
# 1. The new rule doesn't exist after the existing rule
# First we'll get the next rule.
# Next we may have to loop forwards if the next line is a comment.  If it
# is, we'll get the next "rule's" next.
# First we create a new rule
# If the previous rule doesn't match we'll insert our new rule.
# Second set the original next rule's previous to the new_rule
# Third, set the new_rule's next to the original next rule
# Fourth, set the new rule's previous to the current rule
# Fifth, set the current rule's next to the new_rule
# This is the case where the current_rule is the last in the list
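The four pointer updates described above (previous's next, new rule's previous, new rule's next, current's previous), plus the walk backwards past comment lines and the special head case, can be sketched with a minimal doubly linked list. The `Node` class and `insert_before` helper are hypothetical names used only for illustration, not the module's actual API:

```python
class Node:
    """A line in the service file, linked to its neighbours."""
    def __init__(self, line, is_comment=False):
        self.line = line
        self.is_comment = is_comment
        self.prev = None
        self.next = None

def insert_before(head, current, new_rule):
    # Walk backwards past comment lines to find the previous real rule
    prev = current.prev
    while prev is not None and prev.is_comment:
        prev = prev.prev
    # If the previous rule already matches, there is nothing to insert
    if prev is not None and prev.line == new_rule.line:
        return head
    # First, set the original previous rule's next to the new rule;
    # second, the new rule's previous to the original previous;
    # third, the new rule's next to the current rule;
    # fourth, the current rule's previous to the new rule.
    new_rule.prev = current.prev
    new_rule.next = current
    if current.prev is not None:
        current.prev.next = new_rule
    else:
        # current was the first line, so the new rule becomes the head
        head = new_rule
    current.prev = new_rule
    return head
```

The symmetric "insert after" case swaps the roles of `prev` and `next` and walks forwards past comments instead.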
# create some structures to evaluate the situation
# Handle new simple arguments
# Handle new key value arguments
# Handle existing key value arguments when value is not equal
# Let's check to see if there are any args to remove by finding the intersection
# of the rule's current args and the args_to_remove lists
# There are args to remove, so we create a list of new_args absent the args
# to remove.
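The intersection-based removal described above can be sketched as follows; `remove_module_arguments` is a hypothetical helper name, not the module's actual function:

```python
def remove_module_arguments(current_args, args_to_remove):
    # The args to actually drop are the intersection of the rule's
    # current args and the args_to_remove list
    to_remove = set(current_args) & set(args_to_remove)
    if not to_remove:
        # nothing matched; the rule is unchanged
        return list(current_args), False
    # build a new args list absent the args to remove, preserving order
    new_args = [arg for arg in current_args if arg not in to_remove]
    return new_args, True
```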
# If args is None, return empty list by default.
# But if return_none is True, then return None
# From this point on, module_arguments is guaranteed to be a list, empty or not
# Open the file and read the content or fail
# If unable to read the file, fail out
# Assuming we didn't fail, create the service
# Set the action
# Take action
# If the module is not valid (meaning one of the rules is invalid), we will fail
# If not check mode and something changed, backup the original if necessary then write out the file or fail
# First, create a backup if desired.
# Copyright (c) 2015, Linus Unnebäck <linus@folkdatorn.se>
# Build up the invocation of `make` we are going to use
# For non-Linux OSes, prefer gmake (GNU make) over make
# Fall back to system make
# build command:
# handle any make specific arguments included in params
# add make target
# add makefile parameters
# Check if the target is already up to date
# If we've been asked to do a dry run, we only need
# to report whether or not the target is up to date
# The target is up to date, so we don't have to
# The target isn't up to date, so we need to run it
# We don't report the return code, as if this module failed
# we would be calling fail_json from run_command, so even if
# we had a non-zero return code, we did not fail. However, if
# we report a non-zero return code here, we will be marked as
# failed regardless of what we signal using the failed= kwarg.
# Copyright (c) 2014, Kim Nørgaard
# Written by Kim Nørgaard <jasen@jasen.dk>
# Based on pkgng module written by bleader <bleader@ratonland.org>
# that was based on pkgin module written by Shaun Zinck <shaun.zinck at gmail.com>
# that was based on apt module written by Matthew Williams <matthew@flowroute.com>
# Exception for kernel-headers package on x86_64
# Copyright (c) 2015-2023, Vlad Glagolev <scm@vaygr.net>
# auto-filled at module init
# if any grimoire is not fresh, we invalidate the Codex
# drop 4-line header and empty trailing line
# return only specified grimoires unless requested to skip new
# SILENT is required as a workaround for query() in libgpg
# normalize status
# drop providers spec
# when local status is 'off' and dependency is provider,
# use only provider value
# .escape() is needed mostly for the spells like 'libsigc++'
# we matched the line "spell:dependency:on|off:optional:"
# if we also matched the local status, mark dependency
# as empty and put it back into depends file
# status is not the one we need, so keep this dependency
# in the list for further reverse switching;
# stop and process the next line in both cases
# back up original queue
# see update_codex()
# extract versions from the 'gaze' command
# fail if any of spells cannot be found
# drop 2-line header and empty trailing line
# spell is not installed..
# ..so set up depends reqs for it
# spell is installed..
# ..but does not conform to depends reqs
# grimoire and installed versions do not match..
# ..so check for depends reqs first and set them up
# grimoire and installed versions match..
# ..but the spell does not conform to depends reqs
# 'absent'
# prepare environment: run sorcery commands without asking questions
# normalize 'state' parameter
# twilio module support methods
# Copyright (c) 2015, Michael Scherer <misc@zarb.org>
# inspired by code of github.com/dandiker/
# not supported on EL 6
# Store attrs which were not found in the system
# Search for key entry and extract URI from it
# Check if attribute exists
# Skip and proceed to next attribute if this isn't valid
# Find out if value is already set to what we want. If yes, exclude
# those attributes
# list of mutually exclusive commands for a category
# check for mutually exclusive commands
# check_mutually_exclusive accepts a single list or list of lists that
# are groups of terms that should be mutually exclusive with one another
# and checks that against a dictionary
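A minimal sketch of the check described above, assuming a standalone function rather than the AnsibleModule method of the same name. A group violates the constraint when more than one of its terms has a non-None value in the parameters dict:

```python
def check_mutually_exclusive(terms, params):
    # terms may be a single list of names, or a list of lists where each
    # inner list is a group of mutually exclusive parameter names
    if terms and not isinstance(terms[0], (list, tuple)):
        terms = [terms]
    violations = []
    for group in terms:
        # a parameter counts as "set" when it is present and not None
        present = [name for name in group if params.get(name) is not None]
        if len(present) > 1:
            violations.append(present)
    return violations
```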
# Init pushbullet
# Checks for channel/device
# Search for given device
# Search for given channel
# If in check mode, exit saying that we succeeded
# Send push notification
# the underlying vardict does not allow the name "output"
# Copyright (c) 2017, Thomas Caravia <taca@kadisius.eu>
# fail if service not found
# will fail if service is not loaded
# descend past service path header
# computed prior in control flow
# preset, effect only if option set to true (no reverse preset)
# run preset if needed
# enabled/disabled state
# change enable/disable if needed
# default to desired state, no action
# computed prior in control flow, possibly modified by handle_enabled()
# service not loaded -> not started by manager, no status information
# service is loaded
# get status information
# reset = start/stop according to enabled status
# start if not running, 'service' module constraint
# change state as needed
# check service can be found (or fail) and get path
# get preliminary service facts
# set enabled state, service need not be loaded
# set service running state
# get final service status if possible
# (c) 2017 David Gunter <david.gunter@tivix.com>
# Specify a version of package if version arg passed in
# Yarn global arg is inserted before the command (e.g. `yarn global {some-command}`)
# Module will make directory if not exists.
# We need to filter for errors, since Yarn warnings are included in stderr
# `yarn global list` should be treated as "unsupported with global" even though it exists,
# because it only lists binaries, but `yarn global add` can install libraries too.
# Yarn has a separate command for installing packages by name...
# And one for installing all packages in package.json
# the package.json in the global dir is missing a license field, so warnings are expected on stderr
# Outdated dependencies returned as a list of lists, where
# item at index 0 is the name of the dependency
# When installing globally, users should not be able to define a path for installation.
# Require a path if global is False, though!
# When installing globally, use the defined path for global node_modules
# Copyright (c) 2017, 2018, Oracle and/or its affiliates.
# return a list of current tags for this object
# With group_add the attribute nonposix is passed, whereas with group_mod only posix can be passed.
# Only non-posix groups can be changed to posix
# Copyright (c) 2018, Fran Fitzpatrick <francis.x.fitzpatrick@gmail.com>
# Copyright (c) 2016, Renato Orgito <orgito@gmail.com>
# get the first device
# get the attributes
# derive the landscape handler from the model handler of the device
# @TODO remove the 'required', given the required_if ?
# uefimode may not be supported by the BMC, so use the desired value as default
# not there
# Copyright (c) 2025, Marco Noce <nce.marco@gmail.com>
# Copyright (c) 2021, Jyrki Gadinger <nilsding@nilsding.org>
# filter was included, for discussions see:
# Issue: https://github.com/ansible-collections/community.general/issues/9278
# PR: https://github.com/ansible-collections/community.general/pull/9547
# Add the resource id, if any, to the payload. While the data type is a
# list, it is only possible to have one entry in it based on what Keycloak
# Admin Console does.
# Generate a list of scope ids based on scope names. Fail if the
# defined resource does not include all those scopes.
# Add policy ids, if any, to the payload.
# Add "id" to payload for update operations
# Handle the special case where the user attempts to change an already
# existing permission's type - something that can't be done without a
# full delete -> (re)create cycle.
# Updating an authorization permission is tricky for several reasons.
# Firstly, the current permission is retrieved using a _policy_ endpoint,
# not from a permission endpoint. Also, the data that is returned is in a
# different format than what is expected by the payload. So, comparing the
# current state attribute by attribute to the payload is not possible.  For
# example the data contains a JSON object "config" which may contain the
# authorization type, but which is not required in the payload.  Moreover,
# information about resources, scopes and policies is _not_ present in the
# data. So, there is no way to determine if any of those fields have
# changed. Therefore the best options we have are
# a) Always apply the payload without checking the current state
# b) Refuse to make any changes to any settings (only support create and delete)
# The approach taken here is a).
# Assume that something changed, although we don't know if that is the case.
# Copyright (c) 2016, Andrew Gaffney <andrew@agaffney.org>
# Check if service is enabled
# check if service exists
# openwrt init scripts can return a non-zero exit code on a successful 'enable'
# command if the init script doesn't contain a STOP value, so we ignore the exit
# code and explicitly check if the service is now in the desired state
# check if service is currently running
# this should be busybox ps, so we only want/need the 'w' option
# determine action, if any
# sshpubkeyfp is the list of ssh key fingerprints. IPA doesn't return the keys itself but instead the fingerprints.
# These are used for comparison.
# Remove the ipasshpubkey element as it is not returned from IPA but save its value to be used later on
# If there are public keys, remove the fingerprints and add them back to the dict
# If sshpubkey is defined as None then module.params['sshpubkey'] is [None]. IPA itself returns None (not a list).
# Therefore a small check here to replace list(None) by None. Otherwise get_user_diff() would return sshpubkey
# as different which should be avoided.
# The node would only not be present in check mode, and if it is not present there
# is no way to know what would and would not be changed.
# type: (bool) -> bool
# Some versions of python-jenkins < 1.8.3 has an authorization bug when
# handling redirects returned when posting to resources. If the node is
# created OK then can ignore the error.
# TODO: Remove authorization workaround.
# Used to gate downstream queries when in check mode.
# deleted OK then can ignore the error.
# disabled OK then can ignore the error.
# Would have created node with initial state enabled therefore would not have
# needed to enable therefore not enabled.
# Don't configure until after disabled, in case the change in configuration
# causes the node to pick up a job.
# n.b. Internally disable_node uses toggleOffline gated by a not
# offline condition. This means that disable_node can not be used to
# update an offline message if the node is already offline.
# Toggling the node online to set the message when toggling offline
# again is not an option as during this transient online time jobs
# may be scheduled on the node which is not acceptable.
# Would have created node with initial state enabled therefore would have
# needed to disable therefore disabled.
# Copyright (c) 2023, Guenther Grill <grill.guenther@gmail.com>
# This just means nothing has been set at the given scope
# Keep internal params away from user interactions
# At least one iteration is required, even if timeout is 0.
# FUTURE: better to let _execute_module calculate this internally?
# Set short names for values we'll have to compare or reuse
# inject the async directory based on the shell option into the
# module args
# Bind the loop max duration to consistent values on both
# remote and local sides (if not the same, make the loop
# longer on the controller); and set a backup file path.
# Then the 3-steps "go ahead or rollback":
# 1. Catch early errors of the module (in asynchronous task) if any.
# 2. Reset connection to ensure a persistent one will not be reused.
# 3. Confirm the restored state by removing the backup on the remote.
# Catch early errors due to missing mandatory option, bad
# option type/value, missing required system command, etc.
# The module is aware to not process the main iptables-restore
# command before finding (and deleting) the 'starter' cookie on
# the host, so the previous query will not reach ssh timeout.
# As the main command is not yet executed on the target, here
# 'finished' means 'failed before the main command is executed'.
# - AnsibleConnectionFailure covers rejected requests (i.e.
# - ansible_timeout is able to cover dropped requests (due
# Cleanup async related stuff and internal params
# Copyright (c) 2020, Amin Vakil <info@aminvakil.com>
# Copyright (c) 2016-2018, Matt Davis <mdavis@ansible.com>
# Copyright (c) 2018, Sam Doran <sdoran@redhat.com>
# FIXME: switch all this to user arg spec validation methods when they are available
# Error if we didn't get a list
# find the path to the shutdown command
# if we could not find the shutdown command
# tell the user we will try with systemd
# find the path to the systemctl command
# if we couldn't find systemctl
# we give up here
# done, since we cannot use args with systemd shutdown
# systemd case taken care of, here we add args to the command
# Convert seconds to minutes. If less than 60, set it to 0.
# If running with local connection, fail so we don't shut down ourselves
# Initiate shutdown
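The seconds-to-minutes conversion mentioned above is a one-liner; `delay_minutes` is a hypothetical helper name. `shutdown(8)` takes its delay in minutes, so anything under a minute becomes an immediate shutdown:

```python
def delay_minutes(seconds):
    # integer division: 0..59 seconds -> 0 minutes (shut down now),
    # otherwise the number of whole minutes
    return max(0, seconds // 60)
```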
# Copyright (c) 2014, Toshio Kuratomi <tkuratomi@ansible.com>
# Input patterns for is_input_dangerous function:
# 1. '"' in string and '--' in string or
# "'" in string and '--' in string
# 2. union \ intersect \ except + select
# 3. ';' and any KEY_WORDS
# maps a type of identifier to the maximum number of dot levels that are
# allowed to specify that identifier.  For example, a database column can be
# specified by up to 4 levels: database.schema.table.column
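The dot-level mapping described above can be sketched as a dict plus a check; the `MAX_DOT_LEVELS` table below is illustrative, with only the column example grounded in the comment:

```python
# Hypothetical mapping of identifier type to the maximum number of
# dot-separated levels allowed to specify it
MAX_DOT_LEVELS = {
    "database": 1,
    "schema": 2,
    "table": 3,
    "column": 4,  # database.schema.table.column
}

def check_dot_levels(identifier, id_type):
    # count levels as dots + 1; reject identifiers that are too qualified
    levels = identifier.count(".") + 1
    return levels <= MAX_DOT_LEVELS[id_type]
```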
# most likely a user@host:path or user@host/path type URL
# For this type of URL, colon specifies the path, not the port
# this should be something we can parse with urlparse
# parts[1] will be empty on python2.4 on ssh:// or git:// urls, so
# ensure we actually have a parts[1] before continuing.
# this is a variant of code found in connection_plugins/paramiko.py and we should modify
# the paramiko code to import and use this.
# this is a hashed known host entry
# invalid hashed host key, skip it
# standard host file entry
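A hashed known_hosts host field has the form `|1|<base64 salt>|<base64 HMAC-SHA1 of the hostname keyed by the salt>`. A minimal matcher for such entries (the `hashed_entry_matches` name is hypothetical) might look like:

```python
import base64
import hashlib
import hmac

def hashed_entry_matches(hostname, host_field):
    # host_field looks like "|1|c2FsdA==|aGFzaA=="
    try:
        _, magic, salt_b64, hash_b64 = host_field.split("|")
        if magic != "1":
            return False  # unknown hashing scheme
        salt = base64.b64decode(salt_b64)
        expected = base64.b64decode(hash_b64)
    except ValueError:
        return False  # invalid hashed host key, skip it
    digest = hmac.new(salt, hostname.encode("utf-8"), hashlib.sha1).digest()
    return hmac.compare_digest(digest, expected)
```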
# ssh-keyscan gives a 0 exit code and prints nothing on timeout
# Establish connection
# Try to find the X_ORDERed version of the DN
# Switch off chasing of referrals (https://github.com/ansible-collections/community.general/issues/1067)
# match X_ORDERed DNs
# filter by type if record is not set
# Removing one or more values from a record, we update the record with the remaining values
# Copyright (c) 2020, Andrew Klychkov (@Andersson007) <aaklychkov@mail.ru>
# Regarding RFC4013,
# This profile specifies:
# If not the "commonly mapped to nothing"
# map non-ASCII space characters
# (that can be mapped) to Unicode space
# Regarding RFC3454,
# Table D.1 lists the characters that belong
# to Unicode bidirectional categories "R" and "AL".
# If a string contains any RandALCat character, a RandALCat
# character MUST be the first character of the string, and a
# RandALCat character MUST be the last character of the string.
# Implements:
# RFC4013, 2.3. Prohibited Output.
# This profile specifies the following characters as prohibited input:
# RFC4013, 2.4. Bidirectional Characters.
# RFC4013, 2.5. Unassigned Code Points.
# Determine how to handle bidirectional characters (RFC3454):
# If a string contains any RandALCat characters,
# The string MUST NOT contain any LCat character:
# Forbid RandALCat characters in LCat string:
# RFC4013 2.3. Prohibited Output:
# RFC4013, 2.4. Bidirectional Characters:
# RFC4013, 2.5. Unassigned Code Points:
# RFC4013: "The algorithm assumes all strings are
# comprised of characters from the Unicode [Unicode] character set."
# Validate the string is a Unicode string
# (text_type is the string type if PY3 and unicode otherwise):
# RFC4013: 2.1. Mapping.
# RFC4013: 2.2. Normalization.
# "This profile specifies using Unicode normalization form KC."
# RFC4013: 2.3. Prohibited Output.
# RFC4013: 2.4. Bidirectional Characters.
# RFC4013: 2.5. Unassigned Code Points.
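The RFC4013 steps above (mapping, NFKC normalization, and the RFC3454 bidirectional rules) can be sketched with the stdlib `stringprep` tables; this is a partial sketch (the 2.3 prohibited-output and 2.5 unassigned-code-point checks are omitted), and `saslprep_sketch` is a hypothetical name:

```python
import stringprep
import unicodedata

def saslprep_sketch(s):
    # 2.1 Mapping: non-ASCII spaces -> SPACE; drop "commonly mapped to nothing"
    mapped = []
    for ch in s:
        if stringprep.in_table_c12(ch):       # non-ASCII space characters
            mapped.append(" ")
        elif not stringprep.in_table_b1(ch):  # commonly mapped to nothing
            mapped.append(ch)
    s = "".join(mapped)
    # 2.2 Normalization: Unicode normalization form KC
    s = unicodedata.normalize("NFKC", s)
    # 2.4 Bidirectional characters (RFC3454 section 6):
    # if any RandALCat character is present, no LCat characters are allowed,
    # and a RandALCat character must be both first and last
    if any(stringprep.in_table_d1(ch) for ch in s):
        if any(stringprep.in_table_d2(ch) for ch in s):
            raise ValueError("RandALCat and LCat characters must not be mixed")
        if not (stringprep.in_table_d1(s[0]) and stringprep.in_table_d1(s[-1])):
            raise ValueError("RandALCat character must be first and last")
    return s
```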
# Copyright (2016-2017) Hewlett Packard Enterprise Development LP
# (TODO: remove next line!)
# Workaround to avoid erroneous comparison between int and float
# Removes zero from integer floats
# Preload params for get_all - used by facts
# Preload options as dict - used by facts
# The first resource is True / Not Null and the second resource is False / Null
# Checks all keys in first dict against the second dict
# A nonexistent key is equivalent to one that exists with value None
# If both values are null, empty or False it will be considered equal.
# recursive call
# change comparison function to compare_list
# Checks all keys in the second dict, looking for missing elements
# The second list is null / empty  / False
# change comparison function to compare dictionaries
# no differences found
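The comparison rules above (missing key equals None; null, empty, and False values are all considered equal; nested dicts compared recursively) can be sketched as follows, with `is_blank` and `resource_equal` as hypothetical helper names:

```python
def is_blank(value):
    # None, False, "", [] and {} are all treated as equivalent "empty" values
    return value is None or value is False or value == "" or value == [] or value == {}

def resource_equal(current, desired):
    # check all keys of the desired dict against the current dict;
    # a nonexistent key is equivalent to one present with value None
    for key, want in desired.items():
        have = current.get(key)
        if isinstance(want, dict) and isinstance(have, dict):
            if not resource_equal(have, want):  # recursive call
                return False
        elif is_blank(want) and is_blank(have):
            continue  # both null/empty/False: considered equal
        elif have != want:
            return False
    return True  # no differences found
```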
# Almost all errors should be caught above, but just in case
# Get the resource so that we have the Etag
# Issue the PUT to do the reboot (unless we are in check mode)
# See if the LED is already set as requested.
# Set the LED (unless we are in check mode)
# See if the PowerState is already set as requested.
# Set the Power State (unless we are in check mode)
# Generate a random boundary
# Post the firmware (unless we are in check mode)
# We have to do a GET to obtain the Etag.  It's required on the PUT.
# Issue the PUT (unless we are in check mode)
# Job not found -- assume 0%
# We have to do a GET to obtain the Etag.  It's required on the DELETE.
# Do the DELETE (unless we are in check mode)
# Copyright (c) 2023 Felix Fontein <felix@fontein.de>
# Use together with the community.general.redfish docs fragment
# Check if the property is supported by the service
# Perform additional checks based on the type of property
# If the property is a dictionary, check the nested properties
# Unsupported property or other error condition; no change
# Subordinate dictionary requires changes
# For other properties, just compare the values
# Note: This is also a fallthrough for cases where the request
# payload and current settings do not match in their data type.
# There are cases where this can be expected, such as when a
# property is always 'null' in responses, so we want to attempt
# the PATCH request.
# Note: This is also a fallthrough for properties that are
# arrays of objects.  Some services erroneously omit properties
# within arrays of objects when not configured, while still
# expecting the client to provide them anyway.
# No changes required; all properties set
# The following functions are to send GET/POST/PATCH/DELETE requests
# Service root is an unauthenticated resource; remove credentials
# in case the caller will be using sessions later.
# No response data; this is okay in certain cases
# When performing a POST to the session collection, credentials are
# provided in the request body.  Do not provide the basic auth
# header since this can cause conflicts with some services
# Multipart requests require special handling to encode the request body
# No response data; this is okay in many cases
# Get etag from etag header or @odata.etag property
# Check the payload with the current settings to see if changes
# are needed or if there are unsupported properties
# Adds to the multipart body based on the provided data type
# At this time there is only support for strings, dictionaries, and bytes (default)
# Generate a random boundary marker; may need to consider probing the
# payload for potential conflicts in the future
# Fill in the form details
# Insert the headers (Content-Disposition and Content-Type)
# Insert the payload; read from the file if not given by the caller
# Finalize the entire request
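The multipart assembly described above (random boundary, per-part Content-Disposition and Content-Type headers, payload, final closing boundary) can be sketched as below. The `build_multipart` name and the `(name, content_type, payload)` part shape are assumptions for illustration, not the module's actual helper:

```python
import io
import uuid

def build_multipart(parts):
    # parts: iterable of (form field name, content type, payload bytes)
    boundary = uuid.uuid4().hex  # random boundary marker
    body = io.BytesIO()
    for name, ctype, payload in parts:
        # each part opens with the boundary and its headers
        body.write(b"--" + boundary.encode() + b"\r\n")
        body.write(('Content-Disposition: form-data; name="%s"\r\n' % name).encode())
        body.write(("Content-Type: %s\r\n\r\n" % ctype).encode())
        body.write(payload)
        body.write(b"\r\n")
    # finalize the entire request with the closing boundary
    body.write(b"--" + boundary.encode() + b"--\r\n")
    return body.getvalue(), "multipart/form-data; boundary=" + boundary
```

A real implementation would also probe the payload for boundary collisions, as the comments note.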
# if the ExtendedInfo contains a user friendly message send it
# otherwise try to send the entire contents of ExtendedInfo
# If we got the vendor info once, don't get it again
# Find the vendor info from the service root
# Extract the vendor string from the Vendor property
# Determine the vendor from the OEM object if needed
# HPE uses Pascal-casing for their OEM object
# Older systems reported 'Hp' (pre-split)
# Could not determine; use an empty string
# Get the service root
# Check for the session service and session collection.  Well-known
# defaults are provided in the constructor, but services that predate
# Redfish 1.6.0 might contain different values.
# If one isn't found, return an error
# fallback to default values
# Override the timeout since the service root is expected to be readily
# available.
# Failed, either due to a timeout or HTTP error; not available
# Successfully accessed the service root; available
# Find LogService
# Find all entries in LogServices
# For each entry in LogServices, get log name and all log entries
# Get all log entries for each type of log found
# list_of_logs[logs{list_of_log_entries[entry{}]}]
# Check to make sure option is available, otherwise error is ugly
# Get these entries, but does not fail if not found
# Find Storage service
# Get a list of all storage controllers and build respective URIs
# Loop through Members and their StorageControllers
# and gather properties from each StorageController
# Get a list of all volumes and build respective URIs
# Get related Drives Id
# If no resource is specified; default to the Chassis resource
# Perform a PATCH on the IndicatorLED property based on the requested command
# command should be PowerOn, PowerForceOff, etc.
# Commands (except PowerCycle) will be stripped of the 'Power' prefix
# map Reboot to a ResetType that does a reboot
# read the resource and get the current power state
# if power is already in target state, nothing to do
# get the reset Action and target URI
# get AllowableValues
# map ResetType to an allowable value if needed
# POST to Action URI
# If requested to wait for the service to be available again, block
# until it is ready
# Start with a large enough sleep.  Some services will process new
# requests while in the middle of shutting down, thus breaking out
# early.
# Periodically check for the service's availability.
# It is available; we are done
# Exhausted the wait timer; error
# Password change required; go directly to the specified URI
# Walk the accounts collection to find the desired user
# first slot may be reserved, so move to end of list
# listing all users has always been slower than other operations, why?
# user_list[] are URIs
# for each user, get details
# Filter out empty account slots
# An empty account slot can be detected if the username is an empty
# string and if the account is disabled
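The empty-slot filter described above is a simple predicate; `is_empty_slot` and `filter_accounts` are hypothetical names, and the `UserName`/`Enabled` keys follow the standard Redfish ManagerAccount properties:

```python
def is_empty_slot(account):
    # an empty slot has an empty username and the account disabled
    return account.get("UserName", "") == "" and account.get("Enabled") is False

def filter_accounts(accounts):
    # drop the empty slots, keeping only real user accounts
    return [a for a in accounts if not is_empty_slot(a)]
```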
# If Id slot specified, use it
# Otherwise find first empty slot
# account_username already exists, nothing to do
# if Allow header present and POST not listed, add via PATCH
# if POST returned a 405, try to add via PATCH
# account does not exist, nothing to do
# some error encountered
# if Allow header present and DELETE not listed, del via PATCH
# if DELETE returned a 405, try to delete via PATCH
# Find the AccountService resource
# Perform a PATCH on the AccountService resource with the requested properties
# Find the extended messages in the response payload
# Go through each message and look for Base.1.X.PasswordChangeRequired
# While this is invalid, treat the lack of a MessageId as "no message"
# Password change required; get the URI of the user account
# session_list[] are URIs
# for each session, get details
# if no active sessions, return as success
# loop to delete every active session
# Get details for each software or firmware member
# Get these standard properties if present
# No content; successful, but nothing to return
# Use the Redfish "Completed" enum from TaskState for the operation status
# Parse the response body for details
# Determine the next handle, if any
# Task generated; get the task monitor URI
# Pull out the status and messages based on the body format
# Task and Job have similar enough structures to treat the same
# Error response body, which is a bit of a misnomer since it is used in successful action responses
# No response body (or malformed); build based on status code
# Clear out the handle if the operation is complete
# Scan the messages to see if next steps are needed
# Operation rerouted to a job; update the status and handle
# No need to process other messages in this case
# A reset to some device is needed to continue the update
# Ensure the image file is provided
# Check that multipart HTTP push updates are supported
# Assemble the JSON payload portion of the request
# Get the task or job tracking the update
# Inspect the response to build the update status
# Get the current update status
# Perform any requested updates
# Override the 'changed' indicator since other resets may have
# been successful
# Will need to consider finetuning this message if the scope of the
# requested operations grow over time
# Get these entries from BootOption, if present
# Retrieve BootOptions if present
# Get BootOptions resource
# Retrieve Members array
# Build dict of BootOptions keyed by BootOptionReference
# fetch the props to display for this boot device
# Retrieve System resource
# Confirm needed Boot properties are present
# Build boot device list
# Find the Bios resource from the requested ComputerSystem resource
# Find the URI of the ResetBios action
# Perform the ResetBios action
# Extract the requested boot override options
# Get the current boot override options from the Boot property
# Check if the requested target is supported by the system
# Build the request payload based on the desired boot override options
# If needed, also specify UEFI mode
# Apply the requested boot override request
# WORKAROUND
# Older Dell systems do not allow BootSourceOverrideEnabled to be
# specified with UefiTarget as the target device
# Get the current BIOS settings
# Make a copy of the attributes dict
# List to hold attributes not found
# Check the attributes
# Remove and proceed to next attribute if this isn't valid
# If already set to requested value, remove it from PATCH payload
# Return success w/ changed=False if no attrs need to be changed
# Get the SettingsObject URI to apply the attributes
# Construct payload and issue PATCH command
# Dell systems require manually setting the apply time to "OnReset"
# to spawn a proprietary job to apply the BIOS settings
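The Dell-specific apply-time handling described above can be sketched as a small payload builder. This is a minimal illustration, not the module's actual code: `build_bios_patch` is a hypothetical helper, while `@Redfish.SettingsApplyTime` is the standard Redfish annotation for deferred settings application.

```python
def build_bios_patch(attrs, vendor=""):
    """Assemble a Redfish PATCH body for BIOS attributes.

    Dell systems additionally need @Redfish.SettingsApplyTime set to
    "OnReset" so a proprietary job is spawned to apply the settings.
    """
    payload = {"Attributes": attrs}
    if vendor == "Dell":
        # Manually set the apply time so the settings job is created
        payload["@Redfish.SettingsApplyTime"] = {"ApplyTime": "OnReset"}
    return payload
```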
# Verify the requested boot options are valid
# Apply the boot order
# get the #ComputerSystem.SetDefaultBootOrder Action and target URI
# Go through list
# match: found an entry for "Thermal" information = fans
# Checking if fans are present
# Get a list of all CPUs and build respective URIs
# Get a list of all DIMMs and build respective URIs
# Get a list of all network controllers and build respective URIs
# Get a list of all virtual media and build respective URIs
# Based on current Lenovo server capabilities, filter out slots RDOC1/2 and Remote1/2/3/4, which do not support Insert/Eject.
# Add Inserted to the payload if needed
# PATCH the resource
# Older HPE systems with iLO 4 and Supermicro do not support
# specifying Inserted or WriteProtected
# locate and read the VirtualMedia resources
# Inserted is not writable
# PATCH resource
# specifying Inserted
# Given resource_type, use the proper URI
# Get a list of all Chassis and build URIs, then get all PowerSupplies
# from each Power entry in the Chassis
# Find NetworkProtocol
# Check input data validity
# Find the ManagerNetworkProtocol resource
# Modify the ManagerNetworkProtocol resource
# collections case
# non-collections case
# Get health status of top level resource
# Get health status of subsystems
# ex: Links.PCIeDevices
# ex: Thermal.Fans
# ex: Memory
# Get the manager ethernet interface uri
# Convert input to payload and check validity
# Note: Some properties in the EthernetInterface resource are arrays of
# objects.  The call into this module expects a flattened view, meaning
# the user specifies exactly one object for an array property.  For
# example, if a user provides IPv4StaticAddresses in the request to this
# module, it will turn that into an array of one member.  This pattern
# should be avoided for future commands in this module, but needs to be
# preserved here for backwards compatibility.
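The flattening convention noted above can be sketched as follows; this is an assumed illustration (`to_patch_payload` and the `ARRAY_PROPERTIES` set are hypothetical, though the property names come from the Redfish EthernetInterface schema):

```python
# Array-valued properties for which the user supplies a single flattened object
ARRAY_PROPERTIES = {"IPv4StaticAddresses", "IPv6StaticAddresses"}

def to_patch_payload(user_input):
    """Convert the flattened user view into a Redfish PATCH payload,
    wrapping single objects for array properties into one-member lists."""
    payload = {}
    for key, value in user_input.items():
        if key in ARRAY_PROPERTIES and isinstance(value, dict):
            payload[key] = [value]  # one object -> array of one member
        else:
            payload[key] = value
    return payload
```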
# Modify the EthernetInterface resource
# A helper function to get the EthernetInterface URI
# Get EthernetInterface collection
# Find target EthernetInterface
# Find the EthernetInterface matching root_uri when nic_addr is not specified
# split off the port if present
# Find the HostInterfaceCollection resource
# Capture list of URIs that match a specified HostInterface resource Id
# Modify the HostInterface resource
# dictionary for capturing individual HostInterface properties
# Check for the presence of a ManagerEthernetInterface
# object, a link to a _single_ EthernetInterface that the
# BMC uses to communicate with the host.
# Check for the presence of a HostEthernetInterfaces
# object, a link to a _collection_ of EthernetInterfaces
# that the host uses to communicate with the BMC.
# Check if this is the first
# HostEthernetInterfaces item and create empty
# list if so.
# This method verifies BIOS attributes against the provided input
# Verify bios_attributes with BIOS settings available in the server
# This function enable Secure Boot on an OOB controller
# Find the Storage resource from the requested ComputerSystem resource
# Get Storage Collection
# Collect Storage Subsystems
# Matching Storage Subsystem ID with user input
# Get Volume Collection
# Collect Volumes
# Delete each volume
# Navigate to the volume uri of the correct storage subsystem
# Deleting any volumes of RAIDType None present on the Storage Subsystem
# Construct payload and issue POST command to create volume
# Get /redfish/v1
# Get Registries URI
# Get BIOS attribute registry URI
# Get the location URI
# Get the location URI response
# return {"msg": self.creds, "ret": False}
# HPE systems with iLO 4 present the BIOS Attribute Registries location URI as a dictionary with key 'extref'
# Hence adding a condition to fetch the URI
# HPE systems with iLO 4 or iLO 5 compress (gzip) responses for some URIs
# Hence adding the encoding to the header
# UUID has precedence over name.
# Find object by UUID. If no object is found using given UUID,
# an exception will be generated.
# Find object by name (name_label).
# If obj_ref_list is empty.
# If obj_ref_list contains multiple object references.
# The obj_ref_list contains only one object reference.
# We silently return empty vm_params if bad vm_ref was supplied.
# We need some params like affinity, VBDs, VIFs, VDIs etc. dereferenced.
# Affinity.
# VBDs.
# List of VBDs is usually sorted by userdevice but we sort just
# in case. We need this list sorted by userdevice so that we can
# make positional pairing with module.params['disks'].
# VDIs.
# VIFs.
# List of VIFs is usually sorted by device but we sort just
# in case. We need this list sorted by device so that we can
# make positional pairing with module.params['networks'].
# Networks.
# Guest metrics.
# Detect customization agent.
# We silently return empty vm_facts if no vm_params are available.
# Fail if we don't have a valid VM reference.
# Get current state of the VM.
# VM can be in either halted, suspended, paused or running state.
# For VM to be in running state, start has to be called on halted,
# resume on suspended and unpause on paused VM.
# hard_shutdown will halt VM regardless of current state.
# hard_reboot will restart VM only if VM is in paused or running state.
# running state is required for suspend.
# running state is required for guest shutdown.
# running state is required for guest reboot.
# Fail if we don't have a valid task reference.
# If we have to wait indefinitely, make time_left larger than 0 so we can
# enter while loop.
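The indefinite-wait trick described above (keep `time_left` above zero unless a finite timeout was requested) can be sketched as a small polling loop. `wait_for_task` and `poll_fn` are hypothetical names, not the module's API:

```python
import time

def wait_for_task(poll_fn, timeout=0, interval=1):
    """Poll poll_fn() until it returns a non-None result or the timeout expires.

    A timeout of 0 means wait indefinitely: time_left is held above zero
    and only decremented when a finite timeout was requested.
    """
    indefinite = (timeout == 0)
    time_left = 1 if indefinite else timeout
    while time_left > 0:
        result = poll_fn()
        if result is not None:
            return result              # task is done
        if not indefinite:
            time_left -= interval      # only count down for finite timeouts
        time.sleep(interval)
    return None                        # we timed out
```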
# Task is still running.
# We decrease time_left only if we don't wait indefinitely.
# Task is done.
# Task failed.
# We timed out.
# We translate VM power state string so that error message can be
# consistent with module VM power states.
# If scheme is not specified we default to http:// because https://
# is problematic in most setups.
# ignore_ssl is supported in XenAPI library from XenServer 7.2
# SDK onward but there is no way to tell which version we
# are using. TypeError will be raised if ignore_ssl is not
# supported. Additionally, ignore_ssl requires Python 2.7.9
# or newer.
# Try without ignore_ssl.
# Copyright (c) 2023, Alexei Znamensky <russoz@gmail.com>
# snap_alias only
# (c) 2022, Alexei Znamensky <russoz@gmail.com>
# ensure it exists
# sdkmanager --help 2>&1 | grep -A 2 -- --channel
# Without this, sdkmanager binary crashes
# Example: '  platform-tools     | 27.0.0  | Android SDK Platform-Tools 27 | platform-tools  '
# Example: '   platform-tools | 27.0.0    | 35.0.2'
# Add Unix dialect from Python 3
# Create a dictionary from only set options
# the config parser from here: https://github.com/emre/storm/blob/master/storm/parsers/ssh_config_parser.py
# Copyright (C) <2013> <Emre Yilmaz>
# Ensure ProxyCommand gets properly split
# find first whitespace, and split there
# minor bug in paramiko.SSHConfig that duplicates
# "Host *" entries.
# remove parameter ignore_value_none in community.general 12.0.0
# DEPRECATION: remove in community.general 12.0.0
# not decided whether to keep it or not, but if it is deprecated, that will happen in a more distant future.
# DEPRECATION: parameter ignore_value_none at the context level is deprecated and will be removed in community.general 12.0.0
# DEPRECATION: remove parameter ctx_ignore_none in 12.0.0
# Common functionality to be used by various module components
# (TODO: remove AnsibleModule from next line!)
# MCP 2.x version pattern for location (datacenter) names.
# Note that this is not a totally reliable way of determining MCP version.
# Unfortunately, libcloud's NodeLocation currently makes no provision for extended properties.
# At some point we may therefore want to either enhance libcloud or enable overriding mcp_version
# by specifying it in the module parameters.
# Credentials are common to all Dimension Data modules.
# Region and location are common to all Dimension Data modules.
# Determine the MCP API version (this depends on the target datacenter).
# Optional "wait-for-completion" arguments
# First, try the module configuration
# Fall back to environment
# Finally, try dotfile (~/.dimensiondata)
# One or more credentials not found. Function can't recover from this
# so it has to raise an error instead of failing silently.
# Both found, return data
# Get endpoints
# Only Dimension Data endpoints (no prefix)
# handle import errors
# clean the returned rest api profile object to look like:
# {profile_name: STR, profile_description: STR, policies: ARR<POLICIES>}
# clean the returned rest api policy object to look like:
# {name: STR, description: STR, active: BOOL}
# make a list of assigned full profile names strings
# e.g. ['openscap profile', ...]
# add/update the policy profile href field
# {name: STR, ...} => {name: STR, href: STR}
# get a list of profiles needed to be changed
# try to assign or unassign profiles to resource
# check all entities in result to be successful
# successfully changed all needed profiles
# clean the returned rest api tag object to look like:
# {full_name: STR, name: STR, display_name: STR, category: STR}
# make a list of assigned full tag names strings
# e.g. ['/managed/environment/prod', ...]
# get a list of tags needed to be changed
# try to assign or unassign tags to resource
# successfully changed all needed tags
# (c) 2023, Alexei Znamensky <russoz@gmail.com>
# httplib/http.client connection using unix domain socket
# Check that the received cert is signed by the provided server_cert_file
# If we have a region specified, connect to its endpoint.
# Otherwise, no region so we fallback to the old connection method
# Copyright (c) 2001-2022 Python Software Foundation.  All rights reserved.
# (See LICENSES/PSF-2.0.txt in this collection)
# Changed self.sessions_uri to a hardcoded string.
# Get server details
# This method checks if OOB controller reboot is completed
# Check server poststate
# When server is powered OFF
# When server is not rebooting
# Quick check for simple package names
# If no spec provided, any version is valid
# Parse version string
# If the `timeout` CLI command feature is removed,
# Then we could add this as a fixed param to `puppet_runner`
# Keeping backward compatibility, allow for running with the `timeout` CLI command.
# If this can be replaced with ansible `timeout` parameter in playbook,
# then this function could be removed.
# ID [n] ...
# any path within the filesystem can be used to query metadata
# constant for module execution
# refreshable
# TODO strategy for retaining information on deleted subvolumes?
# maybe error?
# I used to have a try: finally: here but there seems to be a bug in python
# which swallows the KeyboardInterrupt
# The abandon now doesn't make too much sense
# config, ldap objects from common module
# Specify a complete Link header, for validation purposes
# Specify a single relation, for iteration and string extraction purposes
# Prevent requesting the resource status too soon
# Make sure the destination directory exists
# Check if we need to download new updates file
# Get timestamp when the file was changed last time
# environmental options
# default options of django-admin
# keys can be used in _django_args
# deprecate, remove in 13.0.0
# Python 3+
# 4.0.0 removed 'as_list'
# 3.7.0 added 'get_all'
# We can create an oauth_token using a username and password
# https://docs.gitlab.com/ee/api/oauth2.html#authorization-code-flow
# pop properties we don't know
# transform old vars to new variables structure
# self.status is already the message (backwards compat)
# type: list
# Remove values that are None
# retries option is added in version 4.1.0
# path argument is added in version 5.1.0
# (c) 2020, Alexei Znamensky <russoz@gmail.com>
# A helper function to mitigate https://github.com/OpenNebula/one/issues/6064.
# It allows for easily handling lists like "NIC" or "DISK" in the JSON-like template representation.
# There are either lists of dictionaries (length > 1) or just dictionaries.
# It renders JSON-like template representation into OpenNebula's template syntax (string).
# context required for not validating SSL, old python versions won't validate anyway.
# Check if the module can run
# TODO: check formally available data types in templates
# TODO: some arrays might be converted to space separated
# Copyright (c), Luke Murphy @decentral1se
# Copyright (c) 2016 Allen Sanabria, <asanabria@linuxdynasty.org>
# DEPRECATION: set default value for ignore_none to True in community.general 12.0.0
# DEPRECATION: remove parameter ctx_ignore_none in community.general 12.0.0
# DEPRECATION: replace ctx_ignore_none with True in community.general 12.0.0
# instantiate a response object
# if we've already started preloading the payload then copy it
# and use that, otherwise we need to instantiate it.
# set some sane defaults
# Refresh the resource info
# Refresh the operation info
# ATTENTION!
# The function `make_process_list()` is deprecated and will be removed in community.general 13.0.0
# Copyright (c) 2016 Thomas Krahn (@Nosmoht)
# If no host was given, we try to guess it from IPA.
# The ipa-ca entry is a standard entry that IPA will have set for
# the CA.
# TODO: We should probably handle this a little better.
# Status codes returned by WDC FW Update Status
# Status messages returned by WDC FW Update Status
# Dict keys for resource bodies
# Standard keys
# Keys for specific operations
# Update the root URI if we cannot perform a Redfish GET to the first one
# Simple update status URI is not provided via GET /redfish/v1/UpdateService
# So we have to hard code it.
# FWActivate URI
# Make sure the service supports FWActivate
# If not a tarfile, then if the file has "MMG2" or "DPG2" at the 2048th byte
# the bundle is for MM or DP G2
# It is anticipated that the DP firmware bundle will have the value "DPG2"
# for cookie1 in the header
# G2 bundle file name: Ultrastar-Data102_3000_SEP_1010-032_2.1.12
# MM G2 is always single tenant
# Bundle is for MM or DP G1
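The cookie check described above can be sketched as a small reader; `bundle_generation` is a hypothetical helper, and the offset and "MMG2"/"DPG2" markers are taken from the comments:

```python
import io

def bundle_generation(fileobj):
    """Read the 4-byte cookie1 at offset 2048 of a firmware bundle.

    "MMG2"/"DPG2" mark a G2 bundle for MM or DP; anything else is
    treated as a G1 bundle.
    """
    fileobj.seek(2048)
    cookie1 = fileobj.read(4)
    if cookie1 in (b"MMG2", b"DPG2"):
        return "G2", cookie1[:2].decode()  # "MM" or "DP"
    return "G1", None
```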
# Convert credentials to standard HTTP format
# Make sure bundle URI is HTTP(s)
# Make sure IOM is ready for update
# Check the FW version in the bundle file, and compare it to what is already on the IOMs
# Bundle version number
# Verify that the bundle is correctly multi-tenant or not
# Verify that the bundle is compliant with the target enclosure
# Version number installed on IOMs
# If version is None, we will proceed with the update, because we cannot tell
# for sure that we have a full version match.
# For multi-tenant, only one of the IOMs will be affected by the firmware update,
# so see if that IOM already has the same firmware version as the bundle.
# For single-tenant, see if both IOMs already have the same firmware version as the bundle.
# If this FW already installed, return changed: False, and do not update the firmware.
# Version numbers don't match the bundle -- proceed with update (unless we are in check mode)
# Sometimes a timeout error is returned even though the update actually was requested.
# Check the update status to see if the update is in progress.
# Update is not in progress -- retry until max number of retries
# Unable to get SimpleUpdate to work.  Return the failure from the SimpleUpdate
# Wait for "ready to activate"
# For a short time, target will still say "ready for firmware update" before it transitions
# to "update in progress"
# We may get timeouts, just keep trying until we give up
# Once it says update in progress, "ready for update" is no longer a valid status code
# Update no longer in progress -- verify that it finished
# To determine which IOM we are on, try to GET each IOM resource
# The one we are on will return valid data.
# The other will return an error with message "IOM Module A/B cannot be read"
# Assume if there is an "Id", it is valid
# Make sure the response includes Oem.WDC.PowerMode, and get current power mode
# No filtering
# The user PUT request returns the updated user object
# If user_id is provided will do PUT otherwise will do POST
# user POST request returns an array of a single item,
# so return this item instead of the list
# Modules you write using this snippet, which is embedded dynamically by
# Ansible still belong to the author of the module, and may assign their
# own license to the complete work.
# Contains LXCA common class
# Lenovo xClarity Administrator (LXCA)
# in 12.0.0 add 'debug' to the tuple
# in 12.0.0 remove this if statement entirely
# Parameters on_success and on_failure are deprecated and should be removed in community.general 12.0.0
# patchy solution to resolve conflict with output variables
# (c) 2020-2024, Alexei Znamensky <russoz@gmail.com>
# Copyright (c) 2020-2024, Ansible Project
# resolve aliases
# setup internal state dicts
# override attribute
# Copyright (c) 2018 Luca 'remix_tj' Lorenzetto
# Copyright (c) 2017, 2018, 2019 Oracle and/or its affiliates.
# This module utils is deprecated and will be removed in community.general 13.0.0
# If a resource is in one of these states it would be considered inactive
# If a resource is in one of these states it would be considered available
# If a resource is in one of these states, it would be considered deleted
# Note: This method is used by most OCI ansible resource modules during initialization. When making changes to this
# method, ensure that no `oci` python sdk dependencies are introduced in this method. This ensures that the modules
# can check for absence of OCI Python SDK and fail with an appropriate message. Introducing an OCI dependency in
# this method would break that error handling logic.
# Note: This method is used by most OCI ansible fact modules during initialization. When making changes to this
# When auth_type is not instance_principal, config file is required
# if instance_principal auth is used, an empty 'config' map is used below.
# Merge any overrides through other IAM options
# Redirect calls to home region for IAM service.
# Replace the region in the config with the home region.
# XXX: Validate configuration -- this may be redundant, as all Client constructors perform a validation
# Create service client class with the signer
# check if auth type is overridden via module params
# An authentication attribute has been provided through an env-variable or an ansible
# option and must override the corresponding attribute's value specified in the
# config file [profile].
# If the underlying SDK Service list* method doesn't support filtering by name or display_name, filter the resources
# and return the matching list of resources
# Handle list of dicts. Dictionary returned by the API may have additional keys. For example, a get call on
# service gateway has an attribute `services` which is a list of `ServiceIdResponseDetails`. This has a key
# `service_name` which is not provided in the list of `services` by a user while making an update call; only
# `service_id` is provided by the user in the update call.
# Handle lists of primitive types.
# only update if the user has explicitly provided a value for this attribute
# otherwise, no update is necessary because the user hasn't expressed a particular
# value for that attribute
# Get the existing resources list sorted by creation time in descending order. Return the latest matching resource
# in case of multiple resource matches.
# list_fn doesn't support sort_by, so remove the sort_by key in kwargs_list and retry
# Handle errors like 404 due to bad arguments to the list_all_resources call.
# If a user explicitly requests us to match only against a set of resources (using 'key_by'),
# use that as the list of attributes to consider for matching.
# Consider all attributes except freeform_tags as freeform tags do not distinguish a resource.
# Temporarily removing node_count as the existing resource does not reflect it
# When default value for a resource's attribute is empty dictionary, check if the corresponding value of the
# existing resource's attribute is also empty.
# only compare keys that are in default_attribute_values[attr]
# this is to ensure forward compatibility when the API returns new keys that are not known during
# the time when the module author provided default values for the attribute
# non-dict, normal comparison
# module author has not provided a default value for attr
# Check if the user has explicitly provided the value for attr.
# If the user has not explicitly provided the value for attr and attr is in exclude_list, we can
# consider this as a 'pass'. For example, if an attribute 'display_name' is not specified by the user and
# that attribute is in the 'exclude_list' according to the module author (not the user), then exclude it
# Check if attr has a value that is not default. For example, a custom `security_list_id`
# is assigned to the subnet's attribute `security_list_ids`. If the attribute is assigned a
# value that is not the default, then it must be considered a mismatch and false returned.
# Convert a value which is itself a list of dict to a list of tuples.
# To handle comparing two None values, while creating a tuple for a {key: value}, make the first element
# in the tuple a boolean `True` if value is None so that attributes with None value are put at last
# in the sorted list.
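The None-last sort-key trick described above can be sketched as follows. This is an assumed illustration (the helper names are hypothetical); the point is that `None` is not orderable against `str` or `int` in Python 3, so a leading boolean makes the tuples comparable while pushing None-valued attributes to the end:

```python
def dict_to_sortable_tuple(d):
    """Turn a dict into a tuple of (is_none, key, value) triples so that
    lists of dicts can be sorted deterministically even when some values
    are None."""
    items = []
    for key in sorted(d):
        value = d[key]
        # True sorts after False, so None-valued attributes go last
        items.append((value is None, key, "" if value is None else str(value)))
    return tuple(items)

def sort_list_of_dicts(lst):
    """Sort a list of dicts using the None-safe tuple key."""
    return sorted(lst, key=dict_to_sortable_tuple)
```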
# Perform a deep equivalence check for a List attribute
# Process a list of dict
# Walk through the sorted list values of the resource's value for this attribute, and compare against user
# provided values.
# Perform a deep equivalence check for dict typed attributes
# As the user has not specified a value for an optional attribute, if the existing resource's
# current state has a DEFAULT value for that attribute, we must not consider this incongruence
# an issue and continue with other checks. If the existing resource's value for the attribute
# is not the default value, then the existing resource is not a match.
# User has not provided a value for the map option. In this case, the user hasn't expressed an intent around
# this optional attribute. Check if existing_resource_dict matches default.
# For example, source_details attribute in volume is optional and does not have any defaults.
# If the existing resource has an empty dict, while the user has provided entries, dicts are not equal
# check if all keys of an existing resource's dict attribute matches user-provided dict's entries
# If user has provided value for sub-attribute, then compare it with corresponding key in existing resource.
# If sub_attr not provided by user, check if the sub-attribute value of existing resource matches default value.
# if a default value for the sub-attr was provided by the module author, fail if the existing
# resource's value for the sub-attr is not the default
# No default value specified by module author for sub_attr
# An immediate attempt to retrieve a compartment after a compartment is created fails with
# 'Authorization failed or requested resource not found' (status 404).
# This is because it takes a few seconds for the permissions on a compartment to be ready.
# Wait a few seconds before attempting a get call on the compartment.
# Set changed to True as work request has been created to delete the resource.
# While waiting for resource to get into terminated state, if the resource is not found.
# oci.wait_until() returns an instance of oci.util.Sentinel in case the resource is not found.
# DNS API throws a 400 InvalidParameter when a zone id is provided for zone_name_or_id and if the zone
# resource is not available, instead of the expected 404. So working around this for now.
# If the attribute_name is set as an alias for some option X and user has provided value in the playbook using
# option X, then user provided value for attribute_name is equal to value for X.
# Get option name for attribute_name from module.aliases.
# module.aliases is a dictionary with key as alias name and its value as option name.
# Only update if a user has specified a value for an option
# Always set current values of the resource in the update model if there is no request for change in
# to handle older SDKs that did not support retry_strategy
# A validation error raised by the SDK, throw it back
# Here existing attribute values is an instance
# Get all the compartments in the tenancy
# For each compartment, get the volume attachments for the compartment_id with the other args in
# list_attachments_args.
# Pass ServiceError due to authorization issue in accessing volume attachments of a compartment
# volume_attachments has attachments in DETACHING or DETACHED state. Return the volume attachment in ATTACHING or
# ATTACHED state
# Return an object that mimics an OCI response, as oci_utils methods assume a Response-like object
# - Rename user to username once current usage of username is removed
# - Alias user to username and deprecate it
# BSD 2-Clause license (see LICENSES/BSD-2-Clause.txt)
# This URL is used for:
# - Querying client authorization permissions
# - Removing client authorization permissions
# Remove empty items, for instance missing client_secret
# Try to refresh token and retry, if available
# Token refresh returns 400 if token is expired/invalid, so continue on if we get a 400
# Try to re-auth with username/password, if available
# Try to re-auth with client_id and client_secret, if available
# Either no re-auth options were available, or they all failed
# The Keycloak API expects the realm name (like `master`) not the ID when fetching the realm data.
# See the Keycloak API docs: https://www.keycloak.org/docs-api/latest/rest-api/#_realms_admin
# prefer an exception since this is almost certainly a programming error in the module itself.
# only lookup the name if cid is not provided.
# in the case that both are provided, prefer the ID, since it is one
# less lookup.
# if the group doesn't exist - no problem, nothing to delete.
# should have a good cid by here.
# Since version 23, when GETting a group Keycloak does not
# return subGroups but only a subGroupCount.
# Children must be fetched in a second request.
# for 1st parent in chain we must query the server
# given as name, assume toplvl group
# start recursion by reversing parents (in optimal cases
# we don't need to walk the whole tree upwards)
# walk the complete parents list to the top, all names, no IDs,
# try to resolve it assuming list is complete and 1st
# element is a toplvl group
# current parent is given as ID, we can stop walking
# upwards searching for an entry point
# current parent is given as name, it must be resolved
# later, try next parent (recurse)
# only lookup the name if groupid isn't provided.
# should have a good groupid by here.
# Get existing composites
# create new composites
# delete new composites
# Check if the authentication flow exists on the Keycloak server
# Send a DELETE request to remove the specified authentication config from the Keycloak server.
# Copyright (c) 2022, John Cant <a.johncant@gmail.com>
# only lookup the client_id if id isn't provided.
# Due to the required_one_of spec, client_id is guaranteed to not be None
# Copyright (c) 2014, Brian Coca, Josh Drake, et al
# tls connection
# redis sentinel connection
# normal connection
# format: "localhost:26379;localhost2:26379;0:changeme"
# handle if no db nr is given
# password is optional
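The connection-string format shown above ("localhost:26379;localhost2:26379;0:changeme") can be parsed roughly as below. This is a sketch of the format from the comment only — `parse_sentinel_uri` is a hypothetical helper and the real plugin's parser may handle more edge cases (e.g. a missing db number):

```python
def parse_sentinel_uri(uri):
    """Parse 'host1:port;host2:port;db:password' into a list of sentinel
    (host, port) pairs plus the db number and an optional password."""
    parts = uri.split(";")
    # All entries except the last are sentinel host:port pairs
    sentinels = [(host, int(port))
                 for host, port in (p.split(":", 1) for p in parts[:-1])]
    last = parts[-1].split(":", 1)
    db = int(last[0])
    password = last[1] if len(last) > 1 else None  # password is optional
    return sentinels, db, password
```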
# guard against the key not being removed from the zset;
# this could happen in cases where the timeout value is changed
# between invocations
# a timeout of 0 is handled as meaning 'never expire'
# TODO: there is probably a better way to do this in redis
# bail out - another thread already acquired the lock
# guard against the key not being removed from the keyset;
# Copyright (c) 2017, Brian Coca
# Pickle is a binary format
# Use pickle protocol 2 which is compatible with Python 2.3+.
# Weird bug where the path of this file is incorrect.
# It seems the Ansible module, at least when used here,
# used to run in a different directory, meaning the
# relative path was not correct. This method should mean it's
# always correct whatever the context.
# Trim off the dirs after the role dir
# We only want to run this once
# assert path exists
# assert mongodb.conf contains path
# assert mongodb.cnf contains path
# in_docker is false on GHA here since they use a custom parent cgroup (actions_job)
# inspecting cgroups is iffy on GHA, so use the /.dockerenv file
# print(str(ansible))
# with capsys.disabled(): #Disable autocapture of output and send to stdout N.B capsys must be passed into function
# print(include_vars(host)['ansible_facts'])
# Standard documentation
# Copyright (c) 2021 T-Systems MMS
# Documentation for global options that are always the same
# (c) 2016, Marcos Diez <marcos@unitron.com.br>
# https://github.com/marcosdiez/
# for older PyMongo 2.2
# else the user knows what they are doing and we won't second-guess; PyMongo will return an error if necessary
# python2 and 3 compatible....
# epoch
# failsafe
# all other parameters are sent to mongo, so we are future and past proof
# Copyright (c) 2020 T-Systems MMS
# remap keys to API format
# Copyright: (c) 2020, Rhys Campbell <rhys.james.campbell@googlemail.com>
# (c) 2012, Elliott Foster <elliott@fourkitchens.com>
# Sponsored by Four Kitchens http://fourkitchens.com.
# (c) 2014, Epic Games, Inc.
# NOTE: there is no 'db' field in mongo 2.4.
# Workaround to make the condition work with AWS DocumentDB,
# since all users are in the admin database.
# 11=UserNotFound
# Allow return False
# pymongo's user_add is a _create_or_update_user so we won't know if it was changed or updated
# without reproducing a lot of the logic in database.py of pymongo
# We get this exception: "not authorized on admin to execute command"
# when auth is enabled on a new instance. The localhost exception should
# allow us to create the first user. If the localhost exception does not apply,
# then user creation will also fail with unauthorized. So, ignore Unauthorized here.
# 13=Unauthorized
# We must be aware of users which can read the oplog on a replicaset
# Such users must have access to the local DB, but since this DB does not store users credentials
# and is not synchronized among replica sets, the user must be stored on the admin db
# Therefore their structure is the following :
# =========================================
# Certs don't have a password but we want this module behaviour
# Here we could check for password changes if MongoDB provided a query for that: https://jira.mongodb.org/browse/SERVER-22848
# newuinfo = user_find(client, user, db_name)
# if uinfo['role'] == newuinfo['role'] and CheckPasswordHere:
# localhost exception applied.
# touch the file
# How many times we have queried the member
# Number of failures when querying the replicaset
# Run step down command
# For now we assume the stepDown was successful
# 4.0 and below close the connection as part of the stepdown.
# This code should be removed once we support 4.2+ onwards
# https://tinyurl.com/yc79g9ay
# Wait for interval
# Copyright: (c) 2021, Rhys Campbell <rhyscampbell@blueiwn.ch>
# Copyright: (c) 2020, Andrew Klychkov (@Andersson007) <aaklychkov@mail.ru>
# Get general info:
# Get parameters:
# Gather info about databases and their total size:
# Gather info about users for each database:
# Gather info about roles for each database:
# Force conversion to avoid: Refusing to deserialize an invalid UTF8 string value
# Module execution
# Initialize an object and start main work:
# handle optional options
# (c) 2022, Rhys Campbell <rhyscampbell@bluewin.ch>
# 31=RoleNotFound
# probably not needed for role create... to clarify
# when auth is enabled on a new instance. The localhost exception should
# seems to be a list of lists of dicts; we want a list of dicts
# TODO replace with proper exception
# TODO: Functions use a different param order... make consistent
# Validate window
# Copyright: (c) 2021, Rhys Campbell (@rhysmeister) <rhyscampbell@bluewin.ch>
# Copyright: (c) 2020, Rhys Campbell (@rhysmeister) <rhys.james.campbell@googlemail.com>
# Ensure keys are present in index spec
# Check index subkeys look correct
# Pre flight checks done
# "_id.ns": namespace, 4.4.X Bug??? ObjectId given as id
# "_id.min": min,
# first check if the ranges exist
# All ranges are the same
# MongoDB 4.4 insists on a real
# Copyright: (c) 2018, Rhys Campbell <rhys.james.campbell@googlemail.com>
# Need to validate the number of votes in the replicaset
# We have a good number of votes
# How many times we have queried the cluster
# Requires auth
# replicaset looks good
# Sort out the return doc
# 2020 Rhys Campbell <rhys.james.campbell@googlemail.com>
# https://github.com/rhysmeister
# TODO This could be refactored into a function
# could be a list of dicts or a list of strings
# We need to use a different connection format when conn params are supplied
# refactor repeated code
# the list of dicts containing the members for the replicaset configuration document
# members that are staying in the config
# No port supplied. Assume 27017
# We need to put the _id values into the matching document and generate them for new hosts
# TODO: https://docs.mongodb.com/manual/reference/replica-configuration/#mongodb-rsconf-rsconf.members-n-._id
# Maybe we can add a new member id parameter value, stick with the incrementing for now
# Perhaps even save this in the mongodb instance?
# first get all the existing members of the replicaset
# members that have been supplied by the module and matched with existing members
# append existing members with the appropriate _id
# new member, append and increment id
# TODO tidy this stuff up
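The incrementing `_id` scheme described above could be sketched as follows (a hypothetical helper, assuming members are dicts with `host` and `_id` keys; the real module's code differs):

```python
def assign_member_ids(existing, new_members):
    """Give new replicaset members an _id that continues from the existing max."""
    # Keep the _id of members that already exist; increment for new hosts
    next_id = max((m["_id"] for m in existing), default=-1) + 1
    merged = []
    for member in new_members:
        match = next((m for m in existing if m["host"] == member["host"]), None)
        if match:
            member = dict(member, _id=match["_id"])
        else:
            member = dict(member, _id=next_id)
            next_id += 1
        merged.append(member)
    return merged
```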
# Count voting members
# does not require auth
# replicaset does not exist
# Some validation stuff
# (c) 2016, Loic Blot <loic.blot@unix-experience.fr>
# Verify parameter is coherent with specified type
# (c) 2018, Rhys Campbell <rhys.james.campbell@googlemail.com>
# determine what transform_type to perform
# Splits on whitespace
# Strip Extended JSON stuff like:
# "_id": ObjectId("58f56171ee9d4bd5e610d6b7"),
# "count": NumberLong(999),
# pylint: disable=unused-import:
# TODO Should we do something here or are we covered by pymongo?
# sanity tests
# Get driver version::
# Check driver and server version compatibility:
# param exists only in some modules
# Need to do this for 3.12.* as well
# we need to connect directly to the instance
# Arbiters cannot login with a user
# if this is still none we have a problem
# Our test code had issues with multiple exit points with fail_json
# Get server version:
# this is the mongodb_user module
# else: this has to be the first admin user added
# Atlas auth path
# pymongo >= 4. There's no authenticate method in pymongo 4.0. Recreate the connection object
# no port supplied
# compare if members are the same
# Compare dict keys to see if votes, tags etc have changed. We also default the value if a key is not specified
# priority a special case
# for case when the member is not an arbiter
# , msg
# Taken from https://github.com/ansible-collections/community.postgresql/blob/main/plugins/module_utils/postgres.py#L420
# By default returns the same value
# noinspection PyCompatibility, PyUnresolvedReferences
# pylint: disable=locally-disabled, import-error, no-name-in-module
# We make it here when the fact_caching_timeout was set to a different value between runs
# We'll fall back to using ``ansible`` as the database if one was not provided
# in the MongoDB Connection String URI
# The collection is hard coded as ``cache``, there are no configuration options for this
# Only manage the indexes once per run, not per connection
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt
# Documentation fragment for ProxySQL connectivity
# Documentation fragment for managing ProxySQL configuration
# proxysql module specific support methods.
# Copyright (c), Jonathan Mainguy <jon@soh.re>, 2015
# Most of this was originally added by Sven Schliesing @muffl0n in the mysql_user.py module
# 2.2.0-72-ge14accd
# 2.3.2-percona-1.1
# Override some common defaults with values from config file if needed
# If login_user or login_password are given, they should override the
# config file
# In case of PyMySQL driver:
# In case of MySQLdb driver
# This must match a profile defined elsewhere
# Py 2.7 compat.
# https://docs.ansible.com/ansible/latest/collections/ansible/netcommon/network_cli_connection.html
# show version
# Cisco SMB 300 and 500 - fw 1.x.x.x
# Cisco SMB 350 and 550 - fw 2.x.x.x
# show system
# output in seconds
# show cpu utilization
# show inventory
# make 1 module 1 line
# delete empty lines
# remove extra chars
# normalize lines
# every inventory has module with NAME: "1"
# stacks have modules 2 3 ... 8
# index is string
# fw 1.x
# fw 2.x, 3.x
# to get speed in kb
# add ips to interface
# by documentation SG350
# copy of https://github.com/napalm-automation/napalm/blob/develop/napalm/base/canonical_map.py
# fields_position.insert(0,0)
# fields_end.append(len(headerline))
# allow "long" last field
# search for overflown fields
# concatenate overflown elements into previous data
# is b empty?
# same leaf value
# The contents of this file are licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with the
# License. You may obtain a copy of the License at
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
# TODO, allow for CTRL+C to break the loop more easily
# TODO, store the failures from this iteration
# TODO, display a summary of failures from every iterations
# Copyright: (c) 2015, Jonathan Mainguy <jon@soh.re>
# Standard mysql documentation fragment
# Copyright: (c) 2020, Andrew Klychkov (@Andersson007) <andrew.a.klychkov@gmail.com>
# TRUNCATE is not DDL query but it also returns 0 rows affected:
# For Python 2 compatibility, fallback to time.time()
# Measure query execution time in milliseconds
# Calculate the execution time rounding it to 4 decimal places
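A minimal sketch of the timing logic described above, using Python 3's `time.perf_counter` (the Python 2 fallback to `time.time()` mentioned above is omitted; the helper name is hypothetical):

```python
import time


def execute_timed(cursor, query):
    """Run a query and return its execution time in milliseconds."""
    # perf_counter() is monotonic, so it is safe for measuring intervals
    start = time.perf_counter()
    cursor.execute(query)
    # Convert to milliseconds and round to 4 decimal places
    return round((time.perf_counter() - start) * 1000, 4)
```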
# Prepare args:
# Connect to DB:
# Set defaults:
# Execute query:
# When something is run with IF NOT EXISTS
# and there's "already exists" MySQL warning,
# set the flag as True.
# PyMySQL < 0.10.0 throws the warning, mysqlclient
# and PyMySQL 0.10.0+ does NOT.
# Check DML or DDL keywords in query and set changed accordingly:
# Indicates the entity already exists
# Reset flag
# MySQLdb removed cursor._last_executed as a duplicate of cursor._executed
# When the module run with the single_transaction == True:
# Create dict with returned values:
# Exit:
# Copyright: (c) 2021, Andrew Klychkov <andrew.a.klychkov@gmail.com>
# TODO 4.0.0 add default=True
# TODO Release 4.0.0 : Remove this test and variable assignation
# Set defaults
# Check if the server supports roles
# Main job starts here
# avoid granting unwanted privileges
# avoid adding unwanted members
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <andrew.a.klychkov@gmail.com>
# MySQL module specific support methods.
# version = ["5", "5", "60-MariaDB"]
# full_version = "5.5.60-MariaDB"
# release = "60"
# check if a suffix exists by counting the length
# suffix = "MariaDB"
# major = "5"
# minor = "5"
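The version-string split sketched in the comments above could look like this (a hypothetical helper; the module's real function name and return shape may differ):

```python
def parse_server_version(full_version):
    """Split a version string like '5.5.60-MariaDB' into its parts."""
    # Separate an optional suffix such as '-MariaDB' from the numeric part
    numeric, _, suffix = full_version.partition("-")
    parts = numeric.split(".")          # e.g. ["5", "5", "60"]
    major, minor = parts[0], parts[1]
    release = parts[2] if len(parts) > 2 else None
    return major, minor, release, suffix or None
```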
# MariaDB roles have no host
# Proxy privileges are hard to work with because of different quotes or
# backticks like ''@'', ''@'%' or even ``@``. In addition, MySQL will
# forbid you from granting proxy privileges over TCP.
# Only keep *.* USAGE if it's the only user privilege given
# Prevent returning a resource limit if empty
# Prevent returning tls_require if empty
# TODO password_option
# but both are not supported by mysql_user atm. So no point yet.
# Build the main query
# Get and process databases with tables
# Handle empty databases if requested
# Create object and do main job
# Copyright: (c) 2012, Mark Theunissen <mark.theunissen@gmail.com>
# If defined, mysqldump demands --defaults-extra-file be the first option
# --defaults-file must go first, or errors out
# The line below is for returned data only:
# Used to prevent 'Broken pipe' errors that
# occasionally occur when target files are compressed.
# FYI: passing the `shell=True` argument to p2 = subprocess.Popen()
# doesn't solve the problem.
# Escape '%' since mysql cursor.execute() uses a format string
# Copyright: (c) 2013, Balazs Pocze <banyek@gawker.com>
# Certain parts are taken from Mark Theunissen's mysqldb module
# Since MySQL 8.0.22 and MariaDB 10.5.1,
# "REPLICA" must be used instead of "SLAVE"
# MySQL 8.0 uses Replica_...
# MariaDB 10.6 uses Slave_...
# MariaDB only
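Choosing between the REPLICA and SLAVE keywords based on the version thresholds named above might be sketched as (a hypothetical helper taking a version tuple; note the status field prefixes, `Replica_` vs `Slave_`, also differ per the comments above):

```python
def replication_keyword(server_version, is_mariadb):
    """Pick the replication keyword based on server flavor and version."""
    # MySQL >= 8.0.22 and MariaDB >= 10.5.1 require "REPLICA" instead of "SLAVE"
    threshold = (10, 5, 1) if is_mariadb else (8, 0, 22)
    return "REPLICA" if server_version >= threshold else "SLAVE"
```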
# Type values before using them
# Simplified BSD License (see simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
# Once we drop support for ansible-core 2.11, we can
# mysqlclient is called MySQLdb
# pymysql has two methods:
# - __version__ that returns the string: 0.7.11.None
# - VERSION that returns the tuple (0, 7, 11, None)
# version_info returns the tuple (2, 1, 1, 'final', 0)
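Detecting the driver version across pymysql and mysqlclient, as the comments above describe, might be sketched as (`driver_version` is a hypothetical name; the attributes match the comments):

```python
def driver_version(driver):
    """Return a comparable (major, minor, patch) tuple for either driver module."""
    # pymysql exposes VERSION as a tuple like (0, 7, 11, None), while its
    # __version__ string can end in '.None'; mysqlclient (imported as MySQLdb)
    # exposes version_info like (2, 1, 1, 'final', 0)
    raw = getattr(driver, "VERSION", None) or getattr(driver, "version_info", None)
    return tuple(int(part) for part in raw[:3])
```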
# Default values of comment_prefix is '#' and ';'.
# '!' added to prevent a parsing error
# when a config file contains !includedir parameter.
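The '!' comment prefix described above can be passed to Python 3's `configparser` like so (a sketch; the real module's option-file handling differs):

```python
import configparser


def read_option_file(path):
    """Parse a my.cnf-style file, tolerating '!includedir' directives."""
    # '!' is added to the default '#' and ';' prefixes so a line such as
    # '!includedir /etc/mysql/conf.d/' is skipped instead of raising an error
    parser = configparser.ConfigParser(comment_prefixes=("#", ";", "!"))
    parser.read(path)
    return parser
```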
# for PyMySQL < 1.0.0, use 'db' instead of 'database' and 'passwd' instead of 'password'
# Will be deprecated and dropped
# https://github.com/ansible-collections/community.mysql/issues/654
# for MySQLdb < 2.1.0, use 'db' instead of 'database' and 'passwd' instead of 'password'
# Monkey patch the Connection class to close the connection when garbage collected
# Patched
# Convert the command to uppercase to ensure case-insensitive lookup
# Add more command mappings here
# Per discussions on irc:libera.chat:#maria the query may return up to 2 rows but "ACCOUNT LOCK" should always be in the first row.
# ACCOUNT LOCK does not have to be the last option in the CREATE USER query.
# Need to handle both DictCursor and non-DictCursor
# Mysql_info use a DictCursor so we must convert back to a list
# otherwise we get KeyError 0
# before MariaDB 10.2.19 and 10.3.11, "password" and "authentication_string" can differ
# when using mysql_native_password
# Mysql_info use a DictCursor so we must convert list(dict)
# to list(tuple) otherwise we get KeyError 0
# 'plugin_auth_string' contains the hash string. Must be removed in c.mysql 4.0
# See https://github.com/ansible-collections/community.mysql/pull/629
# If attributes are set, perform a sanity check to ensure server supports user attributes before creating user
# we cannot create users without a proper hostname
# Determine what user management method server uses
# This is for update_password: on_new_username
# What if the plugin differs?
# Mysql and MariaDB differ in naming pam plugin and Syntax to set it
# Used by MariaDB which requires the USING keyword, not BY
# Handle clear text and hashed passwords.
# Get a list of valid columns in mysql.user table to check if Password and/or authentication_string exist
# Select hash from either Password or authentication_string, depending which one exists and/or is filled
# https://stackoverflow.com/questions/51600000/authentication-string-of-root-user-on-mysql
# Replacing empty root password with new authentication mechanisms fails with error 1396
# Handle password expiration
# Check if changes needed to be applied.
# Handle plugin authentication
# this case can cause more updates than expected,
# as the plugin can hash auth_string in any way it wants
# and there's no way to figure it out for
# a check, so I prefer to update more often than never
# Mysql and MariaDB differ in naming pam plugin and syntax to set it
# Handle privileges
# If the user has privileges on a db.table that doesn't appear at all in
# the new specification, then revoke all privileges on it.
# If the user has the GRANT OPTION on a db.table, revoke it first.
# If the user doesn't currently have any privileges on a db.table, then
# we can perform a straight grant operation.
# If the db.table specification exists in both the user's current privileges
# and in the new privileges, then we need to see if there's a difference.
# When appending privileges, only missing privileges need to be granted. Nothing is revoked.
# When subtracting privileges, revoke only the intersection of requested and current privileges.
# No privileges are granted.
# When replacing (neither append_privs nor subtract_privs), grant all missing privileges
# and revoke existing privileges that were not requested...
# ... avoiding pointless revocations when ALL are granted
# Only revoke grant option if it exists and absence is requested
# For more details
# https://github.com/ansible-collections/community.mysql/issues/77#issuecomment-1209693807
# USAGE grants no privileges, it is only needed because 'WITH GRANT OPTION' cannot stand alone
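The append/subtract/replace behaviour described above can be modelled as set arithmetic (a sketch only; the real module works on privilege strings per db.table and handles GRANT OPTION and USAGE specially, as the comments note):

```python
def plan_privilege_changes(current, requested, append=False, subtract=False):
    """Return (to_grant, to_revoke) given current and requested privilege sets."""
    current, requested = set(current), set(requested)
    if append:
        # Only grant what is missing; never revoke
        return requested - current, set()
    if subtract:
        # Revoke only the intersection of requested and current; never grant
        return set(), requested & current
    # Replace: grant missing privileges, revoke existing ones not requested
    return requested - current, current - requested
```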
# after privilege manipulation, compare privileges from before and now
# Handle attributes
# Calculate final attributes by re-running attributes_get when not in check mode, and merge dictionaries when in check mode
# Final if statements excludes items whose values are None in attributes_to_change, i.e. attributes that will be deleted
# Convert empty dict to None per return value requirements
# Handle TLS requirements
# If a user has roles or a default role assigned,
# we'll have some of the priv tuples looking either like
# GRANT `admin`@`%` TO `user1`@`localhost`
# SET DEFAULT ROLE `admin`@`%` FOR `user1`@`localhost`
# which will result None as res value.
# As we use the mysql_role module to manipulate roles
# we just ignore such privs below:
# Handle cases when there's privs like GRANT SELECT (colA, ...) in privs.
# To this point, the privileges list can look like
# ['SELECT (`A`', '`B`)', 'INSERT'] that is incorrect (the SELECT statement is split).
# Columns should also be sorted to compare it with desired privileges later.
# Determine if there's a case similar to the above:
# If not, either start and end will be None
# Determine elements of privileges where
# columns are listed
# We found the start element
# We found the end element
# if the privileges list consists of, for example,
# ['SELECT (A', 'B)', 'INSERT'], return indexes of the related elements
# If start and end position is the same element,
# it means there's expression like 'SELECT (A)',
# so no need to handle it
# When the privileges list looks like ['SELECT (colA,', 'colB)']
# (notice that the statement is split)
# When it looks like it should, e.g. ['SELECT (colA, colB)'],
# we need to be sure the columns are sorted
# 1. Extract stuff inside ()
# 2. Split
# 3. Sort
# 4. Put between () and return
# "SELECT/UPDATE/.. (colA, colB) => "colA, colB"
# "colA, colB" => ["colA", "colB"]
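The four numbered steps above could be implemented roughly as follows (a sketch; `sort_column_order` is a hypothetical name for the helper):

```python
import re


def sort_column_order(priv):
    """Normalize 'SELECT (colB, colA)' to 'SELECT (colA, colB)'."""
    # 1. Extract the part inside the parentheses
    match = re.search(r"\((.+)\)", priv)
    if not match:
        return priv
    # 2. Split on commas, 3. sort, 4. put back between parentheses
    columns = sorted(col.strip() for col in match.group(1).split(","))
    return "%s(%s)" % (priv[:match.start()], ", ".join(columns))
```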
# Check for FUNCTION or PROCEDURE object types
# Do not escape if privilege is for database or table, i.e.
# neither quote *. nor .*
# Escape '%' since mysql db.execute() uses a format string
# Escape '%' since mysql db.execute uses a format string and the
# specification of db and table often use a % (SQL wildcard)
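The escaping mentioned in the comments above amounts to doubling every '%' before the string is handed to the cursor (a trivial sketch):

```python
def escape_format_percent(sql):
    """Escape literal '%' so cursor.execute() does not treat it as a format spec."""
    # A db/table spec such as `db%`.`tbl` often uses % as the SQL wildcard
    return sql.replace("%", "%%")
```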
# MySQL and MariaDB don't store roles in the user table the same manner:
# select user, host from mysql.user;
# +------------------+-----------+
# | user             | host      |
# | role_foo         | %         | <- MySQL
# | role_foo         |           | <- MariaDB
# It means the user does not exist, so we need
# to set all limits after its creation
# Supported keys are listed in the documentation
# and must be determined in the get_resource_limits function
# (follow 'AS' keyword)
# If not check_mode
# information_schema.tables does not hold the tables within information_schema itself
# convert JSON string stored in row into a dict - mysql enforces that user_attributes entries are in JSON format
# if the attributes dict is empty, return None instead
# When the user doesn't require SSL, the res value is: ('', '', '', '')
# https://www.akkadia.org/drepper/SHA-crypt.txt
# Copyright (c) 2019 Felix Fontein <felix@fontein.de>
# Copyright (c) 2025 Felix Fontein <felix@fontein.de>
# Only for transition period
# noqa: F821, pylint: disable=undefined-variable
# Copyright (c) 2019 Oleksandr Stepanov <alexandrst88@gmail.com>
# read if the user has caching enabled and the cache is not being refreshed
# attempt to read the cache if inventory is not being refreshed and the user has caching enabled
# This can only happen if the code is modified so that cache=False
# set the cache
# Python 2.x fallback:
# Remove key
# Make sure key is present
# The only thing we can update is the name
# Update or create key
# Retrieve current boot config
# Deactivate current boot configurations that are not requested
# Enable/compare boot configuration
# Normalize options
# Idempotence check
# unfold the return object for the idempotence check to work correctly
# Deactivate existing boot configuration
# Enable new boot configuration
# DEPRECATED: old API
# NEW API!
# https://robot.your-server.de/doc/webservice/en.html#get-firewall-server-ip
# Copyright (c) 2025 Victor LEFEBVRE <dev@vic1707.xyz>
# this endpoint is stupidly slow
# Contains all subaccount information
# { "subaccount": <data> }
# None values aren't updated
# Means user didn't provide a value
# we assume we don't want to update that field
# passwords aren't considered part of the update check
# due to being a different API call
# { "password": <password> }
# Hetzner's response [ { "subaccount": <data> }, ... ]
# -----------------------------------------
# For some reason, the home directory must **not** start with a slash, despite being returned that way...
# Hetzner likes to strip leading '/' from the home directory
# For some reason, home_directory must not start with a slash
# Set the found username in case user used comment as idempotence
# state 'present' without pre-existing account
# username cannot be chosen
# not necessary, allows us to get additional info (created time, etc.)
# Retrieve created subaccount
# (not necessary, allows us to get additional info (created time, etc.))
# The documentation (https://robot.hetzner.com/doc/webservice/en.html#get-storagebox-storagebox-id-snapshotplan)
# claims that the result is a list, but actually it is a dictionary. Convert it to a list of dicts if that's the case.
# Copyright (c) 2022 Alexander Gil Casas <alexander.gilcasas@trustyou.net>
# TODO: missing NOT_FOUND, VSWITCH_NOT_AVAILABLE, VSWITCH_PER_SERVER_LIMIT_REACHED
# information about which servers are failing is only there
# TODO: add and delete with `wait=false`
# TODO: missing INVALID_INPUT, NOT_FOUND
# according to the API docs 'status' cannot be provided as input to POST, but that's not true
# TODO: If the API ever changes to support more than one plan, the following needs to
# For some reason, minute and hour are required even for disabled plans,
# even though the documentation says otherwise
# TODO: If the API ever changes to support more than one plan, the following need to change
# The documentation (https://robot.hetzner.com/doc/webservice/en.html#post-storagebox-storagebox-id-snapshotplan)
# Add the comment if provided
# Update snapshot comment
# Delete snapshot
# Retrieve created snapshot
# When filtering by linked_server, the result should be a dictionary
# Copyright (c) 2025 Matthias Hurdebise <matthias_hurdebise@hotmail.fr>
# pylint: disable=self-assigning-variable
# Sanitize input
# Build wanted (after) state and compare
# Update if different
# https://robot.your-server.de/doc/webservice/en.html#post-firewall-server-ip
# Only use result if configuration is done, so that diff will be ok
# Construct result (used for check mode, and configuration still in process)
# We want 'in process' here
# Copyright (c), Felix Fontein <felix@fontein.de>, 2019
# The API endpoint is fixed.
# Reference: https://robot.hetzner.com/doc/webservice/en.html#errors
# In Python 2, reading from a closed response yields a TypeError.
# In Python 3, read() simply returns ''
# Reference: https://docs.hetzner.cloud/reference/hetzner#errors
# allow_empty_result=False,
# allowed_empty_result_status_codes=(),
# if allow_empty_result and info.get('status') in allowed_empty_result_status_codes:
# TODO: is there a hint how much time we should wait?
# If yes, adjust check_done_delay accordingly!
# allow_empty_result=allow_empty_result,
# allowed_empty_result_status_codes=allowed_empty_result_status_codes,
# TODO: add coverage!
# result, error = api_fetch_url_json(module, url, **kwargs)
# if check_done_callback(result, error):
# Copyright: (c) 2022, Sean Freeman ,
# Check mode
# Copyright: (c) 2021, Rainer Leber <rainerleber@gmail.com>
# create a list of file and directories
# names in the given directory
# Iterate over all the entries
# Create full path
# If entry is a directory then get the list of files in this directory
# download sapcar binary if url is provided otherwise path is returned
# manipulate the output from the SAR file for comparison with already extracted files
# remove any SIGNATURE.SMF from the list because it will not be unpacked if signature is false
# if the signature is renamed, manipulate the files in the SAR file list for comparison
# get extracted files if present
# compare extracted files with files in sar file
# Copyright: (c) 2022, Rainer Leber rainerleber@gmail.com>
# /hana/shared directory exists
# /sapmnt directory exists
# Check to see if /sapmnt/SID/sap_bobj exists
# is a bobj system
# check if instance number exists
# sapcontrol returns c(0 - 5) exit codes only c(1) is unavailable
# check if returned instance_nr is a number because sapcontrol returns all if a random string is provided
# convert to list and extract last
# split instance number
# It's a PAS
# It's an ASCS
# It's a Webdisp
# It's a Java
# It's an SCS
# It's an ERS
# Unknown instance type
# Copyright: (c) 2021, Rainer Leber <rainerleber@gmail.com> <rainer.leber@sva.de>
# PyRFC call function
# Creates RFC parameters for creating organizations
# define dicts in batch
# define company name
# define location
# define communication
# return dict
# basic RFC connection with pyrfc
# build parameter dict of dict
# processes task settings to objects
# values for connection
# values for execution tasks
# initialize session task
# Confirm Tasks which requires manual activities from Task List Run
# unskip defined tasks and set parameters
# start the task
# get task logs because the execution may succeed but the tasks may show errors or warnings
# returned value is ABAPXML https://help.sap.com/doc/abapdocu_755_index_htm/7.55/en-US/abenabap_xslt_asxml_general.htm
# pre evaluation of parameters
# splits snote number from path and txt extension
# -x Suppresses additional output, such as the number of selected rows in a result set.
# makes a command like hdbsql -i 01 -u SYSTEM -p secret123# -I /tmp/HANA_CPU_UtilizationPerCore_2.00.020+.txt,
# iterate through the files and append the output to var out.
# makes a command like hdbsql -i 01 -u SYSTEM -p secret123# "select user_name from users",
# iterate through multiple commands and append the output to var out.
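Assembling the hdbsql invocation described in the comments above might look like this (a sketch using only the flags the comments mention; the helper name is hypothetical):

```python
def build_hdbsql_command(instance, user, password, query=None, input_file=None):
    """Assemble an hdbsql invocation as an argument list."""
    # -x suppresses additional output, such as the number of selected rows
    cmd = ["hdbsql", "-x", "-i", instance, "-u", user, "-p", password]
    if input_file:
        cmd += ["-I", input_file]   # run statements from a file
    elif query:
        cmd.append(query)           # run an inline statement
    return cmd
```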
# Copyright: (c) 2022, Rainer Leber rainerleber@gmail.com, rainer.leber@sva.de,
# recursively converts the suds object to a dictionary e.g. {'item': [{'name': 'hdbdaemon', 'value': '1'}]}
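The recursive suds-to-dict conversion mentioned above might be sketched as follows (suds objects expose their fields via `vars()`, so `SimpleNamespace` works as a stand-in; the helper name is hypothetical):

```python
def suds_to_dict(obj):
    """Recursively convert a suds-like object into plain dicts and lists."""
    if isinstance(obj, list):
        return [suds_to_dict(item) for item in obj]
    if hasattr(obj, "__dict__"):
        # suds objects expose their fields via vars(); skip private attributes
        return {key: suds_to_dict(value) for key, value in vars(obj).items()
                if not key.startswith("_")}
    return obj
```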
# Adds the given value to a dict as the key
# check if the given key is in the given dict yet
# for change parameters
# define username
# define Address
# define Password
# define Alias
# define LogonData
# define company
# add change if user exists
# logical values
# values for the new or existing user
# values for profile must be a list
# Example ["SAP_NEW", "SAP_ALL"]
# values for roles must be a list
# user details
# check for address changes when user exists
# analyse return value
# Diagnose XML file parsing errors in Beautiful Soup
# https://stackoverflow.com/questions/56942892/cannot-parse-iso-8859-15-encoded-xml-with-bs4/56947172#56947172
# SWPM2 control.xml conversion to utf8
# Convert control.xml from iso-8859-1 to UTF-8, so it can be used with Beautiful Soup lxml-xml parser
# https://stackoverflow.com/questions/64629600/how-can-you-convert-a-xml-iso-8859-1-to-utf-8-using-python-3-7-7/64634454#64634454
# SWPM2 Component and Parameters extract all as CSV
# SWPM2 Component and Parameters extract all and generate template inifile.params
# SWPM2 product.catalog conversion to utf8
# SWPM2 Product Catalog entries to CSV
# Each Product Catalog entry is part of a components group, which may have attributes:
# output-dir, control-file, product-dir (link to SWPM directory of param file etc)
# Attributes possible for each entry = control-file, db, id, name, os, os-type, output-dir,
# ppms-component, ppms-component-release, product, product-dir, release, table
# Get arguments passed to Python script session
# Define path to control.xml, else assume in /tmp directory
# (c) 2013, Greg Buehler
# (c) 2018, Filippo Ferrazini
# server
# login
# ssl certs
# host inventory
# host interface
# check if zabbix api returned interfaces element
# check for an interfaces list that contains at least one interface
# use first interface only
# check if zabbix api returned an interfaces element
# zabbix_api tries to exit if it cannot parse what the zabbix server returned
# so we have to use SystemExit here
# Copyright: (c) 2017, Ansible, Inc
# (c) 2021, Markus Fischbacher (fischbacher.markus@gmail.com)
# Quick Link to Zabbix API docs: https://www.zabbix.com/documentation/current/manual/api
# By default Zabbix WebUI is on http(s)://FQDN/zabbix
# zabbix_url_path provided (even if it is an empty string)
# Apply custom headers first. Plugin-specific headers set later will take precedence.
# Need to add Basic auth header
# Get this response from Zabbix when we switch username to execute REST API
# Need to login with new username/password
# Replace 'auth' field in payload with new one (we got from login process)
# Re-send the request we initially were trying to execute
# Some methods return bool not a dict in "result"
# Do not try to find "error" if it is not a dict
# The method defined in ansible.plugins.httpapi
# We need to override it to avoid endless re-tries if HTTP authentication fails
# Copyright: (c), Ansible Project
# (c) 2023, Alexandre Georges
# (c) 2021, Timothy Test
# Modified from ServiceNow Inventory Plugin and Zabbix inventory Script
# set proxy information if required
# add host to inventory
# set variables for host
# added for compose vars and keyed groups
# organize inventory by zabbix groups
# Copyright: (c) 2022, mrvanes
# the AnsibleModule object
# (c) 2019, Ruben Tsirunyan <rubentsirunyan@gmail.com>
# (c) stephane.travassac@fr.clara.net
# check hostid exist
# Get last event for trigger with problem value = 1
# https://www.zabbix.com/documentation/3.4/manual/api/reference/trigger/object
# (c) 2019, OVH SAS
# raise Exception("%s ------ %s" % (parents, service_exist))
# Load service module
# Does not exist, going to create it
# Else we update it if it exists
# Check if parameters have changed
# Update service if a parameter is different
# No parameters changed, no update required.
# (c) 2013-2014, Epic Games, Inc.
# create host group(s) if not exists
# delete host group(s)
# get group ids by name
# delete host groups
# create host groups
# (c) me@mimiko.me
# get script by script name
# Send Message type
# opconditions valid only for "trigger" action
# Send Command type
# Add to/Remove from host group
# Add/Remove tags
# Link/Unlink template
# Set inventory mode
# Remove escalation params for event sources where they are not applicable
# Host group
# Host
# Trigger
# Trigger name: return as is
# Trigger severity
# Trigger value
# Time period: return as is
# Host IP: return as is
# Discovered service type
# Discovered service port: return as is
# Discovery status
# maintenance_status
# when type is remote_command
# when type is send_message
# when type is add_to_host_group or remove_from_host_group
# when type is set_host_inventory_mode
# when type is link_to_template or unlink_from_template
# Copyright: (c) 2020, Tobias Birkefeld (@tcraxs) <t@craxs.de>
# (c) 2013, Alexander Bulimov <lazywolf0@gmail.com>
# Sanitize fields
# Parse start_date/time fields
# Parse Every and Day of Week
# Set Default for minutes if needed
# Set End Time
# Logic for backwards compatibility. Remove with 4.0.0
# built by Martin Eiswirth (@meis4h) on top of the work by Stéphane Travassac (@stravassac) on zabbix_host_events_info.py
# and Michael Miko (@RedWhiteMiko) on zabbix_group_info.py
# Copyright: (c) 2022, BGmot
# Delete script
# (c) 2017, Alen Komic
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Active proxy
# Passive proxy
# Create temporary proxy object to be able to pull Zabbix version to resolve parameters dependencies
# convert enabled / disabled to integer
# check if proxy already exists
# remove proxy
# the proxy is already deleted.
# create template group(s) if not exists
# delete template group(s)
# delete template groups
# create template groups
# Copyright: (c) 2022, ONODERA Masaru <masaru-onodera@ieee.org>
# get authentication setting
# update authentication setting
# Copyright: (c) 2019, sky-joker
# User can be created password-less only when all groups are of non-internal
# authentication types
# 0 = use system default authentication method
# 1 = use internal authentication
# 2 = use LDAP authentication
# 3 = disable access to the frontend
# Zabbix API for versions < 5.2 does not have a way to query the default auth type
# so we must assume it's set to internal
# E-Mail
# Because the user media sendto parameter is raw in the parameter specs, perform an explicit type check on it
# sendto should be a list for Email media type
# existing data
# request data
# The type key has changed to roleid key since Zabbix 5.2
# Copyright: (c) 2024, ONODERA Masaru <masaru-onodera@ieee.org>
# Copyright: (c) 2023, ONODERA Masaru <masaru-onodera@ieee.org>
# get setting
# get global macro
# create global macro
# update global macro
# delete global macro
# Zabbix handles macro names in upper case characters
# Valid format for macro is {$MACRO}
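The normalization the two comments above describe might be sketched as follows (a hypothetical helper; context macros such as `{$MACRO:ctx}` are ignored for simplicity):

```python
def normalize_macro_name(name):
    """Zabbix stores macro names in upper case; valid format is {$MACRO}."""
    # Strip any existing {$...} wrapper, upper-case the name, then re-wrap it
    inner = name.strip().lstrip("{").lstrip("$").rstrip("}")
    return "{$%s}" % inner.upper()
```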
# delete a macro
# ZBX-15706
# sort list of parameters to prevent mismatch due to reordering
# script
# webhook
# Script
# SMS
# Jabber
# Email
# EZ Text
# Webhook
# this is used to simulate `required_if` of `AnsibleModule`, but only when state=present
# remove date field if requested
# Copyright: (c) 2021, D3DeFi
# (c) 2024, Evgeny Yurchenko
# check if proxy group already exists
# remove proxy group
# the proxy group is already deleted.
# get token
# If params does not have any parameter except tokenid and name, no need to update.
# delete token
# 31 + 20
# "status" in API
# get autoregistration
# update autoregistration
# exist host
# check if host group exists
# By single proxy
# The host was discovered via Discovery Rule
# A "plain" host
# get host by host name
# get proxyid by proxy name
# get proxy_group_id by proxy group name
# get group ids by group names
# get host groups ids by host id
# get host templates by host id
# Not handled in argument_spec with required_if since only SNMP interfaces are using details
# check whether exist_interfaces equals the requested interfaces or not
# Find already configured interfaces in requested interfaces
# get the status of host by host
# check all the properties before link or clear template
# get the existing host's groups
# get the existing status
# get the existing templates
# Check whether the visible_name has changed; Zabbix defaults to the technical hostname if not set.
# Only compare description if it is given as a module parameter
# in Zabbix >= 5.4 these parameters are write-only and are not returned in host.get response
# hostmacroid and hostid are present in every item of host["macros"] and need to be removed
# make copy to prevent change in original data
# link or clear template of the host
# get host's exist template ids
# get unlink and clear templates
# Update the host inventory_mode
# nothing was set, do nothing
# watch for - https://support.zabbix.com/browse/ZBX-6033
# Add all default values to all missing parameters for existing interfaces
# convert enabled to 0; disabled to 1
# convert monitored_by to int
# convert macros to zabbix native format - {$MACRO}
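The conversion to Zabbix's native macro format can be sketched as a small helper; `to_zabbix_macro` is a hypothetical name, not part of the module, and macro contexts (`{$MACRO:context}`) are ignored in this sketch:

```python
def to_zabbix_macro(name):
    """Convert a bare macro name to Zabbix's native {$MACRO} form.

    Zabbix handles macro names in upper case; the valid format is {$MACRO}.
    """
    name = name.upper()
    if not name.startswith('{$'):
        name = '{$' + name
    if not name.endswith('}'):
        name = name + '}'
    return name

print(to_zabbix_macro('db_password'))  # {$DB_PASSWORD}
```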
# Use proxy specified, or set to 0
# check if host exists
# get host id by host name
# If proxy is not specified as a module parameter, use the existing setting
# If monitored_by and proxy_group are not specified as module parameters, use the existing settings
# remove host
# if host_groups have not been specified when updating an existing host, just
# get the group_ids from the existing host without updating them.
# get existing host's interfaces
# Convert integer parameters from strings to ints
# fix values for properties
# When no interfaces are specified, copy existing interfaces
# Find already configured interfaces in requested interfaces and compile final list of
# interfaces in "interfaces" variable. Every element of the list defines one interface.
# If an element has "interfaceid" field then Zabbix will update existing interface otherwise
# a new interface will be added.
# if force == True overwrite existing interfaces with provided interfaces with the same type
# Macros not present in host.update will be removed if we don't copy them when force=no
# Tags not present in host.update will be removed if we don't copy them when force=no
# update host
# the host is already deleted.
# create host
# (c) 2017, sookido
# before Zabbix 6.2 host_groups and template_group are joined into groups parameter
# Check if any new templates would be linked or any existing would be unlinked
# Mark that there will be changes when at least one existing template will be unlinked
# If we got here we know that only one template was provided via template_name
# rules schema latest version
# Identify template names for IDs retrieval
# Template names are expected to reside in ["zabbix_export"]["templates"][*]["template"] for both data types
# Load all subelements for template that were provided by user
# Zabbix configuration.export does not differentiate python types (numbers are returned as strings)
# Assume new templates are being added when no IDs were found
# (c) 2017-2018, Antony Alekseyev <antony.alekseyev@gmail.com>
# get list of map elements (nodes)
# since generated IDs differ from real Zabbix ones, make real IDs match generated ones
# compare as strings since Zabbix API returns everything as strings
# transform Graphviz coordinates to Zabbix's ones
# If a string has single or double quotes around it, remove them.
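The quote-stripping step described here can be sketched as follows (the helper name `unquote` is hypothetical); only one layer of *matching* quotes is removed:

```python
def unquote(value):
    # Strip one layer of matching single or double quotes, if present.
    if len(value) >= 2 and value[0] == value[-1] and value[0] in ('"', "'"):
        return value[1:-1]
    return value
```

Mismatched or unbalanced quotes (e.g. `"abc`) are left untouched.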
# Zabbix >= 6.4
# Mandatory parameters check
# idp_type is ldap
# idp_type is saml
# str parameters
# boolean parameters
# No User Directory found with given name
# User Directory with given name exists
# Zabbix API returns provision_status as str; we need it as int to compare correctly
# 3 means custom expression.
# Copyright: (c) 2025, ONODERA Masaru <masaru-onodera@ieee.org>
# get housekeeping setting
# Check parameter about time is valid.
# update housekeeping setting
# get host macro
# create host macro
# update host macro
# No change is detected only when macro type == 0; when type is 1 or 2, Zabbix does not output its value.
# delete host macro
# Missing fping
# Sometimes the version on publicsuffix.org differs over many hours depending on where you request it from,
# so for now let's fetch it directly from GitHub.
# url = 'https://publicsuffix.org/list/public_suffix_list.dat'
# The run was skipped
# Copyright (c) 2021 Felix Fontein
# NOTE: This document fragment needs to be augmented by ZONE_ID_TYPE in a provider document fragment.
# NOTE: This document fragment augments the above standard DOCUMENTATION document fragment
# NOTE: This document fragment adds additional information on records.
# WARNING: This section is automatically generated by update-docs-fragments.py.
# Copyright (c) 2020 Markus Bergholz <markuman+spambelongstogoogle@gmail.com>
# Copyright (c) 2025 Markus Bergholz
# Copyright (c) 2017-2020 Felix Fontein
# Copyright (c) 2022 Felix Fontein
# handled by assert_requirements_present_dnspython
# type: ignore  # noqa: F811
# handled by assert_requirements_present
# Copyright (c) 2020-2021, Felix Fontein <felix@fontein.de>
# type: ignore  # TODO
# Find matching rules
# Select prevailing rule
# Determine suffix
# Return result
# Split into labels and normalize
# Get suffix length
# The official Public Suffix List
# For templating, we need to make the zone_id type 'string' or 'raw'.
# This converts the value to its proper type expected by the API.
# Copyright (c) 2022, Felix Fontein <felix@fontein.de>
# Copyright (c) 2017-2021 Felix Fontein
# Note that this is updated to the 'after' value in check mode (but not outside of check mode!)
# handled in assert_requirements_present()
# Simple quadratic sleep with maximum wait of max_sleep seconds
# Make sure we do not exceed the timeout by much by waiting
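The quadratic sleep with a deadline cap described in the two comments above can be sketched like this (function names are hypothetical, not the module's own):

```python
import time

def next_delay(attempt, max_sleep, remaining):
    # Quadratic backoff: attempt**2 seconds, capped both by max_sleep and
    # by the time remaining until the deadline (never negative).
    return max(min(attempt ** 2, max_sleep, remaining), 0)

def wait_quadratic(deadline, max_sleep=10):
    # Sleep in increasing steps until the deadline passes, without
    # exceeding the timeout by much on the final wait.
    attempt = 1
    while time.time() < deadline:
        time.sleep(next_delay(attempt, max_sleep, deadline - time.time()))
        attempt += 1
```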
# Extracted from https://github.com/benjaminp/six/blob/7d2a0e96602b83cd082896c8c224a87f1efe2111/six.py
# equals dns.message.DEFAULT_EDNS_PAYLOAD; larger values cause problems with Route53 nameservers for me
# For dnspython < 2.0.0
# For dnspython < 1.6.0
# Sanity check: do we have a valid nameserver IP?
# We keep the current nameservers
# type: dict[str, typing.Any] | None
# type: dict[str, typing.Any]
# type: DNSZone
# type: list[DNSRecord]
# type: dict[str, str] | None
# type: bytes | None
# type: int | None
# type: (...) -> tuple[bytes | None, dict[str, typing.Any]]
# type: AnsibleModule
# 24 * 60 * 60
# type: ZoneRecordAPI
# type: ProviderInformation
# TODO type
# type: list[DNSRecord] | None
# type: (...) -> tuple[bool, list[DNSAPIError], dict[str, list[DNSRecord]]]
# type: list[DNSAPIError]
# type: dict[str, list[DNSRecord]]
# Delete records
# Change records
# Create records
# Compose basic document
# if debug:
# type: dict[str, dns.rdatatype.RdataType]
# type: dict[dns.rdatatype.RdataType, str]
# type: dict[dns.rdatatype.RdataType, list[str]]
# The following data has been borrowed from community.general's dig lookup plugin.
# Note: adding support for RRSIG is hard work. :)
# has to be handled on application level
# type: dns.rdata.Rdata
# type: (...) -> dict[str, typing.Any]
# dnspython < 2.0.0
# Convert ulabel to alabel
# Always convert to lower-case
# type: (...) -> str | None
# type: HTTPHelper
# type: Collection[int] | None
# Check expected status
# type: bool | list[int] | tuple[int, ...]
# type: (...) -> tuple[dict[str, typing.Any] | list[typing.Any] | None, dict[str, typing.Any]]
# Check for unauthenticated
# Check Content-Type header
# Decode content as JSON
# type: (...) -> dict[str, str]
# q.q('Request: GET {0}'.format(full_url))
# q.q('Request: POST {0}'.format(full_url))
# q.q('Request: PUT {0}'.format(full_url))
# q.q('Request: DELETE {0}'.format(full_url))
# Valid values: 'decoded', 'encoded', 'encoded-no-octal' (deprecated), 'encoded-no-char-encoding'
# Valid values: 'api', 'quoted', 'unquoted'
# Valid values: 'decimal', 'octal'
# Do not touch record values
# We assume that records internally use decoded values
# This must be a decimal sequence
# It is apparently not - error out
# We need more letters for a three-digit decimal sequence
# Should not happen
# Add letter
# Make sure that we do not split up an escape sequence over multiple TXT strings
# Make sure that we do not split up a decimal sequence over multiple TXT strings
# Make sure that we do not split up a UTF-8 letter over multiple TXT strings
# Split if too long
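The splitting constraints listed above can be illustrated with a simplified sketch. This only keeps backslash escapes (`\X` and decimal `\DDD`) intact across chunk boundaries; the real code additionally avoids splitting multi-byte UTF-8 letters, and the function name is hypothetical:

```python
def split_txt_value(value, max_len=255):
    # Split a DNS TXT value into strings of at most max_len characters,
    # never splitting an escape sequence across two TXT strings.
    chunks = []
    current = ''
    i = 0
    while i < len(value):
        if value[i] == '\\':
            # A \DDD decimal sequence is 4 characters, \X is 2.
            seq_len = 4 if value[i + 1:i + 2].isdigit() else 2
            seq = value[i:i + seq_len]
            i += seq_len
        else:
            seq = value[i]
            i += 1
        if len(current) + len(seq) > max_len:
            chunks.append(current)
            current = ''
        current += seq
    if current:
        chunks.append(current)
    return chunks
```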
# 'email': zone.email,
# 'ttl': zone.ttl,
# 'nameserver': zone.nameserver,
# 'serial': zone.serial,
# 'template': zone.template,
# q.q('{0} {1} {2}'.format('=' * 4, msg, '=' * 40))
# q.q('Request: {0}'.format(command))
# q.q('Extracted result: {0} (type {1})'.format(res, type(res)))
# q.q('Result: {0}; extracted type {1}'.format(result, type(res)))
# The API documentation can be found here: https://api.ns1.hosttech.eu/api/documentation/
# API returns '', we want None
# We cannot simply return `_create_zone_from_json(zone)`, since this contains less information!
# This module_utils is PRIVATE and should only be used by this collection. Breaking changes can occur any time.
# Create API
# Get zone information
# Process parameters
# Group existing record sets
# Data required for diff
# Create action lists
# Otherwise create new record
# If pruning, remove superfluous record sets
# Compose result
# Apply changes
# Include diff information
# Retrieve requested information
# Find matching records
# Convert records
# Format output
# Parse records
# Compare records
# Create record
# Update record
# Delete record
# Determine what to do
# Mismatch: user wants to overwrite?
# on_existing == 'keep'
# If there's a record to delete, change it to new record
# Determine whether there's something to do
# Actually do something
# Extract prefix
# Extract prefix if necessary
# If normalized_record is not specified, use prefix
# Convert record to prefix
# These errors are not documented, but in my experience this is what the API seems to return:
# Error 422 means that at least one of the records was not valid
# This is the list of invalid records that was detected before accepting the whole set
# This is the list of valid records that were not processed
# This is the list of correctly processed records
# Currently Hetzner's bulk update API seems to be broken, it always returns the error message
# "An invalid response was received from the upstream server". That's why for now, we always
# fall back to the default implementation.
# pylint: disable=using-constant-test
# Copyright: (c) 2016, Jorge Rodriguez <jorge.rodriguez@tiriel.eu>
# Parameters for RabbitMQ modules
# (c) 2018, John Imison <john+github@imison.net>
# TODO: In the future consider checking content_type and handle text/binary data differently.
# If we didn't get a method_frame, exit.
# Copyright: (c) 2013, Chatham Financial <oss@chathamfinancial.com>
# 3.7.x erlang style output
# 3.8.x style output
# Copyright: (c) 2015, Manuel Sousa <manuel.sousa@gmail.com>
# Ensure provided data is safe to use in a URL.
# https://docs.python.org/3/library/urllib.parse.html#url-quoting
# NOTE: This will also encode '/' characters, as they are required
# to be percent encoded in the RabbitMQ management API.
# exchange plugin type to plugin name mapping
# Check if exchange already exists
# Check if attributes change on existing exchange
# Exit if check_mode
# Do changes
# RabbitMQ 3.6.7 changed this response code from 204 to 201
# This is a dummy return to prevent linters from throwing errors.
# check_rc is not passed to the `run_command` method directly to allow for more fine grained checking of
# error messages returned by `rabbitmqctl`.
# Filter out headers from the output of the command in case they are still present
# Normalize the arguments
# No such path exists.
# Copyright: (c) 2021, Damian Dabrowski <damian@dabrowski.cloud>
# else don't care
# arg_name in current_args
# Check if queue already exists
# Sync arguments with parameters (the final request uses module.params['arguments'])
# Check if attributes change on existing queue
# Copyright: (c) 2018, Hiroyuki Matsuo <h.matsuo.engineer@gmail.com>
# Copyright: (c) 2017, Juergen Kirschbaum <jk@jk-itc.de>
# RabbitMQ 3.8.x and above return table header, ignore it
# Copyright: (c) 2013, John Dewey <john@dewey.ws>
# API parameters.
# Check if the vhost and name should be defined.
# TODO: verify the endpoint is supported.
# 3.8.x style output
# PARSE THE RESPONSE DATA.
# The response data is a json list with field names. The logic of the code expects tab delimited strings.
# Prior to 3.7.0, the apply-to & pattern fields were swapped.
# Remove first header line from policies list for version > 3.7.9
# Change fields order in rabbitmqctl output in version 3.7
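The JSON-to-tab-delimited conversion mentioned above can be sketched as follows, assuming the payload is a JSON list of objects (a simplification; the actual `rabbitmqctl` output shape may differ by version, and the function name is hypothetical):

```python
import json

def json_to_tsv(payload):
    # Rebuild the tab-delimited lines that the older parsing logic
    # expects from a JSON list of row objects. json.loads preserves
    # key order, so field order matches the emitted JSON.
    rows = json.loads(payload)
    return ['\t'.join(str(row[key]) for key in row) for row in rows]
```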
# Priority must be a number.
# API Params.
# Copyright: (c) 2018, John Imison <john+github@imison.net>
# notification/rabbitmq_basic_publish.py
# Fail if url is specified and other conflicting parameters have been specified
# Fail if url not specified and there is a missing parameter to build the url
# If src (file) is defined and content_type is left as default, do a mime lookup on the file
# If neither queue nor exchange is defined, post to a random queue; RabbitMQ will return the name of the automatically generated queue.
# https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/cloudstack.py#L150
# If routing_key is not defined but the queue is, use the queue name as the routing_key.
# If exchange is not specified use the default/nameless exchange
# self.module.fail_json(msg="%s %s %s" % (to_native(self.queue), to_native(self.exchange), to_native(self.routing_key)))
# from ansible.module_utils.compat.version import LooseVersion
# we want support for SHA1 signatures
# Copyright (c) 2017, Yanis Guenane <yanis+ansible@guenane.org>
# Note that this doc fragment is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
# Copyright (c) 2025 Ansible project
# Corresponds to the plugins.module_utils._cryptography_dep.COLLECTION_MINIMUM_CRYPTOGRAPHY_VERSION constant
# Copyright (c) 2016-2017, Yanis Guenane <yanis+ansible@guenane.org>
# Copyright (c) 2017, Markus Teufelberger <mteufelberger+ansible@mgit.at>
# Basic documentation fragment without account data
# Account data documentation fragment
# No account data documentation fragment
# Copyright (c) 2016, Yanis Guenane <yanis+ansible@guenane.org>
# Note that this plugin util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Copyright (c) 2012-2013 Michael DeHaan <michael.dehaan@gmail.com>
# Copyright (c) 2016 Toshio Kuratomi <tkuratomi@ansible.com>
# Copyright (c) 2020 Felix Fontein <felix@fontein.de>
# Parts taken from ansible.module_utils.basic and ansible.module_utils.common.warnings.
# NOTE: THIS IS ONLY FOR ACTION PLUGINS!
# Internal data
# AnsibleModule data
# We cannot use ModuleArgumentSpecValidator directly since it uses mechanisms for reporting
# warnings and deprecations that do not work in plugins. This is a copy of that code adjusted
# for our use-case:
# Before ansible-core 2.14.2, deprecations were always for aliases:
# Since ansible-core 2.14.2, a message is present that can be directly printed:
# Copied from ansible.module_utils.common.warnings:
# For compatibility, we accept that neither version nor date is set,
# and treat that the same as if version had been set
# Copyright (c) 2022 Felix Fontein <felix@fontein.de>
# NOTE: THIS IS ONLY FOR FILTER PLUGINS!
# Copyright (c) 2018, David Kainz <dkainz@mgit.at> <dave.jokain@gmx.at>
# should never be None, but the type checker doesn't know
# Drop the certificate path
# Copyright (c) 2018 Felix Fontein <felix@fontein.de>
# Get part of orders list
# Add order URLs to result list
# Extract URL of next part of results list
# Prevent infinite loop
# Check whether account exists
# Make sure promised data is there
# Retrieve orders list
# Copyright (c) 2018, Felix Fontein <felix@fontein.de>
# Check issuer
# Check signature
# Unknown public key type
# Try to load PEM certificate
# Load chain
# Check chain
# Load intermediate certificates
# Load root certificates
# Try to complete chain
# Do not try to complete the chain when it is already ending with a root certificate
# Return results
# discard initial handshake from server for this naive implementation
# Clear default ctx options
# For each item in the tls_ctx_options list
# If the item is a string_type
# Convert tls_ctx_option to a native string
# Get the tls_ctx_option_str attribute from ssl
# If tls_ctx_option_attr is an integer
# Set tls_ctx_option_int to the attribute value
# If tls_ctx_option_attr is not an integer
# If the item is an integer
# Set tls_ctx_option_int to the item value
# If the item is not a string nor integer
# make pylint happy; this code is actually unreachable
# Add the int value of the item to ctx options
# (pylint does not yet notice that module.fail_json cannot return)
# pylint: disable=possibly-used-before-assignment
# The official way to access this has been added in https://github.com/python/cpython/pull/109113/files.
# We are basically doing the same for older Python versions. The internal API needed for this was added
# in https://github.com/python/cpython/commit/666991fc598bc312d72aff0078ecb553f0a968f1, which was first
# released in Python 3.10.0.
# This is of type ssl._ssl._SSLSocket
# This works with Python 3.13+
# Unfortunately due to a bug (https://github.com/python/cpython/issues/118658) some early pre-releases of
# Python 3.13 do not return lists of byte strings, but lists of _ssl.Certificate objects. This is going to
# be fixed by https://github.com/python/cpython/pull/118669. For now we convert the certificates ourselves
# if they are not byte strings to work around this.
# We need the -1 offset to get the same values as pyOpenSSL
# Make sure that for state == changed_key, one of
# new_account_key_src and new_account_key_content are specified
# Make sure padding is there
# Make sure key is Base64 encoded
# Account is not yet deactivated
# Deactivate it
# Parse new account key
# Verify that the account exists and has not been deactivated
# Now we can start the account key rollover
# Compose inner signed message
# https://tools.ietf.org/html/rfc8555#section-7.3.5
# specified in draft 12 and older
# specified in draft 13 and newer
# Send request and verify result
# Kind of fake diff_after
# Step 1: load order
# Step 2: find all pending authorizations
# Step 3: figure out challenges to use
# Step 4: validate pending authorizations
# Copyright (c) 2017, Guillaume Delpierre <gde@llew.me>
# Try to build encryption builder for compatibility
# Store fake object which can be used to retrieve the components back
# ignore result
# This check is required because pyOpenSSL will not return a friendly name
# if the private key is not set in the file
# Empty method because OpenSSLObject wants this
# Convert
# used to get <luks-name> out of lsblk output in format 'crypt <luks-name>'
# regex takes care of any possible blank characters
# used to get </luks/device> out of lsblk output
# in format 'device: </luks/device>'
# See https://gitlab.com/cryptsetup/cryptsetup/-/wikis/LUKS-standard/on-disk-format.pdf
# See https://gitlab.com/cryptsetup/LUKS2-docs/-/blob/master/luks2_doc_wip.pdf
# f.seek(0)
# apparently each device can have only one LUKS container on it
# create a new luks container; use batch mode to auto confirm
# For LUKS2, sometimes both `cryptsetup erase` and `wipefs` do **not**
# erase all LUKS signatures (they seem to miss the second header). That's
# why we do it ourselves here.
# LUKS2 header dumps use human-readable indented output.
# Thus we have to look out for 'Keyslots:' and count the
# number of indented keyslot numbers.
# LUKS1 header dumps have one line per keyslot with ENABLED
# or DISABLED in them. We count such lines with ENABLED.
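The two counting strategies described above (LUKS2: indented entries under `Keyslots:`; LUKS1: one `ENABLED`/`DISABLED` line per keyslot) can be sketched like this. The function name and the exact dump layout are assumptions based on typical `cryptsetup luksDump` output:

```python
import re

def count_keyslots(dump, luks_version):
    # Count used keyslots in `cryptsetup luksDump` output.
    if luks_version == 2:
        # LUKS2: find the 'Keyslots:' section and count indented
        # entries of the form '  <number>: ...'.
        count = 0
        in_keyslots = False
        for line in dump.splitlines():
            if line == 'Keyslots:':
                in_keyslots = True
                continue
            if in_keyslots:
                if re.match(r'\s+\d+: ', line):
                    count += 1
                elif line and not line.startswith(' '):
                    break  # next top-level section, e.g. 'Tokens:'
        return count
    # LUKS1: one line per keyslot containing ENABLED or DISABLED.
    return sum(1 for line in dump.splitlines() if 'ENABLED' in line)
```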
# Since we supply -q no passphrase is needed
# This check is necessary due to cryptsetup in version 2.0.3 not printing 'No usable keyslot is available'
# when using the --key-slot parameter in combination with --test-passphrase
# try to obtain luks name - it may be already opened
# container is not open
# container is already opened
# the container is already open but with a different name:
# suspicious. back off
# container is opened and the names match
# conditions for open not fulfilled
# conditions for close not fulfilled
# successfully getting name based on device means that luks is open
# successfully getting device based on name means that luks is open
# conditions for adding a key not fulfilled
# conditions for removing a key not fulfilled
# available arguments/parameters that a user can pass
# conditions not allowed to run
# The conditions are in order to allow more operations in one run.
# (e.g. create luks and add a key to it)
# luks create
# ensured in conditions.luks_create()
# luks open
# ensured in conditions.luks_open()
# luks close
# luks add key
# ensured in conditions.luks_add_key()
# luks remove key
# ensured in conditions.luks_remove_key()
# luks remove
# ensured in conditions.luks_remove()
# Success - return result
# We use None instead of a magic string for 'no challenge'
# Make sure account exists
# Extract list of identifiers from CSR
# We are in the second stage if data.order_uri is given (which has been
# stored in self.order_uri by the constructor).
# Skip valid authentications: their challenges are already valid
# and do not need to be returned
# We drop the type from the key to preserve backwards compatibility
# Step 1: obtain challenge information
# For ACME v2, we obtain the order object by fetching the
# order URI, and extract the information from there.
# Step 2: validate pending challenges
# If there is no challenge, we must check whether the authz is valid
# Step 3: wait for authzs to validate
# Retrieve alternate chains
# Prepare return value for all alternate chains
# Try to select alternate chain depending on criteria
# ignore errors
# If checkmode is active, base the changed state solely on the status
# of the certificate file as all other actions (accessing an account, checking
# the authorization status...) would lead to potential changes of the current
# state
# First run: start challenges / start new order
# Second run: finish challenges, and get certificate
# Remove "type:" from key
# Copyright (c) 2020, Felix Fontein <felix@fontein.de>
# Convert byte string to ASN1 encoded octet string
# Get hold of private key
# Some common attributes
# Generate regular self-signed certificate
# Process challenge
# Copyright (c) 2020, Felix Fontein <felix@fontein.de>
# Load certificate
# Get hold of private key (if available) and make sure it comes from disk
# Revoke certificate
# Step 1: load and parse private key
# Step 2: sign revocation request with private key
# Step 1: get hold of account URI
# Step 2: sign revocation request with account key
# Standardized error from draft 14 on (https://tools.ietf.org/html/rfc8555#section-7.6)
# Hack for Boulder errors
# Fallback: boulder returns this in case the certificate was already revoked.
# If we know the certificate was already revoked, we do not fail,
# but successfully terminate while indicating no change
# Copyright (c) 2019, Patrick Pichler <ppichler+ansible@mgit.at>
# Implementation with using cryptography
# Copyright (c) 2018 Felix Fontein (@felixfontein)
# Get hold of ACMEClient and ACMEAccount objects (includes directory)
# Do we have to do more requests?
# Do request
# only POSTs can change
# Update results
# See if we can parse the result as JSON
# Fail if error was returned
# Done!
# Copyright (c) 2017, Thom Wiggers  <ansible@thomwiggers.nl>
# only generate when necessary
# fix permissions (checking force not necessary as done above)
# Fix done implicitly by
# AnsibleModule.set_fs_attributes_if_different
# create a tempfile
# Ansible will delete the file on exit
# openssl dhparam -out <path> <bits>
# If the call failed the file probably does not exist or is
# unreadable
# output contains "(xxxx bit)"
# No "xxxx bit" in output
# if output contains "WARNING" we've got a problem
# Generate parameters
# Serialize parameters
# Write result
# Load parameters
# Detection what is possible
# First try cryptography, then OpenSSL
# Success?
# Regenerate
# Step 2 and 3: download certificate(s) and chain(s)
# Step 2: wait for authorizations to validate
# Step 3: finalize order, wait, then download certificate(s) and chain(s)
# Step 4: pick chain, write certificates, and provide return values
# Copyright (c) 2021 Felix Fontein <felix@fontein.de>
# While UnsupportedAlgorithm got added in cryptography 0.1, InternalError
# only got added in 0.2, so let's guard the import
# Test for DSA
# added in 0.5 - https://cryptography.io/en/latest/hazmat/primitives/asymmetric/dsa/
# added later in 1.5
# Test for RSA
# added in 0.5 - https://cryptography.io/en/latest/hazmat/primitives/asymmetric/rsa/
# added later in 1.4
# Test for Ed25519
# added in 2.6 - https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ed25519/
# added with the primitive in 2.6
# Test for Ed448
# added in 2.6 - https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ed448/
# Test for X25519
# added in 2.0 - https://cryptography.io/en/latest/hazmat/primitives/asymmetric/x25519/
# added later in 2.5
# Some versions do not support serialization and deserialization - use generate() instead
# Test for X448
# added in 2.5 - https://cryptography.io/en/latest/hazmat/primitives/asymmetric/x448/
# Test for ECC
# added in 0.5 - https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ec/
# pylint: disable=duplicate-except,bad-except-order
# On Fedora 41, some curves result in InternalError. This is probably because
# Fedora's cryptography is linked against the system libssl, which has the
# curves removed.
# Read and dump public key. Makes sure that the comment is stripped off.
# Copyright (c) 2019, Felix Fontein <felix@fontein.de>
# Load certificate from file or content
# Specify serial_number (and potentially issuer) directly
# All other options
# Normalize to IDNA. If this is user-provided, it was already converted to
# IDNA (by cryptography_get_name) and thus the `idna` library is present.
# If this is coming from cryptography and is not already in IDNA (i.e. ascii),
# cryptography < 2.1 must be in use, which depends on `idna`. So this should
# not require `idna` except if it was already used by code earlier during
# this invocation.
# Throw out revocation_date
# We do not simply use a set so that duplicate entries are treated correctly
# result['digest'] = cryptography_oid_to_name(self.crl.signature_algorithm_oid)
# Call get_private_key_data() to make sure that exceptions are raised now:
# In case the module's input (`content`) is returned as `privatekey`:
# Since `content` is no_log=True, `privatekey`'s value will get replaced by
# VALUE_SPECIFIED_IN_NO_LOG_PARAMETER. To avoid this, we remove the value of
# `content` from module.no_log_values. Since we explicitly set
# `module.no_log = True`, this should be safe.
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# We assume that naive datetime objects use timezone UTC!
# Convert to native datetime object
# timestamp.timestamp() is offset by the local timezone if timestamp has no timezone
# not matched or only a single "+" or "-"
# Relative time
# Absolute time
# this also parses '202401020304Z', but as datetime(2024, 1, 2, 3, 0, 4)
# this also parses '202401020304+0000', but as datetime(2024, 1, 2, 3, 0, 4, tzinfo=...)
# Corresponds to the community.crypto.cryptography_dep.minimum doc fragment
# Find out parameters for file
# The path argument is only supported in Ansible 2.10+. Fall back to
# pre-2.10 behavior of module_utils/crypto.py for older Ansible versions.
# Create tempfile name
# if we fail, let Ansible try to remove the file
# Create tempfile
# Update destination to wanted permissions
# Move tempfile to final destination
# Try to update permissions again
# Copyright (c) 2020, Doug Stanley <doug+ansible@technologixllc.com>
# Protocol References
# https://datatracker.ietf.org/doc/html/rfc4251
# https://datatracker.ietf.org/doc/html/rfc4253
# https://datatracker.ietf.org/doc/html/rfc5656
# https://datatracker.ietf.org/doc/html/rfc8032
# Inspired by:
# ------------
# https://github.com/pyca/cryptography/blob/main/src/cryptography/hazmat/primitives/serialization/ssh.py
# https://github.com/paramiko/paramiko/blob/master/paramiko/message.py
# 0 (False) or 1 (True) encoded as a single byte
# Unsigned 8-bit integer in network-byte-order
# Unsigned 32-bit integer in network-byte-order
# Unsigned 32-bit little endian integer
# Unsigned 64-bit integer in network-byte-order
# See https://datatracker.ietf.org/doc/html/rfc4251#section-5 for SSH data types
# Cast to bytes is required as a memoryview slice is itself a memoryview
# Convenience function, but not an official data type from SSH
# data is doubly-encoded
# https://datatracker.ietf.org/doc/html/rfc4253#section-6.6
# https://datatracker.ietf.org/doc/html/rfc8332#section-3
# https://datatracker.ietf.org/doc/html/rfc5656#section-3.1.2
# https://datatracker.ietf.org/doc/html/rfc8032#section-5.1.2
# SSH option data is encoded twice though this behavior is not documented
# Handles values which require \x00 or \xFF to pad sign-bit
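The RFC 4251 encodings listed above can be sketched with `struct`; this is a minimal illustration covering only non-negative `mpint` values (so only the `\x00` sign-bit padding case, not `\xFF` for negatives), and the function names are hypothetical:

```python
import struct

def ssh_uint32(value):
    # Unsigned 32-bit integer in network byte order (RFC 4251).
    return struct.pack('>I', value)

def ssh_string(data):
    # SSH 'string': uint32 length prefix followed by the raw bytes.
    return ssh_uint32(len(data)) + data

def ssh_mpint(value):
    # SSH 'mpint' (non-negative only): big-endian two's complement with
    # minimal length; a leading zero byte pads values whose high bit
    # would otherwise be set.
    if value == 0:
        return ssh_string(b'')
    data = value.to_bytes((value.bit_length() + 8) // 8, 'big')
    return ssh_string(data)

print(ssh_mpint(0x80).hex())  # 000000020080 (matches the RFC 4251 example)
```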
# https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?annotate=HEAD
# See https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?annotate=HEAD
# See https://datatracker.ietf.org/doc/html/rfc5656#section-6.1
# We have str, but we're expecting a specific literal:
# See https://datatracker.ietf.org/doc/html/rfc4253#section-6.6
# mypy doesn't understand that the setter accepts other types than the getter:
# Public exponent should always be 65537 to prevent issues
# if improper padding is used during signing
# TODO: Maybe we should check whether the public key actually fits the private key?
# Ed25519 keys are always of size 256 and do not have a key_size attribute
# Revert to PEM if key could not be loaded in SSH format
# "y" must be entered in response to the "overwrite" prompt
# User input is ignored for `key size` when `key type` is ed25519
# Default to OpenSSH 7.8 compatibility when OpenSSH is not installed
# OpenSSH made SSH formatted private keys available in version 6.5,
# but still defaulted to PKCS1 format with the exception of ed25519 keys
# Simulates the null output of ssh-keygen
# Cryptography >= 3.0 uses a SSH key loader which does not raise an exception when a passphrase is provided
# when loading an unencrypted key
# avoids breaking behavior and prevents
# automatic conversions with OpenSSH upgrades
# Parse data
# Process link-up headers if there was no chain in reply
# -1 usually means connection problems
# 429 and 503 should have a Retry-After header (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After)
# Check whether self.version matches what we expect
# Make sure that 'meta' is always available
# Set to true to enable logging of all signed requests
# account_key path and content are mutually exclusive
# Grab account URI from module parameters.
# Make sure empty string is treated as None.
# Make sure self.account_jws_header is updated
# POST-as-GET
# POST
# In case of badNonce error, try again (up to 5 times)
# (https://tools.ietf.org/html/rfc8555#section-6.7)
# Try POST-as-GET
# Instead, do unauthenticated GET
# Do unauthenticated GET
# Perform unauthenticated GET
# Process result
# Include Retry-After header if asked for
# Backend autodetect
# Create backend object
# Either we could not import cryptography at all, or there was an unexpected error
# We succeeded importing cryptography, but its version is too old.
# Check common module parameters
# AnsibleModule() changes the locale, so change it back to C because we rely
# on datetime.datetime.strptime() when parsing certificate dates.
# RFC 3339 (https://www.rfc-editor.org/info/rfc3339)
# Python does not support anything smaller than microseconds
# (Golang supports nanoseconds, Boulder often emits more fractional digits, which Python chokes on)
# It is not clear from the RFC whether the finalize call returns the order object or not.
# Instead of using the result, we call self.refresh(client) below.
# Normalize DNS names and IPs
# https://tools.ietf.org/html/rfc8555#section-8.3
# https://tools.ietf.org/html/rfc8555#section-8.4
# https://www.rfc-editor.org/rfc/rfc8737.html#section-3
# IPv4/IPv6 address: use reverse mapping (RFC1034, RFC3596)
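The reverse mapping for IP addresses (RFC 1034 for `in-addr.arpa`, RFC 3596 for `ip6.arpa`) is available directly from the standard library; the wrapper name here is hypothetical:

```python
import ipaddress

def reverse_name(address):
    # Reverse-mapping name for an IPv4 or IPv6 address, as used when an
    # identifier is an IP rather than a DNS name.
    return ipaddress.ip_address(address).reverse_pointer

print(reverse_name('192.0.2.1'))  # 1.2.0.192.in-addr.arpa
```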
# Unknown challenge type: ignore
# While 'challenges' is a required field, apparently not every CA cares
# (https://github.com/ansible-collections/community.crypto/issues/824)
# multiple challenges could have failed at this point, gather error
# details for all of them before failing
# If certificate file contains other certs appended
# (like intermediate certificates), ignore these.
# First try a number of seconds
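The "seconds first, then HTTP-date" fallback for a Retry-After header could be sketched like this; `parse_retry_after` is a hypothetical helper, not the module's actual function:

```python
import email.utils
from datetime import datetime, timezone

def parse_retry_after(value, now=None):
    # Retry-After carries either a delay in seconds or an HTTP-date.
    # First try a number of seconds; fall back to parsing the date.
    try:
        return max(0.0, float(value))
    except ValueError:
        pass
    target = email.utils.parsedate_to_datetime(value)
    now = now or datetime.now(timezone.utc)
    return max(0.0, (target - now).total_seconds())
```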
# Obtain certificate info if not provided
# Convert Authority Key Identifier to string
# Convert serial number to string
# Compose cert ID
# Copyright (c) 2013, Romeo Theriault <romeot () hawaii.edu>
# This function was adapted from an earlier version of https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/uri.py
# https://www.rfc-editor.org/rfc/rfc7807#section-3.1
# Try to get hold of content, if response is given and content is not provided
# Make sure that content_json is None or a dictionary
# Try to get hold of JSON decoded content, when content is given and JSON not provided
# For some reason Python's strptime() does not return any timezone information,
# even though the information is there and a supported timezone for all supported
# Python implementations (GMT). So we have to modify the datetime object by
# replacing it by UTC.
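The tzinfo replacement described above might look like this (the helper name is illustrative):

```python
from datetime import datetime, timezone

def parse_http_date_utc(value):
    # strptime matches 'GMT' via %Z, but the timezone information is not
    # reliably attached to the result, so explicitly set tzinfo to UTC.
    parsed = datetime.strptime(value, "%a, %d %b %Y %H:%M:%S %Z")
    return parsed.replace(tzinfo=timezone.utc)
```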
# If key_file is not given, but key_content, write that to a temporary file
# Parse key
# This happens for example if openssl_privatekey created this key
# (as opposed to the OpenSSL binary). For now, we assume this is
# an RSA key.
# FIXME: add some kind of auto-detection
# Not yet supported on Let's Encrypt side, see
# https://github.com/letsencrypt/boulder/issues/2217
# We do not want to error out on something IPAddress() cannot parse
# If key_content is not given, read key_file
# Make sure we have at most one PEM. Otherwise cryptography 36.0.0 will barf.
# Since we need to pass this to t.cast(), we need a version that doesn't break with Python 3.7 and 3.8
# Some ACME servers such as ZeroSSL do not like it when you try to register an existing account
# and provide external_account_binding credentials. Thus we first send a request with allow_creation=False
# to see whether the account already exists.
# Unfortunately, for other ACME servers it's the other way around: (at least some) HARICA endpoints
# do not allow *any* access without external account data. That's why we catch errors and check
# for 'externalAccountRequired'.
# Note that we pass contact here: ZeroSSL does not accept registration calls without contacts, even
# if onlyReturnExisting is set to true.
# An account already exists! Return data
# An account does not yet exist. Try to create one next.
# Either another error happened, or we got 'externalAccountRequired' and external account data was not supplied
# => re-raise exception!
# In this case, the server really wants external account data.
# The below code tries to create the account with external account data present.
# https://tools.ietf.org/html/rfc8555#section-7.3.1
# Account did not exist
# Account did exist
# A bug in Pebble (https://github.com/letsencrypt/pebble/issues/179) and
# Boulder (https://github.com/letsencrypt/boulder/issues/3971): this should
# not return a valid account object according to
# https://tools.ietf.org/html/rfc8555#section-7.3.6:
# Account does not exist (and we did not try to create it)
# (According to RFC 8555, Section 7.3.1, the HTTP status code MUST be 400.
# Unfortunately Digicert does not care and sends 404 instead.)
# Account has been deactivated; currently works for Pebble; has not been
# implemented for Boulder (https://github.com/letsencrypt/boulder/issues/3971),
# might need adjustment in error detection.
# try POST-as-GET first (draft-15 or newer)
# check whether that failed with a malformed request error
# retry as a regular POST (with no changed data) for pre-draft-15 ACME servers
# Returned when account is deactivated
# Returned when account does not exist
# Verify that the account key belongs to the URI.
# (If update_contact is True, this will be done below.)
# Create request
# No change?
# Apply change
# This particular file snippet, and this file snippet only, is licensed under the
# Apache 2.0 License. Modules you write using this snippet, which is embedded
# dynamically by Ansible, still belong to the author of the module, and may assign
# their own license to the complete work.
# This excerpt is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file at
# https://github.com/pyca/cryptography/blob/master/LICENSE for complete details.
# The Apache 2.0 license has been included as LICENSES/Apache-2.0.txt in this collection.
# The BSD License license has been included as LICENSES/BSD-3-Clause.txt in this collection.
# SPDX-License-Identifier: Apache-2.0 OR BSD-3-Clause
# Adapted from cryptography's hazmat/backends/openssl/decode_asn1.py
# Copyright (c) 2015, 2016 Paul Kehrer (@reaperhulk)
# Copyright (c) 2017 Fraser Tweedale (@frasertweedale)
# Relevant commits from cryptography project (https://github.com/pyca/cryptography):
# WARNING: this function no longer works with cryptography 35.0.0 and newer!
# Set to 80 on the recommendation of
# https://www.openssl.org/docs/crypto/OBJ_nid2ln.html#return_values
# But OIDs longer than this occur in real life (e.g. Active
# Directory makes some very long OIDs).  So we need to detect
# and properly handle the case where the default buffer is not
# big enough.
# 'res' is the number of bytes that *would* be written if the
# buffer is large enough.  If 'res' > buf_len - 1, we need to
# alloc a big-enough buffer and go again.
# account for terminating null byte
# See https://github.com/openssl/openssl/blob/master/crypto/pem/pem_pkey.c#L40-L85
# (PEM_read_bio_PrivateKey)
# and https://github.com/openssl/openssl/blob/master/include/openssl/pem.h#L46-L47
# (PEM_STRING_PKCS8, PEM_STRING_PKCS8INF)
# Error handled in the calling module.
# This list of preferred fingerprints is used when prefer_one=True is supplied to the
# fingerprinting methods.
# Sort algorithms to have the ones in PREFERRED_FINGERPRINTS at the beginning
# This can happen for hash algorithms not supported in FIPS mode
# (https://github.com/ansible/ansible/issues/67213)
# Certain hash functions have a hexdigest() which expects a length parameter
# TODO: once cryptography has a _utc variant of InvalidityDate.invalidity_date, set this
# Older cryptography versions do not have signature_algorithm_oid yet
# type: ignore[attr-defined]  # pylint: disable=protected-access
# Compute len_e = floor(log_2(e))
# Compute f**e mod m
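Both computations have direct Python equivalents; the names below are illustrative:

```python
def floor_log2(e):
    # floor(log_2(e)) of a positive integer is its bit length minus one.
    return e.bit_length() - 1

def mod_exp(f, e, m):
    # f**e mod m; Python's three-argument pow() performs fast
    # square-and-multiply modular exponentiation.
    return pow(f, e, m)
```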
# The constant in the next line is the product of all primes < 200
# Explicitly check for all primes < 200
# TODO: maybe do some iterations of Miller-Rabin to increase confidence
# (https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test)
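The prime-product trick can be sketched as a single gcd: one shared factor with the product of all primes below 200 proves compositeness (for candidates above 199), though it can never prove primality on its own. Names here are illustrative:

```python
from math import gcd
from functools import reduce

PRIMES_BELOW_200 = [
    2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61,
    67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137,
    139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199,
]
PRIME_PRODUCT = reduce(lambda a, b: a * b, PRIMES_BELOW_200)

def has_small_prime_factor(n):
    # One gcd against the precomputed product detects any prime factor
    # below 200 in a single operation.
    return gcd(n, PRIME_PRODUCT) > 1
```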
# This has been extracted from the OpenSSL project's objects.txt:
# Extracted with https://gist.github.com/felixfontein/376748017ad65ead093d56a45a5bf376
# In case the following data structure has any copyrightable content, note that it is licensed as follows:
# Copyright (c) the OpenSSL contributors
# Licensed under the Apache License 2.0
# https://github.com/openssl/openssl/blob/master/LICENSE.txt or LICENSES/Apache-2.0.txt
# Copyright (c) 2020, Jordan Borean <jborean93@gmail.com>
# An ASN.1 serialized as a string in the OpenSSL format:
# 'modifier':
# 'type':
# 'value':
# Universal tag numbers that can be encoded.
# NOTE: This is *NOT* the same as packing an ASN.1 INTEGER like value.
# Continue to shift the number by 7 bits and pack into an octet until the
# value is fully packed.
# First round (last octet) must not have the MSB set.
# Reverse to ensure the higher order octets are first.
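The 7-bit packing loop described above can be sketched as follows (function name illustrative). The first octet produced becomes the last after reversal, so it is the only one without the continuation bit:

```python
def pack_base128(value):
    # Encode a non-negative integer in base 128 with continuation bits:
    # every octet except the last has its most significant bit set.
    # NOTE: this is *not* the same as packing an ASN.1 INTEGER value.
    octets = bytearray()
    while True:
        octet = value & 0x7F
        value >>= 7
        if octets:          # not the final (last-emitted) octet: set MSB
            octet |= 0x80
        octets.append(octet)
        if not value:
            break
    octets.reverse()        # higher-order octets come first
    return bytes(octets)
```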
# We should only do a universal type tag if not IMPLICITLY tagged or the tag class is not universal.
# When adding support for more types this should be looked into further. For now it works with UTF8Strings.
# Bit 8 and 7 denotes the class.
# Bit 6 denotes whether the value is primitive or constructed.
# Bits 5-1 contain the tag number, if it cannot be encoded in these 5 bits
# then they are set and another octet(s) is used to denote the tag number.
# If the length can be encoded in 7 bits only 1 octet is required.
# Otherwise the length must be encoded across multiple octets
# Reverse to make the higher octets first.
# The first length octet must have the MSB set alongside the number of
# octets the length was encoded in.
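The definite-form length rules above can be sketched like this (name illustrative):

```python
def pack_asn1_length(length):
    # Lengths up to 127 fit in a single octet with the MSB clear.
    if length < 0x80:
        return bytes([length])
    # Otherwise encode the length big-endian across multiple octets and
    # prefix an octet with the MSB set plus the count of length octets.
    octets = bytearray()
    while length:
        octets.append(length & 0xFF)
        length >>= 8
    octets.reverse()  # higher-order octets first
    return bytes([0x80 | len(octets)]) + bytes(octets)
```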
# This is a separate try/except since this is only present in cryptography 36.0.0 or newer
# Since cryptography will not give us the DER value for an extension
# (that is only stored for unrecognized extensions), we have to re-do
# the extension parsing ourselves.
# We access a *lot* of internal APIs here, so let's disable that message...
# With cryptography 35.0.0, we can no longer use obj2txt. Unfortunately it still does
# not allow getting the raw value of an extension, so we have to use this ugly hack:
# Decoding a hex string
# Decoding a regular string
# Since IDNA does not like '*' or empty labels (except one empty label at the end),
# we split and let IDNA only handle labels that are neither empty nor '*'.
# otherName can either be a raw ASN.1 hex string or in the format that OpenSSL works with.
# See https://www.openssl.org/docs/man1.0.2/man5/x509v3_config.html - Subject Alternative Name for more
# details on the format expected.
# According to https://datatracker.ietf.org/doc/html/rfc4514.html#section-2.1 the
# list needs to be reversed, and joined by commas
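The reversal-and-join step can be sketched as follows; this assumes the individual RDN strings are already escaped, and the helper name is illustrative:

```python
def rdns_to_rfc4514(rdn_strings):
    # Per RFC 4514 section 2.1, the RDNSequence is rendered in reverse
    # order relative to its encoded order, joined by commas.
    return ",".join(reversed(rdn_strings))
```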
# Main code for cryptography 36.0.0 and forward
# Requires cryptography 36.0.0 or newer
# Backwards compatibility code for cryptography 35.x
# See https://github.com/pyca/cryptography/issues/5760#issuecomment-842687238
# This code basically does what load_key_and_certificates() does, but without error-checking.
# Since load_key_and_certificates succeeded, it should not fail.
# Backwards compatibility code for cryptography < 35.0.0
# Check whether certificate is signed by CA certificate
# Check subject
# Check AuthorityKeyIdentifier
# not used
# From the object called `module`, only the following properties are used:
# OpenSSL always uses this
# Select export format and encoding
# "TraditionalOpenSSL" format is PKCS1
# pylint does not notice that all possible values for export_format_txt have been covered.
# Select key encryption
# Serialize key
# Interpret bytes depending on format.
# Raw keys cannot be encrypted. To avoid incompatibilities, we try to
# actually load the key (and return False when this fails).
# Loading the key succeeded. Only return True when no passphrase was
# provided.
# key does not exist
# During generation step, regenerate if format does not match and format_mismatch == 'regenerate'
# During conversion step, convert if format does not match and format_mismatch == 'convert'
# Ignore errors
# Get hold of private key bytes
# Store result
# This only works with cryptography >= 2.1
# pylint does not notice that all possible values for self.format have been covered.
# csr.sign() does not accept some digests we theoretically could have in digest.
# For that reason we use type t.Any here. csr.sign() will complain if
# the digest is not acceptable.
# This catches IDNAErrors, which happens when a bad name is passed as a SAN
# (https://github.com/ansible-collections/community.crypto/issues/105).
# For older cryptography versions, this is handled by idna, which raises
# an idna.core.IDNAError. Later versions of cryptography deprecated and stopped
# requiring idna, so we cannot easily handle this error. Fortunately, in
# most versions of idna, IDNAError extends UnicodeError. There is only version
# 2.3 where it extends Exception instead (see
# https://github.com/kjd/idna/commit/ebefacd3134d0f5da4745878620a6a1cba86d130
# and then
# https://github.com/kjd/idna/commit/ea03c7b5db7d2a99af082e0239da2b68aeea702a).
# param in ('encipher_only', 'decipher_only') can result in ValueError()
# being raised if key_agreement == False.
# In that case, assume that the value is False.
# Check CA flag
# Check path length
# Check criticality
# To check whether public key of CSR belongs to private key,
# encode both public keys and compare PEMs.
# Get hold of CSR bytes
# Make sure that g is not 0, 1 or -1 in Z/pZ
# Make sure that x is in range
# Check whether q divides p-1
# Check that g**q mod p == 1
# Check whether g**x mod p == y
# Check (quickly) whether p or q are not primes
# key._backend was removed in cryptography 42.0.0
# type: ignore  # pylint: disable=protected-access
# For X25519 and X448, there's no test yet.
# Only fail when it is False, to avoid to fail on None (which means "we do not know")
# Create empty CSR on the fly
# Check whether certificate is signed by private key
# content must be a bytes string
# crypto_utils
# We need to temporarily write the CSR to disk
# The following are default values which make sure check() works as
# before if providers do not explicitly change these properties.
# Verify that CSR is signed by certificate's private key
# Check extensions
# Filter out SubjectKeyIdentifier extension before comparison
# Filter out AuthorityKeyIdentifier extension before comparison
# Get hold of certificate's SKI
# Get hold of CSR's SKI for 'create_if_not_provided'
# If CSR had no SKI, or we chose to ignore it ('always_create'), compare with created SKI
# If CSR had SKI and we did not ignore it ('create_if_not_provided'), compare SKIs
# Check whether private key matches
# Check whether CSR matches
# Check SubjectKeyIdentifier
# Check not before
# Check not after
# Get hold of certificate bytes
# choices will be filled by add_XXX_provider_to_argument_spec() in certificate_xxx.py
# General properties of a certificate
# Copyright 2018 Edoardo Tenani <e.tenani@arduino.cc> (@endorama)
# Create option value querier
# Remove all prereleases (versions with '+' or '-' in them)
# Copyright (c) 2018 Edoardo Tenani <e.tenani@arduino.cc> (@endorama)
# no need to do much if path does not exist for basedir
# NOTE: iterating without extension allows retrieving files recursively
# A filter is then applied by iterating on all results and filtering by
# extension.
# - https://github.com/ansible-collections/community.sops/pull/6
# Check whether sops thinks the file might be encrypted. If it thinks it is not,
# skip it. Otherwise, re-raise the original error
# The filestatus operation can fail for example if sops cannot parse the file
# as JSON/YAML. In that case, also re-raise the original error
# Compare JSON
# Treat parsing errors as content not equal
# Compare YAML
# Check YAML
# Decode binary data
# Simply encrypt
# Change detection: check if encrypted data equals new data
# Should not happen with sops-encrypted files
# must come *before* Sequence, as strings are also instances of Sequence
# Copyright (c), Edoardo Tenani <e.tenani@arduino.cc>, 2018-2020
# Since this is used both by plugins and modules, we need subprocess in case the `module` parameter is not used
# From https://github.com/getsops/sops/blob/master/cmd/sops/codes/codes.go
# Should be manually updated
# if --disable-version-check is not supported, this is version 3.7.3 or older
# Run sops directly, python module is deprecated
# output is binary, we want UTF-8 string
# the process output is the decrypted secret; be cautious
# sops always logs to stderr, as stdout is used for the decrypted output
# Copyright (c), Yanis Guenane <yanis+ansible@guenane.org>, 2016
# This is taken from community.crypto
# (c) 2013, Evan Wies <evan@neomantra.net>
# (c) 2017, Abhijeet Kasurde <akasurde@redhat.com>
# Inspired by the EC2 inventory plugin:
# https://github.com/ansible/ansible/blob/devel/contrib/inventory/ec2.py
# Main execution path
# DigitalOceanInventory data
# All DigitalOcean data
# Ansible Inventory
# Define defaults
# Read settings, environment variables, and CLI arguments
# Verify credentials were set
# env command, show DigitalOcean credentials
# Manage cache
# Pick the json_data to print based on the CLI command
# '--list' this is last to make it default
# Script configuration
# Credentials
# Cache related
# Private IP Address
# Group variables
# Droplet tag_name
# Setup credentials
# Make --list default if none of the other commands are specified
# Data Management
# We always get fresh droplets
# add all droplets by id and name
# groups that are always present
# groups that are not always present
# hostvars
# Cache Management
# Run the script
# Copyright: (c) 2018, Abhijeet Kasurde (akasurde@redhat.com)
# Parameters for DigitalOcean modules
# Constructable methods use the following function to construct group names. By
# default, characters that are not valid in python variables are always replaced by
# underscores. We are overriding this with a function that respects the
# TRANSFORM_INVALID_GROUP_CHARS configuration option and allows users to control the replacement behavior.
# request parameters
# build url
# send request(s)
# set composed and keyed groups
# Better be safe and not include any hosts by accident.
# cache settings
# Copyright: (c) 2021, Mark Mercado <mamercad@gmail.com>
# pop the oauth token so we don't include it in the POST data
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# was Ansible 2.13
# Copyright: (c) 2023, Raman Babich <ramanbabich@gmail.com>
# Copyright (c) 2021, Mark Mercado <mamercad@gmail.com>
# Pop these values so we don't include them in the POST data
# Copyright: (c) 2018, Anthony Bond <ajbond2005@gmail.com>
# Copyright: (c) 2020, Tyler Auerbeck <tauerbec@redhat.com>
# Copyright: (C) 2017-18, Ansible Project
# stop if any error during pagination
# Digital Ocean API info https://docs.digitalocean.com/reference/api/api-reference/#tag/Kubernetes
# The only variance from the documented response is that the kubeconfig is (if return_kubeconfig is True) merged in at data['kubeconfig']
# Get valid Kubernetes options (regions, sizes, versions)
# Validate region
# Validate version
# Validate size
# Create the Kubernetes cluster
# Add the kubeconfig to the return
# Assign kubernetes to project
# empty string is the default project, skip project assignment
# Set the cluster_id
# This key doesn't match, try the next alert.
# Didn't hit break, this alert matches.
# Check for an existing (same) one.
# Create it.
# Exceptions in fetch_url may result in a status of -1; this ensures a proper error in all cases
# If key was not found, create it
# If key was found, check if the name needs to be updated
# Check if oauth_token is valid or not
# GET /v2/domains/$DOMAIN_NAME/records
# DigitalOcean stores the data as '@' if the data equals the domain
# verify credentials and domain
# Imported as a dependency for dopy
# NOTE: Expressing Python dependencies isn't really possible:
# https://github.com/ansible/ansible/issues/62733#issuecomment-537098744
# Naive lexicographical check
# powered off
# Check first by id. DigitalOcean requires that it be unique
# Failing that, check by hostname.
# First, try to find a droplet by id.
# If we couldn't find the droplet and the user is allowing unique
# hostnames, then check to see if a droplet with the specified
# hostname already exists.
# If both of those attempts failed, then create a new droplet.
# Copyright: (c) 2021, Mark Mercado <mmercado@digitalocean.com>
# only load for non-default project assignments
# pop wait and wait_timeout so we don't include it in the POST data
# Ensure Tag exists
# No resource defined, we're done.
# Check if resource is already tagged or not
# If resource is not tagged, tag a resource
# Already tagged resource
# Unable to find resource specified by user
# Have it already
# Update it
# The API docs are wrong (they say 202 but return 200)
# Pop these away (don't need them beyond DOCDNEndpoint)
# We're at the mercy of a backend process which we have no visibility into:
# https://docs.digitalocean.com/reference/api/api-reference/#operation/create_domain
# In particular: "Keep in mind that, upon creation, the zone_file field will
# have a value of null until a zone file is generated and propagated through
# an automatic process on the DigitalOcean servers."
# Arguably, it's nice to see the records versus null, so we'll just try a
# few times before giving up and returning null.
# Regions which use 'size' versus 'size_unit'
# Handle size versus size_unit
# Ensure that we have size
# Ensure that we have size_unit
# Found one with the given id:
# Found one with the same name:
# Make sure the region is the same!
# Check if the VPC needs changing.
# Check if it exists already (the API docs aren't up-to-date right now);
# "name" is required and must be unique across the account.
# Do we need to update it?
# Check mode.
# Store it.
# Response body should be empty
# Droplet ID and tag are mutually exclusive, check that both have not been defined
# Found it
# Didn't find it
# Digital Ocean API info https://docs.digitalocean.com/reference/api/api-reference/#operation/list_all_kubernetes_clusters
# URL https://api.digitalocean.com/v2/domains/[NAME]
# For MX, CNAME, SRV, and CAA records, make sure the data ends with a dot
# look for exactly the same record used by (create, delete)
# python3 does not have cmp so let's use the official workaround
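The official workaround referred to above is the expression from the Python 3 porting guide:

```python
def cmp(a, b):
    # Python 3 removed the built-in cmp(); the documented replacement
    # returns -1, 0, or 1 for less-than, equal, and greater-than.
    return (a > b) - (a < b)
```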
# look for similar records used by (update)
# if neither exact nor similar records were found
# before data comparison, we need to make sure that
# the payload['data'] is not normalized, but
# during create/update DigitalOcean expects normalized data
# POST /v2/domains/$DOMAIN_NAME/records
# if record_id is given we need to update the record no matter what
# create the record if no similar or exact record were found
# no exact match, but we have similar records
# so if force_update == True we should update it
# if we have 1 similar record
# update if we were told to do so
# if no update was given, create it
# we have multiple similar records, but no exact match
# we have multiple similar records, can't decide what to do
# record matches
# double check if the record exists
# record found
# PUT /v2/domains/$DOMAIN_NAME/records/$RECORD_ID
# record not found
# DigitalOcean stores all data in lowercase except for TXT records
# if record_id is given, try to find the record based on the id
# if no record_id is given, try to find a single matching record
# record was not found, we're done
# record found, lets delete it
# DELETE /v2/domains/$DOMAIN_NAME/records/$RECORD_ID.
# somehow define the absent requirements: record_id OR ('name', 'type', 'data')
# Get updated Droplet data
# Make sure Droplet is active first
# action and other fields may not be available in case of error, check first
# will catch Not Authorized due to restrictive Scopes
# Keep checking till it is done or times out
# Make sure Droplet is active or off first
# Trigger power-on
# Trigger power-off
# We have the Droplet
# Add droplet to a firewall if specified
# Ensure Droplet size
# Ensure Droplet power state
# Get updated Droplet data (fallback to current data)
# We don't have the Droplet, create it
# Ensure that the Droplet is created
# Add droplet to firewall if specified
# raise Exception(self.module.params["firewall"])
# to delete a droplet we need to know the droplet id or unique name, i.e.
# name is not None and unique_name is True, but as "id or name" is
# enforced elsewhere, we only need to enforce "id or unique_name" here
# (c) 2015, Patrick F. Marques <patrickfmarques@gmail.com>
# Let's try to associate the IP with the specified droplet
# TODO: If already assigned to a droplet verify if is one of the specified as valid
# Get existing floating IPs
# Exit unchanged if any of them are assigned to this Droplet already
# Support environment variable for DigitalOcean OAuth Token
# Certificate does not exist, let us create it
# Already detached
# we'd get status 422 if desired_size <= current volume size
# The volume exists already, but it might not have the desired size
# update
# Copyright (c), Ansible Project 2017
# Check if api_token is valid or not
# There's a bug in the API docs: GET v2/cdn/endpoints doesn't return a "links" key
# Filter out non-server hosts; search_s seems to return 3 extra entries
# that are not computer classes and do not have a distinguished name
# set in the returned results
# Load the variables and direct args into the lookup options
# Validate and set input values
# https://www.openldap.org/lists/openldap-software/200202/msg00456.html
# Same as OPT_X_TLS_HARD
# We have encryption if using LDAPS, or StartTLS is used, or we auth with SASL/GSSAPI
# We cannot use conn.set_option as OPT_X_TLS_NEWCTX (required to use the new context) is not supported on
# older distros like EL7. Setting it on the ldap object works instead
# While this is a path, python-ldap expects a str/unicode and not bytes
# https://keathmilligan.net/python-ldap-and-macos/
# Allow us to search from the base
# Make sure we run StartTLS before doing the bind to protect the credentials
# The SASL GSSAPI binding is not installed, e.g. cyrus-sasl-gssapi. Give a better error message than
# what python-ldap provides
# TODO: change method to search for all servers in 1 request instead of multiple requests
# Copyright: (c) 2019, Brant Evans <bevans@redhat.com>
# Copyright: (c) 2018, Wojciech Sciesinski <wojciech[at]sciesinski[dot]net>
# Copyright: (c) 2017, Daniele Lazzari <lazzari@mailup.com>
# Copyright: (c) 2017, Liran Nisanov <lirannis@gmail.com>
# Copyright: (c) 2015, Corwin Brown <blakfeld@gmail.com>
# Copyright: (c) 2015, Peter Mounce <public@neverrunwithscissors.com>
# Copyright: (c) 2015, Henrik Wallström <henrik@wallstroms.nu>
# Copyright: (c) 2020, Brian Scholer <@briantist>
# Copyright: (c) 2019, Varun Chopra (@chopraaa) <v@chopraaa.com>
# Copyright: (c) 2016, Dag Wieers (@dagwieers) <dag@wieers.com>
# Copyright: (c) 2018, Kevin Subileau (@ksubileau)
# Copyright: (c) 2020, Jamie Magee <jamie.magee@gmail.com>
# Copyright: (c) 2017, Jon Hawkesworth (@jhawkesworth) <jhawkesworth@protonmail.com>
# Copyright: (c) 2015, Heyo
# Copyright: (c) 2018, Ansible, inc
# Copyright: 2017, Dag Wieers (@dagwieers) <dag@wieers.com>
# Copyright: (c) 2021, Kento Yagisawa <thel.vadam2485@gmail.com>
# Copyright: (c) 2017, Henrik Wallström <henrik@wallstroms.nu>
# Copyright: (c) 2018, Jordan Borean <jborean93@gmail.com>
# GSSAPI extension required for Kerberos Auth in SMB
# create PAExec service and run the process
# close the SMB connection
# Copyright: (c) 2017, Marc Tschapek <marc.tschapek@itelligence.de>
# Copyright: 2019, rnsc(@rnsc) <github@rnsc.be>
# Copyright: (c) 2020, ライトウェルの人 <jiro.higuchi@shi-g.com>
# Copyright: (c) 2015, Jon Hawkesworth (@jhawkesworth) <jhawkesworth@protonmail.com>
# Copyright: (c) 2014, Timothy Vandenbrande <timothy.vandenbrande@gmail.com>
# Copyright: (c) 2017, Artem Zinenko <zinenkoartem@gmail.com>
# Copyright: (c) 2019, Thomas Moore (@tmmruk)
# Copyright: (c) 2015, Sam Liu <sam.liu@activenetwork.com>
# Copyright: (c) 2016, Jon Hawkesworth (@jhawkesworth) <jhawkesworth@protonmail.com>
# Copyright: (c) 2018, Varun Chopra (@chopraaa) <v@chopraaa.com>
# build the wait_for_connection object for later use
# if it's not in check mode, call the module async so the WinRM restart doesn't kill ansible
# if we're in check mode (not doing async) return the result now
# turn off async so we don't run the following actions as async
# build the async_status object
# build an async_status mode=cleanup object
# Retries here is a fallback in case the module fails in an unexpected way
# which can sometimes not properly set the failed field in the return.
# It is not related to async retries.
# Without this, that situation would cause an infinite loop.
# check up on the async job
# let's try to clean up after our implicit async
# let's swallow errors during implicit cleanup to avoid interrupting what was otherwise a successful run
# special handling of tags
# collect rest of attributes
# align value with API spec of all upper
# The ELB is changing state in some way. Either an instance that's
# InService is moving to OutOfService, or an instance that's
# already OutOfService is being deregistered.
# If ec2_elbs wasn't specified, then filter out LBs we're not a member
# of.
# Instance isn't a member of an ASG
# handled by imported AnsibleAWSModule
# caught by imported AnsibleAWSModule
# Compare only the relevant set of domain arguments,
# as get_domain_name gathers all kinds of state information that can't be set anyway.
# Also, this module doesn't support custom TLS cert setup params, as they are kind of deprecated already and would increase complexity.
# Clean out `base_path: "(none)"` elements from dicts, as those won't match the specified mappings
# Clean out `base_path: ""` elements from dicts, as those won't match the existing mappings
# When the lists mismatch, delete all existing mappings before adding the new ones as specified
# Use regionalCertificateArn for regional domain deploys
# Trim out the whitespace before comparing the certs.  While this could mean
# an invalid cert 'matches' a valid cert, that's better than some stray
# whitespace breaking things
# We can't compare keys.
# For now we can't make any changes.  Updates to tagging would go here and
# update 'changed'
# Try to be nice, if we've already been renamed exit quietly.
# Copyright 2014 Jens Carl, Hothead Games Inc.
# No support for managing tags yet, but make sure that we don't need to
# change the return value structure after it's been available in a release.
# Description is optional, SubnetIds is not
# AWS is "eventually consistent", cope with the race conditions where
# deletion hadn't completed when we ran describe
# return empty set for unknown broker in check mode
# we can simply return the sub-object from the response
# Replace '.' with '_' in attribute key names to make it more Ansible friendly
# Now set target group attributes
# Get current attributes
# Something went wrong setting attributes. If this target group was created during this task, delete it to leave a consistent state
# Set health check if anything set
# Only need to check response code and path for http(s) health checks
# Get target group
# Target group exists so check health check parameters match what has been passed
# Modify health check if anything set
# Health check protocol
# Health check port
# Health check interval
# Health check timeout
# Healthy threshold
# Unhealthy threshold
# Health check path
# Matcher (successful response codes)
# TODO: required and here?
# Do we need to modify targets?
# get list of current target instances. I can't see anything like a describe-targets call in the docs, so
# describe_target_health seems to be the only way to get them
# Correct type of target ports
# register lambda target
# only one target is possible with lambda
# remove lambda targets
# Get the target group again
# Get the target group attributes again
# Convert target_group to snake_case
# Merge the results of the scalable target creation and policy deletion/creation
# There's no risk in overriding values since mutual keys have the same values in our case
# Scalable target registration will occur if:
# 1. There is no scalable target registered for this service
# 2. A scalable target exists, different min/max values are defined and override is set to "yes"
# check if the input parameters are equal to what's already configured
# Remove any target_tracking_scaling_policy_configuration suboptions that are None
# A scalable target must be registered prior to creating a scaling policy
# handled by AnsibleAwsModule
# Very simplistic
# Get the bucket's current lifecycle rules
# Helper function to deeply compare filters
# Treat empty string as equal to a filter not being set
# Create expiration
# If current_lifecycle_obj is not None then we have rules to compare, otherwise just add the rule
# If rule ID exists, use that for comparison otherwise compare based on prefix
# If nothing appended then append now as the rule must not exist
# If an ID exists, use that otherwise compare based on prefix
# We're not keeping the rule (i.e. deleting) so mark as changed
# Copy objects
# because of the legal S3 transitions, we know only one can exist for each storage class.
# So, our strategy is to build some dicts keyed on storage class and add the storage class transitions that are only
# in updating_rule to updated_rule
# Write lifecycle to bucket
# Amazon interpreted this as not changing anything
# We've seen examples where get_bucket_lifecycle_configuration returns
# the updated rules, then the old rules, then the updated rules again and
# again a couple of times.
# Thus try to read the rule a few times in a row to check if it has changed.
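The "read a few times in a row" guard above can be sketched generically: only report success when several consecutive reads all match, so an API that alternates between old and new results doesn't fool the check. The fetch callable and attempt count are assumptions for illustration:

```python
import time

def wait_for_stable_read(fetch, expected, attempts=6, delay=0.0):
    # Re-read several times in a row; only report success when every
    # consecutive read matches the expected value (guards against the
    # API flip-flopping between old and new results).
    for _ in range(attempts):
        if fetch() != expected:
            return False
        if delay:
            time.sleep(delay)
    return True
```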
# Write lifecycle to bucket or, if there are no rules left, delete the lifecycle configuration
# allow deleting/disabling a rule by id/prefix
# If dates have been set, make sure they're in a valid format
# (c) 2015, Jose Armesto <jose@armesto.net>
# Copyright: (c) 2019, Michael Pechner <mikey@mikey.com>
# new stacks, existing stacks, unspecified stacks
# Return None if the stack doesn't exist
# Stack set has completed operation
# subtract however long we waited already
# this means the deletion beat us, or the stack set is not yet propagated
# AWSRetry will retry on `StackSetNotFound` errors for us
# no template is provided, but if the stack set exists already, we can use the existing one.
# TODO loosen the semantics here to autodetect the account ID and build the ARN
# TODO: need to check the template and other settings for correct check mode
# on create this parameter has a different name, and cannot be referenced later in the job log
# now create/update any appropriate stack instances
# this time, it is likely that either the delete failed or there are more stacks.
# Or create new project:
# Prep both dicts for sensible change comparison:
# clean up params
# Check if project with that name already exists and if so update existing:
# Copyright: (c) 2018, Rob White (@wimnat)
# The resource does not exist, or it has already been deleted
# If we're not waiting for a delete to complete then we're all done
# so just return
# Determine if it's possible to upgrade directly from source version
# to target version, or if it's necessary to upgrade through intermediate major versions.
# When perform_check_only is true, indicates that an upgrade eligibility check needs
# to be performed. Does not actually perform the upgrade.
# There is no compatible version, according to the get_compatible_versions() API.
# The upgrade should fail, but try anyway.
# It's not possible to upgrade directly to the target version.
# Check the module parameters to determine if this is allowed or not.
# If background tasks are in progress, wait until they complete.
# This can take several hours depending on the cluster size and the type of background tasks
# (maybe an upgrade is already in progress).
# It's not possible to upgrade a domain while background tasks are in progress;
# the call to client.upgrade_domain would fail.
# In check mode (=> PerformCheckOnly==True), a ValidationException may be
# raised if it's not possible to upgrade to the target version.
# If the engine version is ElasticSearch < 7.9, cold storage is not supported.
# When querying a domain < 7.9, the AWS API indicates cold storage is disabled (Enabled: False),
# which makes sense. However, trying to do HTTP POST with Enable: False causes an API error.
# The 'ColdStorageOptions' attribute should not be present in HTTP POST.
# Remove 'ColdStorageOptions' from the current domain config, otherwise the actual vs desired diff
# will indicate a change must be done.
# Elasticsearch 7.9 and above support ColdStorageOptions.
# OpenSearch cluster is attached to VPC
# Modify existing cluster.
# AWS does not allow changing the type. Don't fail here so we return the AWS API error.
# There are no VPCOptions to configure.
# Note the subnets may be the same but be listed in a different order.
# The property was parsed from yaml to datetime, but the AWS API wants a string
# Updating existing domain
# Creating new domain
# Create default if OpenSearch does not exist. If domain already exists,
# the data is populated by retrieving the current configuration from the API.
# By default create ES attached to the Internet.
# If the "VPCOptions" property is specified, even if empty, the API server interprets
# as incomplete VPC configuration.
# "VPCOptions": {},
# Determine if OpenSearch domain already exists.
# current_domain_config may be None if the domain does not exist.
# Validate the engine_version
# For check mode purpose
# Remove the "EngineVersion" attribute, the AWS API does not accept this attribute.
# Create new OpenSearch cluster
# NonExistentQueue is explicitly expected when a queue doesn't exist
# Boto3 returns everything as a string, convert them back to integers/dicts if
# that's what we expected.
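A sketch of the coercion described above: boto3 hands SQS attributes back as strings, so numeric attributes go back to int and JSON-valued policies back to dicts. Treating exactly 'Policy' and 'RedrivePolicy' as the JSON-valued attributes is an assumption based on the SQS attribute set:

```python
import json

def coerce_attribute(name, value):
    # boto3 returns all SQS attributes as strings; coerce integers back
    # to int and JSON policies back to dicts.
    if name in ("Policy", "RedrivePolicy"):
        return json.loads(value) if value else None
    if value.isdigit():
        return int(value)
    return value
```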
# Create a dict() to hold attributes that will be passed to boto3
# The return values changed between boto and boto3, add the old keys too
# Boto3 SQS deals with policies as strings, we want to deal with them as
# dicts
# We handle these as a special case because they're IAM policies
# Boto3 expects strings
# type: (EcsEcr, dict, int) -> Tuple[bool, dict]
# Parse policies, if they are given
# Some failure w/ the policy. It's helpful to know what the
# policy is.
# Sort any lists containing only string types
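Sorting string-only lists makes two IAM policies that differ just in list ordering compare equal. A minimal sketch of that canonicalization (names are illustrative):

```python
def sort_json_policy_dict(policy):
    # Recursively sort any lists containing only string types so
    # ordering differences don't register as changes.
    if isinstance(policy, dict):
        return {k: sort_json_policy_dict(v) for k, v in policy.items()}
    if isinstance(policy, list):
        if all(isinstance(item, str) for item in policy):
            return sorted(policy)
        return [sort_json_policy_dict(item) for item in policy]
    return policy
```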
# Copyright: (c) 2021, Milan Zink <zeten30@gmail.com>
# Convert type_filter to the findingType strings returned by the API
# Botocore only supports specific values for locale and resource_type, however the supported
# values are likely to be expanded, let's avoid hard coding limits which might not hold true in
# the long term...
# silently ignore CREATE_ONLY_PARAMS on update to
# make playbooks idempotent
# values contained in 'current' but not specified in 'desired' are ignored
# values contained in 'desired' but not in 'current' (unsupported attributes) are ignored
# assumption: all 'list' type settings we allow changes for have scalar values
# assumption: all 'dict' type settings we allow changes for have scalar values
# unexpected type
# add some stupid default (cannot create broker without any users)
# replace name with id
# get current state for comparison:
# engine version of 'latest' is taken as "keep current one"
# i.e. do not request upgrade on playbook rerun
# silently ignore delete of unknown broker (to make it idempotent)
# check for pending delete (small race condition possible here)
# parameters only allowed on create
# parameters allowed on update as well
# check if web acl exists
# fall through and look through the found ones
# Pull requested and existing capacity providers and strategies.
# Check if capacity provider strategy needs to trigger an update.
# Unless purge_capacity_providers is true, we will not be updating the providers or strategy.
# If either the providers or strategy differ, update the cluster.
# doesn't exist. create it.
# delete the cluster
# it exists, so we should delete it and mark changed.
# return info about the cluster deleted
# Get the domain tags
# This could potentially happen if a domain is deleted between the time
# its domain status was queried and the tags were queried.
# Filter by tags
# Get the domain config
# Copyright: (c) 2018, Yaakov Kuperman <ykuperman@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# we can handle the lack of boto3 based on the ec2 module
# the relevant targets associated with this group
# get ahold of the instance in the API
# typically this will happen if the instance doesn't exist
# IPs are represented in a few places in the API, this should
# account for all of them
# build list of TargetGroup objects representing every target group in
# the system
# only collect target groups that actually are connected
# to LBs
# Build a list of all the target groups pointing to this instance
# based on the previous list
# Loop through all the target groups
# Get the list of targets for that target group
# If the target group has this instance as a target, add to
# list. This logic also accounts for the possibility of a
# target being in the target group multiple times with
# overridden ports
# The 'AvailabilityZone' parameter is a weird one, see the
# API docs for more.  Basically it's only supposed to be
# there under very specific circumstances, so we need
# to account for that
# since tgs is a set, each target group will be added only
# once, even though we call add on each successful match
# do this first since we need the IPs later on in this function
# build list of target groups
# In check mode nothing changes...
# Unfortunately we can't filter on the Version, as such we need something custom.
# Description field not available from get_parameter function so get it from describe_parameters
# Handle tag updates for existing parameters
# Add tags in initial creation request
# Overwrite=True conflicts with tags and is not needed for new param
# If we can't describe the parameter we may still be able to delete it
# Copyright (c) 2019, Tom De Keyser (@tdekeyser)
# Copyright (c) 2017 Jon Meran <jonathan.meran@sonos.com>
# logger = logging.getLogger()
# logging.basicConfig(filename='ansible_debug.log')
# check if the job definition exists
# check if definition has changed and register a new version if necessary
# Create Job definition
# remove the Job definition
# use retry decorator for boto3 calls
# see if metadata needs updating
# provider needs updating
# Copyright: Contributors to the Ansible Project
# will be caught by AnsibleAWSModule
# Copyright: (c) 2021, Daniil Kupchenko (@oukooveu)
# python >= 2.7 is required:
# return {
# create new configuration
# update existing configuration (creates new revision)
# it's required because 'config' doesn't contain 'ServerProperties'
# return some placeholder data in check mode if the configuration doesn't exist
# can be useful when these options are referenced by other modules during check mode run
# Did we split anything?
# If redirect_all_requests is set then don't use the default suffix that has been set
# Wait 5 secs before getting the website_config again to give it time to update
# find all invalidations for the distribution
# check if there is an invalidation with the same caller reference
# Purge rules before adding new ones in case a deletion shares the same
# priority as an insertion.
# Cluster can only be synced if available. If we can't wait
# for this, then just be done.
# Collect ALL nodes for reboot
# No need to wait, we're already done
# Check modifiable data attributes
# Check cache security groups
# check vpc security groups
# Only check for modifications if zone is specified
# Redis only supports a single node (presently) so just use
# the first and only
# The documentation for elasticache lies -- status on rebooting is set
# to 'rebooting cache cluster nodes' instead of 'rebooting'. Fix it
# here to make status checks etc. more sane.
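The status fix-up above can be a one-line normalizer; the literal string is the one the comment quotes from the API:

```python
def normalize_status(status):
    # The API reports 'rebooting cache cluster nodes' where the docs say
    # 'rebooting'; normalize so downstream status checks stay sane.
    if status == "rebooting cache cluster nodes":
        return "rebooting"
    return status
```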
# alias for compat with the original PR 1950
# As of 2021-06 boto3 doesn't offer any built in waiters
# prepare available update methods definitions with current/target values and options
# need to get cluster version and check for the state because
# there can be several updates requested but only one in time can be performed
# TODO: need to check if long arn format enabled.
# include tasks and failures
# R S P
# R* S*
# R S
# R
# P*
# S*
# Validate Inputs
# TBD - validate the rest of the details
# run_task returns a list of tasks created
# Wait for task(s) to be running prior to exiting
# Wait for task to be stopped prior to exiting
# Copyright: (c) 2014, Michael J. Schultz <mjschultz@gmail.com>
# Short names can't contain ':' so we'll assume this is the full ARN
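The heuristic above (a ':' means the caller passed a full ARN) can be sketched as below; the ARN layout is the standard SNS topic format, and the parameter names are illustrative:

```python
def arn_topic_lookup(region, account_id, name):
    # Short names can't contain ':', so a ':' means this is already a
    # full ARN; otherwise build one from the region and account.
    if ":" in name:
        return name
    return f"arn:aws:sns:{region}:{account_id}:{name}"
```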
# also have a fixed key for accessing results/details returned
# set default to summary if no option specified
# validations
# get distribution id from domain name alias
# set appropriate cloudfront id
# get details based on options
# get list based on options
# default summary option
# Copyright: (c) 2018, Loic BLOT (@nerzhul) <loic.blot@unix-experience.fr>
# This module is sponsored by E.T.A.I. (www.etai.fr)
# File share gateway
# Volume tape gateway
# iSCSI gateway
# this should never happen
# Handle bad files
# Handle issues loading key
# some fields are datetime which is not JSON serializable
# make them strings
# Load the list of applied policies to include in the response.
# In principle we should be able to just return the response, but given
# eventual consistency behaviours in AWS it's plausible that we could
# end up with a list that doesn't contain the policy we just added.
# So out of paranoia check for this case and if we're missing the policy
# just make sure it's present.
# As a nice side benefit this also means the return is correct in check mode
# SES APIs seem to have a much lower throttling threshold than most of the rest of the AWS APIs.
# Docs say 1 call per second. This shouldn't actually be a big problem for normal usage, but
# the ansible build runs multiple instances of the test in parallel, which has caused throttling
# failures so apply a jittered backoff to call SES calls.
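In the modules this backoff comes from AnsibleAWSModule's jittered retry decorator; a minimal full-jitter sketch of the idea (the exception class and parameters are stand-ins, not botocore's):

```python
import functools
import random
import time

class Throttled(Exception):
    # Stand-in for botocore's throttling error (assumption).
    pass

def jittered_backoff(retries=5, base=1.0, cap=30.0):
    # Full-jitter backoff: sleep a random amount in [0, min(cap, base * 2**n)]
    # between attempts, re-raising after the final attempt.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except Throttled:
                    if attempt == retries - 1:
                        raise
                    time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
        return wrapper
    return decorator
```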
# Map in both directions
# If you try to update an index while another index is updating, it throws
# LimitExceededException/ResourceInUseException exceptions at you.  This can be
# pretty slow, so add plenty of retries...
# ResourceNotFoundException is expected here if the table doesn't exist
# The schema/attribute definitions are a list of dicts which need the same
# treatment as boto3's tag lists
# Map from 'HASH'/'RANGE' to attribute name
# Map from attribute name to 'S'/'N'/'B'.
# Put some of the values into places people will expect them
# billing_mode_summary doesn't always seem to be set but is always set for PAY_PER_REQUEST
# and when updating the billing_mode
# convert indexes into something we can easily search against
# run through hash_key_name and range_key_name
# Use ansible_dict_to_boto3_tag_list to generate the list of dicts
# format we need
# Convert the type name to upper case and remove the global_
# Convert the type name to upper case
# TODO (future) It would be nice to catch attempts to change types here.
# TODO (future) it would be nice to add support for deleting an index
# The only thing we can change is the provisioned throughput.
# TODO (future) it would be nice to throw a deprecation here
# rather than dropping other changes on the floor
# TODO (future) Changes to Local Indexes aren't possible after creation,
# we should probably throw a deprecation warning here (original module
# also just dropped these changes on the floor)
# Get throughput / billing_mode changes
# Update table_class, using the existing value if none is defined
# Only one index can be changed at a time except if changing the billing mode, pass the first during the
# main update and deal with the others on a slow retry to wait for
# completion
# If neither need updating we can return already
# TODO (future)
# StreamSpecification,
# SSESpecification,
# If an index is mid-update then we have to wait for the update to complete
# before deletion will succeed
# TODO (future) It would be good to split global and local indexes.  They have
# different parameters, use a separate namespace for names,
# It would be nice to make this optional, but because Local and Global
# indexes are mixed in here we need this to be able to tell to which
# group of indexes the index belongs.
# These are used to pass computed data about, not needed for users
# Get the virtual interfaces, filtering by the ID if provided.
# Remove deleting/deleted matches from the results.
# Filter by name if provided.
# If there isn't a unique match filter by connection ID as last resort (because connection_id may be a connection yet to be associated)
# virtual interface type specific parameters
# Ensures the number parameters are int as required by the AWS SDK
# Boto3 is weird about params passed, so only pass nextToken if we have a value
# Fetch all the arns, possibly across multiple pages
# Return the full descriptions of the task definitions, sorted ascending by revision
# The definition specifies revision. We must guarantee that an active revision of that number will result from this.
# A revision has been explicitly specified. Attempt to locate a matching revision
# We cannot reactivate an inactive revision
# Make sure the values are equivalent for everything the left side has
# We don't care about list ordering because ECS can change things
# if list_val is the port mapping, the key 'protocol' may be absent (but defaults to 'tcp')
# fill in that default if absent and see if it is in right_list then
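A sketch of that default-filling, order-insensitive comparison (function name is illustrative; the 'tcp' default is the one the comment states):

```python
def port_mappings_equal(left, right):
    # ECS defaults 'protocol' to 'tcp' when absent, so fill it in on
    # both sides before an order-insensitive comparison.
    def normalize(mappings):
        return sorted(
            tuple(sorted({**{"protocol": "tcp"}, **m}.items()))
            for m in mappings
        )
    return normalize(left) == normalize(right)
```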
# Make sure right doesn't have anything that left doesn't
# 'essential' defaults to True when not specified
# Nope.
# No revision explicitly specified. Attempt to find an active, matching revision that has all the properties requested
# Awesome. Have an existing one. Nothing to do.
# Doesn't exist. create it.
# When de-registering a task definition, we can specify the ARN OR the family and revision.
# It exists, so we should delete it and mark changed. Return info about the task definition deleted
# for now it will stay that way until we can sometimes avoid change
# lookup API gateway using tags
# create a new API gateway as none were provided and/or found using lookup=tag
# Remove tags from Resource
# add new tags to resource
# Describe API gateway
# (c) 2019, XLAB d.o.o <www.xlab.si>
# will be protected by AnsibleAWSModule
# Handle different event targets
# Iterate through configs and get current event config
# Iterate through existing configs then add the desired config
# Iterate through existing configs omitting specified config
# Iterate through available configs
# Add one to max_attempts as wait() increments
# its counter before assessing it for time.sleep()
# look for matching connections
# verifying if the connection exists; if true, return connection identifier, otherwise return False
# not verifying if the connection exists; just return current connection info
# the connection is found; get the latest state and see if it needs to be updated
# no connection found; create a new one
# Copyright (c) 2015 Mike Mochan
# Prep kwargs
# Only for ip_set
# there might be a better way of detecting an IPv6 address
# Specific for geo_match_set
# Common for everything but ip_set and geo_match_set
# Specific for byte_match_set
# Specific for size_constraint_set
# Specific for regex_match_set
# at time of writing (2017-11-20) no regex pattern paginator exists
# list_geo_match_sets and list_regex_match_sets do not have a paginator
# Filters are deleted using update with the DELETE action
# We do not need to wait for the conditiontuple delete because we wait later for the delete_* call
# tidy up regex patterns
# Bytes
# return a condition agnostic ID for use by waf_rule
# we do a simple comparison here: strip spaces and compare the rest
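That loose comparison can be sketched in a couple of lines; as the next comment notes, a real fix would normalize the XML the same way AWS does, so this is only a "no change" heuristic:

```python
def xml_loosely_equal(a, b):
    # Very loose comparison: strip all whitespace and compare what's
    # left. Detects "no change"; not a real XML canonicalizer.
    def strip(s):
        return "".join(s.split())
    return strip(a) == strip(b)
```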
# TODO: use same XML normalizer on new as used by AWS before comparing strings
# not the result from get_broker_info(); use the requested config
# Restore summary
# Weirdly, boto3 doesn't return some keys if the value is empty e.g. Description
# To counter this, add the key if it's missing with a blank value
# If glue_job is not None then check if it needs to be modified, else create it
# Update job needs slightly modified params
# We were returning details of the Web ACL inside a "web_acl" parameter on
# creation, keep returning it to avoid breaking existing playbooks, but also
# return what the docs said we return (and returned when no change happened)
# Wait for instance to exit transition state before deleting
# Wait for instance to exit transition state before state change
# Try state change
# Grab current instance info
# List of LAG IDs that are exact matches
# List of LAG data that are exact matches
# determine the associated connections and virtual interfaces to disassociate
# If min_links is not 0, there are associated connections, or if there are virtual interfaces, ask for force_delete
# update min_links to be 0 so we can remove the LAG
# if virtual_interfaces and not delete_vi_with_disassociation: Raise failure; can't delete while vi attached
# NOTE: Never set FifoTopic = False. Some regions (including GovCloud)
# don't support the attribute being set, even to False.
# Set content-based deduplication attribute. Ignore if topic_type is not fifo.
# aws sdk expects values to be strings
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sns.html#SNS.Client.set_subscription_attributes
# subscription attributes aren't defined in desired, skipping
# NOTE: subscriptions in 'PendingConfirmation' timeout in 3 days
# We're kinda stuck with CamelCase here, it would be nice to switch to
# snake_case, but we'd need to purge out the alias entries
# Initialize an empty dict() for building TargetTrackingConfiguration policies,
# which will be returned
# Accounting for boto3 response
# Build spec for predefined_metric_spec
# Build spec for customized_metric_spec
# min_adjustment_step attribute is only relevant if the adjustment_type
# is set to percentage change in capacity, so it is a special case
# can't use required_if because it doesn't allow multiple criteria -
# it's only required if policy is SimpleScaling and state is present
# Ensure idempotency with policies
# Backward compatible return values
# When MountPoint was introduced, a network-path suffix had to be appended before it could be used
# AWS has since updated it and the suffix is no longer needed. MountPoint is kept for backward compatibility
# A new FilesystemAddress variable is introduced for direct use with other modules (e.g. mount)
# AWS documentation is available here:
# U(https://docs.aws.amazon.com/efs/latest/ug/gs-step-three-connect-to-ec2-instance.html)
# Set tags *after* doing camel to snake
# check valid modifiable parameters
# check allowed datatype for modified parameters
# check allowed values for modifiable parameters
# check if a new value is different from current value
# if the user specified parameters to reset, only check those for change
# otherwise check all to find a change
# compares current group parameters with the parameters we've specified, to see if this will change the group
# used to compare with the reset parameters' dict to see if there have been changes
# determine whether to reset all or specific parameters
# determine changed
# check that the needed requirements are available
# Taking action
# confirm that the group exists without any actions
# modify existing group
# create group
# delete group
# Copyright: (c) 2018, REY Remi
# current_secret exists; decide what to do with it
# Based off of https://github.com/mmochan/ansible-aws-ec2-asg-scheduled-actions/blob/master/library/ec2_asg_scheduled_action.py
# (c) 2016, Mike Mochan <@mmochan>
# Some of these params are optional
# To correctly detect changes, convert the start_time & end_time to datetime objects
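A sketch of that normalization: playbook values arrive as strings while the API returns datetime objects, so coerce both sides to datetime before comparing. The timestamp format here is an assumption for illustration:

```python
from datetime import datetime

def to_datetime(value, fmt="%Y-%m-%dT%H:%M:%SZ"):
    # Pass datetime objects through; parse strings with the given
    # format (the default format is an illustrative assumption).
    if isinstance(value, datetime):
        return value
    return datetime.strptime(value, fmt)
```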
# ReplicationSubnetGroupIdentifier gets translated to lower case anyway by the API
# need to sanitize values that get returned from the API
# https://docs.aws.amazon.com/efs/latest/ug/gs-step-three-connect-to-ec2-instance.html
# we always wait for the state to be available when creating.
# if we try to take any actions on the file system before it's available
# we'll throw errors
# To modify mount target it should be deleted and created again
# If no security groups were passed into the module, then do not change it.
# Copyright: (c) 2018, JR Kerkstra <jrkerkstra@example.org>
# Copyright (c) 2019, Prasad Katti (@prasadkatti)
# list_executions is eventually consistent
# this branch can no longer be reached
# describe_execution is eventually consistent
# Copyright (c) 2017, Ben Tomasik <ben@tomasik.io>
# Metadata was not set meaning there is no active rule set
# No ruleset name deactivates all rulesets
# Note: this currently isn't as relaxed for the nested settings, (eg. S3Settings)
# modify can't update tags or password
# modify can't update tags
# modify and create don't return tags
# there is currently no paginator for wafv2
# Update cagw
# Poorly documented, but "publishMetricAction.dimensions ... must have length less than or equal to 1"
# NetworkFirewallPolicyManager can cope with a list for future-proofing
# publish_metric_dimension_values=dict(type='list', elements='str', required=False, aliases=['publish_metric_dimension_value']),
# Actions need to be defined before potentially consuming them
# to support automatic testing without broker reboot
# returns API response object
# better support for testing
# This must be the registered result of a loop of route53 tasks
# This must be a single route53 task
# ignored
# Minimum volume size is 1GiB. We'll use volume size explicitly set to 0 to be a signal not to create this volume
# Otherwise, we dump binary to the user's terminal
# aws returns the arn of the task definition
# but the user is just entering
# 'expected' holds the module params. The DAEMON scheduling strategy returns a desired count equal to the
# number of instances running; don't check desired count if the scheduling strategy is DAEMON
# filter placement_constraint and keep only those where the value is not None
# use-case: `distinctInstance` type should never contain `expression`, but None will fail `str` type validation
# desired count is not required if scheduling strategy is daemon
# check various parameters and AWS SDK versions and give a helpful error if the SDK is not new enough for feature
# fails if deployment type is not CODE_DEPLOY or ECS
# update required
# Wait for service to be INACTIVE prior to exiting
# check every 10s
# pipeline_description raises DataPipelineNotFound
# activated but completed more rapidly than it was checked
# removing objects from the unique id so we can update objects or populate the pipeline after creation without needing to make a new pipeline
# pipeline_description returns a pipelineDescriptionList of length 1
# dp is a dict with keys "description" (str), "fields" (list), "name" (str), "pipelineId" (str), "tags" (dict)
# Get uniqueId and pipelineState in fields to add to the exit_json result
# Remove fields; can't make a list snake_case and most of the data is redundant
# Note: tags is already formatted fine so we don't need to do anything with it
# Reformat data pipeline and add reformatted fields back
# See if there is already a pipeline with the same unique_id
# A change is expected but not determined. Updated to a bool in create_pipeline().
# Unique ids are the same - check if pipeline needs modification
# Definition needs to be updated
# No changes
# delete old version
# There isn't a pipeline or it has different parameters than the pipeline in existence.
# Make pipeline
# Put pipeline definition
# Just a filter_prefix or just a single tag filter is a special case
# Otherwise we need to use 'And'
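A sketch of that special-casing for the S3 lifecycle filter: a bare prefix or a single tag can be expressed directly, while any combination must be wrapped in an 'And' block (this shape follows the S3 LifecycleConfiguration API; the function name is illustrative):

```python
def build_lifecycle_filter(prefix=None, tags=None):
    # Single prefix or single tag are special cases; otherwise wrap
    # everything in an 'And' block as the S3 API requires.
    tags = tags or {}
    tag_list = [{"Key": k, "Value": v} for k, v in sorted(tags.items())]
    if prefix is not None and not tag_list:
        return {"Prefix": prefix}
    if prefix is None and len(tag_list) == 1:
        return {"Tag": tag_list[0]}
    return {"And": {"Prefix": prefix or "", "Tags": tag_list}}
```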
# regex library
# remove_tags_from_certificate wants a list of key, value pairs, not a list of keys.
# Takes in two text arguments
# Each a PEM encoded certificate
# Or a chain of PEM encoded certificates
# May include some lines between each chain in the cert, e.g. "Subject: ..."
# Returns True iff the chains/certs are functionally identical (including chain order)
# Chain length is the same
# Takes in PEM encoded data with no headers
# returns the equivalent DER as a byte array
# Store this globally to avoid repeated recompilation
# Use regex to split up a chain or single cert into an array of base64 encoded data
# Using "-----BEGIN CERTIFICATE-----" and "----END CERTIFICATE----"
# Noting that some chains have non-pem data in between each cert
# This function returns only what's between the headers, excluding the headers
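The splitting described above can be sketched with one lazy regex; re.DOTALL lets '.' span newlines, and the non-greedy '.*?' stops at the first END marker so intervening non-PEM junk between certificates is skipped:

```python
import re

# Matches the base64 body between certificate headers; compiled once
# at module level to avoid repeated recompilation.
PEM_CERT_RE = re.compile(
    r"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----",
    re.DOTALL,
)

def pem_chain_split(pem_text):
    # Return the base64 payload of each certificate in the chain,
    # whitespace stripped, excluding the headers themselves.
    return ["".join(body.split()) for body in PEM_CERT_RE.findall(pem_text)]
```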
# This happens if the regex doesn't match at all
# shouldn't happen
# This could happen if the user identified the certificate using 'certificate_arn' or 'domain_name',
# and the 'Name' tag in the AWS API does not match the ansible 'name_tag'.
# Are the existing certificate in ACM and the local certificate the same?
# Need to test this
# not sure if Amazon appends the cert itself to the chain when self-signed
# When there is no chain with a cert
# it seems Amazon returns the cert itself as the chain
# note: returned domain will be the domain of the previous cert
# update cert in ACM
# Validate argument requirements
# Update existing certificate that was previously imported to ACM.
# len(certificates) == 0
# Import new certificate to ACM.
# Add/remove tags to/from certificate
# Check argument requirements
# at least one of these should be specified.
# exactly one of these should be specified
# Because we're setting the Name tag, we need to explicitly not purge when tags isn't passed
# The module was originally implemented to filter certificates based on the 'Name' tag.
# Other tags are not used to filter certificates.
# It would make sense to replace the existing name_tag, domain, certificate_arn attributes
# with a 'filter' attribute, but that would break backwards-compatibility.
# fetch the list of certificates currently in ACM
# tests()
# as the list_ request does not contain the Etag (which we need), we need to do another get_ request here
# Little helper for turning xss_protection into XSSProtection and not into XssProtection
# threshold for returned timestamp age
# consider change made by this execution of the module if returned timestamp was very recent
# Inserts a Quantity field into dicts with a list ('Items')
# Items on top level case
# Items on second level case
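A sketch of that fix-up, generalized to walk the whole structure rather than just two levels (CloudFront config types pair every 'Items' list with a 'Quantity' count; the recursive walk is an illustrative generalization):

```python
def add_missing_quantities(node):
    # Walk the structure and set Quantity = len(Items) wherever an
    # 'Items' list lacks its matching 'Quantity' field.
    if isinstance(node, dict):
        if "Items" in node and isinstance(node["Items"], list):
            node.setdefault("Quantity", len(node["Items"]))
        for value in node.values():
            add_missing_quantities(value)
    elif isinstance(node, list):
        for value in node:
            add_missing_quantities(value)
    return node
```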
# Determine if the CodePipeline exists
# Update dictionary with provided module params:
# os.stat constants
# include/exclude
# not on the include list, so we don't want it.
# skip it, even if previously included.
# dirpath = path *to* the directory
# dirnames = subdirs *in* our directory
# filenames
# don't modify the input dict
# reminder: file extension is '.txt', not 'txt'.
# override? use it.
# else sniff it
# might be None or '' from one of the above. Not a great type but better than nothing.
# 404 (Missing) - File doesn't exist, we'll need to upload
# 403 (Denied) - Sometimes we can write but not read, assume we'll need to upload
# init/fetch info from S3 if we're going to use it for comparisons
# now actually run the strategies
# since we have a remote s3 object, compare the values.
# files match, so remove the entry
# file etags don't match, keep the entry.
# we don't have an etag, so we'll keep it.
# fstat = entry['stat']
# py2's datetime doesn't have a timestamp() method, so we have to revert to something more awkward.
# remote_modified_epoch = entry['s3_head']['LastModified'].timestamp()
# else: probably 'force'. Basically we don't skip with any with other strategies.
# prune 'please skip' entries, if any.
# if this fails exception is caught in main()
# can delete 1000 objects at a time
# future options: encoding, metadata, retries
# mark changed if we actually upload something.
# result.update(filelist=actionable_filelist)
# validate compute environment name
# if module.params['minv_cpus'] is not None:
# check if the compute environment exists
# Update Batch Compute Environment configuration
# Create Batch Compute Environment
# Describe compute environment
# remove the compute environment
# Get all targets for the target group
# Ensure that fields that are only available for active clusters are
# included in the returned value
# this waits for an IAM role to become fully available, at the cost of
# taking a long time to fail when the IAM role/policy really is invalid
# Copyright: (c) 2018, Shuang Wang <ooocamel@icloud.com>
# Running environments will be terminated before deleting the application
# check if rule group exists
# If an existing direct connect gateway matches our args
# then a match is considered to have been found and we will not create another dxgw.
# if a gateway_id was provided, check if it is attach to the DXGW
# attach the dxgw to the supplied virtual_gateway_id
# if params['virtual_gateway_id'] is not provided, check the dxgw is attached to a VPG. If so, detach it.
# create a new dxgw
# if a vpc-id was supplied, attempt to attach it to the dxgw
# wait for deleting association
# rule_type=dict(type='str', required=True, aliases=['type'], choices=['stateless', 'stateful']),
# ELB exists so check subnets, security groups and tags match what has been passed
# ELB attributes
# If listeners changed, mark ELB as changed
# Update ELB ip address type only if option has been provided
# Update the objects to pickup changes
# Get the ELB again
# Get the ELB listeners again
# Update the ELB attributes
# Check for subnets or subnet_mappings if state is present
# Copyright (c) 2018 Dennis Conrad for Sainsbury's
# Some parameters are not ready instantly if you don't wait for available
# cluster status
# Simple wrapper around delete, try to avoid throwing an error if some other
# operation is in progress
# Package up the optional parameters
# https://github.com/boto/boto3/issues/400
# enhanced_vpc_routing parameter change needs an exclusive request
# change the rest
# can't use module basic required_if check for this case
# cache behaviors are order dependent so we don't preserve the existing ordering when purge_cache_behaviors
# is true (if purge_cache_behaviors is not true, we can't really know the full new order)
# we don't care if the order of how cloudfront stores the methods differs - preserving existing
# order reduces likelihood of making unnecessary changes
# e_tag = distribution['ETag']
# check if the job queue exists
# Update Batch Job Queue configuration
# Create Job Queue
# Describe job queue
# remove the Job Queue
# we don't have an entry (or a table?)
# wrap all our calls to catch the standard exceptions. We don't pass `module` in to the
# methods so it's easier to do here.
# changes needed
# no changes needed
# compare_policies() takes two dicts and makes them hashable for comparison
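A rough illustration of the "make them hashable for comparison" idea: recursively freeze dicts and lists into tuples so two policy documents compare equal regardless of key or statement order (function names and the sort key are assumptions for this sketch, not the collection's actual implementation).

```python
def _hashable(obj):
    """Recursively convert dicts/lists into hashable tuples so two
    policy documents can be compared irrespective of ordering."""
    if isinstance(obj, dict):
        return tuple(sorted((k, _hashable(v)) for k, v in obj.items()))
    if isinstance(obj, list):
        # statement lists are treated as order-insensitive; sort by repr
        # so heterogeneous elements still sort deterministically
        return tuple(sorted((_hashable(v) for v in obj), key=repr))
    return obj

def policies_differ(a, b):
    return _hashable(a) != _hashable(b)
```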
# If the CallerReference is a value already sent in a previous identity request
# the returned value is that of the original request
# update cloudfront origin access identity
# create cloudfront origin access identity
# and throw different exceptions
# Get the attributes and tags for each target group
# Get tags for each target group
# e.g: Cluster was listed but is in deleting state
# Check if any fargate profiles are in changing states, if so, wait for the end
# Unpredictably get_identity_verification_attributes doesn't include the identity even when we've
# just registered it. Suspect this is an eventual consistency issue on AWS side.
# Don't want this complexity exposed to users of the module as they'd have to retry to ensure
# a consistent return from the module.
# To avoid this we have an internal retry that we use only after registering the identity.
# Unpredictably get_identity_notifications doesn't include the notifications when we've
# just registered the identity.
# To avoid this we have an internal retry that we use only when getting the current notification
# status for return.
# No clear AWS docs on when this happens, but it appears sometimes identities are not included
# in the notification attributes when the identity is first registered. Suspect that this is caused by
# eventual consistency within the AWS services. It's been observed in builds so we need to handle it.
# When this occurs, just return None and we'll assume no identity notification settings have been changed
# from the default which is reasonable if this is just eventual consistency on creation.
# See: https://github.com/ansible/ansible/issues/36065
# Paranoia check for coding errors, we only requested one identity, so if we get a different one
# something has gone very wrong.
# Not passing the parameter should not cause any changes.
# If there is no configuration, notifications cannot be being sent to topics,
# hence assume None as the current state.
# If there is information on the notifications setup but no information on the
# particular notification topic it's pretty safe to assume there's no topic for
# this notification. AWS API docs suggest this information will always be
# included but best to be defensive
# The topic has to be omitted from the request to disable the notification.
# If there is no configuration for topic notifications, headers cannot be being
# forwarded, hence assume false.
# AWS API doc indicates that the headers in fields are optional. Unfortunately
# it's not clear on what this means. But it's a pretty safe assumption that it means
# headers are not included since most API consumers would interpret absence as false.
# AWS requires feedback forwarding to be enabled unless bounces and complaints
# are being handled by SNS topics. So in the absence of identity_notifications
# information existing feedback forwarding must be on.
# If there is information on the notification setup but no information on the feedback
# forwarding state it's pretty safe to assume forwarding is off. AWS API docs
# suggest this information will always be included but best to be defensive
# Glue module doesn't appear to have any waiters, unlike EC2 or RDS
# Get security group IDs from names
# If glue_connection is not None then check if it needs to be modified, else create it
# We need to slightly modify the params for an update
# If changed, get the Glue connection again
# check if ip set exists
# If ES cluster is attached to the Internet, the "VPCOptions" property is not present.
# The "VPCOptions" returned by the describe_domain_config API has
# additional attributes that would cause an error if sent in the HTTP POST body.
# The "StartAt" property is converted to datetime, but when doing comparisons it should
# be in the string format "YYYY-MM-DD".
# Provisioning of "AdvancedOptions" is not supported by this module yet.
# Get the ARN of the OpenSearch cluster.
# Timeout occurred.
# It's possible to upgrade directly to the target version.
# No direct upgrade is possible. Upgrade to the highest version available.
# Return the highest compatible version which is lower than target_version
# In the event of an error it can be helpful to output things like the
# 'name'/'arn' of a resource.
# If you override _flush_update you're responsible for handling check_mode
# If you override _do_update_resource you'll only be called if check_mode == False
# (CHECK MODE)
# If you override _do_update_resource you'll only be called if there are
# updates pending and check_mode == False
# (CHECK_MODE)
# the AWSRetry wrapper doesn't support the wait functions (there's no
# public call we can cleanly wrap)
# This can be overridden by a subclass *if* 'Tags' isn't returned as a part of
# the standard Resource description
# If the resource supports using "TagSpecifications" on creation we can
# Name parameter is unique (by region) and cannot be modified.
# If the Tags are available from the resource, then use them
# Otherwise we'll have to look them up
# Tags are returned as a part of the resource, but have to be updated
# via dedicated tagging methods
# So that diff works in check mode we need to know the full target state
# Tags are stored as a list, but treated like a set; the
# simplistic '==' in _set_resource_value doesn't do the comparison
# properly
# source: https://github.com/tlastowka/calculate_multipart_etag/blob/master/calculate_multipart_etag.py
# calculate_multipart_etag  Copyright (C) 2015
# calculate_multipart_etag is free software: you can redistribute it and/or modify
# calculate_multipart_etag is distributed in the hope that it will be useful,
# along with calculate_multipart_etag.  If not, see <http://www.gnu.org/licenses/>.
# > 1
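The multipart ETag referenced above can be sketched as follows: S3-style multipart ETags are the MD5 of the concatenated per-part MD5 digests, suffixed with `-<part count>` (this is a generic sketch of the technique, not the linked script verbatim).

```python
import hashlib

def multipart_etag(data, chunk_size):
    """Compute an S3-style multipart ETag: MD5 of the concatenated
    per-part MD5 digests, suffixed with '-<number of parts>'."""
    digests = [hashlib.md5(data[i:i + chunk_size]).digest()
               for i in range(0, len(data), chunk_size)]
    return hashlib.md5(b''.join(digests)).hexdigest() + '-%d' % len(digests)
```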
# If there are no secondary indexes, simply return
# potentially AuthorizationError when listing subscriptions for third party topic
# topic names cannot have colons, so this captures the full topic name
# AWS automatically injects disableSubscriptionOverrides if you set an
# http policy
# AWS SNS expects phone numbers in
# and canonicalizes to E.164 format
# See <https://docs.aws.amazon.com/sns/latest/dg/sms_publish-to-phone.html>
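A very rough sketch of what E.164 canonicalization involves: keep the digits and ensure a leading `+` with a country code. The default country code here is purely an assumption for illustration; real code should use a dedicated phone-number library.

```python
def canonicalize_phone(number, default_country_code='1'):
    """Naive E.164 canonicalization sketch: strip formatting characters,
    keep digits, ensure a leading '+'. default_country_code is an
    illustrative assumption, not part of the SNS API."""
    digits = ''.join(ch for ch in number if ch.isdigit())
    if number.strip().startswith('+'):
        return '+' + digits
    return '+' + default_country_code + digits
```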
# Paginators can't be (easily) wrapped, so we wrap this method with the
# retry - retries the full fetch, but better than simply giving up.
# Network Firewall returns a token when you perform create/get/update
# Tags are returned as a part of the metadata, but have to be updated
# simplistic '==' in _set_metadata_value doesn't do the comparison
# Users should never see this, but let's cover ourselves
# Rule Group is already in the process of being deleted (takes time)
# Seems a little kludgy but the HOME_NET ip variable is how you
# configure which source CIDRs the traffic should be filtered for.
# Perform some transformations
# Finally build the 'rule'
# Apply some pre-flight tests before trying to run the creation.
# Policy is already in the process of being deleted (takes time)
# : is only valid in ARNs
# During deletion, there's a phase where this will return Metadata but
# no policy
# Firewall is already in the process of being deleted (takes time)
# Because the canonicalization of a non-ARN policy name will require an API call,
# try comparing the current name to the policy name we've been passed.
# If they match we don't need to perform the lookup.
# We don't need to perform EC2 lookups if we're not changing anything.
# # Apply some pre-flight tests before trying to run the creation.
# if 'Capacity' not in self._metadata_updates:
# There are no 'metadata' components of a Firewall to update
# There's no tool for 'bulk' updates, we need to iterate through these
# one at a time...
# Disable Change Protection...
# When disabling change protection, do so *before* making changes
# General Changes
# Enable Change Protection.
# When enabling change protection, do so *after* making changes
# It takes a couple of seconds before the firewall starts to update
# the subnets and policies, pause if we know we've changed them.  We'll
# be waiting substantially more than this...
# Unlike RuleGroups and Policies for some reason Firewalls have the tags set
# directly on the resource.
# find same priority rules
# TODO(odyssey4me):
# Figure out why pipelining does not work and fix it
# Ensure that any Windows hosts in your inventory have one of the
# following set, in order to trigger this code:
# ansible_shell_type: cmd
# ansible_shell_type: powershell
# Make sure our first command is to set the console encoding to
# utf-8, this must be done via chcp to get utf-8 (65001)
# Generate powershell commands
# Implement buffering much like the other connection plugins
# Implement 'env' for the environment settings
# Implement 'input-data' for whatever it might be useful for
# Add timeout parameter
# Work out a better way to wait until the command has exited
# Wait for 5% of the time already elapsed
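The "wait 5% of the time already elapsed" idea can be sketched as a poll delay that grows with total elapsed time, clamped between a floor and ceiling (the clamp bounds here are illustrative assumptions):

```python
def next_poll_delay(elapsed, minimum=0.1, maximum=10.0):
    """Poll delay that grows with total elapsed time: wait 5% of the
    time already spent, clamped between minimum and maximum seconds.
    The clamp values are illustrative, not from the plugin."""
    return min(maximum, max(minimum, elapsed * 0.05))
```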
# Decode xml from windows
# Handle exception for file/path IOError
# TODO(daveol)
# make using connection plugins optional
# TODO(daveol): Fix "Invalid characters were found in group names"
# This warning is generated because of uuid's
# This needs the guest powered on, 'qemu-guest-agent' installed and the org.qemu.guest_agent.0 channel configured.
# type==0 returns all types (users, os, timezone, hostname, filesystem, disks, interfaces)
# Get variables for compose
# (c) 2015, Maciej Delmanowski <drybjed@gmail.com>
# (c) 2025, Dougal Seeley <git@dougalseeley.com>
# Ensure clone_source is valid
# Conversion factors to bytes
# Convert size to bytes
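A minimal sketch of the conversion the two comments above describe, assuming binary (1024-based) suffixes as is conventional for libvirt volume sizes; the accepted suffix set is an assumption.

```python
# Assumed binary suffixes; a bare number is taken as bytes.
FACTORS = {'B': 1, 'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3, 'T': 1024 ** 4}

def size_to_bytes(size):
    """Convert a size string like '10G' into a byte count."""
    size = size.strip().upper()
    if size[-1].isdigit():
        return int(size)
    return int(float(size[:-1]) * FACTORS[size[-1]])
```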
# If no clone_source is provided, just create an empty volume
# StringIO as BytesIO for python2/3 compatibility
# Ensure we actually have some CIDATA before creating the CIDATA cdrom
# Remote iso XML
# Copyright: (c) 2007, 2012 Red Hat, Inc
# Michael DeHaan <michael.dehaan@gmail.com>
# Seth Vidal <skvidal@fedoraproject.org>
# libvirt returns maxMem, memory, and cpuTime as long()'s, which
# xmlrpclib tries to convert to regular int's during serialization.
# This throws exceptions, so convert them to strings here and
# assume the other end of the xmlrpc connection can figure things
# out or doesn't care.
# Change autostart flag only if needed
# A dict of interface types (found in their `type` attribute) to the
# corresponding "source" attribute name of their  <source> elements
# user networks don't have a <source> element
# We do not support fuzzy matching against any interface types
# not defined here
# TODO: provide info from parser
# We'll support supplying the domain's name either from 'name' parameter or xml
# But we will fail if both are defined and not equal.
# since there's no <name> in the xml, we'll add it
# From libvirt docs (https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainDefineXML):
# -- A previous definition for this domain with the same UUID and name would
# be overridden if it already exists.
# If a domain is defined without a <uuid>, libvirt will generate one for it.
# If an attempt is made to re-define the same xml (with the same <name> and
# no <uuid>), libvirt will complain with the following error:
# operation failed: domain '<name>' already exists with <uuid>
# If a domain with a similar <name> but different <uuid> is defined,
# libvirt complains with the same error. However, if a domain is defined
# with the same <name> and <uuid> as an existing domain, then libvirt will
# update the domain with the new definition (automatically handling
# addition/removal of devices. some changes may require a boot).
# we are updating a domain's definition
# A user should not try defining a domain with the same name but
# different UUID
# Users will often want to define their domains without an explicit
# UUID, instead giving them a unique name - so we support bringing
# over the UUID from the existing domain
# the counts of interfaces of a similar type/source
# key'd with tuple of (type, source)
# iterate user-defined interfaces
# we want to count these, but not try to change their MAC address
# In this case, we may have updated the definition or it might be the same.
# We compare the domain's previous xml with its new state and diff
# the changes. This allows users to fix their xml if it results in
# non-idempotent behaviour (e.g. libvirt mutates it each time)
# there was no existing XML, so this is a newly created domain
# Use the undefine function with flag to also handle various metadata.
# This is especially important for UEFI enabled guests with nvram.
# Provide flag as an integer of all desired bits, see 'ENTRY_UNDEFINE_FLAGS_MAP'.
# Integer 55 takes care of all cases (55 = 1 + 2 + 4 + 16 + 32).
# Check mutually exclusive flags
# Get and add flag integer from mapping, otherwise 0.
# Finally, execute with flag
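The flag assembly described above can be sketched as a bitwise OR over a mapping; the option names below mirror libvirt's undefine flag bits but are assumptions for illustration (the real map is `ENTRY_UNDEFINE_FLAGS_MAP` in the module).

```python
# Assumed flag map: each undefine option is one bit, and OR-ing every
# bit yields 55 (1 + 2 + 4 + 16 + 32), matching the comment above.
ENTRY_UNDEFINE_FLAGS_MAP = {
    'managed_save': 1,
    'snapshots_metadata': 2,
    'nvram': 4,
    'checkpoints_metadata': 16,
    'tpm': 32,
}

def build_undefine_flag(options):
    """OR together the flag bit for each requested option (unknown -> 0)."""
    flag = 0
    for opt in options:
        flag |= ENTRY_UNDEFINE_FLAGS_MAP.get(opt, 0)
    return flag
```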
# entryid = -1 returns a list of everything
# Get all entries
# Get active entries
# identify what type of entry is given in the xml
# find the one mac we're looking for
# add the host
# pretend there was a change
# change the host
# not supported on RHEL 6
# (c) 2025, Joey Zhang <thinkdoggie@gmail.com>
# Required options
# Installation media options
# Controller devices
# Input devices
# Host devices
# Sound devices
# Audio devices
# Watchdog devices
# Serial devices
# Parallel devices
# Channel devices
# Console devices
# Video devices
# Smartcard devices
# Redirection devices
# Memory balloon devices
# TPM devices
# RNG devices
# Panic devices
# Shared memory devices
# Vsock devices
# IOMMU devices
# Build command sections
# Always add --noautoconsole for non-interactive execution
# Execute the command
# run virt-install to create new vm
# Define argument specification
# Connection options
# Basic VM options
# Hardware configuration
# CPU configuration
# Installation options
# Guest OS options
# Storage options
# Network options
# Graphics options
# Virtualization options
# Device options
# Miscellaneous options
# Add the 'import' option (Python keyword)
# Create module
# Copyright: (c) 2022, Brian Scholer (@briantist)
# Copyright: (c) 2021, Brian Scholer (@briantist)
# (c) 2022, Brian Scholer (@briantist)
# ansible-core 2.10 or later
# ansible 2.9
# Loading ensures that the options are initialized in ConfigManager
# TODO: remove process_deprecations() if backported fix is available (see method definition)
# (c) 2021, Brian Scholer (@briantist)
# this method was added in hvac 1.0.0
# See: https://github.com/hvac/hvac/pull/869
# this method was removed in hvac 1.0.0
# See: https://github.com/hvac/hvac/issues/758
# (c) 2020, Brian Scholer (@briantist)
# (c) 2015, Julie Davila (@juliedavila) <julie(at)davila.io>
# process connection options
# secret field splitter
# begin options processing methods
# split secret and field
# Check response for KV v2 fields and flatten nested secret data.
# https://vaultproject.io/api/secret/kv/kv-v2.html#sample-response-1
# sentinel field checks
# unwrap nested data
# everything after here implements return_as == 'dict'
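The KV v2 flattening described above can be sketched like this: a KV v2 read nests the secret under `data.data` with `data.metadata` alongside, so detect that shape and hoist the inner keys (a simplified sketch, not the lookup's actual code):

```python
def unwrap_kv2(response):
    """If the raw Vault response looks like a KV v2 read (nested 'data'
    and 'metadata' under 'data'), flatten it so the secret keys sit at
    the top-level 'data'; otherwise return the response unchanged."""
    data = response.get('data', {})
    if isinstance(data.get('data'), dict) and isinstance(data.get('metadata'), dict):
        response = dict(response)
        response['data'] = data['data']
        response['metadata'] = data['metadata']
    return response
```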
# (c) 2023, Tom Kivlin (@tomkivlin)
# TODO: write_data will eventually turn back into write
# see: https://github.com/hvac/hvac/issues/1034
# https://github.com/ansible-collections/community.hashi_vault/issues/389
# https://github.com/hvac/hvac/issues/797
# HVAC returns a raw response object when the body is not JSON.
# That includes 204 responses, which are successful with no body.
# So we will try to detect that and act accordingly.
# A better way may be to implement our own adapter for this
# collection, but it's a little premature to do that.
# Copyright (c) 2021 Brian Scholer (@briantist)
# The interfaces in this file are meant for use within the community.hashi_vault collection
# allow first item to be specified as value only and assign to assumed option name
# TODO: this is a workaround for deprecations not being shown in lookups
# If a fix is backported to 2.9, this should be removed.
# Otherwise, we'll have to test with fixes that are available and see how we
# can determine whether to execute this conditionally.
# nicked from cli/__init__.py
# with slight customizations to help filter out relevant messages
# (relying on the collection name since it's a valid attrib and we only have 1 plugin at this time)
# warn about deprecated config options
# remove this item from the list so it won't get processed again by something else
# (c) 2022, Isaac Wagner (@idwagner)
# Vault has two separate methods, one for delete latest version,
# and delete specific versions.
# (c) 2024, Martin Chmielewski (@M4rt1nCh)
# (c) 2022, Florent David (@Ripolin)
# generate_certificate is a write operation which always return a new certificate
# we don't actually need to import hvac directly in this module
# because all of the hvac calls happen in module utils, but
# we would like to control the error message here for consistency.
# we override this from the shared argspec in order to turn off no_log
# otherwise we would not be able to return the input token value
# we override this from the shared argspec because the default for
# this module should be True, which differs from the rest of the
# collection since 4.0.0.
# a login is technically a write operation, using storage and resources
# with the token auth method, we don't actually perform a login operation
# nor change the state of Vault; it's read-only (to lookup the token's info)
# token creation is a write operation, using storage and resources
# (c) 2023, Devon Mar (@devon-mar)
# Copyright (c) 2022 Junrui Chen (@jchenship)
# if mount_point is not provided, it will use the default value defined
# in hvac library (e.g. `azure`)
# if jwt exists, use provided jwt directly, otherwise trying to get jwt
# from azure service principal or managed identity
# the logic of getting azure scope is from this function
# https://github.com/Azure/azure-cli/blob/azure-cli-2.39.0/src/azure-cli-core/azure/cli/core/auth/util.py#L72
# the reason we expose resource instead of scope is resource is
# more aligned with the vault azure auth config here
# https://www.vaultproject.io/api-docs/auth/azure#resource
# service principal
# user assigned managed identity
# system assigned managed identity
# first merge in the entire response at the top level
# but, rather than being missing, the auth field is going to be None,
# so we explicitly overwrite that with our original value.
# then we'll merge the data field right into the auth field
# and meta->metadata needs a name change
# usually we would warn here, but the v1 method doesn't seem to be deprecated (yet?)
# when token=None on this method, it calls lookup-self
# must manually set the client token with JWT login
# see https://github.com/hvac/hvac/issues/644
# fixed in https://github.com/hvac/hvac/pull/746
# but we do it manually to maintain compatibility with older hvac versions.
# we implement retries via the urllib3 Retry class
# https://github.com/ansible-collections/community.hashi_vault/issues/58
# try for a standalone urllib3
# failing that try for a vendored version within requests
# https://www.vaultproject.io/api#http-status-codes
# 429 is usually a "too many requests" status, but in Vault it's the default health status response for standby nodes.
# Precondition failed. Returned on Enterprise when a request can't be processed yet due to some missing eventually consistent data.
# Should be retried, perhaps with a little backoff.
# Internal server error. An internal error has occurred, try again later. If the error persists, report a bug.
# A request to Vault required Vault making a request to a third party; the third party responded with an error of some kind.
# Vault is down for maintenance or is currently sealed. Try again later.
# this field name changed in 1.26.0, and in the interest of supporting a wider range of urllib3 versions
# we'll use the new name whenever possible, but fall back seamlessly when needed.
# See also:
# - https://github.com/urllib3/urllib3/issues/2092
# - https://github.com/urllib3/urllib3/blob/main/CHANGES.rst#1260-2020-11-10
# None allows retries on all methods, including those which may not be considered idempotent, like POST
# validate_certs is only used to optionally change the value of ca_cert
# our transformed ca_cert value will become the verify parameter for the hvac client
# because hvac requires requests, which requires urllib3, it's unlikely we'll ever reach this condition.
# This is defined here because Retry may not be defined if its import failed.
# As mentioned above, that's very unlikely, but it'll fail sanity tests nonetheless if defined with other classes.
# We don't want the Retry class raising its own exceptions because that will prevent
# hvac from raising its own on various response codes.
# We set this here, rather than in the defaults, because if the caller sets their own
# dict for retries, we use it directly, but we don't want them to have to remember to always
# set raise_on_status=False themselves to get proper error handling.
# On the off chance someone does set it, we leave it alone, even though it's probably a mistake.
# That will be mentioned in the parameter docs.
# needs urllib3 1.15+ https://github.com/urllib3/urllib3/blob/main/CHANGES.rst#115-2016-04-06
# but we should always have newer ones via requests, via hvac
# this method focuses on validating the option, and setting a valid Retry object construction dict
# it intentionally does not build the Session object, which will be done elsewhere
# we'll start with a copy of our defaults
# try int
# on int, retry the specified number of times, and use the defaults for everything else
# on zero, disable retries
# try dict
# on dict, use the value directly (will be used as the kwargs to initialize the Retry instance)
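The int-or-dict handling described above can be sketched as a function that produces the kwargs for constructing a `urllib3.util.Retry` instance. The default values (total count, status list) are illustrative assumptions; the `raise_on_status=False` default mirrors the behavior noted earlier so hvac can raise its own exceptions on response codes.

```python
# Illustrative defaults; the real option defaults live in the collection.
RETRY_DEFAULTS = {
    'total': 3,
    'status_forcelist': [412, 500, 502, 503],
    'raise_on_status': False,
}

def build_retry_kwargs(retries):
    """Interpret the 'retries' option: an int sets the retry count
    (0 disables retries); a dict is used directly as Retry kwargs,
    with raise_on_status defaulted to False unless the caller set it."""
    if isinstance(retries, int) and not isinstance(retries, bool):
        if retries == 0:
            return None  # zero disables retries entirely
        kwargs = dict(RETRY_DEFAULTS)
        kwargs['total'] = retries
        return kwargs
    if isinstance(retries, dict):
        kwargs = dict(retries)
        kwargs.setdefault('raise_on_status', False)
        return kwargs
    raise TypeError('retries must be an int or a dict')
```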
# if it can be interpreted as dict
# if it can't be interpreted as dict
# but can be interpreted as str
# use this str as http and https proxy
# record the new/interpreted value for 'proxies' option
# This is needed because of this (https://hvac.readthedocs.io/en/stable/source/hvac_v1.html):
# # verify (Union[bool,str]) - Either a boolean to indicate whether TLS verification should
# # be performed when sending requests to Vault, or a string pointing at the CA bundle to use for verification.
# Validate certs option was not explicitly set
# Check if VAULT_SKIP_VERIFY is set
# VAULT_SKIP_VERIFY is set
# Check that we have a boolean value
# Not a boolean value fallback to default value (True)
# Use the inverse of VAULT_SKIP_VERIFY
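The VAULT_SKIP_VERIFY fallback chain above can be sketched as follows (the accepted boolean spellings are an assumption; the real code likely uses Ansible's boolean conversion helper):

```python
import os

def resolve_validate_certs(validate_certs, environ=os.environ):
    """If validate_certs was not explicitly set, derive it from
    VAULT_SKIP_VERIFY: a boolean-looking value is inverted, anything
    else falls back to the default of True."""
    if validate_certs is not None:
        return validate_certs
    raw = environ.get('VAULT_SKIP_VERIFY')
    if raw is None:
        return True
    value = raw.strip().lower()
    if value in ('1', 'true', 'yes', 'on'):
        return False  # skip-verify set -> do not validate certs
    if value in ('0', 'false', 'no', 'off'):
        return True
    return True  # not a boolean value: fall back to the default
```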
# Copyright (c) 2021 Devon Mar (@devon-mar)
# must manually set the client token with userpass login
# fixed in 0.11.0 (https://github.com/hvac/hvac/pull/733)
# but we keep the old behavior to maintain compatibility with older hvac
# logout to prevent accidental use of inferred tokens
# https://github.com/ansible-collections/community.hashi_vault/issues/13
# More context on the need to call Config Manager methods:
# Some issues raised around deprecations in plugins not being processed led to comments
# from core maintainers around the need to use Config Manager and also to ensure any
# option needed is always retrieved using AnsiblePlugin.get_option(). At the time of this
# writing, based on the way Config Manager is implemented, that's not actually necessary,
# and calling AnsiblePlugin.set_options() to initialize them is enough. But that's not
# guaranteed to stay that way, if get_option() is used to "trigger" internal events.
# More reading:
# - https://github.com/ansible-collections/community.hashi_vault/issues/35
# - https://github.com/ansible/ansible/issues/73051
# - https://github.com/ansible/ansible/pull/73058
# - https://github.com/ansible/ansible/pull/73239
# - https://github.com/ansible/ansible/pull/73240
# AnsiblePlugin.has_option was added in 2.10, see https://github.com/ansible/ansible/pull/61078
# see https://github.com/ansible-collections/community.hashi_vault/issues/10
# Options which seek to use environment vars that are not Ansible-specific
# should load those as values of last resort, so that INI values can override them.
# For default processing, list such options and vars here.
# Alternatively, process them in another appropriate place like an auth method's
# validate_ method.
# key = option_name
# value = dict with "env" key which is a list of env vars (in order of those checked first; process stops when value is found),
# and an optional "default" key whose value will be set if none of the env vars are found.
# An optional boolean "required" key can be used to specify that a value is required, so raise if one is not found.
# we use has_option + get_option rather than get_option_default
# because we will only override if the option exists and
# is None, not if it's missing. For plugins, that is the usual,
# but for modules, they may have to set the default to None
# in the argspec if it has late binding env vars.
# please keep this list in alphabetical order of auth method name
# so that it's easier to scan and see at a glance that a given auth method is present or absent
# Copyright: (c) 2018, Chris Houseknecht <@chouseknecht>
# STARTREMOVE (downstream)
# ENDREMOVE (downstream)
# Copyright (c) 2021, Red Hat
# Copyright (c) 2020-2021, Red Hat
# Returned value names need to match k8s modules parameter names, to make it
# easy to pass returned values of openshift_auth to other k8s modules.
# Discussion: https://github.com/ansible/ansible/pull/50807#discussion_r248827899
# python-requests takes either a bool or a path to a ca file as the 'verify' param
# Get needed info to access authorization APIs
# return k8s_auth as well for backwards compatibility
# Request authorization code using basic auth credentials
# In here we have `code` and `state`, I think `code` is the important one
# Using authorization code given to us in the Location header of the previous request, request a token
# This is just base64 encoded 'openshift-challenging-client:'
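The "base64 encoded 'openshift-challenging-client:'" line refers to an HTTP basic-auth header with an empty password; a sketch of building it (the helper name is illustrative):

```python
import base64

def basic_auth_header(username='openshift-challenging-client', password=''):
    """Build a basic-auth Authorization header value: base64 of
    'username:password' (here the password is empty, so the encoded
    payload is the literal string 'openshift-challenging-client:')."""
    token = base64.b64encode(('%s:%s' % (username, password)).encode('utf-8'))
    return 'Basic ' + token.decode('ascii')
```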
# We need to do something a little wonky to wait if the user doesn't supply a custom condition
# Don't use default wait logic in perform_action
# Want to conditionally add these so we don't overwrite what is automatically added when nothing is provided
# Get TemplateInstances from a provided namespace
# Get TemplateInstances from all namespaces
# pylint: disable=use-a-generator
# Use a generator instead 'all(desired.get(key, True) == item.get(key, False) for key in keys)'
# filter managed images
# Be sure to pick up the newest managed image which should have an up to date information
# 2nd try to get the pull spec from any image stream
# Sorting by creation timestamp may not get us up-to-date info. Modification time would be much better.
# Unable to delete layer
# non-2xx/3xx response doesn't cause an error
# image size is larger than the permitted limit range max size
# keeping because tag is used
# There are few options why the image may not be found:
# 1. the image is deleted manually and this record is no longer valid
# 2. the imagestream was observed before the image creation, i.e.
# check prune over limit size
# keeping all images because of image stream too young
# skipping because it does not exist anymore
# Update Image stream tag
# Deleting tags without items
# Update ImageStream
# keeping external image because all_images is set to false
# pruning only managed images
# keeping because of keep_younger_than
# Deleting image from registry
# add blob for image name
# Delete image from cluster
# validate that host has a scheme
# Analyze Image Streams
# Create image mapping
# Create limit range mapping
# Stage 1: delete history from image streams
# Create a list with images referenced on image stream
# Stage 2: delete images
# When namespace is defined, prune only images that were referenced by ImageStream
# from the corresponding namespace
# The image is referenced in one or more Image stream
# The image is not existing anymore
# Validate url
# Make sure bindDN and bindPassword are both set, or both unset
# validate query scope
# validate deref aliases
# validate timeout
# Validate DN only
# validate filter
# Extract Scheme
# ipv6 literal (with no port)
# set deref alias (TODO: need to set a default value to reset for each transaction)
# Take the last attribute from the other DN to compare against
# qry retrieves entries from an LDAP server
# query_attributes is the attribute for a specific filter that, when conjoined with the common filter,
# retrieves the specific LDAP entry from the LDAP server. (e.g. "cn", when formatted with "aGroupName"
# and conjoined with "objectClass=groupOfNames", becomes (&(objectClass=groupOfNames)(cn=aGroupName))")
# filter that returns all values
# Builds the query containing a filter that conjoins the common filter given
# in the configuration with the specific attribute filter for which the attribute value is given
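The filter conjunction described above, using the exact example from these comments, can be sketched as:

```python
def build_group_query(common_filter, attribute, value):
    """Conjoin the common filter with a specific attribute filter,
    e.g. ('objectClass=groupOfNames', 'cn', 'aGroupName') ->
    '(&(objectClass=groupOfNames)(cn=aGroupName))'."""
    return '(&(%s)(%s=%s))' % (common_filter, attribute, value)
```

Note a real implementation should also LDAP-escape the value before interpolation.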
# Query retrieves entries from an LDAP server
# Get group entry from LDAP
# Extract member UIDs from group entry
# Get name from User defined mapping
# ExtractMembers returns the LDAP member entries for a group specified with a ldapGroupUID
# if we already have it cached, return the cached value
# This happens in cases where we did not list out every group.
# In that case, we're going to be asked about specific groups.
# Get group members
# Check group Existence
# image reference does not match hostname/namespace/name pattern - skipping
# Attempt to dereference istag. Since we cannot be sure whether the reference refers to the
# integrated registry or not, we ignore the host part completely. As a consequence, we may keep
# image otherwise sentenced for a removal just because its pull spec accidentally matches one of
# our imagestreamtags.
# set the tag if empty
# Ignoring container because it has no reference to image
# A pod is only *excluded* from being added to the graph if its phase is not
# pending or running. Additionally, it has to be at least as old as the minimum
# age threshold defined by the algorithm.
# Determine 'from' reference
# Json Path is always spec.strategy
# Analyze image reference from Pods
# Analyze image reference from Resources creating Pod
# Analyze image reference from Build/BuildConfig
# Overrides incremental
# Environment variable
# Docker strategy option
# caching option
# commit
# Instantiate Build from Build config
# Re-run Build
# list all builds from namespace
# Build status.phase is matching the requested state and is not completed
# Set cancelled to true
# Make sure the build phase is really cancelled.
# list replicationcontroller candidate for pruning
# Get ReplicationController
# Validate LDAP URL Annotation
# Validate LDAP UID Annotation
# We raise an error for group part of the allow_group not matching LDAP sync criteria
# Make sure we aren't taking over an OpenShift group that is already related to a different LDAP group
# Overwrite Group Users data
# Create connection object
# Get Synchronize object
# Determine what to sync : list groups
# List LDAP Group to synchronize
# Get membership data
# Determine usernames for members entries
# Get group name
# Make OpenShift group
# Create OpenShift Groups
# Check if LDAP group exists
# if the LDAP entry that was previously used to create the group doesn't exist, prune it
# Delete Group
# validate LDAP sync configuration
# Split host/port
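The host/port split could look like the sketch below; the helper name and the default LDAP port of 389 are assumptions:

```python
def split_host_port(hostport, default_port=389):
    # 'host:port' -> (host, port); bare 'host' falls back to the default
    if ":" in hostport:
        host, _, port = hostport.rpartition(":")
        return host, int(port)
    return hostport, default_port
```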
# DeploymentConfigAnnotation is an annotation name used to correlate a deployment with the
# DeploymentConfig on which the deployment is based.
# This is set on replication controller pod template by deployer controller.
# validate that replication controller status is either 'Complete' or 'Failed'
# verify if the deploymentconfig associated to the replication controller is still existing
# skip this binding as the roleRef.kind does not match
# select this binding as the roleRef.name matches
# Prune ClusterRoleBinding
# Prune Role Binding
# Remove the user role binding
# Remove the user cluster role binding
# Remove the user from security context constraints
# Remove the user from groups
# Remove the user's OAuthClientAuthorizations
# Remove the groups role binding
# Remove the groups cluster role binding
# Remove the groups security context constraints
# Try to determine registry hosts from Image Stream from 'openshift' namespace
# Unable to determine registry from 'openshift' namespace, trying with all namespaces
# verify ssl
# The reference needs to be followed with two format patterns:
# a) sameis:sometag and b) sometag
# anotheris:sometag - this should not happen.
# sameis:sometag - follow the reference as sometag
# Update ImageStream appropriately
# Follow any referential tags to the destination
# Create a new tag
# if the from is still empty this means there's no such tag defined,
# nor can we create one from .spec.dockerImageRepository
# Disallow re-importing anything other than DockerImage
# disallow changing an existing tag
# Set the target item to import
# Clear the legacy annotation
# Reset the generation
# Create accompanying ImageStreamImport
# Find existing Image Stream
# importing the entire repository
# importing a single tag
# Create image import
# case: "sha256:" with no hex.
# Parse remainder information
# docker image reference with digest
# docker image reference with tag
# name only
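The three reference shapes listed above (digest, tag, name only) can be sketched as a small parser. `parse_reference` is a hypothetical helper, not the actual implementation; it splits on the last `@` for a digest, and only treats a trailing `:part` as a tag when it comes after the last `/` (so registry ports are not mistaken for tags):

```python
def parse_reference(ref):
    """Return (name, tag, digest); the unmatched parts are None."""
    # docker image reference with digest: name@sha256:<hex>
    if "@" in ref:
        name, digest = ref.rsplit("@", 1)
        return name, None, digest
    # docker image reference with tag: the ':' must follow the last '/'
    idx = ref.rfind(":")
    if idx > ref.rfind("/"):
        return ref[:idx], ref[idx + 1:], None
    # name only
    return ref, None, None
```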
# ansible-doc has introduced a 'doc.plugin_name' on branch 'stable-2.17'
# This change generated the following sanity test error.
# invalid-documentation: DOCUMENTATION.plugin_name: extra keys not allowed @ data['plugin_name'].
# This will be removed from the module documentation
# NOTE: not used anymore, kept for compat
# sys.stderr.write("Failed to assign id for %s on %s, skipping\n" % (t, filename))
# sys.stderr.write('assigned: %s\n' % varkey)
# sys.stderr.write("unable to parse %s" % filename)
# Copyright (c) 2020, Nikolay Dachev <nikolay@dachev.info>
# Copyright (c) 2016 Red Hat Inc.
# check ECMA-48 Section 5.4 (Control Sequences)
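A sketch of handling ECMA-48 Section 5.4 control sequences, limited to the common CSI form (`ESC [`, parameter bytes 0x30-0x3F, intermediate bytes 0x20-0x2F, one final byte 0x40-0x7E). The names here are illustrative:

```python
import re

# CSI sequences per ECMA-48 Section 5.4
CSI_RE = re.compile(r"\x1b\[[\x30-\x3f]*[\x20-\x2f]*[\x40-\x7e]")

def strip_control_sequences(text):
    # Remove all CSI control sequences from terminal output
    return CSI_RE.sub("", text)
```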
# Copyright (c) 2022, Felix Fontein (@felixfontein) <felix@fontein.de>
# Handled in api module_utils
# handle_read_only must be 'validate'
# do not update this value
# Try to match indexed_entries with old_entries
# Update existing entries
# Add to modification list if there are changes
# For sake of completeness, retrieve the full new data:
# Remove 'irrelevant' data
# Produce return value
# Store ID for primary keys
# Determine modifications
# Do modifications
# Actually do modification
# Retrieve latest version
# This should never happen for Python 2.7+
# Find matching entries
# Allow to specify keys that should not be present by prepending '!'
# Check whether the correct number of entries was found
# Identify entries to update
# Allow to specify keys to remove by prepending '!'
# Only include the matching values
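The `'!'` prefix convention described above (a key prepended with `'!'` must be absent from the entry) could look like this; `entry_matches` is a hypothetical helper:

```python
def entry_matches(entry, required):
    """Keys prefixed with '!' must be absent; all others must match."""
    for key, value in required.items():
        if key.startswith("!"):
            # '!foo' means the entry must not contain 'foo' at all
            if key[1:] in entry:
                return False
        elif entry.get(key) != value:
            return False
    return True
```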
# Copyright (c) 2018, Egor Zaitsev (@heuels)
# Try to convert subnet to an integer
# remove datetime
# create api base path
# api calls
# where must be of the format '<attribute> <operator> <value>'
# attribute
# operator
# Raised when WHERE has not been found
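A minimal sketch of the `'<attribute> <operator> <value>'` parsing described above. The operator set and the `ParseError` name are assumptions, not the module's actual definitions:

```python
class ParseError(Exception):
    """Raised when the WHERE clause cannot be parsed."""

VALID_OPERATORS = ("==", "!=", ">", "<")  # assumed operator set

def parse_where(where):
    # Split into at most three parts so the value may contain spaces
    parts = where.split(None, 2)
    if len(parts) != 3:
        raise ParseError("where must be '<attribute> <operator> <value>'")
    attribute, operator, value = parts
    if operator not in VALID_OPERATORS:
        raise ParseError("unknown operator %r" % operator)
    return attribute, operator, value
```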
# The data inside here is private to this collection. If you use this from outside the collection,
# you are on your own. There can be random changes to its format even in bugfix releases!
# Mark as 'fully understood' if it is for at least one version
# How to obtain this information:
# 1. Run `/export verbose` in the CLI;
# 2. All attributes listed there go into the `fields` list;
# 3. All bold attributes go into the `primary_keys` list -- this is not always true!
# primary_keys=('default', ),
# Mikrotik does not provide exact version in official changelogs.
# The 7.15 version is the earliest, found option in router config backups:
# The template field cannot really be changed once the item is
# created. This config captures the behavior as best it can,
# i.e. template=yes is shown, template=no is hidden.
# Since librouteros does not pass server_hostname,
# we have to do this ourselves:
# Obtain field and value
# Check
# Copyright (c) 2021, Felix Fontein (@felixfontein) <felix@fontein.de>
# Disable logging message triggered by pSphere/suds.
# Retrieve only guest VMs, or include host systems?
# Read authentication information from VMware environment variables
# (if set), otherwise from INI file.
# Limit the clusters being scanned
# Override certificate checks
# Create the VMware client connection.
# Use different cache names for guests only vs. all hosts.
# Loop through clusters and find hosts:
# Get list of all physical hosts
# Loop through physical hosts:
# Loop through all VMs on physical host.
# Group by resource pool.
# Group by datastore.
# Group by network.
# Group by guest OS.
# Group all VM templates.
# Additional options for use when running the script standalone, but never
# used by Ansible.
# Copyright (C): 2017, Ansible Project
# Requirements
# use jinja environments to allow for custom filters
# translation table for attributes to fetch for known vim types
# Read settings and parse CLI arguments
# Check the cache
# Handle Cache
# Data to print
# Display list of instances for inventory
# where is the config?
# apply defaults
# where is the cache?
# set the cache filename and max age
# mark the connection info
# behavior control
# Special feature to disable the brute force serialization of the
# virtual machine objects. The key name for these properties does not
# matter because the values are just items for a larger list.
# save the config
# noqa  # pylint: disable=assigning-non-slot
# Python 2.7.9 < or RHEL/CentOS 7.4 <
# Create a search container for virtualmachines
# If requested, limit the total number of instances
# make a unique id for this object to avoid vmware's
# numerous UUIDs, which aren't all unique.
# Put it in the inventory
# Make a map of the uuid to the alias the user wants
# Make a map of the uuid to the ssh hostname the user wants
# Reset the inventory keys
# set ansible_host (2.x)
# 1.9.x backwards compliance
# add new key
# cleanup old key
# Apply host filters
# delete this host
# Create groups
# props without periods are direct attributes of the parent
# props with periods are subkeys of parent attributes
# pointer to the current object
# pointer to the current result key
# if the val wasn't set yet, get it from the parent
# in a subkey, get the subprop from the previous attrib
# make sure it serializes
# lowercase keys if requested
# change the pointer or set the final value
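The pointer walk described above (props without periods become direct attributes, props with periods become nested sub-keys, with the pointer advancing one path segment at a time) can be sketched as follows; `insert_prop` and its argument names are hypothetical:

```python
def insert_prop(inv, prop, value):
    """Assign a possibly-dotted property path into a nested dict."""
    parts = prop.split(".")
    ptr = inv  # pointer to the current object
    for key in parts[:-1]:
        # descend into (or create) the sub-dict for this path segment
        ptr = ptr.setdefault(key, {})
    # set the final value at the last path segment
    ptr[parts[-1]] = value
    return inv
```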
# pyvmomi objects are not yet serializable, but may be one day ...
# https://github.com/vmware/pyvmomi/issues/21
# WARNING:
# Accessing an object attribute will trigger a SOAP call to the remote.
# Increasing the attributes collected or the depth of recursion greatly
# increases runtime duration and potentially memory+network utilization.
# Attempt to get the method, skip on fail
# Skip callable methods
# Parameters for VMware modules
# Parameters for VMware REST Client based modules
# Copyright: (c) 2018, Deric Crago <deric.crago@gmail.com>
# TODO: Fix persistent connection implementation; currently Ansible creates new connections to vCenter for each task,
# so we're closing a non-existent connection here and establishing a connection just for it to be thrown away
# right afterwards.
# https://pubs.vmware.com/vsphere-6-5/index.jsp?topic=%2Fcom.vmware.wssdk.smssdk.doc%2Fvmodl.fault.SystemError.html
# https://github.com/ansible/ansible/issues/57607
# file size of 'in_path' must be greater than 0
# Copyright: (c) 2018 Red Hat Inc.
# Copyright: (c) 2020, dacrystal
# Copyright: (c) 2021, Abhijeet Kasurde <akasurde@redhat.com>
# Disable warning shown at stdout
# Request is malformed
# Pyvmomi 5.5 and onwards requires requests 2.3
# https://github.com/vmware/pyvmomi/blob/master/requirements.txt
# Create Property Spec
# Type of object to retrieved
# Create Filter Spec
# Add virtual machine to appropriate tag group
# Ghost Tags - community.vmware#681
# Add tags related to VM
# Add categories related to VM
# Add tag and categories related to VM
# Path
# Check if we can add host as per filters
# Load VM properties in host_vars
# Sanitize host properties: to snake case
# to snake case
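Sanitizing mixed-case property names to snake case, as the comments above describe, is commonly done with two regex passes; this sketch assumes that approach rather than reproducing the module's exact code:

```python
import re

def to_snake_case(name):
    # Insert an underscore before each capitalized word...
    s1 = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    # ...and before a capital following a lower-case letter or digit,
    # then lower-case everything.
    return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s1).lower()
```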
# For backward compatibility
# Already handled in module_utils/inventory.py
# Ghost Tags
# filter nics that are selected
# add hostvar 'management_ip' to each host
# Copyright: (c) 2022, Mario Lenz <m@riolenz.de>
# Copyright: (c) 2018, CrySyS Lab <www.crysys.hu>
# Copyright: (c) 2018, Peter Gyorgy <gyorgy.peter@edu.bme.hu>
# Creating the new port policy
# Ports that are in mirror sessions
# If a port is promiscuous set disable it, and add it to the array to enable it after the changes are made.
# Return the promiscuous ports array, to set them back after the config is finished.
# Revert the delete
# Delete Mirroring Session
# Session
# Set back the promiscuous ports
# Locate the ports, we want to use
# Now we can create the VspanSession
# Finally we can set the destination port to promiscuous mode
# Set Back the Promiscuous ports
# Copyright: (c) 2017, Stéphane Travassac <stravassac@gmail.com>
# other exceptions
# Copyright: (c) 2015, Joseph Callen <jcallen () csc.com>
# HA
# Copyright (c) 2020, Abhijeet Kasurde <akasurde@redhat.com>
# Copyright: (c) 2023, Pure Storage, Inc.
# Copyright: (c) 2018, Christian Kotte <christian.kotte@gmx.de>
# MTU sanity check
# TODO: add port mirroring
# Name
# MTU
# Discovery Protocol type and operation
# Administrator contact
# Description
# Uplinks
# Create DVS
# Find new DVS
# Use the same version in the new spec; The version will be increased by one by the API automatically
# Set multicast filtering mode
# Set default network policy
# Set NetFlow config
# Set Health Check config
# Check MTU
# Check Discovery Protocol type and operation
# Check Multicast filtering mode
# Check administrator contact
# Check description
# need to use empty string; will be set to None by API
# Check uplinks
# just replace the uplink array if uplinks need to be added
# just replace the uplink array if uplinks need to be removed
# No uplink name check; uplink names can't be changed easily if they are used by a portgroup
# Check Health Check
# Check Network Policy
# Check switch version
# Check NetFlow Config
# Copyright: (c) 2023, Hewlett Packard Enterprise Development LP
# Don't do anything if IPv6 is already enabled
# Enable IPv6
# Don't do anything if IPv6 is already disabled
# Disable IPv6
# Copyright: (c) 2018, Diane Wang <dianew@vmware.com>
# useAutoDetect set to False means display number and video memory config can be changed
# the useAutoDetect value does not control the 3D config
# if 3D is enabled, then 3D memory and renderer method can be set
# Copyright: (c) 2017, Wei Gao <gaowei3@qq.com>
# Check if category and tag combination exists as per user request
# User specified category
# User specified only tag
# Tags that need to be attached
# Tags that need to be detached
# The custom attribute should be either global (managedObjectType == None) or host specific
# Check if the virtual machine exists before continuing
# host already exists
# host does not exist
# Copyright: (c) 2018, Jose Angel Munoz <josea.munoz () gmail.com>
# FindByInventoryPath() does not require an absolute path
# so we should leave the input folder path unmodified
# Check if the VM exists before continuing
# VM exists
# Copyright: (c) 2020, Abhijeet Kasurde <akasurde@redhat.com>
# Gather information about all vSwitches and Physical NICs
# vSwitch contains all PNICs as string in format of 'key-vim.host.PhysicalNic-vmnic0'
# Check if pnic does not exist
# Check if pnic is already part of some other vSwitch
# Check Security Policy
# Check Teaming Policy
# Check Traffic Shaping Policy
# Check Number of Ports
# Check nics
# Update teaming if not configured specifically
# Remove missing nics from policy
# Set new nics as active
# Check teaming policy
# Check teaming notify switches
# Check failback
# Check teaming failover order
# Check teaming failure detection
# Check if traffic shaping needs to be disabled
# Associate the scsiLun key with the canonicalName (NAA)
# Associate target number with LUN uuid
# Copyright: (c) 2017, Tim Rightnour <thegarbledone@gmail.com>
# Prepare new NTP server list
# build verbose message
# add/remove NTP servers
# overwrite NTP servers
# get differences
# Copyright: (c) 2018, Karsten Kaj Jakobsen <kj@patientsky.com>
# Throw error if cluster does not exist
# get group
# Set result here. If nothing is to be updated, result is already set
# Get host data
# Allow for group check even if dry run
# No group found
# By casting lists as a set, you remove duplicates and order doesn't count. Comparing sets is also much faster and more efficient than comparing lists.
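The set-based comparison the comment describes, as a minimal sketch (the helper name is hypothetical):

```python
def membership_changed(current, desired):
    # Casting to set removes duplicates and makes order irrelevant;
    # set comparison is also faster than an element-by-element check.
    return set(current) != set(desired)
```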
# Check if anything has changed when editing
# Modify existing hosts
# Set new result since something changed
# Modify existing VMs
# Check if dry run
# Check if group is a host group
# Create instance of VmwareDrsGroupMemberManager
# Set results
# Copyright: (c) 2023, Mario Lenz <m@riolenz.de>
# Copyright: (c) 2021, Mario Lenz <m@riolenz.de>
# The `checkpointFtSupported` and `checkpointFtCompatibilityIssues` properties have been removed from pyvmomi 7.0.
# The parameters can be substituted as follows.
# So add `checkpointFtSupported` and `checkpointFtCompatibilityIssues` keys for compatibility with previous versions.
# https://github.com/ansible-collections/vmware/pull/118
# Copyright: (c) 2018, Mike Klebolt  <michael.klebolt@centurylink.com>
# Exit if VMware tools is already up to date
# Fail if VM is not powered on
# Fail if VMware tools is either not installed or not running
# Fail if VMware tools are unmanaged
# If vmware tools is out of date, check major OS family
# Upgrade tools on Linux and Windows guests
# VM already exists
# Get previous boot disk name when boot_hdd_name is set
# Copyright: (c) 2017, Abhijeet Kasurde <akasurde@redhat.com>
# VM already exists, so set power state
# Check if a virtual machine is locked by a question
# Wait until a virtual machine is unlocked
# Copyright: (c) 2020, sky-joker
# Copyright: (c) 2023, Lionel Sutcliffe <sutcliffe.lionel@gmail.com>
# Match with vmware_guest parameter
# Check if RDM first as changing backing later on will erase some settings like disk_mode
# Perform actual VM reconfiguration
# Set vm object
# Sanitize user input
# Deal with controller
# check if disk controller is in the new adding queue
# check if disk controller already exists
# create disk controller when not found and disk state is present
# Create new controller
# Deal with Disks
# Deal with iolimit. Note that if iolimit is set, you HAVE TO both set limit and shares,
# 'low', 'normal' and 'high' values in disk['iolimit']['shares']['level'] are converted to int values on vcenter side
# set the operation to edit so that it knows to keep other settings
# If this is an RDM ignore disk size
# Disk already exists, deleting
# Add new disk
# get Storage DRS recommended datastore from the datastore cluster
# Since RDMs can be shared between two machines cluster_disk with rdm will
# invoke a copy of the existing disk instead of trying to create a new one which causes
# file lock issues in vSphere. This ensures we don't add a "create" operation.
# Set backing filename when datastore is configured and not the same as VM datastore
# If datastore is not configured or backing filename is not set, default is VM datastore
# Adding multiple disks in a single attempt raises weird errors
# So adding single disk at a time.
# Initialize default value for disk
# Type of Disk
# Check state
# Check controller type
# Check controller bus number
# the Paravirtual SCSI Controller supports up to 64 disks in vSphere 6.7. Using hardware
# version 14 or higher from the vm config should catch this appropriately.
# By default destroy file from datastore if 'destroy' parameter is not provided
# Select datastore or datastore cluster
# Check if given value is datastore or datastore cluster
# If user specified datastore cluster, keep track of that for determining datastore later
# Find datastore which fits requirement
# If datastore field is provided, filter destination datastores
# size, size_tb, size_gb, size_mb, size_kb
# We found float value in string, let's typecast it
# We found int value in string, let's typecast it
# Even multiple size_ parameter provided by user,
# consider first value only
# No size found for the disk, fail. Excepting RDMs, because the cluster_disk will need a filename.
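The `size_tb`/`size_gb`/`size_mb`/`size_kb` handling described above, including the string typecasts, could be sketched like this. `get_disk_size_kb`, the return unit of KB, and the parameter precedence are assumptions; the bare `size` parameter with an embedded unit suffix is omitted for brevity:

```python
def get_disk_size_kb(disk):
    # Even if multiple size_* parameters are provided by the user,
    # consider the first matching one only.
    units = {"tb": 1024 ** 3, "gb": 1024 ** 2, "mb": 1024, "kb": 1}
    for suffix, factor in units.items():
        key = "size_%s" % suffix
        if key in disk:
            value = disk[key]
            # We may find a float or int value in a string; typecast it
            value = float(value) if "." in str(value) else int(value)
            return int(value * factor)
    return None  # no size found
```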
# Mode of Disk
# Sharing mode of disk
# Deal with RDM disk needs. RDMS require some different values compared to Virtual Disks
# RDMs need a path
# Enable Physical or virtual SCSI Bus Sharing
# Check if Datastore Cluster provided by user is SDRS ready
# We can get a storage recommendation only if SDRS is enabled on the given datastore cluster
# There was some error, so we fall back to the general workflow
# We are unable to find the virtual machine the user specified
# Bail out
# Based on the desired_state and the current_state call
# the appropriate method from the dictionary
# This should never happen
# This should never happen either
# Backward compatibility
# Copyright: (c) 2018, VMware, Inc.
# Check resource state and apply all required changes
# NIOC is enabled and the correct version, so return the state of the resources
# Copyright: (c) 2015, Dag Wieers (@dagwieers) <dag@wieers.com>
# Due to a software bug in vSphere, it fails to handle ampersand in datacenter names
# The solution is to do what vSphere does (when browsing) and double-encode ampersands, maybe others?
# Implementing check-mode using HEAD is impossible, since size/date is not 100% reliable
# vSphere resets connection if the file is in use and cannot be replaced
# Check if the host is already connected to vCenter
# The host name is unique in vCenter; a host with the same name cannot exist in another datacenter
# However, the module will fail later if the target folder/cluster is in a different datacenter than the host
# Check if the host is connected under the target cluster
# Check if the host is connected under the target folder
# Build the connection spec as well and fetch thumbprint if enabled
# Useful if you reinstalled a host and it uses a new self-signed certificate
# Check parent type
# check 'vim.ClusterComputeResource' first because it's also an
# instance of 'vim.ComputeResource'
# Check if the host is disconnected if reconnect disconnected hosts is true
# Reconnect the host if disconnected or if specified by state
# Move ESXi host from folder to folder
# Move ESXi host from cluster to folder
# Put host in maintenance mode if moved from another cluster
# Copyright: (c) 2019, David Hewitt <davidmhewitt@gmail.com>
# Turn on debug if not specified, but ANSIBLE_DEBUG is set
# Turn on debugging
# Find the datacenter by the given datacenter name
# Find the datastore by the given datastore name
# Find the datastore by the given datastore cluster name
# Find the LibraryItem (Template) by the given LibraryItem name
# Find the folder by the given FQPN folder name
# The FQPN is I(datacenter)/I(folder type)/folder name/... for
# example Lab/vm/someparent/myfolder is a vm folder in the Lab datacenter.
# Find the Host by the given name
# Find the Cluster by the given Cluster name
# Find the resourcepool by the given resourcepool name
# Create VM placement specs
# Wrap AnsibleModule methods
# Copyright: (c) 2017, IBM Corp
# Author(s): Andreas Nafpliotis <nafpliot@de.ibm.com>
# discard the vim-returned hostname if the endpoint is a standalone ESXi host
# manually find the URL if there is a redirect, because urllib2 (per RFC) doesn't do automatic redirects for PUT requests
# Copyright: (c) 2020, Lev Goncharov <lev@goncharov.xyz>
# Copyright: (c) 2021, VMware, Inc. All Rights Reserved
# Since vSphere API 6.5
# kms server reconfigure
# reconfigure existing kms server
# no kms server with specified name
# no kms specified in kms_info, then only update proxy or user info
# Native Key Provider is supported from vSphere 7.0.2
# Find if there is existing Key Provider with the specified name
# Add a new Key Provider or reconfigure the existing Key Provider
# For existing Standard Key Provider, KMS servers can be reconfigured
# Named Key Provider not exist
# Add a Standard Key Provider, KMS name, IP are required
# If this new added key provider is the only key provider, then mark it default
# Remove Key Provider
# Named Key Provider not found
# Copyright: (c) 2019, Diane Wang <dianew@vmware.com>
# e.g., file_path is like this format: [datastore0] test_vm/test_vm-1.png
# from file_path generate URL
# file is downloaded as local_file_name when specified, or use original file name
# Copyright: (c) 2015-16, Ritesh Khadgaray <khadgaray () gmail.com>
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/execute_program_in_vm.py
# Copyright: (c) 2023, Valentin Yonev <valentin.ionev@live.com>
# Copyright: (c) 2019, Anusha Hegde <anushah@vmware.com>
# a change was applied meaning at least one task succeeded
# create serial config spec for adding, editing, removing
# modify existing serial port
# remove serial port
# if serial port is None
# create a new serial port
# configure create tasks first
# each type of serial port uses a config_spec.device of object type vim.vm.device.VirtualSerialPort();
# because serial ports differ in their backing types and config_spec.device has to be unique,
# we create a new spec for every port-creation configuration
# since no_rx_loss is an optional argument, check if the key is present
# network backing type
# named pipe backing type
# physical serial device backing type
# file backing type
# if there is a backing of only one type, the user need not provide secondary details like service_uri, pipe_name, device_name or file_path
# we will match the serial port with backing type only
# in this case, the last matching serial port will be returned
# We are unable to find the virtual machine user specified
# based on code vmware_deploy_ovf from Matt Martz <matt@sivel.net>
# Copyright: (c) 2023, Alexander Nikitin <alexander@ihumster.ru>
# Perform a real range test instead of checking 'accept-ranges'
# Partial Content
# A slightly more accurate percentage
# Non-VMDK
# VMDK
# Get datacenter firstly
# Get cluster in datacenter if cluster configured
# Or get ESXi host in datacenter if ESXi host configured
# For more than one datacenter env, specify 'folder' to datacenter hostFolder
# If we have the same network name defined in multiple clusters, check all networks to get the right one
# Search for the network key of the same network name, that resides in a cluster parameter
# Check whether ovf/ova file exists
# Upload from url
# Copyright: (c) 2019, VMware Inc.
# There should always be only a single uplink port group on
# a distributed virtual switch
# operation should be edit, add and remove
# If defined, use vmnics as non-LAG uplinks
# Otherwise keep current non-LAG uplinks
# If defined, use lag_uplinks as LAG uplinks
# Otherwise keep current LAG uplinks
# We still need the HostSystem object to add the host
# to the distributed vswitch
# Skip checking uplinks if the host should be absent, anyway
# results = dict(changed=False, result=dict())
# Get previous config
# Build factory default config
# Looks like this value is causing the reset
# Check port
# Check read-only community strings
# Doesn't work. Need to reset config instead
# snmp_config_spec.readOnlyCommunities = []
# Check trap targets
# Loop through desired targets
# Build destination and add to temp target list
# Loop through existing targets to find targets that need to be deleted
# Configure trap targets if something has changed
# snmp_config_spec.trapTargets = []
# Check options
# options = []
# snmp_config_spec.option = options
# Check if there was a change before
# Copyright: (c) 2015, VMware, Inc.
# Copyright: (c) 2018, Derek Rushing <derek.rushing@geekops.com>
# find_obj doesn't include rootFolder
# HID usage tables https://www.usb.org/sites/default/files/documents/hut1_12v2.pdf
# define valid characters and keys value, hex_code, key value and key modifier
# rightShift, rightControl, rightAlt, leftGui, rightGui are not used
# Sleep in between key / string send event
# Copyright: (c) 2018, Michael Tipton <mike () ibeta.org>
# Copyright: (c) 2015, Bede Carroll <bc+github () bedecarroll.com>
# Get Datacenter if specified by user
# Get Destination Host System or Cluster if specified by user
# Get Destination Datastore or Datastore Cluster if specified by user
# unable to use find_datastore_cluster_by_name module
# At least one of datastore, datastore cluster, host system or cluster is required to migrate
# Check if datastore is required, this check is required if destination
# and source host system does not share same datastore.
# Check for changes
# We have both host system and datastore object
# Datastore is not accessible
# Datastore is not associated with host system
# VM is already located on same host
# VM is already located on this cluster
# VM is already located on same datastore
# VM is already located on a datastore in the datastore cluster
# Get Destination resourcepool
# Migrate VM and get Task object back
# Wait for task to complete
# If task was a success the VM has moved, update running_host and complete module
# The storage layout is not automatically refreshed, so we trigger it to get coherent module return values
# Filter out evaluation license key
# FIXME: This does not seem to work on vCenter v6.0
# if cluster_name parameter is provided then search the cluster object in vcenter
# if esxi_hostname parameter is provided then search the esxi object in vcenter
# e.g., key.editionKey is "esx.enterprisePlus.cpuPackage", not sure all keys are in this format
# backward compatibility - check if it is a vCenter license key
# if we have found a cluster, an ESXi host or a vCenter object, we try to assign the license
# Check if key is in use
# Copyright: (c) 2020, Dustin Scott <sdustin@vmware.com>
# MOB METHODS
# These will generate the individual items with the following expected structure (see
# https://github.com/vmware/pyvmomi/blob/master/pyVmomi/PbmObjects.py):
# PbmProfile: array
# ensure if the category exists
# ensure if the tag exists
# loop through and update the first match
# if we didn't exit by now create the profile
# loop through and delete the first match
# if we didn't exit by now exit without changing anything
# This is a helper class to sort the changes in a valid order
# "Greater than" means a change has to happen after another one.
# As an example, let's say self is daily (key == 1) and other is weekly (key == 2)
# You cannot disable daily if weekly is enabled, so later
# Enabling daily is OK if weekly is disabled
# Otherwise, decreasing the daily level below the current weekly level has to be done later
# Check if level options are valid
# Check if state options are valid
# Check statistics
# Statistics for past day
# Statistics for past week
# Statistics for past month
# Statistics for past year
# if a storage policy set tag base placement rules, the tags are set into the value.
# https://github.com/ansible-collections/community.vmware/issues/742
# Copyright: (c) 2019, Michael Tipton <mike () ibeta.org>
# No rule found
# Check if anything has changed
# Check if already rule exists
# This needs to be set in order to edit an existing rule
# Create instance of VmwareDrsGroupManager
# VSAN
# Check if the folder already exists
# Check if folder exists under parent folder
# Create a new folder
# To be consistent with the other vmware modules, we decided to accept this error
# and the playbook should simply carry on with other tasks.
# User will have to take care of this exception
# https://github.com/ansible/ansible/issues/35388#issuecomment-362283078
# Copyright: (c) 2022, Swisscom (Schweiz) AG
# Author(s): Olivia Luetolf <olivia.luetolf@swisscom.com>
# DRS
# Some sanity checks
# Basic config
# Default port config
# Check that private VLAN exists in dvs
# Teaming Policy
# PG policy (advanced_policy)
# NetFlow
# Ingress traffic shaping
# enabled
# average bandwidth
# burst size
# peak bandwidth
# Egress traffic shaping
# PG Type
# Check config
# Check port allocation
# Check the existing autoStart setting difference.
# Check the existing autoStart powerInfo setting difference for VM.
# Copyright: Abhijeet Kasurde <akasurde@redhat.com>
# NOTE: the following code is deprecated from 2.11 onwards
# Add to existing privileges
# Set given privileges
# Add system-defined privileges, "System.Anonymous", "System.View", and "System.Read".
# Remove given privileges from existing privileges
# Don't do anything if Hyperthreading is already enabled
# L1 Terminal Fault (L1TF)/Foreshadow mitigation workaround (https://kb.vmware.com/s/article/55806)
# Enable Hyperthreading
# Check if Hyperthreading is available
# This should never happen since Hyperthreading is available
# Don't do anything if Hyperthreading is already disabled
# Disable Hyperthreading
# Get list of currently allowed Datastore names
# Determine which Datastores to add to and remove from the allowed list
# vCLS
# Hosts
# keep track of changes as we go
# save variables here for comparison later and change tracking
# also covers cases where inputs may be null
# compare what is configured on the firewall rule with what the playbook provides
# apply everything here in one function call
# setup spec
# Check support mode
# Check LAGs
# Check if desired LAGs are configured
# Check if LAGs need to be removed
# NOTE: You need to run the task again to change the support mode to 'basic' as well
# No matter how long you sleep, you will always get the following error in vCenter:
# 'Cannot complete operation due to concurrent modification by another operation.'
# self.update_dvs_config(self.dvs, spec)
# greyed out in vSphere Client!?
# lacp_spec.vlan = vim.dvs.VmwareDistributedVirtualSwitch.LagVlanConfig()
# lacp_spec.vlan.vlanId = [vim.NumericRange(...)]
# lacp_spec.ipfix = vim.dvs.VmwareDistributedVirtualSwitch.LagIpfixConfig()
# lacp_spec.ipfix.ipfixEnabled = True/False
# Copyright: (c) 2021, sky-joker
# Build the diff_config variable to check the difference between the new and existing config.
# The host name is unique in vCenter, so search for the host across the whole inventory.
# Warn about an item if it is specified but not supported by ESXi.
# Build the return value for the result.
# Track all existing library names, to block update/delete if duplicates exist
# Import objects of both types to prevent duplicate names
# Set library type for create/update actions
# Fail if no datastore is specified
# Build the storage backing for the library to be created
# Build the specification for the library to be created
# Build subscribed specification
# Ensure library types are consistent
# Compare changeable subscribed attributes
# Setup library service based on existing object type to allow library_type to unspecified
# Copyright: (C) 2020, Viktor Tsymbalyuk
# Copyright: (C) 2020, Ansible Project
# fill in a default value if the user didn't provide one
# looks only for refresh info
# TODO: need a method to know whether the host has updated its info
# Copyright, (c) 2018, Ansible Project
# Copyright, (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# Initialize the variables.
# https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#diff
# result_fields is the variable for the return value after the job finishes.
# update_custom_attributes is the variable for storing the custom attributes to update.
# The changed variable is the flag indicating whether the target changed.
# https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#changed
# If update_custom_attributes variable has elements, add or update the custom attributes and values.
# Set result_fields for the return value.
# All custom attribute values will be set to blank to remove them.
# If update_custom_attributes variable has elements, remove the custom attribute values.
# Gather the available existing custom attributes based on user_fields
# vmware_guest_custom_attributes must work with custom attributes scoped to the VM's moref type or with global custom attributes
# Gather the values of the custom attributes that are set.
# When a custom attribute is added as a new one, it has no value key.
# Add the value key to avoid unintended behavior in the difference check.
# Select the custom attribute and value to update the configuration.
# Add the custom attribute as a new one if the state is present and existing_custom_attribute doesn't contain the custom attribute name.
# If the custom attribute exists and needs updating, changed is set to True.
# Add custom_attributes key for the difference between before and after configuration to check.
# virtual machine already exists
# virtual machine does not exist
# Security info
# Traffic Shaping info
# Teaming and failover info
# Default role: No access
# Default role: Read-only
# Default role: Administrator
# Custom roles
# contains type as string in format of 'key-vim.host.FibreChannelHba-vmhba1'
# Check if primary PVLANs are unique
# Check if secondary PVLANs are unique
# Check if secondary PVLANs are already used as primary PVLANs
# Check if a primary PVLAN is present for every secondary PVLAN
# Check Private VLANs
# Check if desired PVLANs are configured
# Check if a PVLAN needs to be removed
# the first secondary VLAN's type is always promiscuous
# Remove PVLAN configuration if present
# set read device content chunk size to 2 MB
# set lease progress update interval to 15 seconds
# get http nfc lease firstly
# create a thread to track file download progress
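The progress-tracking thread mentioned above can be sketched with the standard library; the shared byte counter, the polling interval, and the simulated downloader are assumptions for illustration, not the module's actual implementation:

```python
import threading
import time

def track_progress(get_done, total, stop_event, interval=0.005):
    """Poll a byte counter until stop_event is set; return the percentages seen."""
    seen = []
    while not stop_event.is_set():
        seen.append(int(get_done() * 100 / total))
        stop_event.wait(interval)
    seen.append(int(get_done() * 100 / total))  # final reading after the download ends
    return seen

# Simulated download: the main thread plays the downloader, the tracker polls it.
state = {"done": 0}
stop = threading.Event()
result = {}
tracker = threading.Thread(
    target=lambda: result.update(pcts=track_progress(lambda: state["done"], 100, stop))
)
tracker.start()
for _ in range(10):          # "download" 100 bytes in 10-byte chunks
    state["done"] += 10
    time.sleep(0.002)
stop.set()
tracker.join()
```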
# total storage space occupied by the virtual machine across all datastores
# new deployed VM with no OS installed
# device files are named disk-0.iso, disk-1.vmdk, disk-2.vmdk; replace 'disk' with the VM name
# if export from ESXi host, replace * with hostname in url
# e.g., https://*/ha-nfc/5289bf27-da99-7c0e-3978-8853555deb8c/disk-1.vmdk
# generate ovf file
# self.facts = http_nfc_lease.HttpNfcLeaseGetManifest()
# get ticket
# Copyright: (c) 2016, IBM Corp
# portgroup names are unique; there can be only one portgroup with the same name per host
# Check VLAN ID
# Check security settings
# Check traffic shaping
# Check teaming
# this option is called 'failback' in the vSphere Client
# rollingOrder also uses the opposite value displayed in the client
# Only configure security policy if an option is defined
# Only configure teaming policy if an option is defined
# The following properties are deprecated since VI API 5.1. Default values are used
# With Namespace
# Without Namespace
# Copyright: (c) 2019-2020, Naveenkumar G P <ngp@vmware.com>
# TO-DO: move to vmware_rest_client
# Check host name
# Check domain
# Check DNS server(s)
# Copyright: (c) 2017, Davis Phillips davis.phillips@gmail.com
# for backwards-compat
# check the difference between the existing config and the new config
# Copyright: (c) 2021, VMware, Inc. All Rights Reserved.
# Get the datacenter object
# Get the cluster object
# If host is given, get the cluster object using the host
# Define the environment browser object the ComputeResource presents
# Get supported hardware versions list
# Get supported guest ID list
# Author(s): Eugenio Grosso, <eugenio.grosso@purestorage.com>
# Initialize connection to SMS manager
# This second step is required to register self-signed certs,
# since the previous task returns the certificate back waiting
# for confirmation
# Set the following variables from parameters in LinuxPrep or SysPrep
# Spec
# Identity
# global IP Settings
# NIC setting map
# Copyright (C) 2018 James E. King III (@jeking3) <jking@apache.org>
# Runtime settings
# User directory
# Mail
# SNMP receivers - SNMP receiver #1
# SNMP receivers - SNMP receiver #2
# SNMP receivers - SNMP receiver #3
# SNMP receivers - SNMP receiver #4
# Timeout settings
# Logging settings
# Copyright: (c) 2019, VMware, Inc. All Rights Reserved.
# NVMe controllers are supported starting from hardware version 13
# create new USB controller, bus number is 0
# create other disk controller
# Copyright: (c) 2015, Russell Teague <rteague2 () csc.com>
# Find attached datastore at host.
# Check whether the datastore has free capacity to expand.
# NFS v3
# NFS v4.1
# remoteHost needs to be set to a non-empty string, but the value is not used
# Copyright: (c) 2019, OVH SAS
# Sanity check for cluster
# Sanity check for virtual machines
# Get list of VMs only if state is present
# Getter
# Create
# Add Rule
# Delete Rule
# Generate filter spec.
# The list method will be used to get objects.
# The list method can get a maximum of 1,000 objects.
# If VCSA has more than 1,000 objects with the specified object_type, the following error will occur:
# Error: Too many virtual machines. Add more filter criteria to reduce the number.
# To resolve the error, the list method should use the filter spec to keep results below 1,000 objects.
# Make a filter for moid if you specify object_moid.
# The moid is a unique id, so get one object if target moid object exists in the vSphere environment.
# If you use the object_name parameter, objects will be filtered by name.
# Get an object for changing the object name.
# Check whether an object with the name object_new_name already exists.
# Check whether the object name is already the same as object_new_name.
# Object with same name already exists
# Do not populate lists if we are deleting group
# Do not throw error if group does not exist. Simply set changed = False
# Add DRS group
# Delete DRS group
# resource pools come back as a dictionary
# make a copy
# everything else should be a list
# a change was detected and needs to be applied through reconfiguration
# dict of changes made or would-be-made in check mode, updated when change_applied is set
# https://www.vmware.com/support/developer/converter-sdk/conv60_apireference/vim.ManagedEntity.html#destroy
# Delete VM from Inventory
# Delete VM from Disk
# guest_id is not required when using templates
# guest_id is only mandatory on VM creation
# set cpu/memory/etc
# check VM power state and cpu hot-add/hot-remove state before re-config VM
# Allow VM to be powered on during this check when in check mode, when no changes will actually be made
# num_cpu is mandatory for VM creation
# check VM power state and memory hotadd state before re-config VM
# memory_mb is mandatory for VM creation
# boot firmware re-config can cause boot issue
# set CDROM type to 'client' by default
# Configure the VM CD-ROM
# Changing CD-ROM settings on a template is not supported
# get existing CDROM devices
# get existing IDE and SATA controllers
# if not found, create a new IDE or SATA controller
# create new CD-ROM
# re-configure CD-ROM
# delete CD-ROM
# This check makes sure vm_obj is not of type template
# Hardware version is denoted as "vmx-10"
# VM exists and we need to update the hardware version
# Only perform the upgrade if not in check mode.
# Label is required when removing a device
# Reconfigure device requires VM in power off state
# Get existing NVDIMM controller
# New VM or existing VM without label specified, add new NVDIMM device
# Get default PMem storage policy when host is vCenter
# Clean up user data here
# Type is an optional parameter; if the user provided an IP or subnet, assume
# the network type is 'static'
# User wants network type as 'dhcp'
# Ignore empty networks, this permits to keep networks when deploying a template/cloning a VM
# List current device for Clone or Idempotency
# We are editing existing network devices; this happens when we
# are cloning from a VM or template
# Changing mac address has no effect when editing interface
# Default device type is vmxnet3, VMware best practice
# VDS switch
# TODO: (akasurde) There is no way to find association between resource pool and distributed virtual portgroup
# For now, check if we are able to find distributed virtual switch
# If the user specifies a distributed port group without associating it with the host system on which
# the virtual machine is going to be deployed, then we get an error. We can infer that there is no
# association between the given distributed port group and the host system.
# NSX-T Logical Switch
# vSwitch
# Change to fix the issue found while configuring opaque network
# VMs cloned from a template with opaque network will get disconnected
# Replacing deprecated config parameter with relocation Spec
# Sets the values in property_info
# each property must have a unique key
# init key counter with max value + 1
# this is 'edit' branch
# operation is not an info object property
# if set to anything other than 'remove' we don't fail
# Updating attributes only if needed
# attempt to delete non-existent property
# this is add new property branch
# Configure the values in property_value
# New VM
# If kv is not kv fetched from facts, change it
# User specified customization specification
# Network settings
# On Windows, DNS domain and DNS servers can be set by network interface
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.IPSettings.html
# Global DNS settings
# TODO: Maybe list the different domains from the interfaces here by default ?
# For windows guest OS, use SysPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.Sysprep.html#field_detail
# Setting hostName, orgName and fullName is mandatory, so we set some default when missing
# computer name will be truncated to 15 characters if using VM name
# Check if the timezone value is an int before proceeding.
# Add new spec param for vSphere 8.0U2
# FIXME: We have no clue whether this non-Windows OS is actually Linux, hence it might fail!
# For Linux guest OS, use LinuxPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.LinuxPrep.html
# TODO: Maybe add domain from interface if missing ?
# Remove all characters except alphanumeric and minus which is allowed by RFC 952
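The RFC 952 clean-up described above amounts to a small regex substitution; a minimal sketch (the 63-character cap is an assumption based on the common hostname label limit, not necessarily what the module enforces):

```python
import re

def sanitize_hostname(name, max_len=63):
    """Keep only alphanumerics and '-' (RFC 952), trim stray '-', cap the length."""
    cleaned = re.sub(r"[^a-zA-Z0-9-]", "", name)
    return cleaned.strip("-")[:max_len]
```

For example, `sanitize_hostname("my_host.example!")` drops the underscore, dot, and exclamation mark.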
# List of supported time zones for different vSphere versions in Linux/Unix systems
# https://kb.vmware.com/s/article/2145518
# If vm_obj doesn't exist there is no SCSI controller to find
# what size is it?
# A disk was specified but no size found, fail
# If this is a new disk, or the disk file names are different
# default is persistent for new deployed VM
# get existing specified disk controller and attached disks
# check if scsi controller key already used
# create a new disk controller if one doesn't exist
# find the specified disk in the attached disk list
# if the disk is found, reconfigure it
# if there is no disk or the specified one doesn't exist, create a new disk
# Only update the configspec that will be applied in reconfigure_vm if something actually changed
# Ignore empty disk list, this permits to keep disks when deploying a template/cloning a VM
# if one of the 'controller_type', 'controller_number', 'unit_number' parameters is set in any disk's configuration,
# configure_multiple_controllers_disks() will be called
# mixing the old SCSI disk configuration with the new multiple-controller-type disk configuration is not supported
# do single controller type disks configuration
# Create scsi controller only if we are deploying a new VM, not a template or reconfiguring
# If we are manipulating an existing object which has disks and disk_index is in disks
# increment index for next disk search
# index 7 is reserved to SCSI controller
# is it thin?
# Only create virtual device if not backed by vmdk in original template
# which datastore?
# TODO: This is already handled by the relocation spec,
# but it needs to eventually be handled for all the
# other disks defined
# VMware doesn't allow to reduce disk sizes
# TODO: really use the datastore for newly created disks
# Check if user has provided datastore cluster first
# If the user specified a datastore cluster, get the recommended datastore
# Check if get_recommended_datastore or user specified datastore exists or not
# use the template's existing DS
# validation
# Check if we have reached till root folder
# split the searchpath so we can iterate through it
# recursive walk while looking for next element in searchpath
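The walk over the searchpath elements can be sketched against a toy folder tree; real code would traverse vSphere folder objects, so the nested dicts below are stand-ins for illustration only:

```python
def find_by_path(root, searchpath):
    """Follow the elements of searchpath down a folder tree.

    `root` models a folder tree as nested dicts: {"children": {name: subtree}}.
    Returns the node at the end of the path, or None if an element is missing.
    """
    node = root
    for part in [p for p in searchpath.split("/") if p]:
        children = node.get("children", {})
        if part not in children:
            return None  # next element of the searchpath not found
        node = children[part]
    return node

# A tiny stand-in tree: /DC0/vm/prod
tree = {"children": {"DC0": {"children": {"vm": {"children": {
    "prod": {"name": "prod", "children": {}}}}}}}}
```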
# get the datacenter object
# if cluster is given, get the cluster object
# if host is given, get the cluster object using the host
# get resource pools limiting search to cluster or datacenter
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/clone_vm.py
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.CloneSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.ConfigSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# FIXME:
# Prepend / if it was missing from the folder path, also strip trailing slashes
# Nested folder does not have trailing /
# Check for full path first in case it was already supplied
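The folder-path normalization described above (prepend a missing '/', strip trailing slashes) can be sketched as:

```python
def normalize_folder_path(folder):
    """Ensure a single leading '/' and no trailing '/' (nested folders carry none)."""
    if not folder.startswith("/"):
        folder = "/" + folder
    return folder.rstrip("/") or "/"
```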
# abort if no strategy was successful
# Add some debugging values in failure.
# always get a resource_pool
# set the destination datastore for VM & disks
# Give precedence to datastore value provided by user
# User may want to deploy VM to specific datastore.
# create the relocation spec
# Find if we need network customizations (find keys in dictionary that requires customizations)
# We don't need customizations for these keys
# Only select specific host when ESXi hostname is provided
# Convert disks present in the template if set
# > pool: For a clone operation from a template to a virtual machine, this argument is required.
# ConfigSpec require name for VM creation
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021361
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2173
# provide these to the user for debugging
# set annotation
# Only send VMware task if we see a modification
# Rename VM
# Mark VM as Template
# Mark Template as VM
# add customize existing VM after VM re-configure
# TODO not sure if it is possible to query the current customspec to compare against the one being provided to check in check mode.
# Maybe by breaking down the individual fields and querying, but it needs more research.
# For now, assume changed...
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.Task.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.TaskInfo.html
# https://github.com/virtdevninja/pyvmomi-community-samples/blob/master/samples/tools/tasks.py
# Check requirements for virtualization based security
# destroy it
# has to be poweredoff first
# Note that check_mode is handled inside reconfigure_vm
# Identify if the power state would have changed if not in check mode
# set powerstate
# This should not happen
# VM doesn't exist
# Copyright: (c) 2023, Alexander Nikitin (ihumster@ihumster.ru)
# add a dynamic target to an iSCSI configuration
# add a static target to an iSCSI configuration
# update a CHAP authentication of a dynamic target in an iSCSI configuration
# update a CHAP authentication of a static target in an iSCSI configuration
# update an iqn in an iSCSI configuration
# update an alias in an iSCSI configuration
# update a CHAP authentication an iSCSI configuration
# add port binds in an iSCSI configuration
# remove a dynamic target to an iSCSI configuration
# remove a static target to an iSCSI configuration
# remove port binds from an iSCSI configuration
# base authentication parameters.
# Look for the specified ESXi host in a cluster if both a cluster and an ESXi host are specified.
# The keys used to check whether PCI devices exist.
# De-duplicate PCI device data and sort.
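The de-duplicate-and-sort step can be sketched as follows; `device_id` is a hypothetical key name for illustration, not necessarily the one the module uses:

```python
def dedupe_and_sort_pci(devices, key="device_id"):
    """Drop duplicate device entries (last one wins) and sort by the given key."""
    unique = {d[key]: d for d in devices}
    return sorted(unique.values(), key=lambda d: d[key])

devs = [
    {"device_id": "0000:0b:00.0"},
    {"device_id": "0000:04:00.0"},
    {"device_id": "0000:0b:00.0"},  # duplicate
]
out = dedupe_and_sort_pci(devs)
```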
# ESXi host configuration will be included in the result if it changes.
# Check if the snapshot specified by the user is already present as the latest snapshot
# Snapshot already exists, do nothing.
# Check if Virtual Machine provides capabilities for Quiesce and Memory Snapshots
# Remove subtree depending upon the user input
# Copyright: (c) 2019, Aaron Longchamps, <a.j.longchamps@gmail.com>
# find kernel module options for a given kmod_name. If the name is not right, this will throw an exception
# configure the provided kernel module with the specified options
# evaluate our current configuration against desired options and save results
# keep track of original options on the kernel module
# apply as needed, also depending on check mode
# add the arguments we're going to use for this module
# make sure we have a valid target cluster_name or esxi_hostname (not both)
# and also enable check mode
# Copyright: (c) 2021, Anant Chopra <chopraan@vmware.com>
# to check if vm has been cloned in the destination vc
# query for the vm in destination vc
# get the host and datastore info
# clone the vm on VC
# Make an object for authentication in a guest OS
# Make a spec object to customize Guest OS
# Make an identity object to do linux prep
# The specified params below take effect after the OS reboots
# A reboot is required to apply the customization parameters to the instant clone VM.
# Wait until VMware Tools has started after rebooting.
# connect to host/VC
# datacenter check
# datastore check
# populate relocate spec
# populate Instant clone spec
# General NIC capabilities
# DirectPath I/O and SR-IOV capabilities and configuration
# Workaround for "AttributeError: 'NoneType' object has no attribute 'nicDevice'"
# this issue doesn't happen every time; vswitch.spec.bridge.nicDevice exists!
# get current power policy
# the "name" and "description" parameters are pretty useless
# they store only strings containing "PowerPolicy.<shortName>.name" and "PowerPolicy.<shortName>.description"
# Don't do anything if the power policy is already configured
# get available power policies and check if policy is included
# If UUID is set, get_vm selects by UUID; show the error message accordingly.
# Check name
# Check port policies
# There's no information available on whether the following options are deprecated, but
# they aren't visible in the vSphere Client
# Check policies
# Check VLAN trunk
# Check if range is already configured
# Check if range needs to be removed
# Check LACP
# Check NetFlow
# TODO: Check Traffic filtering and marking
# Check Block all ports
# Copyright: (c) 2018, Fedor Vompe <f.vompe () comptek.ru>
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/getallvms.py
# Kept for backward compatibility
# Copyright, (c) 2022, Mario Lenz <m@riolenz.de>
# Check if the file/directory exists
# NOTE: Create a file in a non-existing directory, then remove the file
# Create a temporary file in the new directory
# Copyright: (c) 2021, Tyler Gates <tgates81@gmail.com>
# Special thanks to:
# if controller was found check disk
# Connect into vcenter and get the profile manager for the VM.
# VM HOME
# Existing policy is different than requested. Set, wait for
# task success, and exit.
# will raise an Exception on failure
# DISKS
# Check the requested disks[] information is sane or fail by looking up
# and storing the object(s) in a new dict.
# {unit_number: {disk: <obj>, policy: <obj>}}
# All requested profiles are valid. Iterate through each disk and set the policy.
# Check our results and exit.
# We handle all supported types here so we can give meaningful errors.
# Don't silently drop unknown options. This prevents typos from falling through the cracks.
# Copyright: (c) 2022, sky-joker
# Search the specified user
# To update a user password, override_user_passwd must be true.
# Copyright (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# Delete datastore cluster
# Create datastore cluster
# Don't do anything if already enabled and joined
# Joined and no problems with the domain membership
# Joined, but problems with the domain membership
# Enable and join AD domain
# Don't do anything if not joined to any AD domain
# Disable and leave AD domain
# common items of nic parameters
# If NIC is a SR-IOV adapter
# If a distributed port group specified
# If an NSX-T port group specified
# If a port group specified
# VirtualVmxnet3Vrdma extends VirtualVmxnet3
# TODO: make checks below less inelegant
# fabricate diff for check_mode
# fabricate diff/returns for checkmode
# Copyright: (c) 2024, Nina Loser <nina.loser@muenchen.de>
# Check current DRS settings
# Create DRS VM config spec
# Apply the cluster reconfiguration
# get mount info from the first ESXi host attached to this NFS datastore
# get mount info from the first ESXi host attached to this VMFS datastore
# uncommitted is optional / not always set
# Calculated values
# Copyright: (c) 2024, Fernando Mendieta <fernandomendietaovejero@gmail.com>
# Copyright: (c) 2020, Anusha Hegde <anushah@vmware.com>
# connect to destination VC
# get the power status of the newly cloned vm
# clone the vm/template on destination VC
# Check if vm name already exists in the destination VC
# populate service locator
# If ssl verify is false, we ignore it also in the clone task by fetching thumbprint automatically
# populate clone spec
# runtime default value
# Check all general settings, except statistics
# Initialize diff_config variable
# Advanced settings
# User specified specific dvswitch name to gather information
# default behaviour, gather information about all dvswitches
# Copyright: (c) 2017-18, Ansible Project
# Copyright: (c) 2017-18, Abhijeet Kasurde <akasurde@redhat.com>
# find Port Group
# find VMkernel Adapter
# config change (e.g. DHCP to static, or vice versa); doesn't work with virtual port change
# vDS to vSS or vSS to vSS (static IP)
# check if it's a vSS Port Group
# check if it's a vDS Port Group
# Check IPv4 settings
# Check virtual port (vSS or vDS)
# Check configuration of service types (only if default TCP/IP stack is used)
# do service type configuration
# Other service type
# vmotion and vxlan stay the same
# Copyright: (c) 2019, NAER William Leemans (@bushvin) <willie@elaba.net>
# Copyright: (c) 2017, Philippe Dellaert <philippe@dellaert.org>
# Copyright (c) 2020, Matt Proud
# Collect target lookup for naa devices
# Copyright: (c) 2018, James E. King III (@jeking3) <jking@apache.org>
# Get all objects matching type (and name if given)
# Return first match or None
# Return all matching objects or empty list
# Remove leading/trailing slashes and create list of subfolders
# Process datacenter
# Process folder type
# Process remaining subfolders
# Search By BIOS UUID rather than instance UUID
# get all objects for this path
# Resolve custom values
# Exit the loop immediately, we found it
# Resolve advanced settings
# Gather vTPM information
# timestamp is a datetime.datetime object
# State is not already true
# need to get new metadata if changed
# options is the dict as defined in the module parameters, current_options is
# the list of the currently set options as returned by the vSphere API.
# When truthy_strings_as_bool is True, strings like 'true', 'off' or 'yes'
# are converted to booleans.
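The truthy-string conversion described above can be sketched as follows; the exact set of accepted strings is an assumption for illustration:

```python
_TRUTHY = {"true", "on", "yes"}
_FALSY = {"false", "off", "no"}

def normalize_option(value, truthy_strings_as_bool=True):
    """Convert strings like 'true', 'off' or 'yes' to booleans when enabled."""
    if truthy_strings_as_bool and isinstance(value, str):
        lowered = value.lower()
        if lowered in _TRUTHY:
            return True
        if lowered in _FALSY:
            return False
    return value
```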
# Get Root Folder
# Create Container View with default root folder
# Create Traversal spec
# Create Object Spec
# Virtual Machine related functions
# get_managed_objects_properties may return multiple virtual machines;
# the following code tries to find the user's desired one depending upon the folder specified.
# We have found multiple virtual machines, decide depending upon folder value
# Get folder path where virtual machine is located
# User provided folder where user thinks virtual machine is present
# User defined datacenter
# User defined datacenter's object
# Get Path for Datacenter
# Nested folder does not return trailing /
# User provided blank value or
# User provided only root value, we fail
# User provided nested folder under VMware default vm folder i.e. folder = /vm/india/finance
# User defined datacenter is not nested i.e. dcpath = '/' , or
# User defined datacenter is nested i.e. dcpath = '/F0/DC0' or
# User provided folder starts with / and datacenter i.e. folder = /ha-datacenter/ or
# User defined folder starts with datacenter without '/' i.e.
# folder = DC0/vm/india/finance or
# folder = DC0/vm
# Check if user has provided same path as virtual machine
# Unique virtual machine found.
# climb back up the tree to find our path, stop before the root folder
# We have found multiple virtual machine templates
# Cluster related functions
# Hosts related functions
# Network related functions
# Datastore cluster
# Resource pool
# Resource pool name is different than default 'Resources'
# VMDK stuff
# This method returns tag objects only,
# Please use get_tags_for_dynamic_obj for more object details
# This API returns just names of tags
# Please use get_tags_for_vm for more tag object details
# This is not used for the multiple-controllers-with-multiple-disks scenario,
# disk unit number cannot be None
# self.next_disk_unit_number = 0
# While creating a new SCSI controller, temporary key value
# should be unique negative integers
# While creating a new IDE controller, temporary key value
# Updating an existing CD-ROM
# one SCSI controller can attach disks at unit numbers 0-15 (except 7)
# one SATA controller can attach disks at unit numbers 0-29
# one NVMe controller can attach disks at unit numbers 0-14
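The per-controller limits above can be captured in a small lookup table; `next_free_unit_number` is an illustrative helper, not the module's actual API:

```python
# Valid disk unit numbers per controller type, per the limits noted above.
CONTROLLER_DISK_SLOTS = {
    "scsi": [n for n in range(16) if n != 7],  # unit 7 is reserved for the controller itself
    "sata": list(range(30)),
    "nvme": list(range(15)),
}

def next_free_unit_number(controller_type, used):
    """Return the first free unit number on the controller, or None if it is full."""
    for n in CONTROLLER_DISK_SLOTS[controller_type]:
        if n not in used:
            return n
    return None
```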
# Copyright: (c) 2025, Mario Lenz (@mariolenz) <m@riolenz.de>
# Copyright: (c) 2019, Rémi REY (@rrey)
# define http headers
# Copyright: (c) 2023, flkhndlr (@flkhndlr)
# {{{ Authentication header
# }}}
# Copyright: (c) 2021
# https://grafana.com/docs/grafana/latest/http_api/org/#get-organization-by-name
# https://grafana.com/docs/http_api/org/#create-organization
# https://grafana.com/docs/http_api/org/#delete-organization
# search org by name
# create new org
# delete org
# TODO: only when priority==emergency
# TODO: add sound choices
# Copyright: (c) 2020, Antoine Tanzilli (@Tailzip), Hong Viet Lê (@pomverte), Julien Alexandre (@jual), Marc Cyprien (@LeFameux)
# https://grafana.com/docs/http_api/admin/#global-users
# https://grafana.com/docs/grafana/latest/http_api/user/#get-single-user-by-usernamelogin-or-email
# https://grafana.com/docs/http_api/user/#user-update
# https://grafana.com/docs/http_api/admin/#permissions
# https://grafana.com/docs/http_api/admin/#delete-global-user
# compare previous values in the target_user object with the params
# search user by login
# create new user
# update found user
# Copyright: (c) 2017, Thierry Sallé (@seuf)
# the 'General' folder is a special case, its ID is always '0'
# search by title
# for comparison, we sometimes need to ignore a few keys
# you don't need to set the version, but '0' is incremented to '1' by Grafana's API
# if folderId is not provided in dashboard,
# try getting the folderId from the dashboard metadata,
# otherwise set the default folderId
# remove meta key if exists for compare
# Ignore dashboard ids since real identifier is uuid
# define data payload for grafana API
# Check that the dashboard JSON is nested under the 'dashboard' key
# define http header
# test if the folder exists
# test if dashboard already exists
# Ensure there is no id in payload
# dashboard does not exist, do nothing
# check if secureJsonData should be compared
# if we should ignore it, just drop it altogether
# handle secureJsonData/secureJsonFields, some current facts:
# - secureJsonFields reports each field that is set as true
# - once set, secureJsonFields can't be removed (the DS has to be deleted)
# secureJsonData is not provided, so just remove both for comparison
# we have some secure data, so just "rename" secureJsonFields for comparison as it will change anyhow every time
# define password
# define basic auth
# define tls auth
# datasource type related parameters
# Handle changes in es_version format: in Grafana < 8.x it used to
# be an integer and is now in semver format
# Retrieve the Semver format expected by API
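One way such a conversion could look, assuming the old integer values (e.g. 56) encode major and minor digits; this is a sketch of the idea, not Grafana's or the module's actual mapping:

```python
def es_version_to_semver(version):
    """Map old integer es_version values to a semver string; pass semver through."""
    if isinstance(version, int):
        # e.g. 56 -> '5.6.0'; single-digit values map to '<n>.0.0'
        major, minor = divmod(version, 10) if version >= 10 else (version, 0)
        return "%d.%d.0" % (major, minor)
    return version
```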
# general arguments
# alertmanager
# dingding
# discord
# googlechat
# kafka
# line
# opsgenie
# pagerduty
# pushover
# sensugo
# slack
# teams
# telegram
# threema
# victorops
# webex
# wecom
# TODO: check api messages
# already member
# XXX: Grafana folders endpoint stopped sending back json in response for delete operations
# see https://github.com/grafana/grafana/issues/77673
# Avoid sanity error with devel
# Postgres documentation fragment
# PostgreSQL module specific support methods.
# Ensure psycopg libraries are available before connecting to DB:
# only happens if fail_on_conn is False and there actually was an issue connecting to the DB
# the lines below seem redundant but they are actually needed for connect to work as expected
# Notice: incl_list and excl_list
# don't make sense together, therefore,
# if incl_list is not empty, we collect
# only values from it:
# Collect info:
# Default behaviour, if include or exclude is not passed:
# Just collect info for each item:
# Check spcoption exists:
# Check that pg_extension exists:
# Check that pg_replication_slots exists:
# If there is no replication:
# Check pending restart column exists:
# PG 10+
# PG < 10
# Following query returns:
# Name, Owner, Encoding, Collate, Ctype, Access Priv, Size
# that means we don't have permission to access these databases
# Reconnect to the default DB after gathering info in other DBs
# The context is to get the user's default database.
# Should be executed right after logging in
# https://github.com/ansible-collections/community.postgresql/issues/794
# Check input for potentially dangerous elements:
# Do job:
# Because they have the same meaning
# Check mutual exclusive parameters:
##############
# Do main job:
# Set default returned values:
# Refresh table info for RETURN.
# Note, if the table has been renamed, it gets info by the new name:
# We just change the table state here
# to keep other information about the dropped table:
# Module functions and classes #
# The subscription does not exist:
# To support the comment resetting functionality
# Parameters handling:
# Connect to DB and make cursor object:
# We check subscription state without DML queries execution, so set autocommit:
# Check version:
###################################
# Create object and do rock'n'roll:
# Always returns True:
# Get final subscription info:
# Connection is not needed any more:
# Return ret values and exit:
# Copyright: (c) 2017, Felix Archambault
# Measure query execution time in milliseconds as requested in
# https://github.com/ansible-collections/community.postgresql/issues/787
# if it's a list
# Convert elements of type list to strings
# representing PG arrays
# Ansible engine does not support decimals.
# An explicit conversion is required on the module's side
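Both conversions (lists to PG-style array literals, Decimals to floats) can be sketched in one recursive helper; this is an illustration, not the collection's actual function:

```python
from decimal import Decimal

def to_returnable(value):
    """Make a query result value safe to return from an Ansible module."""
    if isinstance(value, list):
        # Render Python lists as PostgreSQL-style array literals, e.g. '{1,2,3}'.
        return "{" + ",".join(str(to_returnable(v)) for v in value) + "}"
    if isinstance(value, Decimal):
        # The Ansible engine cannot serialize Decimal objects.
        return float(value)
    return value
```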
# Psycopg 3 doesn't fail with 'no results to fetch'
# This exception will be triggered only in Psycopg 2
# Copyright: (c) 2022, Andrew Klychkov (@Andersson007) <andrew.a.klychkov@gmail.com>
# Execute script content:
# In Psycopg 2, only the result of the last statement is returned.
# In Psycopg 3, all the results are available.
# https://www.psycopg.org/psycopg3/docs/basic/from_pg2.html#multiple-results-returned-from-multiple-statements
# Copyright: (c) 2018-2020 Andrew Klychkov (@Andersson007) <andrew.a.klychkov@gmail.com>
# Set some default values:
# If connection established:
# Copyright: (c) 2019, John Scalia (@jscalia), Andrew Klychkov (@Andersson007) <andrew.a.klychkov@gmail.com>
# Check server version (immediately_reserved needs 9.6+):
# When slot_type is logical and parameter db is not passed,
# the default database will be used to create the slot and
# the user should know about this.
# When the slot type is physical,
# it doesn't matter which database will be used
# because physical slots are global objects.
# Create an object and do main job
# Note: check mode is implemented here:
# If src is SQL SELECT statement:
# If src is a table:
# In this case the table is actually an SQL SELECT statement.
# If SQL fails, it's handled by exec_sql():
# If exec_sql was passed, it means all is OK:
# If SQL was executed successfully:
# Note: we don't need to check mutually exclusive params here, because they are
# checked automatically by AnsibleModule (mutually_exclusive=[] list above).
# Create the object and do main job:
# Note: parameters like dst, src, etc. are taken
# from the module object into the data object of the PgCopyData class.
# Therefore there's no need to pass args to the methods below.
# Note: check mode is implemented inside the methods below
# by checking passed module.check_mode arg.
# Finish:
# Return some values:
# For the resetting comment feature (comment: '') to work correctly
# Avoid catching this on Python 2.4
# Copyright: (c) Andrei Klychkov (@Andersson007) <andrew.a.klychkov@gmail.com>
# It was copied here from postgresql_set.
# GUC_LIST_QUOTE parameters list for each version where they changed (from PG_REQ_VER).
# It is a tuple of tuples as we need to iterate it in order.
# Unquote GUC_LIST_QUOTE parameter (each element can be quoted or not)
# Assume the parameter is GUC_LIST_QUOTE (check in param_is_guc_list_quote function)
# Due to a bug in PostgreSQL
# Andersson007 originally used ABC, but it turned out that
# Python 2.7 does not support it. As of 2025-03-27 it's still
# supported by Ansible on target hosts until ~mid May, 2025,
# so we, as a certified collection, must support it as well.
# TODO revisit this after May, 2025 to uncomment and use
# as a parent class in all the Value* classes.
# class Value(ABC):
# This abstract class is a blueprint for "real" classes
# that represent values of certain types.
# This makes practical sense as we want the classes
# to have the same set of parameters so we can instantiate them
# in the same manner.
# If you need to handle parameters of a new type
# or if you need to handle some combination of vartype
# and unit differently (like we do it with ValueMem or ValueTime),
# create another class using this class as parent.
# To understand why we use this, take a look at how
# the child classes are instantiated in a similar manner
# SELECT * FROM pg_catalog.pg_settings WHERE vartype = 'bool'
# We do not use all the parameters in every class
# like default_unit, etc., but we need them to instantiate
# classes in a standard manner
# SELECT * FROM pg_catalog.pg_settings WHERE vartype = 'integer' and unit IS NULL
# SELECT * FROM pg_catalog.pg_settings WHERE vartype = 'string'
# classes in a standard manner.
# It typically doesn't need normalization,
# so accept it as is
# Check whether the parameter is GUC_LIST_QUOTE (done once, as it depends only on the server version).
# These functions were copied here from the postgresql_set module
# SELECT * FROM pg_catalog.pg_settings WHERE vartype = 'enum'
# No idea why Ansible converts on/off passed as a string
# to "True" and "False". However, they are represented
# as "on" and "off" in pg_catalog.pg_settings.
# SELECT * FROM pg_catalog.pg_settings WHERE vartype = 'real'
# Drop the unit part as there's only "ms" or nothing
# Let's convert num_value to the smallest unit,
# i.e. to "us" which means microseconds
# When disabled, some params have -1 as value
# When the value is like 1min
# When the value is like 1ms
# When the value is like 1s
# When it doesn't contain a unit part
# we set it as the unit defined for this
# parameter in pg_catalog.pg_settings
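# An illustrative sketch of the time-unit normalization described above;
# the function name and the unit subset are hypothetical, not the module's
# actual code:

```python
import re

# Microseconds per unit; a subset of the units Postgres accepts,
# chosen for illustration.
US_PER_UNIT = {"us": 1, "ms": 1000, "s": 1000000, "min": 60000000}

def to_us(value, default_unit):
    """Normalize values like '1min', '1ms', '1s' (or a bare '5') to
    microseconds, falling back to the unit from pg_catalog.pg_settings."""
    m = re.match(r'^(-?\d+)\s*([a-z]*)$', value)
    if not m:
        raise ValueError("unsupported value: %s" % value)
    num, unit = int(m.group(1)), m.group(2) or default_unit
    if num == -1:
        return -1  # -1 means the setting is disabled; keep it as-is
    return num * US_PER_UNIT[unit]
```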
# If you pass anything else for memory-related param,
# Postgres will show that only the following
# units are acceptable
# Bytes = MB << 20, etc.
# This looks a bit better and maybe
# even works more efficiently than
# say Bytes = MB * 1024 * 1024
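# The shift-based conversion mentioned above can be sketched like this
# (names are illustrative, not the module's actual code):

```python
# Bit shifts per unit: Bytes = MB << 20 instead of MB * 1024 * 1024.
UNIT_SHIFT = {"B": 0, "kB": 10, "MB": 20, "GB": 30, "TB": 40}

def to_bytes(num, unit):
    """Convert an integer amount of the given unit to bytes."""
    return num << UNIT_SHIFT[unit]
```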
# This is a special case when the unit in pg_catalog.pg_settings is "8kB".
# Users can still pass such values as "10MB", etc.
# The only issue seems to appear when users don't specify the unit
# explicitly, i.e., when they pass just "100".
# In this case the self.__validate method will assign its default unit of 8kB
# When the value is like 1024MB
# When the value is like 1024B
# Run "SELECT DISTINCT unit FROM pg_catalog.pg_settings;"
# and extract memory-related ones
# and extract time-related ones
# In this case, it means that the setting is disabled
# and we don't need to do any sophisticated normalization
# It can be of type integer or real, that's why
# we don't have them under vartype == "integer"
# Attempt to convert the string to a float
# Check if the number is not an integer (has a decimal part)
# We don't expect s to be anything but a string
# If it cannot be converted to a float, it's not a valid float
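# A minimal sketch of the check described above (hypothetical name,
# not the module's actual code):

```python
def has_decimal_part(s):
    """Return True if string s parses as a float with a nonzero
    fractional part; False for integers and non-numeric strings."""
    try:
        f = float(s)
    except ValueError:
        # Not a valid float at all
        return False
    try:
        return f != int(f)
    except (OverflowError, ValueError):
        # inf cannot be converted to int; nan raises ValueError
        return False
```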
# The issue here is that a value can look like
# integer in one column, but like float in another,
# so let's check them all separately
# Leave the attrs[elem] as-is, i.e. a string
# This can happen when max_value of type real
# is huge and written in scientific notation
# e.g., 1.79769e+308 (as of 2025-05-14, this is the only one)
# and it doesn't make sense to convert it as it'll be really huge
# https://github.com/ansible-collections/community.postgresql/issues/853
# For some type of context it's impossible
# to change settings with ALTER SYSTEM and
# for some service restart is required
# Return a proper value class based on vartype and unit
# from a pg_catalog.pg_settings entry for a specific parameter
# Same object will be instantiated to compare
# the desired and the current values
# Compare normalized values of the desired and the current
# values to decide whether we need to do any real job
# As the value is "_RESET", i.e. a string, and
# the module always return changed=true, we just instantiate
# the desired value as if it would be a value of string type
# Because the result of running "ALTER SYSTEM RESET param;"
# is always a removal of the line from postgresql.auto.conf
# this will always run the command to ensure the removal
# and report changed=true
# You can uncomment the line below while debugging
# to see what DB actually returns for the parameter
# executed_queries.append(res[0])
# Issue https://github.com/ansible-collections/community.postgresql/issues/78
# Change value from 'one, two, three' -> "'one','two','three'"
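# The transformation mentioned above can be sketched as follows
# (an illustration; the function name is made up):

```python
def quote_guc_list(value):
    """Turn 'one, two, three' into "'one','two','three'" so that
    GUC_LIST_QUOTE parameters are quoted per element."""
    return ','.join("'%s'" % item.strip() for item in value.split(','))
```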
# PR https://github.com/ansible-collections/community.postgresql/pull/400
# Parameter names ending with '_command' or '_prefix'
# can contain commas but they are not lists
# PR https://github.com/ansible-collections/community.postgresql/pull/521
# unix_socket_directories up to PostgreSQL 13 lacks GUC_LIST_INPUT and
# GUC_LIST_QUOTE options so it is a single value parameter
# In case like search_path value "$user"
# just append it w/o any modifications
# There's at least one param that doesn't
# work well with ALTER SYSTEM SET.
# Add more to this function if you see any
# Check input for potentially dangerous elements
# Ensure psycopg libraries are available before connecting to DB
# Get and check server version
# We assume nothing has changed by default
# Instantiate the object
# When we need to reset the value by running
# "ALTER SYSTEM RESET param;" without
# setting up a regular value first
# This is the default case when we need to run
# "ALTER SYSTEM SET param = 'value';",
# i.e., it's not the above cases
# Fetch info again to get diff.
# It doesn't see the changes w/o reconnect
# Instantiate another object to get the latest attrs
# If there is any difference between
# the attrs in the diff, report changed
# Disconnect
# Convert ret values, then populate attrs and diff
# Attributes are immutable (in the context of this module at least),
# so we put them separately, not as a part of diff
# FOR DEBUGGING you can return the information below if needed.
# If yes, create an empty dict called debug first
# debug["value_class_value"]=pg_param.init_value.num_value,
# debug["value_class_unit"]=pg_param.init_value.passed_unit,
# debug["value_class_normalized"]=pg_param.init_value.normalized,
# debug["desir_class_value"]=pg_param.desired_value.num_value,
# debug["desir_class_unit"]=pg_param.desired_value.passed_unit,
# debug["desir_class_normalized"]=pg_param.desired_value.normalized,
# pbkdf2_hmac is missing on Python 2.6; we can safely assume
# that a PostgreSQL 10 capable instance has at least Python 2.7 installed
# map to cope with idiosyncrasies of SUPERUSER and LOGIN
# This is a special list for debugging.
# If you need to fetch information (e.g. results of cursor.fetchall(),
# queries built with cursor.mogrify(), vars values, etc.):
# 1. Put debug_info.append(<information_you_need>) as many times as you need.
# 2. Run integration tests or your playbook with -vvv
# 3. If it's not empty, you'll see the list in the returned json.
# The PUBLIC user is a special case that is always there
# Note: role_attr_flags escaped by parse_role_attrs and encrypted is a literal
# On some databases, e.g. AWS RDS instances, there is no access to
# the pg_authid relation to check the pre-existing password, so we
# just assume the password is different
# Do we actually need to do anything?
# Handle SQL_ASCII encoded databases
# Empty password means that the role shouldn't have a password, which
# means we need to check if the current password is None.
# If the provided password is a SCRAM hash, compare it directly to the current password
# SCRAM hashes are represented as a special object, containing hash data:
# `SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey>`
# for reference, see https://www.postgresql.org/docs/current/catalog-pg-authid.html
# extract SCRAM params from rolpassword
# we'll never need `storedKey` as it is only used for server auth in SCRAM
# storedKey = b64decode(r.group(3))
# from RFC5802 https://tools.ietf.org/html/rfc5802#section-3
# SaltedPassword  := Hi(Normalize(password), salt, i)
# ServerKey       := HMAC(SaltedPassword, "Server Key")
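# The RFC 5802 derivation above can be sketched with the standard library.
# This is an illustration of the comparison approach, not the module's
# exact code; the function name is made up:

```python
import hashlib
import hmac
import re
from base64 import b64decode, b64encode

# SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey>
SCRAM_RE = re.compile(r'^SCRAM-SHA-256\$(\d+):([^$]+)\$([^:]+):(.+)$')

def server_key_matches(password, rolpassword):
    """Derive ServerKey = HMAC(Hi(password, salt, i), "Server Key")
    per RFC 5802 and compare it with the ServerKey stored in
    pg_authid.rolpassword."""
    m = SCRAM_RE.match(rolpassword)
    if not m:
        return False
    iterations, salt = int(m.group(1)), b64decode(m.group(2))
    stored_server_key = b64decode(m.group(4))
    # SaltedPassword := Hi(Normalize(password), salt, i) is PBKDF2-HMAC-SHA256
    salted = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'),
                                 salt, iterations)
    derived = hmac.new(salted, b'Server Key', hashlib.sha256).digest()
    return hmac.compare_digest(derived, stored_server_key)
```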
# We assume the password is not scram encrypted
# or we cannot check it properly, e.g. due to missing dependencies
# When the provided password looks like a MD5-hash, value of
# 'encrypted' is ignored.
# https://github.com/ansible-collections/community.postgresql/issues/688
# When the current password is not none and is not
# hashed as scram-sha-256 / not explicitly declared as plain text
# (if we are here, these conditions should be met)
# but the default password encryption is scram-sha-256, update the password.
# Can be relevant when migrating from older version of postgres.
# 32: MD5 hashes are represented as a sequence of 32 hexadecimal digits
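# For illustration, the documented Postgres md5 format is the 'md5' prefix
# followed by md5(password + username) as 32 hex digits; a sketch
# (hypothetical names, not the module's actual code):

```python
import hashlib
import re

MD5_RE = re.compile(r'^md5[0-9a-f]{32}$')

def is_pg_md5(value):
    """True if value has Postgres's 'md5' + 32-hex-digit shape."""
    return bool(MD5_RE.match(value))

def pg_md5(password, username):
    """Postgres md5 role password: 'md5' + md5(password + username)."""
    return 'md5' + hashlib.md5((password + username).encode('utf-8')).hexdigest()
```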
# Let's first try to get the attrs from pg_authid.
# Some systems like AWS RDS instances
# do not allow user to access pg_authid
# If we succeeded, return it
# If we haven't succeeded, like in case of AWS RDS,
# try to get the attrs from the pg_roles table
# Compare the desired role_attr_flags and current ones.
# If they don't match, return True which means
# they need to be updated, False otherwise.
# If the desired expiration date is not equal to
# what is already set for the role, set this to True
# We could catch psycopg.errors.ReadOnlySqlTransaction directly,
# but that was added only in Psycopg 2.8
# Handle errors due to read-only transactions indicated by pgcode 25006
# ERROR:  cannot execute ALTER ROLE in a read-only transaction
# Note: role_attr_flags escaped by parse_role_attrs and encrypted is a literal
# Handle passwords.
# Get role's current attributes to check if they match with the desired state
# Does the password need to be changed?
# Do role attributes need to be changed?
# Does the role expiration date need to be changed?
# Does the role connection limit need to change?
# Now let's check if anything needs to change. If nothing, just return False
# If we are here, something does need to change.
# Compose a statement and execute it
# Do role attributes need to be changed? If not, just return False right away
# If they do, compose a statement and execute it
# Fetch new role attributes.
# Detect any differences between current_ and new_role_attrs.
# check each item in the current configuration
# we already have the correct setting
# so we can remove it from the list
# if the key is not in the list of settings we want, and we reset unspecified parameters
# we will reset it on the database
# if the setting is not in the db or has the wrong value, it will get updated
# parses a list of "key=value" strings to a dict
# It seems psycopg's prepared statements don't work with 'ALTER ROLE' at this point.
# This is vulnerable to SQL-injections (added to docs) but I don't see a better way to do this.
# we pre-quote users to make sure pg_quote_identifiers doesn't fail on dots
# user is already quoted, we run it through pg_quote_identifiers to make sure it doesn't contain any lonely
# double-quotes to prevent SQL-injections
# sanitize configuration
# handle user-specific configuration-defaults
# Copyright: (c) 2018, Andrew Klychkov (@Andersson007) <andrew.a.klychkov@gmail.com>
# To allow setting a value like 1mb instead of 1MB, etc:
# The function returns a value in bytes
# if the value contains 'B', 'kB', 'MB', 'GB', 'TB'.
# Otherwise it returns the passed argument.
# It's sometimes possible to have an empty value
# If the first char is not a digit, it does not make sense
# to parse further, so just return the passed value
# If the last char is not an alphabetical symbol, it means that
# it does not contain any suffixes, so no sense to parse further
# Extract digits
# When we reach the first non-digit element,
# e.g. in 1024kB, stop iterating
# For cases like "1B"
# Parameter names ending with '_command' or '_prefix' can contain commas but are not lists
# Convert a value like 1mb (Postgres does not support) to 1MB, etc:
# Convert a value like 1b (Postgres does not support) to 1B:
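# A sketch of the unit-case normalization described above (illustrative
# names, not the module's actual code):

```python
import re

PG_UNITS = {'b': 'B', 'kb': 'kB', 'mb': 'MB', 'gb': 'GB', 'tb': 'TB'}

def normalize_unit_case(value):
    """Convert '1mb' (which Postgres rejects) to '1MB', '1b' to '1B', etc.
    Values without a recognized digits+unit shape are returned unchanged."""
    m = re.match(r'^(\d+)([A-Za-z]+)$', value)
    if not m:
        return value
    digits, unit = m.group(1), m.group(2)
    return digits + PG_UNITS.get(unit.lower(), unit)
```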
# Check server version (needs 9.4 or later):
# Check whether the parameter is GUC_LIST_QUOTE (done once, as it depends only on the server version)
# Get info about param state:
# Do job
# If check_mode, just compare and exit:
# Always return the current raw value in check_mode:
# Set param (value can be an empty string):
# Reset param:
# nothing to change, exit:
# When the setting can be changed w/o restart, apply it
# Reconnect and recheck current value:
# f_ means 'final'
# Copyright: (c) 2019, Tobias Birkefeld (@tcraxs) <t@craxs.de>
# Collect info
# Change autocommit to False if check_mode:
# Create new sequence
# Drop non-existing sequence
# Drop existing sequence
# Rename sequence
# Refresh information
# Change owner, schema and settings
# change owner
# Set schema
# Rollback if it's possible and check_mode:
# Make return values:
# Convert result to list of dicts to handle it easier:
# Add the schema name as a key if not present:
# Add the object name key as a subkey
# (names must be unique within a schema, so no additional checks are needed):
# Add other attributes to a certain index:
# We don't need to commit anything, so, set it to False:
# Create object and do work:
# Clean up:
# Return information:
# We don't have functools.partial in Python < 2.5
# implicit roles in current pg version
# Methods for implicit roles managements
# Methods for querying database objects
# check if rolname is an implicit role
# check if rolname is present in pg_catalog.pg_roles
# PostgreSQL < 9.0 doesn't support "ALL TABLES IN SCHEMA schema"-like
# phrases in GRANT or REVOKE statements, therefore alternative methods are
# provided here.
# Methods for getting access control lists and group membership info
# To determine whether anything has changed after granting/revoking
# privileges, we compare the access control lists of the specified database
# objects before and afterwards. Python's list/string comparison should
# suffice for change detection, we should not actually have to parse ACLs.
# The same should apply to group membership information.
# Manipulating privileges
# get_status: function to get current status
# Return False (nothing has changed) if there are no objs to work on.
# obj_ids: quoted db object identifiers (sometimes schema-qualified)
# set_what: SQL-fragment specifying what to set for the target roles:
# Either group membership or privileges on objects of a certain type
# We don't want privs to be quoted here
# function types are already quoted above
# Note: obj_type has been checked against a set of string literals
# and privs was escaped when it was parsed
# Note: Underscores are replaced with spaces to support multi-word privs and obj_type
# for_whom: SQL-fragment specifying for whom to set the above
# as_who: SQL-fragment specifying as whom to set the above
# For Python 3+ this can fail when trying
# to compare NoneType elements with the sort method.
# With Psycopg 3 we get a list of dicts, it is easier to sort it as strings
# param "schema": default, allowed depends on param "type"
# param "objs": ALL_IN_SCHEMA can be used only
# when param "type" is table, sequence, function or procedure
# param "objs": default, required depends on param "type"
# param "privs": allowed, required depends on param "type"
# Check input
# Connect to Database
# privs
# objs:
# Again, do we have valid privs specified for object type:
# function signatures are encoded using ':' to separate args
# roles
# Some implicit roles (such as PUBLIC) work in uppercase without double quotes and in lowercase with double quotes.
# Other implicit roles (such as SESSION_USER) work only in uppercase without double quotes.
# So the approach that works for all implicit roles is uppercase without double quotes.
# check if target_roles is set with type: default_privs
# target roles
# Use a fifo to be notified of an error in pg_dump,
# since a shell pipe has no portable way to return
# the exit code of the first command.
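# The underlying problem is that `a | b` in a POSIX shell reports only b's
# exit status. The module solves this with a fifo; as an alternative
# illustration of the same requirement, here is a shell-free sketch in
# Python that exposes both exit codes (hypothetical helper, not the
# module's code):

```python
import subprocess

def run_pipeline(producer_cmd, consumer_cmd):
    """Run producer_cmd | consumer_cmd without a shell, returning both
    exit codes (a shell pipe only exposes the last command's)."""
    producer = subprocess.Popen(producer_cmd, stdout=subprocess.PIPE)
    consumer = subprocess.Popen(consumer_cmd, stdin=producer.stdout,
                                stdout=subprocess.DEVNULL)
    # Close our copy of the pipe so the producer gets SIGPIPE
    # if the consumer exits early.
    producer.stdout.close()
    return producer.wait(), consumer.wait()
```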
# If the source db doesn't exist and
# the target db exists, we assume that
# the desired state has been reached and
# respectively nothing needs to be changed
# Such a transformation is used, since the connection should go to 'maintenance_db'
# Parameters for connecting to the database
# Handle real mode
# Parameters for performing dump/restore
# 1. Get the current extension version:
# 2. Get the extension default version:
# 3. Get extension available versions:
# Get extension info and available versions:
# Decode version 'latest' when passed (if version is not passed 'latest' is assumed)
# Note: real_version is used for checks but not in CREATE/DROP/ALTER EXTENSION commands.
# If there are no available versions, the extension is not available
# Check default_version is available
# 'latest' version matches default_version specified in extension control file
# Passed version is 'latest', versions are available, but no default_version is specified
# in extension control file. In this situation CREATE/ALTER EXTENSION commands fail if
# a specific version is not passed ('latest' cannot be determined).
# If version passed:
# If extension is installed, update to passed version if a valid path exists
# Given/Latest version already installed
# Attempt to update to given/latest version
# Reconnect (required by some extensions like timescaledb)
# No valid update path from curr_version to latest extension version
# (extension is buggy or no direct update supported)
# If extension is not installed, install passed version
# If passed version not available fail
# Latest version not available (extension is buggy)
# Else install the passed version
# If version is not passed:
# Extension exists, no request to update so no change
# If the ext doesn't exist and is available:
# 'latest' version installed by default if version not passed
# If the ext doesn't exist and is not available:
# Get extension info again:
# Parse previous and current version for module output
# Copyright: (c) 2019, Loic Blot (@nerzhul) <loic.blot@unix-experience.fr>
# Publication does not exist:
# Publication DML operations:
# If alltables flag is False, get the list of targeted tables:
# FOR TABLES IN SCHEMA statement and row filters are supported since PostgreSQL 15
# Publication exists:
# Make list ["param = 'value'", ...] from params dict:
# Add the list to query_fragments:
# If check_mode, just add possible SQL to
# executed_queries and return:
# Add or drop tables from published tables suit:
# Add new tables to the publication:
# Drop redundant tables from the publication:
# 1. If needs to add table to the publication:
# If table is part of the publication, but row filter input
# doesn't match the actual state of the publication, then drop it and
# re-ADD it so that the row filter gets applied.
# 2. if there is a table in targeted tables
# that's not present in the passed tables:
# 1. If needs to add schema to the publication:
# 2. if there is a schema that's already in the publication
# but not present in the passed schemas we remove it from the publication:
# Update pub parameters:
# In PostgreSQL 10/11 only the optional 'publish' parameter is present.
# 'publish' value can be only a string with comma-separated items
# of allowed DML operations like 'insert,update' or
# 'insert,update,delete', etc.
# Make dictionary to compare with current attrs later:
# Compare val_dict and the dict with current 'publish' parameters,
# if they're different, set new values:
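# The comparison dictionary described above can be sketched as follows
# (illustrative names, not the module's actual code):

```python
ALL_DML = ('insert', 'update', 'delete', 'truncate')

def publish_to_dict(publish):
    """Map a comma-separated string like 'insert,update' to a dict
    suitable for comparing with the publication's current attributes."""
    wanted = {op.strip() for op in publish.split(',')}
    return {op: op in wanted for op in ALL_DML}
```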
# Default behavior for other cases:
# If the parameter was not set before:
# Update pub owner:
# Check pg_publication.pubtruncate exists (supported from PostgreSQL 11):
# We check the publication state without executing DML queries, so set autocommit:
# Nothing was changed by default:
# If module.check_mode=True, nothing will be changed:
# Get final publication info:
# Update publication info and return ret values:
# Copyright: (c) 2017, Flavien Chantelot (@Dorn-)
# Copyright: (c) 2018, Antoine Levy-Lambert (@antoinell)
# Check that spcoptions exists:
# For 9.1 version and earlier:
# Options exist:
# Location exists:
# settings must be a dict {'key': 'value'}
# Apply new settings:
# Create PgTablespace object and do main job:
# If tablespace exists with different location, exit:
# Create new tablespace:
# Because CREATE TABLESPACE cannot be run inside a transaction block:
# Drop existing tablespace:
# Because DROP TABLESPACE cannot be run inside a transaction block:
# Rename tablespace:
# Refresh information:
# Change owner, comment and settings:
# Update tablespace information in the class
# Copyright: (c) 2018-2019, Andrew Klychkov (@Andersson007) <andrew.a.klychkov@gmail.com>
# check_mode start
# check_mode end
# Copyright: (c) 2019, Sebastiaan Mannem (@sebasmannem) <sebastiaan.mannem@enterprisedb.com>
# from ansible.module_utils.postgres import postgres_common_argument_spec
# this regex allows for some invalid IPv6 addresses like ':::', but I honestly don't care
# if that line continues, we just glue the next line onto the end until it ends
# we can and have to do that, as continuation even applies within comments and quoted strings [sic]
# https://www.postgresql.org/docs/current/auth-pg-hba-conf.html#AUTH-PG-HBA-CONF
# we got a line continuation, but there was no more line
# add the newline so we don't lose that information
# handle comment-only lines
# handle empty lines
# handle "normal" lines
# remove continuation tokens
# a comment would always be the last token
# create Rule
# We need to do this charade for splitting to be compatible with Python 3.6 which has been EOL for three years
# at the time of writing. If you come across this after support for Python 3.6 has been dropped, please replace
# WHITESPACE_OR_QUOTE_RE in the beginning of the file with `TOKEN_SPLIT_RE = re.compile(r'(?<=[\s"#])')`
# and the next 8 lines (including bare_tokens.append) with `bare_tokens = TOKEN_SPLIT_RE.split(string)`
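# For reference, the suggested Python 3.7+ replacement behaves like this
# (a zero-width lookbehind split raised ValueError before 3.7, which is
# why the workaround above exists):

```python
import re

# Split after every whitespace, double quote, or '#', keeping the
# delimiter attached to the preceding token so the caller can
# reassemble quoted strings and comments.
TOKEN_SPLIT_RE = re.compile(r'(?<=[\s"#])')

def split_tokens(string):
    return TOKEN_SPLIT_RE.split(string)
```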
# if the previous token ended a quoted string, we need to decide how to continue
# if the token consists of only spaces, we know for sure this symbol is finished
# otherwise it might continue with more characters or even another quote
# we either start a new symbol or continue after a finished quote
# outside of quotes, whitespaces are ignored
# we use endswith here, to correctly handle strings like 'somekey="somevalue"'
# if there was a space before it, the quote will be alone, so that is not an issue
# handle edge-case of a comment having no space before the #-symbol like "... md5#some comment"
# if we are inside a quoted string we consume and append tokens until the quoted string ends
# includes, comment-only lines and empty lines are special
# normalize comment so we can safely compare it if we have to
# parse tokens into a rule
# construct the line from the comment if there is no line, but a comment
# comments are only compared if they are not attached to a rule
# normal rules are equal if they key matches
# if both have a line number, don't change the initial order
# if neither has a line number, order alphabetically
# if only one of the rules has a line number, that one is sorted before the other
# like that, new full-line comments are always added after existing ones
# comments go before anything else, includes go last
# this line is less than the other if it is not an include and a (full-line) comment
# this line is less than the other if the other is an include and not a comment
# When all else fails, just compare the rendered lines
# You can also write all to match any IP address,
# samehost to match any of the server's own IP addresses,
# or samenet to match any address in any subnet that the server is connected to.
# (all is considered the full range of all ips, which has a weight of 0)
# (sort samehost second after local)
# Might write some fancy code to determine all prefixes
# from all interfaces and find a sane value for this one.
# For now, let's assume IPv4/24 or IPv6/96 (both have weight 96).
# suffix matching (domain name), let's assume a very large scale
# and therefore a very low weight IPv4/16 or IPv6/64 (both have weight 64).
# hostname, let's assume only one host matches, which is
# IPv4/32 or IPv6/128 (both have weight 128)
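# The weighting scheme described above can be sketched like this. The
# scaling assumption (IPv4 prefix lengths multiplied by 4, so /24 and
# IPv6 /96 both weigh 96) follows the comments; the function name and
# keyword handling are illustrative, not the module's actual code:

```python
import ipaddress

def source_weight(source):
    """Specificity weight of a pg_hba address field: higher sorts as
    more specific. 'all' covers every address, so it weighs 0."""
    if source == 'all':
        return 0
    try:
        net = ipaddress.ip_network(source, strict=False)
    except ValueError:
        if source.startswith('.'):
            return 64   # domain-suffix match: assume a very broad scope
        return 128      # plain hostname: assume a single matching host
    # Scale IPv4 prefixes by 4 so IPv4/24 == IPv6/96 == weight 96
    return net.prefixlen * 4 if net.version == 4 else net.prefixlen
```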
# keywords lose their special meaning if quoted
# if "all" is in the list, we sort it to the bottom
# only empty lines
# we might want to change the return-type to a dict at some point, but right now we return a string
# to not introduce breaking changes
# ret_dict[header_map['options']] = copy.copy(self._auth_options)
# we haven't included the comment before, should we?
# if self._comment:
# empty lines, full line comments and includes are special
# don't strip quotes from database or user, as they have a special meaning there [sic]
# > Quoting one of the keywords in a database, user, or address field (e.g., all or replication) makes the word
# > lose its special meaning, and just match a database, user, or host with that name.
# it is an IP, but without a CIDR suffix, so we expect a netmask in the next token
# the method should be after the netmask
# if it is anything but a bare IP address, we expect the method on index 4
# if there is anything after the method, that must be options
# handle special cases
# if the rule is special, we are done now
# make sure each rule includes all required fields
# verify contype and set databases and users
# verify address and netmask if the contype isn't "local"
# verify the netmask if there is one
# we ignore address / netmask when contype is 'local'
# verify the method
# if the string is quoted or a regex, we return it unaltered
# we sort the dbs/users alphabetically
# try to parse it to a network
# if it contains a slash, it is a network
# it is a network, but has host bits set
# it might be a quoted address or network
# if it was a quoted address, we return it without quotes
# not a valid network or address, may be a hostname or keyword
# try to remove the temporary file if something goes wrong
# append rule if it doesn't exist
# update rule if it exists but is not correct
# delete rule if it exists
# remove blank lines before sorting
# if the lists have different lengths, they are not identical
# if any rule is not identical to the rule in the same list at the same index, the lists are not identical
# if we didn't find a mismatch, the lists are identical
# argument_spec = postgres_common_argument_spec()
# FIXME this should be removed and we should use standard methods
# TODO this can probably be changed to dict without breaking
# DEPRECATED, does nothing
# if overwrite is true, we don't care that we can't parse the file and just start with an empty one
# if both of those aren't set, we just read the rules from the file
# it's ok if the module default is set
# alias handling
# use user-supplied defaults or module defaults
# use module defaults
# we split users and databases on ',', but pg_hba.conf itself can handle comma-separated lists,
# so in the future we might want to just put them in as one line
# if sorting is turned on, we need to compare the sorted list
# compare the new rules to existing rules, if they are not identical, we overwrite everything
# file_args = None
# Roles do not exist, nothing to do, exit:
# if a new_owner is the object owner now,
# nothing to do:
# if we want to change ownership:
# if we want to reassign objects owned by roles:
# check recursively in case of nested lists
# recursively check values, as they might be dicts, as well
# Copyright (c), Ted Timmons <ted@timmons.me>, 2017.
# Most of this was originally added by other creators in the postgresql_user module.
# This line is needed for unit tests
# We need Psycopg 3 to be at least 3.1.0 because we need Client-side-binding cursors
# When a Linux distribution provides both Psycopg2 and Psycopg 3.0 we will use Psycopg2
# Getting a dictionary of environment variables
# TODO: Should we raise it as psycopg? That will be a breaking change
# Switch role, if specified:
# Ensure proper datestyle, only supported in psycopg 3
# This is usually needed to return queries in check_mode
# without execution
# To use default values, keyword arguments must be absent, so
# check which values are empty and don't include them in the return dictionary
# Might be different in the modules:
# If a login_unix_socket is specified, incorporate it here.
# If connect_params is specified, merge it together
# If role is in a group now, pass:
# If role is not in a group now, pass:
# 1. Get groups that the role is a member of but that are not in self.groups, and revoke them
# 2. Filter out groups that are in self.groups and
# that the role is already a member of, then grant the rest
# Update role lists, excluding non-existent roles:
# Once we drop support for Ansible 2.11, we can
# Docker doc fragment
# The following needs to be kept in sync with the compose_v2 module utils
# For plugins: allow to define common options with Ansible variables
# Additional, more specific stuff for minimal Docker SDK for Python version < 2.0
# Additional, more specific stuff for minimal Docker SDK for Python version >= 2.0.
# Note that Docker SDK for Python >= 2.0 requires Python 2.7 or newer.
# Docker doc fragment when using the vendored API access code
# Docker doc fragment when using the Docker CLI
# Copyright (c) 2019-2020, Felix Fontein <felix@fontein.de>
# Windows uses Powershell modules
# Since we are not setting the actual_user, look it up so we have it for logging later
# Only do this if display verbosity is high enough that we'll need the value
# This saves overhead from calling into docker when we do not need to
# Copyright (c) 2021 Jeff Goldschrafe <jeff@holyhandgrenade.org>
# Based on Ansible local connection plugin by:
# Copyright (c) 2012 Michael DeHaan <michael.dehaan@gmail.com>
# Copyright (c) 2015, 2017 Toshio Kuratomi <tkuratomi@ansible.com>
# Because nsenter requires very high privileges, our remote user
# is always assumed to be root.
# Rewrite the provided command to prefix it with nsenter
# This plugin does not support pipelining. This diverges from the behavior of
# the core "local" connection plugin that this one derives from.
# if we created a master, we can close the other half of the pty now, otherwise master is stdin
# Based on the chroot connection plugin by Maykel Moya
# (c) 2014, Lorin Hochstein
# (c) 2015, Leendert Brouwer (https://github.com/objectified)
# Note: docker supports running as non-root in some configurations.
# (For instance, setting the UNIX socket file to be readable and
# writable by a specific UNIX group and then putting users into that
# group).  Therefore we do not check that the user is root when using
# this connection.  But if the user is getting a permission denied
# error it probably means that docker on their system is only
# configured to be connected to by root and they are not running as
# root.
# no result yet, must be newer Docker version
# old docker versions
# The default exec user is root, unless it was changed in the Dockerfile with USER
# https://github.com/docker/cli/pull/732, first appeared in release 18.06.0
# TODO: this is mostly for backwards compatibility, play_context is used as fallback for older versions
# docker arguments
# timeout, use unless default and pc is different, backwards compat
# An explicit user is provided
# Support for specifying the exec user was added in docker 1.7
# This saves overhead from calling into docker when we do not need to.
# Older docker does not have native support for copying files into
# running containers, so we use docker exec to implement this
# Although docker version 1.8 and later provide support, the
# owner and group of the files are always set to root
# out_path is the final file path, but docker takes a directory, not a
# file path
# Older docker does not have native support for fetching files with the `cp` command
# If `cp` fails, try to use `dd` instead
# Rename if needed
# Clear container user cache
# For the parts taken from the docker inventory script:
# Copyright (c) 2016, Paul Durivage <paul.durivage@gmail.com>
# Copyright (c) 2016, Chris Houseknecht <house@redhat.com>
# Copyright (c) 2016, James Tanner <jtanner@redhat.com>
# Add container to groups
# Figure out ssh IP and Port
# Lookup the public facing port Nat'ed to ssh port.
# We need to do this last since we also add a group called `name`.
# When we do this before a set_variable() call, the variables are assigned
# to the group, and not to the host.
# This is a workaround for a Docker bug where in some cases the Leader IP is 0.0.0.0
# Check moby/moby#35437 for details
# Copyright (c) 2019, Ximon Eighteen <ximon.eighteen@gmail.com>
# This can happen when the machine is created but provisioning is incomplete
# example output of docker-machine env --shell=sh:
# capture any of the DOCKER_xxx variables that were output and create Ansible host vars
# with the same name and value but with a dm_ name prefix.
# Filter out machines that are not in the Running state, as we probably cannot perform any useful actions
# with them.
# 'optional', 'optional-silently'
# query `docker-machine env` to obtain remote Docker daemon connection settings in the form of commands
# that could be used to set environment variables to influence a local Docker client:
# add an entry in the inventory for this host
# check for valid ip address from inspect output, else explicitly use ip command to find host ip address
# this works around an issue seen with Google Compute Platform where the IP address was not available
# via the 'inspect' subcommand but was via the 'ip' subcommand.
# set standard Ansible remote host connection settings to details captured from `docker-machine`
# see: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
# set variables based on Docker Machine tags
# set variables based on Docker Machine env variables
# Copyright 2016 Red Hat | Ansible
# Copyright (c) 2019 Hannes Ljungberg <hannes.ljungberg@gmail.com>
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
# Copyright (c) 2019 Piotr Wojciechowski <piotr@it-playground.pl>
# Copyright 2017 Red Hat | Ansible, Alex Grönholm <alex.gronholm@nextday.fi>
# https://github.com/docker/compose/pull/10981 - 2.22.0
# https://github.com/docker/compose/pull/10134 - 2.15.0
# If name contains a tag, it takes precedence over tag parameter.
# Since we no longer pass --tag if --output is provided, we need to set this manually
# We pass values on using environment variables. The user has been warned in the documentation
# that they should only use this mechanism when being comfortable with it.
# Use /dev/urandom to generate some entropy to make the environment variable's name unguessable
# convert to bytes
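The two comments above describe generating an unguessable environment variable name from kernel entropy. A minimal sketch of that idea (the prefix and length below are illustrative, not the module's actual values):

```python
import os
import binascii

def generate_env_var_name(prefix="ANSIBLE_DOCKER_", num_bytes=16):
    # os.urandom() reads from /dev/urandom on POSIX systems; hex-encode
    # the entropy so the result is a valid environment variable name.
    suffix = binascii.hexlify(os.urandom(num_bytes)).decode("ascii").upper()
    return prefix + suffix
```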
# supports_check_mode=True,
# API version 1.39+: return value CachesDeleted (list of str)
# We cannot see the data after creation, so we add a label we can use for an idempotency check
# if something changed or force, delete and re-create the secret
# For some reason, from Docker 28.3.3 on not specifying X-Registry-Auth seems to be invalid.
# See https://github.com/moby/moby/issues/50614.
# We cannot see the data after creation, so we add a label we can use for an idempotency check
# only use templating argument when self.template_driver is defined
# template_driver has changed if it was set in the previous config
# and now it differs, or if it was not set but now it is.
# if something changed or force, delete and re-create the config
# If the image vanished while we were trying to remove it, do not fail
# Copyright (c) 2021 Red Hat | Ansible, Sakar Mehra <sakarmehra100@gmail.com | @sakar97>
# Copyright (c) 2019, Vladimir Porshkevich (@porshkevich) <neosonic@mail.ru>
# Get privileges
# Pull plugin
# Inspect and configure plugin
# This can happen in check mode
# Copyright (c) 2023, Léo El Amri (@lel-amri)
# Note that for Docker Compose 2.32.x and 2.33.x, the long form is '--y' and not '--yes'.
# This was fixed in Docker Compose 2.34.0 (https://github.com/docker/compose/releases/tag/v2.34.0).
# Since 'docker compose stop' **always** claims it is stopping containers, even if they are already
# stopped, we have to do this in a slightly more complicated way.
# Make sure all containers are created
# Make sure all containers are stopped
# Make sure names in repository are valid images, and add tag if needed
# Idempotency checks
# At this point we definitely know that we can talk to the Docker daemon
# At this point we know that we can talk to Docker, since we asked it for the API version
# config_only sets driver to 'null' (and scope to 'local') so force that here. Otherwise we get
# diffs of 'null' --> 'bridge' given that the driver option defaults to 'bridge'.
# Put network's IPAM config into the same format as module's IPAM config
# Compare lists of dicts as sets of dicts
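Comparing "lists of dicts as sets of dicts" cannot use Python's `set` type, since dicts are unhashable; a membership check in both directions achieves the same effect. A sketch (the function name is hypothetical):

```python
def dict_lists_equal_as_sets(list_a, list_b):
    # Treat two lists of dicts as equal if every dict in one list also
    # appears in the other, ignoring order and duplicates.
    def contains_all(outer, inner):
        return all(any(item == candidate for candidate in outer) for item in inner)
    return contains_all(list_a, list_b) and contains_all(list_b, list_a)
```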
# due to recursive argument_spec, all keys are always present
# (but have default value None if not specified)
# Only add IPAM if a driver was specified or if IPAM parameters were
# specified. Leaving this parameter out can significantly speed up
# creation; on my machine creation with this option needs ~15 seconds,
# and without just a few seconds.
# "The docker server >= 1.10.0"
# Copyright (c) 2017, Dario Zanzico (git@dariozanzico.com)
# Check if any invalid keys left
# Either the list is empty or does not contain dictionaries
# noqa: E721, pylint: disable=unidiomatic-typecheck
# Even though the types are different between these items,
# they are both strings. Try matching on the same string type.
# Fallback to assuming the strings are different
# Sort the aliases
# Before Docker API 1.29 adding/removing networks was not supported
# In Docker API 1.25 attaching networks to TaskTemplate is preferred over Spec
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
# Create copies of publish_item dicts where keys specified in ignored_keys are left out
# Prior to Docker SDK 4.0.0 no warnings were returned and will thus be ignored.
# (see https://github.com/docker/docker-py/pull/2272)
# Sometimes Version.Index will have changed between an inspect and
# update. If this is encountered we'll retry the update.
# Copyright (c) 2018 Dario Zanzico (git@dariozanzico.com)
# Make sure we have a string (assuming that line['stream'] and
# line['status'] are either not defined, falsish, or a string)
# Load image(s) from file
# Collect loaded images
# Copyright 2025 Felix Fontein <felix@fontein.de>
# 'ssl_version': context.ssl_version,  -- this isn't used anymore
# Adjust protocol name so that it works with the Docker CLI tool as well
# Create config for the modules
# Will have a 'sha256:' prefix
# Sanity check: fail early when we know that something will fail later
# Load the image from an archive
# pull the image
# Finding the image does not always work, especially when running a localhost registry. In those
# cases, if we do not set force=True, it errors.
# line = json.loads(line)
# We can only do this when we actually got some output from Docker daemon
# f1 is EOF, so stop reading from it
# f2 is EOF, so stop reading from it
# At least one of f1 and f2 is EOF and all its data has
# been processed. If both are EOF and their data has been
# processed, the files are equal, otherwise not.
# Compare the next chunk of data, and remove it from the buffers
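The EOF and buffer handling described above can be sketched as a chunk-wise stream comparison (a simplified standalone version, not the module's actual code):

```python
def streams_equal(f1, f2, chunk_size=4096):
    # Compare two binary streams without loading them fully into memory.
    buf1 = b""
    buf2 = b""
    while True:
        if not buf1:
            buf1 = f1.read(chunk_size)
        if not buf2:
            buf2 = f2.read(chunk_size)
        if not buf1 or not buf2:
            # At least one stream is exhausted and its data has been
            # processed; the streams are equal iff both are exhausted.
            return not buf1 and not buf2
        # Compare the next chunk of data and remove it from the buffers
        common = min(len(buf1), len(buf2))
        if buf1[:common] != buf2[:common]:
            return False
        buf1, buf2 = buf1[common:], buf2[common:]
```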
# https://pkg.go.dev/io/fs#FileMode
# ModeDir
# ModeTemporary
# ModeSymlink
# ModeDevice
# ModeNamedPipe
# ModeSocket
# ModeCharDevice
# ModeIrregular
# ModeSetuid
# ModeSetgid
# ModeSticky
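The mode bits listed above come from Go's io/fs FileMode, where the flags are counted down from the most significant bit of a uint32. A sketch of the constants and a symlink check:

```python
# Go io/fs FileMode bits (https://pkg.go.dev/io/fs#FileMode),
# counted from the most significant bit of a uint32.
MODE_DIR         = 1 << 31
MODE_APPEND      = 1 << 30
MODE_EXCLUSIVE   = 1 << 29
MODE_TEMPORARY   = 1 << 28
MODE_SYMLINK     = 1 << 27
MODE_DEVICE      = 1 << 26
MODE_NAMED_PIPE  = 1 << 25
MODE_SOCKET      = 1 << 24
MODE_SETUID      = 1 << 23
MODE_SETGID      = 1 << 22
MODE_CHAR_DEVICE = 1 << 21
MODE_STICKY      = 1 << 20
MODE_IRREGULAR   = 1 << 19

def is_symlink(go_mode):
    # "bit 32 - 5" in the comments refers to 1 << 27, i.e. ModeSymlink
    return bool(go_mode & MODE_SYMLINK)
```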
# First handle all filesystem object types that are not regular files
# Check whether file is too large
# We need to get hold of the content
# TODO: better detection
# (ansible-core also just checks for 0x00, and even just sticks to the first 8k, so this is not too bad...)
# in Python 2, this *cannot* be used in `with`...
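The 0x00 heuristic mentioned above can be sketched as follows (a simplification; real detection would be more involved, as the TODO notes):

```python
def looks_binary(data):
    # Heuristic: treat content as binary if a NUL byte appears within
    # the first 8 KiB (the same shortcut ansible-core uses).
    return b"\x00" in data[:8192]
```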
# Retrieve information of local file
# When forcing and we are not following links in the container, go!
# Resolve symlinks in the container (if requested), and get information on container's file
# Follow links in the Docker container?
# If the file was not found, continue
# When forcing, go!
# If force is set to False, and the destination exists, assume there's nothing to do
# Basic idempotency checks
# Fetch file from container
# Check things like user/group ID and mode
# Since the content is no_log, make sure that the before/after strings look sufficiently different
# Undocumented parameters for use by the action plugin
# depending on Python version and error, multiple different exceptions can be raised
# Can happen if a user explicitly passes `content: null` or `path: null`...
# missing Docker SDK for Python handled in ansible.module_utils.docker_common
# Return the container ID to avoid triggering another connection to the host;
# the container ID is sufficient to get extended info in other tasks
# The number of replicas has to be updated in the calling method, or may be left as None
# "The docker server >= 1.9.0"
# Copyright (c) 2020 Matt Clay <mclay@redhat.com>
# File content varies based on the environment:
# While this was true and worked well for a long time, this seems to be no longer accurate
# with newer Docker / Podman versions and/or with cgroupv2. That's why the /proc/self/mountinfo
# detection further down is done when this test is inconclusive.
# As to why this works, see the explanations by Matt Clay in
# https://github.com/ansible/ansible/blob/80d2f8da02052f64396da6b8caaf820eedbf18e2/test/lib/ansible_test/_internal/docker_util.py#L571-L610
# Copyright (c) 2020 Jose Angel Munoz (@imjoseangel)
# Node
# Copyright (c) 2016 Olaf Kilian <olaf.kilian@symanex.com>
# Make sure we have a minimal config if none is available.
# Attempt to read the existing config.
# No config found or an invalid config found so we'll ignore it.
# Update our internal config with what ever was loaded.
# Make sure directory exists
# Write config; make sure it has permissions 0o600
# build up the auth structure
# If we found an existing auth config for this registry and username
# combination, we can return it immediately unless reauth is requested.
# If the user is already logged in, the response contains the password for the user.
# This returns the correct password even if a wrong password was given.
# So if it returns a password different from the one we passed, and the user did not
# request reauthorization, still reauthorize.
# Get the configuration store.
# get raises an exception on not found.
# Check to see if credentials already exist.
# Make sure that there is a credential helper before trying to instantiate a
# Store object.
# Set to True when transferring files to the remote
# Scrambling is not done for security, but to avoid no_log screwing up the diff
# See here ("Extended description") for a definition what tags can be:
# https://docs.docker.com/engine/reference/commandline/tag/
# get default machine name from the url
# If a or b is None:
# If both are None: equality
# Otherwise, not equal for values, and equal
# if the other is empty for set/list/dict
# For allow_more_present, allow a to be None
# Otherwise, the iterable object which is not None must have length 0
# Do proper comparison (both objects not None)
# If we would know that both a and b do not contain duplicates,
# we could simply compare len(a) to len(b) to finish this test.
# We can assume that b has no duplicates (as it is returned by
# docker), but we do not know for a.
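The None/empty-collection semantics described above can be sketched for lists like this (the name and the exact subset semantics are illustrative; elements are assumed sortable in the equality branch):

```python
def lists_match(a, b, allow_more_present=False):
    # If both are None, they are equal; if only one is None, it matches
    # an empty collection (for allow_more_present, only a may be None).
    if a is None and b is None:
        return True
    if a is None:
        return allow_more_present or len(b) == 0
    if b is None:
        return len(a) == 0
    if allow_more_present:
        # Subset check: every element of a must appear in b. We cannot
        # shortcut via len(a) <= len(b), since a may contain duplicates
        # (b is assumed not to, as it comes from the Docker daemon).
        return all(item in b for item in a)
    # Order-independent equality
    return sorted(a) == sorted(b)
```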
# All supported healthcheck parameters
# If the user explicitly disables the healthcheck, return None
# as the healthcheck object, and set disable_healthcheck to True
# Copyright (c) 2019-2021, Felix Fontein <felix@fontein.de>
# FIXME: This does **not work with SSLSocket**! Apparently SSLSocket's shutdown() does not accept a 'how' argument;
# probably: "TypeError: shutdown() takes 1 positional argument but 2 were given"
# WrappedSocket (urllib3/contrib/pyopenssl) does not have `send`, but
# only `sendall`, which uses `_send_until_done` under the hood.
# 10 milliseconds
# After calling self._sock.shutdown(), OpenSSL's/urllib3's
# WrappedSocket seems to eventually raise ZeroReturnError in
# case of EOF
# no data available
# Stream EOF
# Stream EOF (as reported by docker daemon)
# When the SSH transport is used, Docker SDK for Python internally uses Paramiko, whose
# Channel object supports select(), but only for reading
# (https://github.com/paramiko/paramiko/issues/695).
# Once we drop support for ansible-core 2.16, we can remove the try/except.
# Copyright (c) 2019 Piotr Wojciechowski (@wojciechowskipiotr) <piotr@it-playground.pl>
# Copyright (c) Thierry Bouvet (@tbouvet)
# The format is defined in https://pkg.go.dev/github.com/kr/logfmt?utm_source=godoc
# (look for "EBNFish")
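A minimal logfmt parser along the lines of that grammar might look like this (a simplification that leans on shlex for quote handling; the real EBNF allows more):

```python
import shlex

def parse_logfmt(line):
    # Minimal logfmt parser: space-separated key=value pairs, where
    # values may be double-quoted; bare keys become boolean flags.
    result = {}
    for token in shlex.split(line):
        if "=" in token:
            key, _, value = token.partition("=")
            result[key] = value
        else:
            result[token] = True
    return result
```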
# An extra, specific to containers
# Extras for pull events
# Extras for built events
# Extras for build start events
# type: (Type[ResourceType], Text) -> Any
# The following needs to be kept in sync with the MINIMUM_VERSION compose_v2 docs fragment
# TODO: no idea what to do with this
# This could be a bug, a change of docker compose's output format, ...
# Tell the user to report it to us :-)
# This is a bug in Compose that will get fixed by https://github.com/docker/compose/pull/11996
# For some reason, Writer.TailMsgf() is always used for errors *except* in one place,
# where its message is prepended with 'WARNING: ' (in pkg/compose/pull.go).
# {"dry-run":true,"id":" ","text":"build service app"}
# {"dry-run":true,"id":"==>","text":"==> writing image dryRun-7d1043473d55bfa90e8530d35801d4e381bc69f0"}
# {"dry-run":true,"id":"ansible-docker-test-dc713f1f-container ==>","text":"==> writing image dryRun-5d9204268db1a73d57bbd24a25afbeacebe2bc02"}
# (The longer form happens since Docker Compose 2.39.0)
# {"dry-run":true,"id":"==> ==>","text":"naming to display-app"}
# {"dry-run":true,"id":"ansible-docker-test-dc713f1f-container ==> ==>","text":"naming to ansible-docker-test-dc713f1f-image"}
# Ignore this
# Continuing an existing event
# Unparsable line that apparently belongs to the previous error event
# Error message that is independent of an error event
# **Very likely** an error message that is independent of an error event
# If a message is present, assume it is a warning
# Support for JSON output was added in Compose 2.29.0 (https://github.com/docker/compose/releases/tag/v2.29.0);
# more precisely in https://github.com/docker/compose/pull/11478
# https://github.com/docker/compose/pull/10690
# https://github.com/docker/compose/pull/11038
# Breaking change in 2.21.0: https://github.com/docker/compose/pull/10918
# Handle breaking change in Docker Compose 2.37.0; see
# https://github.com/ansible-collections/community.docker/issues/1082
# and https://github.com/docker/compose/issues/12916 for details
# Only return stdout and stderr if it is not empty
# should not happen, but simply ignore to be on the safe side
# data can also be a file object for streaming. This is because _put uses requests's put().
# See https://requests.readthedocs.io/en/latest/user/advanced/#streaming-uploads
# Note that without both name (bytes) and arcname (unicode), this either fails for
# Python 2.7, Python 3.5/3.6, or Python 3.7+. Only when passing both (in this
# form) does it work with Python 2.7, 3.5, 3.6, and 3.7 up to 3.11
# If for some reason the file shrunk, fill up to the announced size with zeros.
# (If it enlarged, ignore the remainder.)
# We need to write a multiple of 512 bytes. Fill up with zeros.
# End with two zeroed blocks
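The padding rules above (fill shrunk files with zeros up to the announced size, truncate grown ones, and pad to 512-byte tar blocks) can be sketched as:

```python
BLOCK_SIZE = 512  # tar archives are written in 512-byte blocks

def pad_payload(data, announced_size):
    # If the file grew past the announced size, ignore the remainder;
    # if it shrank, fill up to the announced size with zeros.
    data = data[:announced_size]
    data += b"\x00" * (announced_size - len(data))
    # We must write a multiple of 512 bytes; fill up with zeros.
    remainder = len(data) % BLOCK_SIZE
    if remainder:
        data += b"\x00" * (BLOCK_SIZE - remainder)
    return data
```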
# Python 2 (or more precisely: Python < 3.3) has no timestamp(). Use the following
# expression for Python 2:
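The Python 2 replacement for datetime.timestamp() alluded to above is the classic epoch subtraction (shown here in Python 3 syntax; dt is assumed to be a naive UTC datetime):

```python
import datetime

def timestamp(dt):
    # Python < 3.3 has no datetime.timestamp(); compute seconds since
    # the UNIX epoch manually instead.
    epoch = datetime.datetime(1970, 1, 1)
    return (dt - epoch).total_seconds()
```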
# https://pkg.go.dev/io/fs#FileMode: bit 32 - 5 means ModeSymlink
# The next two imports ``docker.models`` and ``docker.ssladapter`` are used
# to ensure the user does not have both ``docker`` and ``docker-py`` modules
# installed, as they utilize the same namespace and are incompatible
# docker (Docker SDK for Python >= 2.0.0)
# docker-py (Docker SDK for Python < 2.0.0)
# Either Docker SDK for Python is no longer using requests, or Docker SDK for Python is not around either,
# or Docker SDK for Python's dependency requests is missing. In any case, define an exception
# class RequestException so that our code does not break.
# No Docker SDK for Python. Create a placeholder client to allow
# instantiation of AnsibleModule and proper error handling
# Filter out all None parameters
# TLS with verification
# TLS without verification
# No TLS
# The minimal required version is < 2.0 (and the current version as well).
# Advertise docker (instead of docker-py).
# take module parameter value
# take the env variable value
# take the default
# Precedence: module parameters -> environment variables -> defaults.
# In API <= 1.20, images pulled from Docker Hub are seen with the name 'docker.io/<name>'
# If docker.io is explicitly there in name, the image
# is not found in some cases (#41509)
# Sometimes library/xxx images are not found
# Last case for some Docker versions: if docker.io was not there,
# it can be that the image was not found either
# (https://github.com/ansible/ansible/pull/15586)
# This seems to be happening with podman-docker
# (https://github.com/ansible-collections/community.docker/issues/291)
# Modules can put information in here which will always be returned
# in case client.fail() is called.
# Test whether option is supported, and store result
# Fail if option is not supported but used
# Test whether option is specified
# If the option is used, compose error message.
# Define an exception class RequestException so that our code does not break.
# only use "filter" on API 1.24 and under, as it is deprecated
# debug=dict(type='bool', default=False),
# `--format json` was only added as a shorthand for `--format {{ json . }}` in Docker 23.0
# def call_cli(self, *args, check_rc=False, data=None, cwd=None, environ_update=None):
# Python 2.7 does not like anything other than '**kwargs' after '*args', so we have to do this manually...
# def call_cli_json(self, *args, check_rc=False, data=None, cwd=None, environ_update=None, warn_on_stderr=False):
# def call_cli_json_stream(self, *args, check_rc=False, data=None, cwd=None, environ_update=None, warn_on_stderr=False):
# self.module.params['debug']
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on containerd's platforms Go module
# (https://github.com/containerd/containerd/tree/main/platforms)
# Copyright The containerd Authors
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# See https://github.com/containerd/containerd/blob/main/platforms/database.go#L32-L38
# See https://github.com/containerd/containerd/blob/main/platforms/database.go#L54-L60
# See normalizeOS() in https://github.com/containerd/containerd/blob/main/platforms/database.go
# See normalizeArch() in https://github.com/containerd/containerd/blob/main/platforms/database.go
# See Parse() in https://github.com/containerd/containerd/blob/main/platforms/platforms.go
# The part is either OS or architecture
# Generate a one-byte key. Right now the functions below do not use more
# than one byte, so this is sufficient.
# Return anything that is not zero
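A one-byte XOR scramble matching the comments above might look like this (a sketch of the idea only; the actual module encodes the result differently):

```python
import os

def generate_insecure_key():
    # Generate a one-byte key; the scrambling is for diff readability,
    # not for security. Return anything that is not zero.
    while True:
        key = os.urandom(1)
        if key != b"\x00":
            return key

def scramble(data, key):
    # XOR every byte with the key; applying scramble twice restores the data.
    return bytes(b ^ key[0] for b in data)
```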
# Copyright 2022 Red Hat | Ansible
# Python 2 does not support causes
# FileNotFoundError does not exist in Python 2
# In Python 2.6, this does not have __exit__
# Extracts hash without 'sha256:' prefix
# Strip off .json filename extension, leaving just the hash.
# Moby 25.0.0, Docker API 1.44
# In Python 2.6, TarFile does not have __exit__
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
# Copyright (c) 2016-2022 Docker, Inc.
# The OpenSSH server default value for MaxSessions is 10 which means we can
# use up to 9, leaving the final session for the underlying SSH connection.
# For more details see: https://github.com/docker/docker-py/issues/2246
# Monkey-patching match_hostname with a version that supports
# IP-address checking. Not necessary for Python 3.5 and above
# Argument compatibility/mapping with
# https://docs.docker.com/engine/articles/https/
# This diverges from the Docker CLI in that users can specify 'tls'
# here, but also disable any public/default CA pool verification by
# leaving verify=False
# If the user provides an SSL version, we should use their preference
# If the user provides no ssl version, we should default to
# TLSv1_2.  This option is the most secure, and will work for the
# majority of users with reasonably up-to-date software. However,
# before doing so, detect OpenSSL version to ensure we can support it.
# If the OpenSSL version is high enough to support TLSv1_2,
# then we should use it.
# Otherwise, TLS v1.0 seems to be the safest default;
# SSLv23 fails in mysterious ways:
# https://github.com/docker/docker-py/issues/963
# "client_cert" must have both or neither cert/key files. In
# either case, alert the user when both are expected, but any are
# missing.
# If verify is set, make sure the cert exists
# Do not fail here if no authentication exists for this
# specific registry as we can have a readonly pull. Just
# put the header if we can.
# auth_config needs to be a dict in the format used by
# auth.py: username, password, serveraddress, email
# This is a docker index repo (ex: username/foobar or ubuntu)
# We sometimes fall back to parsing the whole config as if it
# was the auth config by itself, for legacy purposes. In that
# case, we fail silently and return an empty conf if any of the
# keys is not formatted properly.
# Other values are irrelevant if we have a token
# Starting with engine v1.11 (API 1.23), an empty dictionary is
# a valid value in the auths config.
# https://github.com/docker/compose/issues/3265
# Likely missing new Docker config file or it is in an
# unknown format, continue to attempt to read old location
# and format.
# Default to the public index server
# The ecosystem is a little inconsistent with index.docker.io vs
# docker.io - in that case, it seems the full URL is necessary.
# Retrieve all credentials from the default store
# credHelpers entries take priority over all others
# Not enough data
# requests 1.2 supports response as a keyword argument, but
# requests 1.1 does not
# If we do not have any auth data so far, try reloading the config file
# one more time in case anything showed up in there.
# If dockercfg_path is passed check to see if the config file exists,
# if so load that config.
# SSH has a different default for num_pools to all other adapters
# host part of URL should be unused, but is resolved by requests
# module in proxy_bypass_macosx_sysconf()
# Use SSLAdapter for the ability to specify SSL version
# version detection needs to be after unix adapter mounting
# Go <1.1 cannot unserialize null to a string
# so we do this disgusting thing here.
# Keep a reference to the response to stop it being garbage
# collected. If the response is garbage collected, it will
# close TLS sockets.
# UNIX sockets cannot have attributes set on them, but that's
# fine because we will not be doing TLS over them
# this read call will block until we get a chunk
# Response is not chunked, meaning we probably
# encountered an error immediately
# Disable timeout on the underlying socket to prevent
# Read timed out(s) for long running processes
# The generator will output tuples (stdout, stderr)
# The generator will output strings
# Wait for all the frames, concatenate them, and return the result
# Do not change the timeout if it is already disabled.
# We should also use raw streaming (without keep-alive)
# if we are dealing with a tty-enabled container.
# If we do not have any auth data so far, try reloading the config
# file one more time in case anything showed up in there.
# Send the full auth configuration (if any exists), since the build
# could use any (or all) of the registries.
# See https://github.com/docker/docker-py/issues/1683
# find the underlying socket object
# based on api.client._get_raw_response_socket
# We are working with a paramiko (SSH) channel, which does not
# support cancelable streams with the current implementation
# Copyright (c) 2016-2025 Docker, Inc.
# remove http+ from default docker socket url
# set default docker endpoint if no endpoint is set
# check docker endpoints
# unknown format
# for docker endpoints, set defaults for
# Host and SkipTLSVerify fields
# FIXME: We may need to disable buffering on Py3 as well,
# but there's no clear way to do it at the moment. See:
# https://github.com/docker/docker-py/issues/1799
# The select_proxy utility in requests errors out when the provided URL
# does not have a hostname, like is the case when using a UNIX socket.
# Since proxies are an irrelevant notion in the case of UNIX sockets
# anyway, we simply return the path URL directly.
# See also: https://github.com/docker/docker-py/issues/811
# Hotfix for requests 2.32.0 and 2.32.1: its commit
# https://github.com/psf/requests/commit/c0813a2d910ea6b4f8438b91d315b8d181302356
# changes requests.adapters.HTTPAdapter to no longer call get_connection() from
# send(), but instead call _get_connection().
# Fix for requests 2.32.2+:
# https://github.com/psf/requests/commit/c98e4d133ef29c46a9b68cd783087218a8075e05
# drop LD_LIBRARY_PATH and SSL_CERT_FILE
# When re-using connections, urllib3 calls fileno() on our
# SSH channel instance, quickly overloading our fd limit. To avoid this,
# we override _get_conn
# Connection is closed try a reconnect
# See Remarks:
# https://msdn.microsoft.com/en-us/library/aa365800.aspx
# Another program or thread has grabbed our pipe instance
# before we got to it. Wait for availability and attempt to
# connect again.
# Blocking mode
# Timeout mode - Value converted to milliseconds
# When re-using connections, urllib3 tries to call select() on our
# NpipeSocket instance, causing a crash. To circumvent this, we override
# _get_conn, where that check happens.
# See also: https://github.com/docker/docker-py/issues/811
# It is important to prepend our variables, because we want the
# variables defined in "environment" to take precedence.
# is some flavor of "**"
# Treat **/ as ** so eat the "/"
# is "**EOF" - to align with .gitignore just accept all
# is "**"
# Note that this allows for any # of /'s (even 0) because
# the .* will eat everything, even /'s
# is "*" so map it to anything but "/"
# "?" is any char except "/"
# NpipeSockets have their own error types
# pywintypes.error: (109, 'ReadFile', 'The pipe has been ended.')
# Limited to 1024
# npipes do not support duplex sockets, so we interpret
# a PIPE_ENDED error as a close operation (0-length read).
# We have reached EOF
# If the streams are multiplexed, the generator returns strings, that
# we just need to concatenate.
# If the streams are demultiplexed, the generator yields tuples
# (stdout, stderr)
# It is guaranteed that for each frame, one and only one stream
# is not None.
# Extra files override context files with the same name
# This happens when we encounter a socket file. We can safely
# ignore it and proceed.
# Workaround https://bugs.python.org/issue32713
# Windows does not keep track of the execute bit, so we make files
# and directories executable by default.
# Directories, FIFOs, symlinks... do not need to be read.
# Heavily based on
# https://github.com/moby/moby/blob/master/pkg/fileutils/fileutils.go
# If we want to skip this file and it is a directory
# then we should first check to see if there's an
# excludes pattern (e.g. !dir/file) that starts with this
# dir. If so then we cannot skip this dir.
# Remove trailing spaces
# Leading and trailing slashes are not relevant. Yes,
# "foo.py/" must exclude the "foo.py" regular file. "."
# components are not relevant either, even if the whole
# pattern is only ".", as the Docker reference states: "For
# historical reasons, the pattern . is ignored."
# ".." component must be cleared with the potential previous
# component, regardless of whether it exists: "A preprocessing
# step [...]  eliminates . and .. elements using Go's
# filepath.".
# Dockerfile not in context - read data to insert into tar later
# Dockerfile is inside the context - return path relative to context root
# Only calculate relpath if necessary to avoid errors
# on Windows client -> Linux Docker
# see https://github.com/docker/compose/issues/5969
# In the case of a legacy `.dockercfg` file, we will not
# be able to load any JSON data.
# NOTE: this is only relevant for Linux hosts
# (does not apply in Docker Desktop)
# Sensible defaults
# https://bugs.python.org/issue754016
# These protos are valid aliases for our library but not for the
# official spec
# "tcp://" is exceptionally disallowed by convention;
# omitting a hostname for other protocols is fine
# For legacy reasons, we consider unix://path
# to be valid and equivalent to unix:///path
# Rewrite schemes to fit library internals (requests adapters)
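The legacy `unix://path` normalization mentioned above can be sketched as follows (the function name is hypothetical):

```python
def normalize_unix_url(addr):
    # For legacy reasons, "unix://path" is accepted and treated as
    # equivalent to "unix:///path" (socket path anchored at the root).
    if addr.startswith("unix://") and not addr.startswith("unix:///"):
        path = addr[len("unix://"):]
        if path:
            addr = "unix:///" + path
    return addr
```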
# empty string for cert path is the same as unset.
# empty string for tls verify counts as "false".
# Any value or 'unset' counts as true.
# assert_hostname is a subset of TLS verification,
# so if it is not set already then set it to false.
# Check if the variable is a string representation of an int
# without a units part. Assuming that the units are bytes.
# Reconvert to long for the final result
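The int-vs-units parsing described above can be sketched like this (the suffix set and semantics are illustrative):

```python
def parse_bytes(value):
    # Accept an int, a bare digit string (assumed to be bytes because
    # there is no units part), or a number with a one-letter binary
    # unit suffix (b/k/m/g, case-insensitive).
    units = {"b": 1, "k": 1 << 10, "m": 1 << 20, "g": 1 << 30}
    if isinstance(value, int):
        return value
    value = value.strip().lower()
    if value.isdigit():
        return int(value)  # no units part: assume bytes
    number, suffix = value[:-1], value[-1:]
    if suffix in units and number.isdigit():
        return int(number) * units[suffix]
    raise ValueError("cannot parse size: %r" % value)
```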
# Use format dictated by Swarm API if container is part of a task
# Match full string
# External part
# Address
# External range
# Internal range
# This is the worst hack, but it prevents a bug in Compose 1.14.0
# https://github.com/docker/docker-py/issues/1668
# TODO: remove once fixed in Compose stable
# docker-credential-pass will return an object for nonexistent servers
# whereas other helpers will exit with returncode != 0. For
# consistency, if no significant data is returned,
# raise CredentialsNotFound
# shutil.which() already uses PATHEXT on Windows, so on
# Python 3 we can simply use shutil.which() in all cases.
# (https://github.com/docker/docker-py/commit/42789818bed5d86b487a030e2e60b02bf0cfa284)
# string or None
# LooseVersion object or None
# dict[str, dict[str, Any]] or None
# Return (module, active_options, client)
# convert from str to list
# convert from list to str
# convert from list to str.
# Same behavior as Docker CLI: if networks are specified, use the name of the first network as the value for network_mode
# (assuming no explicit value is specified for network_mode)
# Streamline options
# Add result to list
# We only allow IPv4 and IPv6 addresses for the bind address
# Any published port should also be exposed
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we do not end up in an infinite loop.
# Unpause
# Now try again
# We only loop when explicitly requested by 'continue'
# Cannot use get('Labels', {}) because 'Labels' may be present and be None
# If there are labels from the base image that should be removed and
# base_image_mismatch is fail, we want to raise an error.
# Format label for error message
# 'networks' is handled out-of-band
# The 'default' network_mode value is translated by the Docker daemon to 'bridge' on Linux and 'nat' on Windows.
# This happens since Docker 26.1.0 due to https://github.com/moby/moby/pull/47431; before, 'default' was returned.
# According to https://github.com/moby/moby/, support for HostConfig.Mounts
# has been included at least since v17.03.0-ce, which has API version 1.26.
# The previous tag, v1.9.1, has API version 1.21 and does not have
# HostConfig.Mounts. I am not sure about API 1.25...
# golang's omitempty for bool returns None for False
# binds
# volumes
# We only expect anonymous volumes to show up in the list
# Only pass anonymous volumes to create container
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517
# Try to inspect container to see whether this is an ID or a
# name (and in the latter case, retrieve its ID)
# If we cannot find the container, issue a warning and continue with
# what the user specified.
# Keep track of all module params and all option aliases
# Process comparisons specified by user
# If '*' appears in comparisons, process it first
# `networks` is special: only update if
# some value is actually specified
# Now process all other comparisons.
# Find main key
# Check value and update accordingly
# Copy values
# Inspect container
# Check container state
# Wait
# Exponential backoff, but never wait longer than 10 seconds
# (1.1**24 < 10, 1.1**25 > 10, so it will take 25 iterations)
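The capped exponential backoff described here can be sketched as follows (a minimal illustration with an assumed `check` callback; the real module polls container state between sleeps):

```python
import time

def wait_with_backoff(check, max_delay=10.0, factor=1.1):
    """Poll check() with exponential backoff, capping each sleep at max_delay."""
    delay = 1.0
    while not check():
        time.sleep(min(delay, max_delay))
        # 1.1**24 < 10 < 1.1**25, so the cap kicks in after ~25 iterations
        delay *= factor
```
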
# If the image parameter was passed then we need to deal with the image
# version comparison. Otherwise we handle this depending on whether
# the container already runs or not; in the former case, in case the
# container needs to be restarted, we use the existing container's
# image ID.
# New container
# Wait for container to be removed before trying to create it
# Existing container
# `None` means that no health check is enabled; simply treat this as 'healthy'
# Return the latest inspection results retrieved
# If the image is not there, or pull_check_mode_behavior == 'always', claim we'll
# pull. (Implicitly: if the image is there, claim it already was latest unless
# pull_check_mode_behavior == 'always'.)
# No match.
# Ignore the result
# Record the differences
# Since the order does not matter, sort so that the diff output is better.
# For selected values, use one entry as key
# We sort the list of dictionaries by using the sorted items of a dict as its key.
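Sorting a list of dictionaries "by the sorted items of a dict as its key" can be sketched like this (assumes the dict values are mutually comparable):

```python
def sort_dict_list(dicts):
    # Sorting on the sorted (key, value) items gives each dict a canonical,
    # order-independent form, so the diff output is stable.
    return sorted(dicts, key=lambda d: sorted(d.items()))
```
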
# Only consider differences of properties that can be updated when there are also other differences
# remove the container from the network, if connected
# connect to the network
# Set `failed` to True and return output as msg
# Python 2.x
# pylint: disable=deprecated-class
# pylint: disable=consider-using-dict-comprehension
# Copyright: (c) 2021, [ Hitachi Vantara ]
# module.exit_json(changed=True, data=raw_message)
# module.exit_json(changed=False, data=raw_message)
# Not including storage component details due to length constraints.
# Check if the current encryption mode matches requested mode
# Regex to match and extract timestamp for sorting
# List and filter log bundle files
# Sort files by timestamp in filename, most recent first
# Files to delete
# Delete older files
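The cleanup steps above (filter by name, sort on the timestamp embedded in the filename, keep the newest, delete the rest) can be sketched as follows. The file-name pattern `bundle_YYYYMMDD-HHMMSS.zip` is a hypothetical stand-in for the module's actual naming scheme:

```python
import os
import re

# Hypothetical pattern; the real bundle naming scheme may differ
TS_RE = re.compile(r"bundle_(\d{8}-\d{6})\.zip$")

def prune_log_bundles(directory, keep=3):
    # List and filter bundle files, extracting the timestamp for sorting
    bundles = []
    for name in os.listdir(directory):
        m = TS_RE.search(name)
        if m:
            bundles.append((m.group(1), name))
    # Most recent first; everything past `keep` gets deleted
    bundles.sort(reverse=True)
    for _, name in bundles[keep:]:
        os.remove(os.path.join(directory, name))
    return [name for _, name in bundles[:keep]]
```
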
# Always write OS info
# Create zip
# Clean up temp directory
# Clean up old log bundles
# Optionally treat missing files as a non-fatal warning
# module.exit_json(changed=True, data=raw_message, user_consent_required=registration_message)
# return True
# Assign default values to self.param.json_spec
# Update self.param.json_spec with the values from json_spec
# Check if the storage class already exists
# mapi_full_url = f"https://admin.{region}.{cluster_name}"
# Ensure jobParameters is not included in the request
# Ensure jobId is not included in the request
# return gateway.http_pd(
# gateway = OOGateway()
# connection_info = self.param.connection_info
# Return whole input unchanged
# self.json_spec = dict()
# self.json_spec["id"] = json_spec["id"]
# ADD operation validation
# storage_component_config["namespace"] = "ucp"
# json_data = {k: v for k, v in json_data.items() if v is not None}
# "read_only": {"type": "bool", "required": False,
# self.ignore_apis = IGNORED_APIS
# Backtrack calling context
# Update tracking data
# if self.data is not None or self.data != {}:
# if not success:
# If the file does not exist, initialize with empty data
# Write data to a hidden temporary file (no lock yet)
# Lock only during the replacement step
# Acquire lock for the critical section
# Atomic replacement of the file
# Release the lock
# Clean up any leftover temporary file
# Optionally, clean up the lock file if no longer needed
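The write-then-replace sequence described above can be sketched as follows. The locking around the replacement step is elided here (a real implementation would hold a lock file, e.g. via `fcntl.flock`, during `os.replace`):

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write JSON atomically: dump to a hidden temp file, then replace."""
    directory = os.path.dirname(os.path.abspath(path))
    # Write data to a hidden temporary file first (no lock needed yet)
    fd, tmp = tempfile.mkstemp(prefix=".", dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
        # Atomic replacement of the file (atomic on POSIX and Windows)
        os.replace(tmp, path)
    finally:
        # Clean up any leftover temporary file on failure
        if os.path.exists(tmp):
            os.remove(tmp)
```
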
# Generate a timestamped backup file name
# Create backup directory
# Move the corrupted file to the backup directory
# Create a new empty JSON file
# Set the site_id for the request
# Prepare the request body
# # log.writeDebug(f"Processing request... {body}")
# Make a request using open_url from Ansible module
# Set to True in production for security
# Handle response
# Get API key
# Extracts 'admin.us-west-2b.vsp1o-2-k8s.scl.sie.hds.com'
# The second part is the region
# Everything after region is the cluster name
# step 1. Get Bearer Token
# step 2. Get XSRF Token
# headers=headers,
# step 2. Get Users List
# mask sensitive data when logging
# response_data = json.loads(response.read())
# logger.writeDebug("Response data: {}".format(response_data))
# PROJECT DETAILS
# LOGGING CONSTANTS
# MAPI CONSTANTS
# TELEMETRY CONSTANTS
# File Name Constants
# MODULE CONSTANTS
# from time import gmtime, strftime
# except ImportError as error:
# Define the log directory and ensure it exists
# print("ANSIBLE_LOG_PATH={}".format(ANSIBLE_LOG_PATH))
# Define the log file path
# Logging configuration dictionary
# root logger
# Apply the logging configuration
# Manually add RotatingFileHandler to the loggers
# Use the existing formatter from the configuration
# Add the handler to the root logger
# Add the handler to the hv_logger
# Define the base directories to check
# Define the target subdirectory to look for
# Iterate over the base directories to find the target subdirectory
# Fallback to determining the directory from the current file's location
# If none of the directories exist, return the default user-specific directory
# Get the previous frame (two levels up)
# $HOME/.ansible/collections/ansible_collections/hitachivantara/vspone_object/messages.properties
# Assuming Log() is defined elsewhere
# Split the string into a list of words using '_' as the separator
# Capitalize each word except the first, then join them
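The two steps above (split on `'_'`, capitalize every word except the first, rejoin) amount to a snake_case-to-camelCase conversion; a minimal sketch:

```python
def snake_to_camel(name):
    # Split the string into words using '_' as the separator,
    # capitalize each word except the first, then join them
    first, *rest = name.split("_")
    return first + "".join(word.capitalize() for word in rest)
```
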
# Recursive function
# If the data is a string and is a valid JSON string, parse it into a
# If the data is a dictionary
# Mask the sensitive key
# If the data is a list
# If the data is a byte string, decode and mask it
# If the data is a string, mask if it's in fields_to_mask
# Mask the sensitive string
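The recursive masking walk described above can be sketched like this. The sensitive field names are illustrative, not the module's actual list:

```python
import json

# Illustrative field names; the real module carries its own fields_to_mask
SENSITIVE = {"password", "api_token", "authorization"}

def mask(data, fields=SENSITIVE):
    # If the data is a string holding valid JSON, parse it and recurse
    if isinstance(data, str):
        try:
            return mask(json.loads(data), fields)
        except (ValueError, TypeError):
            return data
    # If the data is a dictionary, mask sensitive keys and recurse into the rest
    if isinstance(data, dict):
        return {k: "******" if k in fields else mask(v, fields)
                for k, v in data.items()}
    # If the data is a list, recurse element-wise
    if isinstance(data, list):
        return [mask(item, fields) for item in data]
    # If the data is a byte string, decode it and recurse
    if isinstance(data, bytes):
        return mask(data.decode("utf-8", "replace"), fields)
    return data
```
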
# Read the response body from the error (if available)
# storage_model: Optional[str] = None
# storage_serial: Optional[int] = None
# storage_type: Optional[int] = None
# connection_type: Optional[int] = None
# REGISTRATION_FILE_NAME,
# REGISTRATION_FILE_PATH,
# output_dict = port_auth.data_to_list()
# self.logger.writeDebug(f"MOD:hv_sds_block_chap_user_facts:argument_spec= {self.connection_info}")
# self.logger.writeDebug(f"MOD:hv_sds_block_chap_user_facts:spec= {self.spec}")
# self.logger.writeDebug(f"MOD:hv_sds_block_compute_node_facts:argument_spec= {self.connection_info}")
# self.logger.writeInfo(f"{data}")
# self.logger.writeDebug('20240726 volume_data={}',volume_data)
# mandatory and optional arguments can be added here
# host_group_data_result = (
# sng20241115 for the secondary_connection_info remote_connection_manager
# msg = comment
# if msg is None:
# "resource_groups": rg,
# self.spec.copy_group_name = self.spec.name
# self.logger.writeDebug(
# if result is not None and result is not str:
# mandatory and optional arguments can be added here
# "truecopy_info": data,
# "msg": self.get_message(),
# Copyright: (c) 2023, [ Hitachi Vantara ]
# VSPStoragePoolArguments
# self.logger.writeDebug(f"MOD:hv_truecopy_facts:spec= {self.spec}")
# nosec - This is a trusted command that is used to get the ansible version from the system
# Define the directory and pattern for the zip files
# Get a list of all zip files matching the pattern
# Sort files by creation time (or last modification time if creation time is unavailable)
# Keep only the latest 3 files
# Delete the older files
# Extract distribution name and version from /etc/os-release
# Get OS version based on the platform
# Get detailed Linux distribution info
# Fallback to kernel version if the distribution info is unavailable
# macOS version
# Windows version
# Fallback for other systems
# Get OS edition
# Get OS version
# Get Ansible version (if installed)
# Check if Ansible is installed
# Check if the Ansible executable is valid
# Check if the Ansible executable is valid and accessible
# Get the Ansible version
# Get Python version
# Get system information
# Print the results
# Write the system information to file
# Print a success message
# Log.getHomePath() is /opt/hitachivantara/ansible
# Calculate relative path to preserve the directory structure in the destination
# Make sure each corresponding directory exists in the destination
# Copy each .yml file to the corresponding directory in the destination
# copy registration files
# handle the usage file content
# comment out the registration files
# self.secondary_connection_info = (
# self.storage_serial_number = self.params_manager.get_serial()
# if self.state == "split":
# self.secondary_connection_info,
# if self.connection_info.connection_type == ConnectionTypes.GATEWAY:
# , self.secondary_connection_info
# sng20241115 for the remote_connection_manager
# 20240805
# sng20241115 - hur_pair_facts_direct
# return VSPHurPairInfo(data=ret_tc_pairs)
# sng20241115 - prov.get_all_hur_pairs_direct
# First we check if there is a copy group name present in the spec
# ret_list = self.cg_gw.get_all_copy_pairs(spec)
# return DirectCopyPairInfoList(data=ret_list)
# sng20241115 pair: DirectSpecificCopyGroupInfo or list
# sng20241115 change replicationType here to use other types for testing
# cgs is a dict; an element of the dict can sometimes be an array
# this element of the dict is not an array,
# we can now get the copyPairs from the cgs dict
# handle copyPair class objects
# primary_volume_id = spec.get('primary_volume_id',None)
# 20240805, update - removed if true
# 20240808 - one pvol can have 3 pairs, this only returns the first pair
# 20240808 - get_hur_by_pvol_mirror, mirror id support
# 20240808 - one pvol can have 3 pairs
# 20240912 - get_hur_by_pvol_svol
# 20240808 delete_hur_pair
# pair_exiting = self.gateway.get_replication_pair(spec)
# if (
# ):
# pair = self.gateway.get_replication_pair(spec)
# a get immediately after create may return "Unable to find the resource"; give it 5 secs
# if the HUR creation fails, delete the secondary volume
# return vol_gw.get_volume_by_id(device_id, primary_volume_id)
# err_msg = VSPShadowImagePairValidateMsg.SVOL_NOT_FOUND.value
# logger.writeError(err_msg)
# raise ValueError(err_msg)
# unpresent the svol from the host group
# self.nvme_provisioner.delete_host_namespace_path(ldev_found.nvmSubsystemId, ldev_found.hostNqn, int(ldev_found.namespaceId))
# delete the svol
# from ..model.vsp_user_models import (
# from model.vsp_user_models import (
# all_result = []
# all_result.append(result)
# return all_result
# Older storage models do not support nvm subsystem.
# So catch the exception and send user friendly msg.
# controllers = None
# controllers = settings.get("data", [])
# if spec is not None and spec.storage_node_name:
# if spec is not None and spec.id:
# tmp_port["tags"] = []
# Get a list of storage system
# Get the specified storage system
# Some storage models do not support capacity feature.
# So set value of total and free capacities to invalid values.
# Retrow the exception
# Get syslog servers
# Set default values
# if "pools" in query:
# if "ports" in query:
# if "quorumdisks" in query:
# if "journalPools" in query:
# if "freeLogicalUnitList" in query:
# logger.writeDebug(f"hg.portId={hg.portId}, hg.hostGroupNumber={hg.hostGroupNumber}")
# logger.writeDebug(f"port_id={port_id}, hg_id={hg_id}")
# logger.writeDebug("PV:resource_groups={}", resource_groups)
# return resource_groups
# don't do all this crap for meta resource groups
# host_group = host_group_one.data
# logger.writeDebug("PV:get_display_host_groups:hg={}", host_group)
# logger.writeDebug("PV:get_display_iscsi_targets:iscsi_targets={}", iscsi_targets)
# pvol = self.get_volume_by_id(spec.primary_volume_id)
# if pvol is None:
# sng20241127 - pvol set_alua_mode
# # verify the pvol isAluaEnabled
# if the GAD creation fails, delete the secondary volume
# just return the first one for now
# expect DirectCopyPairInfo here
# pairs = self.cg_gw.get_all_copy_pairs_by_copygroup_name(spec)
# DIRECT should use gad_pair_facts cg_gw
# TODO - spec is not needed for gateway, clean up the gateway layer later
# after the auto split, the input "pair" is a DirectCopyPairInfo obj
# pvol = self.get_volume_by_id(gad_pair.pvolLdevId)
# pair_id = self.gateway.get_pair_id_from_swap_pair_id(swap_pair_id, spec.secondary_connection_info)
# logger.writeDebug(f"PV: swap_pair_id = {swap_pair_id} pair_id = {pair_id}")
# sng20241126 swap_split_gad_pair consistencyGroupId check
# common code is checking it?
# if gad_pair["pvolStatus"] == PairStatus.PSUS:
# tc = None
# if spec.primary_volume_id:
# from
# already in split state
# if gad_pair["pvolStatus"] == PairStatus.PAIR:
# already in pair state
# sng20241115 - prov.gad_pair_facts.direct
# logger.writeDebug("sng20241115 get_all_gad_pairs_direct.secondary_connection_info ={}", spec.secondary_connection_info)
# sng20241115 - prov.get_all_gad_pairs_direct
# if tc.ldevId == primary_vol_id or tc.remoteLdevId == primary_vol_id:
# commented by ansible sanity test
# and copyPair.pvolStatus != "SMPL"
# sng20241115 pairs: []DirectCopyPairInfo
# sng1104 - GET FACTS, formatting is in the reconciler layer
# do pegasus switch here
# invalid code , commented by ansible sanity test
# if False:
# raise ValueError(GADPairValidateMSG.SECONDARy_SYSTEM_CANNOT_BE_SAME.value)
# Fail early, save time
# Before creating the secondary volume check if secondary hostgroups exist
# the name change is done in the update_volume method
# sng20241127 - set svol label/name and set_alua_mode
# if setting the volume name fails, delete the secondary volume
# verify the svol set label and set_alua_mode
# self.vol_gateway.assign_vldev(sec_vol_id, sec_vol_id)
# Can't blindly remove the ldev from the resource group, it could be
# already attached to some hostgroups
# if the secondary volume RG does not match with HG RG throw error
# if there is a lun path you can't change RG ID
# hg_info = self.parse_hostgroup(host_group)
# logger.writeDebug("PROV:get_secondary_volume_id:hg_info = {}", hg_info)
# GAD reserved
# if attaching the volume to the host group fails, delete the secondary volume
# if attaching the volume to the host group fails, detach them
# capture namespace ID
# Before creating the secondary volume check if secondary hostgroup exists
# host_group = self.get_secondary_hostgroup(spec.secondary_hostgroups)
# per UCA-2281
# ns_id = ns_id.split(",")[-1]
# self.rg_gateway.add_resource(0, add_resource_spec)
# assign back vldev
# consistency_group_id = spec.consistency_group_id or -1
# enable_delta_resync = spec.enable_delta_resync or False
# count_query = "count={}".format(16384)
# ldev_option_query = "ldevOption=dpVolume"
# Add encryption info to the pool
# self.add_encryption_info(pool)
# Fetch data protection volumes
# pool.dpVolumes = self.fetch_dp_volumes(pool)
# Format the storage pool
# sng20241206 add_encryption_info for pool
# get the volume
# if self.pg_info is None:
# for pg in self.pg_info.data:
# pool.isEncrypted = False
# if pool_spec.pool_volumes is None:
# handle the case to create a volume in the parity group id
# if self.connection_type == ConnectionTypes.DIRECT:
# Create the volume
# Compare and update only if there are changes
# Get a list of parity groups
# Get a list of external parity groups
# self.direct_get_parity_group_by_id(spec.parity_group_id)
# input = (CL4-C, 13)
# Validate the SNMP specification
# if spec.snmp_v3_authentication_settings:
# we are not supporting this yet,
# just a placeholder for now
# get the hex format of the volume
# 1345 -> 0541
# get the storage serial encoding
# 410109 -> 40277D
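The volume-id example above (1345 -> 0541) is a zero-padded, 4-digit, uppercase hex rendering; the `volume_id_to_hex_format` name appears elsewhere in these comments, so a sketch under that name (the serial-number encoding on the following line is vendor-specific and not reproduced here):

```python
def volume_id_to_hex_format(ldev_id):
    # 1345 -> '0541': zero-padded, 4-digit, uppercase hex
    return format(ldev_id, "04X")
```
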
# assume the same external ldev will not appear in two external storages
# if hex_serial != serialInfo :
# def undo(self):
# VSPHostGroupDirectGateway
# host_group = VSPHostGroupInfo()
# host_group.port = hgport
# host_group.hostGroupId = hgname
# Failed to register quorum disk,
# invalid quorum_disk_id (ran out of quorum disks)
# auto select with ldev_id
# see if ldev_id is already registered
# allow ldev-less QRD
# if spec.ldev_id is None:
# register the remote storage system
# register quorum_disks
# ldev_id can be None for ldev-less QRD
# Get storage cluster info
# Get health status
# Get drives
# Get ports
# Get pools
# logger.writeDebug(f"PROV:60 tc= {tc}")
# tc_pair_id = "remoteStorageDeviceId,copyGroupName,localDeviceGroupName,remoteDeviceGroupName,copyPairName"
# If we have both copy_group_name and copy_pair_name, we can delete the pair directly
# Deleting TC by primary_volume_id is only supported for VSP One
# secondary_storage_info = self.gateway.get_secondary_storage_info(spec.secondary_connection_info)
# storage_model = secondary_storage_info["model"]
# if we have copy_group_name and copy_pair_name, we can directly resync the
# pair and return the pair information
# if we have copy_group_name and copy_pair_name, we directly swap_resync the
# DO NOT NEED THIS FUNCTION AS WE ARE NOW DIRECTLY RUNNING FROM SECONDARY
# swap_pair_id =  self.gateway.swap_resync_true_copy_pair(spec)
# logger.writeDebug(f"PV:swap_resync_true_copy_pair: swap_pair_id = {swap_pair_id} pair_id = {pair_id}")
# if we have copy_group_name and copy_pair_name, we directly split the
# if we have copy_group_name and copy_pair_name, we directly swap_split the
# swap_pair_id =  self.gateway.swap_split_true_copy_pair(spec)
# spec.is_data_reduction_force_copy = pvol.isDataReductionShareEnabled
# if the TC creation fails, delete the secondary volume if it was created
# @LogDecorator.debug_methods
# self.nvme_provisioner = VSPNvmeProvisioner(self.connection_info, self.serial)
# this version of the get_one_snapshot will return None instead of raising an exception,
# if the snapshot does not exist
# get the full hg name
# self.fix_host_group_names(volume.ports)
# this function only deals with creating new pair
# the pool_id is not required for the new snapshot creation
# if none, use the pvol pool_id
# floating snapshot
# get the hg with the full name
# get the hg with the attached luns info
# option #1
# hg_provisioner = VSPHostGroupProvisioner(self.connection_info)
# this call is getting 500 since it's trying to get all hgs
# hg = hg_provisioner.get_one_host_group_using_hg_port_id(
# self.logger.writeDebug(f"20250324 hg: {hg}")
# hg = hg_provisioner.get_one_host_group(
# # we need the hg with the full name
# # unpresent the svol from the host group
# hg_provisioner.delete_luns_from_host_group(hg, luns=[ssp.svolLdevId])
# option #2, the hostGroupName is incomplete from pf-rest
# for hg_info in svol.ports:
# query for the hg.lunPaths
# the name in hg_info is not the full name, it's 16 chars max
# assign svol to the floating snapshot
# 4 cases for svol_id:
# 1. svol is undefined - create a new volume using the svol_id
# 2. svol is defined - validate then use it to create snapshot
# 3. svol is -1 - create/unassign floating volume snapshot
# 4. svol_id is None - create a new volume and use it to create snapshot
# case 3, create floating snapshot
# m.id not given, only pvol and svol, idempotent check before creating snapshot
# changed = False
# fix uca-3157, we want to show the error message from create_snapshot
# Check if vvol or normal lun is required to create
# 1. svol_id is none - same as before, create a new volume
# 2. svol is defined - use svol, no need to create a new volume
# 3. svol is not defined - use svol_id to create volume
# create a new volume, svol
# use the user provided svol_id, a freelun/undefined ldev id
# set the data reduction force copy to true
# Assign the svol and pvol to the host group
# UCA-2602 for the case where the snapshot is already in PAIR status, do nothing, removed the check
# Select the first free volume in the range
# If no range is specified, get the first free volume
# check if pvol is iscsi
# if port.portInfo["portType"] != "FIBRE" or port.portInfo["mode"] != "SCSI":
# pvolNvmSubsystemId = vol_info.nvmSubsystemId
# if pvolNameSpaceId is None or pvolNameSpaceId == "":
# Before creating the secondary volume check if secondary nvmsubsystem exists
# if int(nvme_subsystem.nvmSubsystemId) != int(pvolNvmSubsystemId):
# logger.writeDebug("PROV:get_secondary_volume_id:spec = {}", spec)
# logger.writeDebug("PROV:get_secondary_volume_id:host_group = {}", host_group)
# self.hg_gateway.add_luns_to_host_group(host_group, [sec_vol_id])
# iscsi_targets.append(
# hostgroups.append(
# sng20241205 prov change_volume_settings_vldev
# vol_id: ldevId we want to change
# existing vldevId, can be none
# need to unassign old first before you can assign new
# unassign only, we are done
# unassign volume with own ldevid
# and vldev_id != 65534
# sng20241205 prov create_volume
# set the volume status to block before shredding
# fetch the remote storage device id and type id
# self.get_remote_connection_info()
# Normalize the remote paths to spec type and compare
# Check if the dynamic pool already exists
# if spec.drives is None:
# validate the ddp drives count
# Helper function to convert camelCase to snake_case
# Insert underscores before uppercase letters and convert to lowercase
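The helper described above (insert underscores before uppercase letters, then lowercase) is the usual camelCase-to-snake_case conversion; a minimal sketch using a zero-width lookahead:

```python
import re

def camel_to_snake(name):
    # Insert an underscore before each uppercase letter (except at the
    # start of the string), then convert everything to lowercase
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()
```
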
# pool_spec.startLdevId is None
# and pool_spec.endLdevId is None
# If any of the conditions do not match, perform the update
# Everything is already updated.
# In case no copy pairs in the copy group return copy group information.
# found_copy_group = self.gateway.get_one_copygroup_info_by_name(spec)
# if found_copy_group is None:
# elif found_copy_group is not None:
# self.gateway.get_one_copygroup_info_by_name(spec)
# Only one of the four (master_volume_id, master_volume_name, snapshot_volume_id, snapshot_volume_name) can be provided at a time
# If snapshot_volume_name is provided, it should be a valid snapshot volume
# find the external_parity_group in the external_path_group, then get the ldevids from it
# find the external_parity_group in the external_path_group
# get the external volumes for the externalPath
# ( filter by externalPath.portId, externalPath.externalWwn )
# look for the external volume from the external_parity_group in the external_path_group
# logger.writeDebug("20250228 extvols={}", allExtvols)
# for eplist in externalPathsList:
# the external volume to delete
# return None, "No External Storage Volumes in the system."
# logger.writeDebug("20250228 rsp={}", rsp)
# 20250303 creates an external volume from external parity group
# select ext-volume by ldev and serial
# you need these 3 params to find or create the external_parity_group
# vol = vol.camel_to_snake_dict()
# walk thru the external_path_groups
# see if the external_parity_group is already created
# we need to create the external_parity_group
# if it fails, check the lunId, which is the externalLun
# loop thru the externalParityGroups in the externalPathGroups
# get the externalParityGroupId which has this externalLun
# and report it (or offer to delete it)
# this can be a pre-check
# map ext volume: create volume by parity group
# return vol.camel_to_snake_dict(), None
# and spec.is_svol_writable is None
# and spec.do_pvol_write_protect is None
# and spec.do_data_suspend is None
# and spec.do_failback is None
# and spec.is_consistency_group is None
# and spec.fence_level is None
# and spec.copy_pace is None
# ports = self.get_all_storage_ports()
# ports.data = [port for port in ports.data if port.portId in port_ids]
# return ports
# logger.writeError(MessageID.ERR_GetCopyGroups)
# First check if CLPR already exists
# Create new CLPR if it doesn't exist
# Get all CLPRs and filter the one we just created
# First check if CLPR exists
# Check if update is needed
# Proceed with update if needed
# Convert to MB
# * 512 / (1024 * 1024)  # Convert to MB
# Add auto-scaled value (GB or TB)
# Convert to TB if > 1024 GB
# Convert to GB
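The conversions above (512-byte block counts to MB, plus an auto-scaled GB/TB display value) can be sketched as follows; the function name and return shape are assumptions:

```python
def auto_scale_capacity(blocks):
    """Convert a 512-byte block count to MB, plus an auto-scaled GB/TB label."""
    mb = blocks * 512 / (1024 * 1024)  # * 512 / (1024 * 1024) -> MB
    gb = mb / 1024
    if gb > 1024:
        return mb, f"{gb / 1024:.2f}TB"  # Convert to TB if > 1024 GB
    return mb, f"{gb:.2f}GB"             # Otherwise report GB
```
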
# logger.writeDebug('RC:extract:change_keys:key={} value_type2 = {}', key, type(value))
# Get the corresponding key from the response or its mapped key
# logger.writeDebug('RC:extract:self.size_properties = {}', self.size_properties)
# Assign the value based on the response key and its data type
# Handle missing keys by assigning default values
# "lun": int,
# self.qos_param = {
# self.size_properties = ("total_capacity", "used_capacity")
# return nvme_ss
# if spec.host_mode_options and nvme_subsystem.hostModeOptions != spec.host_mode_options:
# for ns in namespaces:
# extracted_data = TrueCopyInfoExtractor(self.storage_serial_number).extract(tc_pairs)
# return extracted_data
# 20240808 hur operations reconciler
# sng20250125 this should be in validate_hur_module
# but calls to validate_hur_module are bypassed in favor of the common validations
# "HUR must be registered in a consistency group"
# sng20250125 validate_hur_spec_ctg, default is auto-assign CTG
# if we don't want the default then raise exception
# raise ValueError(VSPHurValidateMsg.INVALID_CTG_NONE.value)
# 20240905 comment
# Match output with Gateway
# for key, value in resp_data.items():
# updated_resp_data["primary_hex_volume_id"] = volume_id_to_hex_format(
# updated_resp_data["secondary_hex_volume_id"] = volume_id_to_hex_format(
# self.logger.writeDebug("resp_data={}", updated_resp_data)
# 20241218
# sng20241115 virtual vldevid lookup
# in case input is not a list
# logger.writeDebug("RC:sng20241115  144 secondary_connection_info={}", spec.secondary_connection_info)
# convert objs in the input to dict
# DirectCopyPairInfo?
# "resourceId": str,
# "copyPaceTrackSize": int,
# "fenceLevel": str,
# "pairName": str,
# "primaryVSMResourceGroupName": str,
# "primaryVirtualHexVolumeId": str,
# "primaryVirtualStorageId": str,
# "primaryVirtualVolumeId": int,
# "secondaryVSMResourceGroupName": str,
# "secondaryVirtualStorageId": str,
# "secondaryVirtualVolumeId": int,
# "svolAccessMode": str,
# "type": str,
# "secondaryVirtualHexVolumeId": int,
# "entitlementStatus": str,
# "partnerId": str,
# "subscriberId": str,
# new_dict["partner_id"] = "apiadmin"
# "pvolVirtualLdevId":int,
# "svolVirtualLdevId":int,
# "pvol_virtual_ldev_id": "primary_virtual_volume_id",
# "svol_virtual_ldev_id": "secondary_virtual_volume_id",
# "pvol_status": "status",
# "copy_pair_name": "pair_name",
# sng20241126 get_serial_number_from_device_id
# for 'pvolStorageDeviceId': 'A34000810045' -> 810045
# for 'svolStorageDeviceId': 'A34000810050' -> 810050
# supports up to 7 digits device id
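One reading of the mapping above: the serial is the trailing digits of the device id, and since up to 7 digits are supported, taking the last seven characters as an integer covers both examples. This is a guess at the actual parsing, shown as a sketch:

```python
def get_serial_number_from_device_id(device_id):
    # 'A34000810045' -> 810045; the last 7 chars are '0810045',
    # and int() drops the leading zero, so 6- and 7-digit serials both work
    return int(str(device_id)[-7:])
```
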
# "primary_volume_storage_id": self.storage_serial_number,
# "secondary_volume_storage_id": spec.secondary_storage_serial_number,
# new_dict["primary_virtual_volume_id"] = ""
# new_dict["primary_virtual_hex_volume_id"] = ""
# new_dict["secondary_virtual_hex_volume_id"] = ""
# new_dict["secondary_virtual_volume_id"] = ""
# if new_dict.get("primary_hex_volume_id") == "" :
# if new_dict.get("secondary_hex_volume_id") == "" :
# sng20250125 UCA-2466 'VSPHurPairInfo' object has no attribute 'items'
# Key replacement as per the given instructions
# Convert volume IDs to hex format
# Log the updated response data
# self.storage_serial_number = serial
# if new_dict.get("ldev_hex_id") == "" or new_dict.get("ldev_hex_id") is None:
# new_dict = {"storage_serial_number": self.storage_serial_number}
# if value_type == dict:
# before the subobjState change
# make sure all the ports are defined in the storage
# so we can add comments properly
# if wwns is not None:
# if hg with given port is not found, we have to ignore
# Convert WWNs to uppercase and ensure they are strings, removing duplicates
# support the new subobjState keywords
# for hostmode, you can only update, no delete
# hostoptlist is the user input
# # add = new - old
# del = old - new
# if not wwn.nick_name:
# delLun = list(set(hgLun) - set(newLun))
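The set arithmetic referenced in these comments (which luns to add vs. delete when reconciling current against desired) can be sketched as follows; the variable names are hypothetical:

```python
def diff_luns(current, desired):
    # Luns to add are in the desired set but not the current one;
    # luns to delete are in the current set but not the desired one
    add = sorted(set(desired) - set(current))
    delete = sorted(set(current) - set(desired))
    return add, delete
```
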
# check for add/remove hostgroup for each given port
# If newWWN is present,  update wwns
# If luns is present, present or overwrite luns
# get all hgs
# see if all the (hg,port)s are created
# get hg by hg_number
# No host groups found
# Enter create mode
# In this case, if state is absent, we do nothing.
# if we want to support the case:
# where the user gives one lun and one port to create the hg,
# and we sync up the hg on all other ports,
# then we need to call get_host_groups_by_scan_all_ports(self.provisioner, hgName, ?, hostGroups)
# Delete mode.
# result["comment"] = VSPHostGroupMessage.LDEVS_PRESENT.value
# Handle delete host group
# Modify mode. Certain operations only occur if state is overwrite.
# data.host_group.camel_to_snake_dict()
# Do not add default value for wwns and lunPaths because they are in query
# self.logger.writeDebug(f"20250324 port_info: {port_info}")
# if e.args[0]['message'] is not None:
# except Exception as ex:
# "resource_id": str,
# del new_dict["pvolHostGroups"]
# del new_dict["svolHostGroups"]
# logger.writeDebug(f"20250324 item: {item}")
# if self.port_type_dict[item[]]
# new_dict["pvol_host_groups"] = items
# new_dict["pvol_iscsi_targets"] = items
# if value_type == list:
# logger.writeDebug(f"key={key} value_type = {value_type}")
# self.connection_info.changed = True
# return None, str(e)
# msg = "Truecopy pair {} has been deleted successfully.".format(result)
# "remoteMirrorCopyGroupId": str,
# "remoteStorageDeviceId": str,
# "localDeviceGroupName": str,
# "remoteDeviceGroupName": str,
# "copyGroupName": str,
# "consistencyGroupId": int,
# "copyRate": int,
# "ldevId": int,
# "pvolIOMode" : str,
# "svolIOMode" : str,
# "pvolDifferenceDataManagement": str,
# "svolDifferenceDataManagement": str,
# "pvolProcessingStatus": str,
# "svolProcessingStatus": str,
# "pvolJournalId" : int,
# "svolJournalId" : int,
# "remoteSerialNumber": str,
# "remoteStorageTypeId": str,
# "remoteLdevId": int,
# "primaryOrSecondary": str,
# "muNumber": int,
# "status": str,
# "serialNumber": str,
# "storageTypeId": str,
# "isMainframe": bool,
# "copyPairs": list,
# new_dict["primary_storage_serial"] = self.storage_serial_number
# new_dict["secondary_storage_serial"] = self.remote_serial_number
# reconcile the parity group based on the desired state in the specification
# Handle for case of None response_key or missing keys by assigning default values
# free_capacity_mb = ret_value.get("free_capacity_in_units")
# total_capacity_mb = ret_value.get("total_capacity_in_units")
# if free_capacity_mb:
# if total_capacity_mb:
# return self.create_update_storage_pool(spec).to_dict()
# sng20241126 str message to display
# Show pvol and svol size in case of resize.
# resp_in_dict["serialNumber"] = self.storage_serial_number
# resp_in_dict["remoteSerialNumber"] = spec.secondary_storage_serial_number
# sng20241123 you have done a swap-split from the secondary storage,
# but the pair is not swapped yet, so we have to swap to show the fact correctly
# get pvol svol details
# reconcile_gad_pair
# logger.writeDebug("RC: 172:spec={}", spec)
# sng20241114 - TODO
# sng20241218 - swap here for now until operator rework is done
# operation completed, fetch the pair again
# fix for uca 2525
# with the operator fix of uca-2282 we should not have to get it again
# don't expect the pair to be swapped yet even though the input is swapped
# for gateway only
# due to swap operations, primary_volume_id is not required,
# but copy_pair_name is a must
# sng1104 - filter the copy group copy pairs by GAD
# logger.writeDebug("RC:sng20241115 secondary_connection_info={}", spec.secondary_connection_info)
# VspGadPairsInfo class
# logger.writeDebug("RC: 299 spec ={}", spec)
# logger.writeDebug("RC:cglistdict={}", cglistdict)
# sng20250116 get svol_status
# we get the RG number from the porcelain
# we want to show the RG name
# given input_spec.id, it is taking it as a start index
# sng20250115 - get_resource_groups for remote storage
# if pair["partner_id"] is None:
# if pair["subscriber_id"] is None:
# pair is not swapped
# but the self.serial is the remote and
# the primary_volume_id is the secondaryVolumeId
# sng20250129 - get_gad_pair_by_svol_id_gw
# DirectSpecificCopyGroupInfo?
# doSwap=true is only for swap-split
# sng20241126 swap as needed
# sng20241127 get_vsm_info for facts
# gad_copy_pair["primaryVirtualStorageId"] = rg.virtualStorageId
# gad_copy_pair["SecondaryVirtualStorageId"] = rg.virtualStorageId
# logger.writeDebug("sng1104 virtualStorageId={}", rg.virtualStorageId)
# VirtualStorageMachineInfoList
# sng20241123 the copy group from swap-split,
# do swap
# "replicationType": str,
# "remote_storage_type_id": "secondary_storage_type_id",
# "primaryVirtualStorageDeviceId": str,
# in case we get None in the input data
# return cnodes
# return SDSBComputeNodesInfo(data=cn_list)
# logger.writeDebug('RC:add_iqn_to_compute_node:iqn_id={}', self.provisioner.add_iqn_to_compute_node(compute_node_id, iqn))
# hba_ids = self.get_compute_node_hba_ids(compute_node_id)
# nqn_ids = self.get_compute_node_nqn_ids(compute_node_id)
# this is a create
# if the os_type is not provided throw error
# if spec.state is None or empty during create, we will try to attach volumes and
# add iSCSI initiators or host NQNs, based on the information provided in the spec.
# Also note that either IQN or NQN will work depending on the compute port protocol setting.
# iqns are present in the spec, so add them to the newly created compute node
# nqns are present in the spec, so add them to the newly created compute node
# All volumes are present in the spec, so add them to the newly created compute node
# get all the volume names present in the system
# valid volumes are the volumes which are common between all_volume_names and user supplied volume names
# now find the volumes ids that are already attached to the compute node
# create a list of volume ids that need to be attached to a compute node.
# now find the volume names that are already attached to the compute node
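# The attach-set computation described in the comments above (intersect the
# user-supplied names with the system's volumes, then subtract what is already
# attached) could be sketched roughly as follows; the function and parameter
# names here are illustrative assumptions, not the module's actual helpers.

```python
def volumes_to_attach(all_volume_names, requested_names, attached_names):
    """Hypothetical sketch: which requested volumes still need attaching."""
    # valid volumes: those common to the system and the user-supplied list
    valid = set(all_volume_names) & set(requested_names)
    # attach only the valid volumes not already attached to the compute node
    return sorted(valid - set(attached_names))
```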
# if the name is not provided use the current name
# if the os_type is not provided use the current os_type
# user provided an id of the compute node, so this must be an update
# this could be a create or an update
# this is an update
# user provided an id of the compute node, so this must be a delete
# compute_node_id = self.delete_compute_node_by_id(spec.id)
# return compute_node_id
# user provided a compute node name, so this must be a delete
# compute_node_id = self.delete_compute_node_by_id(compute_node.id)
# return compute_node.nickname
# first detach the volume from the compute node
# get the volume information
# if the volume is not attached to any compute node, delete the volume
# default_value = get_default_value(value_type)
# value = default_value
# logger.writeDebug("RC:extract:value_type={}", value_type)
# DO NOT HANDLE MISSING KEYS
# new_dict[cased_key] = default_value
# self.validate_create_spec(spec)
# return self.provisioner.create_external_volume_by_spec(spec)
# self.validate_delete_spec(spec)
# return self.provisioner.delete_external_volume_by_spec(spec)
# @log_entry_exit
# def get_one_external_path_group(self, ext_path_grp_id, is_salamander=False):
# Convert to dict manually (fallback if no .to_dict())
# Unwrap nested objects like SalamanderExternalPathInfoList
# Convert inner SalamanderExternalPathInfo to dicts
# Add empty defaults for expected keys
# def process_list(self, response_key):
# skip if not dict-like
# if cased_key in self.parameter_mapping.keys():
# , secondary_connection_info: str
# if self.secondary_connection_info is None:
# "mu_number": "mirror_unit_id",
# "remote_serial_number": "secondary_storage_serial",
# "serial_number": "primary_storage_serial",
# if "v_s_m" in cased_key:
# new_items.append(new_dict)
# ldev_id and name not present in the spec, but nvm_subsystem_name present
# Check if the ldev is a command device
# Keep this logic always at the end of the volume creation
# sng20241205 VLDEVID_META_RSRC
# if spec.vldev_id and volume_data.resourceGroupId == 0:
# Expand the size if it's required
# if "." in spec.size:
# size_in_bytes = convert_to_bytes(spec.size)
# update the volume by comparing the existing details
# sng20241202 update change_volume_settings_tier
# if spec.state is None or empty during create, we will try to
# add host NQNs, based on the information provided in the spec.
# add ldev to the nvme name space
# During update if spec.state is None, we just return
# If host_nqns is empty just create the namespace for ldev_id
# sng20241202 validate_tiering_policy
# Validation check: The difference between the values of the tier1AllocationRateMax and tier1AllocationRateMin attributes is a multiple of 10.
# Validation check: The sum of the values of the tier1AllocationRateMin and tier3AllocationRateMin attributes is equal to or less than 100.
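# The two validation checks stated in the comments above can be sketched as a
# small helper; the function name and error messages are assumptions, only the
# two rules themselves come from the comments.

```python
def validate_tiering_policy(tier1_alloc_rate_max, tier1_alloc_rate_min,
                            tier3_alloc_rate_min):
    """Hypothetical sketch of the two tiering-policy checks; raises on violation."""
    # check 1: the tier 1 max/min difference must be a multiple of 10
    if (tier1_alloc_rate_max - tier1_alloc_rate_min) % 10 != 0:
        raise ValueError(
            "tier1AllocationRateMax - tier1AllocationRateMin must be a multiple of 10")
    # check 2: tier 1 min plus tier 3 min must be at most 100
    if tier1_alloc_rate_min + tier3_alloc_rate_min > 100:
        raise ValueError(
            "tier1AllocationRateMin + tier3AllocationRateMin must be <= 100")
```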
# spec.size = process_size_string(spec.size)
# host group and iSCSI target info are always included
# This get call is needed for newly created cmd device
# check_vol = self.provisioner.get_volume_by_ldev(volume.ldevId)
# if check_vol.attributes and "CMD" in check_vol.attributes:
# if there is parity group info then this is a basic volume
# new_volume = None
# "ports": list,
# "namespace_id": str,
# "nvm_subsystem_id": str,
# sng20241202 tiering_policy extractor
# tier_level='all' if is_relocation_enabled: false
# "tiering_policy": "tieringProperties",
# "level": "tierLevel",
# "is_data_reduction_share_enabled": "isDRS", # commented out as it is not in the response
# logger.writeDebug("20240825 after gateway creatlun response={}", response)
# Add total_capacity_in_mb and used_capacity_in_mb fields
# build tiering_policy output for gateway
# 20240825 voltiering tieringProperties
# logger.writeDebug("tieringProperties={}", response["tieringProperties"])
# sng20241202
# sng20241202 build tiering_policy output for direct
# 20250312 doesn't look like we can reach here,
# below is likely not used, do not follow
# "user_object_id": "id",
# "user_storage_port": "group_names",
# return self.get_port_by_name(spec.port_name)
# If iqn_initiators is present, update iqn_initiators
# If ldevs is present, present or overwrite ldevs
# If chap_users is present, update chap_users
# for host_mode, you can only update, no delete
# if state == VSPIscsiTargetConstant.STATE_ADD_HOST_MODE or state == StateValue.PRESENT:
# elif state == VSPIscsiTargetConstant.STATE_REMOVE_HOST_MODE or state == StateValue.ABSENT:
# Handle create iscsi target
# Handle update iscsi target
# Handle delete iscsi target
# if response is not None and isinstance(response, str):
# index = err_msg.find("%!(EXTRA ")
# logger.writeDebug("RC:reconcile_rg_lock:index={}", index)
# if index != -1:
# Assign the value based on the response key and its data type.
# reconcile the disk drive based on the desired state in the specification
# 20240822 VSPVolTierReconciler
# F811 Redefinition of unused `__init__`
# def __init__(self, serial):
# user provided an id of the volume, so this must be an update
# user provided an id of the volume, so this must be a delete
# return volumes
# vol_with_cn = SDSBVolumeAndComputeNodeInfo(vol, cn_summary)
# return self.get_volume_by_id(vol_id)
# detach the volume from the compute node
# return self.get_volume_by_id(volume_data.id)
# get the compute node ids supplied in the spec
# get the compute node ids to which this volume is attached
# if len(compute_nodes) == 0:
# Expand the volume if it's required
# if virtual storage id is not 0, then this is a VSM
# if rg.ldevIds:
# Handle host groups and iscsi targets in the add resource case
# list(set(hg_list) - set(rg.hostGroupIds))
# list(set(iscsi_list) - set(rg.hostGroupIds))
# if remove resource is not needed provision layer will return None
# if rg.nvmSubsystemIds != new_rg.nvmSubsystemIds:
# logger.writeDebug("RC:process_list:response_key={}", response_key)
# state = self.state.lower()
# portInfo = self.provisioner.change_port_settings(
# return StoragePortInfoExtractor(self.storage_serial_number).extract(
# epg = self.get_one_external_parity_group(spec.external_parity_group_id)
# Not used fields
# skip empty lines
# Handle the first line of the section as headers
# Parse data lines
# Add logic to find out compute_network_ip_2 etc.
# logger.writeDebug(f"cluster_control_nw_ips = {cluster_control_nw_ips}")
# logger.writeDebug(f"cluster_inter_node_nw_ips = {cluster_inter_node_nw_ips}")
# logger.writeDebug(f"cluster_compute_nw_ips = {cluster_compute_nw_ips}")
# logger.writeDebug(f"control_internode_nw_route_gws = {control_internode_nw_route_gws}")
# Find the GW from the file??
# Add code for IPv6 entries
# set dummy password for clouds
# msg = f"""
# Testing the construction of the file line
# Here is the line:
# {line_entries}
# return msg
# Open the file in append mode ('a')
# Iterate through the list of lines and write each to the file
# This code is for testing on bare metal to check the request before testing on cloud.
# comment this out when testing is done
# if platform == "HP":
# return self.create_config_file_gcp(spec)
# return self.create_config_file_azure(spec)
# key = camel_to_snake_case(key)
# if key in self.parameter_mapping.keys():
# new_key = camel_to_snake_case(key)
# from ..message.vsp_quorum_disk_msgs import VSPSQuorumDiskValidateMsg
# logger.writeDebug("RC:get_jobs:jobs={}", jobs)
# self.logger.writeDebug(f"5744resultspec: {result2}")
# result = {"snapshot_groups": extracted_data}
# this is incorrect if an existing pair is in the spec
# ex. for assign/unassign, we don't need pool id
# if spec.pool_id is None:
# self.logger.writeDebug(f"20240801 before calling extract, expect good poolId in resp_data: {resp_data}")
# Just split
# Create then split
# self.logger.writeError(f"20240719 resp_data: {resp_data}")
# self.logger.writeDebug(f"20240801 resp_data.to_dict: {resp_in_dict}")
# if len(snapshots.snapshots) == 0:
# "primaryOrSecondary":str,
# "poolId": "snapshotReplicationId", # duplicate key
# 20240801 "poolId": "snapshotPoolId",
# "isCloned": "isClone",
# logger.writeDebug(f"20250324 response_key: {response_key}")
# logger.writeDebug(f"20250324 key: {key}")
# logger.writeDebug(f"20250324 new_dict[key]: {new_dict[key]}")
# new_dict["pvol_host_groups"] = new_dict["pvolHostGroups"]
# new_dict["svol_host_groups"] = new_dict["svolHostGroups"]
# this func assumes an ldev can only be in
# hgs or in its (iscsi targets), not both
# logger.writeDebug(f"20250324 items: {items}")
# logger.writeDebug(f"20250324 port_type_dict: {port_type_dict}")
# this is a more general version:
# it handles an item that belongs to both hgs and its,
# use this if it applies
# logger.writeDebug(f"20250324 port_id: {port_id}")
# logger.writeDebug(f"20250324 port_type: {port_type}")
# logger.writeDebug(f"20250324 value: {value}")
# or spec.ldev_id is None
# numOfLdevs: int = None
# emulationType: str = None
# clprId: int = None
# externalProductId: str = None
# availableVolumeCapacityInKB: int = None
# raise ValueError(SDSBVpsValidationMsg.SAME_SAVING_SETTING.value)
# user provided an id of the chap user, so this must be an update
# return self.create_sdsb_vps(spec)
# comment out the following code block
# logger.writeDebug("RC:=== Delete VPS ===")
# logger.writeDebug("RC:state = {}", state)
# logger.writeDebug("RC:spec = {}", spec)
# if spec.id is not None:
# elif spec.name is not None:
# vps_id = self.delete_vps_by_id(vps_id)
# if vps_id is not None:
# new_items = camel_array_to_snake_case(new_items)
# new_dict = camel_dict_to_snake_case(new_dict)
# user provided an id of the CHAP user, so this must be a delete
# Normalize replication type to UR
# convert dict to Copy Group object with all the remote pair information
# is_resource_group_locked: Optional[bool] = None
# lock_token: Optional[str] = None
# for key, value in kwargs.items():
# or selectively forward kwargs
# snapshotPairInfo: Optional[str] = None
# sng20241205 vldev_id
# Added for UCA-3302
# added comment for ldev module
# No fields specified in example, placeholder for future extension
# volumeId: int
# if self.qosSettings:
# Convert totalCapacity from bytes to MiB
# Convert usedCapacity from bytes to MiB
# Convert freeCapacity from bytes to MiB
# in MiB
# Convert dict to NicknameParam instance if needed
# Convert capacity from string to MiB
# Ensure luns is a list of SalamanderVolumeServerLunInfo instances
# entitlementStatus: Optional[str] = None
# subscriberId: Optional[str] = None
# partnerId: Optional[str] = None
# if not hasattr(self, key):
# data["primary_vsm_resource_group_name"] = data.pop("primary_vsmresource_group_name", None)
# id: int = None
# resource_group_id: Optional[int] = None
# remote_connection_info: Optional[ConnectionInfo] = None
# secondary_storage_connection_info: Optional[ConnectionInfo] = None
# Making a single hg
# primaryHexVolumeId: Optional[str] = None
# secondaryHexVolumeId: Optional[str] = None
# Mapping logic from VSPJournalPoolDirect fields to VSPJournalPool fields
# Convert nested mirror units if present
# data.pop("num_of_ldevs", None)
# id: Optional[int] = None
# These fields are required for the Direct connection
# replication_type: Optional[str] = None  # we will assign this field to TC
# remoteStorageDeviceId: Optional[str] = None # from the secondary_storage_serial_number we will find this
# pvolLdevId : primary_volume_id will be assigned to this field
# svolLdevId : secondary_volume_id will be assigned to this field
# Optional fields
# range 1-15
# Replace transitions where a lowercase letter is followed by uppercase letters
# Replace transitions where an acronym (multiple uppercase) is followed by a lowercase letter
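# The two substitution passes described in the comments above are the classic
# camelCase-to-snake_case recipe; a minimal sketch (the exact regexes used by
# the module may differ) looks like this:

```python
import re

def camel_to_snake_case(name):
    """Hypothetical sketch of the two-pass conversion."""
    # pass 1: a lowercase letter or digit followed by an uppercase letter
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", name)
    # pass 2: an acronym (run of uppercase) followed by Upper+lowercase
    s = re.sub(r"([A-Z]+)([A-Z][a-z])", r"\1_\2", s)
    return s.lower()
```

For example, `"pvolIOMode"` keeps the `IO` acronym intact and becomes
`pvol_io_mode` rather than `pvol_i_o_mode`.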
# Define a parent class with the common functionality
# Determine the value to use
# Remove NoneType if Optional
# Default for unsupported types
# Handle nested SingleBaseClass instances or list of them
# Determine default filler based on data type
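# The default-filler logic outlined above (strip NoneType from Optional, then
# pick a per-type default, with None for unsupported types) could be sketched
# like this; the specific default values (-1, "", etc.) are assumptions, not
# necessarily what the module uses.

```python
from typing import Optional, Union, get_args, get_origin

def default_for(value_type):
    """Hypothetical sketch: pick a default filler for an annotated type."""
    # remove NoneType if the annotation is Optional[X]
    if get_origin(value_type) is Union:
        args = [a for a in get_args(value_type) if a is not type(None)]
        value_type = args[0] if args else str
    defaults = {str: "", int: -1, bool: False, float: 0.0, list: [], dict: {}}
    # None for unsupported types
    return defaults.get(value_type)
```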
# @dataclass
# class ComputePortFactSpec:
# protocol: str
# lun: int = -1
# if "lun" in kwargs:
# is_command_device_enabled: Optional[bool] = None
# usedCapacityRate: int = None
# raidType: str = None
# driveType: str = None
# availablePhysicalCapacity: int = None
# spaces: List = None
# driveSpeed: int = None
# status: str = None
# svol: Optional[int] = None
# dpVolumes: List[VSPDpVolume] = None
# super().__init__(**kwargs)
# if self.dpVolumes:
# Handle specific fields that need special treatment
# secondary_connection_info: Optional[ConnectionInfo] = None
# secondary_storage_serial_number: Optional[int] = None
# copy_pair_name: Optional[str] = None
# local_device_group_name: Optional[str] = None
# remote_device_group_name: Optional[str] = None
# replication_type: str = ""
# svol_operation_mode: str = ""
# is_svol_writable: Optional[bool] = False
# do_pvol_write_protect: Optional[bool] = False
# do_data_suspend: Optional[bool] = False
# do_failback: Optional[bool] = False
# failback_mirror_unit_number: Optional[int] = None
# is_consistency_group: Optional[bool] = False
# consistency_group_id: Optional[int] = None
# fence_level: Optional[str] = None
# lunId: int = None
# portId: str = None
# hostGroupNumber: int = None
# hostMode: str = None
# isCommandDevice: bool = None
# luHostReserve: Optional[LuHostReserve] = None
# hostModeOptions: List = None
# def __init__(self, **kwargs):
# if kwargs.get("luHostReserve"):
# 20240830 - without these, the create hur was breaking
# required
# Disabled, Compression
# Inline, nullable
# required for rest API create call
# portAttributes: Optional[List] = None
# tags: List[str] = None
# fabricOn: bool = False
# Use default_factory
# local map of external volume
# qrd id
# copy_group_name: Optional[str] = None
# , asdict
# host_mode_options: Optional[List[int]] = None
# Example usage in the class (replace string literals with enum values):
# spec.comments.append(VSPVolumeMSG.VOLUME_DELETED_SUCCESS.value)
# raise Exception(VSPVolumeMSG.POOL_ID_REQUIRED.value)
# class VSPUserFailedMsg(Enum):
# logger = Log()
# endPoint = Endpoints.GET_LDEVS.format("?count=16384&ldevOption=undefined")
# If the endpoint is not available, return None
# logger.writeDebug(f"GW-Direct:create_true_copy:get_remote_token:remote_connection_info={remote_connection_info}")
# self.init_connections(spec.secondary_connection_info)
# headers = self.get_remote_token(self.connection_info)
# self.end_points = Endpoints
# return DirectTrueCopyPairInfoList(
# if self.remote_connection_manager is None:
# Not sure when the token was generated, got invalid token error, so always generate a new one
# remoteStorageDeviceId,copyGroupName,localDeviceGroupName,remoteDeviceGroupName, copyPairName
# local_device_group_name = spec.local_device_group_name if spec.local_device_group_name else spec.copy_group_name + "P_"
# remote_device_group_name = spec.remote_device_group_name if spec.remote_device_group_name else spec.copy_group_name + "S_"
# object_id = f"{remote_storage_deviceId},{spec.copy_group_name},{local_device_group_name},{remote_device_group_name},{spec.copy_pair_name}"
# if spec.local_device_group_name and spec.remote_device_group_name:
# # local_device_group_name = spec.local_device_group_name if spec.local_device_group_name else spec.copy_group_name + "P_"
# # remote_device_group_name = spec.remote_device_group_name if spec.remote_device_group_name else spec.copy_group_name + "S_"
# this means hur pair is absent
# enhanced_expansion = (
# if we split the pair, then we need to resync it before returning
# pvol_gateway = VSPVolumeDirectGateway(self.connection_info)
# svol_gateway = VSPVolumeDirectGateway(spec.secondary_connection_info)
# pvolume_data = pvol_gateway.get_volume_by_id(pvol_id)
# svolume_data = svol_gateway.get_volume_by_id(svol_id)
# if split_required and pvolume_data.blockCapacity == svolume_data.blockCapacity:
# secondary_storage_info = self.get_secondary_storage_info(
# logger.writeDebug(
# remote_storage_deviceId = secondary_storage_info.get("storageDeviceId")
# if mode:
# headers = self.get_remote_token(spec.secondary_connection_info)
# headers["Remote-Authorization"] = headers.pop("Authorization")
# headers["Job-Mode-Wait-Configuration-Change"] = "NoWait"
# , headers_input=headers
# Inline code to convert all keys to snake_case
# Convert camelCase to snake_case for top-level keys
# Convert keys in nested dictionaries or lists
# logger.writeDebug("GW:get_chap_users:spec={}", spec)
# sng20250111 - includes: Failed to establish a connection
# just log it so we can still return the pairs
# should we do the for loop only for single or few items?
# if there is an exception, it could be because of the token expiry, so get the token again and retry
# Refresh the connections
# Error msgs differ across storage models, so we can't check error msgs.
# For pegasus, the error msg is "User authentication failed"
# For VSP 5600H, getting the copy pair is not working
# sng20241115 for "Operations cannot be performed for the specified object (xxx)",
# we don't want to throw an exception, else it breaks the whole operation;
# just log it and keep going
# logger.writeDebug(f"GW:get_all_copy_pairs spec={spec}")
# return DirectCopyPairInfoList(data=copy_pairs)
# return copy_pairs
# return DirectSpecificCopyGroupInfoList(data=copy_pairs)
# tc_pairs.append(y)
# remoteStorageDeviceId,copyGroupName,localDeviceGroupName,remoteDeviceGroupName
# copy_group_info = self.get_one_copygroup_info_by_name(spec)
# object_id = copy_group_info.remoteMirrorCopyGroupId
# parts = object_id.split(',')
# if len(parts) == 4:
# # headers["Remote-Authorization"] = headers.pop("Authorization")
# , headers_input=None
# logger.writeException("response = {}", response)
# 503 Service Unavailable
# wait for 5 mins and try to re-authenticate, we will retry 5 times
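# The wait/re-authenticate/retry behaviour described above could be sketched
# as a small loop; the function names, the generic Exception catch, and the
# 300-second default are illustrative assumptions.

```python
import time

def call_with_reauth(request_fn, reauthenticate_fn, retries=5, delay=300):
    """Hypothetical sketch: retry a request, re-authenticating between tries."""
    last_error = None
    for attempt in range(retries):
        try:
            return request_fn()
        except Exception as err:  # real code would catch the 503 / auth error
            last_error = err
            if attempt < retries - 1:
                time.sleep(delay)      # back off before the next attempt
                reauthenticate_fn()    # refresh the session token
    raise last_error
```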
# raise Exception(error_msg, response.status)
# logger.writeDebug(f"response = {response}")
# For POST call to add chap user to port, affected resource is empty
# For PATCH port-auth-settings, affected resource is empty
# Construct multipart/form-data body manually
# def build_multipart_form_data(self, setup_user_password, csv_path=None, exported_config_file=None):
# Add text field
# Add files
# Final boundary
# Encode form data
# Headers
# To suppress the "Expect: 100-continue"
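# Building a multipart/form-data body by hand, as the comments above describe
# (text field, file parts, final boundary, then the headers), could look
# roughly like this; the field layout, content type, and latin-1 handling of
# file bytes are assumptions for the sketch.

```python
import uuid

def build_multipart_form_data(fields, files):
    """Hypothetical sketch: fields is {name: str}, files is {name: (filename, bytes)}."""
    boundary = uuid.uuid4().hex
    parts = []
    # text fields
    for name, value in fields.items():
        parts += ["--" + boundary,
                  'Content-Disposition: form-data; name="%s"' % name,
                  "", value]
    # file fields
    for name, (filename, content) in files.items():
        parts += ["--" + boundary,
                  'Content-Disposition: form-data; name="%s"; filename="%s"'
                  % (name, filename),
                  "Content-Type: application/octet-stream",
                  "", content.decode("latin-1")]
    # final boundary
    parts += ["--" + boundary + "--", ""]
    body = "\r\n".join(parts).encode("latin-1")
    headers = {"Content-Type": "multipart/form-data; boundary=" + boundary}
    return body, headers
```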
# if data is not None:
# can be due to wrong address or kong is not ready
# raise Exception("Could not discard the session.")
# def __del__(self):
# raise Exception("Could not discard the current session.")
# This class is added to use Administrator API for Storage Management
# Snapshot group related methods
# snapshots=  DirectSnapshotsInfo().dump_to_object({"data": ss["snapshots"]})
# sng_grps.snapshots = snapshots.data
# payload = None if not auto_split else {"parameters": {"autoSplit": auto_split}}
# reset name to None after creation
# will add more parameters as needed
# try once more
# Split the ldevid from url
# the block_size is added to support decimal values like 1.5 GB etc.
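# A size parser supporting decimal values like "1.5 GB", as mentioned above,
# could be sketched as follows; the binary (1024-based) unit table is an
# assumption, since the real convert_to_bytes may use decimal units.

```python
def convert_to_bytes(size_str):
    """Hypothetical sketch: parse sizes like '1.5 GB' or '512MB' into bytes."""
    units = {"TB": 1024 ** 4, "GB": 1024 ** 3, "MB": 1024 ** 2, "KB": 1024}
    s = size_str.strip().upper().replace(" ", "")
    for unit, factor in units.items():
        if s.endswith(unit):
            # float() accepts decimal values such as "1.5"
            return int(float(s[:-len(unit)]) * factor)
    return int(s)  # assume the value is already in bytes
```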
# if spec.capacity_saving.lower() != VolumePayloadConst.DISABLED:
# while not found:
# sng20241205 unassign_vldev
# Define the data
# Iterate over each item and send individual requests for non-None values
# VSP One does not support detailInfoType=class
# sng20241202 change_volume_settings_tier_policy
# use all ldev operations above this function
# server_info = self.get_volume_server_connection_info_by_id(volume_id)
# volume_info.luns = [lun.lunId for lun in server_info.luns] if server_info and server_info.luns else []
# Capacity (in MiB)
# Number of volumes (default 1 if not provided)
# Nickname parameters
# Capacity saving (e.g., COMPRESSION, DEDUPLICATION_AND_COMPRESSION)
# Data reduction share flag
# Pool ID
# org_base_url = self.connection_manager.get_base_url()
# self.connection_manager.set_base_url_for_vsp_one_server()
# self.connection_manager.set_base_url(org_base_url)
# if this is for pagination handle it later
# nvme_subsystems = VspNvmeSubsystemInfoList(
# if spec.host_mode_options:
# Default is Enable, we send Disable only when this is False
# logger.writeDebug(f"GW:download_config_file:resp={resp}")
# You must specify Storage Administrator (View Only).
# Audit Log Administrator (View & Modify)#
# Audit Log Administrator (View Only)#
# Security Administrator (View & Modify)#
# Security Administrator (View Only)#
# Storage Administrator (Initial Configuration)
# Storage Administrator (Local Copy)
# Storage Administrator (Performance Management)
# Storage Administrator (Provisioning)
# Storage Administrator (Remote Copy)
# Storage Administrator (System Resource Management)
# Storage Administrator (View Only)
# Support Personnel#
# User Maintenance#
#: If you specify this role, be sure to specify true for hasAllResourceGroup.
# headers = self.populateHeader()
# This block of code is added to handle situation like described in JIRA
# https://hv-eng.atlassian.net/browse/UCA-2865?focusedCommentId=2159648
# nosec: no security issue here, as it does not exploit any security vulnerability; it is only used for generating a unique resource id for UAIG
# parameters = {}
# parameters["forceDelete"] = spec.force_delete
# Fixed no member issue
# Use a generator to avoid unnecessary looping
# Initialize and populate the journal pool object
# return dicts_to_dataclass_list(compute_node_data["data"], SDSBComputeNodeInfo)
# return dicts_to_dataclass_list(compute_node_data, SDSBComputeNodeInfo)
# return SDSBComputeNodesInfo(**data)
# Validate user ID length
# Update connection manager password
# def __init__(self, connection_info):
# logger.writeDebug("sng20241115 secondary_connection_info ={}", spec.secondary_connection_info)
# def set_storage_serial_number(self, serial: str):
# def get_storage_device_id(self, serial_number):
# sng1104 - have to use the CG version
# sng20241105 sng1104 create_gad_pair
# secondary_storage_serial_number = spec.secondary_storage_serial_number
# remote_storage_deviceId = self.get_storage_device_id(
# adding GAD to existing copy group,
# payload cannot include the mu number
# for new copy group, it is either 0 or user input
# headers["Remote-Authorization"] = "Session cf68b8ce47fd47e5ad9195466c915d7e"
# end_point = CREATE_GAD_PAIR_DIRECT.format(storage_deviceId)
# logger.writeDebug("sng20241115 65 secondary_connection_info ={}", spec.secondary_connection_info)
# secondary_storage_serial_number = self.get_secondary_serial(spec)
# storage_deviceId = self.get_storage_device_id(str(self.storage_serial_number))
# logger.writeDebug("GW-Direct:create_hur_copy:storage_deviceId={}", storage_deviceId)
# .format(storage_deviceId)
# "UAIGConnectionManager": "gatewayTasks",
# "UAIGConnectionManager": "gatewayStorageSystems",
# Check if any ignore API is a substring of the URL
# Attempt to call `open_url`
# If the data is not a list, create a new list with the model details
# will update the threading part later
# thread.daemon = True
# Write updated data to file
# write_to_audit_log(url=url, kwargs=kwargs)
# Exception(f"open_url failed: {exception_message}")
# log.writeDebug(f"Processing request body {body}")
# logger.writeDebug(f"Inside the ldev_details {tmpHg["lunPaths"]}")
# retHg = self.parse_host_group(resp, True, True)
# return VSPOneHostGroupInfo(VSPHostGroupInfo(**retHg))
# Re-raise exceptions if they occurred in the threads
# So catch the exception and try older method without nvmSubsystem.
# if spec.virtual_storage_id and spec.virtual_storage_device_id is None:
# Build query based on provided parameters
# Construct query string
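# Building a query string only from the parameters that were actually
# provided, as the two comments above describe, could be sketched like this;
# the helper name and the urlencode-based approach are assumptions.

```python
from urllib.parse import urlencode

def build_query(**params):
    """Hypothetical sketch: keep only provided (non-None) parameters."""
    provided = {k: v for k, v in params.items() if v is not None}
    return "?" + urlencode(provided) if provided else ""
```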
# Add SDSB Gateways below and VSP Gateways above this line
# resp = self.connectionManager.post(end_point, data)
# return VSPPfrestExternalParityGroup(**parity_group_dict)
# Salamander returns a different structure
# epg = ExternalPathGroupInfo(**response)
# epg.externalPaths = ExternalPathInfoList(
# return epg
# Build file field details
# Construct multipart/form-data body
# Encode to bytes
# Set headers
# Handle trap destination settings
# Handle authentication settings
# All parameters
# Ticket Management Endpoints
# snapshot Endpoints
# There is no applicable level in Python for the following priorities:
# 0 (Emergency), 1 (Alert), 5 (Notice)
# UCP_NAME = 'ucp-ansible-test'
# TODO: sng1104 use VSP_COPY_GROUPS
# Add details of the API endpoints to be ignored, i.e. those for which we do not need to collect telemetry data.
# Like the example below: for UAIG, add the API endpoints which don't have a storage id in the URL
# http client module
# url_username=params.user if (params.session_id is None) else None,
# url_password=params.password if (params.session_id is None) else None,
# force_basic_auth=True if (params.session_id is None) else False,
# for 404, it's the html text of the response page
# the problem above is that sometimes "error" is empty, but there is a "message"
# err_str includes the http error string
# 2.4 MT - comment out if too verbose
# logger.writeDebug(LogMessages.API_RESPONSE.format(to_native(text)))
# vsp storage
# Volumes
# HG
# ISCSI
# CHAP
# LUNS
# CG
# CP
# SI
# UAIG_CREATE_SHADOW_IMAGE_PAIR = 'v2/storage/devices/{deviceId}/shadowimage'
# UAIG_DELETE_SHADOW_IMAGE_PAIR = 'v2/storage/devices/{deviceId}/shadowimage/{pairId}'
# SnapShot
# Pool
# Journal Volumes
# Parity group
# Tag device resources
# remote connection urls
# remote iscsi connection urls
# Dynamic pool Salamander API
# MP blades
# Initial config api
# SALAMANDER PARAMS
# URL PARAMS
# volume emulation type
# Volume operation type
# QOS constants
# The following models are present in GW
# "VSP_G150" - Not found in the REST API guide
# The following models are supported by GW just by device type field
# The following models are supported by GW by combination of device type and model fields
# The following models are not supported by GW
# The following models are no longer supported by HV
# "HUS_VM" : "HUS VM",
# "VSP" : "VSP"
# "VSP_ONE_B28" : "VSP One B28",
# "VSP_ONE_B26" : "VSP One B26",
# "VSP_ONE_B24" : "VSP One B24",
# "VSP_E790H" : "VSP E790H",
# "VSP_E590H" : "VSP E590H",
# Generate a UUID
# Custom formatter to include uuid and module_name
# Add UUID and module_name to each log record
# Updated the formatter to include uuid and module_name placeholders
# Use the custom formatter
# example: "/opt/hitachivantara/ansible"
# raise Exception("Improper environment home configuration, please execute the 'bash' command and try again.")
# Log.logger.debug(msg)
# example: "/var/log"
# if HAS_MESSAGE_ID and path is None:
# Iterate through handlers to find FileHandlers
# Extract the file path from the args parameter
# args is expected to be a string representation of a tuple, e.g., "('logs/app.log', 'a')"
# Use ast.literal_eval to safely parse the args tuple
# if not isinstance(exception, AttributeError) else exception.message
# self.logger.debug("writeHiException")
#     self.writeParam("exception={}", str(exception))
#     self.writeParam("messageId={}", messageId)
#     self.writeParam("errorMessage={}", str(type(errorMessage)))
#     self.writeParam("errorMessage={}", str(errorMessage))
# message = exception.errorMessage
#     self.writeParam("errorMessage={}", errorMessage)
# SDSB Parameter manager
# Removed gateway connection type as it is not supported
# "vps_name": {
# },
# "ReplaceStorageNode",
# "ReplaceDrive",
# 'protocol': {'required': False, 'type': 'str', 'description': 'Compute nodes that belongs to this vps'},
# "format": "date-time",
# "format": "uuid",
# cls.common_arguments.pop("state")
# "id": {
# "name": {
# "upper_limit_for_number_of_user_groups": {
# "upper_limit_for_number_of_users": {
# "upper_limit_for_number_of_sessions": {
# "upper_limit_for_number_of_servers": {
# "volume_settings": {
# Validator functions
# if spec.number_of_storage_nodes is None and spec.number_of_drives is None and spec.number_of_tolerable_drive_failures is None:
# Function to recursively replace None with ""
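# The recursive None-to-"" replacement mentioned in the comment above can be
# sketched in a few lines; the function name is an illustrative assumption.

```python
def replace_none_with_empty(obj):
    """Hypothetical sketch: recursively replace None with "" in dicts/lists."""
    if isinstance(obj, dict):
        return {k: replace_none_with_empty(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [replace_none_with_empty(i) for i in obj]
    return "" if obj is None else obj
```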
# # VSP Parameter manager # #
# VSPSpecValidators.validate_shadow_image_module(input_spec, self.connection_info)
# VSPSpecValidators().validate_parity_group_fact(input_spec)
# VSPSpecValidators().validate_local_copy_groups_fact(input_spec)
# VSPSpecValidators().validate_hur_module(self.spec, self.state)
# self.spec = SnapshotGroupFactSpec(**self.params["spec"])
# return self.spec
# Arguments Managements ##
# "subscriber_id": {
# cls.common_arguments["spec"]["required"] = True
# sng20250212 host_mode_options validations
# args = copy.deepcopy(cls.common_arguments)
# "is_new_group_creation": {
# ssi["options"].pop("api_token")
# ssi["options"].pop("subscriber_id")
# ssi["options"]["username"]["required"] = True
# ssi["options"]["password"]["required"] = True
# args.pop("storage_system_info")
# "enable_quick_mode": {
# "ports",
# "quorumdisks",
# "journalPools",
# "freeLogicalUnitList",
# "refresh": {
# "iqn_initiators": {
# args.pop("state")
# "startLdevId": {
# "endLdevId": {
# args["connection_info"]["options"].pop("subscriber_id")
# args["connection_info"]["options"].pop("api_token")
# "storage_system_info": VSPCommonParameters.storage_system_info(),
# cls.common_arguments["spec"]["options"] = spec_options
# return cls.common_arguments
# "allocate_new_consistency_group": {
# "secondary_storage_serial_number": {
# 20240812 HUR facts spec
# "is_command_device_enabled": {
# "storage_device_id": {
# common_arguments["connection_info"]["options"].pop("subscriber_id")
# # Validator functions # #
# For direct connect, api_token is used to pass the lock token
# if conn_info.connection_type == ConnectionTypes.DIRECT and conn_info.api_token:
# elif conn_info.subscriber_id:
# Handle the case where the input is not a valid integer format
# Handle other ValueErrors, like out-of-range checks
# 2.3 gateway defines spec.ldev for one set of logics,
# it also defines spec.ldevs as str (not list) for other business logics, it's a mess
# if spec.state == StateValue.PRESENT:
# ex. for assign/unassign, we don't need group name
# if spec.snapshot_group_name is None:
# StateValue.ABSENT,
# we have added the support of using pvol pool_id if spec.pool_id is None
# hence this check is not needed
# 1) if pool_id is None and mirror_unit_id is None,
# then we will create new pair and do the auto-split
# 2) if the user makes the mistake of not providing the mirror_unit_id,
# we will go ahead and create the pair and auto-split.
# This can create confusion for the user until the input error is realized.
# if not isinstance(spec.pool_id, int) and not isinstance(
# if input_spec.pool_id is None:
# 2.4 MT - for composite playbook, gateway returns a str, direct returns a int
# if hg.id is None:
# if input_spec.primary_volume_id is None and input_spec.secondary_volume_id is None:
# if (input_spec.id or input_spec.name) and input_spec.query:
# if input_spec.virtual_storage_device_id:
# if input_spec.ldevs:
# 20240808 - validate_hur_module
# if input_spec.secondary_storage_serial_number is None:
# if input_spec.secondary_hostgroups is None:
# if input_spec.primary_storage_serial_number is None:
# valid_type = [ "port", "volume", "hostgroup", "shadowimage", "storagepool", "iscsi_target", "hurpair", "gadpair", "truecopypair"]
# if spec.is_resource_group_locked is None:
# if spec.is_resource_group_locked is False and spec.lock_token is None:
###############################################################
# Common functions ###
# Convert WWN to integer if it's in hexadecimal string format
# Mask and adjustment based on array family
# Apply masks
# Precompute high bytes since they don't change with LUN
# Compute low bytes with the given LUN
# Format NAID
# hash is used to generate the same resource ID in the UAIG gateway; not for security purposes
# GET_SNAPSHOTS = "v2/storage/devices/{}/snapshotpairs"
# CREATE_TRUE_COPY_PAIR = "v2/storage/devices/truecopypair"
# DELETE_TRUE_COPY_PAIR = "v2/storage/devices/{}/truecopypair/{}"
# class ErrorMessages(object):
# Direct Mapping
# Split the string into words using '_' as delimiter
# Capitalize the first letter of each word except the first one
# camel_case_string = ''.join([word.capitalize() for word in words])
# Use regular expressions to find all occurrences of capital letters
# followed by lowercase letters or digits
# Replace the capital letters with '_' followed by lowercase letters
# using re.sub() function
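The two conversions described above — split on `_` and capitalize for camelCase, and a `re.sub()` pass for the reverse — can be sketched like this. The lookahead regex is one common way to express "capital letters followed by lowercase letters or digits"; the real code may differ:

```python
import re

def snake_to_camel(s):
    # Split the string into words using '_' as delimiter, then
    # capitalize the first letter of each word except the first one
    first, *rest = s.split("_")
    return first + "".join(word.capitalize() for word in rest)

def camel_to_snake(s):
    # Insert '_' before each capital letter (except at the start),
    # then lowercase the whole string, using re.sub()
    return re.sub(r"(?<!^)(?=[A-Z])", "_", s).lower()
```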
# If the size exceeds TB, assume it's in petabytes
# Convert the string to uppercase for case-insensitivity
# Split the size string into value and unit
# result = func(*args, **kwargs)
# Convert to uppercase for case-insensitivity
# Remove white spaces
# Remove 'B' if present
# Append 'M' if none of MB, GB, TB are present
# Split the hex value to string
# Convert hexadecimal to 00:00:00 format
# Combine the hexadecimal values into the desired format
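The hex-to-`00:00:00` formatting described above amounts to splitting the hex string into byte pairs and joining them with colons; a minimal sketch (the function name is illustrative):

```python
def hex_to_colon_format(hex_value):
    """Format a hex string as colon-separated byte pairs, e.g. 'AABBCC' -> 'aa:bb:cc'."""
    h = format(int(hex_value, 16), "x")
    if len(h) % 2:           # zero-pad to a whole number of bytes
        h = "0" + h
    return ":".join(h[i:i + 2] for i in range(0, len(h), 2))
```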
# ARRAY_FAMILY_DF
# ARRAY_FAMILY_HM700
# ARRAY_FAMILY_R800
# ARRAY_FAMILY_HM800
# ARRAY_FAMILY_R900
# ARRAY_FAMILY_HM900
# ARRAY_FAMILY_HM2000
# Construct high bytes
# Construct low bytes
# Define a regular expression to capture the numeric value and the unit
# If the regex didn't match, raise an error
# Parse the numeric part (the size value)
# Get the unit part (e.g., "GB", "MB")
# Convert to bytes based on the unit
# this function is used to do auto name assignment,
# not cryptographic or security-sensitive usage
# this security scan error can be disregarded moving forward,
# name collisions are not a concern in this context and
# switching to secrets is not necessary
# Remove any extra spaces
# Extract number part and convert to MB
# Extract numeric part and unit
# Convert to bytes
# Convert to MiB (rounded to nearest int)
# 1 MiB = 1,048,576 bytes
# Convert bytes → MB (1 MB = 1,000,000 bytes)
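The size-string handling described above (uppercase, strip spaces, drop a trailing 'B', assume megabytes when no unit is given, then convert using 1 MiB = 1,048,576 bytes vs. 1 MB = 1,000,000 bytes) can be sketched as follows; this simplifies the original's unit rules and the names are illustrative:

```python
import re

UNITS = {"K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12, "P": 10**15}

def size_to_bytes(size):
    """Parse strings like '10GB', '1.5 tb', or '512' (MB assumed) into bytes."""
    s = size.upper().replace(" ", "")   # case-insensitive, no extra spaces
    if s.endswith("B"):                 # remove 'B' if present
        s = s[:-1]
    if s and s[-1].isdigit():           # no unit given: assume megabytes
        s += "M"
    m = re.fullmatch(r"([0-9]*\.?[0-9]+)([KMGTP])", s)
    if m is None:
        raise ValueError(f"unrecognized size: {size!r}")
    return int(float(m.group(1)) * UNITS[m.group(2)])

def bytes_to_mib(n):
    # 1 MiB = 1,048,576 bytes; round to nearest int
    return round(n / 1_048_576)
```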
# Basic regex for email validation
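A "basic regex for email validation" is typically just a sanity check (one `@`, no whitespace, a dot in the domain), not a full RFC 5322 parser; the pattern below is an assumed example, not the original's:

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(addr):
    """Basic sanity check: local part, one '@', dotted domain, no whitespace."""
    return EMAIL_RE.match(addr) is not None
```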
# Create the directory if it doesn't exist
# Get the caller frame from the stack
# Extract filename and line number
# Get the method or function name from the caller frame
# Log the message
# Output expected ImportErrors.
# Ensure constants are alphabetically ordered
# Unicode-like literals
# which() is available from Python 3.3
# This is a copy of which() from Python 3.3
# Maps MIME names to type objects
# Maps extensions to types
# List of (glob, type) pairs
# Maps literal names to types
# If this is done in __init__, it is automatically called again each time
# the MIMEtype is returned by __new__, which we don't want. So we call it
# explicitly only when we construct a new instance.
# FIXME: add get_icon method
# Should we ever reload?
#print line
# [indent] '>'
# start-offset '='
# value length (2 bytes, big endian)
# ['&' mask]
# This can contain newlines, so we may need to read more lines
# ['~' word-size] ['+' range-length]
# Per the spec, this will be caught and ignored, to allow
# for future extensions.
# Ignored to allow for extensions to the rule format.
# Build the rule tree
# (rule, [(subrule,[subsubrule,...]), ...])
# mimetype -> [(priority, rule), ...]
#print(shead)
#print shead[1:-2]
#print rule
# (priority, mimetype, rule)
# Number of bytes to read from files
#print priority, max_pri, min_pri
# Maps mimetype to {(weight, glob, flags), ...}
# This signals to discard any previous globs
# Maps extensions to [(type, weight),...]
# List of (regex, type, weight) triplets
# Maps literal names to (type, weight)
# *.foo -- extension pattern
# Translate the glob pattern to a regex & compile it
# No wildcards - literal pattern
# Sort globs by weight & length
# Literals (no wildcards)
# Extensions
# Other globs
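The glob routing described above — extension patterns like `*.foo`, wildcard-free literals, and everything else translated to a compiled regex — can be sketched like this (the shared-mime-info style classification; function name is illustrative):

```python
import fnmatch
import re

def classify_glob(pattern):
    """Route a glob into one of three lookup tables: extension, literal, regex."""
    wildcards = "*?["
    # *.foo -- extension pattern, matched via a fast extension table
    if pattern.startswith("*.") and not any(c in pattern[2:] for c in wildcards):
        return ("extension", pattern[2:])
    # No wildcards - literal pattern, matched by exact name
    if not any(c in pattern for c in wildcards):
        return ("literal", pattern)
    # Other globs: translate the glob pattern to a regex & compile it
    return ("regex", re.compile(fnmatch.translate(pattern)))
```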
# Some well-known types
# Maps alias Mime types to canonical names
# Maps to sets of parent mime types.
# Load aliases
# Load filename patterns (globs)
# Load magic sniffing data
# Load subclasses
# Special filesystem objects
# Try all magic matches
# See if the file is already installed
# Already installed
# Not already installed; add a new copy
# Create the directory structure...
# Write the file...
# Update the database...
# Public stuff
# Can be True, False, DELETED, NO_DISPLAY, HIDDEN, EMPTY or NOT_SHOW_IN
# Private stuff, only needed for parsing
# FIXME: Performance: cache getName()
# unicode() becomes str() in Python 3
# FIXME: Add searchEntry/searchMenu function
# search for name/comment/genericname/desktopfileid
# return multiple items
# getHidden / NoDisplay / OnlyShowIn / NotOnlyShowIn / Deleted / NoExec
# remove separators at the beginning and at the end
# show_empty tag
# inline tags
# Type is TYPE_INCLUDE or TYPE_EXCLUDE
# expression is ast.Expression
# Create entry
# Can be True, False, DELETED, HIDDEN, EMPTY, NOT_SHOW_IN or NO_EXEC
# Semi-Private
# Private Stuff
# Caching
# remove duplicate entries from a list
# convert to absolute path
# use default if no filename given
# check if it is a .menu file
# create xml parser
# parse menufile
# generate the menu
# and finally sort
# ---------- <Rule> parsing
# ---------- App/Directory Dir Stuff
# ---------- Merge Stuff
# check for infinite loops
# load file
# ---------- Legacy Dir Stuff
# If kde-config doesn't exist, ignore this.
# unallocated / deleted
# Layout Tags
# add parent's app/directory dirs
# go recursive through all menus
# reverse so handling is easier
# get the valid .directory file out of the list
# Finally generate the menu
# parse move operations
# FIXME: this is assigned, but never used...
# handle legacy items
# cache the results again
# FIXME: This is only 99% correct, but still...
# start standard keys
# end standard keys
# start kde keys
# end kde keys
# start deprecated keys
# end deprecated keys
# desktop entry edit stuff
# end desktop entry edit stuff
# validation stuff
# file extension
# check if group header is valid
#OnlyShowIn and NotShowIn
# standard keys
# locale string
# kde extensions
# deprecated keys
# "X-" extensions
# check if entry already there
# delete if more than 500 files
# add entry
# XML-Cleanups: Move / Exclude
# FIXME: proper revert/delete
# FIXME: pass AppDirs/DirectoryDirs around in the edit/move functions
# FIXME: catch Exceptions
# FIXME: copy functions
# FIXME: More Layout stuff
# FIXME: undo/redo function / remove menu...
# FIXME: Advanced MenuEditing Stuff: LegacyDir/MergeFile
# fix for creating two menus with the same name on the fly
#FIXME: is this needed with etree ?
# Hack for legacy dirs
# Hack for New Entries
# FIXME: we should also return the menu's parent,
# to avoid looking for it later on
# @see Element.getiterator()
# remove old filenames
#FIXME: this finds only Rules whose FIRST child is a Filename element
# shouldn't it remove all occurrences, like the following:
#filename_nodes = rule.findall('.//Filename'):
#for fn in filename_nodes:
#if fn.text == filename:
##element.remove(rule)
#parent = self.__get_parent_node(fn)
#parent.remove(fn)
# add new filename
# remove old layout
# add new layout
# elements in ElementTree don't hold a reference to their parent
# Standard Keys
# Per Directory Keys
# Check required keys
# Directories
# just cache variables, they give a 10x speed improvement
# if we have an absolute path, just return it
# check if it has an extension and strip it
# parse theme files
# more caching (icon looked up in the last 5 seconds?)
# cache stuff again (directories looked up in the last 5 seconds?)
# we haven't found anything? "hicolor" is our fallback
# look for the cache
# [0] last time of lookup
# [1] mtime
# [2] dir: [subdir, [items]]
# cache stuff (directory looked up in the last 5 seconds?)
# This must be a real directory, not a symlink, so attackers can't
# point it elsewhere. So we use lstat to check it.
# The fallback must be a directory
# Must be owned by the user and not accessible by anyone else
#if 'C' not in languages:
# for performance reasons
# The content should be UTF-8, but legacy files can have other
# encodings, including mixed encodings in one file. We don't attempt
# to decode them, but we silence the errors.
# parse file
# new group
# key
# Spaces before/after '=' should be ignored
# start stuff to access the keys
# set default group
# return key (with locale)
# end stuff to access the keys
# start subget
# end subget
# start validation stuff
# get file extension
# overwrite this for own checkings
# check all keys
# check if value is empty
# raise Warnings / Errors
# check if key is valid
# check random stuff
# 1 or 0 : deprecated
# true or false: ok
# float() ValueError
# int() ValueError
# write support
# An executable bit signifies that the desktop file is
# trusted, but then the file can be executed. Add hashbang to
# make sure that the file is opened by something that
# understands desktop files.
# Add executable bits to the file to show that it's trusted.
# misc
# Verify the token was not tampered with.
# We use a UserWarning subclass, instead of DeprecationWarning, because CPython
# decided deprecation warnings should be invisible by default.
# Several APIs were deprecated with no specific end-of-life date because of the
# ubiquity of their use. They should not be removed until we agree on when that
# cycle ends.
# If you're wondering why we don't use `Buffer`, it's because `Buffer` would
# be more accurately named: Bufferable. It means something which has an
# `__buffer__`. Which means you can't actually treat the result as a buffer
# (and do things like take a `len()`).
# Maintain backwards compatibility with `name is None` for pyOpenSSL.
# Python 3.10 changed representation of enums. We use well-defined object
# representation and string representation from Python 3.9.
# This must be kept in sync with sign.rs's list of allowable types in
# identify_hash_type
# This is quadratic in the number of extensions
# This is quadratic in the number of attributes
# Runtime isinstance checks need this since the rust class is not a subclass.
# ASN.1 integers are always signed, so the most significant bit must be zero
# As defined in RFC 5280
# Type alias
#: Short attribute names from RFC 4514:
#: https://tools.ietf.org/html/rfc4514#page-7
# RFC 4514 Section 2.4 defines the value as being the # (U+0023) character
# followed by the hexadecimal encoding of the octets.
# See https://tools.ietf.org/html/rfc4514#section-2.4
# See https://tools.ietf.org/html/rfc4514#section-3
# special = escaped / SPACE / SHARP / EQUALS
# escaped = DQUOTE / PLUS / COMMA / SEMI / LANGLE / RANGLE
# Regular escape
# Hex-value escape
# The appropriate ASN1 string type varies by OID and is defined across
# multiple RFCs including 2459, 3280, and 5280. In general UTF8String
# is preferred (2459), but 3280 and 5280 specify several OIDs with
# alternate types. This means when we see the sentinel value we need
# to look up whether the OID has a non-UTF8 type. If it does, set it
# to that. Otherwise, UTF8!
# Keep list and frozenset to preserve attribute order where it matters
# TODO: this is relatively expensive, if this looks like a bottleneck
# for you, consider optimizing!
# parseaddr has found a name (e.g. Name <email>) or the entire
# value is an empty string.
# This is a very slow way to do this.
# This takes a subset of CertificatePublicKeyTypes because an issuer
# cannot have an X25519/X448 key. This introduces some unfortunate
# asymmetry that requires typing users to explicitly
# narrow their type, but we should make this accurate and not just
# convenient.
# These are distribution point bit string mappings. Not to be confused with
# CRLReason reason flags bit string mappings.
# ReasonFlags ::= BIT STRING {
# status_request is defined in RFC 6066 and is used for what is commonly
# called OCSP Must-Staple when present in the TLS Feature extension in an
# X.509 certificate.
# status_request_v2 is defined in RFC 6961 and allows multiple OCSP
# responses to be provided. It is not currently in use by clients or
# servers.
# Users found None confusing because even though encipher/decipher
# have no meaning unless key_agreement is true, to construct an
# instance of the class you still need to pass False.
# Return the value of each GeneralName, except for OtherName instances
# which we return directly because they have two important properties,
# not just one value.
# Per RFC5280 Section 5.2.5, the Issuing Distribution Point extension
# in a CRL can have only one of onlyContainsUserCerts,
# onlyContainsCACerts, onlyContainsAttributeCerts set to TRUE.
# This is an alternate OID for RSA with SHA1 that is occasionally seen
# Occasionally we run into situations where the version of the Python
# package does not match the version of the shared object that is loaded.
# This may occur in environments where multiple versions of cryptography
# are installed and available in the python path. To avoid errors cropping
# up later, this code checks that the currently imported package and the
# shared object that was loaded have the same version, and raises an
# ImportError if they do not.
# This is a mapping of
# {condition: function-returning-names-dependent-on-that-condition} so we can
# loop over them and delete unsupported names at runtime. It will be removed
# when cffi supports #if in cdef. We use functions instead of just a dict of
# lists so we can use coverage to measure which are used.
# TripleDES encryption is disallowed/deprecated throughout 2023 in
# FIPS 140-3. To keep it simple we denylist any use of TripleDES (TDEA).
# Sometimes SHA1 is still permissible. That logic is contained
# within the various *_supported methods.
# This function enables FIPS mode for OpenSSL 3.0.0 on installs that
# have the FIPS provider installed properly.
# Dedicated check for hashing algorithm use in message digest for
# signatures, e.g. RSA PKCS#1 v1.5 SHA1 (sha1WithRSAEncryption).
# FIPS mode still allows SHA1 for HMAC
# FIPS mode requires AES. TripleDES is disallowed/deprecated in
# FIPS 140-3.
# FIPS 186-4 only allows salt length == digest length for PSS
# It is technically acceptable to set an explicit salt length
# equal to the digest length and this will incorrectly fail, but
# since we don't do that in the tests and this method is
# private, we'll ignore that until we need to do otherwise.
# We only support ECDSA right now.
# We use the `include_extras` parameter of `get_type_hints`, which was
# added in Python 3.9. This can be replaced by the `typing` version
# once the min version is >= 3.9
# Recursively normalize the field type into something that the
# Rust code can understand.
# Due to https://github.com/python/mypy/issues/19731, we can't define an alias
# for `dataclass_transform` that conditionally points to `typing` or
# `typing_extensions` depending on the Python version (like we do for
# `get_type_hints`).
# We work around it by making the whole decorated class conditional on the
# Python version.
# We use `dataclasses.dataclass` to add an __init__ method
# to the class with keyword-only parameters.
# `match_args` was added in Python 3.10 and defaults
# to True
# `kw_only` was added in Python 3.10 and defaults to
# False
# Only add an __init__ method, with keyword-only
# parameters.
# This exists to break an import cycle. It is normally accessible from the
# ciphers module.
# Verify that the key is instance of bytes
# Verify that the key size matches the expected key size
# asymmetric padding module.
# This exists to break an import cycle. These classes are normally accessible
# from the serialization module.
# noqa: N801
# RFC 3394 Key Wrap - 2.2.1 (index method)
# every encryption operation is a discrete 16 byte chunk (because
# AES has a 128-bit block size) and since we're using ECB it is
# safe to reuse the encryptor for the entire operation
# Implement RFC 3394 Key Unwrap - 2.2.2 (index method)
# every decryption operation is a discrete 16 byte chunk so
# it is safe to reuse the decryptor for the entire operation
# pad the key to wrap if necessary
# RFC 5649 - 4.1 - exactly 8 octets after padding
# RFC 5649 - 4.2 - exactly two 64-bit blocks
# 1) Check that MSB(32,A) = A65959A6.
# 2) Check that 8*(n-1) < LSB(32,A) <= 8*n.  If so, let MLI = LSB(32,A).
# 3) Let b = (8*n)-MLI, and then check that the rightmost b octets of
#    the output data are zero.
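The three RFC 5649 integrity checks listed above can be sketched as a standalone function; this is an illustration of the checks only (the constant `A65959A6` and the MLI bounds come straight from the RFC), not the library's unwrap code:

```python
def check_rfc5649_aiv(a, padded_plaintext):
    """RFC 5649 integrity checks on the recovered 8-byte AIV `a`
    and the unwrapped, still-padded plaintext."""
    n = len(padded_plaintext) // 8          # number of 64-bit plaintext blocks
    # 1) MSB(32, A) must equal the fixed constant A65959A6
    if a[:4] != b"\xa6\x59\x59\xa6":
        raise ValueError("integrity check failed")
    # 2) LSB(32, A) carries the message length indicator (MLI)
    mli = int.from_bytes(a[4:], "big")
    if not 8 * (n - 1) < mli <= 8 * n:
        raise ValueError("integrity check failed")
    # 3) the rightmost b = 8*n - MLI padding octets must all be zero
    b = 8 * n - mli
    if b and padded_plaintext[-b:] != b"\x00" * b:
        raise ValueError("integrity check failed")
    return padded_plaintext[:mli]
```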
# Text is a meaningless option unless it is accompanied by
# DetachedSignature
# No attributes implies no capabilities so we'll error if you try to
# pass both.
# The default content encryption algorithm is AES-128, which the S/MIME
# v3.2 RFC specifies as MUST support (https://datatracker.ietf.org/doc/html/rfc5751#section-2.7)
# Only allow options that make sense for encryption
# OpenSSL accepts both options at the same time, but ignores Text.
# We fail defensively to avoid unexpected outputs.
# This function works pretty hard to replicate what OpenSSL does
# precisely. For good and for ill.
# Using get() instead of get_content_type() since it has None as default,
# where the latter has "text/plain". Both methods are case-insensitive.
# A MIMEPart subclass that replicates OpenSSL's behavior of not including
# a newline if there are no headers.
# U2F application string suffixed pubkey
# These are not key types, only algorithms, so they cannot appear
# as a public key type
# re is the only way to work on bytes-like data
# padding for max blocksize
# ciphers that are actually used in key wrapping
# map local curve name to key type
# Confusingly `get_public` is an entry point used by private key
# loading.
# parse header
# load public key data
# load secret data
# see https://bugzilla.mindrot.org/show_bug.cgi?id=3553 for
# information about how OpenSSH handles AEAD tags
# _check_block_size requires data to be a full block so there
# should be no output from finalize
# load per-key struct
# We don't use the comment
# yes, SSH does padding check *after* all other parsing is done.
# need to follow as it writes zero-byte padding too.
# setup parameters
# encode public and private parts together
# top-level structure
# copy result info bytearray
# encrypt in-place
# make mypy happy until we remove DSA support entirely and
# the underlying union won't have a disallowed type
# The signature is encoded as a pair of big-endian integers
# Get the reserved field, which is unused.
# Get the entire cert body and subtract the signature
# RSA certs can have multiple algorithm types
# Hash the binary data
# This is an undocumented limit enforced in the openssh codebase for sshd and
# ssh-keygen, but it is undefined in the ssh certificates spec.
# This is O(n**2)
# Not required
# A zero length list is valid, but means the certificate
# is valid for any principal of the specified type. We require
# the user to explicitly set valid_for_all_principals to get
# that behavior.
# lexically sort our byte strings
# Marshal the bytes to be signed
# RESERVED FIELD
# encode CA public key
# Sigs according to the rules defined for the CA's public key
# (RFC4253 section 6.6 for ssh-rsa, RFC5656 for ECDSA,
# and RFC8032 for Ed25519).
# Just like Golang, we're going to use SHA512 for RSA
# https://cs.opensource.google/go/x/crypto/+/refs/tags/
# v0.4.0:ssh/certs.go;l=445
# RFC 8332 defines SHA256 and 512 as options
# load_ssh_public_identity returns a union, but this is
# guaranteed to be an SSHCertificate, so we cast to make
# mypy happy.
# This lambda_n is the Carmichael totient function.
# The original RSA paper uses the Euler totient function
# here: phi_n = (p - 1) * (q - 1)
# Either version of the private exponent will work, but the
# one generated by the older formulation may be larger
# than necessary. (lambda_n always divides phi_n)
# TODO: Replace with lcm(p - 1, q - 1) once the minimum
# supported Python version is >= 3.9.
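The Carmichael-totient derivation above can be written out as a small sketch: `d = e^{-1} mod lambda(n)` with `lambda(n) = lcm(p-1, q-1)`, computed without `math.lcm` so it works before Python 3.9. This is an illustration of the math, not the library's code:

```python
from math import gcd

def private_exponent(e, p, q):
    """RSA private exponent via the Carmichael totient lambda(n) = lcm(p-1, q-1).

    Either lambda(n) or Euler's phi(n) = (p-1)*(q-1) works, but lambda(n)
    always divides phi(n), so the resulting d may be smaller.
    """
    lambda_n = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm without math.lcm (3.9+)
    return pow(e, -1, lambda_n)                        # modular inverse (Python 3.8+)
```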
# Controls the number of iterations rsa_recover_prime_factors will perform
# to obtain the prime factors.
# reject invalid values early
# See 8.2.2(i) in Handbook of Applied Cryptography.
# The quantity d*e-1 is a multiple of phi(n), even,
# and can be represented as t*2^s.
# Cycle through all multiplicative inverses in Zn.
# The algorithm is non-deterministic, but there is a 50% chance
# any candidate a leads to successful factoring.
# See "Digitalized Signatures and Public Key Functions as Intractable
# as Factorization", M. Rabin, 1979
# Cycle through all values a^{t*2^i}=a^k
# Check if a^k is a non-trivial root of unity (mod n)
# We have found a number such that (cand-1)(cand+1)=0 (mod n).
# Either of the terms divides n.
# Found !
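The factor-recovery algorithm sketched in the comments above (Handbook of Applied Cryptography 8.2.2(i): write `d*e - 1 = t * 2^s`, then hunt for a non-trivial square root of 1 mod n) can be illustrated like this; it is a simplified, non-hardened sketch, not the library's `rsa_recover_prime_factors`:

```python
from math import gcd
import random

def recover_prime_factors(n, e, d):
    """Recover p and q from (n, e, d). d*e - 1 is an even multiple of
    phi(n); write it as t * 2^s with t odd."""
    t = d * e - 1
    while t % 2 == 0:
        t //= 2
    while True:
        # Non-deterministic: roughly half of all candidates a succeed
        a = random.randrange(2, n - 1)
        g = gcd(a, n)
        if g != 1:
            return g, n // g        # lucky hit: a already shares a factor
        cand = pow(a, t, n)
        # Cycle through a^(t*2^i) looking for a non-trivial root of unity
        while cand not in (1, n - 1):
            prev, cand = cand, pow(cand, 2, n)
            if cand == 1:
                # prev^2 = 1 (mod n) with prev != +-1, so
                # (prev-1)(prev+1) = 0 (mod n) and either term splits n
                p = gcd(prev - 1, n)
                return p, n // p
        # only trivial roots for this a; try another candidate
```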
# bit length - 1 per RFC 3447
# Every asymmetric key type
# Just the key types we allow to be used for x509 signing. This mirrors
# the certificate public key types
# the certificate private key types
# This type removes DHPublicKey. x448/x25519 can be a public key
# but cannot be used in signing so they are allowed here.
# mypy needs this assert to narrow the type from our generic
# type. Maybe it won't be needed some time in the future.
# OpenSSL 3.0.0 constrains GCM IVs to [64, 1024] bits inclusive
# This is a sane limit anyway so we'll enforce it here.
# 512 added to support AES-256-XTS, which uses 512-bit keys
# inverse floor division (equivalent to ceiling)
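The "inverse floor division" trick mentioned above computes a ceiling without floating point: negate, floor-divide, negate again.

```python
def ceil_div(a, b):
    # Ceiling division via "inverse floor division": -(-a // b)
    return -(-a // b)
```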
# For counter mode, the number of iterations shall not be
# larger than 2^r-1, where r <= 32 is the binary length of the counter
# This ensures that the counter values used as an input to the
# PRF will not repeat during a particular call to the KDF function.
# This is used by the scrypt tests to skip tests that require more memory
# than the MEM_LIMIT
# Not actually supported, marker for tests
# This class only allows RC2 with a 128-bit key. No support for
# effective key bits or other key sizes is provided.
# Copyright (c) 2024 Evan Welsh
# SPDX-License-Identifier: GPL-3.0+
# License-Filename: LICENSES/GPL-3.0
# Inspired by panels/common/gsd-device-manager.c in GNOME Settings
# https://gitlab.gnome.org/GNOME/gnome-control-center/-/blob/6d2add0e30538692c151e5fa0bd94ae9bece7690/panels/common/gsd-device-manager.c
# Copyright (c) 2011 John Stowers
# Search Button
# Left label
# Left AppMenu
# Right Header
# self.main_box.set_size_request(540, -1)
# self.main_stack.add_css_class("background")
# Text in entry is selected, deselect it
# Clear the page selection when going back to allow
# re-selecting it.
# Translators: Placeholder will be replaced with "GNOME Extensions" in active link form
# Translators: Placeholder will be replaced with "Flathub" in active link form
# TRANSLATORS: Add your name/nickname here (one name per line),
# they will be displayed in the "about" dialog
# map of tweakgroup.name -> tweakgroup
# We can't know where the schema owner was installed, let's assume it's
# the same prefix as ours
# summary is 'compulsory', description is optional
# …in theory, but we should not barf on bad schemas ever
# if missing translations, use the untranslated values
# not present
# some themes etc. are actually called default. Ick. Don't show them if they
# are not the actual default value
# indicates the default theme, e.g. Adwaita (default)
# prefer user directories first
# if it contains X-GNOME-Autostart-enabled=false then it has been
# disabled by the user in the session applet, otherwise it is enabled
# check the system directories
#copy the original file to the new file, but add the extra exec args
# Ensure we don't error out
# variant override doesn't support .items()
# while I could store meta type information in the VARIANT_TYPES
# dict, it's easiest to do default value handling and missing value
# checks in dedicated functions
# Copyright (c) 2012 Cosimo Cecchi
# For Atk, indicate that the rightmost widget, usually the switch relates to the
# label. By convention this is true in the great majority of cases. Settings that
# construct their own widgets will need to set this themselves
# 600 = GTK_STYLE_PROVIDER_PRIORITY_APPLICATION
# Skip empty groups...
# FIXME: need to add remove_tweak_row and remove_tweak (which clears
# the search cache etc)
# Load the current font
# TODO: Port to AdwSpinRow
# returned variant is range:(min, max)
# check key_options is iterable
# and if supplied, check it is a list of 2-tuples
# Translators: For RTL languages, this is the "Right" direction since the
# interface is flipped
# Translators: For RTL languages, this is the "Left" direction since the
# interface is flipped
# Want even number
# Turn off Global Dark Theme when theme is changed.
# https://bugzilla.gnome.org/783666
#check the shell is running and the usertheme extension is present
#include both system, and user themes
#note: the default theme lives in /system/data/dir/gnome-shell/theme
# add default theme directory since some alternative themes are installed here
#the default value to reset the shell is an empty string
# load the schema from the user installation of User Themes if it exists
# build a combo box with all the valid theme options
#new style theme - extract the name from the json file
#old style themes name was taken from the zip name
# does not look like a valid theme
# set button back to default state
# TODO: The current installer is brittle and the interaction doesn't make sense
# (you select a file and then it is un-selected with notifications informing
# you if it installed correctly)
# grp_led is unsupported
# Build header bar buttons
# Preference Group Header
# Body
# Empty Page
# kernel process
###{standalone
###}
# Adding a new option needs to be done in multiple places:
# - In the dictionary below. This is the primary truth of which options `Lark.__init__` accepts
# - In the docstring above. It is used both for the docstring of `LarkOptions` and `Lark`, and in readthedocs
# - As an attribute of `LarkOptions` above
# - Potentially in `_LOAD_ALLOWED_OPTIONS` below this class, when the option doesn't change how the grammar is loaded
# - Potentially in `lark.tools.__init__`, if it makes sense, and it can easily be passed as a cmd argument
# Options that can be passed to the Lark parser, even when it was loaded from cache/standalone.
# These options are only used outside of `load_grammar`.
# Update which fields are serialized
# Set regex or re module
# Some, but not all file-like objects have a 'name' attribute
# Drain file-like objects to get their contents
# The exception raised may be ImportError or OSError in
# the future.  For the cache, we don't care about the
# specific reason - we just want a username.
# Remove options that aren't relevant for loading from cache
# The cache file doesn't exist; parse and compose the grammar as normal
# We should probably narrow down which errors we catch here.
# In theory, the Lark instance might have been messed up by the call to `_load`.
# In practice the only relevant thing that might have been overwritten should be `options`
# Parse the grammar file and compose the grammars
# XXX Is this really important? Maybe just ensure interface compliance
# For lexer-only mode, keep all terminals
# Compile the EBNF grammar into BNF
# If the user asked to invert the priorities, negate them all here.
# Else, if the user asked to disable priorities, strip them from the
# rules and terminals. This allows the Earley parsers to skip an extra forest walk
# for improved performance, if you don't need them (or didn't specify any).
# TODO Deprecate lexer_callbacks?
# we don't need these callbacks if we aren't building a tree
# Not all, but multiple attributes are used
# ensure it's a tree, parse if necessary and possible
# Transformers
# Make sure the function isn't inherited (unless it's overwritten)
# Skip if v_args already applied (at the function level)
# Assumes tree is already transformed
# XXX Deprecated
# Cancel recursion
# Tree to postfix
# Postfix to tree
# We should have only one tree remaining
# There are no guarantees on the type of the value that calling a user func for a
# child will produce. This means the type system can't statically know that the
# final result is _Return_T. As a result a cast is required.
# Visitors
# There are no guarantees on the type of the value that calling a user func for a
# child will produce. So only annotate the public method and use an internal method when
# visiting child trees.
# Decorators
# Use the __get__ attribute of the type instead of the instance
# to fully mirror the behavior of getattr
# --- Visitor Utilities ---
# TODO this is here because self.name can be a Token instance.
# Set-up parser
# From cache
# Set-up lexer
# TODO BREAK - Change text from Optional[str] to text: str = ''.
# not custom lexer?
# Lexer Implementation
# For the standalone parser, we need to make sure that has_interegular is False to avoid NameErrors later on
# Pattern Hashing assumes all subclasses have a different priority!
# We represent a generated terminal
# Python sets an unreasonable group limit (currently 100) in its re module
# Worse, the only way to know we reached it is by catching an AssertionError!
# This function recursively tries less and less groups until it's successful.
# Yes, this is what Python provides us... :/
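The group-limit workaround described above can be sketched as follows. This is a hypothetical `compile_union` helper, not the library's actual function; it joins patterns into one alternation of named groups and recursively halves the list when compilation fails:

```python
import re

def compile_union(patterns):
    """Compile an alternation of named groups into one or more regexes,
    splitting the pattern list in half whenever Python's re module
    rejects it for having too many groups (an AssertionError on some
    versions, re.error on others).
    """
    try:
        joined = '|'.join('(?P<%s>%s)' % (name, p) for name, p in patterns)
        return [re.compile(joined)]
    except (AssertionError, re.error):
        if len(patterns) == 1:
            raise  # a single pattern that still fails is a real error
        mid = len(patterns) // 2
        return compile_union(patterns[:mid]) + compile_union(patterns[mid:])
```

The caller then tries each compiled regex in turn instead of a single big one.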
# Advance the line-count until line_ctr.char_pos == text.start
# When in strict mode, we only ever try to provide one example, so taking
# a long time for that should be fine
# We don't want to show too many collisions.
# Mark this pair to not repeat warnings when multiple different BasicLexers see the same collision
# Notify the user
# Couldn't find an example within max_time steps.
# Sanitization
# Already a callback there, probably UnlessCallback
# We don't need to verify all terminals again
# In the contextual lexer, UnexpectedCharacters can mean that the terminal is defined, but not in the current context.
# This tests the input against the global context, to provide a nicer error.
# Save last_token. Calling root_lexer.next_token will change this to the wrong token
# Raise the original UnexpectedCharacters. The root lexer raises it with the wrong expected set.
# XXX TODO calling compile twice returns different results!
# If postlexer's always_accept is used, we need to recompile the grammar with empty terminals-to-keep
# Choose the best rule from each group of {rule => [rule.alias]}, since we only really need one derivation.
# Skip self-recursive constructs
# validate
# TODO: ambiguity?
# TODO pass callbacks through dict, instead of alias?
# find a full derivation
# Calculate positions while the tree is streaming, according to the rule:
# - nodes start at the start of their first child's container, and end at the end of their last child's container.
# Containers are nodes that take up space in text, but have been inlined in the tree.
# meta was already set, probably because the rule has been inlined (e.g. `?rule`)
# Optimize for left-recursion
# Prepare empty_indices as: How many Nones to insert at each index?
# LALR without placeholders
# -- When we're repeatedly expanding ambiguities we can end up with nested ambiguities.
# Due to the structure of the SPPF,
# an '_iambig' node can only appear as the first child of an '_ambig' node.
# A function name in a Transformer is a rule name.
# Set to highest level, since we have some warnings amongst the code
# By default, we should not output any log messages
# Object
# Since `sre_parse` cannot deal with Unicode categories of the form `\p{Mn}`, we replace these with
# a simple letter, which makes no difference as we are only trying to get the possible lengths of the regex
# match here below.
# Fixed in next version (past 0.960) of typeshed
# sre_parse does not support the new features in regex. To not completely fail in that case,
# we manually test for the most important info (whether the empty string is matched)
# Python 3.11.7 introduced sre_parse.MAXWIDTH, which is used instead of MAXREPEAT
# See lark-parser/lark#1376 and python/cpython#109859
# MAXREPEAT is a non-picklable subclass of int, so it needs to be converted to enable caching
# type: ignore[func-returns-value]
# TODO reversible?
# assert value is None or isinstance(value, (int, float, str, tuple)), value
# Grammar Parser
# Value 5 keeps the number of states in the lalr parser somewhat minimal
# It isn't optimal, but close to it. See PR #949
# The threshold at which repeats via ~ are split up into different rules.
# 50 is chosen since it keeps the number of states low and therefore lalr analysis time low,
# while not being too overaggressive and unnecessarily creating rules that might create shift/reduce conflicts.
# (See PR #949)
# For a small number of repeats, we can take the naive approach
# For large repeat values, we break the repetition into sub-rules.
# We treat ``rule~mn..mx`` as ``rule~mn rule~0..(diff=mx-mn)``.
# We then use small_factors to split mn and diff into values [(a, b), ...]
# These values are used with the help of _add_repeat_rule and _add_repeat_rule_opt
# to generate a complete rule/expression that matches the corresponding number of repeats
# We add one because _add_repeat_opt_rule generates rules that match one less
# Match rule 1 times
# match rule 0 times (e.g. up to 1 -1 times)
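For illustration, the naive small-repeat path mentioned above amounts to enumerating one alternative per repeat count. A sketch (not the library's actual helper):

```python
def expand_repeat(symbol, mn, mx):
    """Naively expand `symbol ~ mn..mx` into one alternative per repeat
    count. Fine for small counts; large counts get factored into
    sub-rules instead, to keep the parser's state count down."""
    return [[symbol] * n for n in range(mn, mx + 1)]
```

For example, `c ~ 1..3` becomes the alternatives `c`, `c c`, and `c c c`.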
# a : b c+ d
# a : b _c d
# _c : _c c | c;
# a : b c* d
# a : b _c? d
# rules_list unpacking
# a : b (c|d) e
# a : b c e | b d e
# In AST terms:
# expansion(b, expansions(c, d), e)
# expansions( expansion(b, c, e), expansion(b, d, e) )
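The `a : b (c|d) e` rewrite above is a cross-product over alternative slots; a minimal sketch:

```python
from itertools import product

def distribute(slots):
    """Expand a rule body where each slot lists its alternatives.
    [['b'], ['c', 'd'], ['e']] stands for `b (c|d) e` and yields the
    flat expansions `b c e` and `b d e`.
    """
    return [list(alt) for alt in product(*slots)]
```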
# Ensure all children are unique
# dedup is expensive, so try to minimize its use
# Double alias not allowed
# If already defined, use the user-defined terminal name
# Try to assign an indicative anon-terminal name
# Kind of a weird placement
# Do a bit of sorting to make sure that the longest option is returned
# (Python's re module otherwise prefers just 'l' when given (l|ll) and both could match)
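The sorting trick above can be demonstrated directly with Python's re module (the patterns here are illustrative):

```python
import re

# Unsorted: the alternation is tried left to right, so 'l' wins
assert re.match('l|ll', 'llama').group() == 'l'

# Sorted longest-first: 'll' is tried first and matches
pats = sorted(['l', 'll'], key=len, reverse=True)
assert re.match('|'.join(pats), 'llama').group() == 'll'
```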
# We change the trees in-place (to support huge grammars)
# So deepcopy allows calling compile more than once.
# Convert terminal-trees to strings/regexps
# Terminal added through %declare
# 1. Pre-process terminals
# Adds to terminals
# 2. Inline Templates
# 3. Convert EBNF to BNF (and apply step 1 & 2)
# We have to do it like this because rule_defs might grow due to templates
# Don't transform templates
# 4. Compile tree to Rule objects
# Remove duplicates of empty rules, throw error for non-empty duplicates
# Empty rule; assert all other attributes are equal
# Filter out unused rules
# Filter out unused terminals
# Check whether or not the importing grammar was loaded by this module.
# Technically false, but FileNotFoundError doesn't exist in Python 2.7, and this message should never reach the end user anyway
# TODO Solve with transitive closure (maybe)
# Not just declared
# Illegal
# recover to a new line
# already sorted
# TODO: is this needed?
# For the grammar parser
# TODO: think about what to do with 'options'
# Keep terminal name, no need to create a new definition
# Multi import
# Can't have aliased multi import, so all aliases will be the same as names
# Single import
# Get name from dotted path
# Aliases if exist
# Import from library
# Relative import
# Import relative to script file path if grammar is coded in script
# Import relative to grammar file path if external grammar file
# TODO terminal templates
# priority
# if mangle is not None, we shouldn't apply ignore, since we aren't in a toplevel grammar
# Search failed. Make Python throw a nice error.
# Remaining checks don't apply to abstract rules/terminals (created with %declare)
# resolve_term_references(term_defs)
# We don't know how to load the path. ignore it.
# Try exact match first
# Fallback to token types match
# , line=-1, column=-1, pos_in_stream=-1)
# TODO considered_tokens and allowed can be figured out using state
# TODO considered_rules and expected can be figured out using state
# XXX deprecate? `accepts` is better
# TODO use orig_expansion.rulename to support templates
# Tabs and spaces
# XXX Hack for ContextualLexer. Maybe there's a more elegant solution?
# Author: Erez Shinan (2017)
# Email : erezshin@gmail.com
# digraph and traverse, see The Theory and Practice of Compiler Writing
# computes F(x) = G(x) union (union { G(y) | x R y })
# X: nodes
# R: relation (function mapping node -> list of nodes that satisfy the relation)
# G: set valued function
# this is always true for the first iteration, but N[x] may be updated in traverse below
# x: single node
# S: stack
# N: weights
# R: relation (see above)
# F: set valued function we are computing (map of input -> output)
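The digraph/traverse scheme sketched in the comments above can be written compactly as follows. Variable names mirror the comments; members of a strongly connected component end up sharing a single result set. This is a sketch of the general technique, not the library's exact code:

```python
def digraph(X, R, G):
    """Compute F(x) = G(x) | union{ G(y) | x R y } for every node,
    collapsing strongly connected components so their members share
    one result set (Tarjan-style bookkeeping via S and N)."""
    F = {}
    S = []                   # stack of nodes currently being visited
    N = dict.fromkeys(X, 0)  # weights: 0 = unvisited, inf = finished

    def traverse(x):
        S.append(x)
        d = len(S)
        N[x] = d
        F[x] = set(G[x])
        for y in R(x):
            if N[y] == 0:
                traverse(y)
            N[x] = min(N[x], N[y])
            F[x] |= F.get(y, set())
        if N[x] == d:        # x is the root of its SCC
            while True:
                z = S.pop()
                N[z] = float('inf')
                F[z] = F[x]  # every SCC member shares x's set
                if z == x:
                    break

    for x in X:
        if N[x] == 0:
            traverse(x)
    return F
```

A cycle between two nodes makes both receive the union of their G sets, which is exactly the behavior needed for FOLLOW-set style computations.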
# map of kernels to LR0ItemSets
# handle start state
# if s is not a nonterminal
# if s2 is a terminal
# traverse the states for rp(.rule)
# state2 is at the final state for rp.rule
# Try to resolve conflict based on priority
# compute end states
# Overridden by StableSymbolNode
### We use inf here as it can be safely negated without resorting to conditionals.
# Visiting is a list of IDs of all symbol/intermediate nodes currently in
# the stack. It serves two purposes: to detect when we 'recurse' in and out
# of a symbol/intermediate so that we can process both up and down. Also,
# since the SPPF can have cycles it allows us to detect if we're trying
# to recurse into a node that's already on the stack (infinite recursion).
# set of all nodes that have been visited
# a list of nodes that are currently being visited
# used for the `on_cycle` callback
# We do not use recursion here to walk the Forest due to the limited
# stack size in python. Therefore input_stack is essentially our stack.
# It is much faster to cache these as locals since they are called
# many times in large parses.
### If the current object is not an iterator, pass through to Token/SymbolNode
# results of transformations
# used to track parent nodes
# called when transforming children of symbol nodes
# data is a list of trees or tokens that correspond to the
# symbol's rule expansion
# called when transforming a symbol node
# data is a list of trees where each tree's data is
# equal to the name of the symbol or one of its aliases.
#### Try and be above the Python object ID range; probably impl. specific, but maybe this is okay.
# Necessary for match_examples() to work
# XXX copy
# shift once and return
# reduce+shift as many times as necessary
# state generation ensures no duplicate LR0ItemSets
# foreach grammar rule X ::= Y(1) ... Y(k)
# if k=0 or {Y(1),...,Y(k)} subset of NULLABLE then
# for i = 1 to k
# until none of NULLABLE,FIRST,FOLLOW changed in last iteration
# Calculate NULLABLE and FIRST
# Calculate FOLLOW
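The NULLABLE/FIRST part of that fixed-point iteration can be sketched like this (FOLLOW iterates the same way; the grammar encoding and names here are illustrative, not the library's API):

```python
def calculate_sets(rules, terminals):
    """Iterate until NULLABLE and FIRST stop changing. `rules` maps each
    nonterminal to a list of right-hand sides (lists of symbols);
    anything in `terminals` has FIRST = {itself}."""
    NULLABLE = set()
    FIRST = {t: {t} for t in terminals}
    FIRST.update({nt: set() for nt in rules})
    changed = True
    while changed:
        changed = False
        for X, bodies in rules.items():
            for body in bodies:
                # X is nullable if every symbol of some body is nullable
                # (trivially true for an empty body)
                if X not in NULLABLE and all(s in NULLABLE for s in body):
                    NULLABLE.add(X)
                    changed = True
                # FIRST(X) absorbs FIRST(Y_i) while Y_1..Y_(i-1) are nullable
                for s in body:
                    new = FIRST[s] - FIRST[X]
                    if new:
                        FIRST[X] |= new
                        changed = True
                    if s not in NULLABLE:
                        break
    return NULLABLE, FIRST
```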
# cache RulePtr(r, 0) in r (no duplicate RulePtr objects)
# if not empty rule
# 1) Loop the expectations and ask the lexer to match.
# Since regexp is forward looking on the input stream, and we only
# want to process tokens when we hit the point in the stream at which
# they complete, we push all tokens into a buffer (delayed_matches), to
# be held possibly for a later parse step when we reach the point in the
# input stream at which they complete.
# XXX The following 3 lines were commented out for causing a bug. See issue #768
# # Remove any items that successfully matched in this pass from the to_scan buffer.
# # This ensures we don't carry over tokens that already matched, if we're ignoring below.
# to_scan.remove(item)
# 3) Process any ignores. This is typically used for e.g. whitespace.
# We carry over any unmatched items from the to_scan buffer to be matched again after
# the ignore. This should allow us to use ignored symbols in non-terminals to implement
# e.g. mandatory spacing.
# Carry over any items still in the scan buffer, to past the end of the ignored items.
# If we're ignoring up to the end of the file, carry over the start symbol if it already completed.
## 4) Process Tokens from delayed_matches.
# This is the core of the Earley scanner. Create an SPPF node for each Token,
# and create the symbol node in the SPPF tree. Advance the item that completed,
# and add the resulting new item to either the Earley set (for processing by the
# completer/predictor) or the to_scan buffer for the next parse step.
# add (B ::= Aai+1.B, h, y) to Q'
# add (B ::= Aai+1.B, h, y) to Ei+1
# No longer needed, so unburden memory
# Cache for nodes & tokens created in a particular parse step.
## The main Earley loop.
# Run the Prediction/Completion cycle for any Items in the current Earley set.
# Completions will be added to the SPPF tree, and predictions will be recursively
# processed down to terminals/empty nodes to be added to the scanner for the next parse step.
## Column is now the final column in the parse.
# TODO add typing info
## These could be moved to the grammar analyzer. Pre-computing these is *much* faster than
## Detect if any rules/terminals have priorities set. If the user specified priority = None, then
# Check terminals for priorities
# Ignore terminal priorities if the basic lexer is used
# Held Completions (H in E. Scott's paper).
# R (items) = Ei (column.items)
# remove an element, A say, from R
### The Earley completer
### (item.s == string)
# create_leo_transitives(item.rule.origin, item.start)
###R Joop Leo right recursion Completer
# Add (B :: aC.B, h, y) to Q
# Add (B :: aC.B, h, y) to Ei and R
###R Regular Earley completer
# Empty has 0 length. If we complete an empty symbol in a particular
# parse step, we need to be able to use that same empty symbol to complete
# any predictions that result, that themselves require empty. Avoids
# infinite recursion on empty symbols.
# held_completions is 'H' in E. Scott's paper.
### The Earley predictor
### (item.s == lr0)
# Process any held completions (H).
# def create_leo_transitives(origin, start):
# 'terminals' may not contain token.type when using %declare
# Additionally, token is not always a Token
# For example, it can be a Tree when using TreeMatcher
# Set the priority of the token node to 0 so that the
# terminal priorities do not affect the Tree chosen by
# ForestSumVisitor after the basic lexer has already
# "used up" the terminal priorities
# Define parser functions
# The scan buffer. 'Q' in E. Scott's paper.
## Predict for the start_symbol.
# Add predicted items to the first Earley set (for the predictor) if they
# result in a non-terminal, or the scanner if they result in a terminal.
# If the parse was successful, the start
# symbol should have been completed in the last step of the Earley cycle, and will be in
# this column. Find the item for the start_symbol, which is the root of the SPPF tree.
# Perform our SPPF -> AST conversion
# Disable the ForestToParseTree cache when ambiguity='resolve'
# to prevent a tree construction bug. See issue #1283
# return the root of the SPPF
# If user didn't change the character position, then we should
# ptr
# j
# w
# class TransitiveItem(Item):
# This module provides a LALR interactive parser, which is used for debugging and error handling
# We don't want to call callbacks here since those might have arbitrary side effects
# and are unnecessarily slow.
# is terminal?
# Author: https://github.com/ehudt (2018)
# Adapted by Erez
# Parse tree data structures
# Check if the parse succeeded.
# The CYK table. Indexed with a 2-tuple: (start pos, end pos)
# Top-level structure is similar to the CYK table. Each cell is a dict from
# rule name to the best (lightest) tree for that rule.
# Populate base case with existing terminal production rules
# Iterate over lengths of sub-sentences
# Iterate over sub-sentences with the given length
# Choose partition of the sub-sentence in [1, l)
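The table iteration described above, written out as a minimal CYK recognizer for a grammar already in CNF (the rule encoding is illustrative, not the library's actual data structures):

```python
def cyk_parse(tokens, unit_rules, pair_rules, start):
    """Minimal CYK recognizer for a CNF grammar. unit_rules maps a
    terminal to the nonterminals that produce it; pair_rules maps a
    pair (B, C) to the nonterminals A having a rule A -> B C."""
    n = len(tokens)
    # Base case: length-1 cells, from the terminal production rules
    table = {(i, i): set(unit_rules.get(t, ())) for i, t in enumerate(tokens)}
    for length in range(2, n + 1):        # lengths of sub-sentences
        for i in range(n - length + 1):   # start of each sub-sentence
            j = i + length - 1
            cell = set()
            for k in range(i, j):         # partition point of the sub-sentence
                for B in table[(i, k)]:
                    for C in table[(k + 1, j)]:
                        cell |= pair_rules.get((B, C), set())
            table[(i, j)] = cell
    return start in table[(0, n - 1)]
```

The real parser additionally stores the lightest tree per rule in each cell instead of a bare set, which is what makes the result a best derivation rather than a yes/no answer.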
# This section implements a converter from context-free grammars to Chomsky
# normal form. It also implements a conversion of parse trees from the CNF
# back to the original grammar.
# Overview:
# Applies the following operations in this order:
# * TERM: Eliminates non-solitary terminals from all rules
# * BIN: Eliminates rules with more than 2 symbols on their right-hand-side.
# * UNIT: Eliminates non-terminal unit rules
# The following grammar characteristics aren't featured:
# * Start symbol appears on RHS
# * Empty rules (epsilon rules)
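Of the operations listed above, BIN is the easiest to sketch: long right-hand sides are chopped into fresh two-symbol rules. A minimal sketch with illustrative rule and name conventions:

```python
def binarize(rules):
    """BIN step of CNF conversion: rewrite A -> X1 X2 ... Xk (k > 2)
    into a chain of fresh rules with at most two symbols on the RHS."""
    out = []
    for lhs, rhs in rules:
        rhs = list(rhs)
        while len(rhs) > 2:
            fresh = '__%s_%d' % (lhs, len(out))  # fresh helper nonterminal
            out.append((fresh, rhs[-2:]))        # fresh -> last two symbols
            rhs = rhs[:-2] + [fresh]             # fold them into one symbol
        out.append((lhs, rhs))
    return out
```

Reverting BIN (as the comments below describe) is then just inlining the fresh helper nonterminals back into their parents.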
# Validate that the grammar is CNF and populate auxiliary data structures.
# Reverts TERM rule.
# Reverts BIN rule.
# Reverts UNIT rule.
# Based on warnings._showwarnmsg_impl
# ----------------------------------
# Generates a stand-alone LALR(1) parser
# Git:    https://github.com/erezsh/lark
# Author: Erez Shinan (erezshin@gmail.com)
# Docstring
# if not this file
# TODO Add support for macros!
# For usage of lark with PyInstaller. See https://pyinstaller-sample-hook.readthedocs.io/en/latest/index.html
# Copyright (c) 2017-2020, PyInstaller Development Team.
# Distributed under the terms of the GNU General Public License (version 2
# or later) with exception for distributing the bootloader.
# The full license is in the file COPYING.txt, distributed with this software.
# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
# defusedxml
# Copyright (c) 2013 by Christian Heimes <christian@python.org>
# expat 1.2
# Limit maximum request size to prevent resource exhaustion DoS
# Also used to limit maximum amount of gzip decoded data in order to prevent
# decompression bombs
# A value of -1 or smaller disables the limit
# 30 MB
# no limit
# response doesn't support tell() and read(), required by
# GzipFile
# Fail early when pyexpat is not installed correctly
# restore module
# restore attribute on original package
# patch pure module to use ParseError from C extension
# Python 2.x old style class
# the 'html' argument has been deprecated and ignored in all
# supported versions of Python. Python 3.8 finally removed it.
# XMLParse is a typo, keep it for backwards compatibility
# blacklist = (etree._Entity, etree._ProcessingInstruction, etree._Comment)
# 'remove_comments': True,
# 'remove_pis': True,
# lxml < 3 has no iterentities()
# iterparse from ElementTree!
# This module is an alias for ElementTree just like xml.etree.cElementTree
# if self._options.entities:
# XXX: For some reason c_bytesize may be None here (probably when python
## Constructors
## Destructors
## Query methods
## Arithmetic
## Comparisons
## METHODS ##
# else try to create a SizeStruct instance from it
## INTERNAL METHODS ##
# needed to make sum() work with Size arguments
# pickling support for Size
# see https://docs.python.org/3/library/pickle.html#object.__reduce__
# https://bitbucket.org/pypy/pypy/issue/1803
# This happens in some cases where the stream was already
# closed.  In this case, we assume the default.
# We need to figure out if the given stream is already binary.
# This can happen because the official docs recommend detaching
# the streams to get binary streams.  Some code might do this, so
# we need to deal with this case explicitly.
# Same situation here; this time we assume that the buffer is
# actually binary in case it's closed.
# If the stream does not have an encoding set, we assume it's set
# to ASCII.  This appears to happen in certain unittest
# environments.  It's not quite clear what the correct behavior is
# but this at least will force Click to recover somehow.
# If the stream looks compatible, and won't default to a
# misconfigured ascii encoding, return it as-is.
# Otherwise, get the underlying binary reader.
# If that's not possible, silently use the original reader
# and get mojibake instead of exceptions.
# Default errors to replace instead of strict in order to get
# something that works.
# Wrap the binary stream in a text stream with the correct
# encoding parameters.
# Standard streams first. These are simple because they ignore the
# atomic flag. Use fsdecode to handle Path("-").
# Non-atomic writes directly go out through the regular open functions.
# Some usability stuff for atomic writes
# Atomic writes are more complicated.  They work by opening a file
# as a proxy in the same folder and then using the fdopen
# functionality to wrap it in a Python file.  Then we wrap it in an
# atomic file that moves the file over on close.
# in case perm includes bits in umask
# On Windows, wrap the output streams with colorama to support ANSI
# color codes.
# NOTE: double check is needed so mypy does not analyze this on Linux
# variant: no call, directly as decorator for a function.
# variant: with positional name and with positional or keyword cls argument:
# @command(namearg, CommandCls, ...) or @command(namearg, cls=CommandCls, ...)
# variant: name omitted, cls _must_ be a keyword argument, @command(cls=CommandCls, ...)
# variant: with optional string name, no cls argument provided.
# @group(namearg, GroupCls, ...) or @group(namearg, cls=GroupCls, ...)
# variant: name omitted, cls _must_ be a keyword argument, @group(cls=GroupCls, ...)
# Only Bash >= 4.4 has the nosort option.
# Fish stores the partial word in both COMP_WORDS and
# COMP_CWORD, remove it from complete args.
# Raised when end-of-string is reached in an invalid state. Use
# the partial token as-is. The quote or escape character is in
# lex.state, not lex.token.
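That recovery looks roughly like this: split shell-style words, but tolerate an unterminated quote or trailing escape by keeping the partial token. A sketch built on the stdlib `shlex` module:

```python
import shlex

def split_arg_string(string):
    """Split like the shell, but when the lexer hits end-of-string in an
    invalid state (open quote / dangling escape), keep the partial token
    as-is instead of raising."""
    lex = shlex.shlex(string, posix=True)
    lex.whitespace_split = True
    lex.commenters = ''
    out = []
    try:
        for token in lex:
            out.append(token)
    except ValueError:
        # The partial word is in lex.token; the offending quote or
        # escape character is in lex.state, not in the token.
        if lex.token:
            out.append(lex.token)
    return out
```

This matters for completion, where the line being completed is almost always syntactically unfinished.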
# Will be None if expose_value is False.
# Different shells treat an "=" between a long option name and
# value differently. Might keep the value joined, return the "="
# as a separate item, or return the split name and value. Always
# split and discard the "=" to make completion easier.
# The "--" marker tells Click to stop treating values as options
# even if they start with the option character. If it hasn't been
# given and the incomplete arg looks like an option, the current
# command will provide option name completions.
# If the last complete arg is an option name with an incomplete
# value, the option will provide value completions.
# It's not an option name or value. The first argument without a
# parsed value will provide value completions.
# There were no unparsed arguments, the command may be a group that
# will provide command name completions.
# There are no standard streams attached to write to. For example,
# pythonw on Windows.
# Iteration is defined in terms of a generator function,
# returned by iter(self); use that to define next(). This works
# because `self.iter` is an iterable consumed by that generator,
# so it is re-entry safe. Calling `next(self.generator())`
# twice works and does "what you want".
# Only output the label once if the output is not a TTY.
# Update width in case the terminal has been resized
# Render the line only if it changed.
# self.avg is a rolling list of length <= 7 of steps where steps are
# defined as time elapsed divided by the total progress through
# self.length.
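A simplified sketch of that estimator as a standalone class (hypothetical; the real progress bar folds this into its render logic):

```python
from collections import deque

class EtaEstimator:
    """Keep a rolling window (maxlen 7, mirroring the comment above) of
    per-unit step times and extrapolate the time remaining."""

    def __init__(self, total):
        self.total = total
        self.done = 0
        self.steps = deque(maxlen=7)

    def update(self, elapsed, n=1):
        # One "step" is elapsed time divided by the progress it bought
        self.steps.append(elapsed / n)
        self.done += n

    def eta(self):
        if not self.steps or self.done >= self.total:
            return 0.0
        avg = sum(self.steps) / len(self.steps)
        return avg * (self.total - self.done)
```

The bounded deque means a sudden slowdown or speedup dominates the estimate within a few updates instead of being averaged away over the whole run.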
# WARNING: the iterator interface for `ProgressBar` relies on
# this and only works because this is a simple generator which
# doesn't create or manage additional state. If this function
# changes, the impact should be evaluated both against
# `iter(bar)` and `next(bar)`. `next()` in particular may call
# `self.generator()` repeatedly, and this must remain safe in
# order for that interface to work.
# This allows show_item_func to be updated before the
# item is processed. Only trigger at the beginning of
# the update interval.
# Split and normalize the pager command into parts.
# Split the command into the invoked CLI and its parameters.
# Resolves symlinks and produces a normalized absolute path string.
# Make a local copy of the environment to not affect the global one.
# If we're piping to less and the user hasn't decided on colors, we enable
# them by default if we find the -R flag in the command line arguments.
# In case the pager exited unexpectedly, ignore the broken pipe error.
# In case there is an exception we want to close the pager immediately
# and let the caller handle it.
# Otherwise the pager will keep running, and the user may not notice
# the error message, or worse yet it may leave the terminal in a broken state.
# We must close stdin and wait for the pager to exit before we continue
# Close implies flush, so it might throw a BrokenPipeError if the pager
# process exited already.
# Less doesn't respect ^C, but catches it for its own UI purposes (aborting
# search or other commands inside less).
# That means when the user hits ^C, the parent process (click) terminates,
# but less is still alive, paging the output and messing up the terminal.
# If the user wants to make the pager exit on ^C, they should set
# `LESS='-K'`. It's not our decision to make.
# TODO: This never terminates if the passed generator never terminates.
# Command not found
# We cannot know whether or not the type expected is str or bytes when None
# is passed, so str is returned as that was what was done before.
# If the filesystem resolution is 1 second, like Mac OS
# 10.12 Extended, or 2 seconds, like FAT32, and the editor
# closes very fast, require_save can fail. Set the modified
# time to be 2 seconds in the past to work around this.
# Depending on the resolution, the exact value might not be
# recorded, so get the new recorded value.
# Unix-like, Ctrl+D
# Windows, Ctrl+Z
# The function `getch` will return a bytes object corresponding to
# the pressed character. Since Windows 10 build 1803, it will also
# return \x00 when called a second time after pressing a regular key.
# `getwch` does not share this probably-bugged behavior. Moreover, it
# returns a Unicode object by default, which is what we want.
# Either of these functions will return \x00 or \xe0 to indicate
# a special key, and you need to call the same function again to get
# the "rest" of the code. The fun part is that \u00e0 is
# "latin small letter a with grave", so if you type that on a French
# keyboard, you _also_ get a \xe0.
# E.g., consider the Up arrow. This returns \xe0 and then \x48. The
# resulting Unicode string reads as "a with grave" + "capital H".
# This is indistinguishable from when the user actually types
# "a with grave" and then "capital H".
# When \xe0 is returned, we assume it's part of a special-key sequence
# and call `getwch` again, but that means that when the user types
# the \u00e0 character, `getchar` doesn't return until a second
# character is typed.
# The alternative is returning immediately, but that would mess up
# cross-platform handling of arrow keys and others that start with
# \xe0. Another option is using `getch`, but then we can't reliably
# read non-ASCII characters, because return values of `getch` are
# limited to the current 8-bit codepage.
# Anyway, Click doesn't claim to do this Right(tm), and using `getwch`
# is doing the right thing in more situations than with `getch`.
# \x00 and \xe0 are control characters that indicate special key,
# see above.
# This code uses parts of optparse written by Gregory P. Ward and
# maintained by the Python Software Foundation.
# Copyright 2001-2006 Gregory P. Ward
# Copyright 2002-2006 Python Software Foundation
# Sentinel value that indicates an option was passed as a flag without a
# value but is not a flag option. Option.consume_value uses this to
# prompt or use the flag_value.
# If we're reversed, we're pulling in the arguments in reverse,
# so we need to turn them around.
# spos is the position of the wildcard (star).  If it's not `None`,
# we fill it with the remainder.
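Ignoring the reversed case, the star-filling logic above can be sketched as a standalone helper (hypothetical; -1 marks the wildcard slot):

```python
def unpack_args(args, nargs_spec):
    """Distribute a flat argument list over an nargs spec. Each positive
    entry takes that many items; a single -1 entry (the star) soaks up
    whatever the fixed slots leave over."""
    if nargs_spec.count(-1) > 1:
        raise ValueError('only one wildcard (-1) is allowed')
    args, out, i = list(args), [], 0
    if -1 not in nargs_spec:
        for n in nargs_spec:
            out.append(tuple(args[i:i + n]))
            i += n
        return out
    spos = nargs_spec.index(-1)
    for n in nargs_spec[:spos]:              # fixed slots before the star
        out.append(tuple(args[i:i + n]))
        i += n
    n_tail = sum(nargs_spec[spos + 1:])      # items owed to slots after it
    out.append(tuple(args[i:len(args) - n_tail]))  # star gets the remainder
    rest = args[len(args) - n_tail:]
    j = 0
    for n in nargs_spec[spos + 1:]:          # fixed slots after the star
        out.append(tuple(rest[j:j + n]))
        j += n
    return out
```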
# Replace empty tuple with None so that a value from the
# environment may be tried.
#: The :class:`~click.Context` for this parser.  This might be
#: `None` for some advanced use cases.
#: This controls how the parser deals with interspersed arguments.
#: If this is set to `False`, the parser will stop on the first
#: non-option.  Click uses this to implement nested subcommands
#: safely.
#: This tells the parser how to deal with unknown options.  By
#: default it will error out (which is sensible), but there is a
#: second mode where it will ignore it and continue processing
#: after shifting all the unknown options into the resulting args.
# Double dashes always handled explicitly regardless of what
# prefixes are valid.
# At this point it's safe to modify rargs by injecting the
# explicit value, because no exception is raised in this
# branch.  This means that the inserted value will be fully
# consumed.
# If we got any unknown options we recombine the string of the
# remaining options and re-attach the prefix, then report that
# to the state as new larg.  This way there is basic combinatorics
# that can be achieved while still ignoring unknown arguments.
# Option allows omitting the value.
# The next arg looks like the start of an option, don't
# use it as the value if omitting the value is allowed.
# Long option handling happens in two parts.  The first part is
# supporting explicitly attached values.  In any case, we will try
# to long match the option first.
# At this point we will match the (assumed) long option through
# the long option matching code.  Note that this allows options
# like "-foo" to be matched as long options.
# At this point the long option matching failed, and we need
# to try with short options.  However there is a special rule
# which says that if we have a two character options prefix
# (applies to "--foo" for instance), we do not dispatch to the
# short option code and will instead raise the no option error.
# Is already an input stream.
# Force unbuffered reads, otherwise TextIOWrapper reads a
# large chunk which is echoed early.
#: The formatter class to create with :meth:`make_formatter`.
#: .. versionadded:: 8.0
#: the parent context or `None` if none exists.
#: the :class:`Command` for this context.
#: the descriptive information name
#: Map of parameter names to their parsed values. Parameters
#: with ``expose_value=False`` are not stored.
#: the leftover arguments.
#: protected arguments.  These are arguments that are prepended
#: to `args` when certain parsing scenarios are encountered but
#: must never be propagated to other arguments.  This is used
#: to implement nested parsing.
#: the collected prefixes of the command's options.
#: the user object stored.
#: A dictionary (-like object) with defaults for parameters.
#: This flag indicates if a subcommand is going to be executed. A
#: group callback can use this information to figure out if it's
#: being executed directly or because the execution flow passes
#: onwards to a subcommand. By default it's None, but it can be
#: the name of the subcommand to execute.
#: If chaining is enabled this will be set to ``'*'`` in case
#: any commands are executed.  It is however not possible to
#: figure out which ones.  If you require this knowledge you
#: should use a :func:`result_callback`.
#: The width of the terminal (None is autodetection).
#: The maximum width of formatted content (None implies a sensible
#: default which is 80 for most things).
#: Indicates if the context allows extra args or if it should
#: fail on parsing.
#: .. versionadded:: 3.0
#: Indicates if the context allows mixing of arguments and
#: options or not.
#: Instructs click to ignore options that a command does not
#: understand and will store it on the context for later
#: processing.  This is primarily useful for situations where you
#: want to call into external programs.  Generally this pattern is
#: strongly discouraged because it's not possible to losslessly
#: forward all arguments.
#: .. versionadded:: 4.0
#: The names for the help options.
#: An optional normalization function for tokens.  This is used for
#: options, choices, commands etc.
#: Indicates if resilient parsing is enabled.  In that case Click
#: will do its best to not cause any failures and default values
#: will be ignored. Useful for completion.
# If there is no envvar prefix yet, but the parent has one and
# the command on this level has a name, we can expand the envvar
# prefix automatically.
#: Controls if styling output is wanted or not.
#: Show option default values when formatting help text.
# In case the context is reused, create a new exit stack.
# Track all kwargs as params, so that forward() will pass
# them on in subsequent calls.
# Can only forward to other commands, not direct callbacks.
#: The context class to create with :meth:`make_context`.
#: the default for the :attr:`Context.allow_extra_args` flag.
#: the default for the :attr:`Context.allow_interspersed_args` flag.
#: the default for the :attr:`Context.ignore_unknown_options` flag.
#: the name the command thinks it has.  Upon registering a command
#: on a :class:`Group` the group will default the command name
#: with this information.  You should instead use the
#: :class:`Context`\'s :attr:`~Context.info_name` attribute.
#: an optional dictionary with defaults passed to the context.
#: the callback to execute when the command fires.  This might be
#: `None` in which case nothing happens.
#: the list of parameters for this command in the order they
#: should show up in the help page and execute.  Eager parameters
#: will automatically be handled before non eager ones.
# Cache the help option object in private _help_option attribute to
# avoid creating it multiple times. Not doing this will break the
# callback ordering by iter_params_for_processing(), which relies on
# object comparison.
# Avoid circular import.
# Apply help_option decorator and pop resulting option
# truncate the help text to the first form feed
# Process shell completion requests and exit early.
# it's not safe to `ctx.exit(rv)` here!
# note that `rv` may actually contain data like "1" which
# has obvious effects
# more subtle case: `rv=[None, None]` can come out of
# chained commands which all returned `None` -- so it's not
# even always obvious that `rv` indicates success/failure
# by its truthiness/falsiness
# in non-standalone mode, return the exit code
# note that this is only reached if `self.invoke` above raises
# an Exit explicitly -- thus bypassing the check there which
# would return its result
# the results of non-standalone execution may therefore be
# somewhat ambiguous: if there are codepaths which lead to
# `ctx.exit(1)` and to `return 1`, the caller won't be able to
# tell the difference between the two
#: If set, this is used by the group's :meth:`command` decorator
#: as the default :class:`Command` class. This is useful to make all
#: subcommands use a custom command class.
#: If set, this is used by the group's :meth:`group` decorator
#: as the default :class:`Group` class. This is useful to make all
#: subgroups use a custom group class.
#: If set to the special value :class:`type` (literally
#: ``group_class = type``), this group's class will be used as the
#: default class. This makes a custom group class continue to make
#: custom groups.
# Literal[type] isn't valid, so use Type[type]
#: The registered subcommands by their exported names.
# The result callback that is stored. This can be set or
# overridden with the :func:`result_callback` decorator.
# What is this? The tool lied about a command. Ignore it.
# allow for 3 times the default spacing
# No subcommand was invoked, so the result callback is
# invoked with the group return value for regular
# groups, or an empty list for chained groups.
# Fetch args back out
# If we're not in chain mode, we only allow the invocation of a
# single command but we also inform the current context about the
# name of the command to invoke.
# Make sure the context is entered so we do not clean up
# resources until the result processor has worked.
# In chain mode we create the contexts step by step, but after the
# base command has been invoked.  Because at that point we do not
# know the subcommands yet, the invoked subcommand attribute is
# set to ``*`` to inform the command that subcommands are executed
# but nothing else.
# Otherwise we make every single context and invoke them in a
# chain.  In that case the return value to the result processor
# is the list of all invoked subcommand's results.
# Get the command
# If we can't find the command but there is a normalization
# function available, we try with that one.
# If we don't find the command we want to show an error message
# to the user that it was not provided.  However, there is
# something else we should do: if the first argument looks like
# an option we want to kick off parsing again for arguments to
# resolve things like --help which now should go to the main place.
#: The list of registered groups.
# Default nargs to what the type tells us if we have that
# information available.
# Skip no default or callable default.
# Only check the first value against nargs.
# Can be None for multiple with empty default.
# This should only happen when passing in args manually,
# the parser should construct an iterable when parsing
# the command line.
# tuple[t.Any, ...]
# nargs > 1
# If prompt is enabled but not required, then the option can be
# used as a flag to indicate using prompt or flag_value.
# Implicitly a flag because flag_value was set.
# Not a flag, but when used as a flag it shows a prompt.
# Implicitly a flag because flag options were given.
# Not a flag, and prompt is not enabled, can be used as a
# flag if flag_value is set.
# Re-guess the type from the flag value instead of the default.
# Counting
# group long options first
# Temporarily enable resilient parsing to avoid type casting
# failing for the default. Might be possible to extend this to
# help formatting in general.
# For boolean flags that have distinct True/False opts,
# use the opt without prefix instead of the value.
# skip count with default range type
# If we're a non boolean flag our default is more complex because
# we need to look at all flags in the same group to figure out
# if we're the default one in which case we return the flag
# value as default.
# Calculate the default before prompting anything to be stable.
# If this is a prompt for a flag we need to handle this
# differently.
# If show_default is set to True/False, provide this to `prompt` as well. For
# non-bool values of `show_default`, we use `prompt`'s default behavior
# The parser will emit a sentinel value if the option can be
# given as a flag without a value. This is different from None
# to distinguish from the flag not being given at all.
# The value wasn't set, or used the param's default, prompt if
# prompting is enabled.
# Can force a width.  This is used by the test system
# The arguments will fit to the right of the prefix.
# The prefix is too long, put the arguments on the next line.
# Consider only the first paragraph.
# Collapse newlines, tabs, and spaces.
# The first paragraph started with a "no rewrap" marker, ignore it.
# too long, truncate
# sentence end, truncate without "..."
# not at sentence end, truncate with "..."
# no truncation needed
# Account for the length of the suffix.
# remove words until the length is short enough
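The truncation rules sketched in the comments above (first paragraph only, whitespace collapsed, suffix length accounted for, whole words dropped) can be illustrated with a small standalone helper. This is a hypothetical sketch, not Click's actual implementation; the function name, the 45-character limit, and the sentence-end check are all assumptions.

```python
def truncate_help(text: str, max_length: int = 45, suffix: str = "...") -> str:
    """Hypothetical sketch of the short-help truncation described above."""
    # Consider only the first paragraph.
    paragraph = text.split("\n\n", 1)[0]
    # Collapse newlines, tabs, and spaces into single spaces.
    collapsed = " ".join(paragraph.split())
    if len(collapsed) <= max_length:
        return collapsed  # no truncation needed
    # If the cut lands on a sentence end, truncate without the suffix.
    cut = collapsed[:max_length].rstrip()
    if cut.endswith((".", "!", "?")):
        return cut
    # Otherwise remove whole words until the text plus suffix fits.
    words = cut.split()
    while words and len(" ".join(words)) + len(suffix) > max_length:
        words.pop()
    return " ".join(words) + suffix
```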
# Open and close the file in case we're opening it for
# reading so that we can catch at least some errors in
# some cases early.
# Convert non bytes/text into the native string type.
# If there is a message and the value looks like bytes, we manually
# need to find the binary stream and write the message in there.
# This is done separately so that most stream types will work as you
# would expect. Eg: you can write to StringIO for other cases.
# ANSI style code support. For no message or bytes, nothing happens.
# When outputting to a file instead of a terminal, strip codes.
# The value of __package__ indicates how Python was called. It may
# not exist if a setuptools script is installed as an egg. It may be
# set incorrectly for entry points created with pip on Windows.
# It is set to "" inside a Shiv or PEX zipapp.
# Executed a file, like "python app.py".
# Executed a module, like "python -m example".
# Rewritten by Python from "-m script" to "/path/to/script.py".
# Need to look at main module to determine how it was executed.
# A submodule like "example.cli".
# The prompt functions to use.  The doc tools currently override these
# functions to customize how they work.
# Write the prompt separately so that we get nice
# coloring through colorama on Windows
# Echo a space to stdout to work around an issue where
# readline causes backspace to clear the whole line.
# getpass doesn't print a newline if the user aborts input with ^C.
# Allegedly this behavior is inherited from getpass(3).
# A doc bug has been filed at https://bugs.python.org/issue24711
# convert every element of i to a text type if necessary
# ANSI escape \033[2J clears the screen, \033[1;1H moves the cursor
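As the comment notes, clearing the terminal is just two ANSI escape sequences written to stdout. A minimal sketch (the function name is made up for illustration):

```python
import sys

def clear_screen() -> None:
    # \033[2J clears the visible screen; \033[1;1H moves the cursor to row 1, column 1.
    sys.stdout.write("\033[2J\033[1;1H")
    sys.stdout.flush()
```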
# If this is provided, getchar() calls into this instead.  This is used
# for unittesting purposes.
#: The exit code for this exception.
# The context will be removed by the time we print the message, so cache
# the color settings here to be used later on (in `show`)
# Translate param_type for known types.
# This module is based on the excellent work by Adam Bartoš who
# provided a lot of what went into the implementation here in
# the discussion to issue1602 in the Python bug tracker.
# There are some general differences in regards to how this works
# compared to the original patches as we do not need to patch
# the entire interpreter but just work in our little world of
# echo and prompt.
# Using `typing_extensions.Buffer` instead of `collections.abc`, which
# on Windows for some reason does not have `Sized` implemented.
# On PyPy we cannot get buffers so our ability to operate here is
# severely limited.
# wait for KeyboardInterrupt
#: the descriptive name of this type
#: if a list of this type is expected and the value is pulled from a
#: string environment variable, this is what splits it up.  `None`
#: means any whitespace.  For all parameters the general rule is that
#: whitespace splits them up.  The exception are paths and files which
#: are split by ``os.path.pathsep`` by default (":" on Unix and ";" on
#: Windows).
# The class name without the "ParamType" suffix.
# Custom subclasses might not remember to set a name.
# Use curly braces to indicate a required argument.
# Use square brackets to indicate an option or optional argument.
# Could use math.nextafter here, but clamping an
# open float range doesn't seem to be particularly useful. It's
# left up to the user to write a callback to do it if needed.
# If a context is provided, we automatically close the file
# at the end of the context execution (or flush out).  If a
# context does not exist, it's the caller's responsibility to
# properly close the file.  This for instance happens when the
# type is used with prompts.
# If the default is empty, ty will remain None and will
# return STRING.
# A tuple of tuples needs to detect the inner types.
# Can't call convert recursively because that would
# incorrectly unwind the tuple to a single type.
# ty is an instance (correct), so issubclass fails.
#: A dummy parameter type that just does nothing.  From a user's
#: perspective this appears to just be the same as `STRING` but
#: internally no string conversion takes place if the input was bytes.
#: This is usually useful when working with file paths as they can
#: appear in bytes and unicode.
#: For path related uses the :class:`Path` type is a better choice but
#: there are situations where an unprocessed type is useful which is why
#: it is provided.
#: A unicode string parameter type which is the implicit default.  This
#: can also be selected by using ``str`` as type.
#: An integer parameter.  This can also be selected by using ``int`` as
#: type.
#: A floating point value parameter.  This can also be selected by using
#: ``float`` as type.
#: A boolean parameter.  This is the default for boolean flags.  This can
#: also be selected by using ``bool`` as a type.
#: A UUID parameter.
# TODO: Find out what file influences non-login shells. The issue may simply be our Docker setup.
# https://github.com/ofek/userpath/issues/3#issuecomment-492491977
# NOTE: If it is decided in future that we want to make a distinction between
# login and non-login shells, be aware that macOS will still need this since
# Terminal.app runs a login shell by default for each new terminal window.
# https://github.com/fish-shell/fish-shell/issues/527#issuecomment-12436286
# We do this because the output may contain new lines.
# We want this to never throw an exception
# https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-sendmessagetimeoutw
# https://docs.microsoft.com/en-us/windows/win32/winmsg/wm-settingchange
# HWND_BROADCAST
# WM_SETTINGCHANGE
# must be NULL
# SMTO_ABORTIFHUNG
# milliseconds
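The constants annotated above fit together in a single `SendMessageTimeoutW` call. The sketch below is a hypothetical reconstruction using `ctypes`; the function name and the 5-second timeout are assumptions, and the call is a no-op off Windows.

```python
import ctypes
import sys

HWND_BROADCAST = 0xFFFF    # deliver to every top-level window
WM_SETTINGCHANGE = 0x001A  # "a system setting changed" notification
SMTO_ABORTIFHUNG = 0x0002  # skip windows that are not responding
TIMEOUT_MS = 5000          # per-window timeout, in milliseconds

def broadcast_environment_change() -> None:
    """Tell running apps the environment changed (no-op off Windows)."""
    if sys.platform != "win32":
        return
    ctypes.windll.user32.SendMessageTimeoutW(
        HWND_BROADCAST,
        WM_SETTINGCHANGE,
        0,              # wParam must be NULL for WM_SETTINGCHANGE
        "Environment",  # lParam names the changed setting area
        SMTO_ABORTIFHUNG,
        TIMEOUT_MS,
        None,           # out-pointer for the result; not needed here
    )
```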
# De-dup and retain order
# First, try to see what spawned this process
# Then, search for environment variables that are known to be set by certain shells
# NOTE: This likely does not work when not directly in the shell
# Finally, try global environment
# Attach a NullHandler to the top level logger by default
# https://docs.python.org/3.3/howto/logging.html#configuring-logging-for-a-library
# TODO: remove this check when dropping Python 3.7 support
# automatically lower confidence
# on small byte samples.
# https://github.com/jawah/charset_normalizer/issues/391
# Note: CharsetNormalizer does not return 'UTF-8-SIG' as the sig gets stripped in the detection/normalization process
# but chardet does return 'utf-8-sig' and it is a valid codec name.
# Defensive: ensure a clean handler on the exit path
# Lazy str loading may have missed something there
# We might want to check the sequence again with the whole content,
# but only if the initial MD tests pass.
# Preparing those fallbacks in case we got nothing.
# We shall skip the CD when it's about ASCII;
# most of the time it's not relevant to run "language-detection" on it.
# If md says nothing to worry about, then... stop immediately!
# Contains, for each eligible encoding, a list of SIG/BOM byte sequences.
# Up-to-date Unicode ucd/15.0.0
# Pre-computed code pages that are similar, using the function cp_similarity.
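A table like the one described, mapping eligible encodings to their SIG/BOM byte sequences, can be sketched with the standard library's `codecs` constants. The names below are illustrative, not charset_normalizer's actual data.

```python
import codecs

# Longer signatures must be checked before their prefixes:
# the UTF-32 BOMs begin with the UTF-16 BOM bytes.
BOM_TABLE = [
    ("utf_32_le", codecs.BOM_UTF32_LE),
    ("utf_32_be", codecs.BOM_UTF32_BE),
    ("utf_8_sig", codecs.BOM_UTF8),
    ("utf_16_le", codecs.BOM_UTF16_LE),
    ("utf_16_be", codecs.BOM_UTF16_BE),
]

def identify_sig_or_bom(sequence: bytes):
    """Return (encoding, signature) if `sequence` starts with a known BOM."""
    for encoding, sig in BOM_TABLE:
        if sequence.startswith(sig):
            return encoding, sig
    return None, b""
```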
# Sample character sets — replace with full lists if needed
# Combine all into a set
# Logging LEVEL below DEBUG
# Language label that contain the em dash "—"
# character are to be considered alternative seq to origin
# Jap-Kanji
# Jap-Katakana
# Jap-Hiragana
# type: ignore[import-not-found,import]
# Defensive: unicode database outdated?
# includes \n \t \r \v
# Why? It's the ASCII substitute character.
# Bug discovered in Python: Zero Width No-Break Space, located in
# Arabic Presentation Forms-B (Unicode 1.1), is not acknowledged as a space.
# multi-byte bad cutting detector and adjustment
# not the cleanest way to perform that fix but clever enough for now.
# Worse if it's the same char duplicated with a different accent.
# Words or buffers ending with an upper-case accented letter are so rare
# that we consider them all suspicious. Same weight as foreign_long suspicious.
# We can be pretty sure it's garbage when uncommon characters are widely
# used. Otherwise it could just be Traditional Chinese, for example.
# Latin characters can be accompanied with a combining diacritical mark
# eg. Vietnamese.
# Japanese Exception
# Chinese/Japanese use dedicated range for punctuation and/or separators.
# Below 1% difference --> Use Coherence
# When having a difficult decision, use the result that decoded as many multi-byte as possible.
# preserve RAM usage!
# Lazy Str Loading
# Unload RAM usage; dirty trick.
# Trying to infer the language based on the given encoding
# It's either English or we should not pronounce ourselves in certain cases.
# doing it there to avoid circular import
# list detected ranges
# filter and sort
# We should disable the submatch factoring when the input file is too heavy (conserve RAM usage)
# XXX "Warnings control" is now deprecated. Leaving in the API function to not
# break code that uses it.
# no direct instantiation, so allow immutable subclasses
#if node is None:
#self.represented_objects[alias_key] = None
#if alias_key is not None:
# Note that in some cases `repr(data)` represents a float number
# without the decimal parts.  For instance:
# Unfortunately, this is not a valid float representation according
# to the definition of the `!!float` tag.  We fix this by adding
# '.0' before the 'e' symbol.
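Concretely, `repr(1e17)` yields `'1e+17'`, which lacks a decimal part and so is not a valid `!!float`. The fix described above can be sketched as follows (the function name is assumed; this is not PyYAML's actual representer):

```python
def float_representer(value: float) -> str:
    """Sketch of the '.0'-before-'e' fix for YAML !!float output."""
    text = repr(value).lower()
    # repr() can omit the decimal part of the mantissa, e.g. repr(1e17) == '1e+17';
    # insert '.0' before the exponent marker to make it a valid !!float.
    if "." not in text and "e" in text:
        text = text.replace("e", ".0e", 1)
    return text
```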
#pairs = (len(data) > 0 and isinstance(data, list))
#if pairs:
#if not pairs:
#value = []
#for item_key, item_value in data:
#return SequenceNode(u'tag:yaml.org,2002:pairs', value)
# We use __reduce__ API to save the data. data.__reduce__ returns
# a tuple of length 2-5:
# For reconstructing, we call function(*args), then set its state,
# listitems, and dictitems if they are not None.
# A special case is when function.__name__ == '__newobj__'. In this
# case we create the object with args[0].__new__(*args).
# Another special case is when __reduce__ returns a string - we don't
# support it.
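The 2-to-5 element tuple protocol described above can be exercised with a small reconstruction sketch. `Point` and `reconstruct` are illustrative names; real unpickling also honors `__setstate__`, which this sketch skips.

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __reduce__(self):
        # The simplest 2-element form: (callable, args).
        return (Point, (self.x, self.y))

def reconstruct(reduced):
    """Rebuild an object from a 2-5 element __reduce__ tuple (sketch only)."""
    function, args, *rest = reduced
    state = rest[0] if len(rest) > 0 else None
    listitems = rest[1] if len(rest) > 1 else None
    dictitems = rest[2] if len(rest) > 2 else None
    if getattr(function, "__name__", "") == "__newobj__":
        # Special case: create the object with args[0].__new__(*args).
        obj = args[0].__new__(*args)
    else:
        obj = function(*args)
    if state is not None:
        obj.__dict__.update(state)  # real unpickling prefers __setstate__
    if listitems is not None:
        obj.extend(listitems)
    if dictitems is not None:
        for key, value in dictitems:
            obj[key] = value
    return obj
```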
# We produce a !!python/object, !!python/object/new or
# !!python/object/apply node.
# Provide uniform representation across different Python versions.
# Drop the STREAM-START event.
# Are there more documents available?
# Get the root node of the next document.
# Compose a document if the stream is not empty.
# Ensure that the stream contains no more documents.
# Drop the STREAM-END event.
# Drop the DOCUMENT-START event.
# Compose the root node.
# Drop the DOCUMENT-END event.
#key_event = self.peek_event()
#if item_key in node.value:
#node.value[item_key] = item_value
# Note: `add_path_resolver` is experimental.  The API could be changed.
# `new_path` is a pattern that is matched against the path from the
# root to the node that is being considered.  `node_path` elements are
# tuples `(node_check, index_check)`.  `node_check` is a node class:
# `ScalarNode`, `SequenceNode`, `MappingNode` or `None`.  `None`
# matches any kind of a node.  `index_check` could be `None`, a boolean
# value, a string value, or a number.  `None` and `False` match against
# any _value_ of sequence and mapping nodes.  `True` matches against
# any _key_ of a mapping node.  A string `index_check` matches against
# a mapping value that corresponds to a scalar key which content is
# equal to the `index_check` value.  An integer `index_check` matches
# against a sequence value with the index equal to `index_check`.
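The `(node_check, index_check)` semantics spelled out above can be condensed into a single predicate. This is a pure-Python sketch of the matching rules, not PyYAML's actual resolver code; node classes are represented by plain names here.

```python
def check_element(node_check, index_check, node_class, index, is_key):
    """One (node_check, index_check) step of the path match described above."""
    # `None` as node_check matches any kind of node.
    if node_check is not None and node_check != node_class:
        return False
    if index_check is None or index_check is False:
        return not is_key   # match any *value* of a sequence or mapping
    if index_check is True:
        return is_key       # match any *key* of a mapping
    # A string matches a scalar mapping key; an integer matches a sequence index.
    return index_check == index
```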
# The following resolver is only for documentation purposes. It cannot work
# because plain scalars cannot start with '!', '&', or '*'.
# This module contains abstractions for the input stream. You don't have to
# look further; there is no pretty code here.
# We define two classes here.
# It's just a record and its only use is producing nice error messages.
# Parser does not use it for any other purposes.
# Reader determines the encoding of `data` and converts it to unicode.
# Reader provides the following methods and attributes:
# Reader:
# - determines the data encoding and converts it to a unicode string,
# - checks if characters are in allowed range,
# - adds '\0' to the end.
# Reader accepts
# Yeah, it's ugly and slow.
# The following YAML grammar is LL(1) and is parsed by a recursive descent
# parser.
# stream            ::= STREAM-START implicit_document? explicit_document* STREAM-END
# implicit_document ::= block_node DOCUMENT-END*
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
# block_node_or_indentless_sequence ::=
# block_node        ::= ALIAS
# flow_node         ::= ALIAS
# properties        ::= TAG ANCHOR? | ANCHOR TAG?
# block_content     ::= block_collection | flow_collection | SCALAR
# flow_content      ::= flow_collection | SCALAR
# block_collection  ::= block_sequence | block_mapping
# flow_collection   ::= flow_sequence | flow_mapping
# block_sequence    ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
# indentless_sequence   ::= (BLOCK-ENTRY block_node?)+
# block_mapping     ::= BLOCK-MAPPING_START
# flow_sequence     ::= FLOW-SEQUENCE-START
# flow_sequence_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?
# flow_mapping      ::= FLOW-MAPPING-START
# flow_mapping_entry    ::= flow_node | KEY flow_node? (VALUE flow_node?)?
# FIRST sets:
# stream: { STREAM-START }
# explicit_document: { DIRECTIVE DOCUMENT-START }
# implicit_document: FIRST(block_node)
# block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START }
# flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START }
# block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
# flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
# block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START }
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
# block_sequence: { BLOCK-SEQUENCE-START }
# block_mapping: { BLOCK-MAPPING-START }
# block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START BLOCK-ENTRY }
# indentless_sequence: { ENTRY }
# flow_sequence: { FLOW-SEQUENCE-START }
# flow_mapping: { FLOW-MAPPING-START }
# flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
# flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
# Since writing a recursive descent parser is a straightforward task, we
# do not give many comments here.
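Since the grammar above is LL(1), a recursive descent parser dispatches on one token of lookahead. The toy parser below handles only a flow-sequence-like subset (`node ::= SCALAR | '[' (node (',' node)*)? ']'`) and is purely illustrative, not PyYAML's parser.

```python
def parse(tokens):
    """Recursive descent over: node ::= SCALAR | '[' (node (',' node)*)? ']'."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expect(token):
        nonlocal pos
        if peek() != token:
            raise SyntaxError(f"expected {token!r}, got {peek()!r}")
        pos += 1

    def parse_node():
        nonlocal pos
        # One token of lookahead decides which production to take (LL(1)).
        if peek() == "[":
            expect("[")
            items = []
            if peek() != "]":
                items.append(parse_node())
                while peek() == ",":
                    expect(",")
                    items.append(parse_node())
            expect("]")
            return items
        value = peek()
        if value is None:
            raise SyntaxError("unexpected end of input")
        pos += 1
        return value

    result = parse_node()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
    return result
```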
# Reset the state attributes (to clear self-references)
# Check the type of the next event.
# Get the next event.
# Get the next event and proceed further.
# stream    ::= STREAM-START implicit_document? explicit_document* STREAM-END
# Parse the stream start.
# Prepare the next state.
# Parse an implicit document.
# Parse any extra document end indicators.
# Parse an explicit document.
# Parse the end of the stream.
# Parse the document end.
# block_node_or_indentless_sequence ::= ALIAS
# block_node    ::= ALIAS
# flow_node     ::= ALIAS
# properties    ::= TAG ANCHOR? | ANCHOR TAG?
#if tag == '!':
# Empty scalars are allowed even if a tag or an anchor is
# specified.
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
# Note that while production rules for both flow_sequence_entry and
# flow_mapping_entry are equal, their interpretations are different.
# For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?`
# generates an inline mapping (set syntax).
# flow_mapping  ::= FLOW-MAPPING-START
#def __del__(self):
# Construct and return the next document.
# Ensure that the stream contains a single document and construct it.
# Trying to make a quiet NaN (like C99).
# Note: we do not check for duplicate keys, because it's too
# CPU-expensive.
# Note: the same code as `construct_yaml_omap`.
# 'extend' is blacklisted because it is used by
# construct_python_object_apply to add `listitems` to a newly generated
# Python instance.
# Format:
# or short format:
# The difference between !!python/object/apply and !!python/object/new
# is how an object is created, check make_python_instance for details.
# Constructor is same as UnsafeConstructor. Need to leave this in place in case
# people have extended it directly.
# UnsafeLoader is the same as Loader (which is and was always unsafe on
# untrusted input). Use of either Loader or UnsafeLoader should be rare, since
# FullLoader should be able to load almost all YAML safely. Loader is left intact
# to ensure backwards compatibility.
# Emitter expects events obeying the following grammar:
# stream ::= STREAM-START document* STREAM-END
# document ::= DOCUMENT-START node DOCUMENT-END
# node ::= SCALAR | sequence | mapping
# sequence ::= SEQUENCE-START node* SEQUENCE-END
# mapping ::= MAPPING-START (node node)* MAPPING-END
# The stream should have the methods `write` and possibly `flush`.
# Encoding can be overridden by STREAM-START.
# Emitter is a state machine with a stack of states to handle nested
# structures.
# Current event and the event queue.
# The current indentation level and the stack of previous indents.
# Flow level.
# Contexts.
# Characteristics of the last emitted character:
# Whether the document requires an explicit document indicator
# Formatting details.
# Tag prefixes.
# Prepared anchor and tag.
# Scalar analysis and style.
# In some cases, we wait for a few next events before emitting.
# States.
# Stream handlers.
# Document handlers.
# Node handlers.
# Flow sequence handlers.
# Flow mapping handlers.
# Block sequence handlers.
# Block mapping handlers.
# Checkers.
# Anchor, Tag, and Scalar processors.
#if self.analysis.multiline and split    \
# Analyzers.
# Empty scalar is a special case.
# Indicators and special characters.
# Important whitespace combinations.
# Check document indicators.
# First character or preceded by a whitespace.
# Last character or followed by a whitespace.
# The previous character is a space.
# The previous character is a break.
# Check for indicators.
# Leading indicators are special characters.
# Some indicators cannot appear within a scalar as well.
# Check for line breaks, special, and unicode characters.
# Detect important whitespace combinations.
# Prepare for the next character.
# Let's decide what styles are allowed.
# Leading and trailing whitespaces are bad for plain scalars.
# We do not permit trailing spaces for block scalars.
# Spaces at the beginning of a new line are only acceptable for block
# scalars.
# Spaces followed by breaks, as well as special character are only
# allowed for double quoted scalars.
# Although the plain scalar writer supports breaks, we never emit
# multiline plain scalars.
# Flow indicators are forbidden for flow plain scalars.
# Block indicators are forbidden for block plain scalars.
# Writers.
# Write BOM if needed.
# Scalar streams.
#if isinstance(value, list):
#else:
# Scanner produces tokens of the following types:
# STREAM-START
# STREAM-END
# DIRECTIVE(name, value)
# DOCUMENT-START
# DOCUMENT-END
# BLOCK-SEQUENCE-START
# BLOCK-MAPPING-START
# BLOCK-END
# FLOW-SEQUENCE-START
# FLOW-MAPPING-START
# FLOW-SEQUENCE-END
# FLOW-MAPPING-END
# BLOCK-ENTRY
# FLOW-ENTRY
# KEY
# VALUE
# ALIAS(value)
# ANCHOR(value)
# TAG(value)
# SCALAR(value, plain, style)
# Read comments in the Scanner code for more details.
# See below simple keys treatment.
# It is assumed that Scanner and Reader will have a common descendant.
# Reader does the dirty work of checking for BOM and converting the
# input data to Unicode. It also adds NUL to the end.
# Reader supports the following methods
# Have we reached the end of the stream?
# The number of unclosed '{' and '['. `flow_level == 0` means block context.
# List of processed tokens that are not yet emitted.
# Add the STREAM-START token.
# Number of tokens that were emitted through the `get_token` method.
# The current indentation level.
# Past indentation levels.
# Variables related to simple keys treatment.
# A simple key is a key that is not denoted by the '?' indicator.
# Example of simple keys:
# We emit the KEY token before all keys, so when we find a potential
# simple key, we try to locate the corresponding ':' indicator.
# Simple keys should be limited to a single line and 1024 characters.
# Can a simple key start at the current position? A simple key may
# start:
# - at the beginning of the line, not counting indentation spaces
# - after '{', '[', ',' (in the flow context),
# - after '?', ':', '-' (in the block context).
# In the block context, this flag also signifies if a block collection
# may start at the current position.
# Keep track of possible simple keys. This is a dictionary. The key
# is `flow_level`; there can be no more than one possible simple key
# for each level. The value is a SimpleKey record:
# A simple key may start with ALIAS, ANCHOR, TAG, SCALAR(flow),
# '[', or '{' tokens.
# Public methods.
# Check if the next token is one of the given types.
# Return the next token, but do not delete if from the queue.
# Return None if no more tokens.
# Return the next token.
# Private methods.
# The current token may be a potential simple key, so we
# need to look further.
# Eat whitespaces and comments until we reach the next token.
# Remove obsolete possible simple keys.
# Compare the current indentation and column. It may add some tokens
# and decrease the current indentation level.
# Peek the next character.
# Is it the end of stream?
# Is it a directive?
# Is it the document start?
# Is it the document end?
# TODO: support for BOM within a stream.
#if ch == '\uFEFF':
# Note: the order of the following checks is NOT significant.
# Is it the flow sequence start indicator?
# Is it the flow mapping start indicator?
# Is it the flow sequence end indicator?
# Is it the flow mapping end indicator?
# Is it the flow entry indicator?
# Is it the block entry indicator?
# Is it the key indicator?
# Is it the value indicator?
# Is it an alias?
# Is it an anchor?
# Is it a tag?
# Is it a literal scalar?
# Is it a folded scalar?
# Is it a single quoted scalar?
# Is it a double quoted scalar?
# It must be a plain scalar then.
# No? It's an error. Let's produce a nice error message.
# Simple keys treatment.
# Return the number of the nearest possible simple key. Actually we
# don't need to loop through the whole dictionary. We may replace it
# with the following code:
# Remove entries that are no longer possible simple keys. According to
# the YAML specification, simple keys
# - should be limited to a single line,
# - should be no longer than 1024 characters.
# Disabling this procedure will allow simple keys of any length and
# height (may cause problems if indentation is broken though).
# The next token may start a simple key. We check if it's possible
# and save its position. This function is called for
# Check if a simple key is required at the current position.
# The next token might be a simple key. Let's save its number and position.
# Remove the saved possible key position at the current flow level.
# Indentation functions.
## In flow context, tokens should respect indentation.
## Actually the condition should be `self.indent >= column` according to
## the spec. But this condition will prohibit intuitively correct
## constructions such as
## key : {
## }
#if self.flow_level and self.indent > column:
# In the flow context, indentation is ignored. We make the scanner less
# restrictive than the specification requires.
# In block context, we may need to issue the BLOCK-END tokens.
# Check if we need to increase indentation.
# Fetchers.
# We always add STREAM-START as the first token and STREAM-END as the
# last token.
# Read the token.
# Add STREAM-START.
# Set the current indentation to -1.
# Reset simple keys.
# Add STREAM-END.
# The stream is finished.
# Scan and add DIRECTIVE.
# Reset simple keys. Note that there could not be a block collection
# after '---'.
# Add DOCUMENT-START or DOCUMENT-END.
# '[' and '{' may start a simple key.
# Increase the flow level.
# Simple keys are allowed after '[' and '{'.
# Add FLOW-SEQUENCE-START or FLOW-MAPPING-START.
# Reset possible simple key on the current level.
# Decrease the flow level.
# No simple keys after ']' or '}'.
# Add FLOW-SEQUENCE-END or FLOW-MAPPING-END.
# Simple keys are allowed after ','.
# Add FLOW-ENTRY.
# Block context needs additional checks.
# Are we allowed to start a new entry?
# We may need to add BLOCK-SEQUENCE-START.
# It's an error for the block entry to occur in the flow context,
# but we let the parser detect this.
# Simple keys are allowed after '-'.
# Add BLOCK-ENTRY.
# Are we allowed to start a key (not necessarily a simple one)?
# We may need to add BLOCK-MAPPING-START.
# Simple keys are allowed after '?' in the block context.
# Add KEY.
# Do we determine a simple key?
# If this key starts a new block mapping, we need to add
# BLOCK-MAPPING-START.
# There cannot be two simple keys one after another.
# It must be a part of a complex key.
# (Do we really need them? They will be caught by the parser
# anyway.)
# We are allowed to start a complex value if and only if
# we can start a simple key.
# If this value starts a new block mapping, we need to add
# BLOCK-MAPPING-START.  It will be detected as an error later by
# the parser.
# Simple keys are allowed after ':' in the block context.
# Add VALUE.
# ALIAS could be a simple key.
# No simple keys after ALIAS.
# Scan and add ALIAS.
# ANCHOR could start a simple key.
# No simple keys after ANCHOR.
# Scan and add ANCHOR.
# TAG could start a simple key.
# No simple keys after TAG.
# Scan and add TAG.
# A simple key may follow a block scalar.
# Scan and add SCALAR.
# A flow scalar could be a simple key.
# No simple keys after flow scalars.
# A plain scalar could be a simple key.
# No simple keys after plain scalars. But note that `scan_plain` will
# change this flag if the scan is finished at the beginning of the line.
# Scan and add SCALAR. May change `allow_simple_key`.
# DIRECTIVE:        ^ '%' ...
# The '%' indicator is already checked.
# DOCUMENT-START:   ^ '---' (' '|'\n')
# DOCUMENT-END:     ^ '...' (' '|'\n')
# BLOCK-ENTRY:      '-' (' '|'\n')
# KEY(flow context):    '?'
# KEY(block context):   '?' (' '|'\n')
# VALUE(flow context):  ':'
# VALUE(block context): ':' (' '|'\n')
# A plain scalar may start with any non-space character except:
# It may also start with
# if it is followed by a non-space character.
# Note that we limit the last rule to the block context (except the
# '-' character) because we want the flow context to be space
# independent.
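The start-of-plain-scalar rules above can be sketched as a small predicate. This is a minimal sketch modeled on the scanner's check; the exact indicator set and the standalone function shape are assumptions for illustration:

```python
def check_plain(ch, next_ch, in_flow):
    """Return True if `ch` may start a plain scalar (sketch).

    Any non-space character except the indicators may start a plain
    scalar; '-', '?' and ':' may too, when followed by a non-space
    character, with the '?'/':' rule limited to the block context so
    that the flow context stays space independent.
    """
    spaces = '\0 \t\r\n\x85\u2028\u2029'
    indicators = '-?:,[]{}#&*!|>\'"%@`'
    if ch not in spaces + indicators:
        return True
    # '-', '?' and ':' may still start a plain scalar if followed by
    # a non-space character ('?'/':' only outside the flow context).
    return (next_ch not in spaces
            and (ch == '-' or (not in_flow and ch in '?:')))
```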
# Scanners.
# We ignore spaces, line breaks and comments.
# If we find a line break in the block context, we set the flag
# `allow_simple_key` on.
# The byte order mark is stripped if it's the first character in the
# stream. We do not yet support BOM inside the stream as the
# specification requires. Any such mark will be considered as a part
# of the document.
# TODO: We need to make tab handling rules more sane. A good rule is
# So the checking code is
# We also need to add the check for `allow_simple_keys == True` to
# `unwind_indent` before issuing BLOCK-END.
# Scanners for block, flow, and plain scalars need to be modified.
# See the specification for details.
# The specification does not restrict characters for anchors and
# aliases. This may lead to problems, for instance, the document:
# can be interpreted in two ways, as
# Therefore we restrict aliases to numbers and ASCII letters.
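A sketch of that restricted anchor/alias name scan. The character class mirrors the digits-and-ASCII-letters restriction described above (plus '-' and '_'); the helper name and error text are illustrative, not the scanner's real API:

```python
import re

# Anchor/alias names are restricted to digits, ASCII letters, '-' and
# '_', even though the YAML specification itself does not restrict them.
ANCHOR_NAME = re.compile(r'[0-9A-Za-z_-]+')

def scan_anchor_name(text, pos):
    """Scan an anchor/alias name starting at `pos` (just after '&' or '*')."""
    match = ANCHOR_NAME.match(text, pos)
    if not match:
        raise ValueError('expected an alphanumeric anchor name at column %d' % pos)
    return match.group(), match.end()
```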
# Scan the header.
# Determine the indentation level and go to the first non-empty line.
# Scan the inner part of the block scalar.
# Unfortunately, folding rules are ambiguous.
# This is the folding according to the specification:
# This is Clark Evans's interpretation (also in the spec
# examples):
#if folded and line_break == '\n':
# Chomp the tail.
# We are done.
# Note that we lose indentation rules for quoted scalars. Quoted
# scalars don't need to adhere to indentation because " and ' clearly
# mark their beginning and end. Therefore we are less restrictive
# than the specification requires. We only need to check that
# document separators are not included in scalars.
# Instead of checking indentation, we check for document
# separators.
# We add an additional restriction for the flow context:
# We also keep track of the `allow_simple_key` flag here.
# Indentation rules are loosened for the flow context.
# We allow zero indentation for scalars, but then we need to check for
# document separators at the beginning of the line.
#if indent == 0:
# The specification is really confusing about tabs in plain scalars.
# We just forbid them completely. Do not use tabs in YAML!
# For some strange reason, the specification does not allow '_' in
# tag handles. I have allowed it anyway.
# Note: we do not check if URI is well-formed.
# Transforms:
#class BOMToken(Token):
# Abstract classes.
# Implementations.
# Using the MutableMapping function directly fails due to the private
# new_vals was not inserted, as there was a previous one
# If already several items got inserted, we have a list
# vals should be a tuple then, i.e. only one item so far
# Need to convert the tuple to list for further extension
#: The expected size of the upload
#: Attribute that requests will check to determine the length of the
#: body. See bug #80 for more details
#: Encoding used by the input data
#: The iterator used to generate the upload data
#: Encoding used by the iterator
# The buffer we use to provide the correct number of bytes requested
# during a read
# Build our queue of requests
# Ensure the user doesn't try to pass their own job_queue
#: The original keyword arguments provided to the queue
#: The wrapped response
#: The captured and wrapped exception
#: Boundary value either passed in by the user or created
# Computed boundary
#: Encoding of the data being passed in
# Pre-encoded boundary
#: Fields provided by the user
#: Whether or not the encoder is finished
#: Pre-computed parts of the upload
# Pre-computed parts iterator
# The part we're currently working with
# Cached computation of the body's length
# Our buffer
# Pre-compute each part's headers
# Load boundary into buffer
# If _len isn't already calculated, calculate, return, and set it
# Length of --{boundary}
# boundary length + header length + body length + len('\r\n') * 2
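The length computation in the notes above can be sketched as follows. This is a hedged sketch: the real encoder works on pre-encoded bytes and caches the result, and the helper name is made up:

```python
def encoded_part_len(boundary, headers, body):
    """Sketch of one part's contribution to the multipart body length.

    Layout per part: --{boundary}\r\n{headers}{body}\r\n
    """
    boundary_len = len('--' + boundary)  # Length of --{boundary}
    # boundary length + header length + body length + len('\r\n') * 2
    return boundary_len + len(headers) + len(body) + len('\r\n') * 2
```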
#: Instance of the :class:`MultipartEncoder` being monitored
#: Optionally function to call after a read
#: Number of bytes already read from the :class:`MultipartEncoder`
#: instance
#: Avoid the same problem in bug #80
# e.g. BytesIO, cStringIO.StringIO
# We want to be at the beginning
# left to read
# Split into header section (if any) and the content
#: Original Content-Type header
#: Response body encoding
#: Parsed parts of the multipart response body
# Regular expressions stolen from werkzeug/http.py
# cd2c97bb0a076da2322f11adce0b2731f9193396 L62-L64
# ignore any directory paths in the filename
# fully qualified file path
# directory to download to
# fallback to downloading to current working directory
# We will be streaming the raw bytes from over the wire, so we need to
# ensure that writing to the fileobject will preserve those bytes. On
# Python3, if the user passes an io.StringIO, this will fail, so we need
# to check for BytesIO instead.
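The binary-stream check described above might look like this sketch; the helper name is hypothetical, and the essential point is only that an `io.StringIO` must be rejected before streaming raw bytes into it:

```python
import io

def check_binary_target(fileobject):
    """Raise if `fileobject` cannot receive raw bytes from the wire.

    On Python 3 an io.StringIO would reject the streamed bytes, so we
    require a binary stream such as io.BytesIO or a file opened in 'wb'.
    """
    if isinstance(fileobject, io.StringIO):
        raise TypeError('expected a binary stream, got io.StringIO')
    return fileobject
```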
# If we're not on requests 2.8.0+ this method does not exist
# if we present the user/passwd and still get rejected
# https://tools.ietf.org/html/rfc2617#section-3.2.1
# wrong user/passwd
# give up authenticating
# if we have nonce, then just use it, otherwise server will tell us
# Turn tuples into Basic Authentication objects
# If we're not on requests 2.8.0+ this method does not exist and
# is not relevant.
# Check that the attr exists because much older versions of requests
# set this attribute lazily. For example:
# https://github.com/kennethreitz/requests/blob/33735480f77891754304e7f13e3cdf83aaaa76aa/requests/auth.py#L59
# Digest auth would resend the request by itself. We can take a
# shortcut here.
# <prefix><METHOD> <request-path> HTTP/1.1
# <prefix>Host: <request-host> OR host header specified by user
# In the event that the body is a file-like object, let's not try
# to read everything into memory.
# Let's interact almost entirely with urllib3's response
# Let's convert the version int from httplib to bytes
# <prefix>HTTP/<version_str> <status_code> <reason>
# Don't bail out with an exception if data is None
# NOTE(Ian): Perhaps we should raise a warning
# NOTE(Ian): OSX does not have these constants defined, so we
# set them conditionally.
# On OSX, TCP_KEEPALIVE from netinet/tcp.h is not exported
# by python's socket module
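A sketch of that conditional keepalive setup. TCP_KEEPALIVE's value 0x10 comes from macOS's netinet/tcp.h as the comment above describes; the fallback assignment and helper name here are assumptions for illustration:

```python
import socket

# On OSX, TCP_KEEPALIVE (the idle-time option) is not exported by
# Python's socket module, so define the constant conditionally.
if not hasattr(socket, 'TCP_KEEPIDLE') and not hasattr(socket, 'TCP_KEEPALIVE'):
    socket.TCP_KEEPALIVE = 0x10  # value from netinet/tcp.h (assumed)

def enable_keepalive(sock, idle=60, interval=10, count=5):
    """Turn on TCP keepalive, using whichever option names this platform has."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    for name, value in (('TCP_KEEPIDLE', idle),      # Linux idle time
                        ('TCP_KEEPALIVE', idle),     # macOS idle time
                        ('TCP_KEEPINTVL', interval),
                        ('TCP_KEEPCNT', count)):
        if hasattr(socket, name):  # not every platform defines all of these
            sock.setsockopt(socket.IPPROTO_TCP, getattr(socket, name), value)
```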
# Earlier versions of requests either don't have this method or, worse,
# don't allow passing arbitrary keyword arguments. As a result, only
# conditionally define this method.
# HTTP headers are case-insensitive (RFC 7230)
# an assert_hostname from a previous request may have been left
# Copyright(C) 2018 Phil Sutter <phil@nwl.cc>
# the Free Software Foundation, version 2 of the License.
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
### API function definitions
# initialize libnftables context
# Ugly hack...
# cStringIO is slow on PyPy, StringIO is faster.  However: PyPy's own
# StringBuilder is fastest.
# Allow self-specified cert location.
# json/simplejson module import resolution
# No old version logs; only 2.6+ supported
# Setup logging in case debugging is enabled
# The database for severity
# The file to read log messages from
# To keep track of previously included profile fragments
# To store the globs entered by users so they can be provided again
# format: user_globs['/foo*'] = AARE('/foo*')
# let ask_addhat() remember answers for already-seen change_hat events
# Profiles originally in sd, replace by aa
# Preserve this between passes # was our
# Register the on_exit method with atexit
# Limit to checking files under 100k for the sake of speed
# Get the traceback to the message
# Add the traceback to the message
# Else tell user what happened
# Check if apparmor is actually mounted there
# XXX valid_path() only checks the syntax, but not if the directory exists!
# If the link is an absolute path
# The link is a relative path
# Remove leading /
# a force-complain symlink is more packaging-friendly, but breaks caching
# create_symlink('force-complain', filename)
# remove conflicting mode flags
# remove conflicting and complain mode flags
# print(filename)
# print(link)
# link = link + '/%s'%bname
# If the symlink directory does not exist, create it
# get the interpreter (without parameters)
# Add required hats to the profile if they match the localfile
# prof_unload(local_prof)
# no inactive profile found
# TODO: search based on the attachment, not (only?) based on the profile name
# needed for CMD_VIEW_PROFILE
# TODO: preserve other flags, if any
# ensure active_profiles has the /etc/apparmor.d/ filename initialized
# TODO: ideally serialize_profile() shouldn't always use active_profiles
# CMD_CREATE_PROFILE chosen
# if not bin_full:
# if not bin_full.startswith('/'):
# Return if executable path not found
# for named profiles
# Create a new profile if no existing profile
# change filename from extra_profile_dir to /etc/apparmor.d/
# use name as name and attachment
# To-Do
# XXX If more than one profile in a file then second one is being ignored XXX
# Do we return flags for both or
# TODO: count the number of matching lines (separated by profile and hat?) and return it
# TODO: change child profile flags even if program is specified
# named profiles can come without an attachment path specified ("profile foo {...}")
# Check cache of profiles
# Check the disk for profile
# print(prof_path)
# Add to cache of profile
# active_profiles[program] = prof_path
# no need to ask if the hat already exists
# create default hat if it doesn't exist yet
# As the unknown hat is denied, no entry should be made for it
# XXX temporary solution to avoid breaking the existing code
# ignore log entries for non-existing profiles
# ignore log entries for non-existing hats
# nx is not used in profiles but in log files.
# Log parsing methods will convert it to its profile form
# nx is internally cx/px/cix/pix + to_name
# If the profiled program executes itself, only the 'ix' option applies
# if exec_target == profile:
# Don't allow hats to cx (nested profiles not supported by aa-logprof yet)
# Add deny to options
# Define the default option
# Prompt portion starts
# to_name should not exist here, since transitioning is already handled
# ask user about the exec mode to use
# Disable the unsafe mode
# For inherit we need mr
# Skip remaining events if they ask to deny exec
# Update tracking info based on kind of change
# Check profile exists for px
# named exec
# not creating the target profile effectively results in ix mode
# ATM it's lexicographic; should be changed to allow better matches later
# make sure the original path is always the last option
# Describe the type of changes
# aa-mergeprof
# XXX limited to two levels to avoid an Exception on nested child profiles or nested null-*
# TODO: honor full profile name as soon as child profiles are listed in active_profiles
# only continue/ask if the parent profile exists
# Ignore log events for a non-existing profile or child profile. Such events can occur
# after deleting a profile or hat manually, or when processing a foreign log.
# (Checking for 'file' is a simplified way to check if it's a ProfileStorage.)
# don't ask about individual rules if the user doesn't want the additional subprofile/hat
# check for and ask about conflicting exec modes
# end profiling loop
# Load variables into sev_db? Not needed/used for capabilities and network rules.
# In complain mode: events default to allow
# In enforce mode: events default to deny
# XXX does this behaviour really make sense, except for "historical reasons"[tm]?
# reset raw rule after manually modifying rule_obj
# note that we check against the original rule_obj here, not edit_rule_obj (which might be based on a globbed path)
# Allow rules covered by denied rules shouldn't be deleted
# only a subset allow rules may actually be denied
# just keep the existing rule
# replace existing rule with merged one
# make sure aa-mergeprof doesn't ask to add conflicting rules later
# TODO: improve/fix logic to honor magic vs. quoted include paths
# never propose includes that are already in the profile (shouldn't happen because of is_known_rule())
# never propose a local/ include (they are meant to be included in exactly one profile)
# XXX type check should go away once we init all profiles correctly
# This line can only run if 'logfile' exists in the settings; otherwise
# it will raise a Python KeyError
# set up variables for this pass
# print(pid)
# print(active_profiles)
# Ensure the changed profiles are actual active profiles
# remember selection
# saving the selected profile removes it from the list, therefore reset selection
# user chose "deny" or "unconfined" for this target, therefore ignore log events
# ignore null-* profiles (probably nested children)
# otherwise we'd accidentally create a null-* hat in the profile which is worse
# XXX drop this once we support nested children
# TODO: support nested child profiles
# used to avoid to accidentally initialize aa[profile][hat] or calling is_known_rule() on events for a non-existing profile
# with execs in ix mode, we already have ProfileStorage initialized and should keep the content it already has
# we'll read all profiles from disk, so reset the storage first (autodep() might have created/stored
# a profile already, which would cause a 'Conflicting profile' error in attach_profile_data())
# The skip_profiles parameter should only be specified by tests.
# each autodep() run calls read_inactive_profiles, but that's a) superfluous and b) triggers a conflict because the inactive profiles are already loaded
# therefore don't do anything if the inactive profiles were already loaded
# TODO: handle hats/child profiles independent of main profiles
# use profile as name and attachment
# Make a deep copy of the data to avoid changes
# arising from shared mutables
# we're dealing with a multiline statement
# is line handled by a *Rule class?
# warn about endless loop, and don't call load_include() (again) for this file
# Starting line of a profile/hat
# in_contained_hat is needed to know if we are already in a profile or not. Simply checking if we are in a hat doesn't work,
# because something like "profile foo//bar" will set profile and hat at once, and later (wrongfully) expect another "}".
# The logic is simple and resembles a "poor man's stack" (with limited/hardcoded height).
# Save the initial comment
# If profile ends and we're not in one
# Conditional Boolean
# Conditional Variable defines
# Conditional Boolean defined
# Handle initial comments
# TODO: allow any number of spaces/tabs after '#'
# keep line as part of initial_comment (if we ever support writing abstractions, we should update serialize_profile())
# Bah, line continues on to the next line
# filter trailing comments
# lastline gets merged into line (and reset to None) when reading the next line.
# If it isn't empty, this means there's something unparsable at the end of the profile
# Below is not required I'd say
# End of file reached but we're stuck in a profile
# file rules need to be parsed after variable rules
# Embedded hats
# External hats
# Here all the profiles from the files should be added, right after the global/common stuff
# permission_600 = stat.S_IRUSR | stat.S_IWUSR    # Owner read and write
# os.chmod(newprof.name, permission_600)
# XXX get rid of get() checks after we have a proper function to initialize a profile
# a is a subset of w, so remove it
# make sure not to modify the original rule object (with exceptions, see end of this function)
# XXX also handle owner-only perms
# paths in existing rules that match the original path
# already read, do nothing
# If the include is a directory, include all subfiles
# if directory doesn't exist, silently skip loading it
# ------ Initialisations ------ #
# config already initialized (and possibly changed afterwards), so don't overwrite the config variables
# prevent various failures if logprof.conf doesn't exist
# not really correct, but works
# keep this to allow toggling 'audit' flags
# oldprofile = apparmor.serialize_profile(apparmor.split_to_merged(apparmor.original_aa), profile, {})
# , {'is_attachment': True})
# FIXME: should ensure profile is loaded before unloading
# TODO: transform this script to a package to use local imports so that if called with ./aa-notify, we use ./apparmor.*
# Save changes
# Handle the case where no command is provided
# , debug, error, msg
# The profile directory
# used for rule types that don't have severity ratings
# For variable expansions for the profile
# open(dbname, 'r')
# or only rstrip and lstrip?
# error(error_message)
# from None
# raise ValueError("unexpected capability rank input: %s"%resource)
# path contains variable
# file resource
# Check if we have a matching directory tree to descend into
# If severity still not found, match against globs
# Match against all globs at this directory level
# Match rest of the path
# Find max rank
# remove initial / from path
# break path into directory level chunks
# Check for an exact match in the db
# Find max value among the given modes
# Search regex tree for matching glob
# Return default rank if severity cannot be found
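The lookup order described above (exact match in the db, then glob matches taking the max rank, then a default fallback) can be sketched with fnmatch. The flat `{path_or_glob: rank}` db shape and the default value are assumptions; the real severity db walks a directory tree level by level:

```python
import fnmatch

DEFAULT_RANK = 10  # assumed default rank when severity cannot be found

def rank_path(db, path):
    """Sketch: look up a path's severity rank in a {path_or_glob: rank} map."""
    if path in db:
        return db[path]                # exact match in the db
    # Match against globs and find the max rank
    matches = [rank for glob, rank in db.items()
               if fnmatch.fnmatch(path, glob)]
    # Return the default rank if severity cannot be found
    return max(matches) if matches else DEFAULT_RANK
```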
# variables = regex_variable.findall(resource)
# Check for leading or trailing / that may need to be collapsed
# check whether a single / exists before the variable
# check whether the replacement has a leading /
# Set up UI logger for separate messages from UI module
# reads the json response on the command line and verifies that it matches
# the dialog type
# text mode
# Originally (\S) was used but with translations it would not work :(
# Get back to english from localised answer
# If user presses any other button ask again
# wait for response to delay deletion of filename (and ignore response content)
# 'CMD_OTHER': '(O)pts',
# Duplicate hotkey
# If they hit return choose default option
# XXX Matching using this regex is not perfect but should be enough for practical scenarios. Limitations are:
# using the specified variables when matching.
# done on first use in match() - that saves us some re.compile() calls
# self.variables = variables  # XXX
# thanks to http://bugs.python.org/issue10076, we need to implement this ourself
# check if a regex is a plain path (not containing variables, alternations or wildcards)
# some special characters are probably not covered by the plain_path regex (if in doubt, better error out on the safe side)
# regex doesn't contain variables or wildcards, therefore handle it as plain path
# better safe than sorry
# /foo/**/ and /foo/*/ => /**/
# /foo**/ and /foo**bar/ => /**/
# /**bar/ => /**/
# /foo/** and /foo/* => /**
# /**foo and /foo**bar => /**
# /foo** => /**
# match /**.ext and /*.ext
# /foo/**.ext and /foo/*.ext => /**.ext
# /foo**.ext and /foo**bar.ext => /**.ext
# /**foo.ext => /**.ext
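The arrow rules above can be sketched as a pair of regex substitutions; the function names are made up, and this only covers the file (non-directory) cases as an illustration of the globbing proposals:

```python
import re

def glob_file(path):
    """Collapse an already-globbed path one level further, e.g.
    /foo/** and /foo/* => /**, and /foo**, /foo**bar, /**foo => /**."""
    # /foo/** and /foo/* => /**
    collapsed = re.sub(r'/[^/]+/\*{1,2}$', '/**', path)
    if collapsed == path:
        # /foo**, /foo**bar and /**foo => /**
        collapsed = re.sub(r'/[^/]*\*\*[^/]*$', '/**', path)
    return collapsed

def glob_file_ext(path):
    """Keep the extension while globbing, e.g. /foo/*.ext => /**.ext."""
    root, ext = path.rsplit('.', 1)
    return glob_file(root) + '.' + ext
```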
# Utility classes
# print recursively in a nicely formatted way
# useful for debugging, too verbose for production code ;-)
# based on code "stolen" from Scott S-Allen / MIT License
# http://code.activestate.com/recipes/578094-recursively-print-nested-dictionaries/
# or 2 or 8 or...
# No relative paths
# We double quote elsewhere
# This avoids a crash when reading a logfile with special characters that
# are not utf8-encoded (for example a latin1 "ö"), and also avoids crashes
# at several other places we don't know yet ;-)
# Creates nested dictionaries to any depth; missing keys yield an empty dictionary
# WARNING: when reading non-existing sub-dicts, empty dicts will be added.
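The any-depth dictionary helper can be sketched with collections.defaultdict (AppArmor's version is called hasher(); the sketch below keeps its read-inserts-an-empty-dict behavior, which is exactly what the WARNING above is about):

```python
from collections import defaultdict

def hasher():
    """A dict whose missing keys auto-create nested dicts of any depth."""
    return defaultdict(hasher)

profiles = hasher()
profiles['bin']['ping']['mode'] = 'enforce'   # no KeyError at any depth
```

Note the side effect: merely reading `profiles['missing']` inserts an empty sub-dict.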
# escape '(' and ')'
# print(new_reg)
# Match at least one character if * or ** after /
# (?<!...) is the negative lookbehind operator
# XXX limit to two levels to avoid an Exception on nested child profiles or nested null-*
# if last item is None, drop it (can happen when called with [profile, hat] when hat is None)
# debugging disabled, don't need to setup logging
# 40
# 20
# Unable to open the default logfile, so create a temporary logfile and tell the user about it
# logging.shutdown([self.logger])
# I didn't find a way to get this working with isinstance() :-/
# We use tk without themes as a fallback which makes the GUI uglier but functional.
# Rule=None so we don't show redundant buttons in ShowMoreGUI.
# If same_file we're basically comparing the file against itself to check superfluous rules
# Process the profile of the program
# remove duplicate rules from the preamble
# Process every hat in the profile individually
# Clean up superfluous rules from includes in the other profile
# carefully avoid accidentally initializing self.other.aa[program][hat]
# Clean duplicate rules in other profile
# Profile parsing Regex
# line start, optionally: leading whitespace, <priority> = number, <audit> and <allow>/deny
# optional whitespace, optional <comment>, optional whitespace, end of the line
# optional whitespace, comma + RE_EOL
# string without spaces, or quoted string. %s is the match group name
# filename (starting with '/') without spaces, or quoted filename.
# quoted or unquoted filename. %s is the match group name
# quoted or unquoted filename or variable. %s is the match group name
# match anything that's not " or #, or matching quotes with anything except quotes inside
# match comma plus any trailing comment
# store in 'not_comment' group
# match trailing comment and store in 'comment' group
# just a path # noqa: E131
# 'profile', profile name, optionally attachment
# optionally exec mode
# optionally exec condition
# optionally '->' target profile
# RE_PATH_PERMS is as restrictive as possible, but might still cause mismatches when adding different rule types.
# Therefore parsing code should match against file rules only after trying to match all other rule types.
# optionally: <owner>
# bare 'file,' # noqa: E131
# optional 'file' keyword
# path and perms
# perms and path
# 'link' keyword
# optional 'subset' keyword
# ' -> '
# sections with optional quotes
# LP: #1738879 - parser doesn't handle unquoted paths everywhere
# LP: #1738880 - parser doesn't handle relative paths everywhere, and
# neither do we (see aa.py)
# LP: #1738877 - parser doesn't handle files with spaces in the name
# TODO: Dicts are ordered in Python 3.7+; use above dict's keys instead
# profile name -> filename
# attachment -> {'f': filename, 'p': profile, 're': AARE(attachment)}
# filename -> content - see init_file()
# profile_name -> ProfileStorage
# don't re-initialize / overwrite existing data
# if a profile doesn't have a name, the attachment is stored as profile name
# None means not to check includes -- TODO check if this makes sense for all preamble rule types
# plain path
# try AARE matches to cover profile names with alternations and wildcards
# XXX this returns the first match, not necessarily the best one
# nothing found
# keep track in which file a variable gets set
# collect variable additions (+=)
# variable additions from main file
# variable additions can happen in other files than the variable definition. First collect them from all files...
# ... and then check if the variables that get extended have an initial definition. If yes, merge them.
# , warn, msg,
# CFG = None
# REPO_CFG = None
# SHELL_FILES = ['easyprof.conf', 'notify.conf', 'parser.conf']
# The type of config file that'll be read and/or written
# Set the option transform (optionxform) to str - prevents forced conversion to lowercase
# Owner read and write
# Open a temporary file in the CONF_DIR to write the config file
# Copy permissions from an existing file to temporary file
# If there are no existing permissions, set the file permissions to 0600
# Replace the target config file with the temporary file
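The temp-file-then-replace sequence above might look like this sketch; the function name and signature are illustrative, not the real writer's API:

```python
import os
import shutil
import stat
import tempfile

def write_config(conf_dir, name, text):
    """Write a config file atomically, preserving existing permissions."""
    target = os.path.join(conf_dir, name)
    # Open a temporary file in the conf dir to write the config file
    fd, tmp = tempfile.mkstemp(dir=conf_dir)
    try:
        with os.fdopen(fd, 'w') as handle:
            handle.write(text)
        if os.path.exists(target):
            shutil.copymode(target, tmp)       # copy existing permissions
        else:
            os.chmod(tmp, stat.S_IRUSR | stat.S_IWUSR)  # default to 0600
        os.replace(tmp, target)                # atomic replace on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```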
# @TODO: Use standard ConfigParser when https://bugs.python.org/issue22253 is fixed
# If not a comment or empty line
# option="value" or option=value type
# option type
# All the options in the file
# If a previous file exists modify it keeping the comments
# If line is not empty or comment
# If option=value or option="value" type
# If option exists in the new config file
# If value is different
# Update value
# If option changed to option type from option=value type
# Remove from remaining options list
# If option type
# If it's no longer option type
# If it's empty or a comment, copy it as is
# If any new options are present
# option type entry
# option=value type entry
# All the sections in the file
# If it's a section
# If any options from preceding section remain write them
# Remove the written section from the list
# enable write for all entries in that section
# write the section
# disable writing until next valid section
# If write enabled
# If the line is empty or a comment
# split any inline comments
# If any options remain from the preceding section
# If any new sections are present
# self.data['info'] isn't used anywhere, but can be helpful in debugging.
# set in abstractions that should be suggested by aa-logprof
# comment in the profile/hat start line
# profile or hat?
# True for 'hat foo', False for '^foo'
# allow writing bool values
# allow writing str or None to some keys
# allow writing str values
# don't allow overwriting of other types
# child profile
# fallback when not set
# we are inside a profile, so we expect a child profile
# nesting limit reached - a child profile can't contain another child profile
# stand-alone profile
# Flags may be whitespace and/or comma separated
# sort and remove duplicates
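A sketch of that flag normalization (whitespace- and/or comma-separated input, then sorted with duplicates removed); the function name is made up:

```python
import re

def parse_profile_flags(flags_str):
    """Split 'complain, attach_disconnected complain' style flag strings."""
    # Flags may be whitespace and/or comma separated
    flags = [f for f in re.split(r'[\s,]+', flags_str.strip()) if f]
    return sorted(set(flags))  # sort and remove duplicates
```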
# for detecting free X display
# for templates
# for changing profile
# eventually get rid of this
# TODO: get rid of this
# Make sure the dynamic profile has the appropriate line for X
# aa-exec
# used by xauth and for server starts
# preserve our environment
# prepare the new environment
# Disable the global menu for now
# Disable the overlay scrollbar for now -- they don't track correctly
# Kill server with TERM
# Shoot the server dead
# Reset our environment
# TODO: this puts an artificial limit of 256 sandboxed applications
# Use dedicated .Xauthority file
# clean up the old one
# Run any setup code
# Start a Xephyr server
# TODO: break into config file? Which are needed?
# verify_these
# less secure?
# for games? seems not needed
# more secure?
# black background
# reset after last client exits
# terminate at server reset
# FIXME: detect if running
# Next, start the window manager
# update environment
# Annoyingly, xpra doesn't clean up after itself well if the application
# failed for some reason. Try to account for that.
# Debugging tip (can also use glxinfo):
# $ xdpyinfo > /tmp/native
# $ aa-sandbox -X -t sandbox-x /usr/bin/xdpyinfo > /tmp/nested
# $ diff -Naur /tmp/native /tmp/nested
# The default from the man page, but be explicit in what we enable
# The dummy driver allows us to use GLX, etc. See:
# http://xpra.org/Xdummy.html
# https://www.xpra.org/trac/ticket/163
# FIXME: is there a better way we can detect this?
# This will clean out any dead sessions
# '--no-mmap', # for security?
# We need to wait for the xpra socket to exist before attaching
# up to self.timeout seconds to start
# Next, attach to xpra
# Make sure that a client has attached
# up to self.timeout seconds to attach
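The two waits above (socket exists before attaching; client actually attached) share the same polling shape, sketched here; the helper name and the 0.1s interval are assumptions:

```python
import os
import time

def wait_for(predicate, timeout, interval=0.1):
    """Poll `predicate` for up to `timeout` seconds; True when it holds."""
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# e.g. wait up to self.timeout seconds for the xpra socket to appear:
# wait_for(lambda: os.path.exists(socket_path), timeout=self.timeout)
```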
# first, start X
# Debug: show old environment
# Only used with dynamic profiles
# TODO: move this out to a utilities library
# If we made it here, we are safe
# profile name specifies path
# profile name does not specify path
# alphanumeric and Debian version, plus '_'
# if policy starts with '/' and is one line, assume it is a path
# End utility functions
# If we specified the template and it is an absolute path, just set
# the templates directory to the parent of the template so we don't
# have to require --template-dir with absolute paths.
# If specified --policy-version and --policy-vendor, use
# Read in the configuration
# If have an abs path, just use it
# Find the template since we don't have an abs path
# If have abs path, just use it
# Find the policy group since we don't have an abs path
# Make sure we always quote
# Fill-in various comment fields
# Fill-in rules and variables with proper indenting
# should not ever reach this
# Generate an absolute path, converting any path delimiters to '.'
# when profile_name is not specified, the binary (path attachment)
# also functions as the profile name
# Add the template since it isn't part of 'params'
# Add the policy_version since it isn't part of 'params'
# don't re-add the pkey
# binary can be None when specifying --profile-name
# template always get set to 'default', can't conflict
# 'template',
# This option conflicts with any of the value arguments, e.g. name,
# author, template-var, etc.
# add policy args now
# What about specified multiple times?
# generally mirrors what is settable in gen_policy_params()
# The JSON structure is:
# but because binary can be the profile name, we need to handle
# 'profile_name' and 'binary' special. If a profile_name starts with
# '/', then it is considered the binary. Otherwise, set the
# profile_name and set the binary if it is in the JSON.
# handled above
# 2000-01-01
# 2050-01-01
# (nearly) empty wtmp file, no entries
# detect architecture based on utmp format differences
# first possible timestamp position
# noqa: E221
# little endian
# Increment for next entry
# skip padding
# Only parse USER lines
# Read each item and move pointer forward
# Store login timestamp for requested user
# When loop is done, last value should be the latest login timestamp
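The wtmp scan described above, sketched with struct for one fixed layout. The format string below matches glibc's 64-bit little-endian utmp record (384 bytes); real code would detect the architecture from format differences first, as the notes say:

```python
import struct

UTMP_FMT = '<h2xi32s4s32s256s2hi2i4i20x'  # little endian, 384 bytes/entry
USER_PROCESS = 7                           # only parse USER lines

def last_login(buf, user):
    """Return the latest login timestamp for `user` in a wtmp buffer."""
    size = struct.calcsize(UTMP_FMT)
    latest = None
    # Read each entry and move the offset forward
    for offset in range(0, len(buf) - size + 1, size):
        fields = struct.unpack_from(UTMP_FMT, buf, offset)
        ut_type, ut_user, tv_sec = fields[0], fields[4], fields[9]
        if ut_type == USER_PROCESS and ut_user.rstrip(b'\0') == user:
            latest = tv_sec  # last match seen is the latest login timestamp
    return latest
```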
# used to pre-filter log lines so that we hand over only relevant lines to LibAppArmor parsing
# structure inside {}: {'profilename': init_hashlog(aamode, profilename), 'profilename2': init_hashlog(...), ...}
# already initialized, don't overwrite existing data
# might be changed for null-* profiles based on exec decisions
# flat, no hasher needed
# flat, no hasher needed  (at least in logparser which doesn't support EXEC MODE and EXEC COND)
# If no next log entry fetch it
# To support transition to special userns profiles
# ULONG_MAX
# mount can have a source but not umount.
# Convert aamode values to their counterparts
# "translate" disconnected paths to errors, which means the event will be ignored.
# XXX Ideally we should propose to add the attach_disconnected flag to the profile
# Skip if AUDIT event was issued due to a change_hat in unconfined mode
# full, nested profile name
# Convert new null profiles to old single level null profile
# TODO: replace all the if conditions with a loop over 'ruletypes'
# operation types that can be network or file operations
# (used by op_type() which checks some event details to decide)
# Note: op_type() also uses some startswith() checks which are not listed here!
# file or network event?
# 'unix' events also use keywords like 'connect', but protocol is 0 and should therefore be filtered out
# Nothing external should reference this class; all external users
# should reference the class field PivotRootRule.ALL
# still here? -> then it is covered
# might raise AppArmorException if the new path doesn't start with / or a variable
# TODO: can the log contain the target profile?
# decides if the (G)lob and Glob w/ (E)xt options are displayed
# defines if the (N)ew option is displayed
# defines if the '(O)wner permissions on/off' option is displayed
# Set only in the parse() class method
# default priority
# still here? -> then the common part is covered, check rule-specific things now
# still here? -> then it is equal
# NOTE: edit_header, validate_edit, and store_edit are not implemented by every subclass.
# XXX TODO: remove in all *Ruleset classes (moved to *Rule)
# delete rules that are covered by include files
# de-duplicate rules inside the profile
# search again in reverse order - this will find the remaining duplicates
# restore original order for raw output
# include a space so that we don't need to add it everywhere when writing the rule
# should reference the class field ChangeProfileRule.ALL
# TODO: honor globbing and variables
# XXX implement all options mentioned above ;-)
# NUMBER ( 'K' | 'M' | 'G' )
# number + time unit (cpu in seconds+, rttime in us+)
# a number between -20 and 19.
# should reference the class field RlimitRule.ALL
# rlimit rules don't support priority, allow keyword, audit or deny
# pragma: no branch - "if rlimit in rlimit_all:" above avoids the need for an "else:" branch
# lower numbers mean a higher limit for nice
# still here? fine :-)
# pragma: no cover - would need breaking the regex
# manpage doesn't list sec
# manpage doesn't list 'd'
# rlimit can't be ALL, therefore using False
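The `NUMBER ( 'K' | 'M' | 'G' )` pattern noted above can be illustrated with a small parser; the regex, unit table, and `parse_size` name are hypothetical stand-ins, not the tool's actual code.

```python
import re

# hypothetical sketch of parsing a size-style rlimit value such as "2K"
RE_SIZE = re.compile(r"^(\d+)([KMG]?)$")
UNITS = {'': 1, 'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3}

def parse_size(value):
    m = RE_SIZE.match(value)
    if not m:
        raise ValueError('invalid rlimit size: %s' % value)
    # multiply the number by the unit factor ('' means a plain number)
    return int(m.group(1)) * UNITS[m.group(2)]
```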
# optional domain
# optional type or protocol
# should reference the class field NetworkRule.ALL
# No constraints on this item.
# XXX return 'network DOMAIN,' if 'network DOMAIN TYPE_OR_PROTOCOL' was given
# setup module translations
# one of the access_keyword or
# one or more access_keyword in (...)
# optional access keyword(s)
# optional label
# should reference the class field IOUringRule.ALL
# split by ',' or whitespace
# XXX only remove one part, not all
# boolean variables don't support priority, allow keyword, audit or deny
# aliases don't support priority, allow keyword, audit or deny
# the only way aliases can be covered are exact duplicates
# include doesn't support priority, allow keyword, audit or deny
# TODO: move re_match_include_parse() from regex.py to this class after converting all code to use IncludeRule
# "if exists" is allowed to differ
# only add files, but not subdirectories etc.
# add full_path even if it doesn't exist on disk. Might cause a 'file not found' error later.
# 2 chars - len relevant for split_perms()
# 3 chars - len relevant for split_perms()
# also defines the write order
# should reference the class field FileRule.ALL
# might be set by aa-logprof / aa.py propose_file_rules()
# offer '(O)wner permissions on/off' buttons only if the rule has the owner flag
# XXX subset
# check for invalid combinations (bare 'file,' vs. path rule)
# elif
# plain 'file,' rule
# add leading space
# subset is a restriction (also, if subset is included, this means this instance is a link rule, so other file permissions can't be covered)
# skip _is_covered_list() because it would interpret 'subset' as additional permissions, not as restriction
# perms can be empty if only exec_perms are specified, therefore disable the sanity check in _is_covered_list()...
# 'w' covers 'a', therefore use perms_with_a() to temporarily add 'a' if 'w' is present
# TODO: check  link / link subset vs. 'l'?
# ... and do our own sanity check
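The coverage logic sketched in the comments above ('w' covers 'a', with a custom sanity check instead of `_is_covered_list()`) might look roughly like this; `perms_with_a` and `is_covered` here are illustrative stand-ins for the real FileRule helpers.

```python
def perms_with_a(perms):
    # 'w' covers 'a', so temporarily add 'a' whenever 'w' is present
    return set(perms) | {'a'} if 'w' in perms else set(perms)

def is_covered(self_perms, other_perms):
    # the other rule is covered if all of its permissions are contained
    # in this rule's permissions (after expanding 'w' to also cover 'a')
    return set(other_perms) <= perms_with_a(self_perms)
```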
# TODO: handle fallback modes?
# other_rule has ANY_EXEC and self has an exec rule set -> covered, so avoid hitting the 'elif' branch
# check exec_mode and target only if other_rule contains exec_perms (except ANY_EXEC) or link permissions
# (for mrwk permissions, the target is ignored anyway)
# a different target means running with a different profile, therefore we have to be more strict than _is_covered_aare()
# XXX should we enforce an exact match for a) exec and/or b) link target?
# no check for file_keyword and leading_perms - they are not relevant for is_covered()
# file_keyword and leading_perms are only cosmetics, but still a difference
# TODO: special handling for link / link subset?
# type check avoids breakage caused by 'unknown'
# effectively 'unknown'
# only list owner perms that are not covered by other perms
# TODO: different output for link rules?
# file_keyword and leading_perms are not really relevant
# FileRule can be of two types, "file" or "exec"
# exec events in enforce mode don't have target=...
# Map c (create) and d (delete) to w (logging is more detailed than the profile language)
# old log styles used :: to indicate if permissions are meant for owner or other
# in current log style, owner permissions are indicated by a match of fsuid and ouid
# if dmask contains x and another mode, drop x here - we should see a separate exec event
# intentionally not allowing 'x' here
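The log-mode normalization described above (map c/d to w, and drop x when it appears together with other modes because a separate exec event is expected) could be sketched as follows; `map_log_mode` is a hypothetical name.

```python
def map_log_mode(dmask):
    # map c (create) and d (delete) to w - logging is more detailed
    # than the profile language
    perms = {'w' if p in 'cd' else p for p in dmask}
    # if the mask contains x plus another mode, drop x here - a separate
    # exec event should cover it
    if 'x' in perms and len(perms) > 1:
        perms.discard('x')
    return perms
```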
# FileRule can be either a 'normal' or an 'exec' file rule. These rules are encoded in hashlog differently.
# Exec Rule
# logparser sums up multiple log events, so both 'a' and 'w' can be present
# TODO: check for existing rules with this path, and merge them into one rule
# XXX do we need to honor the link target?
# no need to change anything
# variables don't support priority, allow keyword, audit or deny
# blindly set, add() prevents redefinition of variables
# TODO: AppArmor remount logs are displayed as mount (with the remount flag). Profiles generated with aa-genprof are therefore mount rules. It could be interesting to make them remount rules.
# keep in sync with parser/mount.cc mnt_opts_table!
# We aim to be a bit more restrictive than \S+ used in regex.py
# allow any order of fstype and options
# Note: also matches if multiple fstype= or options= are given to keep the regex simpler
# Source can either be
# - A path          : /foo
# - A globbed Path  : {,/usr}/lib{,32,64,x32}/modules/
# - A filesystem    : sysfs         (sudo mount -t tmpfs tmpfs /tmp/bar)
# - Any label       : mntlabel      (sudo mount -t tmpfs mntlabel /tmp/bar)
# Thus we cannot use RE_PROFILE_PATH_OR_VAR directly
# Destination can also be
# - A globbed Path  : **
# path or variable
# alternation, optionally quoted (note: no leading "/" needed/enforced)
# starting with "**"
# Note: the closing ')))' needs to be added in the final regex
# empty source
# any word including "-"
# check if a rule contains multiple 'options' or 'fstype'
# (not using option_pattern or fs_type_pattern here because a) it also matches an empty string, and b) using it twice would cause name conflicts)
# should reference the class field MountRule.ALL
# mount rules with multiple 'fstype' are not supported by the tools yet, and when writing them, only the last 'fstype' would survive.
# Therefore raise an exception when parsing such a rule to prevent breaking the rule.
# mount rules with multiple 'options' are not supported by the tools yet, and when writing them, only the last 'options' would survive.
# Umount cannot have a source
# Umount
# XXX joint_access_keyword and RE_ACCESS_KEYWORDS exactly as in SignalRule - move to function?
# one of the access_keyword
# string without spaces, or quoted string, optionally wrapped in (...). %s is the match group name
# plaintext version:      | * | "* "  | (    *    ) | (  " *   " )    |
# XXX this regex will allow repeated parameters, last one wins
# XXX (the parser will reject such rules)
# noqa: E131
# optional bus= system | session | AARE, (...) optional  # noqa: E131,E221
# optional path=AARE, (...) optional  # noqa: E221
# optional name=AARE, (...) optional  # noqa: E221
# optional interface=AARE, (...) optional
# optional member=AARE, (...) optional  # noqa: E221
# optional peer=(name=AARE and/or label=AARE), (...) required
# empty peer=()  # noqa: E131
# or  # noqa: E131
# only peer name (match group peername1)  # noqa: E131
# only peer label (match group peerlabel1)  # noqa: E131
# peer name + label (match name peername2/peerlabel2)  # noqa: E131,E221
# peer label + name (match name peername3/peerlabel3)  # noqa: E131,E221
# should reference the class field DbusRule.ALL
# not all combinations are allowed
# XXX move to function _split_access()?
# XXX that happens for "dbus ( )," rules - correct behaviour? (also: same for signal rules?)
# XXX split off _get_access_rule_part? (also needed in PtraceRule)
# Depending on the access type, not all parameters are allowed.
# Ignore them, even if some of them appear in the log.
# Also, the log doesn't provide a peer name, therefore always use ALL.
# / + string for posix, or digits for sys
# type can be sysv or posix
# optional type
# optional mqueue name
# should reference the class field MessageQueueRule.ALL
# The regex checks if it starts with / or if it's numeric
# This class doesn't have any localvars, therefore it doesn't need 'ALL'
# no localvars -> nothing more to do
# no localvars, so there can't be a difference
# allowing _everything_ is the worst thing you could do, therefore hardcode highest severity
# There's nothing to glob in all rules
# should reference the class field UnixRule.ALL
# This comes from the logparser, we convert it to dicts
# Protocol is not supported yet.
# not self.all_accesses:
# rtmin+0..rtmin+32, number may have leading zeros
# don't check against the signal keyword list in the regex to allow a more helpful error message
# one of the signal_keyword
# one or more signal_keyword in (...)
# optional signal set(s)
# optional peer
# used to strip quotes around signal keywords - don't use for peer!
# should reference the class field SignalRule.ALL
# filter out 'set='
# filter out quote pairs
# split at ',' or whitespace
# ignore null-* peers
# abi and include rules have a very similar syntax
# base AbiRule on IncludeRule to inherit most of its behaviour
# abi doesn't support 'if exists'
# optional access keyword
# should reference the class field UserNamespaceRule.ALL
# [7:] removes the 'userns_' prefix
# To support transition to special profiles
# should reference the class field CapabilityRule.ALL
# Because we support having multiple caps in one rule,
# initializer needs to accept a list of caps.
# make sure none of the cap_list arguments are blank, in
# case we decide to return one cap per output line
# XXX return multiple lines, one for each capability, instead?
# XXX 'wr' and 'write' accepted by the parser, but not documented in apparmor.d.pod
# XXX joint_access_keyword and RE_ACCESS_KEYWORDS exactly as in PtraceRule - move to function!
# should reference the class field PtraceRule.ALL
# These must happen after the milc.milc_options() call
# noqa, must come after milc.milc_options()
# Python setuptools entrypoint
# Warn if they use an outdated python version
# Assume the environment isn't workable by default
# Check if we're using the mingw64/msys2 environment
# Check if we're using a `uv`-based environment
# If none of the options above were true, then we're in an unsupported environment. Bomb out.
# Failed to find valid userspace, including what was in the environment if specified -- wipe any environment variable if present as it's clearly invalid.
# Check out and initialize the qmk_firmware environment
# All subcommands are run relative to the qmk_firmware root to make it easier to use the right copy of qmk_firmware.
# Call the entrypoint
# Move up a directory before the next iteration
# Check on qmk_firmware
# If it exists, ask the user what to do with it
# If it doesn't exist, offer to check it out
# Exists (but not an empty dir)
# Offer to set `user.qmk_home` for them.
# Run `qmk doctor` to check the rest of the environment out
# Now munge the current cli config
# dump out requested arg
# dump out everything
# VID  ,  PID
# Add index numbers
# Exit when nothing to do.
# Tune logging level.
# Be verbose with messages.
# This is a hack to get the executable name as %(name).
# Signal received exit code for bash.
# Some unused exit code for internal errors.
# Get relevant parameters from environment.
# Execute the requested compilation. Do crash if anything goes wrong.
# Call the wrapped method and ignore its return value.
# Always return the real compiler exit code.
# same as in ear.c
# create entries from the current run
# creates a sequence of entry generators from an exec,
# read entries from previous run
# filter out duplicate entries from both
# run the build command
# read the intercepted exec calls
# dump the compilation database
# write current execution info to the pid file
# For faster lookup in the set, the filename is stored reversed
# For faster lookup in the set, the directory is stored reversed
# On OS X the 'cc' and 'c++' compilers are wrappers for
# 'clang', therefore both calls would be logged. To avoid
# this, the hash does not contain the first word of the command.
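A rough illustration of the de-duplication hash hinted at above: paths are stored reversed so comparisons diverge early despite long common prefixes, and the first word of the command (the compiler name) is excluded so that wrapper and direct invocations of the same compilation hash identically. `entry_hash` is a hypothetical name, not the real implementation.

```python
def entry_hash(filename, directory, command):
    # reverse the path strings: paths often share long common prefixes,
    # so reversed strings make lookups and comparisons diverge early
    # drop the first word of the command (the compiler executable) so that
    # wrapper invocations ('cc') and direct ones ('clang') compare equal
    return '::'.join([filename[::-1], directory[::-1],
                      ' '.join(command.split()[1:])])
```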
# short validation logic
# make plugins always a list. (it might be None when not specified.)
# make exclude directory list unique and absolute.
# because code is shared between all the tools, some commonly used methods
# expect certain arguments to be present. so, instead of querying the args
# object about the presence of the flag, we fake it here to make those
# methods more readable. (it's an arguable choice, taken only for flags
# which have a good default value.)
# add the cdb parameter invisibly to make the report module work.
# Make ctu_dir an abspath as it is needed inside clang
# If the user wants CTU mode
# If CTU analyze_only, the input directory should exist
# Check CTU capability via checking clang-extdef-mapping
# getattr(obj, attr, default) does not really return the default, but None
# once it's fixed we can use it as expected
# will re-assign the report directory as new output
# Run against a build command. There are cases when an analyzer run
# is not required. But we need to set up everything for the
# wrappers, because 'configure' needs to capture the CC/CXX values
# for the Makefile.
# Run build command with intercept module.
# Run the analyzer against the captured commands.
# Run build command and analyzer with compiler wrappers.
# Cover report generation and bug counting.
# Set exit status as it was requested.
# Run the analyzer against a compilation db.
# Recover namedtuple from json when coming from analyze-cc or analyze-c++
# Remove all temporary files
# filename is either absolute or relative to directory. It needs to be
# turned absolute since 'args.excludes' are absolute paths.
# when verbose output is requested, execute sequentially
# display error message from the static analyzer
# If we do a CTU collect (1st phase) we remove all previous collection
# data first.
# If the user asked for a collect (1st) and analyze (2nd) phase, we do an
# all-in-one run where we deliberately remove collection data before and
# also after the run. If the user asks only for a single phase data is
# left so multiple analyze runs can use the same data gathered by a single
# collection run.
# CTU strings are coming from args.ctu_dir and extdef_map_cmd,
# so we can leave it empty
# Single runs (collect or analyze) are launched from here.
# don't run the analyzer when compilation fails, or when it's not requested.
# check whether it is a compilation
# collect the needed parameters from environment, crash when missing
# call static analyzer against the compilation
# 'scan-view' currently does not support sarif format.
# entry from compilation database
# clang executable name (and path)
# arguments from command line
# kill non debug macros
# where generated report files shall go
# it's 'plist', 'html', 'plist-html', 'plist-multi-file', 'sarif', or 'sarif-html'
# generate crash reports or not
# ctu control options
# Classify the error type: when Clang is terminated by a signal it's a 'Crash'.
# (python subprocess Popen.returncode is negative when the child is terminated
# by a signal.) Everything else is 'Other Error'.
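The classification rule above reduces to a one-liner, since `subprocess` reports a negative `returncode` when the child was killed by a signal; `classify_error` is an illustrative name.

```python
def classify_error(returncode):
    # Popen.returncode is negative when the child was terminated by a
    # signal, which is treated as a crash of the analyzer itself
    return 'Crash' if returncode < 0 else 'Other Error'
```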
# Create preprocessor output file name. (This is blindly following the
# Perl implementation.)
# Execute Clang again, but run the syntax check only.
# write general information about the crash
# write the captured output too
# Normalize path on windows as well
# Make relative path out of absolute
# In case another process already created it.
# lazy implementation: just append an undefine macro at the end
# language can be given as a parameter...
# ... or find out from source file extension
# filter out disabled architectures and -arch switches
# There should be only one arch given (or the same one multiple
# times). If multiple archs are given and they are not the same,
# they should not change the pre-processing step.
# But that's the only pass we have before running the analyzer.
# To get good results from the static analyzer, certain compiler options shall
# be omitted. The compiler flag filtering only affects the static analyzer run.
# Keys are the option names, values are the number of arguments to skip
# compile option will be overwritten
# static analyzer option will be overwritten
# will set up own output file
# flags below are inherited from the Perl implementation.
# the filtered compiler flags
# list of architecture flags
# compilation language, None, if not specified
# 'c' or 'c++'
# iterate on the compile options
# take arch flags into a separate basket
# take language
# parameters which look like source files are not flags
# ignore some flags
# we don't care about extra warnings, but we should suppress ones
# that we don't want to see.
# and consider everything else as compilation flag.
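The skip-map mechanism described above (option name mapped to the number of following arguments to drop) can be sketched like this; the `IGNORED` contents are examples only, not the real filter table.

```python
# example skip map: option name -> number of following arguments to drop
IGNORED = {'-o': 1, '-MF': 1, '-c': 0}

def filter_flags(args):
    result = []
    it = iter(args)
    for arg in it:
        if arg in IGNORED:
            # drop the option itself plus the configured number of arguments
            for _ in range(IGNORED[arg]):
                next(it, None)
            continue
        result.append(arg)
    return result
```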
# Ignored compiler options map for compilation database creation.
# The map is used in the `split_command` method. (Which does ignore and
# classify parameters.) Please note that these are not the only parameters
# which might be ignored.
# compiling only flag, ignored because the creator of compilation
# database will explicitly set it.
# preprocessor macros, ignored because they would cause duplicate entries in
# the output (the only difference would be these flags). this is an actual
# finding from users, who suffered longer execution times caused by the
# duplicates.
# linker options, ignored because the compilation database will contain
# compilation commands only. so, the compiler would ignore these flags
# anyway. the benefit of getting rid of them is to make the output more
# readable.
# Known C/C++ compiler executable name patterns
# the result of this method
# quit right now, if the program was not a C/C++ compiler
# quit when compilation pass is not involved
# some parameters could look like filename, take as compile option
# a parameter which looks like a source file is taken...
# and consider everything else as compile option.
# do extra check on number of source files
# regex for activated checker
# the relevant version info is in the first line
# The relevant information is in the last line of the output.
# Don't check whether finding the last line fails; it would throw an exception anyway.
# find checkers header
# find entries
# common prefix for source files to have shorter paths
# assemble the cover from multiple fragments
# copy additional files to the report
# copy the content of fragments
# type: (str, bool) -> Generator[Dict[str, Any], None, None]
# get the right parser for the job.
# get the input files, which are not empty.
# iterate through subobjects and update it.
# we only merge runs, so we only need to update the run index
# update matches from right to left to make increasing character length (9->10) smoother
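Updating matches from right to left, as noted above, keeps earlier match offsets valid when a replacement grows in length (9 -> 10). A sketch, with a hypothetical `runIndex` key:

```python
import re

def bump_run_indices(text, offset):
    # collect all matches first, then rewrite from right to left so that
    # positions of earlier matches stay valid when a replacement grows
    matches = list(re.finditer(r'"runIndex":\s*(\d+)', text))
    for m in reversed(matches):
        new = str(int(m.group(1)) + offset)
        text = text[:m.start(1)] + new + text[m.end(1):]
    return text
```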
# exposed for testing since the order of files returned by glob is not guaranteed to be sorted
# start with the first file
# extract the run and append it to the merged output
# compatibility with < clang-3.5
# do not read the file further
# search for the right lines
# this is a workaround for Windows reading '\r\n' as new lines.
# Path info
# Parse XML
# Get root element attributes
# dict of namespaces by ext_name
# enter the main namespace here
# Register some common types
# This goes out and parses the rest of the XML
# Recursively resolve all types
# Call all the output methods
# Keeps track of what's been imported so far.
# Keeps track of non-request/event/error datatypes
# Keeps track of request datatypes
# Keeps track of event datatypes
# add to its namespace object
# core event
# extension event
# Keeps track of error datatypes
# normalize the offset (just in case)
# compute the required start_alignment based on the size of the type
# do 8-byte primitives require 8-byte alignment in X11?
# alignment 1 with offset 0 is always fulfilled
# there is no external align -> fail
# the external align guarantees less alignment -> not guaranteed
# the external align cannot be divided by our align
# -> not guaranteed
# (this can only happen if there are alignments that are not
# a power of 2, which is highly discouraged. But better be
# safe and check for it)
# offsets do not match
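The checks enumerated in the comments above (trivial align-1/offset-0, missing external align, indivisible align, mismatching offsets) might be condensed into a small class; this is a sketch of the idea, not the xcbgen implementation.

```python
class Alignment:
    def __init__(self, align=1, offset=0):
        self.align = align
        self.offset = offset % align  # normalize the offset (just in case)

    def is_guaranteed_by(self, external):
        # alignment 1 with offset 0 is always fulfilled
        if self.align == 1 and self.offset == 0:
            return True
        # there is no external align -> fail
        if external is None:
            return False
        # the external align cannot be divided by our align -> not guaranteed
        if external.align % self.align != 0:
            return False
        # offsets must match modulo our align
        return external.offset % self.align == self.offset
```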
# returns the alignment that is guaranteed when
# either self or other can happen
# return the result
# output the OK-list
# output the fail-list
# determine whether an ok callstack is relevant for logging
# empty callstacks are always relevant
# check whether the ok_callstack is a subset or equal to a fail_callstack
# To avoid circular import error
# Screw isinstance().
# the biggest align value of an align-pad contained in this type
# We dump the _placeholder_byte if any fields are added.
# already a string
# Cardinal datatype globals.  See module __init__ method.
# First check if we're using a default value
# An explicit value or bit was specified.
# We need a length field.
# Ask our Expression object for its name, type, and whether it's on the wire.
# See if the length field is already in the structure.
# It isn't, so we need to add it to the structure ourself.
# Add ourself to the structure by calling our original method.
# resolve() could have changed the size (ComplexType starts with size 0)
# Find my length field again.  We need the actual Field object in the expr.
# This is needed because we might have added it ourself above.
# fixed number of elements
# variable number of elements
# check whether the number of elements is a multiple
# iterate until the combined alignment does not change anymore
# apply "multiple" amount of changes sequentially
# combine with the cumulatively combined alignment
# (to model the variable number of entries)
# does not change anymore by adding more potential elements
# -> finished
# pads don't require any alignment at their start
# fixed size pad
# align-pad
# the alignment pad is size 0 because the start_align
# is already sufficiently aligned -> return the start_align
# the alignment pad has nonzero size -> return the alignment
# that is guaranteed by it, independently of the start_align
# get required_start_alignment
# unknown -> mark for autocompute
# Resolve all of our field datatypes.
# construct the switch type name from the parent type and the field name
# Hit this on Reply
# Get the full type name for the field
# Add the field to ourself
# Recursively resolve the type (could be another structure, list)
# Compute the size of the maximally contain align-pad
# Figure out how big we are
# no required-start-align configured -> calculate it
# required-start-align configured -> check it
# calculate the minimally required start_align that causes no
# align errors
# none of the candidates applies
# this type has illegal internal aligns for all possible start_aligns
# our align pad implementation depends on known alignment
# at the start of our type
# default impls of polymorphic methods which assume sequential layout of fields
# (like Struct or CaseOrBitcaseType)
# find places where the implementation of the C-binding would
# create code that makes the compiler add implicit alignment.
# make these places explicit, so we have
# consistent behaviour for all bindings
# end of fixed-size part
# implicit align pad is required
# default impl assumes sequential layout of fields
# FIXME: switch cannot store lenfields, so it should just delegate the parents
# self.fields contains all possible fields collected from the Bitcase objects,
# whereas self.items contains the Bitcase objects themselves
# use self.parent to indicate the ancestor,
# as switch does not contain named fields itself
# add the field to ourself
# recursively resolve the type (could be another structure, list)
# size for switch can only be calculated at runtime
# note: switch is _always_ of variable size, but we indicate here whether
# it contains elements that are variable-sized themselves
# this is done for the CaseType or BitCaseType
# we assume that BitCases can appear in any combination,
# and that at most one Case can appear
# (assuming that Cases are mutually exclusive)
# get all Cases (we assume that at least one case is selected if there are cases)
# there are no case-fields -> check without case-fields
# aux function for unchecked_get_alignment_after
# assume that this field is active -> no combine_with to emulate optional
# we assume that this field is optional, therefore combine
# alignment after the field with the alignment before the field.
# combine with the align before the field because
# the field is optional
# ignore other fields as they are irrelevant for alignment
# a union does not have implicit aligns because all fields start
# at the start of the union
# check proper alignment for all members
# compute the after align from the start_align
# We dump the _placeholder_byte if any bitcases are added.
# Resolve the bitcase expression
# calculate alignment
# Reset pads count
# Add the automatic protocol fields
# get the namespace of the specified extension
# find and add the selected events
# could not find event -> error handling
# add event to EventStruct
# add event. called by resolve
# Recursively resolve the event (could be another structure, list)
# All errors are basically the same, but they still got different XML
# for historic reasons. This 'invents' the missing parts.
# List going into a request, which has no length field (inferred by server)
# Standard list with a fieldref
# Op field.  Need to recurse.
# Hopefully we don't have two separate length fields...
# Constant expression
# xcb_popcount returns 'int' - handle the type in the language-specific part
# sumof with a nested expression which is to be evaluated
# for each list-element in the context of that list-element.
# sumof then returns the sum of the results of these evaluations
# current list element inside iterating expressions such as sumof
# Notreached
# if the value of the expression is a guaranteed multiple of a number
# return this number, else return 1 (which is trivially guaranteed for integers)
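The "guaranteed multiple" idea above - return a number the expression value is provably a multiple of, falling back to 1 - can be sketched recursively; the `Expr` node shape is a simplified stand-in for the real expression nodes.

```python
class Expr:
    # minimal stand-in for real expression nodes
    def __init__(self, op=None, value=None, lhs=None, rhs=None):
        self.op, self.value, self.lhs, self.rhs = op, value, lhs, rhs

def guaranteed_multiple(expr):
    if expr.op is None and expr.value is not None:
        # a constant is trivially a multiple of itself
        return expr.value
    if expr.op == '*':
        # a product is a multiple of the product of its operands' guarantees
        return guaranteed_multiple(expr.lhs) * guaranteed_multiple(expr.rhs)
    # 1 is trivially guaranteed for any integer expression
    return 1
```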
# need to find the field with lenfield_name
# switch is the ancestor
# Creating a symlink can fail for a variety of reasons, indicating that the filesystem does not support it.
# E.g. on Linux with a VFAT partition mounted.
# symlink is not supported
# noqa: TRY400
# force flush of log messages before the trace is printed
# pragma: no cov
# noqa: FBT002
# remove handlers of libraries
# noqa: ARG003
# the file system must be able to encode
# note in newer CPython this is always utf-8 https://www.python.org/dev/peps/pep-0529/
# on Windows absolute does not imply resolve so use both
# mark this folder to be ignored by VCS, handle https://www.python.org/dev/peps/pep-0610/#registered-vcs
# Mercurial - does not support the .hgignore file inside a subdirectory directly, but only if included via the
# subinclude directive from root, at which point one might as well ignore the directory itself, see
# https://www.selenic.com/mercurial/hgignore.5.html for more details
# Bazaar - does not support ignore files in sub-directories, only at root level via .bzrignore
# Subversion - does not support ignore files, requires direct manipulation with the svn tool
# Re-raise FileNotFoundError from `run_cmd()`
# noqa: TRY002, TRY301
# built-in
# this is possible if the standard library cannot be accessed
# pragma: no cover  # noqa: N806
# https://bugs.python.org/issue22199
# landmark  # noqa: PLC0415
# site  # noqa: PLC0415
# try to print out, this will validate if other core modules are available (json in this case)
# noqa: B904  # pragma: no cover
# Priority of where the option is set to follow the order: CLI, env var, file, hardcoded.
# If both set at same level prefers copy over symlink.
# fallback to copy
# noqa: D205
# prefer venv options over ours, but keep our extra
# we cannot allow some install config as that would get packages installed outside of the virtual environment
# the prefix governs where to install the libraries
# do not allow global configs to hijack venv paths
# Import hook that patches some modules to ignore configuration values that break package installation in case
# of virtual environments.
# https://docs.python.org/3/library/importlib.html#setting-up-an-importer
# lock[0] is threading.Lock(), but initialized lazily to avoid importing threading very early at startup,
# because there are gevent-based applications that need to be first to import threading by themselves.
# See https://github.com/pypa/virtualenv/issues/1895 for details.
# noqa: ARG002
# noqa: PLR1702
# initialize lock[0] lazily
# there is a possibility that two threads T1 and T2 simultaneously run into find_spec,
# observe .lock as empty, and both proceed into this initialization. However, due to the GIL,
# the list.append() operation is atomic, and this way only one of the threads will "win" to put the lock
# - that every thread will use - into .lock[0].
# https://docs.python.org/3/faq/library.html#what-kinds-of-global-value-mutation-are-thread-safe
# https://www.python.org/dev/peps/pep-0451/#how-loading-will-work
# C-Extension loaders are r/o such as zipimporter with <3.7
# if we're created as a describer this might be missing
# first we must be able to describe it
# store python is not supported here
# Create new refs with corrected launcher paths
# Keep the original ref unchanged
# Before 3.13 the launcher was called python.exe, afterwards it is venvlauncher.exe
# https://github.com/python/cpython/issues/112984
# starting with CPython 3.7 Windows ships with a venvlauncher.exe that avoids the need for dll/pyd copies
# it also means the wrapper must be copied to avoid bugs such as https://bugs.python.org/issue42013
# May be missing on some Python hosts.
# See https://github.com/pypa/virtualenv/issues/2368
# symlink of the python executables does not work reliably, copy always instead
# - https://bugs.python.org/issue42013
# - venv
# for more info on pythonw.exe see https://stackoverflow.com/a/30313091
# change the install_name of the copied python executables
# Make sure we use the embedded interpreter inside the framework, even if sys.executable points to the
# stub executable in ${sys.prefix}/bin.
# See http://groups.google.com/group/python-virtualenv/browse_thread/thread/17cab2f85da75951
# add a symlink to the host python image
# noqa: N806
# noqa: SLF001
# Read Mach-O header (the magic number is assumed read by the caller)
# 64-bits header has one more field.
# The header is followed by n commands
# Read command header
# The first data field in LC_LOAD_DYLIB commands is the offset of the name, starting from the
# beginning of the  command.
# Read the NUL terminated string
# If the string is what is being replaced, overwrite it.
# Seek to the next command
# Read magic number
# Fat binaries contain nfat_arch Mach-O binaries
# Read arch header
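The binary layout walked through above starts with a magic number that distinguishes 32-bit Mach-O, 64-bit Mach-O (whose header has one extra field), and fat binaries wrapping several Mach-O slices. A sketch of reading just the magic; real code must also handle byte-swapped variants, and the constants below are the standard Mach-O values:

```python
import io
import struct

MH_MAGIC = 0xFEEDFACE     # 32-bit Mach-O
MH_MAGIC_64 = 0xFEEDFACF  # 64-bit Mach-O: the header has one more field
FAT_MAGIC = 0xCAFEBABE    # fat binary containing several Mach-O slices

def kind_of(fh):
    # read the magic number and classify the file
    (magic,) = struct.unpack('<I', fh.read(4))
    return {MH_MAGIC: 'mach-o 32',
            MH_MAGIC_64: 'mach-o 64',
            FAT_MAGIC: 'fat'}.get(magic, 'unknown')
```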
# GraalPy 24.0 and older had home without the bin directory
# GraalPy needs an additional entry in pyvenv.cfg on Windows
# https://bitbucket.org/pypy/pypy/issue/1922/future-proofing-virtualenv
# glob for libpypy3-c.so, libpypy3-c.dylib, libpypy3.9-c.so ...
# PyPy >= 3.8 supports a standard prefix installation, where older
# versions always used a portable/development style installation.
# If this is a standard prefix installation, skip the below:
# Also copy/symlink anything under prefix/lib, which, for "portable"
# PyPy builds, includes the tk,tcl runtime and a number of shared
# objects. In distro-specific builds or on conda this should be empty
# (on PyPy3.8+ it will, like on CPython, hold the stdlib).
# For PyPy3.8+ the stdlib lives in lib/pypy3.8
# We need to avoid creating a symlink to it since that
# will defeat the purpose of a virtualenv
# glob for libpypy*.dll and libffi*.dll
# note the converter already logs a warning when failures happen
# noqa: ARG002, FBT002
# Use `splitlines` rather than a custom check for whether there is
# more than one line. This ensures that the full `splitlines()`
# logic is supported here.
# if we have no more users of this lock, release the thread lock
# multiple processes might be trying to get a first lock... so we cannot check if this directory exists without
# a lock, but that lock might then become expensive, and it's not clear where that lock should live.
# Instead we just ignore it here if we fail to create the directory.
# release the acquire try from above
# paths are always UNIX separators, even on Windows, though __file__ still follows platform default
# input disabled
# FileNotFoundError in Python >= 3.3
# noqa: ARG001
# noqa: PLE0704
# py3+ kwonly
# noqa: D415
# noqa: BLE001, S110
# reading and writing the same file may cause a race between multiple processes
# create types
# here we need writable application data (e.g. the zipapp might need this for the discovery cache)
# do not configure logging if only help is requested, as no logging is required for this
# prefer the built-in venv if present, otherwise fall back to the first defined type
# prefer the builtin if present, otherwise fall back to the first defined type
# remove the file if it already exists - this prevents permission
# errors when the dest is not writable
# PowerShell assumes Windows-1252 encoding when reading files without a BOM
# use write_bytes to avoid platform specific line normalization (\n -> \r\n)
# read content as binary to avoid platform specific line normalization (\n -> \r\n)
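Both comments above describe the same trick: do file I/O in binary mode so text-mode newline translation (`\n` -> `\r\n` on Windows) cannot alter the payload. A minimal sketch with a temporary file (names are illustrative):

```python
import tempfile
from pathlib import Path

content = "line one\nline two\n"
path = Path(tempfile.mkdtemp()) / "activate.ps1"

# write_bytes bypasses text-mode newline translation entirely
path.write_bytes(content.encode("utf-8"))

# reading back as binary proves the payload is byte-identical on every platform
assert path.read_bytes() == b"line one\nline two\n"
```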
# noqa: ARG004
# by default, we just let it be unicode
# noqa: B027
# strip away the bin part from the __file__, plus the path separator
# prepend bin to PATH (this file is inside the bin directory)
# virtual env is right above bin directory
# add the virtual environments libraries to the host python import mechanism
# ensure the text has all newlines as \r\n - required by batch
# wheel version needs special handling
# on Python > 3.8, the default is None (as in not used)
# so we can differentiate between explicit and implicit none
# fallback to download in case the exact version is not available
# symlink support requires pip 19.3+
# create the pyc files, as the build image will be R/O
# the root pyc is shared, so we'll not symlink that - but still add the pyc files to the RECORD for close
# remove files that are within the image folder deeper than one level (as these will not be linked directly)
# protect the image by making it read only
# sync image
# generate console executables
# 1. first extract the wheel
# 2. now add additional files not present in the distribution
# 3. finally fix the records file
# https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation
# the path must already exist to get its short form
# inject a no-op root element, as workaround for bug in https://github.com/pypa/pip/issues/7226
# add top level packages at folder level
# add the dist-info folder itself
# collect entries in record that we did not register yet
# only add if not already added as a base dir
# actually remove stuff in a stable order
# overwrite
# ensure they are executable
# create the pyc files
# https://www.python.org/dev/peps/pep-0427/#file-name-convention
# The wheel filename is {distribution}-{version}(-{build tag})?-{python tag}-{abi tag}-{platform tag}.whl
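The naming convention above can be parsed with a small regular expression. An illustrative sketch (the real parser may differ); note PEP 427 requires the optional build tag to start with a digit:

```python
import re

# Pattern after PEP 427's file-name convention; the build tag is optional
WHEEL_RE = re.compile(
    r"^(?P<distribution>.+?)-(?P<version>[^-]+)"
    r"(?:-(?P<build>\d[^-]*))?"
    r"-(?P<python>[^-]+)-(?P<abi>[^-]+)-(?P<platform>[^-]+)\.whl$"
)

m = WHEEL_RE.match("pip-23.1.2-py3-none-any.whl")
assert m is not None
assert m.group("distribution") == "pip"
assert m.group("version") == "23.1.2"
assert m.group("python") == "py3"
```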
# if it does not specify a python-requires, the assumption is that it is compatible
# https://www.python.org/dev/peps/pep-0345/#version-specifiers
#: the version bundled with virtualenv
#: custom version handlers
# prevent version switch in the middle of a CI run
# use only latest patch version per minor, earlier assumed to be buggy
# we don't need a release date for sources other than "periodic"
# always write at the end for proper updates
# noqa: E721
# never completed
# on purpose not called to make it a background process
# set the returncode here -> no ResourceWarning on main process exit if the subprocess still runs
# noqa: C901, PLR0913
# mark the most recent one as source "manual"
# stop download if we reach the embed version
# update other_versions by removing version we just found
# the most accurate is to ask PyPI - e.g. https://pypi.org/pypi/pip/json,
# see https://warehouse.pypa.io/api-reference/json/ for more details
# fall back to non-verified HTTPS (the information we request is not sensitive, so a fallback is acceptable)
# noqa: S323, SLF001
# load extra search dir for the given for_py
# 2. check if we have upgraded embed
# 3. acquire from extra search dir
# if version does not match ignore
# not all wheels are compatible with all python versions, so we need to py version qualify it
# 1. acquire from bundle
# 2. download from the internet
# pip has no interface in python - must be a new sub-process
# if for some reason the output does not match, fall back to the latest version with that spec
# noqa: C901, PLR0912
# first digit major
# Windows Python executables are almost always unversioned
# Spec is an empty string
# Try matching `direct` first, so the `direct` group is filled when possible.
# noqa: FBT002, PLR0913
# note here we cannot resolve symlinks, as the symlink may trigger different prefix information if there's a
# pyvenv.cfg somewhere alongside on python3.5+
# check in the in-memory cache
# otherwise go through the app data cache
# regardless of whether it came from the file or the in-memory cache, fix the original executable location
# if exists and matches load
# if not loaded run and save
# noqa: S311
# Cookies let us split the serialized stdout output generated by the info-collecting script from any output
# generated by something else. The right way to deal with this is to create an anonymous pipe, pass its descriptor
# to the child, and write to it; but AFAIK such solutions are either not cross-platform or too big to implement and
# are not in the stdlib. So the easiest and shortest way I could find is just using cookies.
# We generate pseudorandom cookies because that is easy to implement and avoids breakage from outputting module
# source code, e.g. by debug output libraries. We reverse the cookies to avoid breakage resulting from variable
# values appearing in debug output.
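The cookie technique described above can be sketched roughly like this (a toy illustration, not the project's actual implementation):

```python
import random
import string
import subprocess
import sys

# bracket the payload between a pseudorandom cookie and its reverse,
# then recover it from whatever else the interpreter printed on stdout
cookie = "".join(random.choices(string.ascii_letters, k=32))
start, end = cookie, cookie[::-1]

script = f'print("noise"); print({start!r}); print("payload"); print({end!r})'
out = subprocess.check_output([sys.executable, "-c", script], text=True)

# everything between the two cookies is the serialized payload
payload = out.split(start + "\n", 1)[1].split("\n" + end, 1)[0]
print(payload)  # payload
```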
# prevent sys.prefix from leaking into the child process - see https://bugs.python.org/issue22490
# keep original executable as this may contain initialization code
# Rotate the list
# noqa: C901, PLR0912, PLR0915
# 0. if it's a path, exists, and is an absolute path, this is the only option we consider
# Windows Store Python does not work with os.path.exists, but does for os.lstat
# 1. try with first
# 1. if it's a path and exists
# 2. otherwise try with the current
# 3. otherwise fallback to platform default logic
# try to find on path, the path order matters (as the candidates are less easy to control by end user)
# otherwise try uv-managed python (~/.local/share/uv/python or platform equivalent)
# this is the over the board debug
# 4. then maybe it's something exact on PATH - if it was a direct lookup, the implementation no longer counts
# 5. or from the spec we can deduce if a name on path matches
# the implementation must match when we find "python[ver]"
# noqa: PYI024
# noqa: PLR0915
# unroll relative elements from path (e.g. ..)
# qualifies the python
# earlier versions return a tuple, later ones a struct; unify to our own named tuple
# Use the same implementation as found in stdlib platform.architecture
# to account for platforms where the maximum integer is not equal to the
# pointer size.
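The pointer-size check mentioned above corresponds to `struct.calcsize("P")` (the size of a C `void *`), which is what stdlib `platform.architecture` effectively relies on as its fallback; a minimal sketch:

```python
import struct

# sizeof(void *) in bytes; more reliable than sys.maxsize on platforms
# where the maximum integer does not track pointer width
pointer_size = struct.calcsize("P")
bits = f"{pointer_size * 8}bit"

print(bits)  # e.g. "64bit" on a 64-bit build
```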
# Used to determine some file names.
# See `CPython3Windows.python_zip()`.
# information about the prefix - determines python home
# prefix we think
# venv
# old virtualenv
# information about the exec prefix - dynamic stdlib modules
# the executable we were invoked via
# the executable as known by the interpreter
# the executable we are based on (if available)
# we cannot use distutils at all if "venv" exists - distutils doesn't know about it
# debian / ubuntu python 3.10 without `python3-distutils` will report
# mangled `local/bin` / etc. names for the default prefix
# intentionally select `posix_prefix` which is the unaltered posix-like paths
# a list of content to store from sysconfig
# Try to get TK library path directly first
# We found it directly
# Reset if invalid
# If direct query failed, try constructing the path
# Try different version formats
# Full version like "8.6.12"
# Major.minor like "8.6"
# Just major like "8"
# Validate it's actually a TK directory
# if we're not in a virtual environment, this is already a system python, so return the original executable
# note we must choose the original and not the pure executable as shim scripts might throw us off
# if this is NOT a virtual environment, can't determine easily, bail out
# some platforms may set this to help us
# use the saved system executable if present
# we know we're in a virtual environment, so it cannot be us
# We're not in a venv and base_executable exists; use it directly
# Try fallback for POSIX virtual environments
# search relative to the directory of sys._base_executable
# in this case we just can't tell easily without poking around FS and calling them, bail
# use sysconfig if sysconfig_scheme is set or distutils is unavailable
# set prefixes to empty => result is relative from cwd
# use distutils primarily because that's what pip does
# https://github.com/pypa/pip/blob/main/src/pip/_internal/locations.py#L95
# note here we don't import Distribution directly to allow setuptools to patch it
# disable warning for PEP-632
# if removed or not installed ignore
# conf files not parsed so they do not hijack paths
# disable macOS static paths for framework  # noqa: SLF001
# paths generated are relative to a prefix that contains the path separator; this makes them relative
# some broken packaging doesn't respect sysconfig; fall back to the distutils path
# the pattern includes the distribution name at the end too; remove that via the parent call
# this method is not used by itself, so here and called functions can import stuff locally
# noqa: C901, PLR0911
# if the path is our own executable path, we're done
# if the path is set and is not our original executable name, this does not match
# don't save calculated paths, as these are non-primitive types
# namedtuple to dictionary
# the dictionary unroll here is to protect against pypy bug of interpreter crashing
# restore this to a named tuple structure
# if we're linking back to ourselves accept ourselves with a WARNING
# we don't know explicitly here, do some guess work - our executable name should tell
# ignore if for some reason we can't query
# no exact match found; start relaxing our requirements to facilitate system package upgrades that
# could cause this (when using the copy strategy of the host python)
# we need to set up some priority of traits, which is as follows:
# implementation, major, minor, micro, architecture, tag, serial
# sort by priority in decreasing order
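The trait-priority scheme described above (implementation, major, minor, micro, architecture, tag, serial) can be expressed as a tuple key, since Python compares tuples left to right; the data shapes here are hypothetical, for illustration only:

```python
# hypothetical trait names, most significant first
TRAITS = ("implementation", "major", "minor", "micro", "architecture", "tag", "serial")

def match_key(spec: dict, candidate: dict) -> tuple:
    # one boolean per trait, ordered by priority; tuple comparison then
    # ranks a candidate matching a high-priority trait above one that doesn't
    return tuple(spec.get(t) == candidate.get(t) for t in TRAITS)

spec = {"implementation": "CPython", "major": 3, "minor": 11}
candidates = [
    {"implementation": "CPython", "major": 3, "minor": 10},
    {"implementation": "CPython", "major": 3, "minor": 11},
    {"implementation": "PyPy", "major": 3, "minor": 11},
]
best = max(candidates, key=lambda c: match_key(spec, c))
print(best)  # the CPython 3.11 candidate
```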
# following path pattern of the current
# or at root level
# python is always the final option, as in practice it is used as the exe name by multiple implementations
# dump a JSON representation of the current python
# Map of well-known organizations (as per PEP 514 Company Windows Registry key part) versus Python implementation
# see if PEP-514 entries are good
# start with higher python versions in an effort to use the latest version available
# and prefer PythonCore over conda pythons (as virtualenv is mostly used by non conda tools)
# Map well-known/most common organizations to a Python implementation, use the org name as a fallback for
# Pre-filtering based on Windows Registry metadata, for CPython only
# Final filtering/matching using interpreter metadata
# if failed to get version bail
# Orca
# Copyright 2005-2008 Sun Microsystems Inc.
# Free Software Foundation, Inc., Franklin Street, Fifth Floor,
# Boston MA  02110-1301 USA.
# establish the _bookmarks index
# update the currentbookmark
# Notify the observers
# get the hardware keys that have registered bookmarks
# no bookmarks have been entered
# only 1 bookmark or we are just starting out
# find current bookmark hw_code in our sorted list.
# Go to next one if possible
# Go to previous one if possible
# notify the observers
# create directory if it does not exist.  correct place??
# Copyright 2010 Joanmarie Diggs.
# pylint: disable=too-many-public-methods
# TODO - JD: We actually need to keep track of a root object and its descendant
# caret context, just like we do in the web script. And it all should move into
# the focus manager. In the meantime, just get things working for the active app.
# ARIA levels are 1-based.
# Candidates will be in the rows beneath the current row. Only check in the current column
# and stop checking as soon as the node level of a candidate is equal or less than our
# current level.
# TODO - JD: Move this into the AXUtilities.
# TODO - JD: The only caller is the web speech generator. Consider moving this there.
# TODO - JD: Move this into AXUtilities.
# TODO - JD: See if the web script logic can be included here and then it all moved
# into AXUtilities.
# TODO - JD: Replace callers of this function with the logic below.
# TODO - JD: This broad exception is swallowing a pyatspism, meaning the one caller
# (recovery code for web brokenness) is not recovering. Which suggests that code can
# TODO - JD: Determine what actually needs this support and why.
# TODO - JD: Candidate for the focus manager.
# TODO - JD: Move into AXUtilities.
# Don't bother if the root is a 'pre' or 'code' element. Those often have
# nothing but a TON of static text leaf nodes, which we want to ignore.
# Eliminate things suspected to be labels for widgets
# TODO - JD: This logic ultimately belongs in the focus manager.
# We cannot count on implementations clearing the selection for us when we set the caret
# offset. Also, we should clear the selected text first.
# https://bugs.documentfoundation.org/show_bug.cgi?id=167930
# TODO - JD: The web script's set_caret_position() also sets the caret context.
# Ensuring global structural navigation, caret navigation, browse mode, etc.
# work means we should do the same here. Ultimately, however, we need to clearly
# define and unite the whole offset + position + context + locus of focus +
# last cursor position.
# TODO - JD. Remove this function if the web override can be adjusted
# TODO - JD: This was originally in the LO script. See if it is still an issue when
# lots of cells are selected.
# TODO - JD: This doesn't belong here.
# TODO - JD: Move this into AXUtilities. Or into the where am i presenter.
# LO Writer implements the selection interface on paragraphs and possibly
# other things.
# If we're in an ongoing series of native navigation-by-word commands, just present the
# newly-traversed string.
# Otherwise, attempt some smarts so that the user winds up with the same presentation
# they would get were this an ongoing series of native navigation-by-word commands.
# If we moved left via native nav, this should be the start of a native-navigation
# word boundary, regardless of what ATK/AT-SPI2 tells us.
# The ATK/AT-SPI2 word typically ends in a space; if the ending is neither a space,
# nor an alphanumeric character, then suspect that character is a navigation boundary
# where we would have landed before via the native previous word command.
# If we moved right via native nav, this should be the end of a native-navigation
# This suggests we just moved to the end of the previous word.
# If the character to the left of our present position is neither a space, nor
# an alphanumeric character, then suspect that character is a navigation boundary
# where we would have landed before via the native next word command.
# We only want to present the newline character when we cross a boundary moving from one
# word to another. If we're in the same word, strip it out.
# The selection interface gives us access to what is selected, which might
# not actually be a direct child.
# Note: This guesswork to figure out what actually changed with respect
# to text selection will get eliminated once the new text-selection API
# is added to ATK and implemented by the toolkits. (BGO 638378)
# Even though we present a message, treat it as unhandled so the new location is
# still presented.
# A simultaneous unselection and selection centered at one offset.
# There's a possibility that we have a link spanning multiple lines. If so,
# we want to present the continuation that just became selected.
# we want to present the continuation that just became unselected.
# Copyright 2004-2009 Sun Microsystems Inc.
# The speech server to use for all speech operations.
# Now, get the speech server we care about.
# HACK: Orca goes to incredible lengths to avoid a broken configuration, so this
# Try fallback modules
# Copyright 2014 Igalia, S.L.
# Author: Joanmarie Diggs <jdiggs@igalia.com>
# Note that the following are to help us identify what is likely a math symbol
# (as opposed to one serving the function of an image in "This way up.")
# Unicode has a huge number of individual symbols to include styles, such as
# bold, italic, double-struck, etc. These are so far not supported by speech
# synthesizers. So we'll maintain a dictionary of equivalent symbols which
# speech synthesizers should know along with lists of the various styles.
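The equivalent-symbol dictionary described above might look roughly like this (a tiny illustrative excerpt; the characters shown are the ones the surrounding comments mention, and the helper name is hypothetical):

```python
# styled math alphanumerics map to a plain character the synthesizer knows
_STYLED_TO_PLAIN = {
    "\U0001D400": "A",  # MATHEMATICAL BOLD CAPITAL A
    "\U0001D468": "A",  # MATHEMATICAL BOLD ITALIC CAPITAL A
    "\U0001D4D0": "A",  # MATHEMATICAL BOLD SCRIPT CAPITAL A
}

def plain_equivalent(char: str) -> str:
    # unknown characters pass through unchanged
    return _STYLED_TO_PLAIN.get(char, char)

print(plain_equivalent("\U0001D400"))  # A
print(plain_equivalent("x"))           # x
```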
# Translators: Unicode has a large set of characters consisting of a common
# alphanumeric symbol and a style. For instance, character 1D400 is a bold A,
# 1D468 is a bold italic A, 1D4D0 is a bold script A, etc. These styles
# can have specific meanings in mathematics and thus should be spoken along
# with the alphanumeric character. However, given the vast quantity of these
# characters, string substitution is being used with the substituted string
# being a single alphanumeric character. The full set of symbols can be found
# at http://www.unicode.org/charts/PDF/U1D400.pdf.
# Translators: this is the spoken representation for the character '←' (U+2190)
# Translators: this is the spoken representation for the character '↑' (U+2191)
# Translators: this is the spoken representation for the character '→' (U+2192)
# Translators: this is the spoken representation for the character '↓' (U+2193)
# Translators: this is the spoken representation for the character '↔' (U+2194)
# Translators: this is the spoken representation for the character '↕' (U+2195)
# Translators: this is the spoken representation for the character '↖' (U+2196)
# Translators: this is the spoken representation for the character '↗' (U+2197)
# Translators: this is the spoken representation for the character '↘' (U+2198)
# Translators: this is the spoken representation for the character '↤' (U+21a4)
# Translators: this is the spoken representation for the character '↥' (U+21a5)
# Translators: this is the spoken representation for the character '↦' (U+21a6)
# Translators: this is the spoken representation for the character '↧' (U+21a7)
# Translators: this is the spoken representation for the character '⇐' (U+21d0)
# Translators: this is the spoken representation for the character '⇑' (U+21d1)
# Translators: this is the spoken representation for the character '⇒' (U+21d2)
# Translators: this is the spoken representation for the character '⇓' (U+21d3)
# Translators: this is the spoken representation for the character '⇔' (U+21d4)
# Translators: this is the spoken representation for the character '⇕' (U+21d5)
# Translators: this is the spoken representation for the character '⇖' (U+21d6)
# Translators: this is the spoken representation for the character '⇗' (U+21d7)
# Translators: this is the spoken representation for the character '⇘' (U+21d8)
# Translators: this is the spoken representation for the character '⇙' (U+21d9)
# Translators: this is the spoken representation for the character '➔' (U+2794)
# Translators: this is the spoken representation for the character '➢' (U+27a2)
# Translators: this is the spoken word for the character '-' (U+002d) when used
# as a MathML operator.
# Translators: this is the spoken word for the character '<' (U+003c) when used
# Translators: this is the spoken word for the character '>' (U+003e) when used
# Translators: this is the spoken word for the character '^' (U+005e) when used
# Translators: this is the spoken word for the character 'ˇ' (U+02c7) when used
# Translators: this is the spoken word for the character '˘' (U+02d8) when used
# Translators: this is the spoken word for the character '˙' (U+02d9) when used
# Translators: this is the spoken word for the character '‖' (U+2016) when used
# Translators: this is the spoken representation for the character '…' (U+2026)
# Translators: this is the spoken representation for the character '∀' (U+2200)
# Translators: this is the spoken representation for the character '∁' (U+2201)
# Translators: this is the spoken representation for the character '∂' (U+2202)
# Translators: this is the spoken representation for the character '∃' (U+2203)
# Translators: this is the spoken representation for the character '∄' (U+2204)
# Translators: this is the spoken representation for the character '∅' (U+2205)
# Translators: this is the spoken representation for the character '∆' (U+2206)
# Translators: this is the spoken representation for the character '∇' (U+2207)
# Translators: this is the spoken representation for the character '∈' (U+2208)
# Translators: this is the spoken representation for the character '∉' (U+2209)
# Translators: this is the spoken representation for the character '∊' (U+220a)
# Translators: this is the spoken representation for the character '∋' (U+220b)
# Translators: this is the spoken representation for the character '∌' (U+220c)
# Translators: this is the spoken representation for the character '∍' (U+220d)
# Translators: this is the spoken representation for the character '∎' (U+220e)
# Translators: this is the spoken representation for the character '∏' (U+220f)
# Translators: this is the spoken representation for the character '∐' (U+2210)
# Translators: this is the spoken representation for the character '∑' (U+2211)
# Translators: this is the spoken representation for the character '−' (U+2212)
# Translators: this is the spoken representation for the character '∓' (U+2213)
# Translators: this is the spoken representation for the character '∔' (U+2214)
# Translators: this is the spoken representation for the character '∕' (U+2215)
# Translators: this is the spoken representation for the character '∖' (U+2216)
# Translators: this is the spoken representation for the character '∗' (U+2217)
# Translators: this is the spoken representation for the character '∘' (U+2218)
# Translators: this is the spoken representation for the character '∙' (U+2219)
# Translators: this is the spoken representation for the character '√' (U+221a)
# Translators: this is the spoken representation for the character '∛' (U+221b)
# Translators: this is the spoken representation for the character '∜' (U+221c)
# Translators: this is the spoken representation for the character '∝' (U+221d)
# Translators: this is the spoken representation for the character '∞' (U+221e)
# Translators: this is the spoken representation for the character '∟' (U+221f)
# Translators: this is the spoken representation for the character '∠' (U+2220)
# Translators: this is the spoken representation for the character '∡' (U+2221)
# Translators: this is the spoken representation for the character '∢' (U+2222)
# Translators: this is the spoken representation for the character '∣' (U+2223)
# Translators: this is the spoken representation for the character '∤' (U+2224)
# Translators: this is the spoken representation for the character '∥' (U+2225)
# Translators: this is the spoken representation for the character '∦' (U+2226)
# Translators: this is the spoken representation for the character '∧' (U+2227)
# Translators: this is the spoken representation for the character '∨' (U+2228)
# Translators: this is the spoken representation for the character '∩' (U+2229)
# Translators: this is the spoken representation for the character '∪' (U+222a)
# Translators: this is the spoken representation for the character '∫' (U+222b)
# Translators: this is the spoken representation for the character '∬' (U+222c)
# Translators: this is the spoken representation for the character '∭' (U+222d)
# Translators: this is the spoken representation for the character '∮' (U+222e)
# Translators: this is the spoken representation for the character '∯' (U+222f)
# Translators: this is the spoken representation for the character '∰' (U+2230)
# Translators: this is the spoken representation for the character '∱' (U+2231)
# Translators: this is the spoken representation for the character '∲' (U+2232)
# Translators: this is the spoken representation for the character '∳' (U+2233)
# Translators: this is the spoken representation for the character '∴' (U+2234)
# Translators: this is the spoken representation for the character '∵' (U+2235)
# Translators: this is the spoken representation for the character '∶' (U+2236)
# Translators: this is the spoken representation for the character '∷' (U+2237)
# Translators: this is the spoken representation for the character '∸' (U+2238)
# Translators: this is the spoken representation for the character '∹' (U+2239)
# Translators: this is the spoken representation for the character '∺' (U+223a)
# Translators: this is the spoken representation for the character '∻' (U+223b)
# Translators: this is the spoken representation for the character '∼' (U+223c)
# Translators: this is the spoken representation for the character '∽' (U+223d)
# Translators: this is the spoken representation for the character '∾' (U+223e)
# Translators: this is the spoken representation for the character '∿' (U+223f)
# Translators: this is the spoken representation for the character '≀' (U+2240)
# Translators: this is the spoken representation for the character '≁' (U+2241)
# Translators: this is the spoken representation for the character '≂' (U+2242)
# Translators: this is the spoken representation for the character '≃' (U+2243)
# Translators: this is the spoken representation for the character '≄' (U+2244)
# Translators: this is the spoken representation for the character '≅' (U+2245)
# Translators: this is the spoken representation for the character '≆' (U+2246)
# Translators: this is the spoken representation for the character '≇' (U+2247)
# Translators: this is the spoken representation for the character '≈' (U+2248)
# Translators: this is the spoken representation for the character '≉' (U+2249)
# Translators: this is the spoken representation for the character '≊' (U+224a)
# Translators: this is the spoken representation for the character '≋' (U+224b)
# Translators: this is the spoken representation for the character '≌' (U+224c)
# Translators: this is the spoken representation for the character '≍' (U+224d)
# Translators: this is the spoken representation for the character '≎' (U+224e)
# Translators: this is the spoken representation for the character '≏' (U+224f)
# Translators: this is the spoken representation for the character '≐' (U+2250)
# Translators: this is the spoken representation for the character '≑' (U+2251)
# Translators: this is the spoken representation for the character '≒' (U+2252)
# Translators: this is the spoken representation for the character '≓' (U+2253)
# Translators: this is the spoken representation for the character '≔' (U+2254)
# Translators: this is the spoken representation for the character '≕' (U+2255)
# Translators: this is the spoken representation for the character '≖' (U+2256)
# Translators: this is the spoken representation for the character '≗' (U+2257)
# Translators: this is the spoken representation for the character '≘' (U+2258)
# Translators: this is the spoken representation for the character '≙' (U+2259)
# Translators: this is the spoken representation for the character '≚' (U+225a)
# Translators: this is the spoken representation for the character '≛' (U+225b)
# Translators: this is the spoken representation for the character '≜' (U+225c)
# Translators: this is the spoken representation for the character '≝' (U+225d)
# Translators: this is the spoken representation for the character '≞' (U+225e)
# Translators: this is the spoken representation for the character '≟' (U+225f)
# Translators: this is the spoken representation for the character '≠' (U+2260)
# Translators: this is the spoken representation for the character '≡' (U+2261)
# Translators: this is the spoken representation for the character '≢' (U+2262)
# Translators: this is the spoken representation for the character '≣' (U+2263)
# Translators: this is the spoken representation for the character '≤' (U+2264)
# Translators: this is the spoken representation for the character '≥' (U+2265)
# Translators: this is the spoken representation for the character '≦' (U+2266)
# Translators: this is the spoken representation for the character '≧' (U+2267)
# Translators: this is the spoken representation for the character '≨' (U+2268)
# Translators: this is the spoken representation for the character '≩' (U+2269)
# Translators: this is the spoken representation for the character '≪' (U+226a)
# Translators: this is the spoken representation for the character '≫' (U+226b)
# Translators: this is the spoken representation for the character '≬' (U+226c)
# Translators: this is the spoken representation for the character '≭' (U+226d)
# Translators: this is the spoken representation for the character '≮' (U+226e)
# Translators: this is the spoken representation for the character '≯' (U+226f)
# Translators: this is the spoken representation for the character '≰' (U+2270)
# Translators: this is the spoken representation for the character '≱' (U+2271)
# Translators: this is the spoken representation for the character '≲' (U+2272)
# Translators: this is the spoken representation for the character '≳' (U+2273)
# Translators: this is the spoken representation for the character '≴' (U+2274)
# Translators: this is the spoken representation for the character '≵' (U+2275)
# Translators: this is the spoken representation for the character '≶' (U+2276)
# Translators: this is the spoken representation for the character '≷' (U+2277)
# Translators: this is the spoken representation for the character '≸' (U+2278)
# Translators: this is the spoken representation for the character '≹' (U+2279)
# Translators: this is the spoken representation for the character '≺' (U+227a)
# Translators: this is the spoken representation for the character '≻' (U+227b)
# Translators: this is the spoken representation for the character '≼' (U+227c)
# Translators: this is the spoken representation for the character '≽' (U+227d)
# Translators: this is the spoken representation for the character '≾' (U+227e)
# Translators: this is the spoken representation for the character '≿' (U+227f)
# Translators: this is the spoken representation for the character '⊀' (U+2280)
# Translators: this is the spoken representation for the character '⊁' (U+2281)
# Translators: this is the spoken representation for the character '⊂' (U+2282)
# Translators: this is the spoken representation for the character '⊃' (U+2283)
# Translators: this is the spoken representation for the character '⊄' (U+2284)
# Translators: this is the spoken representation for the character '⊅' (U+2285)
# Translators: this is the spoken representation for the character '⊆' (U+2286)
# Translators: this is the spoken representation for the character '⊇' (U+2287)
# Translators: this is the spoken representation for the character '⊈' (U+2288)
# Translators: this is the spoken representation for the character '⊉' (U+2289)
# Translators: this is the spoken representation for the character '⊊' (U+228a)
# Translators: this is the spoken representation for the character '⊋' (U+228b)
# Translators: this is the spoken representation for the character '⊌' (U+228c)
# Translators: this is the spoken representation for the character '⊍' (U+228d)
# Translators: this is the spoken representation for the character '⊎' (U+228e)
# Translators: this is the spoken representation for the character '⊏' (U+228f)
# Translators: this is the spoken representation for the character '⊐' (U+2290)
# Translators: this is the spoken representation for the character '⊑' (U+2291)
# Translators: this is the spoken representation for the character '⊒' (U+2292)
# Translators: this is the spoken representation for the character '⊓' (U+2293)
# Translators: this is the spoken representation for the character '⊔' (U+2294)
# Translators: this is the spoken representation for the character '⊕' (U+2295)
# Translators: this is the spoken representation for the character '⊖' (U+2296)
# Translators: this is the spoken representation for the character '⊗' (U+2297)
# Translators: this is the spoken representation for the character '⊘' (U+2298)
# Translators: this is the spoken representation for the character '⊙' (U+2299)
# Translators: this is the spoken representation for the character '⊚' (U+229a)
# Translators: this is the spoken representation for the character '⊛' (U+229b)
# Translators: this is the spoken representation for the character '⊜' (U+229c)
# Translators: this is the spoken representation for the character '⊝' (U+229d)
# Translators: this is the spoken representation for the character '⊞' (U+229e)
# Translators: this is the spoken representation for the character '⊟' (U+229f)
# Translators: this is the spoken representation for the character '⊠' (U+22a0)
# Translators: this is the spoken representation for the character '⊡' (U+22a1)
# Translators: this is the spoken representation for the character '⊢' (U+22a2)
# Translators: this is the spoken representation for the character '⊣' (U+22a3)
# Translators: this is the spoken representation for the character '⊤' (U+22a4)
# Translators: this is the spoken representation for the character '⊥' (U+22a5)
# Translators: this is the spoken representation for the character '⊦' (U+22a6)
# Translators: this is the spoken representation for the character '⊧' (U+22a7)
# Translators: this is the spoken representation for the character '⊨' (U+22a8)
# Translators: this is the spoken representation for the character '⊩' (U+22a9)
# Translators: this is the spoken representation for the character '⊪' (U+22aa)
# Translators: this is the spoken representation for the character '⊫' (U+22ab)
# Translators: this is the spoken representation for the character '⊬' (U+22ac)
# Translators: this is the spoken representation for the character '⊭' (U+22ad)
# Translators: this is the spoken representation for the character '⊮' (U+22ae)
# Translators: this is the spoken representation for the character '⊯' (U+22af)
# Translators: this is the spoken representation for the character '⊰' (U+22b0)
# Translators: this is the spoken representation for the character '⊱' (U+22b1)
# Translators: this is the spoken representation for the character '⊲' (U+22b2)
# Translators: this is the spoken representation for the character '⊳' (U+22b3)
# Translators: this is the spoken representation for the character '⊴' (U+22b4)
# Translators: this is the spoken representation for the character '⊵' (U+22b5)
# Translators: this is the spoken representation for the character '⊶' (U+22b6)
# Translators: this is the spoken representation for the character '⊷' (U+22b7)
# Translators: this is the spoken representation for the character '⊸' (U+22b8)
# Translators: this is the spoken representation for the character '⊹' (U+22b9)
# Translators: this is the spoken representation for the character '⊺' (U+22ba)
# Translators: this is the spoken representation for the character '⊻' (U+22bb)
# Translators: this is the spoken representation for the character '⊼' (U+22bc)
# Translators: this is the spoken representation for the character '⊽' (U+22bd)
# Translators: this is the spoken representation for the character '⊾' (U+22be)
# Translators: this is the spoken representation for the character '⊿' (U+22bf)
# Translators: this is the spoken representation for the character '⋀' (U+22c0)
# Translators: this is the spoken representation for the character '⋁' (U+22c1)
# Translators: this is the spoken representation for the character '⋂' (U+22c2)
# Translators: this is the spoken representation for the character '⋃' (U+22c3)
# Translators: this is the spoken representation for the character '⋄' (U+22c4)
# Translators: this is the spoken representation for the character '⋅' (U+22c5)
# Translators: this is the spoken representation for the character '⋆' (U+22c6)
# Translators: this is the spoken representation for the character '⋇' (U+22c7)
# Translators: this is the spoken representation for the character '⋈' (U+22c8)
# Translators: this is the spoken representation for the character '⋉' (U+22c9)
# Translators: this is the spoken representation for the character '⋊' (U+22ca)
# Translators: this is the spoken representation for the character '⋋' (U+22cb)
# Translators: this is the spoken representation for the character '⋌' (U+22cc)
# Translators: this is the spoken representation for the character '⋍' (U+22cd)
# Translators: this is the spoken representation for the character '⋎' (U+22ce)
# Translators: this is the spoken representation for the character '⋏' (U+22cf)
# Translators: this is the spoken representation for the character '⋐' (U+22d0)
# Translators: this is the spoken representation for the character '⋑' (U+22d1)
# Translators: this is the spoken representation for the character '⋒' (U+22d2)
# Translators: this is the spoken representation for the character '⋓' (U+22d3)
# Translators: this is the spoken representation for the character '⋔' (U+22d4)
# Translators: this is the spoken representation for the character '⋕' (U+22d5)
# Translators: this is the spoken representation for the character '⋖' (U+22d6)
# Translators: this is the spoken representation for the character '⋗' (U+22d7)
# Translators: this is the spoken representation for the character '⋘' (U+22d8)
# Translators: this is the spoken representation for the character '⋙' (U+22d9)
# Translators: this is the spoken representation for the character '⋚' (U+22da)
# Translators: this is the spoken representation for the character '⋛' (U+22db)
# Translators: this is the spoken representation for the character '⋜' (U+22dc)
# Translators: this is the spoken representation for the character '⋝' (U+22dd)
# Translators: this is the spoken representation for the character '⋞' (U+22de)
# Translators: this is the spoken representation for the character '⋟' (U+22df)
# Translators: this is the spoken representation for the character '⋠' (U+22e0)
# Translators: this is the spoken representation for the character '⋡' (U+22e1)
# Translators: this is the spoken representation for the character '⋢' (U+22e2)
# Translators: this is the spoken representation for the character '⋣' (U+22e3)
# Translators: this is the spoken representation for the character '⋤' (U+22e4)
# Translators: this is the spoken representation for the character '⋥' (U+22e5)
# Translators: this is the spoken representation for the character '⋦' (U+22e6)
# Translators: this is the spoken representation for the character '⋧' (U+22e7)
# Translators: this is the spoken representation for the character '⋨' (U+22e8)
# Translators: this is the spoken representation for the character '⋩' (U+22e9)
# Translators: this is the spoken representation for the character '⋪' (U+22ea)
# Translators: this is the spoken representation for the character '⋫' (U+22eb)
# Translators: this is the spoken representation for the character '⋬' (U+22ec)
# Translators: this is the spoken representation for the character '⋭' (U+22ed)
# Translators: this is the spoken representation for the character '⋮' (U+22ee)
# Translators: this is the spoken representation for the character '⋯' (U+22ef)
# Translators: this is the spoken representation for the character '⋰' (U+22f0)
# Translators: this is the spoken representation for the character '⋱' (U+22f1)
# Translators: this is the spoken representation for the character '⋲' (U+22f2)
# Translators: this is the spoken representation for the character '⋳' (U+22f3)
# Translators: this is the spoken representation for the character '⋴' (U+22f4)
# Translators: this is the spoken representation for the character '⋵' (U+22f5)
# Translators: this is the spoken representation for the character '⋶' (U+22f6)
# Translators: this is the spoken representation for the character '⋷' (U+22f7)
# Translators: this is the spoken representation for the character '⋸' (U+22f8)
# Translators: this is the spoken representation for the character '⋹' (U+22f9)
# Translators: this is the spoken representation for the character '⋺' (U+22fa)
# Translators: this is the spoken representation for the character '⋻' (U+22fb)
# Translators: this is the spoken representation for the character '⋼' (U+22fc)
# Translators: this is the spoken representation for the character '⋽' (U+22fd)
# Translators: this is the spoken representation for the character '⋾' (U+22fe)
# Translators: this is the spoken representation for the character '⋿' (U+22ff)
# Translators: this is the spoken representation for the character '⌈' (U+2308)
# Translators: this is the spoken representation for the character '⌉' (U+2309)
# Translators: this is the spoken representation for the character '⌊' (U+230a)
# Translators: this is the spoken representation for the character '⌋' (U+230b)
# Translators: this is the spoken representation for the character '⏞' (U+23de)
# Translators: this is the spoken representation for the character '⏟' (U+23df)
# Translators: this is the spoken representation for the character '⟨' (U+27e8)
# Translators: this is the spoken representation for the character '⟩' (U+27e9)
# Translators: this is the spoken representation for the character '⨀' (U+2a00)
# Translators: this is the spoken representation for the character '⨁' (U+2a01)
# Translators: this is the spoken representation for the character '⨂' (U+2a02)
# Translators: this is the spoken representation for the character '⨃' (U+2a03)
# Translators: this is the spoken representation for the character '⨄' (U+2a04)
# Translators: this is the spoken representation for the character '⨅' (U+2a05)
# Translators: this is the spoken representation for the character '⨆' (U+2a06)
# Translators: this is the spoken representation for the character '■' (U+25a0)
# when used as a geometric shape (i.e. as opposed to a bullet in a list).
# Translators: this is the spoken representation for the character '□' (U+25a1)
# Translators: this is the spoken representation for the character '◆' (U+25c6)
# Translators: this is the spoken representation for the character '○' (U+25cb)
# Translators: this is the spoken representation for the character '●' (U+25cf)
# Translators: this is the spoken representation for the character '◦' (U+25e6)
# Translators: this is the spoken representation for the character '◾' (U+25fe)
# Translators: this is the spoken representation for the character '̱' (U+0331)
# which combines with the preceding character. '%s' is a placeholder for the
# preceding character. Some examples of combined symbols can be seen in this
# table: http://www.w3.org/TR/MathML3/appendixc.html#oper-dict.entries-table.
# Translators: this is the spoken representation for the character '̸' (U+0338)
# Translators: this is the spoken representation for the character '⃒' (U+20D2)
# Handle combining characters first
# Combining characters modify the preceding character
# Look for any character followed by the combining character
# Replace the base char + combining char with the spoken name
# Handle regular math symbols
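The combining-character handling described above can be sketched as follows. The mapping strings here are illustrative placeholders, not Orca's actual translated names; only the mechanism (find any base character followed by the combining mark, substitute the spoken template with '%s' bound to the base) matches the comments.

```python
import re

# Hypothetical combining-char -> spoken template table; '%s' stands in for
# the preceding (base) character, as described in the translator comments.
COMBINING_NAMES = {
    "\u0338": "%s with slash through it",   # illustrative wording
    "\u0331": "%s with line under it",      # illustrative wording
}

def replace_combining(text: str) -> str:
    """Replace each base char + combining char pair with its spoken name."""
    for mark, template in COMBINING_NAMES.items():
        # Match any character immediately followed by the combining mark.
        pattern = "(.)" + re.escape(mark)
        text = re.sub(pattern, lambda m, t=template: t % m.group(1), text)
    return text
```

Regular (non-combining) math symbols can then be replaced afterwards with a plain character-to-name lookup, since they do not modify a neighboring character.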
# Utilities for obtaining event-related information.
# Copyright 2024 Igalia, S.L.
# Copyright 2024 GNOME Foundation Inc.
# Gecko inserts a newline at the offset past the space in contenteditables.
# Example: The browser's address bar in response to return on a link.
# Radio buttons normally change their state when you arrow to them, so we handle the
# announcement of their state changes in the focus handling code.
# If this state is cleared, the new state will become checked or unchecked
# and we should get object:state-changed:checked events for those cases.
# Example: Typing the subject in an email client causing the window name to change.
# This can happen in web content where the focus is a contenteditable element and a
# new child element is created for new or changed text.
# Copyright 2006-2008 Sun Microsystems Inc.
# Contains keyboard-label:presentable-name pairs
# Translators: this is how someone would speak the name of the shift key
# Translators: this is how someone would speak the name of the alt key
# Translators: this is how someone would speak the name of the control key
# Translators: this is how someone would speak the name of the left shift key
# Translators: this is how someone would speak the name of the left alt key
# Translators: this is how someone would speak the name of the left ctrl key
# Translators: this is how someone would speak the name of the right shift key
# Translators: this is how someone would speak the name of the right alt key
# Translators: this is how someone would speak the name of the right ctrl key
# Translators: this is how someone would speak the name of the left meta key
# Translators: this is how someone would speak the name of the right meta key
# Translators: this is how someone would speak the name of the num lock key
# Translators: this is how someone would speak the name of the caps lock key
# Translators: this is how someone would speak the name of the shift lock key
# There is no reason to make it different from the translation for "caps lock"
# Translators: this is how someone would speak the name of the scroll lock key
# Translators: this is how someone would speak the name of the page up key
# Translators: this is how someone would speak the name of the page down key
# Translators: this is how someone would speak the name of the tab key
# Translators: this is how someone would speak the name of the left tab key
# Translators: this is the spoken word for the space character
# Translators: this is how someone would speak the name of the backspace key
# Translators: this is how someone would speak the name of the return key
# Translators: this is how someone would speak the name of the enter key
# Translators: this is how someone would speak the name of the up arrow key
# Translators: this is how someone would speak the name of the down arrow key
# Translators: this is how someone would speak the name of the left arrow key
# Translators: this is how someone would speak the name of the right arrow key
# Translators: this is how someone would speak the name of the left super key
# Translators: this is how someone would speak the name of the right super key
# Translators: this is how someone would speak the name of the menu key
# Translators: this is how someone would speak the name of the ISO shift key
# Translators: this is how someone would speak the name of the help key
# Translators: this is how someone would speak the name of the multi key
# Translators: this is how someone would speak the name of the mode switch key
# Translators: this is how someone would speak the name of the escape key
# Translators: this is how someone would speak the name of the insert key
# Translators: this is how someone would speak the name of the delete key
# Translators: this is how someone would speak the name of the home key
# Translators: this is how someone would speak the name of the end key
# Translators: this is how someone would speak the name of the begin key
# Translators: this is how someone would speak the name of the non-spacing
# diacritical key for the grave glyph
# Translators: this is how someone would speak the name of the non-spacing
# diacritical key for the acute glyph
# diacritical key for the circumflex glyph
# diacritical key for the tilde glyph
# diacritical key for the diaeresis glyph
# diacritical key for the ring glyph
# diacritical key for the cedilla glyph
# diacritical key for the stroke glyph
# Translators: this is how someone would speak the name of the minus key
# Translators: this is how someone would speak the name of the plus key
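The keyboard-label table described above is a simple mapping from key labels to presentable (spoken) names, with a fallback to the raw label. The keysym-style labels and spoken names below are examples only; the real table is populated from translated strings.

```python
# Illustrative keyboard-label:presentable-name pairs (not Orca's real data).
KEY_NAMES = {
    "Shift_L": "left shift",
    "Shift_R": "right shift",
    "Control_L": "left ctrl",
    "Num_Lock": "num lock",
}

def presentable_key_name(label: str) -> str:
    # Fall back to the raw label when no spoken form is defined.
    return KEY_NAMES.get(label, label)
```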
# Copyright 2016 Orca Team.
# Utilities for obtaining information about accessible applications.
# Copyright 2023-2024 Igalia, S.L.
# Utilities related to the clipboard
# Copyright 2024-2025 Igalia, S.L.
# pylint: disable=wrong-import-order
# This has to be the first non-docstring line in the module to make linters happy.
# Test if the service is actually available by checking properties
# Test if the service is actually available by calling a simple method
# This pulls in the user's overrides to alternative keys.
# If you try to connect to Klipper from a GNOME session, it will fail with a DBus
# exception. However, if you try to connect to GPaste from a KDE session, it will
# succeed -- or at least not throw an exception. Therefore, check for Klipper first.
# See comment above. Check for GPaste last.
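The probe ordering described above can be sketched as follows. `probe_klipper` and `probe_gpaste` are hypothetical stand-ins for the real D-Bus property/method checks; the point is only the order: Klipper is tried first because probing it from a non-KDE session fails cleanly with an exception, while a GPaste probe may appear to succeed even when the service is unusable.

```python
def pick_clipboard_service(probe_klipper, probe_gpaste):
    """Return the name of the first clipboard service whose probe succeeds."""
    try:
        if probe_klipper():
            return "Klipper"
    except Exception:
        pass  # Expected outside KDE: the D-Bus call raises.
    try:
        if probe_gpaste():
            return "GPaste"
    except Exception:
        pass
    return None
```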
# Some applications send multiple text insertion events for part of a given paste.
# Copyright 2016-2024 Igalia, S.L.
# Copyright 2013-2025 Igalia, S.L.
# To make it possible for focus mode to suspend this navigation without
# changing the user's preferred setting.
# There's a small, theoretical possibility that we can creep out of the logical container,
# but until that happens, this check is the most performant.
# If the "word" to the right consists of the content of the last word in an embedded
# object followed by the space of the parent object, the normal space-adjustment we
# do will cause us to set the caret to the offset with the embedded child and then
# present the first word in that child.
# We get the current line in order to set the last object on the line as the prior object,
# so that we don't re-announce context.
# Copyright 2023 Igalia, S.L.
# Copyright 2005-2009 Sun Microsystems Inc.
# We guess at the focused region.  It's going to be a
# Component or Text region whose accessible is the same
# as the object we're generating braille for.  There is
# a small hack here: we include the knowledge
# that we represent the text area of editable comboboxes
# instead of the combobox itself.  We also do the same
# for table cells because they sometimes have children
# that we present.
# Strip off leading and trailing spaces.
################################# BASIC DETAILS #################################
# egg-list-box, e.g. privacy panel in gnome-control-center
################################### KEYBOARD ###################################
################################ PROGRESS BARS ##################################
##################################### TEXT ######################################
################################### PER-ROLE ####################################
# For multiline text areas, we only show the context if we are on the very first line,
# and there is text on that line.
# TODO - JD: The lines below reflect what we've been doing, but only make sense
# for text fields. Historically we've also used generic text object generation
# for things like paragraphs. For now, maintain the original logic so that we can
# land the refactor. Then follow up with improvements.
# Copyright 2010-2013 The Orca Team
# Translators: This string appears on a button in a dialog. "Activating" the
# selected item will perform the action that one would expect to occur if the
# object were clicked on with the mouse. If the object is a link, activating
# it will bring you to a new page. If the object is a button, activating it
# will press the button. If the object is a combobox, activating it will expand
# it to show all of its contents. And so on.
# Translators: Orca has a number of commands that override the default behavior
# within an application. For instance, on a web page Orca's Structural Navigation
# command "h" moves you to the next heading. What should happen when you press
# "h" in an entry on a web page depends: If you want to resume reading content,
# "h" should move to the next heading; if you want to enter text, "h" should not
# move you to the next heading. Because Orca doesn't know what you want to do,
# it has two modes: In browse mode, Orca treats key presses as commands to read
# the content; in focus mode, Orca treats key presses as something that should be
# handled by the focused widget. Orca optionally can attempt to detect which mode
# is appropriate for the current situation and switch automatically. This string
# is a label for a GUI option to enable such automatic switching when structural
# navigation commands are used. As an example, if this setting were enabled,
# pressing "e" to move to the next entry would move focus there and also turn
# focus mode on so that the next press of "e" would type an "e" into the entry.
# If this setting is not enabled, the second press of "e" would continue to be
# a navigation command to move amongst entries.
# within an application. For instance, if you are at the bottom of an entry and
# press Down arrow, should you leave the entry? It depends on if you want to
# resume reading content or if you are editing the text in the entry. Because
# Orca doesn't know what you want to do, it has two modes: In browse mode, Orca
# treats key presses as commands to read the content; in focus mode, Orca treats
# key presses as something that should be handled by the focused widget. Orca
# optionally can attempt to detect which mode is appropriate for the current
# situation and switch automatically. This string is a label for a GUI option to
# enable such automatic switching when caret navigation commands are used. As an
# example, if this setting were enabled, pressing Down Arrow would allow you to
# move into an entry but once you had done so, Orca would switch to Focus mode
# and subsequent presses of Down Arrow would be controlled by the web browser
# and not by Orca. If this setting is not enabled, Orca would continue to control
# what happens when you press an arrow key, thus making it possible to arrow out
# of the entry.
# enable such automatic switching when native navigation commands are used.
# Here "native" means "not Orca"; it could be a browser navigation command such
# as the Tab key, or it might be a web page behavior, such as the search field
# automatically gaining focus when the page loads.
# Translators: A single braille cell on a refreshable braille display consists
# of 8 dots. Dot 7 is the dot in the bottom left corner. If the user selects
# this option, Dot 7 will be used to 'underline' text of interest, e.g. when
# "marking"/indicating that a given word is bold.
# of 8 dots. Dot 8 is the dot in the bottom right corner. If the user selects
# this option, Dot 8 will be used to 'underline' text of interest, e.g. when
# of 8 dots. Dots 7-8 are the dots at the bottom. If the user selects this
# option, Dots 7-8 will be used to 'underline' text of interest, e.g. when
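The dot-7/dot-8 underline options above can be sketched with the common cell convention (shared by Unicode braille patterns) that dot n maps to bit n-1 of a cell byte, so dot 7 is 0x40 and dot 8 is 0x80. How the real display driver encodes cells is an assumption here; only the OR-in-a-mask idea is being illustrated.

```python
# Dot bit values under the Unicode-braille-style convention (assumed).
DOT_7 = 0x40
DOT_8 = 0x80

def underline_cells(cells, mask):
    """OR the underline mask into each braille cell of a text run of interest."""
    return [cell | mask for cell in cells]
```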
# Translators: This is the label for a button in a dialog.
# Translators: Orca uses Speech Dispatcher to present content to users via
# text-to-speech. Speech Dispatcher has a feature to control how capital
# letters are presented: Do nothing at all, say the word 'capital' prior to
# presenting a capital letter (which Speech Dispatcher refers to as 'spell'),
# or play a tone (which Speech Dispatcher refers to as a sound 'icon'.) This
# string to be translated appears as a combo box item in Orca's Preferences.
# Translators: If this checkbox is checked, then Orca will tell you when one of
# your buddies is typing a message.
# Translators: If this checkbox is checked, then Orca will provide the user with
# chat room specific message histories rather than just a single history which
# contains the latest messages from all the chat rooms that they are in.
# Translators: This is the label of a panel holding options for how messages in
# this application's chat rooms should be spoken. The options are: Speak messages
# from all channels (i.e. even if the chat application doesn't have focus); speak
# messages from a channel only if it is the active channel; speak messages from
# any channel, but only if the chat application has focus.
# Translators: This is the label of a radio button. If it is selected, Orca will
# speak all new chat messages as they appear irrespective of whether or not the
# chat application currently has focus. This is the default behaviour.
# speak all new chat messages as they appear if and only if the chat application
# has focus. The string substitution is for the application name (e.g. Pidgin).
# only speak new chat messages for the currently active channel, irrespective of
# whether the chat application has focus.
# Translators: If this checkbox is checked, then Orca will speak the name of the
# chat room prior to presenting an incoming message.
# Translators: When presenting the content of a line on a web page, Orca by
# default presents the full line, including any links or form fields on that
# line, in order to reflect the on-screen layout as seen by sighted users.
# Not all users like this presentation, however, and prefer to have objects
# treated as if they were on individual lines, such as is done by Windows
# screen readers, so that unrelated objects (e.g. links in a navbar) are not
# all jumbled together. As a result, this is now configurable. If layout mode
# is enabled, Orca will present the full line as it appears on the screen; if
# it is disabled, Orca will treat each object as if it were on a separate line,
# both for presentation and navigation.
# Translators: Orca's keybindings support double and triple "clicks" or key
# presses, similar to using a mouse. This string appears in Orca's preferences
# dialog after a keybinding which requires a double click.
# dialog after a keybinding which requires a triple click.
# Translators: This is a label which will appear in the list of available speech
# engines as a special item. It refers to the default engine configured within
# the speech subsystem. Apart from this item, the user will have a chance to
# select a particular speech engine by its real name (Festival, IBMTTS, etc.)
# Translators: This is a label for a column header in Orca's pronunciation
# dictionary. The pronunciation dictionary allows the user to correct words
# which the speech synthesizer mispronounces (e.g. a person's name, a technical
# word) or doesn't pronounce as the user desires (e.g. an acronym) by providing
# an alternative string. The "Actual String" here refers to the word to be
# corrected as it would actually appear in text being read. Example: "LOL".
# an alternative string. The "Replacement String" here refers to how the user
# would like the "Actual String" to be pronounced by the speech synthesizer.
# Example: "L O L" or "Laughing Out Loud" (for Actual String "LOL").
# Translators: Orca has an "echo" feature to present text as it is being written
# by the user. While Orca's "key echo" options present the actual keyboard keys
# being pressed, "character echo" presents the character/string of length 1 that
# is inserted as a result of the keypress.
# by the user. This string refers to a "key echo" option. When this option is
# enabled, dead keys will be announced when pressed.
# Translators: Orca has a "find" feature which allows the user to search the
# active application for on screen text and widgets. This string is the title
# of the dialog box.
# active application for on screen text and widgets. This label is associated
# with the text entry where the user types the term to search for.
# with a group of options related to where the search should begin. The options
# are to begin the search from the current location or from the top of the window.
# with the radio button to begin the search from the current location rather
# than from the top of the window.
# with the radio button to begin the search from the top of the window rather
# than the current location.
# with a group of options related to the direction of the search. The options
# are to search backwards and to wrap.
# with the checkbox to perform the search in the reverse direction.
# with the checkbox to wrap around when the top/bottom of the window has been
# reached.
# with a group of options related to what constitutes a match. The options are
# to match case and to match the entire word only.
# with the checkbox to make the search case-sensitive.
# with the checkbox to only match if the full word consists of the search term.
# Translators: This is the label for a spinbutton. This option allows the user
# to specify the number of matched characters that must be present before Orca
# speaks the line that contains the results from an application's Find toolbar.
# Translators: This is the label of a panel containing options for what Orca
# presents when the user is in the Find toolbar of an application, e.g. Firefox.
# Translators: This is the label for a checkbox. This option controls whether
# the line that contains the match from an application's Find toolbar should
# always be spoken, or only spoken if it is a different line than the line
# which contained the last match.
# Translators: This is the label for a checkbox. This option controls whether or
# not Orca will automatically speak the line that contains the match while the
# user is performing a search from the Find toolbar of an application, e.g.
# Firefox.
# Translators: Command is a table column header where the cells in the column
# are a sentence that briefly describes what action Orca will take if and when
# the user invokes that keyboard command.
# Translators: Key Binding is a table column header where the cells in the
# column represent keyboard combinations the user can press to invoke Orca
# Translators: This string is a label for the group of Orca commands which
# can be used in any setting, task, or application. They are not specific
# to, for instance, web browsing.
# are related to debugging.
# are related to its "learn mode". Please use the same translation as done
# in cmdnames.py
# are related to presenting and performing the accessible actions associated
# with the current object.
# Translators: An external braille device has buttons on it that permit the
# user to create input gestures from the braille device. The braille bindings
# are what determine the actions Orca will take when the user presses these
# buttons.
# are related to saving and jumping among objects via "bookmarks".
# are related to caret navigation, such as moving by character, word, and line.
# These commands are enabled by default for web content and can be optionally
# toggled on in other applications.
# are related to the clipboard.
# are related to presenting the date and time.
# Translators: Orca has a sleep mode which causes Orca to essentially behave as
# if it were not running for a given application. Some use cases include self-
# voicing apps with associated commands (e.g. ChromeVox) and VMs. In the former
# case, the self-voicing app is expected to provide all needed commands as well
# as speech and braille. In the latter case, we want to ensure that Orca's
# commands and speech/braille do not interfere with that of the VM and any
# screen reader being used in that VM. Thus when an application is being used
# in sleep mode, nearly all Orca commands become unbound/free, and nothing is
# spoken or brailled. But if the user toggles sleep mode off or switches to
# another application window, Orca commands, speech, and braille immediately
# resume working. This string is a label for the group of Orca commands which
# are related to sleep mode.
# are related to presenting the object under the mouse pointer in speech
# and/or braille. The translation should be consistent with the string
# used in cmdnames.py.
# are related to object navigation.
# Translators: This string is a label for a group of Orca commands which are
# related to presenting information about the system, such as date, time,
# battery status, CPU status, etc.
# are related to structural navigation, such as moving to the next heading,
# paragraph, form field, etc. in a given direction.
# are related to table navigation, such as moving to the next cell in a
# given direction.
# are related to presenting information about the current location, such as
# the title, status bar, and default button of the current window; the
# name, role, and location of the currently-focused object; the selected
# text in the currently-focused object; etc.
# are related to Orca's "flat review" feature. This feature allows the blind
# user to explore the text in a window in a 2D fashion. That is, Orca treats
# all the text from all objects in a window (e.g., buttons, labels, etc.) as
# a sequence of words in a sequence of lines.  The flat review feature allows
# the user to explore this text by the {previous,next} {line,word,character}.
# Those commands are all listed under this group label.
# are related to Orca's speech and verbosity settings. This group of commands
# allows on-the-fly configuration of how much (or little) Orca says about a
# particular object, as well as certain aspects of the voice with which things
# are spoken.
# Translators: the 'flat review' feature of Orca allows the blind user to
# explore the text in a window in a 2D fashion.  That is, Orca treats all
# the text from all objects in a window (e.g., buttons, labels, etc.) as a
# sequence of words in a sequence of lines.  The flat review feature allows
# Normally the contents are navigated without leaving the application being
# reviewed. There is a command which will place the entire contents of the
# flat review representation into a text view to make it easy to review
# and copy the text. This string is the title of the window with the text view.
# Translators: Modified is a table column header in Orca's preferences dialog.
# This column contains a checkbox which indicates whether a key binding
# for an Orca command has been changed by the user to something other than its
# Translators: This label refers to the keyboard layout (desktop or laptop).
# Translators: Orca has a feature to list all of the notification messages
# received, similar to the functionality gnome-shell provides when you press
# Super+M, but it works in all desktop environments. Orca's list is a table
# with two columns, one column for the text of the notification and one
# column for the time of the notification. This string is a column header
# for the text of the notifications.
# for the time, which will be relative (e.g. "10 minutes ago") or absolute.
# are associated with presenting notifications.
# Translators: Orca's preferences can be configured on a per-application basis,
# allowing users to customize Orca's behavior, keybindings, etc. to work one
# way in LibreOffice and another way in a chat application. This string is the
# title of Orca's application-specific preferences dialog for an application.
# The string substituted in is the accessible name of the application (e.g.
# "Gedit", "Firefox", etc.).
# Translators: This is a table column header. This column consists of a single
# checkbox. If the checkbox is checked, Orca will indicate the associated item
# or attribute by "marking" it in braille. "Marking" is not the same as writing
# out the word; instead marking refers to adding some other indicator, e.g.
# "underlining" with braille dots 7-8 a word that is bold.
# Translators: "Present Unless" is a column header of the text attributes panel
# of the Orca preferences dialog. On this panel, the user can select a set of
# text attributes that they would like spoken and/or indicated in braille.
# Because the list of attributes could get quite lengthy, we provide the option
# to always speak/braille a text attribute *unless* its value is equal to the
# value given by the user in this column of the list. For example, given the
# text attribute "underline" and a present unless value of "none", the user is
# stating that he/she would like to have underlined text announced for all cases
# (single, double, low, etc.) except when the value of underline is none (i.e.
# when it's not underlined). "Present" here is being used as a verb.
# Translators: This is a table column header. The "Speak" column consists of a
# single checkbox. If the checkbox is checked, Orca will speak the associated
# item or attribute (e.g. saying "Bold" as part of the information presented
# when the user gives the Orca command to obtain the format and font details of
# the current text).
# Translators: This is the title of a message dialog informing the user that
# he/she attempted to save a new user profile under a name which already exists.
# A "user profile" is a collection of settings which apply to a given task, such
# as a "Spanish" profile which would use Spanish text-to-speech and Spanish
# braille, and be selected when reading Spanish content.
# Translators: This is the label of a message dialog informing the user that
# Translators: This is the message in a dialog informing the user that he/she
# attempted to save a new user profile under a name which already exists.
# Translators: This text is displayed in a message dialog when a user indicates
# he/she wants to switch to a new user profile which will cause him/her to lose
# settings which have been altered but not yet saved. A "user profile" is a
# collection of settings which apply to a given task such as a "Spanish" profile
# which would use Spanish text-to-speech and Spanish braille, and be selected
# when reading Spanish content.
# Translators: Profiles in Orca make it possible for users to quickly switch
# amongst a group of pre-defined settings (e.g. an 'English' profile for reading
# text written in English using an English-language speech synthesizer and
# braille rules, and a similar 'Spanish' profile for reading Spanish text. The
# following string is the title of a dialog in which users can save a newly-
# defined profile.
# following string is the label for a text entry in which the user enters the
# name of a new settings profile being saved via the 'Save Profile As' dialog.
# braille rules, and a similar 'Spanish' profile for reading Spanish text.
# The following is a label in a dialog informing the user that he/she
# is about to remove a user profile, an action that cannot be undone.
# The following is a message in a dialog informing the user that he/she
# is about to remove a user profile, an action that cannot be undone.
# Translators: Orca has a setting which determines which progress bar updates
# should be announced. Choosing "All" means that Orca will present progress bar
# updates regardless of what application and window they happen to be in.
# should be announced. Choosing "Application" means that Orca will present
# progress bar updates as long as the progress bar is in the active application
# (but not necessarily in the current window).
# should be announced. Choosing "Window" means that Orca will present progress
# bar updates as long as the progress bar is in the active window.
# Translators: If this setting is chosen, no punctuation symbols will be spoken
# as a user reads a document.
# Translators: If this setting is chosen, common punctuation symbols (like
# comma, period, question mark) will not be spoken as a user reads a document,
# but less common symbols (such as #, @, $) will.
# Translators: If this setting is chosen, the majority of punctuation symbols
# will be spoken as a user reads a document.
# Translators: If this setting is chosen and the user is reading over an entire
# document, Orca will pause at the end of each line.
# document, Orca will pause at the end of each sentence.
# Translators: Orca has a command that presents a list of structural navigation
# objects in a dialog box so that users can navigate more quickly than they
# could with native keyboard navigation. This is the title for a column which
# contains the text of a blockquote.
# contains the text of a button.
# contains the caption of a table.
# contains the label of a check box.
# contains the text displayed for a web element with an "onClick" handler.
# contains the selected item in a combo box.
# contains the description of an element.
# contains the text of a heading.
# contains the title associated with an iframe.
# contains the text (alt text, title, etc.) associated with an image.
# contains the label of a form field.
# contains the text of a landmark. ARIA role landmarks are the W3C defined HTML
# tag attribute 'role' used to identify important parts of a webpage, like
# banners, main content, search, etc.
# could with native keyboard navigation. This is the title of a column which
# contains the level of a heading. Level will be a "1" for <h1>, a "2" for <h2>,
# and so on.
# contains the text of a link.
# contains the text of a list.
# contains the text of a list item.
# contains the text of an object.
# contains the text of a paragraph.
# contains the label of a radio button.
# contains the role of a widget. Examples include "heading", "paragraph",
# "table", "combo box", etc.
# contains the selected item of a form field.
# contains the state of a widget. Examples include "checked"/"not checked",
# "selected"/"not selected", "visited/not visited", etc.
# contains the text of an entry.
# contains the URI of a link.
# contains the value of a form field.
# could with native keyboard navigation. This is the title of such a dialog box.
# "Clickables" are web elements which have an "onClick" handler.
# Level will be a "1" for <h1>, a "2" for <h2>, and so on.
# ARIA role landmarks are the W3C defined HTML tag attribute 'role' used to
# identify important parts of a webpage, like banners, main content, search, etc.
# A 'large object' is a logical chunk of text, such as a paragraph, a list,
# a table, etc.
# Translators: This is the title of a panel holding options for how to navigate
# HTML content (e.g., Orca caret navigation, positioning of caret, structural
# navigation, etc.).
# Translators: When the user loads a new web page, they can optionally have Orca
# automatically start reading the page from beginning to end. This is the label
# of a checkbox in which users can indicate their preference.
# automatically summarize details about the page, such as the number of elements
# (landmarks, forms, links, tables, etc.).
# Translators: Different speech systems and speech engines work differently when
# it comes to handling pauses (e.g. sentence boundaries). This property allows
# the user to specify whether speech should be sent to the speech synthesis
# system immediately when a pause directive is encountered or if it should be
# queued up and sent to the speech synthesis system once the entire set of
# utterances has been calculated.
# Translators: This string will appear in the list of available voices for the
# current speech engine. "%s" will be replaced by the name of the current speech
# engine, such as "Festival default voice" or "IBMTTS default voice". It refers
# to the default voice configured for given speech engine within the speech
# subsystem. Apart from this item, the list will contain the names of all
# available "real" voices provided by the speech engine.
# Translators: This refers to the voice used by Orca when presenting the content
# of the screen and other messages.
# Translators: This refers to the voice used by Orca when presenting one or more
# characters which is part of a hyperlink.
# Translators: This refers to the voice used by Orca when presenting information
# which is not displayed on the screen as text, but is still being communicated
# by the system in some visual fashion. For instance, Orca says "misspelled" to
# indicate the presence of the red squiggly line found under a spelling error;
# Orca might say "3 of 6" when a user Tabs into a list of six items and the
# third item is selected. And so on.
# characters which is written in uppercase.
# Translators this label refers to the name of particular speech synthesis
# system. (http://devel.freebsoft.org/speechd)
# system. (https://github.com/eeejay/spiel)
# Translators: This is a label for a group of options related to Orca's behavior
# when presenting an application's spell check dialog.
# Translators: This is a label for a checkbox associated with an Orca setting.
# When this option is enabled, Orca will spell out the current error in addition
# to speaking it. For example, if the misspelled word is "foo," enabling this
# setting would cause Orca to speak "f o o" after speaking "foo".
# When this option is enabled, Orca will spell out the current suggestion in
# addition to speaking it. For example, if the misspelled word is "foo," and
# the first suggestion is "for," enabling this setting would cause Orca to speak
# "f o r" after speaking "for".
# When this option is enabled, Orca will present the context (surrounding text,
# typically the sentence or line) in which the mistake occurred.
# Translators: This is a label for an option to tell Orca whether or not it
# should speak the coordinates of the current spreadsheet cell. Coordinates are
# the row and column position within the spreadsheet (i.e. A1, B1, C2 ...)
# Translators: This is a label for an option which controls what Orca speaks when
# presenting selection changes in a spreadsheet. By default, Orca will speak just
# what changed. For instance, if cells A1 through A8 are already selected, and the
# user adds A9 to the selection, Orca by default would just say "A9 selected."
# Some users, however, prefer to have Orca always announce the entire selected range,
# i.e. in the same scenario say "A1 through A9 selected." Those users should enable
# this option.
# Translators: This is a label for an option for whether or not to speak the
# header of a table cell in document content.
# Translators: This is the title of a panel containing options for specifying
# how to navigate tables in document content.
# Translators: This is a label for an option to tell Orca to skip over empty/
# blank cells when navigating tables in document content.
# Translators: When users are navigating a table, they sometimes want the entire
# row of a table read; other times they want just the current cell presented to
# them. This label is associated with the default presentation to be used.
# should speak table cell coordinates in document content.
# should speak the span size of a table cell (e.g., how many rows and columns
# a particular table cell spans in a table).
# Translators: This is a table column header. "Attribute" here refers to text
# attributes such as bold, underline, family-name, etc.
# Translators: Gecko native caret navigation is where Firefox itself controls
# how the arrow keys move the caret around HTML content. It's often broken, so
# Orca needs to provide its own support. As such, Orca offers the user the
# ability to switch between the Firefox mode and the Orca mode. This is the
# label of a checkbox in which users can indicate their default preference.
# Translators: Orca provides keystrokes to navigate HTML content in a structural
# manner: go to previous/next header, list item, table, etc. This is the label
# of a checkbox in which users can indicate their default preference.
# Translators: This refers to the amount of information Orca provides about a
# particular object that receives focus.
# Super+M, but it works in all desktop environments. This string is the title
# of the dialog that contains the list of notification messages. The string
# substitution is for the number of messages in the list.
# Copyright 2006, 2007, 2008, 2009 Brailcom, o.p.s.
# Copyright © 2024 GNOME Foundation Inc.
# Author: Andy Holmes <andyholmes@gnome.org>
# Contributor: Tomas Cerha <cerha@brailcom.org>
# Shutdown unavailable providers
# Update the default server's voices
# Don't return the instance, unless it is successfully added
# to `_active_servers'.
# *** Instance methods ***
# The speechServerInfo setting is not connected to the speechServerFactory. As a result,
# the user's chosen server (synthesizer) might be from speech-dispatcher.
# The default 50 (0-100) to Spiel's 1.0 (0.1-10.0)
# The default 5.0 (0-10.0) is mapped to Spiel's 1.0 (0-2.0)
# The default 5.0 (0-10.0) to Spiel's 1.0 (0-2.0)
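The rate/pitch/volume comments above describe mapping Orca's settings scales onto Spiel's. A minimal sketch of one way such mappings could look; the function names are hypothetical and the actual conversion in Orca's Spiel backend may use a different curve:

```python
def rate_to_spiel(rate):
    """Map Orca's 0-100 rate (default 50) onto Spiel's 0.1-10.0 scale.

    A logarithmic curve sends 0 -> 0.1, 50 -> 1.0, and 100 -> 10.0,
    matching the endpoints and default noted in the comments above.
    """
    return 10 ** ((rate - 50) / 50)

def pitch_to_spiel(pitch):
    """Map Orca's 0-10.0 pitch (default 5.0) onto Spiel's 0-2.0 scale."""
    return pitch / 5.0
```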
# Duplicate of what's in speechdispatcherfactory.py
# Use voices from the current provider, not all voices from the speaker
# If we have a tracked voice, prioritize it over ACSS family. This allows D-Bus voice
# changes to take precedence.
# If no specific voice family is requested, use first available voice.
# Prioritize the voice language, dialect and then family
# next voice
# Language and dialect are ensured, so a match here is perfect
# Maintain a speaker singleton for all providers
# Load the provider voices for this server
# If the text is not pre-formatted SSML, and the voice supports it,
# convert the text to SSML with word offsets marked.
# Always offer the configured default voice with a language
# set according to the current locale.
# Check how it appears in the server list
#speechserver.VoiceFamily.GENDER: speechserver.VoiceFamily.MALE,
# See speechdispatcherfactory.py for why this is disabled
# if interrupt:
# Convert dict to ACSS if needed for compatibility with speech generation system
# TODO: map start/end to current_offset/current_end_offset
# Use the range-started signal for progress, if supported
# If we're not the last speaker, don't shut down.
# If there's not a default speaker, there's nothing we can do.
# Don't immediately cut off speech.
# Candidate for sound
# Users may prefer DEFAULT_VOICE here
# This is purely for debugging. The code needed to actually switch voices
# does not yet exist due to some problems which need to be debugged and
# fixed.
# TODO - JD: We need other ways to determine group membership. Not all
# implementations expose the member-of relation. Gtk3 does. Others are TBD.
# TODO - JD: Only presenting the lack of children is what we were doing before the
# generator cleanup. This is potentially because we would (optionally) present the
# number of children as part of the selected child's presentation. Figure out what
# we should really be doing, e.g. in the case where no item is selected.
# TODO - JD: Move to the speech and verbosity manager.
# TODO - JD: Create an alternative role for this.
# we just want a single character
##################################### LINK #####################################
##################################### MATH #####################################
############################### START-OF/END-OF ################################
##################################### STATE #####################################
################################## POSITION #####################################
##################################### TABLE #####################################
# TODO - JD: This function and fake role really need to die....
# TODO - JD: speech.speak() has a bug which causes a list of utterances to
# be presented before a string+voice pair that comes first. Until we can
# fix speak() properly, we'll avoid triggering it here.
# result.append(rv)
##################################### VALUE #####################################
################################### PER-ROLE ###################################
# Do not call _generate_accessible_static_text here for ancestors.
# The roles of objects which typically have static text we want to
# present (panels, groupings, dialogs) already generate it. If we
# include it here, it will be double-presented.
# TODO - JD: Move this logic here.
# TODO - JD: Move the logic from these functions here.
# TODO - JD: Should this instead or also be using the logic in get_notification_content()?
# TODO - JD: There should be separate generators for each type of cell.
# Translators: This is the name of a braille translation table. To learn more
# about braille translation tables, see http://en.wikipedia.org/wiki/Braille.
# 00000000
# 01000000
# 10000000
# 11000000
# three letter abbreviations
# full rolename
# Profiles
# Speech
# None means let the factory decide.
# Sound
# Keyboard and Echo
# Mouse review
# Flat review
# Progressbars
# Structural navigation
# Caret navigation
# Chat
# Spellcheck
# Day and time
# App search support
# Latent support to allow the user to override/define keybindings
# and braille bindings. Unsupported and undocumented for now.
# Use at your own risk.
# N.B. The following are experimental and may change or go away at any time.
# Copyright 2011-2024 Igalia, S.L.
# Setting this to lower ensures we present the state and/or text changes that triggered
# the invalid state prior to presenting the invalid state.
# gnome-shell fires "focused" events spuriously after the Alt+Tab switcher
# is used and something else has claimed focus. We don't want to update our
# location or the keygrabs in response.
# Events on the window itself are typically something we want to handle.
# Events from the text role are typically something we want to handle.
# One exception is a huge text insertion. For instance when a huge plain text document
# is loaded in a text editor, the text view in some versions of GTK fires a ton of text
# insertions of several thousand characters each, along with caret moved events. Ignore
# the former and let our flood protection handle the latter.
# Notifications and alerts are things we want to handle.
# Keep these checks early in the process so we can assume them throughout
# the rest of our checks.
# We see an unbelievable number of active-descendant-changed and selection changed from Caja
# when the user navigates from one giant folder to another. We need the spam filtering
# below to catch this bad behavior coming from a focused object, so only return early here
# if the focused object doesn't manage descendants, or the event is not a focus claim.
# mutter-x11-frames is firing accessibility events. We will never present them.
# TeamTalk5 spam
# Thunderbird spam
# Web app spam
# Thunderbird spams us with these when a message list thread is expanded/collapsed.
# Gtk 3 apps. See https://gitlab.gnome.org/GNOME/gtk/-/issues/6449
# The Gedit and Thunderbird scripts pay attention to this event for spellcheck.
# Thunderbird spams us with text changes every time the selected item changes.
# If we are enqueuing events, we're not dead and should not be killed
# and restarted by systemd.
# destroy and don't call again
# This condition appears with gnome-screensaver-dialog.
# See bug 530368.
# TODO - JD: Under what conditions could this actually happen?
# The listener can be None if the event type has a suffix such as "system".
# When we have double/triple-click bindings, the single-click binding will be
# registered first, and subsequent attempts to register what is externally the
# same grab will fail. If we only have a double/triple-click, it succeeds.
# A grab id of 0 indicates failure.
# TODO - JD: It probably makes sense to process remote controller events here rather
# than just updating state.
# pylint: disable=too-many-positional-arguments
# One example: Brave's popup menus live in frames which lack the active state.
# pylint: enable=too-many-arguments
# pylint: enable=too-many-positional-arguments
# TODO: JD - 8 is the value of keybindings.MODIFIER_ORCA, but we need to
# avoid a circular import.
# LibreOffice
# Copyright 2012 Igalia, S.L.
# Copyright 2016-2023 Igalia, S.L.
# We clear the cache on the locus of focus because too many apps and toolkits fail
# to emit the correct accessibility events. We do so recursively on table cells
# to handle bugs like https://gitlab.gnome.org/GNOME/nautilus/-/issues/3253.
# We save the current row and column of a newly focused or selected table cell so that on
# subsequent cell focus/selection we only present the changed location.
# We save the offset for text objects because some apps and toolkits emit caret-moved events
# immediately after a text object gains focus, even though the caret has not actually moved.
# TODO - JD: We should consider making this part of `save_object_info_for_events()` for the
# motivation described above. However, we need to audit callers that set/get the position
# before doing so.
# We save additional information about the object for events that were received at the same
# time as the prioritized focus-change event so we don't double-present aspects about obj.
# TODO - JD: Consider always updating the active script here.
# Don't update the focus to the active window if we can't get to the active window
# from the focused object. https://bugreports.qt.io/browse/QTBUG-130116
# $ORCA_VERSION
# The revision if built from git; otherwise an empty string
# "--prefix" parameter used when configuring the build.
# The package name (should be "orca").
# The location of the data directory (usually "share").
# The directory where we could find liblouis translation tables.
# Copyright 2005-2008 Google Inc.
# Portions Copyright 2007-2008, Sun Microsystems, Inc.
# A value of None means use the engine's default value.
# Do a 'deep copy' of the family.  Otherwise,
# the new ACSS shares the actual data with the
# props passed in.  This can cause unexpected
# side effects.
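The deep-copy caveat above can be illustrated with plain dicts; this sketch uses a toy props structure standing in for the real ACSS class:

```python
import copy

# Toy stand-in for the ACSS props described above; not Orca's actual class.
props = {"family": {"name": "Paul", "locale": "en"}}

shallow = dict(props)                 # copies the top level only
shallow["family"]["name"] = "Maria"   # ...so this mutates props too
assert props["family"]["name"] == "Maria"

props = {"family": {"name": "Paul", "locale": "en"}}
deep = copy.deepcopy(props)           # independent copy of the nested family
deep["family"]["name"] = "Maria"
assert props["family"]["name"] == "Paul"   # original left untouched
```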
# pronunciation_dict is a dictionary where the keys are words and the value is a list
# containing the original word and the replacement pronunciation string.
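Per the comment above, pronunciation_dict maps a word to a list holding the original word and its replacement pronunciation. A sketch with made-up entries and a hypothetical lookup helper:

```python
# Hypothetical entries illustrating the structure described above:
# key = the word, value = [original word, replacement pronunciation].
pronunciation_dict = {
    "btw": ["btw", "by the way"],
    "lol": ["lol", "laughing out loud"],
}

def get_pronunciation(word, dictionary):
    """Return the replacement pronunciation for word, or word unchanged."""
    entry = dictionary.get(word.lower())
    return entry[1] if entry else word
```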
# Copyright 2011-2025 Igalia, S.L.
# pylint:disable=too-many-branches
# pylint:disable=too-many-return-statements
# This must be the first non-docstring line in the module to make linters happy.
# TODO - JD: It turns out there's no UI for this setting, so it defaults to None.
# If the previous character is not a word delimiter, there's nothing to echo.
# Two back-to-back delimiters should not result in a re-echo.
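The two delimiter rules above amount to a simple predicate; a sketch with a hypothetical delimiter set (Orca derives its own):

```python
# Hypothetical delimiter set for illustration only.
DELIMITERS = set(" \t\n.,;:!?")

def should_echo_word(typed_char, previous_char):
    """Echo the completed word only when a delimiter is typed and the
    previous character was not itself a delimiter (no re-echo)."""
    return typed_char in DELIMITERS and previous_char not in DELIMITERS
```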
# pylint: disable-next=too-many-statements
# Historically we had only filtered out Orca-modified events. But it seems strange to
# treat the Orca modifier as special. A change to make Orca-modified events echoed in
# the same fashion as other modified events received some negative feedback, specifically
# in relation to flat review commands in laptop layout. Feedback regarding whether any
# command-like modifier should result in no echo has thus far ranged from yes to
# it's "not disturbing" to hear echo with other modifiers. Given the lack of demand
# for echo with modifiers, treat all command modifiers the same and suppress echo.
# We have no reliable way of knowing a password is being entered into a terminal --
# other than the fact that the text typed isn't there. Before we waited for the
# release event and echoed that. But that is laggy. So delay presentation until we
# see the text appear. If it doesn't appear, we never echo it.
# pylint: disable=no-name-in-module
# Mark beginning of words with U+E000 (private use) and record the
# string offsets
# Note: we need to do this before disturbing the text offsets
# Note2: we assume that subsequent text mangling leaves U+E000 untouched
# Original text already contains U+E000. But syntheses will not
# know what to do with it anyway, so discard it
# Word begin
# Word end
# We had a wholly numeric word; possibly the next word is as well.
# Skip to next word
# Check next word
# add a mark
# Finished with a word
# Annotate the text with U+E000 (private use) and record the offsets
# Transcribe to SSML, translating U+E000 into marks
# Note: we need to do this after all mangling otherwise the ssml markup
# would get mangled too
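The marking scheme described above can be sketched as follows; the function names are hypothetical, and the real implementation handles more cases (pre-existing U+E000, numeric-word merging, punctuation):

```python
from xml.sax.saxutils import escape

SENTINEL = "\ue000"  # private-use character marking word beginnings

def mark_words(text):
    """Prefix each word with the sentinel and record its start offset."""
    offsets, parts, pos = [], [], 0
    for piece in text.split(" "):
        offsets.append(pos)
        parts.append(SENTINEL + piece)
        pos += len(piece) + 1
    return " ".join(parts), offsets

def to_ssml(marked):
    """Escape the text for SSML first, then turn each sentinel into a
    <mark/> element so the markup itself is never escaped."""
    out = escape(marked)
    for index in range(marked.count(SENTINEL)):
        out = out.replace(SENTINEL, f'<mark name="{index}"/>', 1)
    return f"<speak>{out}</speak>"
```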
# This is really not supposed to happen
# Disable for now, until speech dispatcher properly parses them (version 0.8.9 or later)
#elif c == '"':
#elif c == "'":
# An Atspi.Point object stores width in x and height in y.
# If the center points differ by more than delta, they are not on the same line.
# If there's a significant difference in height, they are not on the same line.
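The same-line heuristic above might be sketched like this, using plain (x, y, width, height) tuples rather than Atspi rects; the height-ratio threshold is illustrative only:

```python
def on_same_line(rect1, rect2, delta=0):
    """Heuristic: two extents are on the same line when their vertical
    centers are within delta and their heights are comparable."""
    _x1, y1, _w1, h1 = rect1
    _x2, y2, _w2, h2 = rect2
    # Centers differing by more than delta means different lines.
    if abs((y1 + h1 / 2) - (y2 + h2 / 2)) > delta:
        return False
    # A significant height difference also means different lines.
    if min(h1, h2) > 0 and max(h1, h2) / min(h1, h2) > 2:
        return False
    return True
```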
# If the objects claim to have the same coordinates and the same parent,
# we probably have bogus coordinates from the implementation.
# Get a dictionary of text attributes that the user cares about, falling back on the
# default presentable attributes if the user has not specified any.
# TODO - JD: For some reason, we are starting the basic where am I
# in response to the first click. Then we do the detailed one in
# response to the second click. Until that's fixed, interrupt the
# first one.
# Copyright 2022 Igalia, S.L.
# Utilities for performing tasks related to accessibility inspection.
# Copyright 2023-2025 Igalia, S.L.
# Things we cache.
# Firefox alerts and dialogs suffer from this bug too, but if we ignore these windows
# we'll fail to fully present things like the file chooser dialog and the replace-file
# alert. https://bugzilla.mozilla.org/show_bug.cgi?id=1882794
# Some electron apps running in the background claim to be active even when they
# are not. These are the ones we know about. We can add others as we go.
# In a collapsed combobox, one can arrow to change the selection without showing the items.
# ARIA posinset is 1-based.
# In GTK, the contents of the page tab descends from the page tab.
# This should work, but some toolkits are broken.
# The "SpellingDialog" accessible id is supported by the following:
# * LO >= 25.2
# Must match the order of voice types in the GtkBuilder file.
# Must match the order that the timeFormatCombo is populated.
# Must match the order that the dateFormatCombo is populated.
# Restore the default rate/pitch/gain,
# in case the user played with the sliders.
# ***** Key Bindings treeview initialization *****
# Handler name
# Human Readable Description
# Modifier mask 1
# Used Modifiers 1
# Modifier key name 1
# Click count 1
# Original Text of the Key Binding Shown 1
# Text of the Key Binding Shown 1
# Key Modified by User
# Row with fields editable or not
# HANDLER - invisible column
# DESCRIP
# MOD_MASK1 - invisible column
# MOD_USED1 - invisible column
# KEY1 - invisible column
# CLICK_COUNT1 - invisible column
# OLDTEXT1 - invisible column which will store a copy of the
# original keybinding in TEXT1 prior to the Apply or OK
# buttons being pressed.  This will prevent automatic
# resorting each time a cell is edited.
# TEXT1
# MODIF
#column.set_visible(False)
# EDITABLE - invisible column
# Populates the treeview with all the keybindings:
# TODO - JD: Will this ever be the case??
#settings.voices[voiceType] = voiceACSS
# If the user manually selected a family for the current speech server,
# that choice is restored. Otherwise the first family
# (usually the default one) is selected
# TODO: get translated language name from CLDR or such
# Unsupported locale
# If the user manually selected a language for the current speech server,
# that choice is restored. Otherwise the first language is selected
# The family name will be selected as part of selecting the
# voice type.  Whenever the families change, we'll reset the
# voice type selection to the first one ("Default").
# We'll fall back to whatever we happen to be using in the event
# that this preference has never been set.
# Just a note on general naming pattern:
# *        = The name of the combobox
# *Model   = the name of the combobox model
# *Choices = the Orca/speech python objects
# *Choice  = a value from *Choices
# Where * = speechSystems, speechServers, speechLanguages, speechFamilies
# This cascades into systems->servers->voice_type->families...
# Initially setup the list store model based on the values of all
# the known text attributes.
# Attribute Name column (NAME).
# Attribute Speak column (IS_SPOKEN).
# Attribute Mark in Braille column (IS_BRAILLED).
# For braille none should be enabled by default. So only set those the user chose.
# TODO - JD: Is this still needed (for the purpose of the search column)?
# existing entries in the pronunciation dictionary -- unless it's
# the default script.
# Try to do something sensible for the previous format of
# pronunciation dictionary entries. See bug #464754 for
# more details.
# Pronunciation Dictionary actual string (word) column (ACTUAL).
# Pronunciation Dictionary replacement string column (REPLACEMENT)
# Connect a handler for when the user changes columns within the
# view, so that we can adjust the search column for item lookups.
# Speech pane.
# Braille pane.
# Set up contraction table combo box and set it to the
# currently used one.
# Key Echo pane.
# Text attributes pane.
# Pronunciation dictionary pane.
# General pane.
# Orca User Profiles
# We always want to re-order the text attributes page so that enabled
# items are consistently at the top.
# We only want to display one column; not two.
# Force the display comboboxes to be left aligned.
# TODO - JD: Eliminate this function.
# Keep track of new/unbound keybindings that have yet to be applied.
# Whenever the speech servers change, we need to make sure we
# clear whatever family was in use by the current voice types.
# Otherwise, we can end up with family names from one server
# bleeding over (e.g., "Paul" from Fonix ends up getting in
# the "Default" voice type after we switch to eSpeak).
# Remember the last family manually selected by the user for the
# current speech server.
# To use this default handler please make sure:
# The name of the setting that will be changed is: settingName
# The id of the widget in the ui should be: settingNameCheckButton
# strip "CheckButton" from the end.
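The convention above can be sketched as a small helper: a checkbox widget id such as "settingNameCheckButton" maps back to its setting name by stripping the "CheckButton" suffix. The widget ids below are illustrative, not actual Orca ids.

```python
def setting_name_from_widget_id(widget_id: str) -> str:
    """Derive the setting name from a checkbox widget id by stripping
    the "CheckButton" suffix, per the naming convention above."""
    suffix = "CheckButton"
    if widget_id.endswith(suffix):
        return widget_id[:-len(suffix)]
    return widget_id

print(setting_name_from_widget_id("enableSpeechCheckButton"))  # enableSpeech
```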
# We want the keyname rather than the printable character.
# If it's not on the keypad, get the name of the unshifted
# character. (i.e. "1" instead of "!")
# If we remove the currently used starting profile, fallback on
# the first listed profile, or the default one if there's
# nothing better
# Update the current profile to the active profile unless we're
# removing that one, in which case we use the new starting
# profile
# Make sure nothing is referencing the removed profile anymore
# Copyright 2016-2025 Igalia, S.L.
# (start_offset, string) -> index
# TODO - JD: For now, don't fake character and word extents.
# The main goal is to improve reviewability.
# Ensure words are initialized
# Clear any existing mapping
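The "(start_offset, string) -> index" mapping noted above can be illustrated with a minimal standalone builder: each word's start offset and text key to its position, so a word can be looked up quickly during review. The data here is made up.

```python
def build_word_map(words):
    """Map (start_offset, word_string) to the word's index, assuming the
    words are laid out contiguously. Illustrative sketch only."""
    word_map = {}
    offset = 0
    for index, word in enumerate(words):
        word_map[(offset, word)] = index
        offset += len(word)
    return word_map

print(build_word_map(["foo ", "bar"]))  # {(0, 'foo '): 0, (4, 'bar'): 1}
```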
# Optionally skip whitespace-only lines based on user preference.
# Items can be pruned from the flat review context. When this happens, usually the parent
# or one of its children will still be in the context.
# This is related to contracted braille.
# Translators: short braille for the rolename of an invalid GUI object.
# We strive to keep it under three characters to preserve real estate.
# Translators: short braille for the rolename of an alert dialog.
# NOTE for all the short braille words: we strive to keep them
# around three characters to preserve real estate on the braille
# display.  The letters are chosen to make them unique across all
# other rolenames, and they typically act like an abbreviation.
# Translators: short braille for the rolename of an animation widget.
# Translators: short braille for the rolename of an arrow widget.
# Translators: short braille for the rolename of a calendar widget.
# Translators: short braille for the rolename of a canvas widget.
# Translators: short braille for the rolename of a caption (e.g.,
# table caption).
# Translators: short braille for the rolename of a checkbox.
# Translators: short braille for the rolename of a check menu item.
# Translators: short braille for the rolename of a color chooser.
# Translators: short braille for the rolename of a column header.
# Translators: short braille for the rolename of a combo box.
# Translators: short braille for the rolename of a date editor.
# Translators: short braille for the rolename of a desktop icon.
# Translators: short braille for the rolename of a desktop frame.
# Translators: short braille for the rolename of a dial.
# You should attempt to treat it as an abbreviation of
# the translated word for "dial".  It is OK to use an
# unabbreviated word as long as it is relatively short.
# Translators: short braille for the rolename of a dialog.
# Translators: short braille for the rolename of a directory pane.
# Translators: short braille for the rolename of an HTML document frame.
# Translators: short braille for the rolename of a drawing area.
# Translators: short braille for the rolename of a file chooser.
# Translators: short braille for the rolename of a filler.
# Translators: short braille for the rolename of a font chooser.
# Translators: short braille for the rolename of a form.
# the translated word for "form".  It is OK to use an
# Translators: short braille for the rolename of a frame.
# Translators: short braille for the rolename of a glass pane.
# Translators: short braille for the rolename of a heading.
# Translators: short braille for the rolename of an html container.
# Translators: short braille for the rolename of an icon.
# Translators: short braille for the rolename of an image.
# Translators: short braille for the rolename of an internal frame.
# Translators: short braille for the rolename of a label.
# Translators: short braille for the rolename of a layered pane.
# Translators: short braille for the rolename of a link.
# Translators: short braille for the rolename of a list.
# Translators: short braille for the rolename of a list item.
# Translators: short braille for the rolename of a menu.
# Translators: short braille for the rolename of a menu bar.
# Translators: short braille for the rolename of a menu item.
# Translators: short braille for the rolename of an option pane.
# Translators: short braille for the rolename of a page tab.
# Translators: short braille for the rolename of a page tab list.
# Translators: short braille for the rolename of a panel.
# Translators: short braille for the rolename of a password field.
# Translators: short braille for the rolename of a popup menu.
# Translators: short braille for the rolename of a progress bar.
# Translators: short braille for the rolename of a push button.
# Translators: short braille for the rolename of a radio button.
# Translators: short braille for the rolename of a radio menu item.
# Translators: short braille for the rolename of a root pane.
# Translators: short braille for the rolename of a row header.
# Translators: short braille for the rolename of a scroll bar.
# Translators: short braille for the rolename of a scroll pane.
# Translators: short braille for the rolename of a section (e.g., in html).
# Translators: short braille for the rolename of a separator.
# Translators: short braille for the rolename of a slider.
# Translators: short braille for the rolename of a split pane.
# Translators: short braille for the rolename of a spin button.
# Translators: short braille for the rolename of a statusbar.
# Translators: short braille for the rolename of a table.
# Translators: short braille for the rolename of a table cell.
# Translators: short braille for the rolename of a table column header.
# Translators: short braille for the rolename of a table row header.
# Translators: short braille for the rolename of a tear off menu item.
# Translators: short braille for the rolename of a terminal.
# Translators: short braille for the rolename of a text entry field.
# Translators: short braille for the rolename of a toggle button.
# Translators: short braille for the rolename of a toolbar.
# Translators: short braille for the rolename of a tooltip.
# Translators: short braille for the rolename of a tree.
# Translators: short braille for the rolename of a tree table.
# Translators: short braille for when the rolename of an object is unknown.
# Translators: short braille for the rolename of a viewport.
# Translators: short braille for the rolename of a window.
# Translators: short braille for the rolename of a header.
# Translators: short braille for the rolename of a footer.
# Translators: short braille for the rolename of a paragraph.
# Translators: short braille for the rolename of an application.
# Translators: short braille for the rolename of an autocomplete.
# Translators: short braille for the rolename of an editbar.
# Translators: short braille for the rolename of an embedded component.
# Load GtkBuilder file.
# Force the localization of widgets to work around a GtkBuilder
# bug. See bgo bug 589362.
# Set default application icon.
# Called when no attribute in __dict__
# Add reference to cache.
# TODO - JD: This is a workaround for a GtkBuilder bug which prevents
# the strings displayed by widgets from being translated. See bgo bug
# 589362.
# For some reason, if we localize the frame, which has a label
# but does not (itself) support use_markup, we get unmarked
# labels which are not bold but which do have <b></b>. If we
# skip the frames, the labels get processed as expected. And
# there was much rejoicing. Yea.
# pylint: disable=c-extension-no-member
# If we are in unrestricted mode, update the context as below.
# If the context already exists, but the active mode is not flat review, update
# the flat review location to that of the object of interest -- if the object of
# interest is in the flat review context (which means it's on screen). In some
# cases the object of interest will not be in the flat review context because it
# is represented by descendant text objects. set_current_to_zone_with_object checks
# for this condition and if it can find a zone whose ancestor is the object of
# interest, it will set the current zone to the descendant, causing Orca to
# present the text at the location of the object of interest.
# If we are restricting, and the current mode is not flat review, calculate a new context
# TODO - JD: See what adjustments might be needed for the pan_amount parameter
# Reset the context
# This is needed to get past any punctuation. Note that we cannot check starts_word
# here. Example: The letter after an apostrophe is reported as a word start.
# The text iter word can start with one or more spaces. Move to the beginning of the flat
# review word before advancing by character.
# TODO - JD: The script manager should not be interacting with speech or braille directly.
# When the presentation manager is created, it should handle speech and braille.
# Only defer to the toolkit script for this object if the app script
# is based on a different toolkit.
# Example: old_script is terminal, new_script is mate-terminal (e.g. for UI)
# Utilities for obtaining state-related information.
# We cannot count on GTK to set the read-only state on text objects.
# Copyright 2016 Igalia, S.L.
#####################################################################
# State information                                                 #
# To better indicate progress toward completion.
# Reduce volume as pitch increases.
# Adjusting so that the initial beeps are not too deep.
# TODO: Implement the result.
# index = AXUtilities.get_position_in_set(obj)
# total = AXUtilities.get_set_size(obj)
# percent = int((index / total) * 100)
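The commented-out computation above can be sketched as a small standalone helper, with the AXUtilities position/size calls replaced by plain integers for illustration:

```python
def progress_percent(index: int, total: int) -> int:
    """Return position-in-set as a percentage, guarding against an
    empty or invalid set before dividing."""
    if total <= 0:
        return 0
    return int((index / total) * 100)

print(progress_percent(3, 10))  # 30
```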
#########################################################################################
# Copyright 2009 Sun Microsystems Inc.
# Copyright 2015-2016 Igalia, S.L.
# TODO - JD: Replace this with the real role once dependencies are bumped to v2.56.
# TODO - JD: The role check is a quick workaround for issue #535 in which we stopped
# presenting Qt table cells because Qt keeps giving us a different object each and
# every time we ask for the cell. https://bugreports.qt.io/browse/QTBUG-128558
# Once that's fixed we can remove the role check.
# If we don't have a label, always use the name.
# To make the unlabeled icons in gnome-panel more accessible.
# TODO - JD: There is no braille property and the braille generation
# doesn't generate this state. Shouldn't it be presented in braille?
# Note that in the case of speech, this state is added to the role name.
# TODO - JD: This is part of the complicated "REAL_ROLE_TABLE_CELL" mess.
# Remove any pre-calculated values which only apply to obj and not row cells.
# TODO - JD: If we had dedicated generators for cell types, we wouldn't need this.
# TODO - JD: This needs to also be looked into.
# Copyright 2010 Consorcio Fernando de los Rios.
# Author: Javier Hernandez Antunez <jhernandez@emergya.es>
# Author: Alejandro Leiva <aleiva@emergya.es>
# Right now the content area is a GtkBox. We'll need to update
# this once GtkBox is fully deprecated.
# Translators: This refers to a CSS color name. The name, hex value, and color
# can be found at http://www.w3schools.com/cssref/css_colornames.asp and at
# http://en.wikipedia.org/wiki/Web_colors#X11_color_names.
# http://en.wikipedia.org/wiki/Web_colors#HTML_color_names.
# Find the closest match.
# Hold black and white to higher standards than the other close colors.
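The closest-match idea above can be sketched as follows: pick the nearest named color by squared RGB distance, but require black and white to be a much closer match before they win. The palette and threshold below are illustrative, not Orca's actual values.

```python
# Illustrative palette; the real color list would be far larger.
PALETTE = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red": (255, 0, 0),
    "gray": (128, 128, 128),
}

def closest_color_name(rgb, strict_threshold=1000):
    """Return the nearest palette name, holding black and white to a
    higher standard (an assumed threshold) than the other colors."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    name = min(PALETTE, key=lambda n: dist2(rgb, PALETTE[n]))
    if name in ("black", "white") and dist2(rgb, PALETTE[name]) > strict_threshold:
        # Not close enough to claim black/white; fall back to the rest.
        others = [n for n in PALETTE if n not in ("black", "white")]
        name = min(others, key=lambda n: dist2(rgb, PALETTE[n]))
    return name

print(closest_color_name((250, 250, 250)))  # white
print(closest_color_name((40, 40, 40)))     # gray, not black
```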
# Utilities for obtaining information about accessible tables.
# Copyright 2023 GNOME Foundation Inc.
# Things which have to be explicitly cleared.
# We might have nested cells. So far this has only been seen in Gtk,
# where the parent of a table cell is also a table cell. We need the
# index of the parent for use with the table interface.
# Cells in a tree are expected to not span multiple rows or columns.
# Also this: https://bugreports.qt.io/browse/QTBUG-119167
# TODO - JD: We get the spans individually due to
# https://bugzilla.mozilla.org/show_bug.cgi?id=1862437
# Firefox has the following implementation:
# Chromium has the following implementation:
# The Firefox implementation means we can get all the headers with some work.
# The Chromium implementation means less work, but makes it hard to present
# the changed outer header when navigating among nested row/column headers.
# TODO - JD: Figure out what the rest do, and then try to get the implementations
# aligned.
# There either are no headers, or we got all of them.
# The attribute officially has the word "index" in it for clarity.
# TODO - JD: Google Sheets needs to start using the correct attribute name.
# Copyright 2010-2011 The Orca Team
# TODO - JD: This ultimately belongs in an extension manager.
# Handle the case where a change was made in the Orca Preferences dialog.
# Pause event queuing first so that it clears its queue and will not accept new
# events. Then let the script manager unregister script event listeners as well
# as key grabs. Finally deactivate the event manager, which will also cause the
# Atspi.Device to be set to None.
# Shutdown all the other support.
# Legacy behavior, here for backwards-compatibility. You really should
# never rely on this. Run git blame and read the commit message!
# Author: Adrian Vovk <avovk@redhat.com>
# Abstract socket
# Normal filesystem socket
# VSOCK, etc
# µs -> ms
# https://freedesktop.org/software/systemd/man/sd_notify.html#Standalone%20Implementations
# The interval systemd reports to us is the deadline: if we miss it,
# systemd will restart Orca. So, we want to ping more quickly than
# requested, to avoid a situation where timer inaccuracies will
# cause us to miss the deadline. systemd's code for this pings
# anywhere from 133% - 200% faster than necessary. For us it's
# easier to just ping 2x as fast. Use a high priority so that it is
# scheduled ahead of other work done on the main loop (e.g. event
# processing during a flood).
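The timing described above can be sketched in isolation: systemd passes the deadline in microseconds via WATCHDOG_USEC, we convert to milliseconds, and we ping twice as fast as required so timer jitter cannot cause a missed deadline. The environment handling here is a simplified sketch, not Orca's actual code.

```python
import os

def watchdog_ping_interval_ms(environ=os.environ):
    """Return the ping interval in ms, or None if no watchdog is set:
    convert WATCHDOG_USEC from µs to ms, then halve it (ping 2x as fast)."""
    usec = environ.get("WATCHDOG_USEC")
    if usec is None:
        return None
    deadline_ms = int(usec) // 1000  # µs -> ms
    return deadline_ms // 2          # ping at twice the required rate

print(watchdog_ping_interval_ms({"WATCHDOG_USEC": "30000000"}))  # 15000
```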
# Translators: Sometimes when we attempt to get the name of an accessible
# software application, we fail because the app or one of its elements is
# defunct. This is a generic name so that we can still refer to this element
# in messages.
# Translators: Orca has a command to report the battery status. This message
# is presented to the user when they use this command but Orca was unable to
# retrieve any information about the battery.
# presents the battery level as a percent.
# presents the plugged-in status to the user.
# Translators: This is presented when the user has navigated to an empty line.
# Translators: This refers to font weight.
# Translators: Orca has a feature in which users can store/save a particular
# location in an application window and return to it later by pressing a
# keystroke. These stored/saved locations are "bookmarks". This string is
# presented to the user when a new bookmark has been entered into the list
# of bookmarks.
# presented to the user when the active list of bookmarks has been saved to
# disk.
# presented to the user when an error was encountered, preventing the active
# list of bookmarks being saved to disk.
# presented to the user when they try to go to a bookmark, but don't have
# any bookmarks.
# presented to the user when they try to go to a bookmark at a particular
# index (e.g. bookmark 1 or bookmark 2) but there is no bookmark stored at
# that index.
# Translators: Orca has a command which toggles all (other) Orca commands so that
# the associated keystroke can be consumed by the native application. For example,
# if there were an Orca command bound to Alt+Down, normally pressing Alt+Down
# would cause the Orca command to be used. This would mean Alt+Down could not be
# used in editors to move the current line of text down. By temporarily disabling
# Orca commands, Alt+Down would be ignored by Orca and work as expected in the
# editor. This string is what Orca presents to the user when Orca's commands are
# being toggled off.
# being toggled back on.
# presenting a capital letter, or play a tone which Speech Dispatcher refers
# to as a sound 'icon'. This string to be translated refers to the brief/
# non-verbose output presented in response to the use of an Orca command which
# makes it possible for users to quickly cycle amongst these alternatives
# without having to get into a GUI.
# to as a sound 'icon'. This string to be translated refers to the full/verbose
# output presented in response to the use of an Orca command which makes it
# possible for users to quickly cycle amongst these alternatives without having
# to get into a GUI.
# Translators: Native application caret navigation does not always work as the
# Orca user wants. As such, Orca offers the user the ability to toggle between
# the application controlling the caret and Orca controlling it. This message
# is presented to indicate that the application's native caret navigation is
# active / not being overridden by Orca.
# Translators: Gecko native caret navigation is where Firefox (or Thunderbird)
# itself controls how the arrow keys move the caret around HTML content. It's
# often broken, so Orca needs to provide its own support. As such, Orca offers
# the user the ability to toggle which application is controlling the caret.
# Translators: this is the name of a cell in a spreadsheet.
# Translators: this message is spoken to announce that a table cell just became
# selected (e.g as a result of navigation via Shift + Arrows). The string
# substitution is the cell name. In the case of a spreadsheet the cell name
# will be something like "B3".
# Translators: this message is spoken to announce that multiple table cells just
# became selected (e.g as a result of navigation via Shift + Arrows). The first
# string substitution is the name of the first cell in the range. The second string
# substitution is for the name of the last cell in the range. An example message
# for Calc would be "A1 through A30 selected".
# became unselected (e.g as a result of navigation via Shift + Arrows). The first
# for Calc would be "A1 through A30 unselected".
# unselected (e.g as a result of navigation via Shift + Arrows). The string
# Translators: This is the description of command line option '-d, --disable'
# which allows the user to specify an option to disable as Orca is started.
# Translators: this is the description of command line option '-e, --enable'
# which allows the user to specify an option to enable as Orca is started.
# Translators: This string indicates to the user what should be provided when
# using the '-e, --enable' or '-d, --disable' command line options.
# Translators: This string appears when using 'Orca -h' at the command line.
# It serves as a sort of title and is followed by a detailed list of Orca's
# optional command-line arguments.
# It is followed by a brief list of Orca's optional command-line arguments.
# Translators: This message is displayed when the user starts Orca from the
# command line and includes an invalid option or argument. After the message,
# the list of invalid items, as typed by the user, is displayed.
# Translators: This is the description of command line option '-l, --list-apps'
# which prints the names of running applications which can be seen by assistive
# technologies such as Orca and Accerciser.
# Translators: This is the description of command line option '-p, --profile'
# which allows you to specify a profile to be loaded. A profile stores a group
# of Orca settings configured by the user for a particular purpose, such as a
# 'Spanish' profile which would include Spanish braille and text-to-speech.
# An Orca settings file contains one or more profiles.
# Translators: This message is presented to the user when the specified profile
# could not be loaded. A profile stores a group of Orca settings configured for
# a particular purpose, such as a Spanish profile which would include Spanish
# braille and Spanish text-to-speech. The string substituted in is the user-
# provided profile name.
# Translators: This message is presented to the user who attempts to launch Orca
# from some other environment than the graphical desktop.
# but the launch fails due to an error related to the settings manager.
# Translators: This message is presented to the user when he/she tries to launch
# Orca, but Orca is already running.
# using the '-p, --profile' command line option.
# Translators: This is the description of command line option '-u, --user-prefs'
# that allows you to specify an alternate location from which to load the user
# preferences.
# using the '-u, --user-prefs' command line option.
# Translators: This is the description of command line option '--speech-system'
# which allows you to specify a speech system to use. A speech system provides
# various synthesizers with different voices and languages.
# This option can be used to override the configured default speech system.
# Translators: This message is presented to the user when the specified speech
# system is unavailable. A speech system provides various synthesizers with
# different voices and languages. The first string substituted in is the user-
# provided speech system. The second string substituted is a comma separated
# list of available speech systems.
# using the '--speech-system' command line option.
# Translators: This is the description of command line option '-v, --version'
# which prints the version of Orca. E.g. '1.23.4'.
# Translators: This is the description of command line option '-r, --replace'
# which tells Orca to replace any existing Orca process that might be running.
# Translators: this is the description of command line option '-h, --help'
# which lists all the available command line options.
# Translators: This is the description of command line option '--debug' which
# causes debugging output for Orca to be sent to a file. The YYYY-MM-DD-HH:MM:SS
# portion of the string indicates the file name will be formed from the current
# date and time with 'debug' in front and '.out' at the end. The 'debug' and
# '.out' portions of this string should not be translated (i.e. it should always
# start with 'debug' and end with '.out', regardless of the locale.).
# Translators: This is the description of command line option '--debug-file'
# which allows the user to override the default date-based name of the debugging
# output file.
# using the '--debug-file' command line option.
# Translators: This is the description of command line option '-t, --text-setup'
# that will initially display a list of questions in text form, that the user
# will need to answer, before Orca will startup. For this to happen properly,
# Orca will need to be run from a terminal window.
# Translators: This is the description of command line option '-s, --setup'
# that will place the user in Orca's GUI preferences dialog.
# Translators: This text is the description displayed when Orca is launched
# from the command line and the help text is displayed.
# Translators: Orca has a command to present the contents of the clipboard without
# the user having to switch to a clipboard manager. This message is spoken by Orca
# before speaking the text which is in the clipboard. The string substitution is
# for the clipboard contents.
# Translators: Orca normally speaks the text which was just deleted from a
# document via command. Depending on the circumstances, that might be a
# large string. Therefore, if the text which has just been deleted from a
# document matches the clipboard contents, Orca will indicate that fact
# instead of presenting the full string which was just deleted. This message
# is the full/verbose indication.
# is the brief indication.
# Translators: This message is the detailed message presented when the contents
# of the clipboard have changed and match the current selection.
# Translators: This message is the brief message presented when the contents
# Translators: Orca normally speaks the text which was just inserted into a
# large string. Therefore, if the text which has just been inserted into a
# instead of presenting the full string which was just inserted. This message
# Translators: In chat applications, it is often possible to see that a "buddy"
# is typing currently (e.g. via a keyboard icon or status text). Some users like
# to have this typing status announced by Orca; others find that announcement
# unpleasant. Therefore, it is a setting in Orca. This string to be translated
# is presented when the value of the setting is toggled.
# Translators: In chat applications, Orca automatically presents incoming
# messages in speech and braille. If a user is in multiple conversations or
# channels at the same time, it can be confusing to know what room or channel
# a given message came from just from hearing/reading it. This string to be
# translated is presented to the user to clarify where an incoming message
# came from. The name of the chat room is the string substitution.
# Translators: This message is presented to inform the user that a new chat
# conversation has been added to the existing conversations. The "tab" here
# refers to the tab which contains the label for a GtkNotebook page. The
# label on the tab is the string substitution.
# a given message came from just from hearing/reading it. For this reason, Orca
# has an option to present the name of the room first ("#a11y <joanie> hello!"
# instead of "<joanie> hello!"). This string to be translated is presented when
# the value of the setting is toggled.
# Translators: Orca has a command to review previous chat room messages in
# speech and braille. Some users prefer to have this message history combined
# (e.g. the last ten messages which came in, no matter what room they came
# from). Other users prefer to have specific room history (e.g. the last ten
# messages from #a11y). Therefore, this is a setting in Orca. This string to be
# translated is presented when the value of the setting is toggled.
# Translators: This phrase is spoken to inform the user that what is about to
# be said is content marked for deletion in a document, such as content which
# is inside an HTML 'del' element, or the removed code in a diff.
# Translators: This phrase is spoken to inform the user that they have reached
# the end of content marked for deletion in a document, such as content which
# be said is content marked for insertion in a document, such as content which
# is inside an HTML 'ins' element, or the added code in a diff.
# be said is content marked/highlighted in a document, such as content which
# is inside an HTML 'mark' element.
# the end of content marked/highlighted in a document, such as content which
# Translators: This phrase is spoken to inform the user that the content being
# presented is the end of an inline suggestion in a document. A "suggestion" is a
# proposed change. This change can include the insertion and/or deletion
# of content, and would typically be seen in a collaborative editor, such as
# in Google Docs.
# Translators: This is for navigating document content by moving to the start
# or end of a container. Examples of containers include tables, lists, and
# blockquotes. When moving to the end of a container, Orca attempts to place
# the caret at the content which follows that container. If this cannot be
# done (e.g. because the container is the last element on the page), Orca will
# instead present this message as an indication that the container was not
# exited as expected.
# blockquotes. If the user attempts to use this command in an object which is
# not a container, this message will be presented.
# Translators: This message is presented when the user selects all of the items
# in a container that supports selection, such as a GUI table or a list of icons.
# Translators: This message is presented when the user is in a date picker and
# navigates to the item which reflects the current date.
# Translators: This message is presented when the user is in a time picker/schedule
# and navigates to the item which reflects the current time.
# Translators: This message is presented when the user is in a map or flow chart and
# navigates to the item which reflects the current location.
# Translators: This message is presented when the user is in a table of contents or
# other set of pagination links and navigates to the item which reflects the current
# page.
# Translators: This message is presented when the user is in a wizard or other
# step-based interface and navigates to the item which reflects the current step.
# Translators: This message is presented when the user is in a list of generic or
# unspecified items and navigates to the link/object which reflects the current item.
# Translators: Orca has a command to report CPU and memory usage. This message
# retrieve this information.
# Translators: Orca has a command to report CPU and memory usage levels. This
# message presents the levels to the user.
# Translators: Orca has a command for advanced users and developers to clear
# the AT-SPI cache in case there is stale information due to an application
# bug. This message is presented when the user tried to clear the cache but
# an error occurred.
# bug. This message is presented when the user performs the command.
# Translators: this is a debug message for advanced users and developers. It
# describes a command to print detailed debugging information about the current
# state with respect to the accessible applications being used, such as the
# accessibility tree of the current window, a list of all the running accessible
# objects, etc. This message is presented to confirm to the user that the snapshot
# capture has begun.
# Translators: The "default" button in a dialog box is the button that gets
# activated when Enter is pressed anywhere within that dialog box. The string
# substitution is the name of the button (e.g. "OK" or "Close").
# activated when Enter is pressed anywhere within that dialog box. This
# message is presented when the default button was found but is insensitive /
# grayed out / cannot be activated. The string substitution is the name of
# the button (e.g. "OK" or "Close"). When translating "Grayed," please use
# the same word used for the string in object_properties.py.
# activated when Enter is pressed anywhere within that dialog box. Orca has
# a command to present the default button. This is the message Orca will
# present if it could not find the default button.
# Translators: This string is part of the presentation of an item that includes
# one or several consecutive subscripted characters. For example, 'X' followed
# by 'subscript 2' followed by 'subscript 3' should be presented to the user as
# 'X subscript 23'.
# one or several consecutive superscripted characters. For example, 'X' followed
# by 'superscript 2' followed by 'superscript 3' should be presented to the user
# as 'X superscript 23'.
# Translators: this message is presented when the user tries to perform a command
# specific to dialog boxes, such as presenting the default button, but is not in
# a dialog.
# Translators: when the user selects (highlights) or unselects text in a
# document, Orca will speak information about what they have selected or
# unselected. This message is presented when the user selects the entire
# document by pressing Ctrl+A.
# unselected. This message is presented when the entire document had been
# selected but the user presses a key (e.g. an arrow key) causing the
# selection to be completely removed.
# Translators: Orca allows you to dynamically define which row of a spreadsheet
# or table should be treated as containing column headers. This message is
# presented when the user sets the row to a particular row number.
# presented when the user unsets the row so it is no longer treated as if it
# contained column headers.
# Translators: Orca allows you to dynamically define which column of a
# spreadsheet or table should be treated as containing row headers. This
# message is presented when the user sets the column to a particular column
# message is presented when the user unsets the column so it is no longer
# treated as if it contained row headers.
# Translators: this is used to announce that the current input line in a
# spreadsheet is blank/empty.
# Translators: This is the size of a file in kilobytes
# Translators: This is the size of a file in megabytes
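A minimal sketch of how such size strings might be produced, assuming decimal units and a `%.2f` format; the helper name and thresholds are assumptions, not Orca's code:

```python
def format_file_size(size_bytes):
    # Hypothetical helper: picks the megabyte string at or above 1 MB,
    # otherwise the kilobyte string. The real Orca messages are
    # translatable format strings, so the units would be localized.
    if size_bytes >= 10**6:
        return "%.2f MB" % (size_bytes / 10**6)
    return "%.2f KB" % (size_bytes / 10**3)
```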
# Translators: This message is presented to the user after performing a file
# search to indicate there were no matches.
# sequence of words in a sequence of lines.  This message is presented to
# let the user know that he/she successfully appended the contents under
# flat review onto the existing contents of the clipboard.
# let the user know that he/she successfully copied the contents under flat
# review to the clipboard.
# let the user know that he/she attempted to use a flat review command when
# not using flat review.
# let the user know he/she just entered flat review.
# let the user know that flat review is being restricted to the current
# object of interest.
# let the user know that flat review is unrestricted,
# that is, the entire window can be explored.
# Translators: this means a particular cell in a spreadsheet has a formula
# (e.g., "=sum(a1:d1)")
# Translators: this message will be presented to indicate the focused object
# will cause a dialog to appear if activated.
# will cause a grid to appear if activated. A grid is an interactive table.
# will cause a listbox to appear if activated.
# will cause a menu to appear if activated.
# will cause a tree to appear if activated. A tree is a list with sub-levels
# which can be expanded or collapsed, similar to the list of folders in an
# email client.
# will cause a popup to appear if activated.
# Translators: The following string is spoken to let the user know that he/she
# is on a link within an image map. An image map is an image/graphic which has
# been divided into regions. Each region can be clicked on and has an associated
# link. Please see http://en.wikipedia.org/wiki/Imagemap for more information
# and examples.
# Translators: This is a spoken and/or brailled message letting the user know
# that the key combination (e.g., Ctrl+Alt+f) they just entered has already been
# bound to another command and is thus unavailable. The string substituted in is
# the name of the command which already has the binding.
# that Orca has recorded a new key combination (e.g. Alt+Ctrl+g) as a result of
# their input. The string substituted in is the new key combination.
# that Orca has assigned a new key combination (e.g. Alt+Ctrl+g) as a result of
# Orca is about to delete an existing key combination (e.g. Alt+Ctrl+g) as a
# result of their input.
# Orca has deleted an existing key combination (e.g. Alt+Ctrl+g) as a result of
# their input.
# Translators: This is a spoken and/or brailled message asking the user to press
# a new key combination (e.g., Alt+Ctrl+g) to create a new key binding for an
# Orca command.
# Translators: Orca has an "echo" setting which allows the user to configure
# what is spoken in response to a key press. Given a user who typed "Hello
# world.":
# - key echo: "H e l l o space w o r l d period"
# - word echo: "Hello" spoken when the space is pressed;
# - sentence echo: "Hello world." spoken when the period is pressed.
# A user can choose to have no echo, one type of echo, or multiple types of
# echo and can cycle through the various levels quickly via a command. The
# following string is a brief message which will be presented to the user who
# is cycling amongst the various echo options.
# echo and can cycle through the various levels quickly via a command.
# Translators: This phrase is spoken to inform the user of all of the MathML
# enclosure notations associated with a given mathematical expression. For
# instance, the expression x+y could be enclosed by a box, or enclosed by a
# circle. It could also be enclosed by a box and a circle and long division
# sign and have a line on the left and on the right and a vertical strike.
# (Though let's hope not.) Given that we do not know the enclosures, their
# order, or their combination, we'll present them as a list. The string
# substitution is for that list of enclosure types. For more information
# about the MathML 'menclose' element and its notation types, see:
# http://www.w3.org/TR/MathML3/chapter3.html#presm.menclose
# Translators: This phrase is spoken to describe one MathML enclosure notation
# associated with a mathematical expression. Because an expression, such as
# x+y, can have one or many enclosure notations (box, circle, long division,
# line on the left, vertical strike), we present them as a list of notations.
# For more information about the MathML 'menclose' element and its notation
# types, see: http://www.w3.org/TR/MathML3/chapter3.html#presm.menclose
# This particular string is for the "madruwb" notation type.
# order, or their combination, we'll present them as a list. This string
# will be inserted before the final item in the list if there is more than
# one enclosure notation. For more information about the MathML 'menclose'
# element and its notation types, see:
# http://www.w3.org/TR/MathML3/chapter3.html#presm.menclose
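The "before the final item" rule can be sketched as a simple list join; the function name and the English "and" below are placeholders for the translatable string, not Orca's implementation:

```python
def join_enclosures(notations, conjunction="and"):
    # Hypothetical sketch: insert the translated conjunction only before
    # the final item, and only when there is more than one notation.
    if len(notations) <= 1:
        return "".join(notations)
    return ", ".join(notations[:-1]) + " " + conjunction + " " + notations[-1]
```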
# be said is part of a mathematical fraction. For instance, given x+1/y+2, Orca
# would say "fraction start, x+1 over y+2, fraction end."
# be said is part of a mathematical fraction whose bar is not displayed. See
# https://en.wikipedia.org/wiki/Combination for an example. Note that the
# comma is inserted here to cause a very brief pause in the speech. Otherwise,
# in English, the resulting speech sounds like we have a fraction which lacks
# the start of the bar. If this is a non-issue for your language, the comma and
# the pause which results are not needed. You should be able to test this with
# "spd-say <your text here>" in a terminal on a machine where speech-dispatcher
# is installed.
# Translators: This word refers to the line separating the numerator from the
# denominator in a mathematical fraction. For instance, given x+1/y+2, Orca
# would say "fraction start, x+1 over y+2, fraction end."
# Translators: This phrase is spoken to inform the user that the last spoken
# phrase is the end of a mathematical fraction. For instance, given x+1/y+2,
# Orca would say "fraction start, x+1 over y+2, fraction end."
# be spoken is a square root. For instance, for √9 Orca would say "square root
# of 9, root end" (assuming the user settings indicate that root endings should
# be spoken). Note that the radicand, which follows the "of", is unknown and
# might not even be a simple string; it might be the square root of another
# expression such as a fraction.
# be spoken is a cube root. For instance, for the cube root of 9 Orca would
# say "cube root of 9, root end" (assuming the user settings indicate that root
# endings should be spoken). Note that the radicand, which follows the "of",
# is unknown and might not even be a simple string; it might be the cube root
# of another expression such as a fraction.
# be spoken is an nth root. https://en.wikipedia.org/wiki/Nth_root. For instance,
# for the fourth root of 9, Orca would say "fourth root of 9, root end" (assuming
# the user settings indicate that root endings should be spoken). Note that the
# index, which precedes this string, is unknown and might not even be a simple
# expression like "fourth"; the index might instead be a fraction.
# be said is part of a mathematical root (square root, cube root, nth root).
# It is primarily intended to be spoken when the index of the root is not a
# simple expression. For instance, for the fourth root of 9, simply speaking
# "fourth root of 9" may be sufficient for the user. But if the index is not
# 4, but instead the fraction x/4, beginning the phrase with "root start" can
# help the user better understand that x/4 is the index of the root.
# phrase is the end of a mathematical root (square root, cube root, nth root).
# For instance, for the cube root of 9, Orca would say "cube root of 9, root
# end" (assuming the user settings indicate that root endings should be spoken).
# be spoken is subscripted text in a mathematical expression. Note that the
# subscript might be simple text or may itself be a mathematical expression,
# and in this instance we have no additional context through which a more user-
# friendly word or phrase can reliably be chosen.
# be spoken is superscripted text in a mathematical expression. Note that the
# superscript might be simple text or may itself be a mathematical expression,
# be spoken is subscripted text which precedes the base in a mathematical
# expression. See, for instance, the MathML mmultiscripts element:
# http://www.w3.org/TR/MathML3/chapter3.html#presm.mmultiscripts
# https://developer.mozilla.org/en-US/docs/Web/MathML/Element/mmultiscripts
# be spoken is superscripted text which precedes the base in a mathematical
# be spoken is underscripted text in a mathematical expression. Note that the
# underscript might be simple text or may itself be a mathematical expression,
# friendly word or phrase can reliably be chosen. Examples of underscripts:
# http://www.w3.org/TR/MathML/chapter3.html#presm.munder
# https://reference.wolfram.com/language/ref/Underscript.html
# be spoken is overscripted text in a mathematical expression. Note that the
# overscript might be simple text or may itself be a mathematical expression,
# friendly word or phrase can reliably be chosen. Examples of overscripts:
# http://www.w3.org/TR/MathML/chapter3.html#presm.mover
# https://reference.wolfram.com/language/ref/Overscript.html
# phrase is the end of a mathematical table.
# phrase is the end of a mathematical table which is nested inside another
# mathematical table.
# Translators: Inaccessible means that the application cannot be read by Orca.
# This usually means the application is not friendly to the assistive technology
# infrastructure.
# Translators: This brief message indicates that indentation and
# justification will be spoken.
# Translators: This detailed message indicates that indentation and
# justification will not be spoken.
# Translators: Orca announces when a widget has an associated error, such as
# disallowed characters in an input, or a must-check box that is not checked
# (e.g. "I read and agree to the terms of service."). When this error message
# goes away as a consequence of the user fixing the error, Orca will present
# this string. When translating this string please use language similar to
# that used for `C_("error", "invalid entry")` in object_properties.py.
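The `C_("error", …)` reference above is a message context. In Python, contexts are available through gettext's `pgettext` (Python 3.8+); a minimal sketch with no catalog installed, where lookups fall back to the msgid:

```python
import gettext

# With NullTranslations (no compiled .mo catalog), pgettext simply returns
# the msgid. With a real catalog, "invalid entry" could translate
# differently under the "error" context than the same words used elsewhere.
catalog = gettext.NullTranslations()
message = catalog.pgettext("error", "invalid entry")
```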
# Translators: Orca has a "Learn Mode" that will allow the user to type any key
# on the keyboard and hear what the effects of that key would be.  The effects
# might be what Orca would do if it had a handler for the particular key
# combination, or they might just be to echo the name of the key if Orca doesn't
# have a handler. This message is what is presented on the braille display when
# entering Learn Mode.
# have a handler. This message is what is spoken to the user when entering Learn
# Mode.
# Translators: This message is presented when a user is navigating within a
# blockquote and then navigates out of it.
# Translators: In web content, authors can identify an element which contains
# detailed information about another element. For instance, for a password
# field, there may be a list of requirements (number of characters, number of
# special symbols, etc.). For an image, there may be an extended description
# before or after the image. Often there are visual clues connecting the
# detailed information to its related object. We need to convey this non-visually.
# This message is presented when a user just navigated out of a container holding
# detailed information about another object.
# See https://w3c.github.io/aria/#aria-details
# Translators: This message is presented when a user is navigating within
# an object and then navigates out of it. The word or phrase that follows
# "leaving" should be consistent with the translation provided for the
# corresponding term with context "role" found in object_properties.py
# form and then navigates out of it.
# panel and then navigates out of it. A grouping is a container of related
# widgets.
# a type of landmark and then navigates out of it. The word or phrase that
# follows "leaving" should be consistent with the translation provided for
# the corresponding term with context "role" found in object_properties.py
# list and then navigates out of it.
# panel and then navigates out of it. A panel is a generic container of
# objects, such as a group of related form fields.
# table and then navigates out of it.
# tooltip in a web application and then navigates out of it.
# a document container and then navigates out of it. The word or phrase
# that follows "leaving" should be consistent with the translation provided
# for the corresponding term with context "role" found in object_properties.py
# suggestion and then navigates out of it. A "suggestion" is a container with
# a proposed change. This change can include the insertion and/or deletion
# have a handler. This message is what is presented in speech and braille when
# exiting Learn Mode.
# Translators: this indicates that this piece of text is a hypertext link.
# Translators: this is an indication that a given link points to an object
# that is on the same page.
# that is at the same site (but not on the same page as the link).
# that is at a different site than that of the link.
# Translators: this refers to a link to a file, where the first item is the
# protocol (ftp, ftps, or file) and the second item the name of the file being
# linked to.
# Translators: this message conveys the protocol of a link, e.g. http, mailto.
# along with the visited state of that link.
# Translators: The following string instructs the user how to navigate amongst
# the list of commands presented in learn mode, as well as how to exit the list
# when finished.
# Translators: A live region is an area of a web page that is periodically
# updated, e.g. stock ticker. http://www.w3.org/TR/wai-aria/terms#def_liveregion
# The "politeness" level is an indication of when the user wishes to be notified
# about a change to live region content. Examples include: never ("off"), when
# idle ("polite"), and when there is a change ("assertive"). Orca has several
# features to facilitate accessing live regions. This message is presented to
# inform the user that Orca's live region's "politeness" level has changed to
# "off" for all of the live regions.
# inform the user that Orca's live region's "politeness" level for all live
# regions has been restored to their original values.
# inform the user of the "politeness" level for the current live region.
# inform the user that Orca's live region's "politeness" level has changed for
# the current live region.
# Orca has several features to facilitate accessing live regions. This message
# is presented in response to a command that toggles whether or not Orca pays
# attention to changes in live regions. Note that turning off monitoring of live
# events is NOT the same as turning the politeness level to "off". The user can
# opt to have no notifications presented (politeness level of "off") and still
# manually review recent updates to live regions via Orca commands for doing so
# -- as long as the monitoring of live regions is enabled.
# is presented to inform the user that a cached message is not available for the
# is presented to inform the user that Orca's live region features have been
# turned off.
# Translators: Orca has a command that allows the user to move the mouse pointer
# to the current object. This is a brief message which will be presented if for
# some reason Orca cannot identify/find the current location.
# to the current object. This is a detailed message which will be presented if
# for some reason Orca cannot identify/find the current location.
# Translators: This string is used to present the state of a locking key, such
# as Caps Lock. If Caps Lock is "off", then letters typed will appear in
# lowercase; if Caps Lock is "on", they will instead appear in uppercase. This
# string is also applied to Num Lock and potentially will be applied to similar
# keys in the future.
# Translators: This is to inform the user of the presence of the red squiggly
# line which indicates that a given word is not spelled correctly.
# Translators: Orca tries to provide more compelling output of the spell check
# dialog in some applications. The first thing it does is let the user know
# what the misspelled word is.
# dialog in some applications. The second thing it does is give the phrase
# containing the misspelled word in the document. This is known as the context.
# Translators: Orca has a number of commands that override the default
# behavior within an application. For instance, on a web page, "h" moves
# you to the next heading. What should happen when you press an "h" in
# an entry on a web page depends: If you want to resume reading content,
# "h" should move to the next heading; if you want to enter text, "h"
# should not move you to the next heading. Similarly, if you are
# at the bottom of an entry and press Down arrow, should you leave the
# entry? Again, it depends on if you want to resume reading content or
# if you are editing the text in the entry. Because Orca doesn't know
# what you want to do, it has two modes: In browse mode, Orca treats
# key presses as commands to read the content; in focus mode, Orca treats
# key presses as something that should be handled by the focused widget.
# This string is the message presented when Orca switches to browse mode.
# This string is the message presented when Orca switches to focus mode.
# This string is a tutorial message presented to the user who has just
# navigated to a widget in browse mode to inform them of the keystroke
# they must press to enable focus mode for the purposes of interacting
# with the widget. The substituted string is a human-consumable keybinding
# such as "Alt+Shift+A."
# Translators: (Please see the previous, detailed translator notes about
# Focus mode and Browse mode.) In order to minimize the amount of work Orca
# users need to do to switch between focus mode and browse mode, Orca attempts
# to automatically switch to the mode which is appropriate to the current
# web element. Sometimes, however, this automatic mode switching is not what
# the user wants. A good example being web apps which have their own keyboard
# navigation and interaction model. As a result, Orca has a command which
# enables setting a "sticky" focus mode which disables all automatic toggling.
# This string is the message presented when Orca switches to sticky focus mode.
# enables setting a "sticky" browse mode which disables all automatic toggling.
# This string is the message presented when Orca switches to sticky browse mode.
# both for presentation and navigation. This string is presented when the user
# switches to layout mode via an Orca command.
# toggles layout mode off via an Orca command and switches to the aforementioned
# object-based presentation.
# Translators: This message is presented to the user when the command to move
# the mouse pointer to a particular object is believed to have succeeded.
# Translators: Orca has a feature to speak the item under the pointer. This feature,
# known as mouse review, can be enabled and disabled via command. The following is
# the message which Orca will present when mouse review is toggled off via command.
# the message which Orca will present when mouse review is toggled on via command.
# could with native keyboard navigation. This is a message that will be
# presented to the user when an error (such as the operation timing out) kept us
# from getting these objects.
# Translators: the object navigator allows users to explore UI objects presented
# as a hierarchy. This message is spoken when the current node in the hierarchy
# has no children.
# has no next sibling.
# has no parent.
# has no previous sibling.
# as a hierarchy. This hierarchy can be simplified to aid with navigation. This
# message is spoken when the simplified view is enabled.
# message is spoken when the simplified view is disabled.
# Translators: This message describes a list item in a document. Nesting level
# is how "deep" the item is (e.g., a level of 2 represents a list item inside a
# list that's inside another list).
# Translators: Orca has a command that moves the mouse pointer to the current
# location on a web page. If moving the mouse pointer caused an item to appear
# such as a pop-up menu, we want to present that fact.
# Translators: Orca has a command which presents a menu with accessible actions
# that can be performed on the current object. This is the message that Orca
# presents when the object has no actions. The string substitution will be the
# name of the object if it has a name (e.g. "OK" or "Close") or its accessible,
# localized rolename if it does not.
# Translators: This is intended to be a short phrase to present the fact that
# no accessible component has keyboard focus.
# Translators: This message presents the fact that no accessible application
# has keyboard focus.
# Translators: This is for navigating document content by moving from blockquote
# to blockquote. This is a detailed message which will be presented to the user
# if no more blockquotes can be found.
# Translators: This is for navigating document content by moving from button
# to button. This is a detailed message which will be presented to the user
# if no more buttons can be found.
# Translators: This is for navigating document content by moving from check
# box to check box. This is a detailed message which will be presented to the
# user if no more check boxes can be found.
# Translators: This is for navigating document content by moving from 'large
# object' to 'large object'. A 'large object' is a logical chunk of text,
# such as a paragraph, a list, a table, etc. This is a detailed message which
# will be presented to the user if no more 'large objects' can be found.
# Translators: This is for navigating document content by moving amongst web
# elements which have an "onClick" action. This is a detailed message which
# will be presented to the user if no more clickable elements can be found.
# Translators: This is for navigating document content by moving from combo
# box to combo box. This is a detailed message which will be presented to the
# user if no more combo boxes can be found.
# Translators: This is for navigating document content by moving from entry
# to entry. This is a detailed message which will be presented to the user
# if no more entries can be found.
# Translators: This is for navigating document content by moving from form
# field to form field. This is a detailed message which will be presented to
# the user if no more form fields can be found.
# Translators: This is for navigating document content by moving from heading
# to heading. This is a detailed message which will be presented to the user
# if no more headings can be found.
# to heading at a particular level (i.e. only <h1> or only <h2>, etc.). This
# is a detailed message which will be presented to the user if no more headings
# at the desired level can be found.
# Translators: This is for navigating document content by moving from iframe
# to iframe. This is a detailed message which will be presented to the user
# if no more iframes can be found.
# Translators: This is for navigating document content by moving from image
# to image. This is a detailed message which will be presented to the user
# if no more images can be found.
# Translators: this is for navigating to the previous ARIA role landmark.
# This is an indication that one was not found.
# Translators: This is for navigating document content by moving from link to
# link (regardless of visited state). This is a detailed message which will be
# presented to the user if no more links can be found.
# Translators: This is for navigating document content by moving from bulleted/
# numbered list to bulleted/numbered list. This is a detailed message which will
# be presented to the user if no more lists can be found.
# numbered list item to bulleted/numbered list item. This is a detailed message
# which will be presented to the user if no more list items can be found.
# Translators: This is for navigating document content by moving from live
# region to live region. A live region is an area of a web page that is
# periodically updated, e.g. stock ticker. This is a detailed message which
# will be presented to the user if no more live regions can be found. For
# more info, see http://www.w3.org/TR/wai-aria/terms#def_liveregion
# Translators: This is for navigating document content by moving from paragraph
# to paragraph. This is a detailed message which will be presented to the user
# if no more paragraphs can be found.
# Translators: This is for navigating document content by moving from radio
# button to radio button. This is a detailed message which will be presented to
# the user if no more radio buttons can be found.
# Translators: This is for navigating document content by moving from separator
# to separator (e.g. <hr> tags). This is a detailed message which will be
# presented to the user if no more separators can be found.
# Translators: This is for navigating document content by moving from table
# to table. This is a detailed message which will be presented to the user if
# no more tables can be found.
# Translators: This is for navigating document content by moving from unvisited
# link to unvisited link. This is a detailed message which will be presented to
# the user if no more unvisited links can be found.
# Translators: This is for navigating document content by moving from visited
# link to visited link. This is a detailed message which will be presented to
# the user if no more visited links can be found.
# Translators: Orca has a dedicated command to speak the currently-selected
# text. This message is what Orca will present if the user performs this
# command when no text is selected.
# Translators: Orca has a dedicated command to speak detailed information
# about the currently-focused link. This message is what Orca will present
# if the user performs this command when not on a link.
# Translators: Orca has commands to navigate among objects by object type.
# When the user moves to a focusable object, such as a button, Orca grabs
# focus on the object. This should update the focus in the GUI, e.g. so the
# user could then use native application keyboard shortcuts (Tab, Space, Enter)
# to interact with the application. Unfortunately, not all apps support
# updating the focus in response to a grab. Therefore Orca will present this
# message to indicate that the focus was not updated.
# Translators: This message alerts the user to the fact that what will be
# presented next came from a notification.
# Translators: This is a brief message presented to the user when the bottom of
# the list of notifications is reached.
# Translators: This is a brief message presented to the user when the top of the
# list of notifications is reached.
# Translators: This message is presented to the user when the notifications list
# is empty.
# Translators: Orca has a setting through which users can control how a number is
# spoken. The options are digits ("1 2 3") and words ("one hundred and twenty
# three"). There is an associated Orca command for quickly toggling between the
# two options. This string to be translated is the brief message spoken when the
# user has enabled speaking numbers as digits.
# two options. This string to be translated is the verbose message spoken when
# the user has enabled speaking numbers as digits.
# user has enabled speaking numbers as words.
# the user has enabled speaking numbers as words.
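The digits option described above can be sketched as splitting the number into individually spoken characters; the helper below is illustrative only, since Orca's real pipeline hands this off to the speech server:

```python
def speak_as_digits(number):
    # Hypothetical sketch of the "speak numbers as digits" setting:
    # 123 is presented as "1 2 3" instead of a number-word phrase.
    return " ".join(str(number))
```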
# Translators: This brief message is presented to indicate the state of widgets
# (checkboxes, push buttons, toggle buttons) on a toolbar which are associated
# with text formatting (bold, italics, underlining, justification, etc.).
# Translators: This message is presented to the user when a web page or similar
# item has started loading.
# item has finished loading.
# item has finished loading. The string substitution is for the name of the
# object which has just finished loading (most likely the page's title).
# Translators: This message is presented to the user when the page of the
# current document changes, e.g. as a result of navigation or scrolling.
# The string substitution is the number of the current page.
# (landmarks, forms, links, tables, etc.). The following string precedes the
# presentation of the summary. The string substitution is a list of items, such
# as "10 headings, 1 form, 52 links".
# Translators: This message appears in a warning dialog when the user performs
# the command to get into Orca's preferences dialog when the preferences dialog
# is already open.
# Translators: This message is an indication of the position of the focused
# slide and the total number of slides in the presentation.
# Translators: This is a detailed message which will be presented as the user
# cycles amongst his/her saved profiles. A "profile" is a collection of settings
# which apply to a given task, such as a "Spanish" profile which would use
# Spanish text-to-speech and Spanish braille and be selected when reading
# Spanish content. The string representing the profile name is created by the user.
# Translators: This is an error message presented when the user attempts to
# cycle among his/her saved profiles, but no profiles can be found. A profile
# is a collection of settings which apply to a given task, such as a "Spanish"
# profile which would use Spanish text-to-speech and Spanish braille and be
# selected when reading Spanish content.
# Translators: this is an index value so that we can present value changes
# regarding a specific progress bar in environments where there are multiple
# progress bars (e.g. in the Firefox downloads dialog).
# Translators: This brief message will be presented as the user cycles
# through the different levels of spoken punctuation. The options are:
# All punctuation marks will be spoken, None will be spoken, Most will be
# spoken, or Some will be spoken.
# Translators: This detailed message will be presented as the user cycles
# Translators: This message is presented to indicate that a search has begun
# or is still taking place.
# Translators: This message is presented to indicate a search executed by the
# user has been completed.
# Translators: This message is presented to the user when Orca's preferences
# have been reloaded.
# text. This message is spoken by Orca before speaking the text which is
# selected. The string substitution is for the selected text.
# document matches the previously-selected contents, Orca will indicate that
# fact instead of presenting the full string which was just deleted.
# document is also already selected, it is likely that the insertion is
# due to having been restored (e.g. the user selected text, deleted it,
# and then pressed Ctrl+Z to undo that deletion). In this instance, Orca
# will indicate the restoration rather than presenting the full string
# which was just inserted.
# Translators: This message is presented to the user when text had been
# selected in a document and no longer is, e.g. as the result of navigating
# without holding down the shift key.
# Translators: Orca has a command which presents the size and position of the
# current object in pixels. This string refers to the brief/non-verbose output
# presented in response to the command. The string substitutions are all for
# quantities (in pixels).
# current object in pixels. This string refers to the full/verbose output
# Translators: This message is presented to the user when speech synthesis
# has been temporarily turned off.
# has been turned back on.
# resume working. This string is the message Orca presents when sleep mode is
# disabled by the user. The string substitution is the name of the application.
# For example "Sleep mode disabled for VirtualBox."
# enabled by the user. The string substitution is the name of the application.
# For example "Sleep mode enabled for VirtualBox."
# Translators: This string announces speech rate change.
# Translators: This string announces speech pitch change.
# Translators: This string announces speech volume change.
# Translators: Orca's verbosity levels control how much (or how little)
# Orca will speak when presenting objects as the user navigates within
# applications and reads content. The two levels are "brief" and "verbose".
# The following string is a message spoken to the user upon toggling
# this setting via command.
# Translators: We replace the ellipses (both manual and UTF-8) with a spoken
# string. The extra space you see at the beginning is because we need the
# speech synthesis engine to speak the new string well. For example, "Open..."
# turns into "Open dot dot dot".
# Translators: This message is presented when the user attempts to use a
# command specific to a spreadsheet, such as reading the input line, but is
# not in a spreadsheet.
# Translators: This message is presented to the user when Orca is launched.
# Translators: This message is presented to the user when Orca is quit.
# Translators: This message means speech synthesis is not installed or working.
# Translators: Orca has a command to present the contents of the status bar.
# This is a brief message which will be presented if Orca cannot find the
# status bar (e.g. because there isn't one).
# This is a detailed message which will be presented if Orca cannot find the
# Translators: the Orca "Find" dialog allows a user to search for text in a
# window and then move focus to that text.  For example, they may want to find
# the "OK" button.  This message lets them know a string they were searching
# for was not found.
# Translators: The structural navigation keys are designed to move around in
# a document or other container by object type. H moves you to the next heading,
# Shift H to the previous heading, T to the next table, and so on. This message
# is presented when the user disables the structural navigation feature of Orca.
# is presented when the user enables the structural navigation feature for
# navigating within the current document.
# navigating within the GUI of the current application.
# Translators: Orca has a command that allows the user to move to the next
# structural navigation object. In Orca, "structural navigation" refers to
# quickly moving through a container by jumping amongst objects of a given
# type, such as from link to link, or from heading to heading, or from form
# field to form field. This is a brief message which will be presented to the
# user if the desired structural navigation object could not be found.
# field to form field. In order for this functionality to work, Orca uses
# the AtspiCollection interface. If an object claims to not support that
# interface, Orca will present this message to the user to indicate that
# structural navigation is not available. This is the detailed version.
# structural navigation is not available. This is the brief version.
# Translators: This message describes the (row, col) position of a table cell.
# Translators: This message is presented to indicate the user is in the last
# cell of a table in a document.
# row of a table read; other times they want just the current cell presented.
# This string is a message presented to the user when this setting is toggled.
# Translators: a uniform table is one in which each table cell occupies one row
# and one column (i.e. a perfect grid). In contrast, a non-uniform table is one
# in which at least one table cell occupies more than one row and/or column.
# Translators: This is for navigating document content by moving from table cell
# to table cell. If the user gives a table navigation command but is not in a
# table, presents this message.
# Translators: Orca has commands for navigating within a table, e.g. to the
# next cell in a given direction. This string is the message that will be
# presented when those commands are disabled.
# presented when those commands are enabled.
# Translators: This is a message presented to users when the columns in a table
# have been reordered.
# Translators: This is a message presented to users when the rows in a table
# Translators: this is in reference to a column in a table. The substitution
# is the index (e.g. the first column is "column 1").
# Translators: this is in reference to a column in a table. If the user is in
# the first column of a table with five columns, the position is "column 1 of 5"
# to table cell. This is the message presented when the user attempts to move to
# the cell below the current cell and is already in the last row.
# the cell above the current cell and is already in the first row.
# Translators: this message is spoken to announce that a table column just became
# selected (e.g as a result of navigation via Shift + Arrows). The string substitution
# is the column label (e.g. "B").
# Translators: this message is spoken to announce that multiple table columns just
# string substitution is the label of the first column in the range. The second string
# substitution is the label of the last column in the range. An example message for
# Calc would be "Columns B through F selected".
# Calc would be "Columns B through F unselected".
# unselected (e.g as a result of navigation via Shift + Arrows). The string substitution
# Translators: this is in reference to a row in a table. The substitution is
# the index (e.g. the first row is "row 1").
# Translators: this is in reference to a row in a table. If the user is in
# the first row of a table with five rows, the position is "row 1 of 5"
# the left of the current cell and is already in the first column.
# the right of the current cell and is already in the last column.
# Translators: This message is presented to the user to confirm that he/she just
# deleted a table row.
# deleted the last row of a table.
# inserted a table row.
# inserted a table row at the end of the table. This typically happens when the
# user presses Tab from within the last cell of the table.
# Translators: this message is spoken to announce that a table row just became selected
# (e.g as a result of navigation via Shift + Arrows). The string substitution is the row
# label (e.g. "2").
# Translators: this message is spoken to announce that multiple table rows just
# string substitution is the label of the first row in the range. The second string
# substitution is the label of the last row in the range. An example message for
# Calc would be "Rows 2 through 10 selected".
# Calc would be "Rows 2 through 10 unselected".
# Translators: this message is spoken to announce that a table row just became
# substitution is the row label (e.g. "2").
# Translators: when the user selects (highlights) text in a document, Orca lets
# them know.
# Translators: when the user unselects (un-highlights) text in a document, Orca
# lets them know.
# Translators: Orca has a feature to speak the time when the user presses a
# shortcut key. This is one of the alternative formats that the user may wish
# it to be presented with.
# Translators: this is information about a unicode character reported to the
# user.  The value is the unicode number value of this character in hex.
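The hex code-point report described above can be sketched as follows; the function name is illustrative, not Orca's actual API.

```python
# Illustrative sketch (not Orca's real code) of reporting a character's
# Unicode value in hexadecimal, as the comment above describes.
def unicode_hex(char):
    # ord() gives the code point; format it as lowercase hex.
    return f"Unicode {ord(char):x}"
```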
# Translators: This string is presented when an application's undo command is
# used in a document resulting in a change to that document's contents.
# Translators: This string is presented when an application's redo command is
# Translators: This message presents the Orca version number.
# Translators: This is presented when the user has navigated to a line with only
# whitespace characters (space, tab, etc.) on it.
# Translators: when the user is attempting to locate a particular object and the
# top of a page or list is reached without that object being found, we "wrap" to
# the bottom and continue looking upwards. We need to inform the user when this
# is taking place.
# bottom of a page or list is reached without that object being found, we "wrap"
# to the top and continue looking downwards. We need to inform the user when
# this is taking place.
# Translators: normally layered panes and tables have items in them. Thus it is
# noteworthy when this is not the case. This message is presented to the user to
# indicate the current layered pane or table contains zero items.
# Translators: The cell here refers to a cell within a table within a
# document. We need to announce when the cell occupies or "spans" more
# than a single row and/or column.
# Translators: this represents the number of columns in a table.
# Translators: This message describes the number of characters in a string.
# Translators: This message describes the number of characters that were just
# selected in a body of text.
# unselected in a body of text.
# Translators: People can enter a string of text that is too wide to be
# fully displayed in a spreadsheet cell. This message will be spoken if
# such a cell is encountered.
# Translators: This message informs the user how many unfocused alert and
# dialog windows a newly (re)focused application has. It is added at the
# end of a braille message containing the app which just claimed focus.
# end of a spoken message containing the app which just claimed focus.
# Translators: This is the size of a file in bytes
# Translators: This message informs the user how many files were found as
# a result of a search.
# Translators: This message presents the number of forms in a document.
# Translators: This message presents the number of headings in a document.
# Translators: This message presents the number of items in a layered pane
# or table.
# Translators: This message presents the number of landmarks in a document.
# Translators: Orca has several commands that search for, and present a list
# of, objects based on one or more criteria. This is a message that will be
# presented to the user to indicate how many matching items were found.
# series of nested blockquotes, such as can be seen in deep email threads,
# and then navigates out of several levels at once.
# series of nested lists and then navigates out of several levels at once.
# Translators: This message describes a list in web content for which the
# size is unknown. Examples include unlimited scrolling news/article feeds
# on social media sites, and message lists on services such as gmail where
# you're currently viewing messages 1-100 out of some huge, unspecified
# number. Normally Orca announces "list with n items" when the count is
# known. This is the corresponding message for the unknown-count scenario.
# Translators: This message describes a bulleted or numbered list.
# Translators: This message describes the number of items of a bulleted or numbered list
# that is inside of another list.
# Translators: This message describes a news/article feed whose size is
# unknown, such as can be found on social media sites that have unlimited
# scrolling, adding and/or removing items as the user moves up or down.
# Normally Orca announces "feed with n articles" when the count is known.
# This is the corresponding message for the unknown-count scenario.
# Translators: This message describes the number of articles (news items,
# social media posts, etc.) in a feed.
# Translators: This message describes a description list.
# See https://developer.mozilla.org/en-US/docs/Web/HTML/Element/dl
# Note that the "term" here corresponds to the "dt" element
# Translators: A GtkNotebook (https://docs.gtk.org/gtk4/class.Notebook.html) is an
# example of a "tab list". This message describes the tab list to the user.
# A given term ("dt" element) can have 0 or more values ("dd" elements).
# This message presents the number of values a particular term has.
# Translators: this represents the number of rows in a mathematical table.
# See http://www.w3.org/TR/MathML3/chapter3.html#presm.mtable
# Translators: this represents the number of columns in a mathematical table.
# Translators: this represents the number of rows in a mathematical table
# which is nested inside another mathematical table.
# Translators: this represents the number of rows in a mathematical table
# Translators: This message is presented to inform the user of the number of
# messages in a list.
# Translators: This message is presented to inform the user of the value of
# a slider, progress bar, or other such component.
# Translators: This message announces the percentage of the document that
# has been read. The value is calculated by knowing the index of the current
# position divided by the total number of objects on the page.
# Translators: this represents a text attribute expressed in pixels, such as
# a margin, indentation, font size, etc.
# Translators: Orca will tell you how many characters are repeated on a line
# of text. For example: "22 space characters". The %d is the number and the
# %s is the spoken word for the character.
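Counting runs of repeated characters, as described above, can be sketched with `itertools.groupby`; the function name and threshold are assumptions, not Orca's actual implementation.

```python
# A minimal sketch of detecting repeated characters on a line of text,
# e.g. producing the count behind "22 space characters".
from itertools import groupby

def repeated_runs(line, minimum=4):
    """Yield (count, char) for each run of identical characters."""
    for char, group in groupby(line):
        count = len(list(group))
        if count >= minimum:
            yield count, char

runs = list(repeated_runs("a" + " " * 22 + "b"))
```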
# Translators: This message is presented to indicate the number of selected
# objects (e.g. icons) and the total number of those objects.
# Translators: This message is presented when the user is in a list of
# shortcuts associated with Orca commands which are not specific to the
# current application. It appears as the title of the dialog containing
# shortcuts associated with Orca commands specific to the current
# application. It appears as the title of the dialog containing the list.
# space characters in a string.
# tab characters in a string.
# Translators: This message presents the number of tables in a document.
# Translators: This message describes a table for which both the
# number of rows and the number of columns are unknown. Normally
# Orca announces the table dimensions (e.g. "table with 100 rows
# 15 columns"). When both counts are unknown, it presents this.
# Translators: This message describes a table for which the number of
# rows is unknown, but the number of columns is known. This might occur
# in a vertically infinitely scrollable table or grid on the web.
# columns is unknown, but the number of rows is known. This might occur
# in a horizontally infinitely scrollable table or grid on the web.
# Translators: this represents the number of rows in a table.
# Translators: This message informs the user how long ago something took
# place in terms of seconds.
# place in terms of minutes.
# place in terms of hours.
# place in terms of days.
# message presents the amount of memory used and total amount in GB.
# message presents the amount of memory used and total amount in MB.
# Translators: This message presents the number of unvisited links in a
# document.
# Translators: This message presents the number of visited links in a
# TODO - JD: The result is not in sync with the current output module. Should it be?
# TODO - JD: The only caller is the preferences dialog. And the useful functionality is in
# the methods to get (and set) the output module. So why exactly do we need this?
# TODO - JD: This is due to the requirement on script utilities.
# This adjustment should only be made in cases where there is only presentable text.
# In content where embedded objects are present, "link" is presented as the role of any
# embedded link children.
# If the user has set their punctuation level to All, then the synthesizer will
# do the work for us. If the user has set their punctuation level to None, then
# they really don't want punctuation and we mustn't override that.
# If we're on whitespace or punctuation, we cannot be on an error.
# TODO - JD: We're using the message here to preserve existing behavior.
# Utilities for obtaining information about containers supporting selection
# Translators: this attribute specifies the background color of the text.
# The value is an RGB value of the format "u,u,u".
# http://developer.gnome.org/atk/stable/AtkText.html#AtkTextAttribute
# Translators: this attribute specifies whether to make the background
# color for each character the height of the highest font used on the
# current line, or the height of the font used for the current character.
# It will be a "true" or "false" value.
# Translators: this attribute specifies whether a GdkBitmap is set for
# stippling the background color. It will be a "true" or "false" value.
# Translators: this attribute specifies the direction of the text.
# Values are "none", "ltr" or "rtl".
# Translators: this attribute specifies whether the text is editable.
# Translators: this attribute specifies the font family name of the text.
# Translators: this attribute specifies the foreground color of the text.
# stippling the foreground color. It will be a "true" or "false" value.
# Translators: this attribute specifies the effect applied to the font
# used by the text.
# http://www.w3.org/TR/2002/WD-css3-fonts-20020802/#font-effect
# http://wiki.services.openoffice.org/wiki/Accessibility/TextAttributes
# Translators: this attribute specifies the indentation of the text
# (in pixels).
# Translators: this attribute specifies there is something "wrong" with
# the text, such as it being a misspelled word. See:
# https://developer.mozilla.org/en/Accessibility/AT-APIs/Gecko/TextAttrs
# Translators: this attribute specifies whether the text is invisible.
# Translators: this attribute specifies the justification of the text.
# Values are "left", "right", "center" or "fill".
# Translators: this attribute specifies the language that the text is
# written in.
# Translators: this attribute specifies the pixel width of the left margin.
# Translators: this attribute specifies the height of the line of text.
# http://www.w3.org/TR/1998/REC-CSS2-19980512/visudet.html#propdef-line-height
# Translators: this attribute refers to the named style which is associated
# with the entire paragraph and which controls the default formatting
# (font, text size, alignment, etc.) of that paragraph. Examples of
# paragraph styles include "Heading 1", "Heading 2", "Caption", "Footnote",
# "Text Body", "Title", and "Subtitle".
# Translators: this attribute specifies the pixels of blank space to
# leave above each newline-terminated line.
# leave below each newline-terminated line.
# leave between wrapped lines inside the same newline-terminated line
# (paragraph).
# Translators: this attribute specifies the pixel width of the right margin.
# Translators: this attribute specifies the number of pixels that the
# text characters are risen above the baseline.
# Translators: this attribute specifies the scale of the characters. The
# value is a string representation of a double.
# Translators: this attribute specifies the size of the text.
# Translators: this attribute specifies the stretch of the text, if set.
# Values are "ultra_condensed", "extra_condensed", "condensed",
# "semi_condensed", "normal", "semi_expanded", "expanded",
# "extra_expanded" or "ultra_expanded".
# Translators: this attribute specifies whether the text is struck through
# (in other words, whether there is a line drawn through it). Values are
# "true" or "false".
# Translators: this attribute specifies the slant style of the text,
# if set. Values are "normal", "oblique" or "italic".
# Translators: this attribute specifies the decoration of the text.
# http://www.w3.org/TR/1998/REC-CSS2-19980512/text.html#propdef-text-decoration
# Translators: this attribute specifies the angle at which the text is
# displayed (i.e. rotated from the norm) and is represented in degrees
# of rotation.
# http://www.w3.org/TR/2003/CR-css3-text-20030514/#glyph-orientation-horizontal
# Translators: this attribute specifies the shadow effects applied to the text.
# http://www.w3.org/TR/1998/REC-CSS2-19980512/text.html#propdef-text-shadow
# Translators: this attribute specifies whether the text is underlined.
# Values are "none", "single", "double" or "low".
# Translators: this attribute specifies the capitalization variant of
# the text, if set. Values are "normal" or "small_caps".
# Translators: this attribute specifies what vertical alignment property
# has been applied to the text.
# http://www.w3.org/TR/1998/REC-CSS2-19980512/visudet.html#propdef-vertical-align
# Translators: this attribute specifies the weight of the text.
# http://www.w3.org/TR/1998/REC-CSS2-19980512/fonts.html#propdef-font-weight
# Translators: this attribute specifies the wrap mode of the text, if any.
# Values are "none", "char" or "word".
# Translators: this attribute specifies the way the text is written.
# Values are "lr-tb", "rl-tb", "tb-rl", "tb-lr", "bt-rl", "bt-lr", "lr",
# "rl" and "tb".
# http://www.w3.org/TR/2001/WD-css3-text-20010517/#PrimaryTextAdvanceDirection
# The following are the known values of some of these text attributes.
# These values were found in the Atk documentation at:
# No doubt there will be more, and as they are found, they can be added
# to this table so they can be translated.
# Translators: this is one of the text attribute values for the following
# text attributes: "invisible", "editable", bg-full-height", "strikethrough",
# "bg-stipple" and "fg-stipple".
# text attributes: "font-effect", "underline", "text-shadow", "wrap mode"
# and "direction".
# text attributes: "font-effect".
# text attributes: "text-decoration".
# text attributes: "text-shadow".
# text attributes: "underline".
# text attributes: "wrap mode".
# text attributes: "wrap mode." It corresponds to GTK_WRAP_WORD_CHAR,
# defined in the Gtk documentation as "Wrap text, breaking lines in
# between words, or if that is not enough, also between graphemes."
# http://library.gnome.org/devel/gtk/stable/GtkTextTag.html#GtkWrapMode
# text attributes: "direction".
# text attributes: "justification".
# text attributes: "justification". In Gecko, when no justification has
# been explicitly set, they report a justification of "start".
# text attributes: "stretch".
# text attributes: "stretch" and "variant".
# text attributes: "variant".
# text attributes: "style".
# text attributes: "paragraph-style".
# text attributes: "vertical-align".
# text attributes: "vertical-align" and "writing-mode".
# text attributes: "writing-mode".
# text attributes: "strikethrough." It refers to the line style.
# text attributes: "invalid". It is an indication that the text is not
# spelled correctly. See:
# Translators: This is the text-spelling attribute. See:
# Based on the feature created by:
# Author: Jose Vilmar <vilmar@informal.com.br>
# Copyright 2010 Informal Informatica LTDA.
# The list is arranged with the most recent message being at the end of
# the list. The current index is relative to, and used directly, with the
# python list, i.e. self._notifications[-3] would return the third-to-last
# notification message.
# This is the first (oldest) message in the list.
# This is the last (newest) message in the list.
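The negative-index navigation described above can be sketched as follows; the class and method names are illustrative, not Orca's actual API.

```python
# Minimal sketch of a notification list navigated with a negative index,
# so self._notifications[-1] is the newest and [-len] the oldest message.
class NotificationList:
    def __init__(self):
        self._notifications = []  # oldest first, newest last
        self._index = -1          # relative index into the Python list

    def add(self, message):
        self._notifications.append(message)
        self._index = -1          # reset to the newest message

    def current(self):
        return self._notifications[self._index]

    def previous(self):
        # Move toward the oldest message, clamping at the start of the list.
        if -self._index < len(self._notifications):
            self._index -= 1
        return self._notifications[self._index]

notifications = NotificationList()
for msg in ["first", "second", "third"]:
    notifications.add(msg)
```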
# If we're in a matching object, return the next/previous one in the list.
# If an author put an ARIA heading inside a native heading (or vice versa), candidate
# could be the inner heading. If we treat the outer heading as the previous heading
# and then set the caret context to the first position inside the outer heading, i.e.
# the inner heading, we'll get stuck. Thanks authors.
# If we're not in a matching object, find the next/previous one based on the path.
########################
# Blockquotes          #
# Buttons              #
# Check boxes          #
# Large Objects        #
# Combo Boxes          #
# Entries              #
# Form Fields          #
# Headings             #
# Iframes              #
# Images               #
# Landmarks            #
# Lists                #
# The reasons we present the item (or first child) rather than the full list are twofold:
# 1. Given a huge list, navigating to the item and presenting the ancestor list is more
# 2. When we calculate what's on the same line, it should be based on the item's bounding
# TODO - JD: Handle the second issue in the utilities which calculate the line.
# List Items           #
# Live Regions         #
# Paragraphs           #
# We're choosing 3 characters as the minimum because some paragraphs contain a single
# image or link and a text of length 2: An embedded object character and a space.
# We want to skip these.
# Radio Buttons        #
# Separators           #
# Tables               #
# The reasons we present the cell rather than the full table are twofold:
# 1. Given a huge table, navigating to the cell and presenting the ancestor table is more
# 2. When we calculate what's on the same line, it should be based on the cell's bounding
# Unvisited Links      #
# Visited Links        #
# Links                #
# Clickables           #
# Containers           #
# Unlike going to the start of the container, when we move to the next edge
# we pass beyond it on purpose. This makes us consistent with NVDA.
# Dictionaries that store the default values
# The keys and values are defined at orca.settings
# Dictionaries that store the key:value pairs whose values differ
# between the current profile and the default ones
# Dictionaries that store the current settings.
# They are the result of overwriting the default values with
# the ones from the currently active profile
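The layering described above can be sketched as a dictionary merge; the keys and values here are hypothetical, not Orca's real settings.

```python
# Hedged sketch: the current settings are the defaults overwritten by the
# active profile's overrides (only the keys that differ are stored there).
defaults = {"speechRate": 50, "verbosity": "verbose", "enableBraille": True}
profile_overrides = {"speechRate": 70}

# Later dicts win in a merge, so profile values replace the defaults.
current = {**defaults, **profile_overrides}
```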
# For handling the currently-"classic" application settings
# Load the backend and the default values
# If this is the first time we launch Orca, there is no user settings
# yet, so we need to create the user config directories and store the
# initial default settings
# Set the active profile and load its stored settings
# Set up the user's preferences directory
# ($XDG_DATA_HOME/orca by default).
# Set up $XDG_DATA_HOME/orca/orca-scripts as a Python package
# Set up $XDG_DATA_HOME/orca/orca-customizations.py empty file and
# define orca_dir as a Python package.
# Treat this failure as a "success" so that we don't stomp on the existing file.
# Assign current profile
# Elements that need to stay updated in main configuration.
# TODO - JD: See about moving this logic, along with any callers, into KeyBindings.
# Establishing and maintaining grabs should JustWork(tm) as part of the overall
# keybinding/command process.
# Utilities for obtaining role-related information.
# Note: We are deliberately leaving out listbox options because they can be complex,
# both in ARIA and in GTK.
# TODO - JD: Remove this check when dependencies are bumped to v2.56.
# TODO - JD: The is_switch() call can be removed as part of the removal above.
# rich text editing predicate recommended
# predicate recommended to check whether it is editable
# Note: We are deliberately leaving out sections because those are often DIVs
# which are generic and often not large. The primary consumer of this function
# is structural navigation which uses it for the jump-to-edge functionality.
# The splitter has the opposite orientation of the split pane.
# Some toolkits don't localize the symbolic icon names, so it's worth a try.
# pylint:disable=too-many-instance-attributes
# pylint:disable=too-many-public-methods
# pylint:disable=assignment-from-none
# pylint:enable=assignment-from-none
# Base Script doesn't have all methods that Bookmarks expects from default.Script
# pylint: disable=assignment-from-none
# Utilities for obtaining objects via the collection interface.
# Too many arguments and too many local variables.
# This function wraps Atspi.MatchRule.new which has all the arguments.
# pylint: disable=R0913,R0914
# pylint: enable=R0913,R0914
# 0 means no limit on the number of results
# The final argument, traverse, is not supported but is expected.
# 1 means limit the number of results to 1
# Related to hacks which will soon die.
# Event handlers for input devices being plugged in/unplugged.
# TODO - JD: We currently handle CapsLock one way and Insert a different way.
# Ideally that will stop being the case at some point.
# Because we will synthesize another press and release, wait until the real release.
# Translators: this command will move the mouse pointer to the current item,
# typically a widget, without clicking on it.
# Translators: Orca has a command to synthesize mouse events. This string
# describes the Orca command to generate a left mouse button click on the
# current item, typically a widget.
# describes the Orca command to generate a right mouse button click on the
# Translators: the Orca "SayAll" command allows the user to press a key and have
# the entire document in a window be automatically spoken to the user. If the
# user presses any key during a SayAll operation, the speech will be interrupted
# and the cursor will be positioned at the point where the speech was interrupted.
# Translators: the 'flat review' feature of Orca allows the user to explore the
# text in a window in a 2D fashion. That is, Orca treats all the text from all
# objects in a window (e.g., buttons, labels, etc.) as a sequence of words in a
# sequence of lines. The flat review feature allows the user to explore this text
# by the {previous,next} {line,word,character}. This string is the name of a command
# which causes Orca to speak the entire contents of the window using flat review.
# Translators: the "Where Am I" feature of Orca allows a user to press a key and
# then have information about their current context spoken and brailled to them.
# For example, the information may include the name of the current pushbutton
# with focus as well as its mnemonic.
# Translators: This is the description of a dedicated command to speak the
# current selection / highlighted object(s). For instance, in a text object,
# "selection" refers to the selected/highlighted text. In a spreadsheet, it
# refers to the selected/highlighted cells. In an file manager, it refers to
# the selected/highlighted icons. Etc.
# Translators: This is the description of a dedicated command to speak details
# about a link, such as the uri and type of link.
# Translators: This command will cause the dialog's default button name to be
# spoken and displayed in braille. The "default" button in a dialog box is the
# button that gets activated when Enter is pressed anywhere within that dialog
# box.
# Translators: This command will cause the window's status bar contents to be
# spoken and displayed in braille.
# Translators: This command will cause the window's title to be spoken and
# displayed in braille.
# window and then move focus to that text. For example, they may want to find
# the "OK" button.
# Translators: Orca has a command which presents a list with accessible actions
# that can be performed on the current object. This is the name of that command.
# the "OK" button. This string is used for finding the next occurrence of a
# the "OK" button. This string is used for finding the previous occurrence of a
# This switch allows the user to restrict the flat review function to a specific object.
# The home position is the beginning of the content in the window.
# The home position is the last bit of information in the window.
# This particular command will cause Orca to spell the current line character
# by character.
# by character phonetically, saying "Alpha" for "a", "Bravo" for "b" and so on.
# Previous will go backwards in the window until you reach the top (i.e., it
# will wrap across lines if necessary).
# This command will speak the current word or item.
# This particular command will cause Orca to spell the current word or item
# character by character.
# character by character phonetically, saying "Alpha" for "a", "Bravo" for "b"
# Next will go forwards in the window until you reach the end (i.e., it
# Above in this case means geographically above, as if you drew a vertical
# line upward on the screen.
# With respect to this command, the flat review object is typically something
# like a pushbutton, a label, or some other GUI widget. The 'speaks' means it
# will speak the text associated with the object.
# Below in this case means geographically below, as if you drew a vertical
# line downward on the screen.
# This command will speak the current character
# This particular command will cause Orca to present the character phonetically,
# saying "Alpha" for "a", "Bravo" for "b" and so on.
# This particular command will cause Orca to present the character's unicode
# value.
# Next will go forwards in the window until you reach the end (i.e., it
# This command will move to and present the end of the line.
# The bottom left is the bottom left of the window currently being reviewed.
# This command lets the user copy the contents currently being reviewed to the
# clipboard.
# This command lets the user append the contents currently being reviewed to
# the existing contents of the clipboard.
# and copy the text. This string describes that command.
# Translators: when users are navigating a table, they sometimes want the
# entire row of a table read; other times they just want the current cell
# to be presented to them.
# Translators: the attributes being presented are the text attributes, such as
# bold, italic, font name, font size, etc.
# Translators: a refreshable braille display is an external hardware device that
# presents braille characters to the user. There are a limited number of cells
# on the display (typically 40 cells).  Orca provides the feature to build up a
# longer logical line and allow the user to press buttons on the braille display
# so they can pan left and right over this line.
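The pan-left/pan-right behavior described above amounts to sliding a fixed-width window over the logical line. A minimal sketch of the idea (the 40-cell default and the function name are assumptions, not Orca's actual API):

```python
def pan_window(logical_line, offset, cells=40):
    # Return the slice of the logical line visible on a display with the
    # given number of cells, clamping so we never pan past either end.
    offset = max(0, min(offset, max(0, len(logical_line) - cells)))
    return logical_line[offset:offset + cells], offset

line = "x" * 100
visible, offset = pan_window(line, 0)            # leftmost 40 cells
visible, offset = pan_window(line, offset + 40)  # pan right one display width
visible, offset = pan_window(line, 999)          # clamps to the last window
```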
# Flat review is modal, and the user can be exploring the window without
# changing which object in the window has focus. The feature used here
# will return the flat review to the object with focus.
# Translators: braille can be displayed in many ways. Contracted braille
# provides a more efficient means to represent text, especially long
# documents. The feature used here is an option to toggle between contracted
# and uncontracted.
# Translators: hardware braille displays often have buttons near each braille
# cell. These are called cursor routing keys and are a way for a user to tell
# the machine they are interested in a particular character on the display.
# Translators: this is used to indicate the start point of a text selection.
# Translators: this is used to indicate the end point of a text selection.
# on the keyboard and hear what the effects of that key would be. The effects
# have a handler.
# Translators: the speech rate is how fast the speech synthesis engine will
# generate speech.
# Translators: the speech pitch is how high or low in pitch/frequency the
# speech synthesis engine will generate speech.
# Translators: the speech volume is how high or low in gain/volume the
# Translators: Orca allows the user to turn speech synthesis on or off.
# resume working. This string is the command which toggles sleep mode on/off
# for the app being used at the time the command is given.
# applications and reads content. The levels can be toggled via command.
# This string describes that command.
# Translators: this string is associated with the keyboard shortcut to quit
# Orca.
# Translators: the preferences configuration dialog is the dialog that allows
# users to set their preferences for Orca.
# users to set their preferences for a specific application within Orca.
# Translators: Orca allows the user to enable/disable speaking of indentation
# and justification.
# three"). This string to be translated refers to an Orca command for quickly
# toggling between the two options.
# Translators: Orca has a command to present the current clipboard contents to
# the user without them having to switch to a clipboard manager application. This
# string is the description of that command.
# Translators: Orca allows users to cycle through punctuation levels. None,
# some, most, or all punctuation will be spoken.
# Translators: Orca allows users to cycle through the speech synthesizers
# available on their system, such as espeak, voxin, mbrola, etc. This string
# is the description of the command.
# Translators: Orca has a feature whereby users can set up different "profiles,"
# which are collections of settings which apply to a given task, such as a
# "Spanish" profile which would use Spanish text-to-speech and Spanish braille
# and be selected when reading Spanish content. This string to be translated refers
# to an Orca command which makes it possible for users to quickly cycle amongst
# their saved profiles without having to get into a GUI.
# Translators: Orca uses Speech Dispatcher to present content to users via text-
# to-speech. Speech Dispatcher has a feature to control how capital letters are
# presented: Do nothing at all, say the word 'capital' prior to presenting a
# capital letter, or play a tone which Speech Dispatcher refers to as a sound
# 'icon'. This string to be translated refers to an Orca command which makes it
# - word echo: "Hello" spoken when the space is pressed; "world" spoken when
# - sentence echo: "Hello world" spoken when the period is pressed.
# echo. The following string refers to a command that allows the user to quickly
# choose which type of echo is being used.
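The echo granularities above can be modeled in a few lines. A toy sketch (the function name and the triggering characters are assumptions):

```python
def echo(keystrokes, mode):
    # Return the strings that would be spoken as each keystroke arrives,
    # under either "word" or "sentence" echo.
    buf, spoken = "", []
    for ch in keystrokes:
        if mode == "word" and ch in " .":
            words = buf.split()
            if words:
                spoken.append(words[-1])
        elif mode == "sentence" and ch == ".":
            spoken.append(buf)
        buf += ch
    return spoken

echo("Hello world.", "word")      # "Hello" on the space, "world" on the period
echo("Hello world.", "sentence")  # "Hello world" on the period
```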
# Translators: this is a debug message that Orca users will not normally see. It
# describes a debug routine that allows the user to adjust the level of debug
# information that Orca generates at run time.
# describes a debug command that allows the user to clear the AT-SPI cache for
# the currently-running application. This is sometimes needed because applications
# fail to notify AT-SPI when something changes resulting in AT-SPI having stale
# information which impacts Orca's logic and/or presentation to users.
# state such as the current Orca settings and a list of all the running accessible
# applications, etc.
# Translators: this command announces information regarding the relationship of
# the given bookmark to the current position. Note that in this context, the
# "bookmark" is storing the location of an accessible object, typically on a web
# Translators: this event handler cycles through the registered bookmarks and
# takes the user to the previous bookmark location. Note that in this context,
# the "bookmark" is storing the location of an accessible object, typically on
# a web page.
# Translators: this command moves the user to the location stored at the bookmark.
# Note that in this context, the "bookmark" is storing the location of an
# accessible object, typically on a web page.
# takes the user to the next bookmark location. Note that in this context, the
# Translators: this event handler binds an in-page accessible object location to
# the given input key command.
# Translators: this event handler saves all bookmarks for the current application
# to disk.
# Translators: Orca allows the item under the pointer to be spoken. This toggles
# the feature without the need to get into a GUI.
# Translators: Orca has a command to present the battery status (e.g. level, whether
# or not it is plugged in, etc.). This string is the name of that command.
# Translators: Orca has a command to present the CPU and memory usage as percents.
# This string is the name of that command.
# Translators: Orca has a command to present the current time in speech and in
# braille.
# Translators: Orca has a command to present the current date in speech and in
# Translators: Orca has a command to present the pixel size and location of
# the current object. This string is how this command is described in the list
# of keyboard shortcuts.
# Translators: This command toggles all (other) Orca commands so that the
# associated keystroke can be consumed by the native application. For example,
# would cause the Orca command to be used. This would mean Alt+Down could not
# be used in editors to move the current line of text down. By temporarily
# disabling Orca commands, Alt+Down would be ignored by Orca and work as
# expected in the editor.
# speech and braille. This string to be translated is associated with the
# keyboard commands used to review those previous messages.
# is associated with the command to toggle typing status presentation on or off.
# translated is associated with the command to toggle specific room history on
# or off.
# instead of "<joanie> hello!"). This string to be translated is associated with
# the command to toggle room name presentation on or off.
# Translators: this is a command for a button on a refreshable braille display
# (an external hardware device used by people who are blind). When pressing the
# button, the display scrolls to the left.
# button, the display scrolls to the right.
# button, the display scrolls up.
# button, the display scrolls down.
# button, it instructs the braille display to freeze.
# button, the display scrolls to the top left of the window.
# button, the display scrolls to the bottom left of the window.
# button, the display scrolls to the position containing the cursor.
# button, the display toggles between six-dot braille and eight-dot braille.
# (an external hardware device used by people who are blind). This command
# represents a whole set of buttons known as cursor routing keys and are a way
# for a user to move the application's caret to the position indicated on the
# display.
# represents the start of a selection operation. It is called "Cut Begin" to map
# to what BrlTTY users are used to: in character cell mode operation on virtual
# consoles, the act of copying text is erroneously called a "cut" operation.
# represents marking the endpoint of a selection. It is called "Cut Line" to map
# Translators: this is a command which causes Orca to present the last received
# Translators: this is a command which causes Orca to present a list of all the
# notification messages received.
# Translators: this is a command which causes Orca to present the previous
# Translators: this is a command which causes Orca to present the next
# Translators: this is a command related to navigating within a document.
# Translators: this is for causing a collapsed combo box which was reached
# by Orca's caret navigation to be expanded.
# features to facilitate accessing live regions. This string refers to a command
# to cycle through the different "politeness" levels.
# to turn off live regions by default.
# This string refers to a command for reviewing up to nine stored previous live
# messages.
# This string refers to an Orca command which allows the user to toggle whether
# or not Orca pays attention to changes in live regions. Note that turning off
# monitoring of live events is NOT the same as turning the politeness level
# to "off". The user can opt to have no notifications presented (politeness
# level of "off") and still manually review recent updates to live regions via
# Orca commands for doing so -- as long as the monitoring of live regions is
# enabled.
# or table should be treated as containing column headers. This string refers to
# the command to set the row.
# the command to unset the row so it is no longer treated as if it contained
# column headers.
# spreadsheet or table should be treated as containing row headers. This
# string refers to the command to set the column.
# string refers to the command to unset the column so it is no longer treated
# as if it contained row headers.
# Translators: This string refers to an Orca command. The "input line" refers
# to the place where one enters formulas for a spreadsheet.
# Translators: the structural navigation keys are designed to move around in a
# document or container by object type. Thus H moves you to the next heading, Shift+H
# to the previous heading, T to the next table, and so on. This feature needs to be
# toggle-able so that it does not interfere with normal writing functions. In addition,
# the navigation can be restricted to the current document or to non-document/GUI
# objects. This string is the description of the command which switches among the
# available modes: off, document, and GUI.
# next cell in a given direction. This string is the description of the command
# which enables/disables this support.
# Translators: this is for navigating among blockquotes in a document.
# Translators: this is for navigating among buttons in a document.
# Translators: this is for navigating among check boxes in a document.
# Translators: this is for navigating among clickable objects in a document.
# A "clickable" is a web element with an "onClick" handler.
# Translators: this is for navigating among combo boxes in a document.
# Translators: This string describes a document navigation command which moves
# to the start of the current container. Examples of containers include tables,
# lists, and blockquotes.
# to the end of the current container. Examples of containers include tables,
# Translators: this is for navigating among entries in a document.
# Translators: this is for navigating among form fields in a document.
# Translators: this is for navigating among headings (e.g. <h1>) in a document.
# <h1> is a heading at level 1, <h2> is a heading at level 2, etc.
# Translators: this is for navigating among iframes in a document.
# Translators: this is for navigating among images in a document.
# Translators: this is for navigating among ARIA landmarks in a document. ARIA
# role landmarks are the W3C-defined HTML 'role' attribute used to identify
# important parts of a webpage, like banners, main content, search, etc.
# Translators: this is for navigating among large objects in a document.
# Translators: this is for navigating among links in a document.
# Translators: this is for navigating among lists in a document.
# Translators: this is for navigating among list items in a document.
# Translators: this is for navigating among live regions in a document. A live
# region is an area of a web page that is periodically updated, e.g. a stock
# ticker. http://www.w3.org/TR/wai-aria/terms#def_liveregion
# as a hierarchy.
# as a hierarchy. Users are also able to synthesize a click on the objects.
# as a hierarchy. This hierarchy can be simplified, and the simplification can be
# toggled on and off.
# Translators: this is for navigating among paragraphs in a document.
# Translators: this is for navigating among radio buttons in a document.
# Translators: this is for navigating among separators (e.g. <hr>) in a
# Translators: this is for navigating among tables in a document.
# Translators: this is for navigating among table cells in a document.
# both for presentation and navigation. This string is associated with the Orca
# command to manually toggle layout mode on/off.
# This string is associated with the Orca command to manually switch
# between these two modes.
# This string is associated with the Orca command to enable sticky focus mode.
# This string is associated with the Orca command to enable sticky browse mode.
# Translators: this is for navigating among unvisited links in a document.
# Translators: this is for navigating among visited links in a document.
# We want to avoid self-referential relationships.
# Copyright (C) 2011-2013 Igalia, S.L.
# TODO - We probably do not wish to "infer" from these. Instead, we
# should ensure that this content gets presented as part of the widget.
# (i.e. the label is something on screen. Widget name and description
# are each something other than a label.)
# Desperate times call for desperate measures....
# None of the cells immediately surrounding this cell seem to be serving
# as a functional label. Therefore, see if this table looks like a grid
# of widgets with the functional labels in the first row.
# TODO - JD: According to the doc string above, one of the main motivators of the work here is
# Solaris. If the situation stated does not apply to Linux, do we need to do this work?
# Find the numerical value of the keysym
# Now find the keycodes for the keysym. Since a keysym can
# be associated with more than one key, we'll shoot for the
# keycode that's in group 0, regardless of shift level (each
# entry is of the form [keycode, group, level]).
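The group-0 preference can be sketched over already-fetched keymap entries (pure Python; the triplets mirror the [keycode, group, level] form noted above, and the function name is an assumption):

```python
def keycode_in_group_zero(entries):
    # entries: [keycode, group, level] triplets for a single keysym.
    # Prefer the keycode in group 0, regardless of shift level.
    for keycode, group, _level in entries:
        if group == 0:
            return keycode
    # Fall back to the first candidate if nothing is in group 0.
    return entries[0][0] if entries else None

keycode_in_group_zero([[54, 1, 0], [38, 0, 0]])  # -> 38
```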
# TODO - JD: Consider moving these localized strings to one of the dedicated i18n files.
# Translators: this is presented in a GUI to represent the
# "insert" key when used as the Orca modifier.
# "caps lock" modifier.
#if mods & (1 << Atspi.ModifierType.NUMLOCK):
# "right alt" modifier.
# "super" modifier.
# "meta 2" modifier.
#if mods & (1 << Atspi.ModifierType.META):
# "alt" modifier.
# "control" modifier.
# "shift " modifier.
# Translators: Orca keybindings support double and triple "clicks" or key presses, similar
# to using a mouse.
# We lazily bind the keycode.  The primary reason for doing this
# is so that atspi does not have to be initialized before setting
# keybindings in the user's preferences file.
# If we are using keysyms, we need to bind the uppercase keysyms if requested,
# as well as the lowercase ones, because keysyms represent characters, not key locations.
# If there are no candidates, we could be in a situation where we went from outside
# of web content to inside web content in focus mode. When that occurs, refreshing
# keybindings will attempt to remove grabs for browse-mode commands that were already
# removed due to leaving document content. That should be harmless.
# TODO - JD: This shouldn't happen, but it does when trying to remove an overridden
# binding. This function gets called with the original binding.
# TODO - JD: This better not happen. Be sure that is indeed the case.
# pylint:disable=too-many-boolean-expressions
# If we don't have multiple matches, we're good.
# If we have multiple matches, but they have unique click counts, we're good.
# Checking the modifier mask ensures we don't consume flat review commands
# when NumLock is on.
# If there's no keysymstring, it's unbound and cannot be a match.
# If we're still here, we don't have an exact match. Prefer the one whose click count is
# closest to, but does not exceed, the actual click count.
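That tie-break rule (an exact click count wins; otherwise take the closest count that does not exceed the actual one) can be sketched as follows. The dict shape and function name are assumptions:

```python
def best_binding(candidates, actual_clicks):
    # Keep only bindings whose click count does not exceed the actual count,
    # then take the closest (i.e. largest) remaining count.
    eligible = [b for b in candidates if b["click_count"] <= actual_clicks]
    if not eligible:
        return None
    return max(eligible, key=lambda b: b["click_count"])

bindings = [{"name": "single", "click_count": 1},
            {"name": "double", "click_count": 2}]
best_binding(bindings, 3)  # no triple-click binding, so "double" is chosen
```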
# Translators: this is the action name for the 'toggle' action. It must be the
# same string used in the *.po file for gail.
# Translators: this is an indication of the focused icon and the count of the
# total number of icons within an icon panel. An example of an icon panel is
# the Nautilus folder view.
# Translators: this refers to the position of an item in a list or group of
# objects, such as menu items in a menu, radio buttons in a radio button group,
# combobox item in a combobox, etc.
# Translators: this refers to the position of an item in a list for which the
# number. Normally Orca announces both the position of the item and the
# total number (e.g. "3 of 5"). This is the corresponding message for the
# unknown-count scenario.
# list that's inside another list). This string is specifically for braille.
# Because braille displays lack real estate, we're using a shorter string than
# we use for speech.
# Translators: This represents the depth of a node in a TreeView (i.e. how many
# ancestors the node has). This is the spoken version.
# ancestors the node has). This is the braille version.
# This relationship will be presented for the object containing the details, e.g.
# when arrowing into or out of it. The string substitution is for the object to
# which the detailed information applies. For instance, when navigating into
# the details for an image named Pythagorean Theorem, Orca would present:
# "details for Pythagorean Theorem image".
# This relationship will be presented for the object which has details to tell
# the user the type of object where the details can be found so that they can
# more quickly navigate to it. The string substitution is for the object to
# which the detailed information applies. For instance, when navigating to
# a password field which has details in a list named "Requirements", Orca would
# present: "has details in Requirements list".
# Translators: This string should be treated as a role describing an object.
# Examples of roles include "checkbox", "radio button", "paragraph", and "link."
# This role refers to a container with a proposed change. This change can
# include the insertion and/or deletion of content, and would typically be seen
# in a collaborative editor, such as in Google Docs.
# The reason for including the editable state as part of the role is to make it
# possible for users to quickly identify combo boxes in which a value can be
# typed or arrowed to.
# This role is to describe elements in web content which have the contenteditable
# attribute set to true, indicating that the element can be edited by the user.
# The feed role is a scrollable list of articles where scrolling may cause
# articles to be added to or removed from either end of the list.
# https://w3c.github.io/aria/#feed
# The figure role is a perceivable section of content that typically contains a
# graphical document, images, code snippets, or example text.
# https://w3c.github.io/aria/#figure
# This role refers to the abstract in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-abstract
# This role refers to the acknowledgments in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-acknowledgments
# This role refers to the afterword in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-afterword
# This role refers to the appendix in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-appendix
# This role refers to a bibliography entry in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-biblioentry
# This role refers to the bibliography in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-bibliography
# This role refers to a chapter in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-chapter
# This role refers to the colophon in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-colophon
# This role refers to the conclusion in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-conclusion
# This role refers to the cover in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-cover
# This role refers to a single credit in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-credit
# This role refers to the credits in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-credits
# This role refers to the dedication in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-dedication
# This role refers to a single endnote in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-endnote
# This role refers to the endnotes in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-endnotes
# This role refers to the epigraph in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-epigraph
# This role refers to the epilogue in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-epilogue
# This role refers to the errata in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-errata
# This role refers to an example in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-example
# This role refers to the foreword in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-foreword
# This role refers to the glossary in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-glossary
# This role refers to the index in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-index
# This role refers to the introduction in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-introduction
# This role refers to a pagebreak in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-pagebreak
# This role refers to a page list in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-pagelist
# This role refers to a named part in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-part
# This role refers to the preface in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-preface
# This role refers to the prologue in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-prologue
# This role refers to a pullquote in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-pullquote
# This role refers to a questions-and-answers section in a digitally-published
# document. https://w3c.github.io/dpub-aria/#doc-qna
# In English, "QNA" is generally recognized by native speakers. If your language
# lacks the equivalent, please prefer the shortest phrase which clearly conveys
# the meaning.
# This role refers to the subtitle in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-subtitle
# This role refers to the table of contents in a digitally-published document.
# https://w3c.github.io/dpub-aria/#doc-toc
# Translators: The 'h' in this string represents a heading level attribute for
# content that you might find in something such as HTML content (e.g., <h1>).
# The translated form is meant to be a single character followed by a numeric
# heading level, where the single character is to indicate 'heading'.
# Translators: The %(level)d is in reference to a heading level in HTML (e.g.,
# for <h3>, the level is 3) and the %(role)s is in reference to a previously
# translated rolename for the heading.
# The reason we include the orientation as part of the role is because in some
# applications and toolkits, it can dictate which keyboard keys should be used
# to modify the value of the widget.
# A slider is a widget which looks like a bar and displays a value as a range.
# A common example of a slider can be found in UI for modifying volume levels.
# A splitter is a bar that divides a container into two parts. It is often, but
# not necessarily, user resizable. A common example of a splitter can be found
# in email applications, where there is a container on the left which holds a
# list of all the mail folders and a container on the right which lists all of
# the messages in the selected folder. The bar which you click on and drag to
# resize these containers is the splitter. The reason we include the orientation
# as part of the role is because in some applications and toolkits, it can
# dictate which keyboard keys should be used to modify the value of the widget.
# The "switch" role is a "light switch" style toggle, such as can be seen in
# https://developer.gnome.org/gtk3/stable/GtkSwitch.html
# Translators: This is an alternative name for the parent object of a series
# of icons.
# The "banner" role is defined in the ARIA specification as "A region that
# contains mostly site-oriented content, rather than page-specific content."
# See https://www.w3.org/TR/wai-aria-1.1/#banner
# The "complementary" role is defined in the ARIA specification as "A supporting
# section of the document, designed to be complementary to the main content at a
# similar level in the DOM hierarchy, but remains meaningful when separated from
# the main content." See https://www.w3.org/TR/wai-aria-1.1/#complementary
# The "contentinfo" role is defined in the ARIA specification as "A large
# perceivable region that contains information about the parent document.
# Examples of information included in this region of the page are copyrights and
# links to privacy statements." See https://www.w3.org/TR/wai-aria-1.1/#contentinfo
# The "main" role is defined in the ARIA specification as "The main content of
# a document." See https://www.w3.org/TR/wai-aria-1.1/#main
# The "navigation" role is defined in the ARIA specification as "A collection of
# navigational elements (usually links) for navigating the document or related
# documents." See https://www.w3.org/TR/wai-aria-1.1/#navigation
# The "region" role is defined in the ARIA specification as "A perceivable
# section containing content that is relevant to a specific, author-specified
# purpose and sufficiently important that users will likely want to be able to
# navigate to the section easily and to have it listed in a summary of the page."
# See https://www.w3.org/TR/wai-aria-1.1/#region
# The "search" role is defined in the ARIA specification as "A landmark region
# that contains a collection of items and objects that, as a whole, combine to
# create a search facility." See https://www.w3.org/TR/wai-aria-1.1/#search
# The reason for including the visited state as part of the role is to make it
# possible for users to quickly identify if the link is associated with content
# already read.
# Translators: This string refers to a row or column whose sort-order has been set
# to ascending.
# to descending.
# Translators: This string refers to a row or column whose sort-order has been set,
# but the nature of the sort order is unknown or something other than ascending or
# descending.
# Translators: This is a state which applies to elements in document content
# which have an "onClick" action.
# Translators: This is a state which applies to items which can be expanded
# or collapsed such as combo boxes and nodes/groups in a treeview. Collapsed
# means the item's children are not showing; expanded means they are.
# which have a longdesc attribute. http://www.w3.org/TR/WCAG20-TECHS/H45.html
# Translators: This is a state which applies to the orientation of widgets
# such as sliders and scroll bars.
# Translators: This is a state which applies to a check box.
# Please don't use the same translation as for "selected",
# or it will be impossible to differentiate a checkbox in a list-item.
# Please don't use the same translation as for "not selected",
# Translators: This is a state which applies to a switch. For an example of
# a switch, see https://developer.gnome.org/gtk3/stable/GtkSwitch.html
# Translators: This is a state which applies to a toggle button.
# Translators: This is a state which applies to an item or option
# in a selectable list.
# Translators: This is a state which applies to a radio button.
# Translators: This is a state which applies to a table cell.
# Translators: This is a state which applies to a link.
# Translators: This state represents an item on the screen that has been set
# insensitive (or grayed out).
# Translators: Certain objects (such as form controls on web pages) can have
# STATE_EDITABLE set to inform the user that this field can be filled out.
# It is assumed that form fields will be editable; if they lack this state,
# we need to present that information to the user. This string is the spoken
# we need to present that information to the user. This string is the braille
# version. (Because braille displays have limited real estate, we abbreviate.)
# STATE_REQUIRED set to inform the user that this field must be filled out.
# Translators: "multi-select" refers to a web form list in which more than
# one item can be selected at a time.
# Translators: STATE_INVALID_ENTRY indicates that the associated object, such
# as a form field, has an error. The following string is spoken when all we
# know is that an error has occurred, but not the type of error.
# as a form field, has an error. The following string is displayed in braille
# when all we know is that an error has occurred, but not the type of error.
# We prefer a smaller string than in speech because braille displays have a
# limited size.
# as a form field, has an error. The following string is spoken when the error
# is related to spelling.
# when the error is related to spelling. We prefer a smaller string than in
# speech because braille displays have a limited size.
# is related to grammar.
# when the error is related to grammar. We prefer a smaller string than in
# TODO: Look at why we're doing this as lists.
# Copyright 2018-2023 Igalia, S.L.
# Copyright 2025 Valve Corporation
# pylint: disable=too-many-nested-blocks
# Only strip prefixes for getters and setters, not for commands
# orca.shutdown() shuts down the dbus service, so send the response immediately and then
# do the actual shutdown after a brief delay.
# TODO - JD: It probably makes sense to fully process these input
# events just like any others, rather than have the caller here.
# Parameterized Command
# Setter
# Copyright 2011-2023 Igalia, S.L.
# We might have nested cells. So far this has only been seen in Gtk, where the
# parent of a table cell is also a table cell. From the user's perspective, we
# are on the parent. This check also covers Writer documents in which the caret
# is likely in a paragraph child of the cell.
# And we might instead be in some deeply-nested elements which display text in
# a web table, so we do one more check.
# If we're in a cell that spans multiple rows and/or columns, the coordinates will refer to
# the upper left cell in the spanned range(s). We're storing the last row and column that
# we presented in order to facilitate more linear movement. Therefore, if the cell at the
# stored coordinates is the same as cell, we prefer the stored coordinates.
# TODO - JD: This should be part of the normal table cell presentation.
# TODO - JD: Ditto.
# Author: Tomas Cerha <cerha@brailcom.org>
# to `_active_servers'.
# The following constants must be initialized in runtime since they
# depend on the speechd module being available.
# the user's chosen server (synthesizer) might be from spiel.
# Try to set precise dialect
# This command is not available with older SD versions.
# This callback is called in Speech Dispatcher listener thread.
# No subsequent Speech Dispatcher interaction is allowed here,
# so we pass the calls to the gidle thread.
# to indicate that we don't want to be called again.
# In order to re-enable this, a potentially non-trivial amount of work
# will be needed to ensure multiple utterances sent to speech.speak
# do not result in the initial utterances getting cut off before they
# can be heard by the user. Anyone needing to interrupt speech can
# do so by using the default script's method interrupt_presentation.
#if interrupt:
# TODO - JD: If we're reaching this point without the name having been adjusted, we're
# doing it wrong.
# Get all voices and filter manually
# TODO - JD: The speech-dispatcher API accepts language parameters for filtering,
# but this seems to fail for Voxin (returns all voices regardless of language).
# TODO - JD: This updates the output module, but not the value of self._id.
# That might be desired (e.g. self._id impacts what is shown in Orca preferences),
# but it can be confusing.
# Don't call _cancel() here because it can cut off messages we want to complete, such
# as "screen reader off."
# Set client to None to allow immediate garbage collection.
# Force garbage collection to clean up Speech Dispatcher objects immediately
# This prevents hanging during Python's final garbage collection.
# Try to set synthesis voice (individual voice like Nathan, Federica)
# Note: Anything added here should also have an entry in text_attribute_names.py.
# The tuple is (non-localized name, enable by default).
# TODO - JD: Is this still needed?
# TODO - JD: Are these still needed?
# Don't adjust the length in multiline text because we want to say "blank" at the end.
# This may or may not be sufficient. GTK3 seems to give us the correct, empty line. But
# (at least) Chromium does not. See comment below.
# Try again, e.g. Chromium returns "", -1, -1.
# If the caller provides an offset positioned at the end boundary of the
# current line (e.g. start iteration from the previous line's end), some
# implementations of Atspi return the same line again for that offset.
# To avoid yielding duplicates (e.g. in get_visible_lines()), only yield
# the current line when the offset points inside it; otherwise start with
# the next distinct line.
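The boundary-skipping rule described above can be sketched over a plain list of (line, start, end) tuples; the names here are illustrative, while the real code queries Atspi for each line.

```python
def iter_lines_from(lines, offset):
    """Yield (line, start, end) tuples at or after the given offset.

    A line whose end boundary equals the offset is skipped, so a caller
    iterating from a previous line's end never receives that line again.
    """
    for line, start, end in lines:
        if end > offset:  # offset is inside this line, or the line follows it
            yield (line, start, end)
```

An offset sitting exactly on a line's end boundary thus yields only the next distinct line.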
# Skip whitespace and embedded objects to find start of next sentence.
# Only add boundary if we haven't reached the end and it's not a duplicate.
# Avoid yielding a duplicate when the starting offset is exactly at the
# end boundary of the current sentence. Some implementations can return
# the same (sentence, start, end) again for that offset.
# TODO - JD: For now we always assume and operate on the first selection.
# This preserves the original functionality prior to the refactor. But whether
# that functionality is what it should be needs investigation.
# TODO - JD: This should be converted to return AXTextAttribute values.
# Adjust for web browsers that report indentation and justification at object attributes
# rather than text attributes.
# TODO - JD: We're sometimes seeing this from WebKit, e.g. in Evo gitlab messages.
# https://gitlab.gnome.org/GNOME/gtk/-/issues/6419
# Translators: this is a structure to assist in the generation of
# spoken military-style spelling.  For example, 'abc' becomes 'alpha
# bravo charlie'.
# It is a simple structure that consists of pairs of
# letter : word(s)
# where the letter and word(s) are separated by colons and each
# pair is separated by commas.  For example, we see:
# a : alpha, b : bravo, c : charlie,
# And so on.  The complete set should consist of all the letters from
# the alphabet for your language paired with the common
# military/phonetic word(s) used to describe that letter.
# The Wikipedia entry
# http://en.wikipedia.org/wiki/NATO_phonetic_alphabet has a few
# interesting tidbits about local conventions in the sections
# "Additions in German, Danish and Norwegian" and "Variants".
# Leave these as-is for now so as not to break debugging for anyone using orca-customizations.py
# pylint: enable=invalid-name
# Prevent reentrancy.
# TODO - JD: This is a workaround from the web script. Ideally, the speech servers' say-all
# would be able to handle more complex utterances.
# Copyright 2004-2008 Sun Microsystems Inc.
# Alias gettext.gettext to _ and gettext.ngettext to ngettext
# Tell gettext where to find localized strings.
# Utility methods to facilitate easier translation                     #
# no translation found, return input string
# Utilities for finding all objects that meet a certain criteria.
# TODO: Can we get the tablesdir info at runtime?
# Common names for most used BrlTTY commands, to be shown in the GUI,
# The size of the physical display (width, height).  The coordinate system of
# the display is set such that the upper left is (0,0), x values increase from
# left to right, and y values increase from top to bottom.
# For the purposes of testing w/o a braille display, we'll set the display
# size to width=32 and height=1.
# [[[TODO: WDW - Only a height of 1 is supported at this time.]]]
# The list of lines on the display.  This represents the entire amount of data
# to be drawn on the display.  It will be clipped by the viewport if too large.
# The region with focus.  This will be displayed at the home position.
# The last text information painted.  This has the following fields:
# lastTextObj = the last accessible
# lastCaretOffset = the last caret offset of the last text displayed
# lastLineOffset = the last line offset of the last text displayed
# lastCursorCell = the last cell on the braille display for the caret
# The viewport is a rectangular region of size _displaySize whose upper left
# corner is defined by the point (x, line number).  As such, the viewport is
# identified solely by its upper left point.
# The callback to call on a BrlTTY input event.  This is passed to
# the init method.
# If True, the given portion of the currently displayed line is showing
# on the display.
# 1-based offset saying which braille cell has the cursor.  A value
# of 0 means no cell has the cursor.
# The event source of a timeout used for flashing a message.
# Line information saved prior to flashing any messages
# Set to True when we lower our output priority
# BRLAPI priority levels if Orca should have idle, normal or high priority
# Saved BRLAPI priority
# Translators: These are the braille translation table names for different
# languages. You could read about braille tables at:
# http://en.wikipedia.org/wiki/Braille
# Some of the tables might be a better default than others. For instance, someone who
# can read grade 2 braille presumably can read grade 1; the reverse is not necessarily
# true. Literary braille might be easier for some users to read than computer braille.
# We can adjust this based on user feedback, but in general the goal is a sane default
# for the largest group of users; not the perfect default for all users.
# If we couldn't find a preferred match, just go with the first match for the locale.
# If louis is None, then we don't go into contracted mode.
# The uncontracted string for the line.
# Make sure the cursor is at a realistic spot.
# Note that if cursor_offset is beyond the end of the buffer,
# a spurious value is returned by liblouis in cursorPos.
# Off the chart, we just place the cursor at the end of the line.
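A minimal sketch of that clamping, using a hypothetical `clamp_cursor` helper over the contracted string (the real code reads cursorPos back from liblouis):

```python
def clamp_cursor(cursor_pos: int, contracted: str) -> int:
    """Keep the cursor at a realistic spot in the contracted line.

    liblouis can report a spurious cursorPos when the requested offset
    is beyond the end of the buffer; in that case, place the cursor at
    the end of the line.
    """
    if 0 <= cursor_pos < len(contracted):
        return cursor_pos
    return len(contracted)  # off the chart: snap to the end of the line
```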
# Not in contracted mode.
# Do a mouse button 1 click if we have to.  For example, page tabs
# don't have any actions but we want to be able to select them with
# the cursor routing key.
# Ensure there is a place to click on at the end of a line so the user can route the
# caret there.
# Add empty mask characters for the EOL character as well as for the label.
# TODO: The way words are being combined here can result in incorrect range groupings.
# For instance, if we generate the full ancestry of a multiline text object and the
# line begins with whitespace, we'll wind up with a single range that contains the
# last word of the ancestor followed by the whitespace and the first word, e.g.
# "frame      Hello". We probably should not be creating a single string which we then
# split into words.
# Subdivide long words that exceed the display width.
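That subdivision can be sketched as fixed-width chunking; `subdivide_word` is a hypothetical helper, and the real code also tracks per-chunk offsets for cursor routing.

```python
def subdivide_word(word: str, display_width: int) -> list[str]:
    """Split a word into chunks no wider than the braille display."""
    return [word[i:i + display_width]
            for i in range(0, len(word), display_width)]
```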
# Translate the cursor offset for this line into a cursor offset for a region, and then
# pass the event off to the region for handling.
# Adjust the viewport according to the new region with focus.
# The goal is to have the first cell of the region be in the
# home position, but we will give priority to make sure the
# cursor for the region is on the display.  For example, when
# faced with a long text area, we'll show the position with
# the caret vs. showing the beginning of the region.
# If the cursor is too far right, we scroll the viewport
# so the cursor will be on the last cell of the display.
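The panning rule above amounts to keeping the cursor cell inside a window of the display width; a sketch with illustrative names:

```python
def pan_viewport(viewport_x: int, cursor: int, width: int) -> int:
    """Return the new viewport origin so the cursor stays on the display.

    Prefer the home position (region start), but never let the cursor
    fall off the display: if it is too far right, scroll so it lands on
    the last cell.
    """
    if cursor < viewport_x:           # cursor left of the viewport: jump to it
        return cursor
    if cursor >= viewport_x + width:  # too far right: cursor on the last cell
        return cursor - width + 1
    return viewport_x                 # already visible: leave the viewport alone
```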
# We do want to try to clear the output we left on the device
# Restore default priority
# BrlAPI before 0.8 and we really want to shut down
# TODO - JD: Split this work out into smaller methods.
# TODO - JD: This should be taken care of in orca.py.
# If there is no target cursor cell and panning to cursor was
# requested, then try to set one.  We
# currently only do this for text objects, and we do so by looking
# at the last position of the caret offset and cursor cell.  The
# primary goal here is to keep the cursor movement on the display
# somewhat predictable.
# Now, we figure out the 0-based offset for where the cursor actually is in the string.
# Now, if desired, we'll automatically pan the viewport to show
# the cursor.  If there's no targetCursorCell, then we favor the
# left of the display if we need to pan left, or we favor the
# right of the display if we need to pan right.
# Now normalize the cursor position to BrlTTY, which uses 1 as
# the first cursor position as opposed to 0.
# Normalize to 1-based offset
# [[[WDW - if you want to muck around with the dots on the
# display to do things such as add underlines, you can use
# the attrOr field of the write structure to do so.  The
# attrOr field is a string whose length must be the same
# length as the display and whose dots will end up showing
# up on the display.  Each character represents a bitfield
# where each bit corresponds to a dot (i.e., bit 0 = dot 1,
# bit 1 = dot 2, and so on).  Here's an example that underlines
# all the text.]]]
#myUnderline = ""
#for i in range(0, _displaySize[0]):
#    myUnderline += '\xc0'  # 0xC0 sets bits 6 and 7, i.e. dots 7 and 8
#writeStruct.attrOr = myUnderline
# Remember the text information we were presenting (if any)
# The monitor will be created in refresh if needed.
# Ring List. A fixed size circular list by Flavio Catalani                  #
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/435902            #
# Included here to keep track of conversation histories.                    #
# Conversation                                                              #
# The number of messages to keep in the history
# A cyclic list to hold the chat room history for this conversation
# Initially populate the cyclic lists with empty strings.
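A minimal stand-in for that fixed-size circular list, using collections.deque rather than the recipe's own implementation; the interface names here are illustrative, not the recipe's API.

```python
from collections import deque

class RingList:
    """Fixed-size circular list: appending past capacity drops the oldest."""

    def __init__(self, length: int):
        self._items = deque(maxlen=length)

    def append(self, item) -> None:
        self._items.append(item)  # oldest item falls off when full

    def get(self) -> list:
        return list(self._items)
```

Initially populating it with empty strings, as the comment describes, is then just appending `""` once per slot.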
# Keep track of the last typing status because some platforms (e.g.
# MSN) seem to issue the status constantly, even though it has
# not changed.
# ConversationList                                                          #
# A cyclic list to hold the most recent (messageListLength) previous
# messages for all conversations in the ConversationList.
# A corresponding cyclic list to hold the name of the conversation
# associated with each message in the messageHistory.
# TODO - JD: In the Pidgin script, I do not believe we handle the
# case where a conversation window is closed. I *think* it remains
# in the overall chat history. What do we want to do in that case?
# I would assume that we'd want to remove it.... So here's a method
# to do so. Nothing in the Chat class uses it yet.
# Chat                                                                      #
# Keybindings to provide conversation message history. The message
# review order will be based on the index within the list. Thus F1
# is associated with the most recent message, F2 the message before
# that, and so on. A script could override this. Setting messageKeys
# to ["a", "b", "c" ... ] will cause "a" to be associated with the
# most recent message, "b" to be associated with the message before
# that, etc. Scripts can also override the messageKeyModifier.
# The length of the message history will be based on how many keys
# are bound to the task of providing it.
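The key-to-history mapping described above can be sketched as follows (hypothetical names; the real scripts bind input event handlers rather than building a plain dict):

```python
def message_key_map(message_keys: list[str]) -> dict[str, int]:
    """Map each review key to a history index.

    The first key reviews the most recent message (index 0), the second
    the message before it, and so on; the history length equals the
    number of keys bound to the task.
    """
    return {key: index for index, key in enumerate(message_keys)}
```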
# InputEvent handlers and supporting utilities                         #
# Only speak/braille the new message if it matches how the user
# wants chat messages spoken.
# The script should handle non-chat specific text areas (e.g.,
# adding a new account).
# These are status changes. What the Pidgin script currently
# does for these is ignore them. It might be nice to add
# some options to allow the user to customize what status
# changes are presented. But for now, we'll ignore them
# across the board.
# The user may or may not want us to present this message. Also,
# don't speak the name if it's the focused chat.
# Convenience methods for identifying, locating different accessibles  #
# Note: This is a very simple heuristic based on existing chat apps.
# Subclasses can override this function.
# TODO - JD: If we have multiple chats going on and those
# chats have the same name, and we're in the input area,
# this approach will fail. What I should probably do instead
# is, upon creation of a new conversation, figure out where
# the input area is and save it. For now, I just want to get
# things working. And people should not be in multiple chat
# rooms with identical names anyway. :-)
# Most of the time, it seems that the name can be found in the
# page tab which is the ancestor of the chat history. Failing
# that, we'll look at the frame name. Failing that, scripts
# should override this method. :-)
# TODO - JD: I still need to figure this one out. Pidgin seems to
# no longer be presenting this change in the conversation history
# as it was doing before. And I'm not yet sure what other apps do.
# In the meantime, scripts can override this.
# Copyright 2023 The Orca Team
# Author: Rynhardt Kruger <rynkruger@gmail.com>
# is_layout_only should catch things that really should be skipped.
# You do not want to exclude all sections because they may be focusable, e.g.
# <div tabindex=0>foo</div> should not be excluded, despite the poor authoring.
# You do not want to exclude table cells and headers because it will make the
# selectable items in tables non-navigable (e.g. the mail folders in Evolution)
# Add the children of excluded objects to our list of children.
# The first non-excluded ancestor is the functional parent.
# Utilities for obtaining information about accessible objects.
# TODO - JD: Periodically check for fixes and remove hacks which are no
# longer needed.
# https://bugzilla.mozilla.org/show_bug.cgi?id=1879750
# https://bugreports.qt.io/browse/QTBUG-130116
# To avoid circular import. pylint: disable=import-outside-toplevel
# This performs our check and includes any errors. We don't need the return value here.
# Keep track of objects we've encountered in order to handle broken trees.
# The return value is an AtspiPoint, hence x and y.
# Added in Atspi 2.52.
# This is for prototyping in the meantime.
# We use the Atspi function rather than the AXObject function because the
# latter intentionally handles exceptions.
# GTK4 does this.
# We get all sorts of variations in the keybinding string. Try to normalize it.
# We use Gtk for conversion to handle things like <Primary>.
# The ARIA spec suggests a given shortcut's components should be separated by a "+".
# Multiple shortcuts are apparently allowed and separated by a space.
# Accelerators are typically modified and thus more than one character.
# This should be a string separated by semicolons and in the form:
# In practice we get all sorts of variations.
# If there's a third item, it's probably the accelerator.
# If the last thing has Ctrl in it, it's probably the accelerator.
# If it's not a single letter it's probably not the mnemonic.
# If Ctrl is in the result, it's probably the accelerator rather than the mnemonic.
# Don't treat space as a mnemonic.
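Those heuristics can be sketched as a small classifier; this is illustrative only, as the real code additionally normalizes keybinding strings via Gtk.

```python
def classify_shortcut_parts(parts: list[str]) -> tuple[str, str]:
    """Guess (mnemonic, accelerator) from keybinding components.

    Accelerators are typically modified and thus more than one character
    (or contain Ctrl); a mnemonic is a single non-space character.
    """
    mnemonic = accelerator = ""
    for part in parts:
        if "Ctrl" in part or len(part) > 1:
            accelerator = part
        elif part.strip():  # don't treat space as a mnemonic
            mnemonic = part
    return mnemonic, accelerator
```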
# Copyright 2011-2016 Igalia, S.L.
# pylint:disable=too-many-arguments
# pylint:disable=too-many-positional-arguments
# Some implementors don't include numlock in the modifiers. Unfortunately,
# trying to heuristically hack around this just by looking at the event
# is not reliable. Ditto regarding asking Gdk for the numlock state.
# pylint:enable=too-many-arguments
# pylint:enable=too-many-positional-arguments
# Mn = Mark, nonspacing; Mc = Mark, spacing combining; Me = Mark, enclosing
# TODO - JD: This should go away once plugin support is in place.
# TODO - JD: Remove this and the validation logic once we have a fix for
# https://gitlab.gnome.org/GNOME/at-spi2-core/-/issues/194.
# Seconds a message is held in the queue before it is discarded
# The number of messages that are cached and can later be reviewed via
# LiveRegionManager.reviewLiveAnnouncement.
# corresponds to one of nine key bindings
# message priority queue
# To make it possible for focus mode to suspend commands without changing
# the user's preferred setting.
# This is temporary.
# Message cache.  Used to store up to 9 previous messages so user can
# review if desired.
# User overrides for politeness settings.
# last live obj to be announced
# Used to track whether a user wants to monitor all live regions
# Not to be confused with the global Gecko.liveRegionsOn which
# completely turns off live region support.  This one is based on
# a user control by changing politeness levels to LIVE_OFF or back
# to the bookmark or markup politeness value.
# Set up politeness level overrides and subscribe to bookmarks
# for load and save user events.
# We are initialized after bookmarks so call the load handler once
# to get initialized.
# First we will purge our politeness override dictionary of LIVE_NONE
# objects that are not registered for this page
# readBookmarksFromDisk() returns None on error. Just initialize to an
# empty dictionary if this is the case.
# All the 'registered' LIVE_NONE objects will be set to off
# if not monitoring.  We will ignore LIVE_NONE objects that
# arrive after the user switches off monitoring.
# Nothing to do for now
# Form output message.  No need to repeat labels and content.
# TODO: really needs to be tested in real life cases.  Perhaps
# a verbosity setting?
# set the last live obj to be announced
# cache our message
# We still want to maintain our queue if we are not monitoring
# See you again soon, stay in event loop if we still have messages.
# The current priority is either a previous override or the
# live property.  If an exception is thrown, an override for
# this object has never occurred and the object does not have
# live markup.  In either case, set the override to LIVE_NONE.
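The override lookup described above follows a try/except pattern; a sketch with plain dicts standing in for the override table and the live markup:

```python
LIVE_NONE = 0  # illustrative constant

def current_politeness(overrides: dict, key, live_markup: dict) -> int:
    """Return a previous override or the live property for this object.

    If neither has ever existed, an exception is raised on lookup; in
    that case record and return LIVE_NONE.
    """
    try:
        return overrides[key]
    except KeyError:
        pass
    try:
        return live_markup[key]
    except KeyError:
        overrides[key] = LIVE_NONE
        return LIVE_NONE
```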
# start at the document frame
# get the URI of the page.  It is used as a partial key.
# The user is currently monitoring live regions but now wants to
# change all live region politeness on page to LIVE_OFF
# First we'll save off a copy for quick restoration
# Set all politeness overrides to LIVE_OFF.
# look through all the objects on the page and set/add to
# politeness overrides.  This only adds live regions with good
# markup.
# Toggle our flag
# The user wants to restore politeness levels
# get the description if there is one.
# We will add on descriptions if they don't duplicate
# what's already in the object's description.
# See http://bugzilla.gnome.org/show_bug.cgi?id=568467
# for more information.
# get the politeness level as a string
# We will only output useful information
# A message is divided into two parts: labels and content.  We
# will first try to get the content.  If there is None,
# assume it is an invalid message and return None
# Proper live regions typically come with proper aria labels. These
# labels are typically exposed as names. Failing that, descriptions.
# Looking for actual labels seems a non-performant waste of time.
# instantly send out notify messages
# This function is called as part of presentation interrupt. One of the times we interrupt
# presentation is in response to a key press. The motivation for clearing the last message
# details is to prevent concluding incorrectly that a live region message is duplicate.
# For instance, if the same live region message results from two different back-to-back key
# presses, both of those messages should be presented.
# look to see if there is a user politeness override
# We'll save off a reference to LIVE_NONE if we are monitoring
# to give the user a chance to change the politeness level.  It
# is done here for performance's sake (objectid, uri are expensive)
# The setting, unfortunately, is disableBrailleEOL.
# Copyright 2011 The Orca Team.
# Attribute/Selection mask strings:
# "○"
# "●"
# Mouse reviewer for Orca
# Copyright 2008 Eitan Isaacson
# We get various and sundry results for the bounding box if the implementor
# included newline characters as part of the word or line at offset. Try to
# detect this and adjust the bounding boxes before getting the intersection.
# Set up the initial object as the one with the focus to avoid
# presenting irrelevant info the first time.
# On first startup windows and workspace are likely to be None,
# but the signals we connect to will get emitted when proper values
# become available;  but in case we got disabled and re-enabled we
# have to get the initial values manually.
# Adjust the pointer screen coordinates to be relative to the window. This is
# needed because we won't be able to get the screen coordinates in Wayland.
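The adjustment is a simple translation by the window origin (a sketch; in the real code the coordinates come from Atspi):

```python
def pointer_relative_to_window(pointer_x: int, pointer_y: int,
                               window_x: int, window_y: int) -> tuple[int, int]:
    """Translate screen pointer coordinates into window-relative ones,
    since absolute screen coordinates are unavailable under Wayland."""
    return pointer_x - window_x, pointer_y - window_y
```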
# Copyright 2010-2011 Consorcio Fernando de los Rios.
# Author: Juanje Ojeda Croissier <jojeda@emergya.es>
# if we removed the last profile, restore the default ones
# Do not enable learn mode for this action
# We want the user to be able to combine modifiers with the mouse click, therefore we
# do not "care" about the modifiers -- unless it's the Orca modifier.
# TODO - JD: Move these into the extension commands. That will require a new string
# and GUI change.
# TODO: Should probably avoid removing key grabs and re-adding them.
# Otherwise, a key could conceivably leak through while the script is
# in the process of updating the bindings.
# TODO - JD: Should these be moved into check_speech_setting?
# INPUT EVENT HANDLERS (AKA ORCA COMMANDS)                             #
# If we're at the beginning of a line of a multiline text
# area, then force its caret to the end of the previous
# line.  The assumption here is that we're currently
# viewing the line that has the caret -- which is a pretty
# good assumption for focus tracking mode.  When we set the
# caret position, we will get a caret event, which will
# then update the braille.
# If we didn't move the caret and we're in a terminal, we
# jump into flat review to review the text.  See
# http://bugzilla.gnome.org/show_bug.cgi?id=482294.
# We might be panning through a flashed message.
# If we're at the end of a line of a multiline text area, then
# force its caret to the beginning of the next line.  The
# assumption here is that we're currently viewing the line that
# has the caret -- which is a pretty good assumption for focus
# tracking mode.  When we set the caret position, we will get a
# caret event, which will then update the braille.
# Don't kill flash here because it will restore the previous contents and
# then process the routing key. If the contents accept a click action, this
# would result in clicking on the link instead of clearing the flash message.
# profile_names is now list[list[str]] where each is [display_name, internal_name]
# If no current match, start with first profile
# TODO: This is another "too close to code freeze" hack to cause the
# command names to be presented in the correct language.
# AT-SPI OBJECT EVENT HANDLERS                                         #
# Event handlers return bool:                                          #
# - True: Event was fully handled, no further processing needed        #
# - False: Event wasn't handled, should be passed to parent handlers   #
# Default return value is True (event handled by default script)       #
# TODO - JD: See if this can be removed. If it's still needed document why.
# TODO - JD: These need to be harmonized / unified / simplified.
# Force the update so that braille is refreshed.
# TODO - JD: Unlike the other state-changed callbacks, it seems unwise
# to call generate_speech() here because that also will present the
# expandable state if appropriate for the object type. The generators
# need to gain some smarts w.r.t. state changes.
# There is a bug in (at least) Pidgin in which a newly-expanded submenu lacks the
# showing and visible states, causing the logic below to be triggered. Work around
# that here by trusting selection changes from the locus of focus are probably valid
# even if the state set is not.
# If the current combobox is collapsed, its menu child that fired the event might lack
# the showing and visible states. This happens in (at least) Thunderbird's calendar
# new-appointment comboboxes. Therefore check to see if the event came from the current
# combobox. This is necessary because (at least) VSCode's debugger has some hidden menu
# that the user is not in which is firing this event. This is why we cannot have nice
# things.
# If the current item's selection is toggled, we'll present that
# via the state-changed event.
# If a wizard-like notebook page being reviewed changes, we might not get
# any events to update the locusOfFocus. As a result, subsequent flat
# review commands will continue to present the stale content.
# TODO - JD: We can potentially do some automatic reading here.
# Because some implementations are broken.
# We won't handle undo here as it can lead to double-presentation.
# If there is an application for which text-changed events are
# missing upon undo, handle them in an app or toolkit script.
# Popup menus in Chromium live in a menu bar whose first child is a panel.
# Methods for presenting content                                       #
# TODO - JD: Make this a TextEventReason. Also handle structural navigation
# and table navigation here. Technically that's not been necessary because
# it won't match anything below. But it would be cleaner to cover each cause
# explicitly.
# If we have selected text and the last event was a move to the
# right, then speak the character to the left of where the text
# caret is (i.e. the selected character).
# This is a blank line. Announce it if the user requested
# that blank lines be spoken.
# TODO - JD: This needs to be done in the generators.
# Some synthesizers will verbalize initial whitespace.
# Announce when we cross a hard line boundary.
# say_phrase is useful because it handles punctuation verbalization, but we don't want
# to trigger its whitespace presentation.
# TODO - JD: Remove this and have callers use the speech-adjustment logic.
############################################################################
# Presentation methods                                                     #
# (scripts should not call methods in braille.py or speech.py directly)    #
# TODO - JD: This is temporary and in place just so that we could include D-Bus support
# for the web script's commands prior to having global browse mode.
# Copyright 2019 Igalia, S.L.
# Mate
# Budgie
# Budgie's container doesn't actually hold the label.
# https://gitlab.gnome.org/GNOME/orca/-/issues/358
# To let default script handle presentation.
# TODO - JD: What condition specifically is this here for?
# The text in the description is the same as the text in the page
# tab and similar to (and sometimes the same as) the prompt.
# To ensure that when the Gtk script is active, events from document content
# are not ignored.
# Copyright (C) 2013-2014 Igalia, S.L.
# Handle changes within an entry completion popup.
# Copyright (C) 2023 Igalia, S.L.
# This is needed because Qt apps might insert some junk (e.g. a filler) in
# between the window/frame/dialog and the application.
# The fallback search is needed because sometimes we can ascend the accessibility
# tree all the way to the top; other times, we cannot get the parent, but can still
# get the children. The fallback search handles the latter scenario.
# https://bugreports.qt.io/browse/QTBUG-129656
# https://bugreports.qt.io/browse/QTBUG-116204
# Copyright (C) 2013-2019 Igalia, S.L.
# Copyright 2018-2019 Igalia, S.L.
# When there are no results due to the absence of a search term, the status
# bar lacks a name. When there are no results due to lack of match, the name
# of the status bar is "No results" (presumably localized). Therefore fall
# back on the widgets. TODO: This would be far easier if Chromium gave us an
# object attribute we could look for....
# Copyright 2018-2019 Igalia, S.L.
# The popup for an input with autocomplete on is a listbox child of a nameless frame.
# It lives outside of the document and also doesn't fire selection-changed events.
# Right now we don't get accessibility events for alerts which are
# already showing at the time of window activation. If that changes,
# we should store presented alerts so we don't double-present them.
# Copyright 2014-2015 Igalia, S.L.
# TODO: This would be far easier if Gecko gave us an object attribute to look for....
# Copyright 2010 Orca Team.
# We're sometimes getting a spurious focus claim from the Firefox window after opening
# a file from (at least) Caja.
# Override the base class type annotations
# TODO - JD: Can this logic be moved to the web script?
# We present changes when the list has focus via focus-changed events.
# Try to stop unwanted chatter when a message is being replied to.
# See bgo#618484.
# Type annotation to help with method resolution
# increment the column because the expander cell is hidden.
# Candidates will be in the rows beneath the current row.
# Only check in the current column and stop checking as
# soon as the node level of a candidate is equal or less
# than our current level.
# Type annotation to override the base class script type
# Copyright 2010 Joanmarie Diggs
# In the chat window, the frame name changes to reflect the active chat.
# So if we don't have a matching tab, this isn't the chat window.
# Hack to "tickle" the accessible hierarchy. Otherwise, the
# events we need to present text added to the chatroom are
# Overridden here because the event.source is in a hidden column.
# Bit of a hack. Pidgin inserts text into the chat history when the
# user is typing. We seem able to (more or less) reliably distinguish
# this text via its attributes because these attributes are absent
# from user inserted text -- no matter how that text is formatted.
# Override the base class type annotation since this script always has spellcheck
# If we're here, the locusOfFocus was in the selection list when
# that list got destroyed and repopulated. Focus is still there.
# Copyright 2018 Igalia, S.L.
# Override the base class type annotation since this script always has chat
# Copyright (C) 2015 Igalia, S.L.
# Copyright 2013 Igalia, S.L.
# For the no-such-function when the function is only in the subclass.
# TODO - JD: Figure out what's causing this in Evolution or WebKit and file a bug.
# When the selected message changes and the preview panel is showing, a panel with the
# `iframe` tag claims focus. We don't want to update our location in response.
# TODO - JD: Is this still needed? See also _get_cell_name_for_coordinates.
# https://bugs.documentfoundation.org/show_bug.cgi?id=158030
# https://bugs.documentfoundation.org/show_bug.cgi?id=160806
# Currently the coordinates of the cell are exposed as the name.
# TODO - JD: The SayLine, etc. code should be generated and not put
# together in the scripts. In addition, the voice crap needs to go
# here. Then it needs to be removed from the scripts.
# TODO - JD: Can this be moved to AXText?
# Copyright 2015 Igalia, S.L.
# Copyright 2010-2013 The Orca Team.
# TODO - JD: This is a hack that needs to be done better. For now it
# fixes the broken echo previous word on Return.
# TODO - JD: And this hack is another one that needs to be done better.
# But this will get us to speak the entire paragraph when navigation by
# paragraph has occurred.
# If we immediately set focus to the table, the lack of common ancestor will result in
# the ancestry up to the frame being spoken.
# Now setting focus to the table should cause us to present it. Then we can handle the
# presentation of the actual event we're processing without too much chattiness.
# TODO - JD: Can we remove this?
# If we were in a cell, and a different table is claiming focus, it's likely that
# the current sheet has just changed. There will not be a common ancestor between
# the old cell and the table and we'll wind up re-announcing the frame. To prevent
# that, set the focus to the parent of the sheet before the default script causes
# the table to be presented.
# https://bugs.documentfoundation.org/show_bug.cgi?id=163801
# Copyright (C) 2014 Igalia, S.L.
# https://gitlab.gnome.org/GNOME/mutter/-/issues/3826
# Copyright (C) 2010-2013 Igalia, S.L.
# Author: Alejandro Pinheiro Iglesias <apinheiro@igalia.com>
# TODO - JD: This workaround no longer works because the window has a name.
# We're getting a spurious focus claim from the gnome-shell window after
# the window switcher is used.
# If we're already in a dialog, and a label inside that dialog changes its name,
# present the new name. Example: the "Command not found" label in the Run dialog.
# gnome-shell fails to implement the selection interface but fires state-changed
# selected in the switcher and similar containers.
# The convenience of using a dictionary to add/goto a bookmark is offset
# by the difficulty in finding the next bookmark. We will need to sort
# our keys to determine the next bookmark on a page-by-page basis.
# mine out the hardware keys for this page and sort them
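The add/goto-versus-next trade-off described above can be sketched as follows. This is a minimal illustration, not the script's own code; `next_bookmark` and its dict-of-positions argument are assumptions.

```python
from bisect import bisect_right

def next_bookmark(bookmarks, current):
    """Return the key of the next bookmark after `current`, wrapping around.

    `bookmarks` is assumed to be a dict keyed by position; the dict gives
    cheap add/goto, so finding "next" requires sorting the keys first.
    """
    keys = sorted(bookmarks)
    if not keys:
        return None
    i = bisect_right(keys, current)
    return keys[i % len(keys)]  # wrap to the first bookmark at the end
```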
# TODO - JD: Should callers instead call dump_cache with preserve_context=True?
# To avoid triggering popup lists.
# TODO - JD: Audit callers and see if this can be merged into the default logic.
### TODO - JD: Replace this with an AXComponent utility.
# Embedded objects such as images and certain widgets won't implement the text interface
# and thus won't expose text attributes. Therefore try to get the info from the parent.
# This should cause us to initially stop at the large containers before
# allowing the user to drill down into them in browse mode.
# Example: Some StackExchange instances have a focusable "note"/comment role
# with a name (e.g. "Accepted"), and a single child div which is empty.
# Check for things in the same sentence before this object.
# Check for things in the same sentence after this object.
# Check for things in the same word to the left of this object.
# Check for things in the same word to the right of this object.
# We want to treat the list item marker as its own word.
# Do not treat a literal newline char as the end of line. When there is an
# actual newline character present, user agents should give us the right value
# for the line at that offset. Here we are trying to figure out where asking
# for the line at offset will give us the next line rather than the line where
# the cursor is physically blinking.
# This happens with dynamic skip links such as found on Wikipedia.
# Check for things on the same line to the left of this object.
# Check for things on the same line to the right of this object.
# TODO - JD: Move to AXUtilities.
# TODO - JD: This could go in the AXUtilities.
# TODO - JD: Audit this to see if they are now redundant.
# TODO - JD: Move into one of the AX* classes.
# If we have a series of embedded object characters, there's a reasonable chance
# they'll look like the one-word-per-line CSSified text we're trying to detect.
# We don't want that false positive. By the same token, the one-word-per-line
# CSSified text we're trying to detect can have embedded object characters. So
# if we have more than 30% EOCs, don't use this workaround. (The 30% is based on
# testing with problematic text.)
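The 30% embedded-object-character (EOC) cutoff above could be computed along these lines. A hedged sketch only: the function name and threshold parameter are illustrative, and U+FFFC is the object replacement character that accessibility text APIs use for embedded objects.

```python
EMBEDDED_OBJECT_CHAR = "\ufffc"  # U+FFFC stands in for images/widgets in text

def too_many_eocs(text, threshold=0.3):
    """Skip the one-word-per-line workaround when more than `threshold`
    of the characters are embedded object characters.  The 0.3 default
    mirrors the 30% figure mentioned in the comment above."""
    if not text:
        return False
    ratio = text.count(EMBEDDED_OBJECT_CHAR) / len(text)
    return ratio > threshold
```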
# Note: We cannot check for the editable-text interface, because Gecko
# seems to be exposing that for non-editable things. Thanks Gecko.
# they'll look like the one-char-per-line CSSified text we're trying to detect.
# We don't want that false positive. By the same token, the one-char-per-line
# TODO - JD: Move into the AXUtilities.
# TODO - JD: This should be in AXUtilities.
# TODO - JD: This should be in AXEventUtilities.
# TODO - JD: This is pretty generic and belongs in AXEventUtilities.
# TODO - JD: This should be in AXEventUtilities. And also fixed.
# There may be other roles where we need to do this. For now, solve the known one.
# TODO - JD: This is one of those "noise" functions.
# TODO - JD: Revisit all the cases where this function is used and editability
# alone is not enough of a check.
# We try to do this check only if needed because getting object attributes is
# not as performant, and we cannot use the cached attribute because aria-hidden
# can change frequently depending on the app.
# TODO - JD: All the clearing stuff should be unified.
# TODO - JD: Can we remove this? If it's needed, should it be recursive?
# TODO - JD: Can we remove this? Even if it is needed, we now also clear the
# cache in _handleEventForRemovedSelectableChild. Also, if it is needed, should
# it be recursive?
# Risk "chattiness" if the locusOfFocus is dead and the object we've found is
# focused and has a different name than the last known focused object.
# Descending an element that we're treating as whole can lead to looping/getting stuck.
# If we're here, start looking up the tree, up to the document.
# TODO - JD: Is this detached document logic still needed?
# Copyright 2010-2011 Orca Team
# Copyright 2011-2015 Igalia, S.L.
# TODO - JD: Can this logic be moved to the default braille generator?
# TODO - JD: Can this logic be moved into the default braille generator?
# TODO - JD: Can this be merged into the default's
# TODO - JD: Can this logic be moved into the default speech generator?
# TODO - JD: Can this logic be moved to the default speech generator?
# TODO - JD: The default speech generator's method determines group membership
# via the member-of relation. We cannot count on that here. Plus, radio buttons
# on the web typically live in a group which is labelled. Thus the new-ancestor
# presentation accomplishes the same thing. Unless this can be further sorted out,
# try to filter out some of the noise....
# We handle things even for non-document content due to issues in
# other toolkits (e.g. exposing list items to us that are not
# exposed to sighted users)
# Do this check before the roledescription check, e.g. navigation within VSCode's editor.
# TODO - JD: This function and its associated fake role really need to die....
# TODO - JD: Why isn't this logic part of normal table cell generation?
# TODO - Why not [[string, self.voice(speech_generator.DEFAULT, **args)]] ?
# Copyright 2014-2025 Igalia, S.L.
# Type annotations to override the base class types
# TODO - JD: Clean up the focused + alreadyFocused mess which by side effect is causing
# the content of some objects (e.g. table cells) to not be generated.
# TODO - JD: We're making an exception here because the default script's say_line()
# handles verbalized punctuation, indentation, repeats, etc. That adjustment belongs
# in the generators, but that's another potentially non-trivial change.
# Objects might be destroyed as a consequence of scrolling, such as in an infinite scroll
# list. Therefore, store its name and role beforehand. Objects in the process of being
# destroyed typically lose their name even if they lack the defunct state. If the name of
# the object is different after scrolling, we'll try to find a child with the same name and
# role.
# Reasons we don't want to dive deep into the object include:
# 1. Editors like VSCode use the entry role for the code editor.
# 2. Giant nested lists.
# We shouldn't use cache in this method, because if the last thing we presented
# included this object and offset (e.g. a Say All or Mouse Review), we're in
# danger of presenting irrelevant context.
# TODO - JD: Getting the caret context can, by side effect, update it. This in turn
# can prevent us from presenting table column headers when braille is enabled because
# we think they are not "new." Commit bd877203f0 addressed that, but we need to stop
# such side effects from happening in the first place.
# Hack: When panning to the left in a document, we want to start at
# the right/bottom of each new object. For now, we'll pan there.
# When time permits, we'll give our braille code some smarts.
# Hack: When panning to the right in a document, we want to start at
# the left/top of each new object. For now, we'll pan there. When time
# permits, we'll give our braille code some smarts.
# TODO - JD: Make this a TextEventReason.
# TODO - JD: Can this logic be removed?
# https://bugzilla.mozilla.org/show_bug.cgi?id=1867044
# Because we cannot count on the app firing the right state-changed events
# for descendants.
# Xlib.display -- high level display object
# modify it under the terms of the GNU Lesser General Public License
# as published by the Free Software Foundation; either version 2.1
# of the License, or (at your option) any later version.
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Lesser General Public License for more details.
# Python modules
# Python 2/3 compatibility.
# Xlib modules
# Xlib.protocol modules
# Xlib.xobjects modules
# Implement a cache of atom names, used by Window objects when
# dealing with some ICCCM properties not defined in Xlib.Xatom
# don't cache NONE responses in case someone creates this later
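The don't-cache-NONE behavior above can be sketched like this. The class and `lookup` callable are hypothetical stand-ins for the real round trip to the X server, assuming `X.NONE == 0`.

```python
class AtomCache:
    """Cache atom-name lookups, but never cache a NONE (0) response,
    in case someone creates the atom later."""

    NONE = 0  # X.NONE

    def __init__(self, lookup):
        self._lookup = lookup  # stand-in for the server round trip
        self._cache = {}

    def intern(self, name):
        try:
            return self._cache[name]
        except KeyError:
            atom = self._lookup(name)
            if atom != self.NONE:  # don't cache NONE responses
                self._cache[name] = atom
            return atom
```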
# Create the keymap cache
# Translations for keysyms to strings.
# Find all supported extensions
# a dict that maps the event name to the code
# or, when it's an event with a subcode, to a tuple of (event,subcode)
# note this wraps the dict so you address it as
# extension_event.EXTENSION_EVENT_NAME rather than
# extension_event["EXTENSION_EVENT_NAME"]
# Go through all extension modules
# Import the module and fetch it
# Call initialisation function
# Finalize extensions by creating new classes
# Problem: we have already created some objects without the
# extensions: the screen roots and default colormaps.
# Fix that by reinstantiating them.
# Do a light-weight reply request to sync.  There must
# be a better way to do it...
# We need this to handle display extension methods
### display information retrieval
### Extension module interface
# Maybe should check extension overrides too
# store subcodes as a tuple of (event code, subcode) in the
# extension dict maintained in the display object
### keymap cache implementation
# The keycode->keysym map is stored in a list with 256 elements.
# Each element represents a keycode, and the tuple elements are
# the keysyms bound to the key.
# The keysym->keycode map is stored in a mapping, where the keys
# are keysyms.  The values are a sorted list of tuples with two
# elements each: (index, keycode)
# keycode is the code for a key to which this keysym is bound, and
# index is the keysyms index in the map for that keycode.
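The two structures described above (a 256-entry keycode->keysyms list and a keysym->sorted-(index, keycode) mapping) can be built as in this sketch. The function name and arguments are illustrative; 8 is the conventional minimum X keycode.

```python
def build_keymaps(code_to_syms, first_keycode=8):
    """Build the keymap cache structures.

    code_to_syms: list of keysym tuples, one per keycode, starting at
    first_keycode.  Returns (codes, syms) where codes is a 256-entry
    list indexed by keycode, and syms maps each keysym to a sorted
    list of (index, keycode) tuples.
    """
    codes = [()] * 256
    for offset, bound in enumerate(code_to_syms):
        codes[first_keycode + offset] = tuple(bound)

    syms = {}
    for keycode, bound in enumerate(codes):
        for index, keysym in enumerate(bound):
            syms.setdefault(keysym, []).append((index, keycode))
    for entries in syms.values():
        entries.sort()
    return codes, syms
```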
# Copy the map list, reversing the arguments
# Delete all sym->code maps for the changed codes
# Get the new keyboard mapping
# Replace code->sym map with the new map
# Update sym->code map
### client-internal keysym to string translations
### X requests
# Xlib.__init__ -- glue for Xlib package
# Explicitly exclude threaded, so that it isn't imported by
# Xlib.Xcursorfont -- standard cursors
# Xlib.Xutil -- ICCCM definitions and similar stuff
# Xlib.X -- basic X constants
# Avoid overwriting None if doing "from Xlib.X import *"
# background pixmap in CreateWindow
# and ChangeWindowAttributes
# border pixmap in CreateWindow
# special VisualID and special window
# class passed to CreateWindow
# destination window in SendEvent
# focus window in SetInputFocus
# special Atom, passed to GetProperty
# special Key Code, passed to GrabKey
# special Button Code, passed to GrabButton
# special Resource ID passed to KillClient
# special Time
# special KeySym
#-----------------------------------------------------------------------
# Event masks:
# Event names:
# Used in "type" field in XEvent structures.  Not to be confused with event
# masks above.  They start from 2 because 0 and 1 are reserved in the
# protocol for errors and replies.
# must be bigger than any event
# Key masks:
# Used as modifiers to GrabButton and GrabKey, results of QueryPointer,
# state in various key-, mouse-, and button-related events.
# Modifier names:
# Used to build a SetModifierMapping request or to read a
# GetModifierMapping request.  These correspond to the masks defined above.
# Button masks:
# Used in same manner as Key masks above. Not to be confused with button
# names below.  Note that 0 is already defined above as "AnyButton".
# used in GrabButton, GrabKey
# Button names:
# Used as arguments to GrabButton and as detail in ButtonPress and
# ButtonRelease events.  Not to be confused with button masks above.
# Note that 0 is already defined above as "AnyButton".
# XXX These still need documentation -- for now, read <X11/X.h>
# Xlib.threaded -- Import this module to enable threading
# We change the allocate_lock function in Xlib.support.lock to
# return a basic Python lock, instead of the default dummy lock
# Xlib.XK -- X keysym defs
# This module defines some functions for working with X keysyms as well
# as a modular keysym definition and loading mechanism. See the keysym
# definition modules in the Xlib/keysymdef directory.
# Get a reference to XK.__dict__ a.k.a. globals
# Import just the keysyms module.
# Extract names of just the keysyms.
# Copy the named keysyms into XK.__dict__
## k = mod.__dict__[keysym]; assert k == int(k)  # probably too much.
# And get rid of the keysym module.
# Always import miscellany and latin1 keysyms
# ISO latin 1, LSB is the code
# We should be able to do these things quite automatically
# for latin2, latin3, etc, in Python 2.0 using the Unicode,
# but that will have to wait.
# Xlib.xauth -- ~/.Xauthority access
# entry format (all shorts in big-endian)
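Each field of an `~/.Xauthority` entry is a big-endian short length followed by that many bytes. A minimal sketch of reading one such field; the helper name is an assumption, not the module's own:

```python
import struct

def read_counted(data, offset):
    """Read one length-prefixed field of an ~/.Xauthority entry:
    a big-endian unsigned short length, then that many bytes.
    Returns (field_bytes, new_offset)."""
    (length,) = struct.unpack('>H', data[offset:offset + 2])
    offset += 2
    return data[offset:offset + length], offset + length
```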
# raise an error?  this should get partially caught by the XNoAuthError in get_best_auth..
# Xlib.rdb -- X resource database implementation
# See end of file for an explanation of the algorithm and
# data structures used.
# Standard modules
# Set up a few regexpes for parsing string representation of resources
# Constants used for determining which match is best
# Option error class
# First split string into lines
# Skip empty line
# Handle continued lines
# Split line into resource and value
# Bad line, just ignore it silently
# Convert all escape sequences in value
# strip the last value part to get rid of any
# unescaped blanks
# Split res into components and bindings
# If the last part is empty, this is an invalid resource
# which we simply ignore
# Create a new mapping/value group
# Use second mapping if a loose binding, first otherwise
# Insert value into the derived db
# Split name and class into their parts
# It is an error for name and class to have different number
# of parts
# Lock database and wrap the lookup code in a try-finally
# block to make sure that it is unlocked.
# Precedence order: name -> class -> ?
# Special case for the unlikely event that the resource
# only has one component
# Special case for resources which begins with a loose
# binding, e.g. '*foo.bar'
# Now iterate over all components until we find the best match.
# For each component, we choose the best partial match among
# the mappings by applying these rules in order:
# Rule 1: If the current group contains a match for the
# name, class or '?', we drop all previously found loose
# binding mappings.
# Rule 2: A matching name has precedence over a matching
# class, which in turn has precedence over '?'.
# Rule 3: Tight bindings have precedence over loose
# bindings.
# Work on the first element == the best current match
# print 'path:  ', x.path
# if x.skip:
# print
# Attempt to find a match in x
# Hey, we actually found a value!
# Else just insert the new match
# Generate a new loose match
# Oh well, nothing matched
# Can't make another skip if we have run out of components
# If this already is a skip match, clone a new one
# Only generate a skip match if the loose binding mapping
# This is a dead end match
# Helper function for ResourceDB.__getitem__()
# Helper functions for ResourceDB.update()
# DEST already contains this component, update it
# Update tight and loose binding databases
# If a value has been set in SRC, update
# value in DEST
# COMP not in src, make a deep copy
# Helper functions for output
# There's a value for this component
# Output tight and loose bindings
# If first or last character is space or tab, escape them.
# Option type definitions
# Common X options
# Notes on the implementation:
# Resource names are split into their components, and each component
# is stored in a mapping.  The value for a component is a tuple of two
# or three elements:
# tightmapping contains the next components which are connected with a
# tight binding (.).  loosemapping contains the ones connected with
# loose binding (*).  If value is present, then this component is the
# last component for some resource which has that value.
# The top level components are stored in the mapping r.db, where r is
# the resource object.
# Example:  Inserting "foo.bar*gazonk: yep" into an otherwise empty
# resource database would give the following structure:
# { 'foo': ( { 'bar': ( { },
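The insertion that produces that nested (tightmapping, loosemapping[, value]) structure can be sketched as below. This is an illustration only, using lists instead of the tuples the notes describe, and the function signature is an assumption.

```python
def insert_resource(db, components, bindings, value):
    """Insert one resource into a nested [tight, loose(, value)] structure.

    components: name parts, e.g. ['foo', 'bar', 'gazonk'] for
    "foo.bar*gazonk"; bindings: '.' or '*' for the binding *after*
    each component except the last.  The last component carries value.
    """
    mapping = db
    entry = None
    for i, comp in enumerate(components):
        if comp not in mapping:
            mapping[comp] = [{}, {}]  # [tightmapping, loosemapping], no value
        entry = mapping[comp]
        if i < len(components) - 1:
            # '.' descends into the tight mapping, '*' into the loose one
            mapping = entry[0] if bindings[i] == '.' else entry[1]
    if len(entry) == 2:
        entry.append(value)
    else:
        entry[2] = value
```

With an empty `db`, inserting `"foo.bar*gazonk: yep"` yields `db['foo'][0]['bar'][1]['gazonk']` ending in `'yep'`, matching the shape in the example above.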
# Xlib.error -- basic error classes
# Always 0
# Xlib.Xatom -- Standard X atoms
# Xlib.protocol.display -- core display communication
# Internal structures for communication, grouped
# by their function and locks
# Socket error indicator, set when the socket is closed
# in one way or another
# Event queue
# Unsent request queue and sequence number counter
# Send-and-receive loop, see function send_and_receive
# for a detailed explanation
# Calculate optimal default buffer size for recv.
# Data used by the send-and-receive loop
# Resource ID structures
# Use a default error handler, one which just prints the error
# Right, now we're all set up for the connection setup
# request with the server.
# Figure out which endianness the hardware uses
# Send connection setup
# Did connection fail?
# Set up remaining info
# Public interface
# Main lock, so that only one thread at a time performs the
# event waiting code.  This at least guarantees that the first
# thread calling next_event() will get the next event, although
# no order is guaranteed among other threads calling next_event()
# while the first is blocking.
# Lock event queue, so we can check if it is empty
# We have to loop until we get an event, as
# we might be woken up when there is no event.
# Lock send_recv so no send_and_receive
# can start or stop while we're checking
# whether there is one active.
# Release the event queue to allow a send_and_recv to
# insert any events now.
# Call send_and_recv, which will return when
# something has occurred
# Before looping around, lock the event queue against
# modifications.
# Whew, we have an event!  Remove it from
# the event queue and release its write lock.
# Finally, allow any other threads which have called next_event()
# while we were waiting to proceed.
# And return the event!
# Make a send_and_recv pass, receiving any events
# Lock the queue, get the event count, and unlock again.
# Attempting to free a resource id outside our range
# Private functions
# Clear out data structures
# Close the connection
# Set a connection closed indicator
# We go to sleep if there is already a thread doing what we
# want to do:
# FIXME: It would be good if we could also sleep when we're waiting on
# a response to a request that has already been sent.
# Signal that we are waiting for something.  These locks
# together with the *_waiting variables are used as
# semaphores.  When an event or a request response arrives,
# it will zero the *_waiting and unlock the lock.  The
# locks will also be unlocked when an active send_and_recv
# finishes to signal the other waiting threads that one of
# them has to take over the send_and_recv function.
# All this makes these locks and variables a part of the
# send_and_recv control logic, and hence must be modified
# only when we have the send_recv_lock locked.
# Release send_recv, allowing a send_and_recv
# to terminate or other threads to queue up
# Return immediately if flushing, even if that
# might mean that not necessarily all requests
# have been sent.
# Wait for something to happen, as the wait locks are
# unlocked either when what we wait for has arrived (not
# necessarily the exact object we're waiting for, though),
# or when an active send_and_recv exits.
# Release it immediately afterwards as we're only using
# the lock for synchronization.  Since we're not modifying
# event_waiting or request_waiting here we don't have
# to lock send_and_recv_lock.  In fact, we can't do that
# or we trigger a dead-lock.
# Return to caller to let it check whether it has
# got the data it was waiting for
# There's no thread doing what we need to do.  Find out exactly
# what to do
# There must always be some thread receiving data, but it need not
# necessarily be us
# Loop, receiving and sending data.
# We might want to start sending data
# Turn all requests on request queue into binary form
# and append them to self.data_send
# If there now is data to send, mark us as senders
# We've done all setup, so release the lock and start waiting
# for the network to fire up
# There's no longer anything useful we can do here.
# If we're flushing, figure out how many bytes we
# have to send so that we're not caught in an interminable
# loop if other threads continuously append requests.
# We're only checking for the socket to be writable
# if we're the sending thread.  We always check for it
# to become readable: either we are the receiving thread
# and should take care of the data, or the receiving thread
# might finish receiving after having read the data
# Timeout immediately if we're only checking for
# something to read or if we're flushing, otherwise block
# Ignore errors caused by a signal received while blocking.
# All other errors are re-raised.
# We must lock send_and_recv before we can loop to
# the start of the loop
# Socket is ready for sending data, send as much as possible.
# There is data to read
# We're the receiving thread, parse the data
# Clear up, set a connection closed indicator and raise it
# Otherwise return, allowing the calling thread to figure
# out if it has got the data it needs
# We must be a sending thread if we're here, so reset
# that indicator.
# And return to the caller
# There are three different end of send-recv-loop conditions.
# However, we don't leave the loop immediately, instead we
# try to send and receive any data that might be left.  We
# do this by giving a timeout of 0 to select to poll
# the socket.
# When flushing: all requests have been sent
# When waiting for an event: an event has been read
# When processing a certain request: got its reply
# Always break if we just want to receive as much as possible
# Else there may still be data which must be sent, or
# we haven't got the data we waited for.  Lock and loop
# We have accomplished the callers request.
# Record that there are now no active send_and_recv,
# and wake up all waiting thread
# Parse ordinary server response
# Check the first byte to find out what kind of response it is
# Are we waiting for additional data for the current packet?
# Every response is at least 32 bytes long, so don't bother
# until we have received that much
# Error response
# Request response or generic event.
# Set reply length, and loop around to see if
# we have got the full response
# Else non-generic event
# Code is second byte
# Fetch error class
# print 'recv Error:', e
# Error for a request whose response we are waiting for,
# or which has an error handler.  However, if the error
# handler indicates that it hasn't taken care of the
# error, pass it on to the default error handler
# If this was a ReplyRequest, unlock any threads waiting
# for a request to finish
# Else call the error handler
# Sequence number is always data[2:4]
# Do sanity check before trying to parse the data
# print 'recv Request:', req
# Unlock any response waiting threads
# Skip bit 8, that is set if this event came from an SendEvent
# Python2 compatibility
# this etype refers to a set of sub-events with individual subcodes
# Drop all requests having an error handler,
# but which obviously succeeded.
# Decrement it by one, so that we don't remove the request
# that generated these events, if there is such a one.
# Bug reported by Ilpo Nyyssönen
# Note: not all events have a sequence_number field!
# (e.g. KeymapNotify).
# print 'recv Event:', e
# Insert the event into the queue
# Unlock any event waiting threads
# Normalize sequence numbers, even if they have wrapped.
# This ensures that
# No matching events at all
# Find last req <= sno
# Did serial numbers just wrap around?
# Delete all requests such that req <= sno
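The wrap-around normalization described above can be sketched as follows. X packets carry only a 16-bit sequence number, so the client must splice it onto the high bits of the last full number it has seen. This is an illustrative helper, assuming the received number never lags the last one by more than 2**16:

```python
def normalize_sequence(received16, last_full):
    """Reconstruct a full sequence number from the 16-bit value in a
    server packet.  If the spliced result is smaller than the last
    known full number, the 16-bit counter has wrapped."""
    full = (last_full & ~0xffff) | received16
    if full < last_full:
        full += 0x10000  # serial numbers just wrapped around
    return full
```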
# Reply for an unknown request?  No, that can't happen.
# Only the ConnectionSetupRequest has been sent so far
# print 'data_send:', repr(self.data_send)
# print 'data_recv:', repr(self.data_recv)
# The full response hasn't arrived yet
# Connection failed or further authentication is needed.
# Set reason to the reason string
# Else connection succeeded, parse the reply
# The base reply is 8 bytes long
# Loop around to see if we have got the additional data
# Don't bother about locking, since no other threads have
# access to the display yet
# However, we must lock send_and_recv, but we don't have
# to loop.
# Xlib.protocol.__init__ -- glue for Xlib.protocol package
# Xlib.protocol.structs -- some common request structures
# Xlib.protocol.request -- definitions of core requests
# Somebody must have smoked some really wicked weed when they
# defined the ListFontsWithInfo request:
# The server sends a reply for _each_ matching font...
# It then sends a special reply (name length == 0) to indicate
# that there are no more fonts in the reply.
# This means that we have to do some special parsing to see if
# we have got the end-of-reply reply.  If we haven't, we
# have to reinsert the request in the front of the
# display.sent_request queue to catch the next response.
# Bastards.
# Override the default __getattr__, since it isn't usable for
# the list reply.  Instead provide a __getitem__ and a __len__.
# Xlib.protocol.rq -- structure primitives for request, events and errors
# These are struct codes, we know their byte sizes
# Unfortunately, we don't know the array sizes of B, H and L, since
# these use the underlying architecture's size for a char, short and
# long.  Therefore we probe for their sizes, and additionally create
# a mapping that translates from struct codes to array codes.
# Bleah.
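The probing described above can be done roughly like this: measure each struct code's native byte size and find an `array` typecode with a matching itemsize. A sketch only; the variable name mirrors the one printed in the next comment, but the probing loop here is illustrative.

```python
import array
import struct

# B, H and L use the architecture's native char/short/long sizes, so we
# probe for an array typecode whose itemsize matches each struct code.
struct_to_array_codes = {}
for struct_code in ('B', 'H', 'L'):
    size = struct.calcsize(struct_code)  # native size for this code
    for array_code in ('B', 'H', 'I', 'L'):
        if array.array(array_code).itemsize == size:
            struct_to_array_codes[struct_code] = array_code
            break
```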
# print array_unsigned_codes, struct_to_array_codes
# if not display:
# Single-char values, we'll assume that means integer lists.
# Objects usable for List and FixedList fields.
# Struct is also usable.
# Structures for to_binary, parse_value and parse_binary
# Append structcode if there is one and we haven't
# got any varsize fields yet.
# Only store fields with values
# If we have got one varsize field, all the rest must
# also be varsize fields.
# These functions get called only once, as they will override
# themselves with dynamically created functions in the Struct
# Emulate Python function argument handling with our field names
# /argument handling
# First pack all varfields so their lengths and formats are
# available when we pack their static LengthFields and
# FormatFields
# Construct item list for struct.pack call, packing all static fields.
# If this is a total length field, insert
# the calculated field value here
# Format field, just insert the value we got previously
# A constant field, insert its value directly
# Value fields
# If there's a value check/convert function, call it
# Else just use the argument as provided
# Multivalue field.  Handled like single valuefield,
# but the values are tuple-unpacked into separate arguments
# which are appended to pack_items
# Fields without names should be ignored, and there should
# not be any length or format fields if this function
# ever gets called.  (If there were such fields, there should
# be a matching field in var_fields and then parse_binary
# would have been called instead.)
# If this field has a parse_value method, call it, otherwise
# use the unpacked value as is.
# Fields without names should be ignored.  This is typically
# pad and constant fields
# Store index in val for Length and Format fields, to be used
# when treating varfields.
# Treat value fields the same way as in parse_value.
# Call parse_binary_value for each var_field, passing the
# length and format values from the unpacked val.
# Let values be simple strings, meaning a delta of 0
# A tuple, it should be (delta, string)
# Encode it as one or more textitems
# Else an integer, i.e. a font change
# Use fontable cast function if instance
# Pad out to four byte length
# font change
# skip null strings
# string with delta
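The four-byte padding mentioned above is the X protocol's usual alignment rule. A minimal sketch, with a hypothetical helper name:

```python
def pad4(data):
    """Pad request data out to a four-byte boundary with NUL bytes,
    as the X protocol requires."""
    return data + b'\0' * ((4 - len(data) % 4) % 4)
```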
# Send the request and wait for the reply if we haven't
# already got one.  This means that reply() can safely
# be called more than one time.
# If error has been set, raise it
# split event type into type and send_event bit
# Xlib.protocol.event -- definitions of core events
# Xlib.keysymdef -- X keysym defs
# Xlib.ext.res -- X-Resource extension module
# v1.0
# v1.2
# inline struct ResourceIdSpec to work around
# a parser bug with nested objects
# Automatically generated file; DO NOT EDIT.
# Generated from: /usr/share/xcb/shape.xml
# Sub events.
# Xlib.ext.__init__ -- X extension modules
# __extensions__ is a list of tuples: (extname, extmod)
# extname is the name of the extension according to the X
# protocol.  extmod is the name of the module in this package.
# We load this first so other extensions can register generic event data
# Xlib.ext.damage -- DAMAGE extension module
# Event codes #
# Error codes #
# DamageReportLevel options
# Events #
# Xlib.ext.xinput -- XInput extension module
# optimised math.ldexp(float(frac), -32)
# We need to build a "binary mask" that (as far as I can tell) is
# encoded in native byte order from end to end.  The simple case is
# with a single unsigned 32-bit value, for which we construct an
# array with just one item.  For values too big to fit inside 4
# bytes we build a longer array, being careful to maintain native
# byte order across the entire set of values.
# Mask: bitfield of <length> button states.
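A minimal sketch of the mask construction described above, using the array module for native byte order (the function name is illustrative, not python-xlib's actual code):

```python
import array

def build_button_mask(mask_len, set_bits):
    """Build a bitfield of `mask_len` 32-bit words as a native-byte-order
    array of unsigned ints, as the comment above describes.  The 'I'
    typecode is native-order and at least 4 bytes wide on common
    platforms.  Illustrative sketch only."""
    words = array.array('I', [0] * mask_len)
    for bit in set_bits:
        # Each bit lands in word bit // 32, at position bit % 32,
        # keeping native byte order across the whole set of values.
        words[bit // 32] |= 1 << (bit % 32)
    return words
```

For example, `build_button_mask(2, [1, 35])` sets bit 1 in the first word and bit 3 in the second.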
# Xlib.ext.xinerama -- Xinerama extension module
# IsActive is only available from Xinerama 1.1 and later.
# It should be used in preference to GetState.
# QueryScreens is only available from Xinerama 1.1 and later
# Hmm. This one needs to read the screen data from the socket. Ooops...
# GetInfo is only available in Xinerama 1.0, and *NOT* later!  Untested
# An array of subwindow slots goes here. Bah.
# Xlib.ext.xtest -- XTEST extension module
# Xlib.ext.record -- RECORD extension module
# Record_RC
# Record_Element_Header
# Record_XIDBase
# This request receives multiple responses, so we need to keep
# ourselves in the 'sent_requests' list in order to receive them all.
# See the discussion on ListFontsWithInfo in request.py
# Hack ourselves a sequence number, used by the code in
# Xlib.protocol.display.Display.parse_request_response()
# Xlib.ext.ge -- Generic Event extension module
# Some generic events make use of this space, but with
# others the data is simply discarded.  In any case we
# don't need to explicitly pad this out as we are
# always given at least 32 bytes and we save
# everything after the first ten as the "data" field.
#rq.Pad(22),
# $Id: xtest.py,v 1.1 2000/08/21 10:03:45 petli Exp $
# Xlib.ext.composite -- Composite extension module
# FIXME: this should be a Region from XFIXES extension
# FIXME: create Region object and return it
# Xlib.ext.security -- SECURITY extension module
# The order of fields here does not match the specifications I've seen
# online, but it *does* match with the X.org implementation.  I guess the
# spec is out-of-date.
# Xlib.ext.randr -- RandR extension module
# V1.2 additions
# RRNotify Subcodes
# Event selection bits #
# Constants #
# used in the rotation field; rotation and reflection in 0.1 proto.
# new in 1.0 protocol, to allow reflection of screen
# new in 1.2 protocol
# event types?
# Conventional RandR output properties
# subpixel order - TODO: These constants are part of the RENDER extension and
# should be moved there if/when that extension is added to python-xlib.
# Error Codes #
# Error classes #
# Data Structures #
# TODO: This struct is part of the RENDER extension and should be moved there
# if/when that extension is added to python-xlib.
#FIXME: All of these are listed as FIXED in the protocol header.
# Requests #
# added in version 1.1
# XCB's protocol description disagrees with the X headers on this; ignoring.
#rq.List('rates', RandR_Rates) #FIXME: Why does uncommenting this cause an error?
# version 1.3
#FIXME: The protocol says FIXED? http://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt#n2161
# Version 1.5 methods
# Initialization #
# If the server is running RANDR 1.5+, enable 1.5 compatible methods and events
# version 1.5 compatible
# add RRNotify events (1 event code with 3 subcodes)
# Xlib.ext.dpms -- X Display Power Management Signaling
# DPMS Extension Power Levels
# Xlib.ext.screensaver -- X ScreenSaver extension module
# Event members
# Notify state
# Notify kind
# rq.Set('event_mask', 4, (NotifyMask, CycleMask)),
# Xlib.ext.nvcontrol -- NV-CONTROL extension module
# Some attributes may only be read; some may require a display_mask
# argument and others may be valid only for specific target types.
# This information is encoded in the "permission" comment after each
# attribute #define, and can be queried at run time with
# XNVCTRLQueryValidAttributeValues() and/or
# XNVCTRLQueryValidTargetAttributeValues()
# Key to Integer Attribute "Permissions":
# R: The attribute is readable (in general, all attributes will be
# W: The attribute is writable (attributes may not be writable for
# D: The attribute requires the display mask argument.  The
# G: The attribute may be queried using an NV_CTRL_TARGET_TYPE_GPU
# F: The attribute may be queried using an NV_CTRL_TARGET_TYPE_FRAMELOCK
# X: When Xinerama is enabled, this attribute is kept consistent across
# V: The attribute may be queried using an NV_CTRL_TARGET_TYPE_VCSC
# I: The attribute may be queried using an NV_CTRL_TARGET_TYPE_GVI target type
# Q: The attribute is a 64-bit integer attribute;  use the 64-bit versions
# C: The attribute may be queried using an NV_CTRL_TARGET_TYPE_COOLER target
# S: The attribute may be queried using an NV_CTRL_TARGET_TYPE_THERMAL_SENSOR
# T: The attribute may be queried using an
# NOTE: Unless mentioned otherwise, all attributes may be queried using
# Integer attributes:
# Integer attributes can be queried through the XNVCTRLQueryAttribute() and
# XNVCTRLQueryTargetAttribute() function calls.
# Integer attributes can be set through the XNVCTRLSetAttribute() and
# XNVCTRLSetTargetAttribute() function calls.
# Unless otherwise noted, all integer attributes can be queried/set
# using an NV_CTRL_TARGET_TYPE_X_SCREEN target.  Attributes that cannot
# take an NV_CTRL_TARGET_TYPE_X_SCREEN also cannot be queried/set through
# XNVCTRLQueryAttribute()/XNVCTRLSetAttribute() (Since these assume
# an X Screen target).
# NV_CTRL_FLATPANEL_SCALING - not supported
# not supported
# NV_CTRL_FLATPANEL_DITHERING - not supported
# NV_CTRL_DITHERING should be used instead.
# NV_CTRL_DITHERING - the requested dithering configuration;
# possible values are:
# 0: auto     (the driver will decide when to dither)
# 1: enabled  (the driver will always dither when possible)
# 2: disabled (the driver will never dither)
# RWDG
# NV_CTRL_DIGITAL_VIBRANCE - sets the digital vibrance level for the
# specified display device.
# NV_CTRL_BUS_TYPE - returns the bus type through which the specified device
# is connected to the computer.
# When this attribute is queried on an X screen target, the bus type of the
# GPU driving the X screen is returned.
# R--GI
# NV_CTRL_TOTAL_GPU_MEMORY - returns the total amount of memory available
# to the specified GPU (or the GPU driving the specified X
# screen).  Note: if the GPU supports TurboCache(TM), the value
# reported may exceed the amount of video memory installed on the
# GPU.  The value reported for integrated GPUs may likewise exceed
# the amount of dedicated system memory set aside by the system
# BIOS for use by the integrated GPU.
# R--G
# NV_CTRL_IRQ - returns the interrupt request line used by the specified device.
# When this attribute is queried on an X screen target, the IRQ of the GPU
# driving the X screen is returned.
# NV_CTRL_OPERATING_SYSTEM - returns the operating system on which
# the X server is running.
# NV_CTRL_SYNC_TO_VBLANK - enables sync to vblank for OpenGL clients.
# This setting is only applied to OpenGL clients that are started
# after this setting is applied.
# RW-X
# NV_CTRL_LOG_ANISO - enables anisotropic filtering for OpenGL
# clients; on some NVIDIA hardware, this can only be enabled or
# disabled; on other hardware different levels of anisotropic
# filtering can be specified.  This setting is only applied to OpenGL
# clients that are started after this setting is applied.
# NV_CTRL_FSAA_MODE - the FSAA setting for OpenGL clients; possible
# FSAA modes:
# NV_CTRL_FSAA_MODE_2x     "2x Bilinear Multisampling"
# NV_CTRL_FSAA_MODE_2x_5t  "2x Quincunx Multisampling"
# NV_CTRL_FSAA_MODE_15x15  "1.5 x 1.5 Supersampling"
# NV_CTRL_FSAA_MODE_2x2    "2 x 2 Supersampling"
# NV_CTRL_FSAA_MODE_4x     "4x Bilinear Multisampling"
# NV_CTRL_FSAA_MODE_4x_9t  "4x Gaussian Multisampling"
# NV_CTRL_FSAA_MODE_8x     "2x Bilinear Multisampling by 4x Supersampling"
# NV_CTRL_FSAA_MODE_16x    "4x Bilinear Multisampling by 4x Supersampling"
# NV_CTRL_FSAA_MODE_8xS    "4x Multisampling by 2x Supersampling"
# NV_CTRL_UBB - returns whether UBB is enabled for the specified X
# screen.
# R--
# NV_CTRL_OVERLAY - returns whether the RGB overlay is enabled for
# the specified X screen.
# NV_CTRL_STEREO - returns whether stereo (and what type) is enabled
# for the specified X screen.
# NV_CTRL_EMULATE - not supported
# NV_CTRL_TWINVIEW - returns whether TwinView is enabled for the
# specified X screen.
# NV_CTRL_CONNECTED_DISPLAYS - deprecated
# NV_CTRL_BINARY_DATA_DISPLAYS_CONNECTED_TO_GPU and
# NV_CTRL_BINARY_DATA_DISPLAYS_ASSIGNED_TO_XSCREEN should be used instead.
# NV_CTRL_ENABLED_DISPLAYS - Event that notifies when one or more display
# devices are enabled or disabled on a GPU and/or X screen.
# This attribute may be queried through XNVCTRLQueryTargetAttribute()
# using a NV_CTRL_TARGET_TYPE_GPU or NV_CTRL_TARGET_TYPE_X_SCREEN target.
# Note: Querying this value has been deprecated.
# ---G
# Integer attributes specific to configuring Frame Lock on boards that
# NV_CTRL_FRAMELOCK - returns whether the underlying GPU supports
# Frame Lock.  All of the other frame lock attributes are only
# applicable if NV_CTRL_FRAMELOCK is _SUPPORTED.
# NV_CTRL_FRAMELOCK_MASTER - deprecated
# NV_CTRL_FRAMELOCK_DISPLAY_CONFIG should be used instead.
# NV_CTRL_FRAMELOCK_POLARITY - sync either to the rising edge of the
# frame lock pulse, the falling edge of the frame lock pulse or both.
# On Quadro Sync II, this attribute is ignored when
# NV_CTRL_USE_HOUSE_SYNC is OUTPUT.
# using a NV_CTRL_TARGET_TYPE_FRAMELOCK or NV_CTRL_TARGET_TYPE_X_SCREEN
# RW-F
# NV_CTRL_FRAMELOCK_SYNC_DELAY - delay between the frame lock pulse
# and the GPU sync.  This value must be multiplied by
# NV_CTRL_FRAMELOCK_SYNC_DELAY_RESOLUTION to determine the sync delay in
# nanoseconds.
# USAGE NOTE: NV_CTRL_FRAMELOCK_SYNC_DELAY_MAX and
# NV_CTRL_FRAMELOCK_SYNC_INTERVAL - how many house sync pulses
# between the frame lock sync generation (0 == sync every house sync);
# this only applies to the master when receiving house sync.
# NV_CTRL_FRAMELOCK_PORT0_STATUS - status of the rj45 port0.
# R--F
# NV_CTRL_FRAMELOCK_PORT1_STATUS - status of the rj45 port1.
# NV_CTRL_FRAMELOCK_HOUSE_STATUS - returns whether or not the house
# sync input signal was detected on the BNC connector of the frame lock
# board.
# NV_CTRL_FRAMELOCK_SYNC - enable/disable the syncing of display
# devices to the frame lock pulse as specified by previous calls to
# NV_CTRL_FRAMELOCK_DISPLAY_CONFIG.
# This attribute can only be queried through XNVCTRLQueryTargetAttribute()
# using a NV_CTRL_TARGET_TYPE_GPU target.  This attribute cannot be
# queried using a NV_CTRL_TARGET_TYPE_X_SCREEN.
# RW-G
# NV_CTRL_FRAMELOCK_SYNC_READY - reports whether a frame lock
# board is receiving sync (regardless of whether or not any display
# devices are using the sync).
# NV_CTRL_FRAMELOCK_STEREO_SYNC - this indicates that the GPU stereo
# signal is in sync with the frame lock stereo signal.
# using a NV_CTRL_TARGET_TYPE_GPU or NV_CTRL_TARGET_TYPE_X_SCREEN
# NV_CTRL_FRAMELOCK_TEST_SIGNAL - to test the connections in the sync
# group, tell the master to enable a test signal, then query port[01]
# status and sync_ready on all slaves.  When done, tell the master to
# disable the test signal.  Test signal should only be manipulated
# while NV_CTRL_FRAMELOCK_SYNC is enabled.
# The TEST_SIGNAL is also used to reset the Universal Frame Count (as
# returned by the glXQueryFrameCountNV() function in the
# GLX_NV_swap_group extension).  Note: for best accuracy of the
# Universal Frame Count, it is recommended to toggle the TEST_SIGNAL
# on and off after enabling frame lock.
# NV_CTRL_FRAMELOCK_ETHERNET_DETECTED - The frame lock boards are
# cabled together using regular cat5 cable, connecting to rj45 ports
# on the backplane of the card.  There is some concern that users may
# think these are ethernet ports and connect them to a
# router/hub/etc.  The hardware can detect this and will shut off to
# prevent damage (either to itself or to the router).
# NV_CTRL_FRAMELOCK_ETHERNET_DETECTED may be called to find out if
# ethernet is connected to one of the rj45 ports.  An appropriate
# error message should then be displayed.  The _PORT0 and _PORT1
# values may be or'ed together.
# NV_CTRL_FRAMELOCK_VIDEO_MODE - get/set what video mode is used
# to interpret the house sync signal.  This should only be set
# on the master.
# During FRAMELOCK bring-up, the above values were redefined to
# these:
# NV_CTRL_FRAMELOCK_SYNC_RATE - this is the refresh rate that the
# frame lock board is sending to the GPU, in milliHz.
# NV_CTRL_FORCE_GENERIC_CPU - not supported
# NV_CTRL_OPENGL_AA_LINE_GAMMA - for OpenGL clients, allow
# Gamma-corrected antialiased lines to consider variances in the
# color display capabilities of output devices when rendering smooth
# lines.  Only available on recent Quadro GPUs.  This setting is only
# applied to OpenGL clients that are started after this setting is
# applied.
# NV_CTRL_FRAMELOCK_TIMING - this is TRUE when the gpu is both receiving
# and locked to an input timing signal. Timing information may come from
# the following places: Another frame lock device that is set to master,
# the house sync signal, or the GPU's internal timing from a display
# NV_CTRL_FLIPPING_ALLOWED - when TRUE, OpenGL will swap by flipping
# when possible; when FALSE, OpenGL will always swap by blitting.
# NV_CTRL_ARCHITECTURE - returns the architecture on which the X server is
# running.
# NV_CTRL_TEXTURE_CLAMPING - texture clamping mode in OpenGL.  By
# default, _SPEC is used, which forces OpenGL texture clamping to
# conform with the OpenGL specification.  _EDGE forces NVIDIA's
# OpenGL implementation to remap GL_CLAMP to GL_CLAMP_TO_EDGE,
# which is not strictly conformant, but some applications rely on
# the non-conformant behavior.
# NV_CTRL_CURSOR_SHADOW - not supported; use an ARGB cursor instead.
# When Application Control for FSAA is enabled, then what the
# application requests is used, and NV_CTRL_FSAA_MODE is ignored.  If
# this is disabled, then any application setting is overridden with
# NV_CTRL_FSAA_MODE
# When Application Control for LogAniso is enabled, then what the
# application requests is used, and NV_CTRL_LOG_ANISO is ignored.  If
# this is disabled, then any application setting is overridden with
# NV_CTRL_LOG_ANISO.
# IMAGE_SHARPENING adjusts the sharpness of the display's image
# quality by amplifying high frequency content.  Valid values will
# normally be in the range [0,32).  Only available on GeForceFX or
# newer.
# NV_CTRL_TV_OVERSCAN - not supported
# NV_CTRL_TV_FLICKER_FILTER - not supported
# NV_CTRL_TV_BRIGHTNESS  - not supported
# NV_CTRL_TV_HUE - not supported
# NV_CTRL_TV_CONTRAST - not supported
# NV_CTRL_TV_SATURATION - not supported
# NV_CTRL_TV_RESET_SETTINGS - not supported
# NV_CTRL_GPU_CORE_TEMPERATURE reports the current core temperature
# of the GPU driving the X screen.
# NV_CTRL_GPU_CORE_THRESHOLD reports the current GPU core slowdown
# threshold temperature, NV_CTRL_GPU_DEFAULT_CORE_THRESHOLD and
# NV_CTRL_GPU_MAX_CORE_THRESHOLD report the default and MAX core
# slowdown threshold temperatures.
# NV_CTRL_GPU_CORE_THRESHOLD reflects the temperature at which the
# GPU is throttled to prevent overheating.
# NV_CTRL_AMBIENT_TEMPERATURE reports the current temperature in the
# immediate neighbourhood of the GPU driving the X screen.
# NV_CTRL_PBUFFER_SCANOUT_SUPPORTED - returns whether this X screen
# supports scanout of FP pbuffers;
# if this screen does not support PBUFFER_SCANOUT, then all other
# PBUFFER_SCANOUT attributes are unavailable.
# PBUFFER_SCANOUT is supported if and only if:
# - Twinview is configured with clone mode.  The secondary screen is used to
# - The desktop is running with 16 bits per pixel.
# NV_CTRL_PBUFFER_SCANOUT_XID indicates the XID of the pbuffer used for
# scanout.
# The NV_CTRL_GVO_* integer attributes are used to configure GVO
# (Graphics to Video Out).  This functionality is available, for
# example, on the Quadro SDI Output card.
# The following is a typical usage pattern for the GVO attributes:
# - query NV_CTRL_GVO_SUPPORTED to determine if the X screen supports GVO.
# - specify NV_CTRL_GVO_SYNC_MODE (one of FREE_RUNNING, GENLOCK, or
# FRAMELOCK); if you specify GENLOCK or FRAMELOCK, you should also
# specify NV_CTRL_GVO_SYNC_SOURCE.
# - Use NV_CTRL_GVO_COMPOSITE_SYNC_INPUT_DETECTED and
# NV_CTRL_GVO_SDI_SYNC_INPUT_DETECTED to detect what input syncs are
# present.
# (If no analog sync is detected but it is known that a valid
# bi-level or tri-level sync is connected set
# NV_CTRL_GVO_COMPOSITE_SYNC_INPUT_DETECT_MODE appropriately and
# retest with NV_CTRL_GVO_COMPOSITE_SYNC_INPUT_DETECTED).
# - if syncing to input sync, query the
# NV_CTRL_GVIO_DETECTED_VIDEO_FORMAT attribute; note that Input video
# format can only be queried after SYNC_SOURCE is specified.
# - specify the NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT
# - specify the NV_CTRL_GVO_DATA_FORMAT
# - specify any custom Color Space Conversion (CSC) matrix, offset,
# and scale with XNVCTRLSetGvoColorConversion().
# - if using the GLX_NV_video_out extension to display one or more
# pbuffers, call glXGetVideoDeviceNV() to lock the GVO output for use
# by the GLX client; then bind the pbuffer(s) to the GVO output with
# glXBindVideoImageNV() and send pbuffers to the GVO output with
# glXSendPbufferToVideoNV(); see the GLX_NV_video_out spec for more
# - if using the GLX_NV_present_video extension, call
# glXBindVideoDeviceNV() to bind the GVO video device to current
# OpenGL context.
# Note that setting most GVO attributes only causes the value to be
# cached in the X server.  The values will be flushed to the hardware
# either when the next MetaMode is set that uses the GVO display
# device, or when a GLX pbuffer is bound to the GVO output (with
# glXBindVideoImageNV()).
# Note that GLX_NV_video_out/GLX_NV_present_video and X screen use
# are mutually exclusive.  If a MetaMode is currently using the GVO
# device, then glXGetVideoDeviceNV and glXBindVideoImageNV() will
# fail.  Similarly, if a GLX client has locked the GVO output (via
# glXGetVideoDeviceNV or glXBindVideoImageNV), then setting a
# MetaMode that uses the GVO device will fail.  The
# NV_CTRL_GVO_GLX_LOCKED event will be sent when a GLX client locks
# the GVO output.
# NV_CTRL_GVO_SUPPORTED - returns whether this X screen supports GVO;
# if this screen does not support GVO output, then all other GVO
# attributes are unavailable.
# NV_CTRL_GVO_SYNC_MODE - selects the GVO sync mode; possible values
# are:
# FREE_RUNNING - GVO does not sync to any external signal
# GENLOCK - the GVO output is genlocked to an incoming sync signal;
# genlocking locks at hsync.  This requires that the output video
# format exactly match the incoming sync video format.
# FRAMELOCK - the GVO output is frame locked to an incoming sync
# signal; frame locking locks at vsync.  This requires that the output
# video format have the same refresh rate as the incoming sync video
# format.
# RW-
# NV_CTRL_GVO_SYNC_SOURCE - if NV_CTRL_GVO_SYNC_MODE is set to either
# GENLOCK or FRAMELOCK, this controls which sync source is used as
# the incoming sync signal (either Composite or SDI).  If
# NV_CTRL_GVO_SYNC_MODE is FREE_RUNNING, this attribute has no
# effect.
# NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT - specifies the desired output video
# format for GVO devices or the desired input video format for GVI devices.
# Note that for GVO, the valid video formats may vary depending on
# the NV_CTRL_GVO_SYNC_MODE and the incoming sync video format.  See
# the definition of NV_CTRL_GVO_SYNC_MODE.
# Note that when querying the ValidValues for this data type, the
# values are reported as bits within a bitmask
# (ATTRIBUTE_TYPE_INT_BITS); unfortunately, there are more valid
# value bits than will fit in a single 32-bit value.  To solve this,
# query the ValidValues for NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT to
# check which of the first 31 VIDEO_FORMATS are valid, query the
# ValidValues for NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT2 to check which
# of the 32-63 VIDEO_FORMATS are valid, and query the ValidValues of
# NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT3 to check which of the 64-95
# VIDEO_FORMATS are valid.
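The three-way ValidValues split described above can be navigated with a small helper, assuming each mask is the 32-bit ValidValues bitmask reported for the corresponding attribute (the helper name is illustrative, not part of any NV-CONTROL binding):

```python
def video_format_valid(fmt, mask1, mask2, mask3):
    """Check VIDEO_FORMAT number `fmt` (0-95) against the three 32-bit
    ValidValues bitmasks reported for NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT,
    ..._VIDEO_FORMAT2 and ..._VIDEO_FORMAT3.  Illustrative sketch: it
    assumes format N maps to bit N % 32 of mask N // 32."""
    word, bit = divmod(fmt, 32)
    return bool((mask1, mask2, mask3)[word] & (1 << bit))
```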
# Note: Setting this attribute on a GVI device may also result in the
# RW--I
# The following have been renamed; NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT and the
# corresponding NV_CTRL_GVIO_* formats should be used instead.
# renamed
# NV_CTRL_GVIO_DETECTED_VIDEO_FORMAT - indicates the input video format
# detected for GVO or GVI devices; the possible values are the
# NV_CTRL_GVIO_VIDEO_FORMAT constants.
# For GVI devices, the jack number should be specified in the lower
# 16 bits of the "display_mask" parameter, while the channel number should be
# specified in the upper 16 bits.
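The jack/channel packing for GVI queries described above could be built like this (illustrative helper, not part of any official binding):

```python
def gvi_display_mask(jack, channel):
    """Pack the jack number into the lower 16 bits and the channel
    number into the upper 16 bits of the display_mask parameter, as
    the comment above describes.  Illustrative sketch only."""
    return ((channel & 0xFFFF) << 16) | (jack & 0xFFFF)
```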
# R--I
# NV_CTRL_GVO_INPUT_VIDEO_FORMAT - renamed
# NV_CTRL_GVIO_DETECTED_VIDEO_FORMAT should be used instead.
# NV_CTRL_GVO_DATA_FORMAT - This controls how the data in the source
# (either the X screen or the GLX pbuffer) is interpreted and
# Note: some of the below DATA_FORMATS have been renamed.  For
# example, R8G8B8_TO_RGB444 has been renamed to X8X8X8_444_PASSTHRU.
# This is to more accurately reflect DATA_FORMATS where the
# per-channel data could be either RGB or YCrCb -- the point is that
# the driver and GVO hardware do not perform any implicit color space
# conversion on the data; it is passed through to the SDI out.
# NV_CTRL_GVO_DISPLAY_X_SCREEN - not supported
# NV_CTRL_GVO_COMPOSITE_SYNC_INPUT_DETECTED - indicates whether
# Composite Sync input is detected.
# NV_CTRL_GVO_COMPOSITE_SYNC_INPUT_DETECT_MODE - get/set the
# Composite Sync input detect mode.
# NV_CTRL_GVO_SYNC_INPUT_DETECTED - indicates whether SDI Sync input
# is detected, and what type.
# NV_CTRL_GVO_VIDEO_OUTPUTS - indicates which GVO video output
# connectors are currently outputting data.
# NV_CTRL_GVO_FIRMWARE_VERSION - deprecated
# NV_CTRL_STRING_GVIO_FIRMWARE_VERSION should be used instead.
# NV_CTRL_GVO_SYNC_DELAY_PIXELS - controls the delay between the
# input sync and the output sync in numbers of pixels from hsync;
# this is a 12 bit value.
# If the NV_CTRL_GVO_CAPABILITIES_ADVANCE_SYNC_SKEW bit is set,
# then setting this value will set an advance instead of a delay.
# NV_CTRL_GVO_SYNC_DELAY_LINES - controls the delay between the input
# sync and the output sync in numbers of lines from vsync; this is a
# 12 bit value.
# NV_CTRL_GVO_INPUT_VIDEO_FORMAT_REACQUIRE - must be set for a period
# of about 2 seconds for the new InputVideoFormat to be properly
# locked to.  In nvidia-settings, we do a reacquire whenever genlock
# or frame lock mode is entered into, when the user clicks the
# "detect" button.  This value can be written, but always reads back
# _FALSE.
# -W-
# NV_CTRL_GVO_GLX_LOCKED - deprecated
# NV_CTRL_GVO_LOCK_OWNER should be used instead.
# NV_CTRL_GVIO_VIDEO_FORMAT_{WIDTH,HEIGHT,REFRESH_RATE} - query the
# width, height, and refresh rate for the specified
# NV_CTRL_GVIO_VIDEO_FORMAT_*.  So that this can be queried with
# existing interfaces, XNVCTRLQueryAttribute() should be used, and
# the video format specified in the display_mask field; eg:
# XNVCTRLQueryAttribute (dpy,
# Note that Refresh Rate is in milliHertz values
# The following have been renamed; use the NV_CTRL_GVIO_* versions, instead
# NV_CTRL_GVO_X_SCREEN_PAN_[XY] - not supported
# NV_CTRL_GPU_OVERCLOCKING_STATE - not supported
# NV_CTRL_GPU_{2,3}D_CLOCK_FREQS - not supported
# NV_CTRL_GPU_DEFAULT_{2,3}D_CLOCK_FREQS - not supported
# NV_CTRL_GPU_CURRENT_CLOCK_FREQS - query the current GPU and memory
# clocks of the graphics device driving the X screen.
# NV_CTRL_GPU_CURRENT_CLOCK_FREQS is a "packed" integer attribute;
# the GPU clock is stored in the upper 16 bits of the integer, and
# the memory clock is stored in the lower 16 bits of the integer.
# All clock values are in MHz.
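Unpacking the packed clock attribute described above is a pair of shifts (the helper name is illustrative):

```python
def unpack_clock_freqs(packed):
    """Split NV_CTRL_GPU_CURRENT_CLOCK_FREQS: the GPU clock lives in
    the upper 16 bits, the memory clock in the lower 16 bits; both are
    in MHz.  Illustrative sketch only."""
    return (packed >> 16) & 0xFFFF, packed & 0xFFFF
```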
# NV_CTRL_GPU_OPTIMAL_CLOCK_FREQS - not supported
# NV_CTRL_GPU_OPTIMAL_CLOCK_FREQS_DETECTION - not supported
# NV_CTRL_GPU_OPTIMAL_CLOCK_FREQS_DETECTION_STATE - not supported
# NV_CTRL_FLATPANEL_CHIP_LOCATION - for the specified display device,
# report whether the flat panel is driven by the on-chip controller,
# or a separate controller chip elsewhere on the graphics board.
# This attribute is only available for flat panels.
# R-DG
# NV_CTRL_FLATPANEL_LINK - report the number of links for a DVI connection, or
# the main link's active lane count for DisplayPort.
# NV_CTRL_FLATPANEL_SIGNAL - for the specified display device, report
# whether the flat panel is driven by an LVDS, TMDS, or DisplayPort signal.
# NV_CTRL_USE_HOUSE_SYNC - when INPUT, the server (master) frame lock
# device will propagate the incoming house sync signal as the outgoing
# frame lock sync signal.  If the frame lock device cannot detect a
# frame lock sync signal, it will default to using the internal timings
# from the GPU connected to the primary connector.
# When set to OUTPUT, the server (master) frame lock device will
# generate a house sync signal from its internal timing and output
# this signal over the BNC connector on the frame lock device.  This
# is only allowed on a Quadro Sync II device.  If an incoming house
# sync signal is present on the BNC connector, this setting will
# have no effect.
# aliases with FALSE
# aliases with TRUE
# NV_CTRL_EDID_AVAILABLE - report if an EDID is available for the specified display device.
# This attribute may also be queried through XNVCTRLQueryTargetAttribute()
# NV_CTRL_FORCE_STEREO - when TRUE, OpenGL will force stereo flipping
# even when no stereo drawables are visible (if the device is configured
# to support it, see the "Stereo" X config option).
# When false, fall back to the default behavior of only flipping when a
# stereo drawable is visible.
# NV_CTRL_IMAGE_SETTINGS - the image quality setting for OpenGL clients.
# NV_CTRL_XINERAMA - return whether xinerama is enabled
# NV_CTRL_XINERAMA_STEREO - when TRUE, OpenGL will allow stereo flipping
# on multiple X screens configured with Xinerama.
# When FALSE, flipping is allowed only on one X screen at a time.
# NV_CTRL_BUS_RATE - if the bus type of the specified device is AGP, then
# NV_CTRL_BUS_RATE returns the configured AGP transfer rate.  If the bus type
# is PCI Express, then this attribute returns the maximum link width.
# When this attribute is queried on an X screen target, the bus rate of the
# NV_CTRL_GPU_PCIE_MAX_LINK_WIDTH - returns the maximum
# PCIe link width, in number of lanes.
# NV_CTRL_SHOW_SLI_VISUAL_INDICATOR - when TRUE, OpenGL will draw information
# about the current SLI mode.
# NV_CTRL_SHOW_SLI_HUD - when TRUE, OpenGL will draw information about the
# current SLI mode.
# Renamed this attribute to NV_CTRL_SHOW_SLI_VISUAL_INDICATOR
# NV_CTRL_XV_SYNC_TO_DISPLAY - deprecated
# NV_CTRL_XV_SYNC_TO_DISPLAY_ID should be used instead.
# NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT2 - this attribute is only
# intended to be used to query the ValidValues for
# NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT for VIDEO_FORMAT values between
# 31 and 63.  See NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT for details.
# ---GI
# NV_CTRL_GVO_OUTPUT_VIDEO_FORMAT2 - renamed
# NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT2 should be used instead.
# NV_CTRL_GVO_OVERRIDE_HW_CSC - Override the SDI hardware's Color Space
# Conversion with the values controlled through
# XNVCTRLSetGvoColorConversion() and XNVCTRLGetGvoColorConversion().  If
# this attribute is FALSE, then the values specified through
# XNVCTRLSetGvoColorConversion() are ignored.
# NV_CTRL_GVO_CAPABILITIES - this read-only attribute describes GVO
# capabilities that differ between NVIDIA SDI products.  This value
# is a bitmask where each bit indicates whether that capability is
# APPLY_CSC_IMMEDIATELY - whether the CSC matrix, offset, and scale
# specified through XNVCTRLSetGvoColorConversion() will take effect
# immediately, or only after SDI output is disabled and enabled
# APPLY_CSC_TO_X_SCREEN - whether the CSC matrix, offset, and scale
# specified through XNVCTRLSetGvoColorConversion() will also apply
# to GVO output of an X screen, or only to OpenGL GVO output, as
# enabled through the GLX_NV_video_out extension.
# COMPOSITE_TERMINATION - whether the 75 ohm termination of the
# SDI composite input signal can be programmed through the
# NV_CTRL_GVO_COMPOSITE_TERMINATION attribute.
# SHARED_SYNC_BNC - whether the SDI device has a single BNC
# connector used for both (SDI & Composite) incoming signals.
# MULTIRATE_SYNC - whether the SDI device supports synchronization
# of input and output video modes that match in being odd or even
# modes (ie, AA.00 Hz modes can be synched to other BB.00 Hz modes and
# AA.XX Hz can match to BB.YY Hz where .XX and .YY are not .00)
# NV_CTRL_GVO_COMPOSITE_TERMINATION - enable or disable 75 ohm
# termination of the SDI composite input signal.
# NV_CTRL_ASSOCIATED_DISPLAY_DEVICES - deprecated
# NV_CTRL_FRAMELOCK_SLAVES - deprecated
# NV_CTRL_FRAMELOCK_MASTERABLE - deprecated
# NV_CTRL_PROBE_DISPLAYS - re-probes the hardware to detect what
# display devices are connected to the GPU or GPU driving the
# specified X screen.  The return value is deprecated and should not be used.
# NV_CTRL_REFRESH_RATE - Returns the refresh rate of the specified
# display device in units of 1/100 Hz (i.e., to get the refresh rate
# in Hz, divide the returned value by 100).
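The unit conversion described above, as a tiny helper (the function name is illustrative):

```python
def refresh_rate_hz(raw):
    """NV_CTRL_REFRESH_RATE reports in units of 1/100 Hz; divide the
    raw value by 100 to get Hz, e.g. a raw 5994 is 59.94 Hz."""
    return raw / 100.0
```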
# NV_CTRL_GVO_FLIP_QUEUE_SIZE - The Graphics to Video Out interface
# exposed through NV-CONTROL and the GLX_NV_video_out extension uses
# an internal flip queue when pbuffers are sent to the video device
# (via glXSendPbufferToVideoNV()).  The NV_CTRL_GVO_FLIP_QUEUE_SIZE
# can be used to query and assign the flip queue size.  This
# attribute is applied to GLX when glXGetVideoDeviceNV() is called by
# the application.
# NV_CTRL_CURRENT_SCANLINE - query the current scanline for the
# NV_CTRL_INITIAL_PIXMAP_PLACEMENT - Controls where X pixmaps are initially
# NV_CTRL_INITIAL_PIXMAP_PLACEMENT_FORCE_SYSMEM causes pixmaps to stay in
# system memory. These pixmaps can't be accelerated by the NVIDIA driver; this
# will cause blank windows if used with an OpenGL compositing manager.
# NV_CTRL_INITIAL_PIXMAP_PLACEMENT_SYSMEM creates pixmaps in system memory
# initially, but allows them to migrate to video memory.
# NV_CTRL_INITIAL_PIXMAP_PLACEMENT_VIDMEM creates pixmaps in video memory
# when enough resources are available.
# NV_CTRL_INITIAL_PIXMAP_PLACEMENT_RESERVED is currently reserved for future
# use.  Behavior is undefined.
# NV_CTRL_INITIAL_PIXMAP_PLACEMENT_GPU_SYSMEM creates pixmaps in GPU accessible
# system memory when enough resources are available.
# NV_CTRL_PCI_BUS - Returns the PCI bus number the specified device is using.
# NV_CTRL_PCI_DEVICE - Returns the PCI device number the specified device is
# using.
# NV_CTRL_PCI_FUNCTION - Returns the PCI function number the specified device
# is using.
# NV_CTRL_FRAMELOCK_FPGA_REVISION - Queries the FPGA revision of the
# Frame Lock device.
# This attribute must be queried through XNVCTRLQueryTargetAttribute()
# using a NV_CTRL_TARGET_TYPE_FRAMELOCK target.
# NV_CTRL_MAX_SCREEN_{WIDTH,HEIGHT} - the maximum allowable size, in
# pixels, of either the specified X screen (if the target_type of the
# query is an X screen), or any X screen on the specified GPU (if the
# target_type of the query is a GPU).
# NV_CTRL_MAX_DISPLAYS - The maximum number of display devices that
# can be driven simultaneously on a GPU (e.g., that can be used in a
# MetaMode at once).  Note that this does not indicate the maximum
# number of displays that are listed in NV_CTRL_BINARY_DATA_DISPLAYS_ON_GPU
# and NV_CTRL_BINARY_DATA_DISPLAYS_CONNECTED_TO_GPU because more display
# devices can be connected than are actively in use.
# NV_CTRL_DYNAMIC_TWINVIEW - Returns whether or not the screen
# supports dynamic twinview.
# NV_CTRL_MULTIGPU_DISPLAY_OWNER - Returns the (NV-CONTROL) GPU ID of
# the GPU that has the display device(s) used for showing the X Screen.
# NV_CTRL_GPU_SCALING - not supported
# NV_CTRL_FRONTEND_RESOLUTION - not supported
# NV_CTRL_BACKEND_RESOLUTION - not supported
# NV_CTRL_FLATPANEL_NATIVE_RESOLUTION - not supported
# NV_CTRL_FLATPANEL_BEST_FIT_RESOLUTION - not supported
# NV_CTRL_GPU_SCALING_ACTIVE - not supported
# NV_CTRL_DFP_SCALING_ACTIVE - not supported
# NV_CTRL_FSAA_APPLICATION_ENHANCED - Controls how the NV_CTRL_FSAA_MODE
# is applied when NV_CTRL_FSAA_APPLICATION_CONTROLLED is set to
# NV_CTRL_APPLICATION_CONTROLLED_DISABLED.  When
# NV_CTRL_FSAA_APPLICATION_ENHANCED is _DISABLED, OpenGL applications will
# be forced to use the FSAA mode specified by NV_CTRL_FSAA_MODE.  When set
# to _ENABLED, only those applications that have selected a multisample
# FBConfig will be made to use the NV_CTRL_FSAA_MODE specified.
# This attribute is ignored when NV_CTRL_FSAA_APPLICATION_CONTROLLED is
# set to NV_CTRL_FSAA_APPLICATION_CONTROLLED_ENABLED.
# NV_CTRL_FRAMELOCK_SYNC_RATE_4 - This is the refresh rate that the
# frame lock board is sending to the GPU with 4 digits of precision.
# This attribute may be queried through XNVCTRLQueryTargetAttribute()
# using a NV_CTRL_TARGET_TYPE_FRAMELOCK target.
# NV_CTRL_GVO_LOCK_OWNER - indicates that the GVO device is available
# or in use (by GLX or an X screen).
# The GVO device is locked by GLX when either glXGetVideoDeviceNV
# (part of GLX_NV_video_out) or glXBindVideoDeviceNV (part of
# GLX_NV_present_video) is called.  All GVO output resources are
# locked until released by the GLX_NV_video_out/GLX_NV_present_video
# client.
# The GVO device is locked/unlocked by an X screen, when the GVO device is
# used in a MetaMode on an X screen.
# When the GVO device is locked, setting of the following GVO NV-CONTROL
# attributes will not happen immediately and will instead be cached.  The
# GVO resource will need to be disabled/released and re-enabled/claimed for
# the values to be flushed. These attributes are:
# NV_CTRL_HWOVERLAY - when a workstation overlay is in use, reports
# whether the hardware overlay is used, or if the overlay is emulated.
# NV_CTRL_NUM_GPU_ERRORS_RECOVERED - Returns the number of GPU errors that
# have occurred. This attribute may be queried through XNVCTRLQueryTargetAttribute()
# using a NV_CTRL_TARGET_TYPE_X_SCREEN target.
# R---
# NV_CTRL_REFRESH_RATE_3 - Returns the refresh rate of the specified
# display device in 1000 * Hz (i.e., to get the refresh rate in Hz, divide
# the returned value by 1000).
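The 1000 * Hz encoding above is simple fixed-point arithmetic; a minimal sketch (the helper name is invented for illustration, not part of the NV-CONTROL API):

```python
def refresh_rate_hz(rate3_value):
    """Convert an NV_CTRL_REFRESH_RATE_3 reading (1000 * Hz) to Hz."""
    return rate3_value / 1000

# A 59.94 Hz mode is reported as 59940.
print(refresh_rate_hz(59940))
```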
# NV_CTRL_ONDEMAND_VBLANK_INTERRUPTS - not supported
# NV_CTRL_GPU_POWER_SOURCE reports the type of power source
# NV_CTRL_GPU_CURRENT_PERFORMANCE_MODE - not supported
# NV_CTRL_GLYPH_CACHE - Enables RENDER Glyph Caching to VRAM
# NV_CTRL_GPU_CURRENT_PERFORMANCE_LEVEL reports the current
# Performance level of the GPU driving the X screen.  Each
# Performance level has associated NVClock and Mem Clock values.
# NV_CTRL_GPU_ADAPTIVE_CLOCK_STATE reports if Adaptive Clocking
# is Enabled on the GPU driving the X screen.
# NV_CTRL_GVO_OUTPUT_VIDEO_LOCKED - Returns whether or not the GVO output
# video is locked to the GPU.
# NV_CTRL_GVO_SYNC_LOCK_STATUS - Returns whether or not the GVO device
# is locked to the input ref signal.  If the sync mode is set to
# NV_CTRL_GVO_SYNC_MODE_GENLOCK, then this returns the genlock
# sync status, and if the sync mode is set to NV_CTRL_GVO_SYNC_MODE_FRAMELOCK,
# then this reports the frame lock status.
# NV_CTRL_GVO_ANC_TIME_CODE_GENERATION - Allows SDI device to generate
# time codes in the ANC region of the SDI video output stream.
# RW--
# NV_CTRL_GVO_COMPOSITE - Enables/Disables SDI compositing.  This attribute
# is only available when an SDI input source is detected and is in genlock
# mode.
# NV_CTRL_GVO_COMPOSITE_ALPHA_KEY - When compositing is enabled, this
# enables/disables alpha blending.
# NV_CTRL_GVO_COMPOSITE_LUMA_KEY_RANGE - Set the values of a luma
# channel range.  This is a packed int that has the following format
# (in order of high-bits to low bits):
# Range # (11 bits), (Enabled 1 bit), min value (10 bits), max value (10 bits)
# To query the current values, pass the range # through the display_mask.
# NV_CTRL_GVO_COMPOSITE_CR_KEY_RANGE - Set the values of a CR channel range.
# To query the current values, pass the range # through the display_mask.
# NV_CTRL_GVO_COMPOSITE_CB_KEY_RANGE - Set the values of a CB channel range.
# NV_CTRL_GVO_COMPOSITE_NUM_KEY_RANGES - Returns the number of ranges
# available for each channel (Y/Luma, Cr, and Cb.)
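Given the bit layout described above (range # in the top 11 bits, then a 1-bit enable flag, then 10-bit min and max values), a packed key-range value can be built and decoded with plain shifts and masks. A sketch with invented helper names, not part of the NV-CONTROL API:

```python
def pack_key_range(range_index, enabled, min_value, max_value):
    """Pack (range #, enabled, min, max) into the 11/1/10/10-bit layout,
    ordered from high bits to low bits."""
    assert 0 <= range_index < (1 << 11)
    assert 0 <= min_value < (1 << 10) and 0 <= max_value < (1 << 10)
    return ((range_index << 21) | (int(bool(enabled)) << 20)
            | (min_value << 10) | max_value)

def unpack_key_range(packed):
    """Inverse of pack_key_range."""
    return ((packed >> 21) & 0x7FF,
            bool((packed >> 20) & 1),
            (packed >> 10) & 0x3FF,
            packed & 0x3FF)
```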
# NV_CTRL_SWITCH_TO_DISPLAYS - not supported
# NV_CTRL_NOTEBOOK_DISPLAY_CHANGE_LID_EVENT - not supported
# NV_CTRL_NOTEBOOK_INTERNAL_LCD - deprecated
# NV_CTRL_DEPTH_30_ALLOWED - returns whether the NVIDIA X driver supports
# depth 30 on the specified X screen or GPU.
# NV_CTRL_MODE_SET_EVENT - This attribute is sent as an event
# when a hotkey, ctrl-alt-+/-, or RandR event occurs.  Note that
# this attribute cannot be set or queried and is meant to
# be received by clients that wish to be notified when
# mode set events occur.
# NV_CTRL_OPENGL_AA_LINE_GAMMA_VALUE - the gamma value used by
# OpenGL when NV_CTRL_OPENGL_AA_LINE_GAMMA is enabled
# NV_CTRL_VCSC_HIGH_PERF_MODE - deprecated
# It is used both to query High Performance Mode status on the Visual
# Computing System and to enable or disable High Performance Mode.
# RW-V
# NV_CTRL_DISPLAYPORT_LINK_RATE - returns the negotiated lane bandwidth of the
# DisplayPort main link.  The numerical value of this attribute is the link
# rate in bps divided by 27000000.
# This attribute is only available for DisplayPort flat panels.
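The bps-divided-by-27000000 encoding above can be inverted directly; a sketch (helper name invented for illustration; the 27 MHz divisor is the one stated above):

```python
def dp_link_rate_bps(link_rate_value):
    """Convert NV_CTRL_DISPLAYPORT_LINK_RATE to a per-lane rate in bps.

    The attribute value is the link rate in bps divided by 27000000."""
    return link_rate_value * 27_000_000

# For example, a reported value of 200 corresponds to a 5.4 Gbps lane rate.
print(dp_link_rate_bps(200))
```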
# NV_CTRL_STEREO_EYES_EXCHANGE - Controls whether or not the left and right
# eyes of a stereo image are flipped.
# NV_CTRL_NO_SCANOUT - returns whether the special "NoScanout" mode is
# enabled on the specified X screen or GPU; for details on this mode,
# see the description of the "none" value for the "UseDisplayDevice"
# X configuration option in the NVIDIA driver README.
# NV_CTRL_GVO_CSC_CHANGED_EVENT This attribute is sent as an event
# when the color space conversion matrix has been altered by another
# NV-CONTROL client.
# NV_CTRL_FRAMELOCK_SLAVEABLE - deprecated
# NV_CTRL_GVO_SYNC_TO_DISPLAY This attribute controls whether or not
# the non-SDI display device will be sync'ed to the SDI display device
# (when configured in TwinView, Clone Mode or when using the SDI device
# with OpenGL).
# NV_CTRL_X_SERVER_UNIQUE_ID - returns a pseudo-unique identifier for this
# X server. Intended for use in cases where an NV-CONTROL client communicates
# with multiple X servers, and wants some level of confidence that two
# X Display connections correspond to the same or different X servers.
# NV_CTRL_PIXMAP_CACHE - This attribute controls whether the driver attempts to
# store video memory pixmaps in a cache.  The cache speeds up allocation and
# deallocation of pixmaps, but could use more memory than when the cache is
# disabled.
# NV_CTRL_PIXMAP_CACHE_ROUNDING_SIZE_KB - When the pixmap cache is enabled and
# there is not enough free space in the cache to fit a new pixmap, the driver
# will round up to the next multiple of this number of kilobytes when
# allocating more memory for the cache.
# NV_CTRL_IS_GVO_DISPLAY - returns whether or not a given display is an
# SDI device.
# R-D
# NV_CTRL_PCI_ID - Returns the PCI vendor and device ID of the specified
# device.
# NV_CTRL_PCI_ID is a "packed" integer attribute; the PCI vendor ID is stored
# in the upper 16 bits of the integer, and the PCI device ID is stored in the
# lower 16 bits of the integer.
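The packed vendor/device layout above decodes with one shift and two masks; a sketch (helper name and the example IDs are illustrative only):

```python
def unpack_pci_id(packed):
    """Split an NV_CTRL_PCI_ID value into (vendor_id, device_id).

    The vendor ID is in the upper 16 bits, the device ID in the lower 16."""
    return (packed >> 16) & 0xFFFF, packed & 0xFFFF
```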
# NV_CTRL_GVO_FULL_RANGE_COLOR - Allow full range color data [4-1019]
# without clamping to [64-940].
# NV_CTRL_SLI_MOSAIC_MODE_AVAILABLE - Returns whether or not
# SLI Mosaic Mode is supported.
# NV_CTRL_GVO_ENABLE_RGB_DATA - Allows clients to specify when
# the GVO board should process colors as RGB when the output data
# format is one of the NV_CTRL_GVO_DATA_FORMAT_???_PASSTHRU modes.
# NV_CTRL_IMAGE_SHARPENING_DEFAULT - Returns default value of
# Image Sharpening.
# NV_CTRL_PCI_DOMAIN - Returns the PCI domain number the specified device is
# using.
# NV_CTRL_GVI_NUM_JACKS - Returns the number of input BNC jacks available
# on a GVI device.
# NV_CTRL_GVI_MAX_LINKS_PER_STREAM - Returns the maximum supported number of
# links that can be tied to one stream.
# NV_CTRL_GVI_DETECTED_CHANNEL_BITS_PER_COMPONENT - Returns the detected
# number of bits per component (BPC) of data on the given input jack+channel.
# The jack number should be specified in the lower 16 bits of the
# "display_mask" parameter, while the channel number should be specified in
# the upper 16 bits.
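The jack/channel split described above (jack in the lower 16 bits of display_mask, channel in the upper 16) can be encoded as follows; the helper name is invented for illustration:

```python
def gvi_display_mask(jack, channel=0):
    """Encode a GVI jack (low 16 bits) and channel (high 16 bits) into the
    display_mask value expected by the jack+channel attributes."""
    return ((channel & 0xFFFF) << 16) | (jack & 0xFFFF)
```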
# NV_CTRL_GVI_REQUESTED_STREAM_BITS_PER_COMPONENT - Specify the number of
# bits per component (BPC) of data for the captured stream.
# The stream number should be specified in the "display_mask" parameter.
# Note: Setting this attribute may also result in related GVI stream
# attributes being reset so that the configuration remains valid.
# RW-I
# NV_CTRL_GVI_DETECTED_CHANNEL_COMPONENT_SAMPLING - Returns the detected
# sampling format for the input jack+channel.
# NV_CTRL_GVI_REQUESTED_COMPONENT_SAMPLING - Specify the sampling format for
# the captured stream.
# The possible values are the NV_CTRL_GVI_DETECTED_COMPONENT_SAMPLING
# constants.
# NV_CTRL_GVI_CHROMA_EXPAND - Enable or disable 4:2:2 -> 4:4:4 chroma
# expansion for the captured stream.  This value is ignored when a
# COMPONENT_SAMPLING format is selected that does not use chroma subsampling,
# or if a BITS_PER_COMPONENT value is selected that is not supported.
# NV_CTRL_GVI_DETECTED_CHANNEL_COLOR_SPACE - Returns the detected color space
# of the input jack+channel.
# NV_CTRL_GVI_DETECTED_CHANNEL_LINK_ID - Returns the detected link identifier
# for the given input jack+channel.
# NV_CTRL_GVI_DETECTED_CHANNEL_SMPTE352_IDENTIFIER - Returns the 4-byte
# SMPTE 352 identifier from the given input jack+channel.
# NV_CTRL_GVI_GLOBAL_IDENTIFIER - Returns a global identifier for the
# GVI device.  This identifier can be used to relate GVI devices named
# in NV-CONTROL with those enumerated in OpenGL.
# NV_CTRL_FRAMELOCK_SYNC_DELAY_RESOLUTION - Returns the number of nanoseconds
# that one unit of NV_CTRL_FRAMELOCK_SYNC_DELAY corresponds to.
# NV_CTRL_GPU_COOLER_MANUAL_CONTROL - Query the current or set a new
# cooler control state; the value of this attribute controls the
# availability of additional cooler control attributes (see below).
# Note: this attribute is unavailable unless cooler control support
# has been enabled in the X server (by the user).
# NV_CTRL_THERMAL_COOLER_LEVEL - The cooler's target level.
# Normally, the driver dynamically adjusts the cooler based on
# the needs of the GPU.  But when NV_CTRL_GPU_COOLER_MANUAL_CONTROL=TRUE,
# the driver will attempt to make the cooler achieve the setting in
# NV_CTRL_THERMAL_COOLER_LEVEL.  The actual current level of the cooler
# is reported in NV_CTRL_THERMAL_COOLER_CURRENT_LEVEL.
# RW-C
# NV_CTRL_THERMAL_COOLER_LEVEL_SET_DEFAULT - Sets the cooler's default
# values.
# -W-C
# NV_CTRL_THERMAL_COOLER_CONTROL_TYPE -
# Returns a cooler's control signal characteristics.
# The possible types are Restricted, Variable, and Toggle.
# R--C
# NV_CTRL_THERMAL_COOLER_TARGET - Returns the objects that the cooler cools.
# Targets may be GPU, Memory, Power Supply or All of these.
# GPU_RELATED = GPU | MEMORY | POWER_SUPPLY
# NV_CTRL_GPU_ECC_SUPPORTED - Reports whether ECC is supported by the
# targeted GPU.
# NV_CTRL_GPU_ECC_STATUS - Returns the current hardware ECC setting
# for the targeted GPU.
# NV_CTRL_GPU_ECC_CONFIGURATION - Reports whether ECC can be configured
# dynamically for the GPU in question.
# NV_CTRL_GPU_ECC_CONFIGURATION_SETTING - Returns the current ECC
# configuration setting or specifies new settings.  New settings do not
# take effect until the next POST.
# NV_CTRL_GPU_ECC_DEFAULT_CONFIGURATION_SETTING - Returns the default
# ECC configuration setting.
# NV_CTRL_GPU_ECC_SINGLE_BIT_ERRORS - Returns the number of single-bit
# ECC errors detected by the targeted GPU since the last POST.
# Note: this attribute is a 64-bit integer attribute.
# R--GQ
# NV_CTRL_GPU_ECC_DOUBLE_BIT_ERRORS - Returns the number of double-bit
# ECC errors detected by the targeted GPU since the last POST.
# NV_CTRL_GPU_ECC_AGGREGATE_SINGLE_BIT_ERRORS - Returns the number of
# single-bit ECC errors detected by the targeted GPU since the
# last counter reset.
# NV_CTRL_GPU_ECC_AGGREGATE_DOUBLE_BIT_ERRORS - Returns the number of
# double-bit ECC errors detected by the targeted GPU since the
# last counter reset.
# NV_CTRL_GPU_ECC_RESET_ERROR_STATUS - Resets the volatile/aggregate
# single-bit and double-bit error counters.  This attribute is a
# bitmask attribute.
# -W-G
# NV_CTRL_GPU_POWER_MIZER_MODE - Provides a hint to the driver
# as to how to manage the performance of the GPU.
# ADAPTIVE                      - adjust GPU clocks based on GPU
#                                 utilization
# PREFER_MAXIMUM_PERFORMANCE    - raise GPU clocks to favor
#                                 maximum performance
# AUTO                          - let the driver choose the performance
#                                 policy automatically
# PREFER_CONSISTENT_PERFORMANCE - lock to GPU base clocks
# NV_CTRL_GVI_SYNC_OUTPUT_FORMAT - Returns the output sync signal
# from the GVI device.
# NV_CTRL_GVI_MAX_CHANNELS_PER_JACK  - Returns the maximum
# supported number of (logical) channels within a single physical jack of
# a GVI device.  For most SDI video formats, there is only one channel
# (channel 0).  But for 3G video formats (as specified in SMPTE 425),
# as an example, there are two channels (channel 0 and channel 1) per
# physical jack.
# NV_CTRL_GVI_MAX_STREAMS  - Returns the maximum number of streams
# that can be configured on the GVI device.
# NV_CTRL_GVI_NUM_CAPTURE_SURFACES - The GVI interface exposed through
# NV-CONTROL and the GLX_NV_video_input extension uses internal capture
# surfaces when frames are read from the GVI device.  The
# NV_CTRL_GVI_NUM_CAPTURE_SURFACES can be used to query and assign the
# number of capture surfaces.  This attribute is applied when
# glXBindVideoCaptureDeviceNV() is called by the application.
# A lower number of capture surfaces will mean less video memory is used,
# but can result in frames being dropped if the application cannot keep up
# with the capture device.  A higher number will prevent frames from being
# dropped, making capture more reliable, but will consume more video memory.
# NV_CTRL_OVERSCAN_COMPENSATION - not supported
# NV_CTRL_GPU_PCIE_GENERATION - Reports the current PCIe generation.
# NV_CTRL_GVI_BOUND_GPU - Returns the NV_CTRL_TARGET_TYPE_GPU target_id of
# the GPU currently bound to the GVI device.  Returns -1 if no GPU is
# currently bound to the GVI device.
# NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT3 - this attribute is only
# used for video format values between 64 and 95.  See
# NV_CTRL_GVIO_REQUESTED_VIDEO_FORMAT for details.
# NV_CTRL_ACCELERATE_TRAPEZOIDS - Toggles RENDER Trapezoid acceleration
# NV_CTRL_GPU_CORES - Returns the number of GPU cores supported by the graphics
# pipeline.
# NV_CTRL_GPU_MEMORY_BUS_WIDTH - Returns memory bus bandwidth on the associated
# subdevice.
# NV_CTRL_GVI_TEST_MODE - This attribute controls the GVI test mode.  When
# enabled, the GVI device will generate fake data as quickly as possible.  All
# GVI settings are still valid when this is enabled (e.g., the requested video
# format is honored and sets the video size).
# This may be used to test the pipeline.
# NV_CTRL_COLOR_SPACE - This option controls the preferred color space of the
# video signal. This may not match the current color space depending on the
# current mode on this display.
# NV_CTRL_CURRENT_COLOR_SPACE will reflect the actual color space in use.
# NV_CTRL_COLOR_RANGE - This option controls the preferred color range of the
# video signal.
# If the current color space requires it, the actual color range will be
# limited.
# NV_CTRL_CURRENT_COLOR_RANGE will reflect the actual color range in use.
# NV_CTRL_GPU_SCALING_DEFAULT_TARGET - not supported
# NV_CTRL_GPU_SCALING_DEFAULT_METHOD - not supported
# NV_CTRL_DITHERING_MODE - Controls the dithering mode, when
# NV_CTRL_CURRENT_DITHERING is Enabled.
# AUTO: allow the driver to choose the dithering mode automatically.
# DYNAMIC_2X2: use a 2x2 matrix to dither from the GPU's pixel
# pipeline to the bit depth of the flat panel.  The matrix values
# are changed from frame to frame.
# STATIC_2X2: use a 2x2 matrix to dither from the GPU's pixel
# pipeline to the bit depth of the flat panel.  The matrix values
# do not change from frame to frame.
# TEMPORAL: use a pseudorandom value from a uniform distribution calculated at
# every pixel to achieve stochastic dithering.  This method produces a better
# visual result than 2x2 matrix approaches.
# NV_CTRL_CURRENT_DITHERING - Returns the current dithering state.
# NV_CTRL_CURRENT_DITHERING_MODE - Returns the current dithering
# mode.
# NV_CTRL_THERMAL_SENSOR_READING - Returns the thermal sensor's current
# reading.
# R--S
# NV_CTRL_THERMAL_SENSOR_PROVIDER - Returns the hardware device that
# provides the thermal sensor.
# NV_CTRL_THERMAL_SENSOR_TARGET - Returns what hardware component
# the thermal sensor is measuring.
# NV_CTRL_SHOW_MULTIGPU_VISUAL_INDICATOR - when TRUE, OpenGL will
# draw information about the current MULTIGPU mode.
# NV_CTRL_GPU_CURRENT_PROCESSOR_CLOCK_FREQS - Returns the GPU's current
# processor clock frequencies.
# NV_CTRL_GVIO_VIDEO_FORMAT_FLAGS - query the flags (various information)
# for the specified NV_CTRL_GVIO_VIDEO_FORMAT_*.  So that this can be
# queried with existing interfaces, the video format should be specified
# in the display_mask field when calling XNVCTRLQueryTargetAttribute().
# Note: The NV_CTRL_GVIO_VIDEO_FORMAT_FLAGS_3G_1080P_NO_12BPC flag is set
# for 3G 1080p video formats that do not support 12 bits per component.
# NV_CTRL_GPU_PCIE_MAX_LINK_SPEED - returns maximum PCIe link speed,
# in gigatransfers per second (GT/s).
# NV_CTRL_3D_VISION_PRO_RESET_TRANSCEIVER_TO_FACTORY_SETTINGS - Resets the
# 3D Vision Pro transceiver to its factory settings.
# -W-T
# NV_CTRL_3D_VISION_PRO_TRANSCEIVER_CHANNEL - Controls the channel that is
# currently used by the 3D Vision Pro transceiver.
# RW-T
# NV_CTRL_3D_VISION_PRO_TRANSCEIVER_MODE - Controls the mode in which the
# 3D Vision Pro transceiver operates.
# NV_CTRL_3D_VISION_PRO_TM_LOW_RANGE is bidirectional
# NV_CTRL_3D_VISION_PRO_TM_MEDIUM_RANGE is bidirectional
# NV_CTRL_3D_VISION_PRO_TM_HIGH_RANGE may be bidirectional just up to a
# certain range.
# NV_CTRL_3D_VISION_PRO_TM_COUNT is the total number of
# transceiver modes.
# NV_CTRL_SYNCHRONOUS_PALETTE_UPDATES - controls whether updates to the color
# lookup table (LUT) are synchronous with respect to X rendering.  For example,
# if an X client sends XStoreColors followed by XFillRectangle, the driver will
# guarantee that the FillRectangle request is not processed until after the
# updated LUT colors are actually visible on the screen if
# NV_CTRL_SYNCHRONOUS_PALETTE_UPDATES is enabled.  Otherwise, the rendering may
# occur first.
# This makes a difference for applications that use the LUT to animate, such as
# XPilot.  If you experience flickering in applications that use LUT
# animations, try enabling this attribute.
# When synchronous updates are enabled, XStoreColors requests will be processed
# at your screen's refresh rate.
# NV_CTRL_DITHERING_DEPTH - Controls the dithering depth when
# NV_CTRL_CURRENT_DITHERING is ENABLED.  Some displays connected
# to the GPU via the DVI or LVDS interfaces cannot display the
# full color range of ten bits per channel, so the GPU will
# dither to either 6 or 8 bits per channel.
# NV_CTRL_CURRENT_DITHERING_DEPTH - Returns the current dithering
# depth value.
# NV_CTRL_3D_VISION_PRO_TRANSCEIVER_CHANNEL_FREQUENCY - Returns the
# frequency of the channel (in kHz) of the 3D Vision Pro transceiver.
# Use the display_mask parameter to specify the channel number.
# R--T
# NV_CTRL_3D_VISION_PRO_TRANSCEIVER_CHANNEL_QUALITY - Returns the
# quality of the channel (in percent) of the 3D Vision Pro transceiver.
# NV_CTRL_3D_VISION_PRO_TRANSCEIVER_CHANNEL_COUNT - Returns the number of
# channels on the 3D Vision Pro transceiver.
# NV_CTRL_3D_VISION_PRO_PAIR_GLASSES - Puts the 3D Vision Pro
# transceiver into pairing mode to gather additional glasses.
# NV_CTRL_3D_VISION_PRO_PAIR_GLASSES_STOP - stops any pairing activity.
# NV_CTRL_3D_VISION_PRO_PAIR_GLASSES_BEACON - starts continuous pairing.
# Any other value, N - Puts the 3D Vision Pro transceiver into
# pairing mode for N seconds.
# NV_CTRL_3D_VISION_PRO_UNPAIR_GLASSES - Tells a specific pair
# of glasses to unpair. The glasses will "forget" the address
# of the 3D Vision Pro transceiver to which they have been paired.
# To unpair all the currently paired glasses, specify
# the glasses id as 0.
# NV_CTRL_3D_VISION_PRO_DISCOVER_GLASSES - Tells the 3D Vision Pro
# transceiver about the glasses that have been paired using
# NV_CTRL_3D_VISION_PRO_PAIR_GLASSES_BEACON. Unless this is done,
# the 3D Vision Pro transceiver will not know about glasses paired in
# beacon mode.
# NV_CTRL_3D_VISION_PRO_IDENTIFY_GLASSES - Causes glasses LEDs to
# flash for a short period of time.
# NV_CTRL_3D_VISION_PRO_GLASSES_SYNC_CYCLE - Controls the
# sync cycle duration (in milliseconds) of the glasses.
# Use the display_mask parameter to specify the glasses id.
# NV_CTRL_3D_VISION_PRO_GLASSES_MISSED_SYNC_CYCLES - Returns the
# number of state sync cycles recently missed by the glasses.
# NV_CTRL_3D_VISION_PRO_GLASSES_BATTERY_LEVEL - Returns the
# battery level (in percent) of the glasses.
# NV_CTRL_GVO_ANC_PARITY_COMPUTATION - Controls the SDI device's computation
# of the parity bit (bit 8) for ANC data words.
# RW---
# NV_CTRL_3D_VISION_PRO_GLASSES_PAIR_EVENT - This attribute is sent
# as an event when glasses get paired in response to pair command
# from any of the clients.
# ---T
# NV_CTRL_3D_VISION_PRO_GLASSES_UNPAIR_EVENT - This attribute is sent
# as an event when glasses get unpaired in response to unpair command
# NV_CTRL_GPU_PCIE_CURRENT_LINK_WIDTH - returns the current
# PCIe link width, in number of lanes.
# NV_CTRL_GPU_PCIE_CURRENT_LINK_SPEED - returns the current
# PCIe link speed, in megatransfers per second (MT/s).
# NV_CTRL_GVO_AUDIO_BLANKING - specifies whether the GVO device should delete
# audio ancillary data packets when frames are repeated.
# When a new frame is not ready in time, the current frame, including all
# ancillary data packets, is repeated.  When this data includes audio packets,
# this can result in stutters or clicks.  When this option is enabled, the GVO
# device will detect when frames are repeated, identify audio ancillary data
# packets, and mark them for deletion.
# This option is applied when the GVO device is bound.
# NV_CTRL_CURRENT_METAMODE_ID - switch modes to the MetaMode with
# the specified ID.
# NV_CTRL_DISPLAY_ENABLED - Returns whether or not the display device
# is currently enabled.
# NV_CTRL_FRAMELOCK_INCOMING_HOUSE_SYNC_RATE: this is the rate
# of an incoming house sync signal to the frame lock board, in milliHz.
# NV_CTRL_FXAA - enables FXAA, a pixel-shader-based
# anti-aliasing method.
# NV_CTRL_DISPLAY_RANDR_OUTPUT_ID - the RandR Output ID (type RROutput)
# that corresponds to the specified Display Device target.  If a new
# enough version of RandR is not available in the X server,
# DISPLAY_RANDR_OUTPUT_ID will be 0.
# R-D-
# NV_CTRL_FRAMELOCK_DISPLAY_CONFIG - Configures whether the display device
# should listen to, ignore, or drive the framelock sync signal.
# Note that whether or not a display device may be set as a client/server
# depends on the current configuration.  For example, only one server may be
# set per Quadro Sync device, and displays can only be configured as a client
# if their refresh rate sufficiently matches the refresh rate of the server.
# Note that when querying the ValidValues for this data type, the values are
# reported as bits within a bitmask (ATTRIBUTE_TYPE_INT_BITS).
# RWD
# NV_CTRL_TOTAL_DEDICATED_GPU_MEMORY - Returns the total amount of dedicated
# GPU video memory, in MB, on the specified GPU. This excludes any TurboCache
# padding included in the value returned by NV_CTRL_TOTAL_GPU_MEMORY.
# NV_CTRL_USED_DEDICATED_GPU_MEMORY - Returns the amount of video memory
# currently used on the graphics card in MB.
# NV_CTRL_GPU_DOUBLE_PRECISION_BOOST_IMMEDIATE
# Some GPUs can make a tradeoff between double-precision floating-point
# performance and clock speed.  Enabling double-precision floating point
# performance may benefit CUDA or OpenGL applications that require high
# bandwidth double-precision performance.  Disabling this feature may benefit
# graphics applications that require higher clock speeds.
# This attribute is only available when toggling double precision boost
# can be done immediately (without the need for a reboot).
# NV_CTRL_GPU_DOUBLE_PRECISION_BOOST_REBOOT - This attribute is only
# available when toggling double precision boost
# requires a reboot.
# NV_CTRL_DPY_HDMI_3D - Returns whether the specified display device is
# currently using HDMI 3D Frame Packed Stereo mode. Clients may use this
# to help interpret the refresh rate returned by NV_CTRL_REFRESH_RATE or
# NV_CTRL_REFRESH_RATE_3, which will be doubled when using HDMI 3D mode.
# This attribute may be queried through XNVCTRLQueryTargetAttribute()
# using a NV_CTRL_TARGET_TYPE_GPU target.
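Combining the two facts above (NV_CTRL_REFRESH_RATE_3 reports 1000 * Hz, and the value is doubled while HDMI 3D Frame Packed Stereo is active), a client can recover the nominal refresh rate. A sketch; the helper name is invented for illustration:

```python
def panel_refresh_hz(rate3_value, hdmi_3d):
    """Recover the nominal refresh rate from an NV_CTRL_REFRESH_RATE_3
    reading (1000 * Hz), halving it when NV_CTRL_DPY_HDMI_3D reports
    that HDMI 3D Frame Packed Stereo mode is active."""
    hz = rate3_value / 1000
    return hz / 2 if hdmi_3d else hz
```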
# NV_CTRL_BASE_MOSAIC - Returns whether Base Mosaic is currently enabled on the
# given GPU.  Querying the valid values of this attribute returns capabilities.
# NV_CTRL_MULTIGPU_MASTER_POSSIBLE - Returns whether the GPU can be configured
# as the master GPU in a Multi GPU configuration (SLI, SLI Mosaic,
# Base Mosaic).
# NV_CTRL_GPU_POWER_MIZER_DEFAULT_MODE - Returns the default PowerMizer mode
# for the given GPU.
# NV_CTRL_XV_SYNC_TO_DISPLAY_ID - When XVideo Sync To VBlank is enabled, this
# controls which display device will be synched to if the display is enabled.
# Returns NV_CTRL_XV_SYNC_TO_DISPLAY_ID_AUTO if no display has been
# selected.
# NV_CTRL_BACKLIGHT_BRIGHTNESS - The backlight brightness of an internal panel.
# RWD-
# NV_CTRL_GPU_LOGO_BRIGHTNESS - Controls brightness
# of the logo on the GPU, if any.  The value is variable from 0% - 100%.
# NV_CTRL_GPU_SLI_LOGO_BRIGHTNESS - Controls brightness of the logo
# on the SLI bridge, if any.  The value is variable from 0% - 100%.
# NV_CTRL_THERMAL_COOLER_SPEED - Returns the cooler's current operating speed in
# rotations per minute (RPM).
# NV_CTRL_PALETTE_UPDATE_EVENT - The Color Palette has been changed and the
# color correction info needs to be updated.
# NV_CTRL_VIDEO_ENCODER_UTILIZATION - Returns the video encoder engine
# utilization as a percentage.
# NV_CTRL_GSYNC_ALLOWED - when TRUE, OpenGL will enable G-SYNC when possible;
# when FALSE, OpenGL will always use a fixed monitor refresh rate.
# NV_CTRL_GPU_NVCLOCK_OFFSET - This attribute controls the GPU clock offsets
# (in MHz) used for overclocking per performance level.
# Use the display_mask parameter to specify the performance level.
# Note: To enable overclocking support, set the X configuration
# option "Coolbits" to value "8".
# This offset can have any integer value between
# NVCTRLAttributeValidValues.u.range.min and
# NVCTRLAttributeValidValues.u.range.max (inclusive).
# This attribute is available on GeForce GTX 400 series and later
# GeForce GPUs.
# NV_CTRL_GPU_MEM_TRANSFER_RATE_OFFSET - This attribute controls
# the memory transfer rate offsets (in MHz) used for overclocking
# per performance level.
# NV_CTRL_VIDEO_DECODER_UTILIZATION - Returns the video decoder engine
# utilization as a percentage.
# NV_CTRL_GPU_OVER_VOLTAGE_OFFSET - This attribute controls
# the overvoltage offset in microvolts (uV).
# Note: To enable overvoltage support, set the X configuration
# option "Coolbits" to value "16".
# NV_CTRL_GPU_CURRENT_CORE_VOLTAGE - This attribute returns the
# GPU's current operating voltage in microvolts (uV).
# This attribute is available on GPUs that support
# NV_CTRL_GPU_OVER_VOLTAGE_OFFSET.
# NV_CTRL_CURRENT_COLOR_SPACE - Returns the current color space of the video
# signal.
# This will match NV_CTRL_COLOR_SPACE unless the current mode on this display
# device is an HDMI 2.0 4K@60Hz mode and the display device or GPU does not
# support driving this mode in RGB, in which case YCbCr420 will be returned.
# NV_CTRL_CURRENT_COLOR_RANGE - Returns the current color range of the video
# signal.
# NV_CTRL_SHOW_GSYNC_VISUAL_INDICATOR - when TRUE, OpenGL will indicate when
# G-SYNC is in use for full-screen applications.
# NV_CTRL_THERMAL_COOLER_CURRENT_LEVEL - Returns the cooler's current
# operating level.  This may fluctuate dynamically.  When
# NV_CTRL_GPU_COOLER_MANUAL_CONTROL=TRUE, the driver attempts
# to make this match NV_CTRL_THERMAL_COOLER_LEVEL.  When
# NV_CTRL_GPU_COOLER_MANUAL_CONTROL=FALSE, the driver adjusts the
# current level based on the needs of the GPU.
# NV_CTRL_STEREO_SWAP_MODE - This attribute controls the swap mode when
# Quad-Buffered stereo is used.
# NV_CTRL_STEREO_SWAP_MODE_APPLICATION_CONTROL : Stereo swap mode is derived
# from the value of swap interval.
# If it's odd, the per eye swap mode is used.
# If it's even, the per eye pair swap mode is used.
# NV_CTRL_STEREO_SWAP_MODE_PER_EYE : The driver swaps each eye as it is ready.
# NV_CTRL_STEREO_SWAP_MODE_PER_EYE_PAIR : The driver waits for both eyes to
# complete rendering before swapping.
# NV_CTRL_CURRENT_XV_SYNC_TO_DISPLAY_ID - When XVideo Sync To VBlank is
# enabled, this returns the display id of the device currently synched to.
# Returns NV_CTRL_XV_SYNC_TO_DISPLAY_ID_AUTO if no display is currently
# set.
# NV_CTRL_GPU_FRAMELOCK_FIRMWARE_UNSUPPORTED - Returns true if the
# Quadro Sync card connected to this GPU has a firmware version incompatible
# with this GPU.
# NV_CTRL_DISPLAYPORT_CONNECTOR_TYPE - Returns the connector type used by
# a DisplayPort display.
# NV_CTRL_DISPLAYPORT_IS_MULTISTREAM - Returns multi-stream support for
# DisplayPort displays.
# NV_CTRL_DISPLAYPORT_SINK_IS_AUDIO_CAPABLE - Returns whether a DisplayPort
# device supports audio.
# NV_CTRL_GPU_NVCLOCK_OFFSET_ALL_PERFORMANCE_LEVELS - This attribute
# controls the GPU clock offsets (in MHz) used for overclocking.
# The offset is applied to all performance levels.
# This attribute is available on GeForce GTX 1000 series and later
# GeForce GPUs.
# NV_CTRL_GPU_MEM_TRANSFER_RATE_OFFSET_ALL_PERFORMANCE_LEVELS - This
# attribute controls the memory transfer rate offsets (in MHz) used
# for overclocking.  The offset is applied to all performance levels.
# NV_CTRL_FRAMELOCK_FIRMWARE_VERSION - Queries the firmware major version of
# the Frame Lock device.
# NV_CTRL_FRAMELOCK_FIRMWARE_MINOR_VERSION - Queries the firmware minor
# version of the Frame Lock device.
# NV_CTRL_SHOW_GRAPHICS_VISUAL_INDICATOR - when TRUE, graphics APIs will
# indicate various runtime information such as flip/blit, vsync status, API
# in use.
# String Attributes:
# String attributes can be queried through the XNVCTRLQueryStringAttribute()
# and XNVCTRLQueryTargetStringAttribute() function calls.
# String attributes can be set through the XNVCTRLSetStringAttribute()
# function call.  (There are currently no string attributes that can be
# set on non-X Screen targets.)
# Unless otherwise noted, all string attributes can be queried/set using an
# NV_CTRL_TARGET_TYPE_X_SCREEN target.  Attributes that cannot take an
# NV_CTRL_TARGET_TYPE_X_SCREEN target also cannot be queried/set through
# XNVCTRLQueryStringAttribute()/XNVCTRLSetStringAttribute() (Since
# these assume an X Screen target).
# NV_CTRL_STRING_PRODUCT_NAME - the product name on which the
# specified X screen is running, or the product name of the specified
# GPU or Frame Lock device.
# This attribute may be queried through XNVCTRLQueryTargetStringAttribute()
# using a NV_CTRL_TARGET_TYPE_GPU or NV_CTRL_TARGET_TYPE_X_SCREEN target to
# return the product name of the GPU, or a NV_CTRL_TARGET_TYPE_FRAMELOCK to
# return the product name of the Frame Lock device.
# R--GF
# NV_CTRL_STRING_VBIOS_VERSION - the video bios version on the GPU on
# which the specified X screen is running.
# NV_CTRL_STRING_NVIDIA_DRIVER_VERSION - string representation of the
# NVIDIA driver version number for the NVIDIA X driver in use.
# NV_CTRL_STRING_DISPLAY_DEVICE_NAME - name of the display device
# specified in the display_mask argument.
# NV_CTRL_STRING_TV_ENCODER_NAME - not supported
# NV_CTRL_STRING_GVIO_FIRMWARE_VERSION - indicates the version of the
# Firmware on the GVIO device.
# NV_CTRL_STRING_GVO_FIRMWARE_VERSION - renamed; use
# NV_CTRL_STRING_GVIO_FIRMWARE_VERSION instead.
# NV_CTRL_STRING_CURRENT_MODELINE - Return the ModeLine currently
# being used by the specified display device.
# This attribute may be queried through XNVCTRLQueryTargetStringAttribute()
# using an NV_CTRL_TARGET_TYPE_GPU or NV_CTRL_TARGET_TYPE_X_SCREEN target.
# The ModeLine string may be prepended with a comma-separated list of
# "token=value" pairs, separated from the ModeLine string by "::".
# This "token=value" syntax is the same as that used in
# NV_CTRL_BINARY_DATA_MODELINES
# NV_CTRL_STRING_ADD_MODELINE - Adds a ModeLine to the specified
# display device.  The ModeLine is not added if validation fails.
# The ModeLine string should have the same syntax as a ModeLine in
# the X configuration file; e.g.,
# "1600x1200"  229.5  1600 1664 1856 2160  1200 1201 1204 1250  +HSync +VSync
# -WDG
# NV_CTRL_STRING_DELETE_MODELINE - Deletes an existing ModeLine
# from the specified display device.  The currently selected
# ModeLine cannot be deleted.  (This also means you cannot delete
# the last ModeLine.)
# NV_CTRL_STRING_CURRENT_METAMODE - Returns the metamode currently
# being used by the specified X screen.  The MetaMode string has the
# same syntax as the MetaMode X configuration option, as documented
# in the NVIDIA driver README.
# The returned string may be prepended with a comma-separated list of
# "token=value" pairs, separated from the MetaMode string by "::".
# This "token=value" syntax is the same as that used in
# NV_CTRL_BINARY_DATA_METAMODES.
# NV_CTRL_STRING_ADD_METAMODE - Adds a MetaMode to the specified
# X Screen.
# It is recommended to not use this attribute, but instead use
# NV_CTRL_STRING_OPERATION_ADD_METAMODE.
# -W--
# NV_CTRL_STRING_DELETE_METAMODE - Deletes an existing MetaMode from
# the specified X Screen.  The currently selected MetaMode cannot be
# deleted.  (This also means you cannot delete the last MetaMode).
# The MetaMode string should have the same syntax as the MetaMode X
# configuration option, as documented in the NVIDIA driver README.
# -WD--
# NV_CTRL_STRING_VCSC_PRODUCT_NAME - deprecated
# Queries the product name of the VCSC device.
# This attribute must be queried through XNVCTRLQueryTargetStringAttribute()
# using a NV_CTRL_TARGET_TYPE_VCSC target.
# R---V
# NV_CTRL_STRING_VCSC_PRODUCT_ID - deprecated
# Queries the product ID of the VCSC device.
# NV_CTRL_STRING_VCSC_SERIAL_NUMBER - deprecated
# Queries the unique serial number of the VCS device.
# NV_CTRL_STRING_VCSC_BUILD_DATE - deprecated
# Queries the date of the VCS device.  The returned string is in the following
# format: "Week.Year"
# NV_CTRL_STRING_VCSC_FIRMWARE_VERSION - deprecated
# Queries the firmware version of the VCS device.
# NV_CTRL_STRING_VCSC_FIRMWARE_REVISION - deprecated
# Queries the firmware revision of the VCS device.
# using a NV_CTRL_TARGET_TYPE_VCS target.
# NV_CTRL_STRING_VCSC_HARDWARE_VERSION - deprecated
# Queries the hardware version of the VCS device.
# NV_CTRL_STRING_VCSC_HARDWARE_REVISION - deprecated
# Queries the hardware revision of the VCS device.
# NV_CTRL_STRING_MOVE_METAMODE - Moves a MetaMode to the specified
# index location.  The MetaMode must already exist in the X Screen's
# list of MetaModes (as returned by the NV_CTRL_BINARY_DATA_METAMODES
# attribute).  If the index is larger than the number of MetaModes in
# the list, the MetaMode is moved to the end of the list.  The
# MetaMode string should have the same syntax as the MetaMode X
# The MetaMode string must be prepended with a comma-separated list
# of "token=value" pairs, separated from the MetaMode string by "::".
# Currently, the only valid token is "index", which indicates where
# in the MetaMode list the MetaMode should be moved to.
# Other tokens may be added in the future.
# NV_CTRL_STRING_VALID_HORIZ_SYNC_RANGES - returns the valid
# horizontal sync ranges used to perform mode validation for the
# specified display device.  The ranges are in the same format as the
# "HorizSync" X config option:
# The values are in kHz.
# Additionally, the string may be prepended with a comma-separated
# list of "token=value" pairs, separated from the HorizSync string by
# "::".  Valid tokens:
# Additional tokens and/or values may be added in the future.
# Example: "source=edid :: 30.000-62.000"
# NV_CTRL_STRING_VALID_VERT_REFRESH_RANGES - returns the valid
# vertical refresh ranges used to perform mode validation for the
# "VertRefresh" X config option:
# The values are in Hz.
# list of "token=value" pairs, separated from the VertRefresh string by
# Example: "source=edid :: 50.000-75.000"
# NV_CTRL_STRING_SCREEN_RECTANGLE - returns the physical X Screen's
# initial position and size (in absolute coordinates) within the
# desktop as the "token=value" string:  "x=#, y=#, width=#, height=#"
# Querying this attribute returns success only when Xinerama is enabled
# or the X server ABI is greater than or equal to 12.
# NV_CTRL_STRING_XINERAMA_SCREEN_INFO - renamed
# NV_CTRL_STRING_SCREEN_RECTANGLE should be used instead.
# NV_CTRL_STRING_TWINVIEW_XINERAMA_INFO_ORDER - used to specify the
# order that display devices will be returned via Xinerama when
# nvidiaXineramaInfo is enabled.  Follows the same syntax as the
# nvidiaXineramaInfoOrder X config option.
# for backwards compatibility:
# NV_CTRL_STRING_SLI_MODE - returns a string describing the current
# SLI mode, if any, or FALSE if SLI is not currently enabled.
# This string should be used for informational purposes only, and
# should not be used to distinguish between SLI modes, other than to
# recognize when SLI is disabled (FALSE is returned) or
# enabled (the returned string is non-NULL and describes the current
# SLI configuration).
# R---*/
# NV_CTRL_STRING_PERFORMANCE_MODES - returns a string with all the
# performance modes defined for this GPU along with their associated
# NV Clock and Memory Clock values.
# Not all tokens will be reported on all GPUs, and additional tokens
# may be added in the future.
# For backwards compatibility we still provide nvclock, memclock, and
# processorclock; these are the same as nvclockmin, memclockmin, and
# processorclockmin.
# Note: These clock values take into account the offset
# set by clients through NV_CTRL_GPU_NVCLOCK_OFFSET and
# NV_CTRL_GPU_MEM_TRANSFER_RATE_OFFSET.
# Each performance mode is returned as a comma-separated list of
# "token=value" pairs.  Sets of performance mode tokens are separated
# by a ";".  Valid tokens:
# perf=0, nvclock=324, nvclockmin=324, nvclockmax=324, nvclockeditable=0,
# memclock=324, memclockmin=324, memclockmax=324, memclockeditable=0,
# memtransferrate=648, memtransferratemin=648, memtransferratemax=648,
# memtransferrateeditable=0 ;
# perf=1, nvclock=324, nvclockmin=324, nvclockmax=640, nvclockeditable=0,
# memclock=810, memclockmin=810, memclockmax=810, memclockeditable=0,
# memtransferrate=1620, memtransferratemin=1620, memtransferratemax=1620,
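# As a sketch, the "token=value" / ";" format described above can be parsed
# like this (the function name and integer coercion are our own choices, not
# part of NV-CONTROL):

```python
def parse_performance_modes(attr_string):
    """Parse an NV_CTRL_STRING_PERFORMANCE_MODES value into dicts.

    Format assumed from the comments above: modes are separated by
    ";", tokens within a mode by ",", each token being "name=value".
    """
    modes = []
    for mode_str in attr_string.split(";"):
        mode_str = mode_str.strip()
        if not mode_str:
            continue
        mode = {}
        for pair in mode_str.split(","):
            name, _, value = pair.strip().partition("=")
            # Clock values are integers; keep anything else as a string.
            mode[name] = int(value) if value.lstrip("-").isdigit() else value
        modes.append(mode)
    return modes
```

# Unknown tokens are kept as-is, which matches the advice elsewhere in this
# header to gracefully ignore tokens added in future driver versions.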
# NV_CTRL_STRING_VCSC_FAN_STATUS - deprecated
# Returns a string with status of all the fans in the Visual Computing System,
# if such a query is supported.  Fan information is reported along with its
# tachometer reading (in RPM) and a flag indicating whether the fan has failed
# or not.
# Valid tokens:
# NV_CTRL_STRING_VCSC_TEMPERATURES - Deprecated
# Returns a string with all Temperature readings in the Visual Computing
# System, if such a query is supported.  Intake, Exhaust and Board Temperature
# values are reported in Celsius.
# NV_CTRL_STRING_VCSC_PSU_INFO - Deprecated
# Returns a string with all Power Supply Unit related readings in the Visual
# Computing System, if such a query is supported.  Current in amperes, Power
# in watts, Voltage in volts and PSU state may be reported.  Not all PSU types
# support all of these values, and therefore some readings may be unknown.
# NV_CTRL_STRING_GVIO_VIDEO_FORMAT_NAME - query the name for the specified
# NV_CTRL_GVIO_VIDEO_FORMAT_*.  So that this can be queried with existing
# interfaces, XNVCTRLQueryStringAttribute() should be used, and the video
# format specified in the display_mask field; eg:
# XNVCTRLQueryStringAttribute(dpy,
# NV_CTRL_STRING_GVO_VIDEO_FORMAT_NAME - renamed
# NV_CTRL_STRING_GVIO_VIDEO_FORMAT_NAME should be used instead.
# NV_CTRL_STRING_GPU_CURRENT_CLOCK_FREQS - returns a string with the
# associated NV Clock, Memory Clock and Processor Clock values.
# Current valid tokens are "nvclock", "nvclockmin", "nvclockmax",
# "memclock", "memclockmin", "memclockmax", "processorclock",
# "processorclockmin" and "processorclockmax".
# Clock values are returned as a comma-separated list of
# "token=value" pairs.
# NV_CTRL_STRING_3D_VISION_PRO_TRANSCEIVER_HARDWARE_REVISION - Returns the
# hardware revision of the 3D Vision Pro transceiver.
# NV_CTRL_STRING_3D_VISION_PRO_TRANSCEIVER_FIRMWARE_VERSION_A - Returns the
# firmware version of chip A of the 3D Vision Pro transceiver.
# NV_CTRL_STRING_3D_VISION_PRO_TRANSCEIVER_FIRMWARE_DATE_A - Returns the
# date of the firmware of chip A of the 3D Vision Pro transceiver.
# NV_CTRL_STRING_3D_VISION_PRO_TRANSCEIVER_FIRMWARE_VERSION_B - Returns the
# firmware version of chip B of the 3D Vision Pro transceiver.
# NV_CTRL_STRING_3D_VISION_PRO_TRANSCEIVER_FIRMWARE_DATE_B - Returns the
# date of the firmware of chip B of the 3D Vision Pro transceiver.
# NV_CTRL_STRING_3D_VISION_PRO_TRANSCEIVER_ADDRESS - Returns the RF address
# of the 3D Vision Pro transceiver.
# NV_CTRL_STRING_3D_VISION_PRO_GLASSES_FIRMWARE_VERSION_A - Returns the
# firmware version of chip A of the glasses.
# NV_CTRL_STRING_3D_VISION_PRO_GLASSES_FIRMWARE_DATE_A - Returns the
# date of the firmware of chip A of the glasses.
# NV_CTRL_STRING_3D_VISION_PRO_GLASSES_ADDRESS - Returns the RF address
# of the glasses.
# NV_CTRL_STRING_3D_VISION_PRO_GLASSES_NAME - Controls the name the
# glasses should use.
# Glasses' name should start and end with an alpha-numeric character.
# NV_CTRL_STRING_CURRENT_METAMODE_VERSION_2 - Returns the metamode currently
# being used by the specified X screen.  The MetaMode string has the same
# syntax as the MetaMode X configuration option, as documented in the NVIDIA
# driver README.  Also, see NV_CTRL_BINARY_DATA_METAMODES_VERSION_2 for more
# details on the base syntax.
# The returned string may also be prepended with a comma-separated list of
# NV_CTRL_STRING_DISPLAY_NAME_TYPE_BASENAME - Returns a type name for the
# display device ("CRT", "DFP", or "TV").  However, note that the determination
# of the name is based on the protocol through which the X driver communicates
# to the display device.  E.g., if the driver communicates using VGA, then the
# basename is "CRT"; if the driver communicates using TMDS, LVDS, or DP, then
# the name is "DFP".
# NV_CTRL_STRING_DISPLAY_NAME_TYPE_ID - Returns the type-based name + ID for
# the display device, e.g. "CRT-0", "DFP-1", "TV-2".  If this device is a
# DisplayPort multistream device, then this name will also be prepended with the
# device's port address like so: "DFP-1.0.1.2.3".  See
# NV_CTRL_STRING_DISPLAY_NAME_TYPE_BASENAME for more information about the
# construction of type-based names.
# NV_CTRL_STRING_DISPLAY_NAME_DP_GUID - Returns the GUID of the DisplayPort
# display device.  e.g. "DP-GUID-f16a5bde-79f3-11e1-b2ae-8b5a8969ba9c"
# The display device must be a DisplayPort 1.2 device.
# NV_CTRL_STRING_DISPLAY_NAME_EDID_HASH - Returns the SHA-1 hash of the
# display device's EDID in 8-4-4-4-12 UID format. e.g.
# "DPY-EDID-f16a5bde-79f3-11e1-b2ae-8b5a8969ba9c"
# The display device must have a valid EDID.
# NV_CTRL_STRING_DISPLAY_NAME_TARGET_INDEX - Returns the current NV-CONTROL
# target ID (name) of the display device.  e.g. "DPY-1", "DPY-4"
# This name for the display device is not guaranteed to be the same between
# different runs of the X server.
# NV_CTRL_STRING_DISPLAY_NAME_RANDR - Returns the RandR output name for the
# display device.  e.g.  "VGA-1", "DVI-I-0", "DVI-D-3", "LVDS-1", "DP-2",
# "HDMI-3", "eDP-6".  This name should match  If this device is a DisplayPort
# 1.2 device, then this name will also be prepended with the device's port
# address like so: "DVI-I-3.0.1.2.3"
# NV_CTRL_STRING_GPU_UUID - Returns the UUID of the given GPU.
# NV_CTRL_STRING_GPU_UTILIZATION - Returns the current percentage usage
# of the various components of the GPU.
# Current valid tokens are "graphics", "memory", "video" and "PCIe".
# Utilization values are returned as a comma-separated list of
# using an NV_CTRL_TARGET_TYPE_GPU.
# NV_CTRL_STRING_MULTIGPU_MODE - returns a string describing the current
# MULTIGPU mode, if any, or FALSE if MULTIGPU is not currently enabled.
# NV_CTRL_STRING_PRIME_OUTPUTS_DATA - returns a semicolon delimited list of
# strings that describe all PRIME configured displays.
# Binary Data Attributes:
# Binary data attributes can be queried through the XNVCTRLQueryBinaryData()
# and XNVCTRLQueryTargetBinaryData() function calls.
# There are currently no binary data attributes that can be set.
# Unless otherwise noted, all Binary data attributes can be queried
# using an NV_CTRL_TARGET_TYPE_X_SCREEN target.  Attributes that cannot take
# an NV_CTRL_TARGET_TYPE_X_SCREEN target also cannot be queried through
# XNVCTRLQueryBinaryData() (Since an X Screen target is assumed).
# NV_CTRL_BINARY_DATA_EDID - Returns a display device's EDID information
# This attribute may be queried through XNVCTRLQueryTargetBinaryData()
# NV_CTRL_BINARY_DATA_MODELINES - Returns a display device's supported
# ModeLines.  ModeLines are returned in a buffer, separated by a single
# '\0' and terminated by two consecutive '\0's, like so:
# Each ModeLine string may be prepended with a comma-separated list
# of "token=value" pairs, separated from the ModeLine string with a
# Note that a ModeLine can have several sources; the "source" token
# can appear multiple times in the "token=value" pairs list.
# Additional source values may be specified in the future.
# Additional tokens may be added in the future, so it is recommended
# that any token parser processing the returned string from
# NV_CTRL_BINARY_DATA_MODELINES be implemented to gracefully ignore
# unrecognized tokens.
# "source=xserver, source=vesa, source=edid :: "1024x768_70"  75.0  1024 1048 1184 1328  768 771 777 806  -HSync -VSync"
# "source=xconfig, xconfig-name=1600x1200_60.00 :: "1600x1200_60_0"  161.0  1600 1704 1880 2160  1200 1201 1204 1242  -HSync +VSync"
# NV_CTRL_BINARY_DATA_METAMODES - Returns an X Screen's supported
# MetaModes.  MetaModes are returned in a buffer separated by a
# single '\0' and terminated by two consecutive '\0's, like so:
# Each MetaMode string may be prepended with a comma-separated list
# of "token=value" pairs, separated from the MetaMode string with
# "::".  Currently, valid tokens are:
# Additional tokens may be added in the future, so it is recommended
# that any token parser processing the returned string from
# NV_CTRL_BINARY_DATA_METAMODES be implemented to gracefully ignore
# unrecognized tokens.
# NV_CTRL_BINARY_DATA_XSCREENS_USING_GPU - Returns the list of X
# screens currently driven by the given GPU.
# The format of the returned data is:
# This attribute can only be queried through XNVCTRLQueryTargetBinaryData()
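# The list-returning binary attributes in this section (X screens using a GPU,
# GPUs used by an X screen, and so on) typically come back as a 32-bit count
# followed by that many 32-bit target IDs.  A decoding sketch follows; the
# exact layout should be verified against NVCtrl.h, and the function name is
# ours:

```python
import struct

def parse_target_id_list(data):
    """Decode a binary "list of targets" reply.

    Assumed layout (verify against NVCtrl.h): a 32-bit integer count,
    followed by `count` 32-bit target IDs, in native byte order.
    """
    (count,) = struct.unpack_from("=I", data, 0)
    return list(struct.unpack_from("=%dI" % count, data, 4))
```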
# NV_CTRL_BINARY_DATA_GPUS_USED_BY_XSCREEN - Returns the list of GPUs
# currently in use by the given X screen.
# NV_CTRL_BINARY_DATA_GPUS_USING_FRAMELOCK - Returns the list of
# GPUs currently connected to the given frame lock board.
# using a NV_CTRL_TARGET_TYPE_FRAMELOCK target.  This attribute cannot be
# R-DF
# NV_CTRL_BINARY_DATA_DISPLAY_VIEWPORT - Returns the Display Device's
# viewport box into the given X Screen (in X Screen coordinates.)
# NV_CTRL_BINARY_DATA_FRAMELOCKS_USED_BY_GPU - Returns the list of
# Framelock devices currently connected to the given GPU.
# NV_CTRL_BINARY_DATA_GPUS_USING_VCSC - Deprecated
# Returns the list of GPU devices connected to the given VCS.
# using a NV_CTRL_TARGET_TYPE_VCSC target.  This attribute cannot be
# queried using a NV_CTRL_TARGET_TYPE_X_SCREEN and cannot be queried using
# a  NV_CTRL_TARGET_TYPE_X_GPU
# R-DV
# NV_CTRL_BINARY_DATA_VCSCS_USED_BY_GPU - Deprecated
# Returns the VCSC device that is controlling the given GPU.
# queried using a NV_CTRL_TARGET_TYPE_X_SCREEN
# NV_CTRL_BINARY_DATA_COOLERS_USED_BY_GPU - Returns the coolers that
# are cooling the given GPU.
# NV_CTRL_BINARY_DATA_GPUS_USED_BY_LOGICAL_XSCREEN - Returns the list of
# GPUs currently driving the given X screen.  If Xinerama is enabled, this
# will return all GPUs that are driving any X screen.
# NV_CTRL_BINARY_DATA_THERMAL_SENSORS_USED_BY_GPU - Returns the sensors that
# are attached to the given GPU.
# NV_CTRL_BINARY_DATA_GLASSES_PAIRED_TO_3D_VISION_PRO_TRANSCEIVER - Returns
# the id of the glasses that are currently paired to the given
# 3D Vision Pro transceiver.
# using a NV_CTRL_TARGET_TYPE_3D_VISION_PRO_TRANSCEIVER target.
# NV_CTRL_BINARY_DATA_DISPLAY_TARGETS - Returns all the display devices
# currently connected to any GPU on the X server.
# This attribute can only be queried through XNVCTRLQueryTargetBinaryData().
# NV_CTRL_BINARY_DATA_DISPLAYS_CONNECTED_TO_GPU - Returns the list of
# display devices that are connected to the GPU target.
# NV_CTRL_BINARY_DATA_METAMODES_VERSION_2  - Returns values similar to
# NV_CTRL_BINARY_DATA_METAMODES(_VERSION_1) but also returns extended syntax
# information to indicate a specific display device, as well as other
# per-display device flags as "token=value" pairs.  For example:
# The display device names have the form "DPY-%d", where the integer
# part of the name is the NV-CONTROL target ID for that display device
# for this instance of the X server.  Note that display device NV-CONTROL
# target IDs are not guaranteed to be the same from one run of the X
# server to the next.
# NV_CTRL_BINARY_DATA_DISPLAYS_ENABLED_ON_XSCREEN - Returns the list of
# display devices that are currently scanning out the X screen target.
# NV_CTRL_BINARY_DATA_DISPLAYS_ASSIGNED_TO_XSCREEN - Returns the list of
# display devices that are currently assigned the X screen target.
# NV_CTRL_BINARY_DATA_GPU_FLAGS - Returns a list of flags for the
# given GPU.  A flag can, for instance, be a capability which enables
# or disables some features according to the GPU state.
# Stereo and display composition transformations are mutually exclusive.
# Overlay and display composition transformations are mutually exclusive.
# Depth 8 and display composition transformations are mutually exclusive.
# NV_CTRL_BINARY_DATA_DISPLAYS_ON_GPU - Returns the list of valid
# display devices that can be connected to the GPU target.
# String Operation Attributes:
# These attributes are used with the XNVCTRLStringOperation()
# function; a string is specified as input, and a string is returned
# as output.
# Unless otherwise noted, all attributes can be operated upon using
# an NV_CTRL_TARGET_TYPE_X_SCREEN target.
# NV_CTRL_STRING_OPERATION_ADD_METAMODE - provide a MetaMode string
# as input, and returns a string containing comma-separated list of
# "token=value" pairs as output.  Currently, the only output token is
# "id", which indicates the id that was assigned to the MetaMode.
# All ModeLines referenced in the MetaMode must already exist for
# each display device (as returned by the
# NV_CTRL_BINARY_DATA_MODELINES attribute).
# The input string can optionally be prepended with a string of
# comma-separated "token=value" pairs, separated from the MetaMode
# string by "::".  Currently, the only valid token is "index" which
# indicates the insertion index for the MetaMode.
# Input: "index=5 :: 1600x1200+0+0, 1600x1200+1600+0"
# Output: "id=58"
# which causes the MetaMode to be inserted at position 5 in the
# MetaMode list (all entries after 5 will be shifted down one slot in
# the list), and the X server's containing mode stores 58 as the
# VRefresh, so that the MetaMode can be uniquely identified through
# XRandR and XF86VidMode.
# ----
# NV_CTRL_STRING_OPERATION_GTF_MODELINE - provide as input a string
# of comma-separated "token=value" pairs, and returns a ModeLine
# string, computed using the GTF formula using the parameters from
# the input string.  Valid tokens for the input string are "width",
# "height", and "refreshrate".
# Input: "width=1600, height=1200, refreshrate=60"
# Output: "160.96  1600 1704 1880 2160  1200 1201 1204 1242  -HSync +VSync"
# This operation does not have any impact on any display device's
# modePool, and the ModeLine is not validated; it is simply intended
# for generating ModeLines.
# NV_CTRL_STRING_OPERATION_CVT_MODELINE - provide as input a string
# string, computed using the CVT formula using the parameters from
# "height", "refreshrate", and "reduced-blanking".  The
# "reduced-blanking" argument can be "0" or "1", to enable or disable
# use of reduced blanking for the CVT formula.
# Input: "width=1600, height=1200, refreshrate=60, reduced-blanking=1"
# Output: "130.25  1600 1648 1680 1760  1200 1203 1207 1235  +HSync -VSync"
# NV_CTRL_STRING_OPERATION_BUILD_MODEPOOL - build a ModePool for the
# specified display device on the specified target (either an X
# screen or a GPU).  This is typically used to generate a ModePool
# for a display device on a GPU on which no X screens are present.
# Currently, a display device's ModePool is static for the life of
# the X server, so XNVCTRLStringOperation will return FALSE if
# requested to build a ModePool on a display device that already has
# a ModePool.
# The string input to BUILD_MODEPOOL may be NULL.  If it is not NULL,
# then it is interpreted as a double-colon ("::") separated list
# of "option=value" pairs, where the options and the syntax of their
# values are the X configuration options that impact the behavior of
# modePool construction; namely:
# An example input string might look like:
# This request currently does not return a string.
# DG
# NV_CTRL_STRING_OPERATION_GVI_CONFIGURE_STREAMS - Configure the streams-
# to-jack+channel topology for a GVI (Graphics capture board).
# The string input to GVI_CONFIGURE_STREAMS may be NULL.  If this is the
# case, then the current topology is returned.
# If the input string to GVI_CONFIGURE_STREAMS is not NULL, the string
# is interpreted as a semicolon (";") separated list of comma-separated
# lists of "option=value" pairs that define a stream's composition.  The
# available options and their values are:
# This example shows a possible configuration for capturing 3G input:
# Applications should query the following attributes to determine
# possible combinations:
# Note: A jack+channel pair can only be tied to one link/stream.
# Upon successful configuration or querying of this attribute, a string
# representing the current topology for all known streams on the device
# will be returned.  On failure, NULL is returned.
# NV_CTRL_STRING_OPERATION_PARSE_METAMODE - Parses the given MetaMode string
# and returns the validated MetaMode string - possibly re-calculating various
# values such as ViewPortIn.  If the MetaMode matches an existing MetaMode,
# the details of the existing MetaMode are returned.  If the MetaMode fails to
# be parsed, NULL is returned.
# NV-CONTROL major op numbers. these constants identify the request type
# various lists that go with attrs, but are handled more compactly
# this way. these lists are indexed by the possible values of their attrs
# and are explained in NVCtrl.h
# Attribute Targets
# Targets define attribute groups.  For example, some attributes are only
# valid to set on a GPU, others are only valid when talking about an
# X Screen.  Target types are then what is used to identify the target
# group of the attribute you wish to set/query.
# Here are the supported target types:
# Visual Computing System - deprecated.  To be removed along with all
# VCS-specific attributes in a later release.
# e.g., fan
# Targets, to indicate where a command should be executed.
# Xlib.ext.xfixes -- XFIXES extension module
# Xlib.support.__init__ -- support code package
# The platform specific modules should not be listed here
# Xlib.support.vms_connect -- VMS-type display connection functions
# Use dummy display if none is set.  We really should
# check DECW$DISPLAY instead, but that has to wait
# Always return a host, since we don't have AF_UNIX sockets
# Always use TCP/IP sockets.  Later it would be nice to
# be able to use DECNET and LOCAL connections.
# VMS doesn't have xauth
# Xlib.support.lock -- allocate a lock
# This might be nerdy, but by assigning methods like this
# instead of defining them all, we create a single bound
# method object once instead of one each time one of the
# methods is called.
# This gives some speed improvements which should reduce the
# impact of the threading infrastructure in the regular code,
# when not using threading.
# More optimisations: we use a single lock for all lock instances
# Xlib.support.connect -- OS-independent display connection functions
# List the modules which contain the corresponding functions
# Figure out which OS we're using.
# sys.platform is either "OS-ARCH" or just "OS".
# Xlib.support.unix_connect -- Unix-type display connection functions
# Darwin funky socket.
# Use $DISPLAY if display isn't provided
# Host is mandatory when protocol is TCP.
# TCP socket, note the special case: `unix:0.0` is equivalent to `:0.0`.
# Unix socket.
# Use abstract address.
# If no protocol/host was specified, fallback to TCP.
# Make sure that the connection isn't inherited in child processes.
# According to PEP446, in Python 3.4 and above,
# it is not inherited in child processes by default.
# However, just in case, we explicitly make it non-inheritable.
# Also, we don't use the code like the following,
# because there would be no possibility of backporting to past versions.
# We just check if the socket has `set_inheritable`.
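# That duck-typing check might be sketched like this (the helper name is our
# own):

```python
def make_noninheritable(sock):
    """Best-effort: keep `sock` from leaking into child processes.

    On Python 3.4+ sockets are non-inheritable by default (PEP 446),
    but we set it explicitly anyway.  Older Pythons fall back to
    fcntl's FD_CLOEXEC flag, which is unavailable on Windows.
    """
    if hasattr(sock, "set_inheritable"):
        sock.set_inheritable(False)
        return
    try:
        import fcntl
    except ImportError:
        return  # Windows without set_inheritable: nothing we can do
    flags = fcntl.fcntl(sock.fileno(), fcntl.F_GETFD)
    fcntl.fcntl(sock.fileno(), fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
```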
# On Windows,
# Python doesn't support fcntl module because Windows doesn't have fcntl API.
# At least by not importing fcntl, we will be able to import python-xlib on Windows.
# so.. unfortunately, for Python 3.3 and below, on Windows,
# we can't make sure that the connection isn't inherited in child processes for now.
# Translate socket address into the xauth domain
# Convert the pretty-printed IP number into a 4-octet string.
# Sometimes these modules are too damn smart...
# We need to do this to handle ssh's X forwarding.  It sets
# $DISPLAY to localhost:10, but stores the xauth cookie as if
# DISPLAY was :10.  Hence, if localhost and not found, try
# again as a Unix socket.
# Find authorization cookie
# We could parse .Xauthority, but xauth is simpler
# although more inefficient
# If there's a cookie, it is of the format
# We're interested in the two last parts for the
# connection establishment
# Translate hexcode into binary
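# For example, the last two fields of an `xauth list` line can be pulled out
# and the cookie translated from hex to binary like so (a sketch: the line
# format is assumed, and `parse_xauth_line` is our own name):

```python
def parse_xauth_line(line):
    """Extract (auth-name, raw-cookie) from one `xauth list` line.

    Assumed format:  <display>  <auth-name>  <hex-cookie>
    """
    fields = line.split()
    name, hexcode = fields[-2], fields[-1]
    # Translate the hex cookie into the raw bytes the X server expects.
    return name, bytes.fromhex(hexcode)
```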
# Xlib.xobject.__init__ -- glue for Xlib.xobject package
# Xlib.xobject.cursor -- cursor object
# Xlib.xobject.colormap -- colormap object
# Xlib.xobject.resource -- any X resource object
# Xlib.xobject.icccm -- ICCCM structures
# withdrawn is totally bogus according to
# ICCCM, but some window managers seem to
# use this value to identify dockapps.
# Oh well.
# Xlib.xobject.fontable -- fontable objects (GC, font)
# Xlib.xobject.drawable -- drawable objects (window and pixmap)
# Other X resource objects
# Inter-client communication conventions
# Trivial little method for putting PIL images.  Will break on anything
# but depth 1 or 24...
# FIXME: at least basic support for compound text would be nice.
# elif prop.property_type == self.display.get_atom('COMPOUND_TEXT'):
# Helper function for getting structured properties.
# pname and ptype are atoms, and pstruct is a Struct object.
# Returns a DictWrapper, or None
# Helper function for setting structured properties.
# hints is a mapping or a DictWrapper, keys is a mapping.  keys
# will be modified.  onerror is the error handler.
# Keys for get_config_var() that are never converted to Python integers.
# Downstream distributors can overwrite the default install scheme.
# This is done to support downstream modifications where distributors change
# the installation layout (eg. different site-packages directory).
# So, distributors will change the default scheme to one that correctly
# represents their layout.
# This presents an issue for projects/people that need to bootstrap virtual
# environments, like virtualenv. As distributors might now be customizing
# the default install scheme, there is no guarantee that the information
# returned by sysconfig.get_default_scheme/get_paths is correct for
# a virtual environment, the only guarantee we have is that it is correct
# for the *current* environment. When bootstrapping a virtual environment,
# we need to know its layout, so that we can place the files in the
# correct locations.
# The "*_venv" install scheme is a scheme to bootstrap virtual environments,
# essentially identical to the default posix_prefix/nt schemes.
# Downstream distributors who patch posix_prefix/nt scheme are encouraged to
# leave the following schemes unchanged
# For the OS-native venv scheme, we essentially provide an alias:
# NOTE: site.py has copy of this function.
# Sync it when modify this function.
# NOTE: When modifying "purelib" scheme, update site._get_path() too.
# Mutex guarding initialization of _CONFIG_VARS.
# True iff _CONFIG_VARS has been fully initialized.
# sys.executable can be empty if argv[0] has been changed and Python is
# unable to retrieve the real program name
# In a virtual environment, `sys._home` gives us the target directory
# `_PROJECT_BASE` for the executable that created it when the virtual
# python is an actual executable ('venv --copies' or Windows).
# In a source build, the executable is in a subdirectory of the root
# that we want (<root>\PCbuild\<platname>).
# `_BASE_PREFIX` is used as the base installation is where the source
# will be.  The realpath is needed to prevent mount point confusion
# that can occur with just string comparisons.
# set for cross builds
# On POSIX-y platforms, Python will:
# - Build from .h files in 'headers' (which is only added to the
# - Install .h files to 'include'
# On Windows we want to substitute 'lib' for schemes rather
# than the native value (without modifying vars, in case it
# was passed in)
# _sysconfigdata is generated at build time, see _generate_posix_vars()
# For cross builds, the path to the target's sysconfigdata must be specified
# so it can be imported. It cannot be in PYTHONPATH, as foreign modules in
# sys.path can cause crashes when loaded by the host interpreter.
# Rely on truthiness as a valueless env variable is still an empty string.
# See OS X note in _generate_posix_vars re _sysconfigdata.
# set basic install directories
# Add EXT_SUFFIX, SOABI, and Py_GIL_DISABLED
# public APIs
# Normalized versions of prefix and exec_prefix are handy to have;
# in fact, these are the standard versions used most places in the
# Distutils.
# FIXME: This gets overwritten by _init_posix.
# sys.abiflags may not be defined on all platforms.
# Setting 'userbase' is done below the call to the
# init function to enable using 'get_config_var' in
# the init-function.
# e.g., 't' for free-threaded or '' for default build
# Always convert srcdir to an absolute path
# If srcdir is a relative path (typically '.' or '..')
# then it should be interpreted relative to the directory
# containing Makefile.
# srcdir is not meaningful since the installation is
# spread about the filesystem.  We choose the
# directory containing the Makefile since we know it exists.
# OS X platforms require special customization to handle
# multi-architecture, multi-os-version installers
# Avoid claiming the lock once initialization is complete.
# Test again with the lock held to avoid races. Note that
# we test _CONFIG_VARS here, not _CONFIG_VARS_INITIALIZED,
# to ensure that recursive calls to get_config_vars()
# don't re-enter init_config_vars().
# If the site module initialization happened after _CONFIG_VARS was
# initialized, a virtual environment might have been activated, resulting in
# variables like sys.prefix changing their value, so we need to re-init the
# config vars (see GH-126789).
# XXX what about the architecture? NT is Intel or Alpha
# Wheel tags use the ABI names from Android's own tools.
# We can't use "platform.architecture()[0]" because a
# This algorithm does multiple expansion, so if vars['foo'] contains
# "${bar}", it will expand ${foo} to ${bar}, and then expand
# ${bar}... and so forth.  This is fine as long as 'vars' comes from
# 'parse_makefile()', which takes care of such expansions eagerly,
# according to make's variable expansion semantics.
# Regexes needed for parsing Makefile (and similar syntaxes,
# like old-style Setup files).
# `$$' is a literal `$' in make
# insert literal `$'
# do variable interpolation here
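The multiple-expansion behaviour described above can be sketched with a hypothetical mini-expander (this is not the real sysconfig parser; the function name and the NUL sentinel for protecting `$$` are illustration-only assumptions):

```python
import re

# "${name}" or "$(name)" references a variable; "$$" is a literal "$".
_findvar = re.compile(r"\$[({]([A-Za-z][A-Za-z0-9_]*)[)}]")

def expand(value, vars):
    """Repeatedly substitute ${name} references until none remain."""
    # Protect "$$" so it survives as a literal "$" (sentinel is an assumption).
    value = value.replace("$$", "\0")
    while True:
        m = _findvar.search(value)
        if m is None:
            break
        # Splice in the referenced value; it may itself contain ${...},
        # which the next loop iteration will expand in turn.
        value = value[:m.start()] + str(vars.get(m.group(1), "")) + value[m.end():]
    return value.replace("\0", "$")

print(expand("${prefix}/lib", {"prefix": "${base}/opt", "base": "/usr"}))
# -> /usr/opt/lib
```

This shows why eager expansion in `parse_makefile()` matters: `${prefix}` expands to a string that still contains `${base}`, which is then expanded on the following pass.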
# Variables with a 'PY_' prefix in the makefile. These need to
# be made available without that prefix through sysconfig.
# Special care is needed to ensure that variable expansion works, even
# if the expansion uses the name without a prefix.
# get it on a subsequent round
# do it like make: fall back to environment
# Adds unresolved variables to the done dict.
# This is disabled when called from distutils.sysconfig
# bogus variable reference (e.g. "prefix=$/opt/python");
# just drop it since we can't deal
# strip spurious spaces
# save the results in the global dictionary
# load the installed Makefile:
# load the installed pyconfig.h:
# On AIX, there are wrong paths to the linker scripts in the Makefile
# -- these paths are relative to the Python source, but when installed
# the scripts are in another directory.
# There's a chicken-and-egg situation on OS X with regards to the
# _sysconfigdata module after the changes introduced by #15298:
# get_config_vars() is called by get_platform() as part of the
# `make pybuilddir.txt` target -- which is a precursor to the
# _sysconfigdata.py module being constructed.  Unfortunately,
# get_config_vars() eventually calls _init_posix(), which attempts
# to import _sysconfigdata, which we won't have built yet.  In order
# for _init_posix() to work, if we're on Darwin, just mock up the
# _sysconfigdata module manually and populate it with the build vars.
# This is more than sufficient for ensuring the subsequent call to
# get_platform() succeeds.
# Create file used for sys.path fixup -- see Modules/getpath.c
# This directory is a Python package.
# Copyright 2009 Brian Quinlan. All Rights Reserved.
# This import is required to load the multiprocessing.connection submodule
# so that it can be accessed later as `mp.connection`
# Please note that we do not take the self._lock when
# calling clear() (to avoid deadlocking) so this method can
# only be called safely from the same thread as all calls to
# wakeup(), even if you hold the lock. Otherwise we
# might try to read from the closed pipe.
# call not protected by ProcessPoolExecutor._shutdown_lock
# Register for `_python_exit()` to be called just before joining all
# non-daemon threads. This is used instead of `atexit.register()` for
# compatibility with subinterpreters, which no longer support daemon threads.
# See bpo-39812 for context.
# Controls how many more calls than processes will be queued in the call queue.
# A smaller number will mean that processes spend more time idle waiting for
# work while a larger number will make Future.cancel() succeed less frequently
# (Futures in the call queue cannot be cancelled).
# On Windows, WaitForMultipleObjects is used to wait for processes to finish.
# It can wait on, at most, 63 objects. There is an overhead of two objects:
# - the result queue reader
# - the thread wakeup reader
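The arithmetic implied by the two comments above can be spelled out directly (the constant names below are assumptions; the 63-object limit and the two bookkeeping readers are taken from the comment, not queried from the Win32 API):

```python
# WaitForMultipleObjects can wait on at most 63 objects per call, and
# two slots are consumed by bookkeeping handles rather than workers.
MAX_WAIT_OBJECTS = 63   # upper bound for one WaitForMultipleObjects call
OVERHEAD = 2            # result queue reader + thread wakeup reader
MAX_WINDOWS_WORKERS = MAX_WAIT_OBJECTS - OVERHEAD
print(MAX_WINDOWS_WORKERS)  # 61
```

This is why ProcessPoolExecutor caps `max_workers` at 61 on Windows.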
# Hack to embed stringification of remote traceback in local traceback
# Traceback object needs to be garbage-collected as its frames
# contain references to all the objects in the exception scope
# work_item can be None if another process terminated. In this
# case, the executor_manager_thread fails all work_items
# with BrokenProcessPool
# The parent will notice that the process stopped and
# mark the pool broken
# Wake up queue management thread
# Liberate the resource as soon as possible, to avoid holding onto
# open files or shared memory that is not needed anymore
# Store references to necessary internals of the executor.
# A _ThreadWakeup to allow waking up the queue_manager_thread from the
# main Thread and avoid deadlocks caused by permanently locked queues.
# A weakref.ref to the ProcessPoolExecutor that owns this thread. Used
# to determine if the ProcessPoolExecutor has been garbage collected
# and that the manager can exit.
# When the executor gets garbage collected, the weakref callback
# will wake up the queue management thread so that it can terminate
# if there is no pending work item.
# A list of the ctx.Process instances used as workers.
# A ctx.Queue that will be filled with _CallItems derived from
# _WorkItems for processing by the process workers.
# A ctx.SimpleQueue of _ResultItems generated by the process workers.
# A queue.Queue of work ids e.g. Queue([5, 6, ...]).
# Maximum number of tasks a worker process can execute before
# exiting safely
# A dict mapping work ids to _WorkItems e.g.
# Main loop for the executor manager thread.
# gh-109047: During Python finalization, self.call_queue.put() can
# fail with RuntimeError because the queue may need to spawn a thread,
# and thread creation is no longer possible at that point.
# Delete reference to result_item to avoid keeping references
# while waiting on new results.
# When only canceled futures remain in pending_work_items, our
# next call to wait_result_broken_or_wakeup would hang forever.
# This makes sure we have some running futures or none at all.
# Since no new work items can be added, it is safe to shutdown
# this thread if there are no pending work items.
# Fills call_queue with _WorkItems from pending_work_items.
# This function never blocks.
# Wait for a result to be ready in the result_queue while checking
# that all worker processes are still running, or for a wake up
# signal to be sent. The wake up signals come either from new tasks being
# submitted, from the executor being shutdown/gc-ed, or from the
# shutdown of the python interpreter.
# Process the received result_item. This can be either the PID of a
# worker that exited gracefully or a _ResultItem
# Received a _ResultItem so mark the future as completed.
# work_item can be None if another process terminated (see above)
# Check whether we should start shutting down the executor.
# No more work items can be added if:
# Terminate the executor because it is in a broken state. The cause
# argument can be used to display more information on the error that
# led the executor to become broken.
# Mark the process pool broken so that submits fail right now.
# All pending tasks are to be marked failed with the following
# BrokenProcessPool error
# Mark pending tasks as failed.
# set_exception() fails if the future is cancelled: ignore it.
# Trying to check if the future is cancelled before calling
# set_exception() would leave a race condition if the future is
# cancelled between the check and set_exception().
# Delete references to object. See issue16284
# Terminate remaining workers forcibly: the queues or their
# locks may be in a dirty state and block forever.
# clean up resources
# Flag the executor as shutting down and cancel remaining tasks if
# requested as early as possible if it is not gc-ed yet.
# Cancel pending work items if requested.
# Cancel all pending futures and update pending_work_items
# to only have futures that are currently running.
# Drain work_ids_queue since we no longer need to
# add items to the call queue.
# Make sure we do this only once to not waste time looping
# on running processes over and over.
# Send the right number of sentinels, to make sure all children are
# properly terminated.
# If broken, call_queue was closed and so can no longer be used.
# Release the queue's resources as soon as possible.
# If .join() is not called on the created processes then
# some ctx.Queue methods may deadlock on Mac OS X.
# This is an upper bound on the number of children alive.
# sysconf not available or setting not available
# indeterminate limit, assume that limit is determined
# by available memory only
# minimum number of semaphores available
# according to POSIX
# https://github.com/python/cpython/issues/90622
# Management thread
# Map of pids to processes
# Shutdown is a two-step process.
# _ThreadWakeup is a communication channel used to interrupt the wait
# of the main loop of executor_manager_thread from another thread (e.g.
# when calling executor.submit or executor.shutdown). We do not use the
# _result_queue to send wakeup signals to the executor_manager_thread
# as it could result in a deadlock if a worker process dies with the
# _result_queue write lock still acquired.
# Care must be taken to only call clear and close from the
# executor_manager_thread, since _ThreadWakeup.clear() is not protected
# by a lock.
# Create communication channels for the executor
# Make the call queue slightly larger than the number of processes to
# prevent the worker processes from idling. But don't make it too big
# because futures in the call queue cannot be cancelled.
# Killed worker processes can produce spurious "broken pipe"
# tracebacks in the queue's own worker thread. But we detect killed
# processes anyway, so silence the tracebacks.
# Start the processes so that their sentinels are known.
# i.e., using fork.
# gh-132969: avoid error when state is reset and executor is still running,
# which will happen when shutdown(wait=False) is called.
# if there's an idle process, we don't need to spawn a new one.
# Assertion disabled as this codepath is also used to replace a
# worker that unexpectedly dies, even when using the 'fork' start
# method. That means there is still a potential deadlock bug. If a
# 'fork' mp_context worker dies, we'll be forking a new one when
# we know a thread is running (self._executor_manager_thread).
#assert self._safe_to_dynamically_spawn_children or not self._executor_manager_thread, 'https://github.com/python/cpython/issues/90622'
# To reduce the risk of opening too many files, remove references to
# objects that use file descriptors.
# Possible future states (for internal use by the futures package).
# The future was cancelled by the user...
# ...and _Waiter.add_cancelled() was called by a worker.
# Logger for internal use by the futures package.
# Careful not to keep a reference to the popped value
# reverse to keep finishing order
# Remove waiter from unfinished futures
# Break a reference cycle with the exception in self._exception
# The following methods should only be used by Executors and in tests.
# self._condition.notify_all() is not necessary because
# self.cancel() triggers a notification.
# Yield must be hidden in closure so that the futures are submitted
# before the first iterator value is required.
# Careful not to keep a reference to the popped future
# Lock that ensures that new workers are not created while the interpreter is
# shutting down. Must be held while mutating _threads_queues and _shutdown.
# At fork, reinitialize the `_global_shutdown_lock` lock in the child process
# Break a reference cycle with the exception 'exc'
# attempt to increment idle count if queue is empty
# Delete references to object. See GH-60488
# Exit if:
# Flag the executor as shutting down as early as possible if it
# is not gc-ed yet.
# Notice other workers
# Used to assign unique thread names when thread_name_prefix is not supplied.
# ThreadPoolExecutor is often used for:
# * CPU-bound tasks that release the GIL
# * I/O-bound tasks (which release the GIL, of course)
# We use process_cpu_count + 4 for both types of tasks.
# But we limit it to 32 to avoid consuming surprisingly large resources
# on many-core machines.
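The sizing heuristic above can be sketched as a small helper (the function name is an assumption; `os.process_cpu_count()` exists only on Python 3.13+, so a fallback to `os.cpu_count()` is included):

```python
import os

def default_max_workers():
    # Heuristic from the comment above: cpu count + 4, capped at 32.
    cpu = getattr(os, "process_cpu_count", os.cpu_count)() or 1
    return min(32, cpu + 4)

print(default_max_workers())
```

The `+ 4` keeps some threads available for I/O even on small machines, while the cap of 32 bounds resource use on many-core hosts.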
# if idle threads are available, don't spin new threads
# When the executor gets lost, the weakref callback will wake up
# the worker threads.
# Drain work queue and mark pending futures failed
# Drain all work items from the queue, and then cancel their
# associated futures.
# Send a wake-up to prevent threads calling
# _work_queue.get(block=True) from permanently blocking.
# informational
# redirection
# client error
# server errors
# HTTP Working Group                                        T. Berners-Lee
# INTERNET-DRAFT                                            R. T. Fielding
# <draft-ietf-http-v10-spec-00.txt>                     H. Frystyk Nielsen
# Expires September 8, 1995                                  March 8, 1995
# URL: http://www.ics.uci.edu/pub/ietf/http/draft-ietf-http-v10-spec-00.txt
# Network Working Group                                      R. Fielding
# Request for Comments: 2616                                       et al
# Obsoletes: 2068                                              June 1999
# Category: Standards Track
# URL: http://www.faqs.org/rfcs/rfc2616.html
# Log files
# ---------
# Here's a quote from the NCSA httpd docs about log file format.
# | The logfile format is as follows. Each line consists of:
# |
# | host rfc931 authuser [DD/Mon/YYYY:hh:mm:ss] "request" ddd bbbb
# |        host: Either the DNS name or the IP number of the remote client
# |        rfc931: Any information returned by identd for this person,
# |                - otherwise.
# |        authuser: If user sent a userid for authentication, the user name,
# |                  - otherwise.
# |        DD: Day
# |        Mon: Month (calendar name)
# |        YYYY: Year
# |        hh: hour (24-hour format, the machine's timezone)
# |        mm: minutes
# |        ss: seconds
# |        request: The first line of the HTTP request as sent by the client.
# |        ddd: the status code returned by the server, - if not available.
# |        bbbb: the total number of bytes sent,
# |              *not including the HTTP/1.0 header*, - if not available
# | You can determine the name of the file accessed through request.
# (Actually, the latter is only true if you know the server configuration
# at the time the request was made!)
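A line in the NCSA format quoted above can be picked apart with a regular expression. This is a hypothetical parser for illustration; `BaseHTTPRequestHandler` only *writes* such lines, it does not parse them:

```python
import re

# One line of the NCSA common log format described above:
# host rfc931 authuser [date] "request" status bytes
LOG_LINE = re.compile(
    r'(?P<host>\S+) (?P<rfc931>\S+) (?P<authuser>\S+) '
    r'\[(?P<date>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\S+) (?P<bytes>\S+)'
)

m = LOG_LINE.match(
    '127.0.0.1 - - [10/Oct/2023:13:55:36] "GET /index.html HTTP/1.0" 200 2326'
)
print(m.group('status'), m.group('request'))
# -> 200 GET /index.html HTTP/1.0
```

Note that `rfc931`, `authuser`, `status`, and `bytes` may each be a literal `-` when the value is unavailable, exactly as the quoted docs say.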
# For gethostbyaddr()
# Default error message template
# Seems to make sense in testing environment
# The Python system version, truncated to its first component.
# The server software version.  You may want to override this.
# The format is multiple whitespace-separated strings,
# where each string is of the form name[/version].
# The default request version.  This only affects responses up until
# the point where the request line is parsed, so it mainly decides what
# the client gets back when sending a malformed request line.
# Most web servers default to HTTP 0.9, i.e. don't send a status line.
# set in case of error on the first line
# Enough to determine protocol version
# RFC 2145 section 3.1 says there can be only one "." and
# gh-87389: The purpose of replacing '//' with '/' is to protect
# against open redirect attacks possibly triggered if the path starts
# with '//' because http clients treat //path as an absolute URI
# without scheme (similar to http://path) rather than a path.
# Reduce to a single /
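The protection described above amounts to collapsing any run of leading slashes into one. A minimal sketch (the helper name is an assumption, not the actual gh-87389 patch):

```python
def collapse_leading_slashes(path):
    # A path starting with '//' could be read by clients as a
    # scheme-relative URI ("//host/..."), enabling open redirects,
    # so reduce any leading run of '/' to a single '/'.
    if path.startswith('//'):
        path = '/' + path.lstrip('/')
    return path

print(collapse_leading_slashes('//evil.example/x'))  # /evil.example/x
```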
# Examine the headers and look for a Connection directive.
# Examine the headers and look for an Expect directive
# actually send the response if not already done.
# a read or a write timed out.  Discard this connection
# Message body is omitted for cases described in:
# HTML encode to prevent Cross Site Scripting attacks
# (see bug #1100201)
# https://en.wikipedia.org/wiki/List_of_Unicode_characters#Control_codes
# Essentially static class variables
# The version of the HTTP protocol we support.
# Set this to HTTP/1.1 to enable automatic keepalive
# MessageClass used to parse headers
# hack to maintain backwards compatibility
# redirect browser - doing basically what apache does
# check for trailing "/" which should return 404. See Issue17324
# The test for this was added in test_httpserver.py
# However, some OS platforms accept a trailing slash as a filename
# See discussion on python-dev and Issue34711 regarding
# parsing and rejection of filenames with a trailing slash
# Use browser cache if possible
# compare If-Modified-Since and time of last file modification
# ignore ill-formed values
# obsolete format with no timezone, cf.
# https://tools.ietf.org/html/rfc7231#section-7.1.1.1
# compare to UTC datetime of last modification
# remove microseconds, like in If-Modified-Since
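The comparison outlined in the comments above can be sketched as follows (a simplified stand-in for the real `send_head()` logic; the function name and the blanket exception list are assumptions):

```python
import datetime
import email.utils

def not_modified(if_modified_since, mtime):
    """Return True if the file at mtime is not newer than the header."""
    try:
        ims = email.utils.parsedate_to_datetime(if_modified_since)
    except (TypeError, IndexError, OverflowError, ValueError):
        return False                      # ignore ill-formed values
    if ims.tzinfo is None:
        # obsolete format with no timezone: RFC 7231 says assume UTC
        ims = ims.replace(tzinfo=datetime.timezone.utc)
    last_mod = datetime.datetime.fromtimestamp(mtime, datetime.timezone.utc)
    # remove microseconds to match the one-second header resolution
    last_mod = last_mod.replace(microsecond=0)
    return last_mod <= ims

print(not_modified('Sun, 06 Nov 1994 08:49:37 GMT', 0))  # True
```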
# Append / for directories or @ for symbolic links
# Note: a link to a directory displays with @ and links with /
# abandon query parameters
# Don't forget explicit trailing slash when normalizing. Issue17324
# Ignore components that are not a simple file/directory name
# Utilities for CGIHTTPRequestHandler
# Query component should not be involved.
# Similar to os.path.split(os.path.normpath(path)) but specific to URL
# path semantics rather than local operating system semantics.
# IndexError if more '..' than prior parts
# Determine platform specifics
# Make rfile unbuffered -- we need to read one line and then pass
# the rest to a subprocess, so we can't use buffered input.
# find an explicit query string, if present.
# dissect the part after the directory name into a script name &
# a possible additional path, to be stored in PATH_INFO.
# Reference: https://www6.uniovi.es/~antonio/ncsa_httpd/cgi/env.html
# XXX Much of the following could be prepared ahead of time!
# XXX REMOTE_IDENT
# XXX Other HTTP_* headers
# Since we're setting the env in the parent, provide empty
# values to override previously set values
# Unix -- fork as we should
# Always flush before forking
# throw away additional data [see bug #427345]
# Non-Unix -- use subprocess
# On Windows, use python.exe, not pythonw.exe
# ensure dual-stack is not disabled; ref #38907
# suppress exception when protocol is IPv4
# HTTPMessage, parse_headers(), and the HTTP status code constants are
# intentionally omitted for simplicity
# connection states
# another hack to maintain backwards compatibility
# Mapping status codes to official W3C names
# maximal line length when calling readline().
# Header name/value ABNF (http://tools.ietf.org/html/rfc7230#section-3.2)
# VCHAR          = %x21-7E
# obs-text       = %x80-FF
# header-field   = field-name ":" OWS field-value OWS
# field-name     = token
# field-value    = *( field-content / obs-fold )
# field-content  = field-vchar [ 1*( SP / HTAB ) field-vchar ]
# field-vchar    = VCHAR / obs-text
# obs-fold       = CRLF 1*( SP / HTAB )
# token          = 1*tchar
# tchar          = "!" / "#" / "$" / "%" / "&" / "'" / "*"
# VCHAR defined in http://tools.ietf.org/html/rfc5234#appendix-B.1
# the patterns for both name and value are more lenient than RFC
# definitions to allow for backwards compatibility
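For contrast with the lenient patterns, the strict RFC 7230 `token` rule in the ABNF above translates directly into a regex (a sketch, not the pattern the module actually uses):

```python
import re

# tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
#         "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
TCHAR = r"[!#$%&'*+\-.^_`|~0-9A-Za-z]"
TOKEN = re.compile(r"^%s+$" % TCHAR)   # token = 1*tchar

print(bool(TOKEN.match('Content-Type')))   # True
print(bool(TOKEN.match('Bad Header')))     # False: SP is not a tchar
```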
# These characters are not allowed within HTTP URL paths.
# Prevents CVE-2019-9740.  Includes control characters such as \r\n.
# We don't restrict chars above \x7f as putrequest() limits us to ASCII.
# Arguably only these _should_ be allowed:
# We are more lenient for assumed real-world compatibility purposes.
# These characters are not allowed within HTTP method names
# to prevent http header injection.
# We always set the Content-Length header for these methods because some
# servers will otherwise respond with a 411
# XXX The only usage of this method is in
# http.server.CGIHTTPRequestHandler.  Maybe move the code there so
# that it doesn't need to be part of the public API.  The API has
# never been defined so this could cause backwards compatibility
# issues.
# See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details.
# The bytes from the socket object are iso-8859-1 strings.
# See RFC 2616 sec 2.2 which notes an exception for MIME-encoded
# text following RFC 2047.  The basic status line parsing only
# accepts iso-8859-1.
# If the response includes a content-length header, we need to
# make sure that the client doesn't read more than the
# specified number of bytes.  If it does, it will block until
# the server times out and closes the connection.  This will
# happen if a self.fp.read() is done (without a size) whether
# self.fp is buffered or not.  So, no self.fp.read() by
# clients unless they know what they are doing.
# The HTTPResponse object is returned via urllib.  The clients
# of http and urllib expect different attributes for the
# headers.  headers is used here and supports urllib.  msg is
# provided as a backwards compatibility layer for http
# clients.
# from the Status-Line of the response
# HTTP-Version
# Status-Code
# Reason-Phrase
# is "chunked" being used?
# bytes left to read in current chunk
# number of bytes left in response
# conn will close at end of response
# Presumably, the server closed the connection before
# sending a valid response.
# empty version will cause next test to fail.
# The status code is a three-digit number
# we've already started reading the response
# read until we get a non-100 response
# skip the header from the 100 response
# Some servers might still return "0.9", treat it as 1.0 anyway
# use HTTP/1.1 code for HTTP/1.x where x>=1
# are we using the chunked-style of transfer encoding?
# will the connection close at the end of the response?
# do we have a Content-Length?
# NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked"
# ignore nonsensical negative lengths
# does the body have a fixed length? (of zero)
# 1xx codes
# if the connection remains open, and we aren't using chunked, and
# a content-length was not provided, then assume that the connection
# WILL close.
# An HTTP/1.1 proxy is assumed to stay open unless
# explicitly closed.
# Some HTTP/1.0 implementations have support for persistent
# connections, using rules different than HTTP/1.1.
# For older HTTP, Keep-Alive indicates persistent connection.
# At least Akamai returns a "Connection: Keep-Alive" header,
# which was supposed to be sent by the client.
# Proxy-Connection is a netscape hack.
# otherwise, assume it will close
# set "closed" flag
# These implementations are for the benefit of io.BufferedReader.
# XXX This class should probably be revised to act more like
# the "raw stream" that BufferedReader expects.
# End of "raw stream" methods
# NOTE: it is possible that we will not ever call self.close(). This
# case occurs when will_close is TRUE, length is None, and we read up
# to the last byte, but NOT past it.
# IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be
# called, meaning self.isclosed() is meaningful.
# clip the read to the "end of response"
# Ideally, we would raise IncompleteRead if the content-length
# wasn't satisfied, but it might break compatibility.
# Amount is not given (unbounded read) so we must check self.length
# we read everything
# we do not use _safe_read() here because this may be a .will_close
# connection, and the user is reading more bytes than will be provided
# (for example, reading in 1k chunks)
# Read the next chunk size from the file
# strip chunk-extensions
# close the connection as protocol synchronisation is
# probably lost
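The chunk-size parsing step described above reduces to a few lines (a hypothetical helper; the real code lives in `HTTPResponse._read_next_chunk_size()`):

```python
def parse_chunk_size(line):
    # A chunk-size line looks like b'1a;name=value\r\n'. Everything
    # after the first ';' is a chunk-extension and is discarded; the
    # remainder is a hexadecimal size. A real client treats ValueError
    # here as lost protocol synchronisation and closes the connection.
    line = line.split(b';', 1)[0]   # strip chunk-extensions
    return int(line, 16)            # int() tolerates the trailing CRLF

print(parse_chunk_size(b'1a;name=value\r\n'))  # 26
```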
# read and discard trailer up to the CRLF terminator
### note: we shouldn't have any trailers!
# a vanishingly small number of sites EOF without
# sending the trailer
# return self.chunk_left, reading a new chunk if necessary.
# chunk_left == 0: at the end of the current chunk, need to close it
# chunk_left == None: No current chunk, should read next.
# This function returns non-zero or None if the last chunk has
# been read.
# Can be 0 or None
# We are at the end of chunk, discard chunk end
# toss the CRLF at the end of the chunk
# last chunk: 1*("0") [ chunk-extension ] CRLF
# we read everything; close the "file"
# Having this enables IOBase.readline() to read more than one
# byte at a time
# Fallback to IOBase readline which uses peek() and read()
# Strictly speaking, _get_chunk_left() may cause more than one read,
# but that is ok, since that is to satisfy the chunked protocol.
# if n is negative or larger than chunk_left
# peek doesn't worry about protocol
# peek is allowed to return more than requested.  Just request the
# entire chunk, and truncate what we get.
# We override IOBase.__iter__ so that it doesn't check for closed-ness
# For compatibility with old-style urllib responses.
# Function also used by urllib.request to be able to set the check_hostname
# attribute on a context object.
# send ALPN extension to indicate HTTP/1.1 protocol
# enable PHA for TLS 1.3 connections if available
# do an explicit check for not None here to distinguish
# between unset and set but empty
# file-like object.
# does it implement the buffer protocol (bytes, bytearray, array)?
# This is stored as an instance variable to allow unit
# tests to replace it with a suitable mockup
# ipv6 addresses have [...]
# http://foo.com:/ == http://foo.com/
# Might fail in OSs that don't implement TCP_NODELAY
# close it manually... there may be other refs
# create a consistent interface to message_body
# Let file-like take precedence over byte-like.  This
# is needed to allow the current position of mmap'ed
# files to be taken into account.
# this is solely to check to see if message_body
# implements the buffer API.  it /would/ be easier
# to capture if PyObject_CheckBuffer was exposed
# to Python.
# the object implements the buffer interface and
# can be passed directly into socket methods
# chunked encoding
# end chunked transfer
# if a prior response has been completed, then forget about it.
# in certain cases, we cannot issue another request on this connection.
# this occurs when:
# if there is no prior response, then we can request at will.
# if point (2) is true, then we will have passed the socket to the
# response (effectively meaning, "there is no prior response"), and
# will open a new one when a new request is made.
# Note: if a prior response exists, then we *can* start a new request.
# Save the method for use later in the response phase
# Issue some standard headers for better HTTP/1.1 compliance
# this header is issued *only* for HTTP/1.1
# connections. more specifically, this means it is
# only issued when the client uses the new
# HTTPConnection() class. backwards-compat clients
# will be using HTTP/1.0 and those clients may be
# issuing this header themselves. we should NOT issue
# it twice; some web servers (such as Apache) barf
# when they see two Host: headers
# If we need a non-standard port, include it in the
# header.  If the request is going through a proxy,
# the Host header should still name the host of the
# actual URL, not the host of the proxy.
# As per RFC 2732, IPv6 addresses should be wrapped with []
# when used as Host header
# note: we are assuming that clients will not attempt to set these
# we only want a Content-Encoding of "identity" since we don't
# support encodings such as x-gzip or x-deflate.
# we can accept "chunked" Transfer-Encodings, but no others
# NOTE: no TE header implies *only* "chunked"
#self.putheader('TE', 'chunked')
# if TE is supplied in the header, then it must appear in a
# Connection header.
#self.putheader('Connection', 'TE')
# For HTTP/1.0, the server will assume "not chunked"
# ASCII also helps prevent CVE-2019-9740.
# prevent http header injection
# Prevent CVE-2019-9740.
# Prevent CVE-2019-18348.
# Honor explicitly requested Host: and Accept-Encoding: headers.
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
# if content-length cannot be automatically determined, fall
# back to chunked encoding
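The three numbered conditions above can be restated as a small decision helper. This is a hypothetical sketch; the real logic lives inside `HTTPConnection._send_request()`:

```python
def should_chunk(http_vsn, headers, body, encode_chunked=False):
    # HTTP/1.0 never chunks (the server will assume "not chunked").
    if http_vsn != 11:
        return False
    if encode_chunked:
        return True
    names = {name.lower() for name in headers}
    # 1. content-length has not been explicitly set, and
    # 3. Transfer-Encoding has not been explicitly set by the caller.
    if 'content-length' in names or 'transfer-encoding' in names:
        return False
    # 2. the body is a file or iterable, but not a str or bytes-like.
    if body is None or isinstance(body, (str, bytes, bytearray)):
        return False
    return hasattr(body, 'read') or hasattr(body, '__iter__')

print(should_chunk(11, {}, iter([b'x'])))               # True
print(should_chunk(11, {'Content-Length': '1'}, iter([b'x'])))  # False
```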
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
# if a prior response exists, then it must be completed (otherwise, we
# cannot read this response's header to determine the connection-close
# behavior)
# note: if a prior response existed, but was connection-close, then the
# socket and response were made independent of this HTTPConnection
# object since a new request requires that we open a whole new connection
# this means the prior response had one of two states:
# this effectively passes the connection to the response
# remember this, so we can tell when it is complete
# Subclasses that define an __init__ must call Exception.__init__
# or define self.args.  Otherwise, str() will fail.
####
# Copyright 2000 by Timothy O'Malley <timo@alum.mit.edu>
# Permission to use, copy, modify, and distribute this software
# and its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Timothy O'Malley not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
# Timothy O'Malley DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS
# SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL Timothy O'Malley BE LIABLE FOR
# ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
# PERFORMANCE OF THIS SOFTWARE.
# Id: Cookie.py,v 2.29 2000/08/23 05:28:49 timo Exp
# Import our required modules
# Define an exception visible to External modules
# These quoting routines conform to the RFC2109 specification, which in
# turn references the character definitions from RFC2068.  They provide
# a two-way quoting algorithm.  Any non-text character is translated
# into a 4 character sequence: a backslash followed by the
# three-digit octal equivalent of the character.  Any '\' or '"' is
# quoted with a preceding '\' backslash.
# Because of the way browsers really handle cookies (as opposed to what
# the RFC says) we also encode "," and ";".
# These are taken from RFC2068 and RFC2109.
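The quoting scheme described above, applied to one character at a time, looks like this (a hypothetical mini version; the real module builds a full translation table):

```python
_special = {'"': '\\"', '\\': '\\\\'}   # '\' and '"' get a preceding backslash

def quote_char(ch):
    if ch in _special:
        return _special[ch]
    if 0x20 <= ord(ch) < 0x7f:
        return ch                        # printable text stays as-is
    # non-text character: backslash + three-digit octal, e.g. '\n' -> \012
    return '\\%03o' % ord(ch)

print(quote_char('\n'))  # \012
print(quote_char('"'))   # \"
```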
# If there aren't any doublequotes,
# then there can't be any special characters.  See RFC 2109.
# We have to assume that we must decode this string.
# Down to work.
# Remove the "s
# Check for special sequences.  Examples:
# The _getdate() routine is used to set the expiration time in the cookie's HTTP
# header.  By default, _getdate() returns the current time in the appropriate
# "expires" format for a Set-Cookie header.  The one optional argument is an
# offset from now, in seconds.  For example, an offset of -3600 means "one hour
# ago".  The offset may be a floating-point number.
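A sketch of such a routine, following the description above (the exact field layout of the real `http.cookies._getdate()` is an assumption here):

```python
import time

def getdate_sketch(future=0):
    """Current UTC time plus `future` seconds, in Set-Cookie 'expires' style."""
    weekdayname = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
    monthname = [None, 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
    # struct_time fields: year, month, day, hour, minute, second, weekday, ...
    year, month, day, hh, mm, ss, wd, *_ = time.gmtime(time.time() + future)
    return '%s, %02d %3s %4d %02d:%02d:%02d GMT' % (
        weekdayname[wd], day, monthname[month], year, hh, mm, ss)

print(getdate_sketch(-3600))  # the expiry "one hour ago"
```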
# RFC 2109 lists these attributes as reserved:
# For historical reasons, these attributes are also reserved:
# This is an extension from Microsoft:
# This dictionary provides a mapping from the lowercase
# variant on the left to the appropriate traditional
# formatting on the right.
# Set default attributes
# It's a good key, so save it.
# Print javascript
# Build up our result
# First, the key=value pair
# Now add any defined attributes
# Pattern for finding cookie
# This used to be strict parsing based on the RFC2109 and RFC2068
# specifications.  I have since discovered that MSIE 3.0x doesn't
# follow the character rules outlined in those specs.  As a
# result, the parsing rules here are less strict.
# re.ASCII may be removed if safe.
# At long last, here is the cookie class.  Using this class is almost just like
# using a dictionary.  See this module's docstring for example usage.
# allow assignment of constructed Morsels (e.g. for pickling)
# self.update() wouldn't call our custom __setitem__
# Our starting point
# Length of string
# Parsed (type, key, value) triples
# A key=value pair was previously encountered
# We first parse the whole cookie string and reject it if it's
# syntactically invalid (this helps avoid some classes of injection
# attacks).
# Start looking for a cookie
# No more cookies
# We ignore attributes which pertain to the cookie
# mechanism as a whole, such as "$Version".
# See RFC 2965. (Does anyone care?)
# Invalid cookie string
# The cookie string is valid, apply it.
# current morsel
# only for the default HTTP port
# set to True to enable debugging via the logging module
# There are a few catch-all except: statements in this module, for
# catching input that's bad in unexpected ways.  Warn if any
# exceptions are caught there.
# Date/time conversion
# translate month name to number
# month numbers start with 1 (January)
# maybe it's already a number
# make sure clock elements are defined
# find "obvious" year
# convert UTC time tuple to seconds since epoch (not timezone-adjusted)
# adjust time using timezone string, to get absolute time since epoch
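The two steps above (tuple to epoch, then timezone adjustment) can be sketched as follows. This is a minimal illustration, not the module's implementation; the `_TZ_OFFSETS` table and the function name are assumed for the example.

```python
import calendar

# Assumed minimal offset table; real parsers know many more names.
_TZ_OFFSETS = {"GMT": 0, "UTC": 0, "EST": -5 * 3600}

def tuple_to_epoch(tt, tz="GMT"):
    # Convert the UTC-style time tuple to epoch seconds
    # (not timezone-adjusted yet).
    t = calendar.timegm(tt)
    if tz in _TZ_OFFSETS:
        offset = _TZ_OFFSETS[tz]
    else:
        # Numeric form like "-0500" or "+0100".
        sign = -1 if tz[0] == "-" else 1
        offset = sign * (int(tz[1:3]) * 3600 + int(tz[3:5]) * 60)
    # Subtract the local offset to get absolute time since epoch.
    return t - offset
```

For example, midnight at `-0500` is 05:00 UTC, so the result is 18000 seconds past the epoch for the 1970-01-01 tuple.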
# fast exit for strictly conforming string
# No, we need some messy parsing...
# Useless weekday
# tz is time zone specifier string
# loose regexp parse
# bad format
# XXX there's an extra bit of the timezone I'm ignoring here: is
# Header parsing
# quoted value
# unquoted value
# no value, a lone token
# concatenated headers, as per RFC 2616 section 4.2
# skip junk
# escape " and \
# RFC 2109 attrs (may turn up in Netscape cookies, too)
# XXX: The following does not strictly adhere to RFCs in that empty
# names and values are legal (the former will only appear once and will
# be overwritten if multiple occurrences are present). This is
# mostly to deal with backwards compatibility.
# allow for a distinction between present and empty and missing
# altogether
# This is an RFC 2109 cookie.
# convert expires date to seconds since epoch
# None if invalid
# This may well be wrong.  Which RFC is HDN defined in, if any (for
# the purposes of RFC 2965)?
# For the current implementation, what about IPv6?  Remember to look
# at other domain matching functions -- IPv6 addresses may contain ':'.
# Note that, if A or B are IP addresses, the only relevant part of the
# definition of the domain-match algorithm is the direct string-compare.
# A does not have form NB, or N is the empty string
# equal IP addresses
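A hedged sketch of the domain-match rules described above (exact string compare for equal hosts, otherwise B must start with a dot and the prefix N in A == NB must be non-empty and dot-free); the helper name is assumed:

```python
def domain_match(a, b):
    # Normalise case first, as RFC 2965 section 3.3.3 requires.
    a, b = a.lower(), b.lower()
    if a == b:
        return True          # covers equal host names and IP addresses
    if not b.startswith("."):
        return False
    if not a.endswith(b):
        return False
    # A has form NB; N must be non-empty and contain no dot.
    prefix = a[:-len(b)]
    return bool(prefix) and "." not in prefix
```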
# remove port, if present
# fix bad RFC 2396 absoluteURI
# Characters in addition to A-Z, a-z, 0-9, '_', '.', and '-' that don't
# need to be escaped to form a valid HTTP URL (RFCs 2396 and 1738).
# There's no knowing what character encoding was used to create URLs
# containing %-escapes, but since we have to pick one to escape invalid
# path characters, we pick UTF-8, as recommended in the HTML 4.0
# specification:
# http://www.w3.org/TR/REC-html40/appendix/notes.html#h-B.2.1
# And here, kind of: draft-fielding-uri-rfc2396bis-03
# (And in draft IRI specification: draft-duerst-iri-05)
# (And here, for new URI schemes: RFC 2718)
#a = h[:i]  # this line is only here to show what a is
# normalise case, as per RFC 2965 section 3.3.3
# Sigh.  We need to know whether the domain given in the
# cookie-attribute had an initial dot, in order to follow RFC 2965
# (as clarified in draft errata).  Needed for the returned $Domain
# Version is always set to 0 by parse_ns_headers if it's a Netscape
# cookie, so this must be an invalid RFC 2965 cookie.
# Try and stop servers setting V0 cookies designed to hack other
# servers that know both V0 and V1 protocols.
# XXX This should probably be compared with the Konqueror
# (kcookiejar.cpp) and Mozilla implementations, but it's a
# losing battle.
# domain like .foo.bar
# domain like .co.uk
# Path has already been checked by .path_return_ok(), and domain
# blocking done by .domain_return_ok().
# strict check of non-domain cookies: Mozilla does this, MSIE5 doesn't
# Liberal check of the domain.  This is here as an optimization to avoid
# having to load lots of MSIE cookie files unless necessary.
#_debug("   request domain %s does not match cookie domain %s",
# Used as second parameter to dict.get() method, to distinguish absent
# dict key from one with a None value.
# add cookies in order of most specific (ie. longest) path first
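The ordering rule above amounts to a sort by descending path length; a minimal sketch with an assumed dict-based cookie representation:

```python
def by_path_specificity(cookies):
    # Most specific (longest) path first.
    return sorted(cookies, key=lambda c: len(c["path"]), reverse=True)
```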
# set version of Cookie header
# What should it be if multiple matching Set-Cookie headers have
# different versions themselves?
# Answer: there is no answer; was supposed to be settled by
# RFC 2965 errata, but that may never appear...
# quote cookie value if necessary
# (not for Netscape protocol, which already has any quotes
# intact, due to the poorly-specified Netscape Cookie: syntax)
# add cookie-attributes to be returned in Cookie header
# if necessary, advertise that we know RFC 2965
# Build dictionary of standard cookie-attributes (standard) and
# dictionary of other cookie-attributes (rest).
# Note: expiry time is normalised to seconds since epoch.  V0
# cookies should have the Expires cookie-attribute, and V1 cookies
# should have Max-Age, but since V1 includes RFC 2109 cookies (and
# since V0 cookies may be a mish-mash of Netscape and RFC 2109), we
# accept either (but prefer Max-Age).
# don't lose case distinction for unknown fields
# boolean cookie-attribute is present, but has no value
# (like "discard", rather than "port=80")
# only first value is significant
# RFC 2965 section 3.3.3
# Prefer max-age to expires (like Mozilla)
# convert RFC 2965 Max-Age to seconds since epoch
# XXX Strictly you're supposed to follow the RFC 2616 age-calculation
# rules when converting Max-Age to an absolute time.
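The preference described above (Max-Age wins over Expires, and is relative to the time the response was received) can be sketched as follows; the function and parameter names are illustrative:

```python
import time

def expiry_epoch(max_age=None, expires_epoch=None, now=None):
    # Prefer Max-Age (like Mozilla): it is relative to receipt time,
    # so convert it to absolute seconds since epoch.
    now = time.time() if now is None else now
    if max_age is not None:
        return now + int(max_age)
    # Fall back to an already-normalised Expires value; None means a
    # session cookie.
    return expires_epoch
```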
# standard is dict of standard cookie-attributes, rest is dict of the
# rest of them
# set the easy defaults
# invalid version, ignore cookie
# (discard is also set if expires is Absent)
# set default path
# Netscape spec parts company from reality here
# set default domain
# but first we have to remember whether it starts with a dot
# set default port
# Port attr present, but has no value: default to request port.
# Cookie should then only be sent back on that port.
# No port attr present.  Cookie can be sent back on any port.
# set default expires and discard
# Expiry date in past is request to delete cookie.  This can't be
# in DefaultCookiePolicy, because can't delete cookies there.
# treat 2109 cookies as Netscape cookies rather than
# as RFC2965 cookies
# get cookie-attributes for RFC 2965 and Netscape protocols
# no relevant cookie headers: quick exit
# RFC 2109 and Netscape cookies
# Look for Netscape cookies (from Set-Cookie headers) that match
# corresponding RFC 2965 cookies (from Set-Cookie2 headers).
# For each match, keep the RFC 2965 cookie and ignore the Netscape
# cookie (RFC 2965 section 9.1).  Actually, RFC 2109 cookies are
# bundled in with the Netscape cookies for this purpose, which is
# reasonable behaviour.
# derives from OSError for backwards-compatibility with Python 2.4.0
# There really isn't an LWP Cookies 2.0 format, but this indicates
# that there is extra information in here (domain_dot and
# port_spec) while still being compatible with libwww-perl, I hope.
# httponly is a cookie flag as defined in rfc6265
# when encoded in a netscape cookie file,
# the line is prepended with "#HttpOnly_"
# last field may be absent, so keep any trailing tab
# skip comments and blank lines XXX what is $ for?
# cookies.txt regards 'Set-Cookie: foo' as a cookie
# with no name, whereas http.cookiejar regards it as a
# cookie with no value.
# assume path_specified is false
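Reading one line of a Netscape cookies.txt file, honoring the "#HttpOnly_" prefix noted above, might look like this. The tab-separated field layout (domain, domain-specified flag, path, secure, expires, name, value) is the conventional cookies.txt order; the function name and dict shape are assumptions for the sketch:

```python
def parse_ns_line(line):
    # rfc6265 httponly flag: encoded by prepending "#HttpOnly_".
    httponly = line.startswith("#HttpOnly_")
    if httponly:
        line = line[len("#HttpOnly_"):]
    fields = line.rstrip("\n").split("\t")
    if len(fields) == 6:
        # Last field (the value) may be absent; treat as a cookie
        # with a name but no value.
        fields.append("")
    domain, dom_spec, path, secure, expires, name, value = fields
    return {"domain": domain, "path": path, "secure": secure == "TRUE",
            "expires": int(expires), "name": name, "value": value,
            "httponly": httponly}
```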
# Copyright (C) 2002-2007 Python Software Foundation
# Author: Ben Gertzfield
# Contact: email-sig@python.org
# See also Charset.py
# 4 bytes out for each 3 bytes (or nonzero fraction thereof) in.
# BAW: should encode() inherit b2a_base64()'s dubious behavior in
# adding a newline to the encoded string?
# For convenience and backwards compatibility w/ standard base64 module
# Author: Ben Gertzfield, Barry Warsaw
# Match encoded-word strings in the form =?charset?q?Hello_World?=
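A simplified sketch of such a pattern (not the module's exact regexp) for RFC 2047 encoded words like `=?utf-8?q?Hello_World?=`:

```python
import re

ecre = re.compile(r"""
    =\? (?P<charset>[^?]*?)    # charset, up to the next '?'
    \?  (?P<encoding>[qQbB])   # q(uoted-printable) or b(ase64)
    \?  (?P<encoded>.*?)       # the encoded text itself
    \?=                        # closing delimiter
""", re.VERBOSE)

m = ecre.search("Subject: =?utf-8?q?Hello_World?=")
```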
# Field name regexp, including trailing colon, but not separating whitespace,
# according to RFC 2822.  Character range is from tilde to exclamation mark.
# For use with .match()
# Find a header embedded in a putative header value.  Used to check for
# header injection attack.
# If it is a Header object, we can just return the encoded chunks.
# If no encoding, just return the header with no charset.
# First step is to parse all the encoded parts into triplets of the form
# (encoded_string, encoding, charset).  For unencoded strings, the last
# two parts will be None.
# Now loop over words and remove words that consist of whitespace
# between two encoded strings.
# The next step is to decode each encoded word by applying the reverse
# base64 or quopri transformation.  decoded_words is now a list of the
# form (decoded_word, charset).
# This is an unencoded word.
# Postel's law: add missing padding
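The lenient decode alluded to above pads base64 input out to a multiple of four before decoding; a minimal sketch with an assumed name:

```python
import base64

def lenient_b64decode(s):
    # Postel's law: accept input whose trailing '=' padding was lost.
    pad = -len(s) % 4
    return base64.b64decode(s + "=" * pad)
```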
# Now convert all words to bytes and collapse consecutive runs of
# similarly encoded words.
# None means us-ascii but we can simply pass it on to h.append()
# Take the separating colon and space into account.
# We must preserve spaces between encoded and non-encoded word
# boundaries, which means for us we need to add a space when we go
# from a charset to None/us-ascii, or from None/us-ascii to a
# charset.  Only do this for the second and subsequent chunks.
# Don't add a space if the None/us-ascii string already has
# a space (trailing or leading depending on transition)
# Rich comparison operators for equality only.  BAW: does it make sense to
# have or explicitly disable <, <=, >, >= operators?
# other may be a Header or a string.  Both are fine so coerce
# ourselves to a unicode (of the unencoded header value), swap the
# args and do another comparison.
# Ensure that the bytes we're storing can be decoded to the output
# character set, otherwise an early error is raised.
# A maxlinelen of 0 means don't wrap.  For all practical purposes,
# choosing a huge number here accomplishes that and makes the
# _ValueFormatter algorithm much simpler.
# Step 1: Normalize the chunks so that all runs of identical charsets
# get collapsed into a single unicode string.
# If the charset has no header encoding (i.e. it is an ASCII encoding)
# then we must split the header at the "highest level syntactic break"
# possible. Note that we don't have a lot of smarts about field
# syntax; we just try to break on semi-colons, then commas, then
# whitespace.  Eventually, this should be pluggable.
# Otherwise, we're doing either a Base64 or a quoted-printable
# encoding which means we don't need to split the line on syntactic
# breaks.  We can basically just find enough characters to fit on the
# current line, minus the RFC 2047 chrome.  What makes this trickier
# though is that we have to split at octet boundaries, not character
# boundaries but it's only safe to split at character boundaries so at
# best we can only get close.
# The first element extends the current line, but if it's None then
# nothing more fit on the current line so start a new line.
# There are no encoded lines, so we're done.
# There was only one line.
# Everything else are full lines in themselves.
# The first line's length.
# The RFC 2822 header folding algorithm is simple in principle but
# complex in practice.  Lines may be folded any place where "folding
# white space" appears by inserting a linesep character in front of the
# FWS.  The complication is that not all spaces or tabs qualify as FWS,
# and we are also supposed to prefer to break at "higher level
# syntactic breaks".  We can't do either of these without intimate
# knowledge of the structure of structured headers, which we don't have
# here.  So the best we can do here is prefer to break at the specified
# splitchars, and hope that we don't choose any spaces or tabs that
# aren't legal FWS.  (This is at least better than the old algorithm,
# where we would sometimes *introduce* FWS after a splitchar, or the
# algorithm before that, where we would turn all white space runs into
# single spaces or tabs.)
# Find the best split point, working backward from the end.
# There might be none, on a long first line.
# There will be a header, so leave it on a line by itself.
# We don't use continuation_ws here because the whitespace
# after a header should always be a space.
# Copyright (C) 2001-2007 Python Software Foundation
# Author: Barry Warsaw
# Some convenience routines.  Don't import Parser and Message as side-effects
# of importing email since those cascadingly import most of the rest of the
# email package.
# This clause with its potential 'raise' may only happen when an
# application program creates an Address object using an addr_spec
# keyword.  The email library code itself must always supply username
# and domain.
# Header Classes #
# At some point we need to put fws here if it was in the source.
# This is used only for folding, not for creating 'decoded'.
# We are translating here from the RFC language (address/mailbox)
# to our API language (group/address).
# Assume it is Address/Group stuff
# Mixin that handles the params dict.  Must be subclassed and
# a property value_parser for the specific header provided.
# The MIME RFCs specify that parameter ordering is arbitrary.
# The header factory #
# Ensure that each new instance gets a unique header factory
# (as opposed to clones, which share the factory).
# The logic of the next three methods is chosen such that it is possible to
# switch a Message object between a Compat32 policy and a policy derived
# from this class and have the results stay consistent.  This allows a
# Message object constructed with this policy to be passed to a library
# that only handles Compat32 objects, or to receive such an object and
# convert it to use the newer style by just changing its policy.  It is
# also chosen because it postpones the relatively expensive full rfc5322
# parse until as late as possible when parsing from source, since in many
# applications only a few headers will actually be inspected.
# XXX this error message isn't quite right when we use splitlines
# (see issue 22233), but I'm not sure what should happen here.
# We can't use splitlines here because it splits on more than \r and \n.
# Make the default policy use the class default header_factory
# Intrapackage imports
# Regular expression that matches `special' characters in parameters, the
# existence of which force quoting of the parameter value.
# Split header parameters.  BAW: this may be too simple.  It isn't
# strictly RFC 2045 (section 5.1) compliant, but it catches most headers
# found in the wild.  We may eventually need a full fledged parser.
# RDM: we might have a Header here; for now just stringify it.
# A tuple is used for RFC 2231 encoded parameter values where items
# are (charset, language, value).  charset is a string, not a Charset
# instance.  RFC 2231 encoded values are never quoted, per RFC.
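Handling the `(charset, language, value)` triple described above might be sketched like this (the function name is illustrative; plain string values pass through unchanged):

```python
from urllib.parse import unquote

def collapse_param(value):
    if isinstance(value, tuple):
        # RFC 2231 triple: percent-decode the value in the stated
        # charset (us-ascii when the charset part is empty).
        charset, _lang, text = value
        return unquote(text, encoding=charset or "us-ascii")
    return value
```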
# Encode as per RFC 2231
# BAW: Please check this.  I think that if quote is set it should
# force quoting even if not necessary.
# RDM This might be a Header, so for now stringify it.
# This is different than utils.collapse_rfc2231_value() because it doesn't
# try to convert the value to a unicode.  Message.get_param() and
# Message.get_params() are both currently defined to return the tuple in
# the face of RFC 2231 parameters.
# Defaults for multipart messages
# Default content type
# Unix From_ line
# Payload manipulation.
# Here is the logic table for this code, based on the email5.0.0 code:
#   i     decode  is_multipart  result
# ------  ------  ------------  ------------------------------
# Note that Barry planned to factor out the 'decode' case, but that
# isn't so easy now that we handle the 8 bit data, which needs to be
# converted in both the decode and non-decode path.
# For backward compatibility, Use isinstance and this error message
# instead of the more logical is_multipart test.
# cte might be a Header, so for now stringify it.
# payload may be bytes here.
# This won't happen for RFC compliant messages (messages
# containing only ASCII code points in the unicode input).
# If it does happen, turn the string into bytes in a way
# guaranteed not to fail.
# XXX: this is a bit of a hack; decode_b should probably be factored
# out somewhere, but I haven't figured out where yet.
# Some decoding problem.
# This 'if' is for backward compatibility, it allows unicode
# through even though that won't work correctly if the
# message is serialized.
# MAPPING INTERFACE (partial)
# "Internal" methods (public API, but only intended for use by a parser
# or generator, not normal application code.
# Additional useful stuff
# Use these three methods instead of the three above.
# This should have no parameters
# RFC 2045, section 5.2 says if it's invalid, use text/plain
# Like get_params() but preserves the quoting of values.  BAW:
# should this be part of the public interface?
# Must have been a bare attribute
# BAW: should we be strict?
# Set the Content-Type, you get a MIME-Version
# Skip the first param; it's the old type.
# RFC 2046 says that boundaries may begin but not end in w/s
# There was no Content-Type header, and we don't know what type
# to set it to, so raise an exception.
# The original Content-Type header had no boundary attribute.
# Tack one on the end.  BAW: should we raise an exception
# instead???
# Replace the existing Content-Type header with the new value
# RFC 2231 encoded, so decode it, and it better end up as ascii.
# LookupError will be raised if the charset isn't known to
# Python.  UnicodeError will be raised if the encoded text
# contains a character not in the charset.
# charset characters must be in us-ascii range
# RFC 2046, $4.1.2 says charsets are not case sensitive
# I.e. def walk(self): ...
# Certain malformed messages can have content type set to `multipart/*`
# but still have single part body, in which case payload.copy() can
# fail with AttributeError.
# payload is not a list, it is most probably a string.
# For related, we treat everything but the root as an attachment.
# The root may be indicated by 'start'; if there's no start or we
# can't find the named start, treat the first subpart as the root.
# Otherwise we more or less invert the remaining logic in get_body.
# This only really works in edge cases (ex: non-text related or
# alternatives) if the sending agent sets content-disposition.
# Only skip the first example of each candidate type.
# There is existing content, move it to the first subpart.
# For urllib.parse.unquote
# Useful constants and functions
# '.', '"', and '(' do not end phrases in order to support obs-phrase
# Match a RFC 2047 word, looks like =?utf-8?q?someword?=
# TokenList and its subclasses
# Strip whitespace from front, back, and around dots.
# Because the first token, the attribute (name) eats CFWS, the second
# token is always the section if there is one.
# This is part of the "handle quoted extended parameters" hack.
# The RFC specifically states that the ordering of parameters is not
# guaranteed and may be reordered by the transport layer.  So we have
# to assume the RFC 2231 pieces can come in any order.  However, we
# output them in the order that we first see a given name, which gives
# us a stable __str__.
# Using order preserving dict from Python 3.7+
# Our arbitrary error recovery is to ignore duplicate parameters,
# to use appearance order if there are duplicate rfc 2231 parts,
# and to ignore gaps.  This mimics the error recovery of get_param.
# Else assume the *0* was missing...note that this is different
# from get_param, but we registered a defect for this earlier.
# We could get fancier here and look for a complete
# duplicate extended parameter and ignore the second one
# seen.  But we're not doing that.  The old code didn't.
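The error-recovery policy above (ignore duplicate sections, join the rest in numeric order, ignore gaps) can be sketched as:

```python
def join_sections(parts):
    """parts: iterable of (section_number, text) pairs, in appearance
    order.  Keeps the first value seen for a duplicate section number."""
    seen = {}
    for num, text in parts:
        seen.setdefault(num, text)   # ignore duplicate rfc 2231 parts
    # Join in numeric order, silently skipping any gaps.
    return "".join(seen[n] for n in sorted(seen))
```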
# source had surrogate escaped bytes.  What we do now
# is a bit of an open question.  I'm not sure this is
# the best choice, but it is what the old algorithm did
# XXX: there should really be a custom defect for
# unknown character set to make it easy to find,
# because otherwise unknown charset is a silent
# failure.
# Set this false so that the value doesn't wind up on a new line even
# if it and the parameters would fit there but not on the first line.
# message-id tokens may not be folded.
# Terminal classes and instances
# This terminates the recursion.
# XXX these need to become classes and used as instances so
# that a program can't change them in a parse tree and screw
# up other parse trees.  Maybe should have  tests for that, too.
# Parse strings according to RFC822/2047/2822/5322 rules.
# This is a stateless parser.  Each get_XXX function accepts a string and
# returns either a Terminal or a TokenList representing the RFC object named
# by the method and a string containing the remaining unparsed characters
# from the input.  Thus a parser method consumes the next syntactic construct
# of a given type and returns a token representing the construct plus the
# unparsed remainder of the input string.
# For example, if the first element of a structured header is a 'phrase',
# returns the complete phrase from the start of the string value, plus any
# characters left in the string after the phrase is removed.
# The ? after the CTE was followed by an encoded word escape (=XX).
# Encoded words should be followed by a WS
# XXX: but what about bare CR and LF?  They might signal the start or
# end of an encoded word.  YAGNI for now, since our current parsers
# will never send us strings with bare CR or LF.
# XXX: Need to figure out how to register defects when
# appropriate here.
# Split in the middle of an atom if there is a rfc2047 encoded word
# which does not have WSP on both sides. The defect will be registered
# the next time through the loop.
# This needs to only be performed when the encoded word is valid;
# otherwise, performing it on an invalid encoded word can cause
# the parser to go in an infinite loop.
# Collapse the whitespace between two encoded words that occur in a
# bare-quoted-string.
# XXX: need to figure out how to register defects when
# Although it is not legal per RFC5322, SMTP uses '<>' in certain
# circumstances.
# Both the optional display name and the angle-addr can start with cfws.
# The only way to figure out if we are dealing with a name-addr or an
# addr-spec is to try parsing each one.
# Crap after mailbox; treat it as an invalid mailbox.
# The mailbox info will still be available.
# This should never happen in email parsing, since CFWS-only is a
# legal alternative to group-list in a group, which is the only
# place group-list appears.
# The formal grammar isn't very helpful when parsing an address.  mailbox
# and group, especially when allowing for obsolete forms, start off very
# similarly.  It is only when you reach one of @, <, or : that you know
# what you've got.  So, we try each one in turn, starting with the more
# likely of the two.  We could perhaps make this more efficient by looking
# for a phrase and then branching based on the next character, but that
# would be a premature optimization.
# Crap after address; treat it as an invalid mailbox.
# Must be a , at this point.
# Parse id-left.
# obs-id-left is same as local-part of addr-spec.
# Even though there is no id-right, if the local part
# ends with `>` let's just parse it too and return
# along with the defect.
# Parse id-right.
# Value after parsing a valid msg_id should be None.
# XXX: As I begin to add additional header parsers, I'm realizing we probably
# have two level of parser routines: the get_XXX methods that get a token in
# the grammar, and parse_XXX methods that parse an entire field value.  So
# get_address_list above should really be a parse_ method, as probably should
# be get_unstructured.
# The [CFWS] is implicit in the RFC 2045 BNF.
# XXX: This routine is a bit verbose, should factor out a get_int method.
# XXX: should we have an ExtendedAttribute TokenList?
# It is possible CFWS would also be implicitly allowed between the section
# and the 'extended-attribute' marker (the '*'), but we've never seen that
# in the wild and we will therefore ignore the possibility.
# Now for some serious hackery to handle the common invalid case of
# double quotes around an extended value.  We also accept (with defect)
# a value marked as encoded that isn't really.
# Assume the charset/lang is missing and the token is the value.
# Treat the rest of value as bare quoted string content.
# Junk after the otherwise valid parameter.  Mark it as
# invalid, but it will have a value.
# Must be a ';' at this point.
# XXX: If we really want to follow the formal grammar we should make
# mantype and subtype specialized TokenLists here.  Probably not worth it.
# The RFC requires that a syntactically invalid content-type be treated
# as text/plain.  Perhaps we should postel this, but we should probably
# only do that if we were checking the subtype value against IANA.
# We should probably validate the values, since the list is fixed.
# Header folding
# Header folding is complex, with lots of rules and corner cases.  The
# following code does its best to obey the rules and handle the corner
# cases, but you can be sure there are a few bugs :)
# This folder generally canonicalizes as it goes, preferring the stringified
# version of each token.  The tokens contain information that supports the
# folder, including which tokens can be encoded in which ways.
# Folded text is accumulated in a simple list of strings ('lines'), each
# one of which should be less than policy.max_line_length ('maxlen').
# max_line_length 0/None means no limit, ie: infinitely long.
# Folded lines to be output
# When we have whitespace between two encoded
# words, we may need to encode the whitespace
# at the beginning of the second word.
# Points to the last encoded character if there's an ew on
# the line
# This is set to True if we need to encode this part
# Encode if tstr contains special characters.
# Encode if tstr contains newlines.
# If policy.utf8 is false this should really be taken from a
# 'charset' property on the policy.
# Mime parameter folding (using RFC2231) is extra special.
# It fits on a single line
# But not on this one, so start a new one.
# XXX what if encoded_part has no leading FWS?
# Either this is not a major syntactic break, so we don't
# want it on a line by itself even if it fits, or it
# doesn't fit on a line by itself.  Either way, fall through
# to unpacking the subparts and wrapping them.
# It's not a Terminal, do each piece individually.
# It's a terminal, wrap it as an encoded word, possibly
# combining it with previously encoded words if allowed.
# This whitespace has been added to the lines in _fold_as_ew()
# so clear it now.
# It's a terminal which should be kept non-encoded
# (e.g. a ListSeparator).
# This part is too long to fit.  The RFC wants us to break at
# "major syntactic breaks", so unless we don't consider this
# to be one, check if it will fit on the next line by itself.
# We're going to fold the data onto a new line here.  Due to
# the way encoded strings handle continuation lines, we need to
# be prepared to encode any whitespace if the next line turns
# out to start with an encoded word.
# It's not a terminal, try folding the subparts.
# To fold a quoted string we need to create a list of terminal
# tokens that will render the leading and trailing quotes
# and use quoted pairs in the value as appropriate.
# It doesn't need CTE encoding, but encode it anyway so we can
# wrap it.
# We can't figure out how to wrap it, so give up.
# We can't fold it onto the next line either...
# We're joining this to non-encoded text, so don't encode
# the leading blank.
# Likewise for the trailing space.
# The RFC2047 chrome takes up 7 characters plus the length
# of the charset name.
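The overhead arithmetic above counts the seven fixed characters of `=?...?X?...?=` (two `=?`/`?=` pairs, two inner `?`s, and the one-letter encoding) plus the charset name; a trivial sketch with an assumed helper name:

```python
def ew_overhead(charset):
    # '=?' + charset + '?' + encoding letter + '?' + payload + '?='
    # contributes 7 chrome characters plus the charset name's length.
    return 7 + len(charset)
```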
# If we are at the start of a continuation line, prepend whitespace
# (we only want to do this when the line starts with an encoded word
# but if we're folding in this helper function, then we know that we
# are going to be writing out an encoded word.)
# Since the chunk to encode is guaranteed to fit into less than 100 characters,
# shrinking it by one at a time shouldn't take long.
# Special case for RFC2231 encoding: start from decoded values and use
# RFC2231 encoding iff needed.
# Note that the 1 and 2s being added to the length calculations are
# accounting for the possibly-needed spaces and semicolons we'll be adding.
# XXX What if this ';' puts us over maxlen the first time through the
# loop?  We should split the header value onto a newline in that case,
# but to do that we need to recognize the need earlier or reparse the
# header, so I'm going to ignore that bug for now.  It'll only put us
# one character over.
# We need multiple sections.  We are allowed to mix encoded and
# non-encoded sections, but we aren't going to.  We'll encode them all.
# We need room for the leading blank, the trailing semicolon,
# and at least one character of the value.  If we don't
# have that, we'd be stuck, so in that case fall back to
# the RFC standard width.
# Author: Barry Warsaw, Thomas Wouters, Anthony Baxter
# If the header value contains surrogates, return a Header using
# the unknown-8bit charset to encode the bytes as encoded words.
# Assume it is already a header object
# If we have raw 8bit data in a byte string, we have no idea
# what the encoding is.  There is no safe way to split this
# string.  If it's ascii-subset, then we could do a normal
# ascii split, but if it's multibyte then we could break the
# string.  There's no way to know so the least harm seems to
# be to not split the string and risk it being too long.
# Assume it is a Header-like object.
# The Header class interprets a value of None for maxlinelen as the
# default value of 78, as recommended by RFC 2822.
# Copyright (C) 2004-2006 Python Software Foundation
# Authors: Baxter, Wouters and Warsaw
# RFC 2822 $3.6.8 Optional fields.  ftext is %d33-57 / %d59-126: any
# character except controls, SP, and ":".
# Text stream of the last partial line pushed into this object.
# See issue 22233 for why this is a text stream and not a list.
# A deque of full, pushed lines
# The stack of false-EOF checking predicates.
# A flag indicating whether the file has been closed or not.
# Don't forget any trailing partial line.
# Pop the line off the stack and see if it matches the current
# false-EOF predicate.
# RFC 2046, section 5.1.2 requires us to recognize outer level
# boundaries at any level of inner nesting.  Do this, but be sure it's
# in the order of most to least nested.
# We're at the false EOF.  But push the last line back first.
# Let the consumer push a line back into the buffer.
# No new complete lines, wait for more.
# Crack into lines, preserving the linesep characters.
# If the last element of the list does not end in a newline, then treat
# it as a partial line.  We only check for '\n' here because a line
# ending with '\r' might be a line that was split in the middle of a
# '\r\n' sequence (see bugs 1555570 and 1721862).
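The line-cracking step described above can be sketched with a capturing split that keeps each separator attached, treating a trailing piece without `\n` as a partial line (function name assumed):

```python
import re

def crack_lines(data):
    # re.split with a capturing group alternates text and separators.
    parts = re.split(r"(\r\n|\r|\n)", data)
    lines = [parts[i] + parts[i + 1] for i in range(0, len(parts) - 1, 2)]
    # Whatever follows the last separator is a partial line; it may be
    # the first half of a '\r\n' split across reads.
    partial = parts[-1]
    return lines, partial
```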
# Assume this is an old-style factory
# Non-public interface for supporting Parser's headersonly flag
# Look for final set of defects
# Create a new message and start by parsing headers.
# Collect the headers, searching for a line that doesn't match the RFC
# 2822 header or continuation pattern (including an empty line).
# If we saw the RFC defined header/body separator
# (i.e. newline), just throw it away. Otherwise the line is
# part of the body so push it back.
# Done with the headers, so parse them and figure out what we're
# supposed to see in the body of the message.
# Headers-only parsing is a backwards compatibility hack, which was
# necessary in the older parser, which could raise errors.  All
# remaining lines in the input are thrown into the message body.
# message/delivery-status contains blocks of headers separated by
# a blank line.  We'll represent each header block as a separate
# nested message object, but the processing is a bit different
# than standard message/* types because there is no body for the
# nested messages.  A blank line separates the subparts.
# We need to pop the EOF matcher in order to tell if we're at
# the end of the current file, not the end of the last block
# of message headers.
# The input stream must be sitting at the newline or at the
# EOF.  We want to see if we're at the end of this subpart, so
# first consume the blank line, then test the next line to see
# if we're at this subpart's EOF.
# Not at EOF so this is a line we're going to need.
# If the message claims to be a message/* type, then what follows is
# another RFC 2822 message.
# The message /claims/ to be a multipart but it has not
# defined a boundary.  That's a problem which we'll handle by
# reading everything until the EOF and marking the message as
# defective.
# Make sure a valid content type was specified per RFC 2045:6.4.
# Create a line match predicate which matches the inter-part
# boundary as well as the end-of-multipart boundary.  Don't push
# this onto the input stream until we've scanned past the
# preamble.
# If we're looking at the end boundary, we're done with
# this multipart.  If there was a newline at the end of
# the closing boundary, then we need to initialize the
# epilogue with the empty string (see below).
# We saw an inter-part boundary.  Were we in the preamble?
# According to RFC 2046, the last newline belongs
# to the boundary.
# We saw a boundary separating two parts.  Consume any
# multiple boundary lines that may be following.  Our
# interpretation of RFC 2046 BNF grammar does not produce
# body parts within such double boundaries.
# Recurse to parse this subpart; the input stream points
# at the subpart's first line.
# Because of RFC 2046, the newline preceding the boundary
# separator actually belongs to the boundary, not the
# previous subpart's payload (or epilogue if the previous
# part is a multipart).
# Set the multipart up for newline cleansing, which will
# happen if we're in a nested multipart.
# I think we must be in the preamble
# We've seen either the EOF or the end boundary.  If we're still
# capturing the preamble, we never saw the start boundary.  Note
# that as a defect and store the captured text as the payload.
# If we're not processing the preamble, then we might have seen
# EOF without seeing that end boundary...that is also a defect.
# Everything from here to the EOF is epilogue.  If the end boundary
# ended in a newline, we'll need to make sure the epilogue isn't None.
# Any CRLF at the front of the epilogue is not technically part of
# the epilogue.  Also, watch out for an empty string epilogue,
# which means a single newline.
# Otherwise, it's some non-multipart type, so the entire rest of the
# file contents becomes the payload.
# Passed a list of lines that make up the headers for the current msg
# Check for continuation
# The first line of the headers was a continuation.  This
# is illegal, so let's note the defect, store the illegal
# line, and ignore it for purposes of headers.
# Check for envelope header, i.e. unix-from
# Strip off the trailing newline
# Something looking like a unix-from at the end - it's
# probably the first line of the body, so push back the
# line and stop.
# Weirdly placed unix-from line.  Note this as a defect
# and ignore it.
# Split the line on the colon separating field name from value.
# There will always be a colon, because if there wasn't the part of
# the parser that calls us would have started parsing the body.
# If the colon is at the start of the line, the header is clearly
# malformed, but we might be able to salvage the rest of the
# message. Track the error but keep going.
# Done with all the lines, so handle the last header.
# Copyright (C) 2001-2006 Python Software Foundation
# Must encode spaces, which quopri.encodestring() doesn't do
# There's no payload.  For backwards compatibility we use 7bit
# We play a trick to make this go fast.  If decoding from ASCII succeeds,
# we know the data must be 7bit, otherwise treat it as 8bit.
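# The trick reads roughly like this (a minimal sketch of the idea, with a
# hypothetical helper name, not the exact stdlib code):

```python
def guess_body_cte(data: bytes) -> str:
    # If the bytes decode as ASCII they are 7bit clean; otherwise
    # treat them as 8bit.
    try:
        data.decode('ascii')
    except UnicodeDecodeError:
        return '8bit'
    return '7bit'
```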
# Copyright (C) 2001-2010 Python Software Foundation
# XXX: no longer used by the code below.
# Just delegate to the file object
# We use the _XXX constants for operating on data that comes directly
# from the msg, and _encoded_XXX constants for operating on data that
# has already been converted (to bytes in the BytesGenerator) and
# inserted into a temporary buffer.
# Because we use clone (below) when we recursively process message
# subparts, and because clone uses the computed policy (not None),
# submessages will automatically get set to the computed policy when
# they are processed by this code.
# Use policy setting, which we've adjusted
# Protected interface - undocumented ;/
# Note that we use 'self.write' when what we are writing is coming from
# the source, and self._fp.write when what we are writing is coming from a
# buffer (because the Bytes subclass has already had a chance to transform
# the data in its write method in that case).  This is an entirely
# pragmatic split determined by experiment; we could be more general by
# always using write and having the Bytes subclass write method detect when
# it has already transformed the input; but, since this whole thing is a
# hack anyway this seems good enough.
# BytesGenerator overrides this to return BytesIO.
# BytesGenerator overrides this to encode strings to bytes.
# We have to transform the line endings.
# XXX logic tells me this else should be needed, but the tests fail
# with it and pass without it.  (NLCRE.split ends with a blank element
# if and only if there was a trailing newline.)
# We can't write the headers yet because of the following scenario:
# say a multipart message includes the boundary string somewhere in
# its body.  We'd have to calculate the new boundary /before/ we write
# the headers so that we can write the correct Content-Type:
# parameter.
# The way we do this, so as to make the _handle_*() methods simpler,
# is to cache any subpart writes into a buffer.  Then we write the
# headers and the buffer contents.  That way, subpart handlers can
# Do The Right Thing, and can still modify the Content-Type: header if
# necessary.
# If we munged the cte, copy the message again and re-fix the CTE.
# Preserve the header order if the CTE header already exists.
# Write the headers.  First we see if the message object wants to
# handle that itself.  If not, we'll do it generically.
# Get the Content-Type: for the message, then try to dispatch to
# self._handle_<maintype>_<subtype>().  If there's no handler for the
# full MIME type, then dispatch to self._handle_<maintype>().  If
# that's missing too, then dispatch to self._writeBody().
# Default handlers
# A blank line always separates headers from body
# Handlers for writing types and subtypes
# XXX: This copy stuff is an ugly hack to avoid modifying the
# existing message.
# Default body handler
# The trick here is to write out each part separately, merge them all
# together, and then make sure that the boundary we've chosen isn't
# present in the payload.
# e.g. a non-strict parse of a message with no starting boundary.
# Scalar payload
# BAW: What about boundaries that are wrapped in double-quotes?
# Create a boundary that doesn't appear in any of the
# message texts.
# If there's a preamble, write it out, with a trailing CRLF
# dash-boundary transport-padding CRLF
# body-part
# *encapsulation
# --> delimiter transport-padding
# --> CRLF body-part
# delimiter transport-padding CRLF
# close-delimiter transport-padding
# The contents of signed parts have to stay unmodified in order to keep
# the signature intact per RFC1847 2.1, so we disable header wrapping.
# RDM: This isn't enough to completely preserve the part, but it helps.
# We can't just write the headers directly to self's file object
# because this will leave an extra newline between the last header
# block and the boundary.  Sigh.
# Strip off the unnecessary trailing empty line
# Now join all the blocks with an empty line.  This has the lovely
# effect of separating each block with an empty line, but not adding
# an extra one after the last one.
# The payload of a message/rfc822 part should be a multipart sequence
# of length 1.  The zeroth element of the list should be the Message
# object for the subpart.  Extract that object, stringify it, and
# write it out.
# Except, it turns out, when it's a string instead, which happens when
# and only when HeaderParser is used on a message of mime type
# message/rfc822.  Such messages are generated by, for example,
# Groupwise when forwarding unadorned messages.  (Issue 7970.)  So
# in that case we just emit the string body.
# This used to be a module level function; we use a classmethod for this
# and _compile_re so we can continue to provide the module level function
# for backward compatibility by doing
# at the end of the module.  It *is* internal, so we could drop that...
# Craft a random boundary.  If text is given, ensure that the chosen
# boundary doesn't appear in the text.
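# A minimal sketch of that approach (function name hypothetical; the real
# implementation, Generator._make_boundary, uses an anchored regex rather
# than this simple substring check):

```python
import random

def make_boundary(text=None):
    # Craft a random candidate boundary.
    boundary = ('=' * 15) + ('%019d' % random.randrange(10 ** 19)) + '=='
    if text is None:
        return boundary
    # If text is given, append a counter until the boundary no longer
    # appears in the text as a delimiter.
    candidate = boundary
    counter = 0
    while ('--' + candidate) in text:
        candidate = boundary + '.' + str(counter)
        counter += 1
    return candidate
```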
# This is almost the same as the string version, except for handling
# strings with 8bit bytes.
# If the string has surrogates the original source was bytes, so
# just write it back out.
# Just skip this
# Helper used by Generator._make_boundary
# XXX: is this error a good idea or not?  We can remove it later,
# but we can't add it later, so do it for now.
# If we don't understand a message subtype, we are supposed to treat it as
# if it were application/octet-stream, per
# tools.ietf.org/html/rfc2046#section-5.2.4.  Feedparser doesn't do that,
# so do our best to fix things up.  Note that it is *not* appropriate to
# model message/partial content as Message objects, so they are handled
# here as well.  (How to reassemble them is out of scope for this comment :)
# XXX: This is a cleaned-up version of base64mime.body_encode (including a bug
# fix in the calculation of unencoded_bytes_per_line).  It would be nice to
# drop both this and quoprimime.body_encode in favor of enhanced binascii
# routines that accepted a max_line_length parameter.
# Use heuristics to decide on the "best" encoding.
# This is a little unfair to qp; it includes lineseps, base64 doesn't.
# http://tools.ietf.org/html/rfc2046#section-5.2.1 mandate.
# 8bit will get coerced on serialization if policy.cte_type='7bit'.  We
# may end up claiming 8bit when it isn't needed, but the only negative
# result of that should be a gateway that needs to coerce to 7bit
# having to look through the whole embedded message to discover whether
# or not it actually has to do anything.
# http://tools.ietf.org/html/rfc2046#section-5.2.3 mandate.
# http://tools.ietf.org/html/rfc2046#section-5.2.4 says all future
# subtypes should be restricted to 7bit, so assume that.
# XXX: quoprimime.body_encode won't encode newline characters in data,
# so we can't use it.  This means max_line_length is ignored.  Another
# bug to fix later.  (Note: encoders.quopri is broken on line ends.)
# Flags for types of header encodings
# Quoted-Printable
# Base64
# the shorter of QP and base64, but only for headers
# In "=?charset?q?hello_world?=", the =?, ?q?, and ?= add up to 7
# input        header enc  body enc output conv
# iso-8859-5 is Cyrillic, and not especially used
# iso-8859-6 is Arabic, also not particularly used
# iso-8859-7 is Greek, QP will not make it readable
# iso-8859-8 is Hebrew, QP will not make it readable
# iso-8859-11 is Thai, QP will not make it readable
# Aliases for other commonly-used names for character sets.  Map
# them to the real ones used in email.
# Map charsets to their Unicode codec strings.
# Hack: We don't want *any* conversion for stuff marked us-ascii, as all
# sorts of garbage might be sent to us in the guise of 7-bit us-ascii.
# Let that stuff pass through without conversion to/from Unicode.
# Convenience functions for extending the above mappings
# Convenience function for encoding strings, taking into account
# that they might be unknown-8bit (ie: have surrogate-escaped bytes)
# RFC 2046, §4.1.2 says charsets are not case sensitive.  We coerce to
# unicode because its .lower() is locale insensitive.  If the argument
# is already a unicode, we leave it at that, but ensure that the
# charset is ASCII, as the standard (RFC XXX) requires.
# Set the input charset after filtering through the aliases
# We can try to guess which encoding and conversion to use by the
# charset_map dictionary.  Try that first, but let the user override
# Set the attributes, allowing the arguments to override the default.
# Now set the codecs.  If one isn't defined for input_charset,
# guess and try a Unicode codec with the same name as input_codec.
# 7bit/8bit encodings return the string unchanged (modulo conversions)
# See which encoding we should use.
# Calculate the number of characters that the RFC 2047 chrome will
# contribute to each line.
# Now comes the hard part.  We must encode bytes but we can't split on
# bytes because some character sets are variable length and each
# encoded word must stand on its own.  So the problem is you have to
# encode to bytes to figure out this word's length, but you must split
# on characters.  This causes two problems: first, we don't know how
# many octets a specific substring of unicode characters will get
# encoded to, and second, we don't know how many ASCII characters
# those octets will get encoded to.  Unless we try it.  Which seems
# inefficient.  In the interest of being correct rather than fast (and
# in the hope that there will be few encoded headers in any such
# message), brute force it. :(
# This last character doesn't fit so pop it off.
# Does nothing fit on the first line?
# quoprimime.body_encode takes a string, but operates on it as if
# it were a list of byte codes.  For a (minimal) history on why
# this is so, see changeset 0cf700464177.  To correctly encode a
# character set, then, we must turn it into pseudo bytes via the
# latin1 charset, which will encode any byte as a single code point
# between 0 and 255, which is what body_encode is expecting.
# Parse a date field
# The timezone table does not include the military time zones defined
# in RFC822, other than Z.  According to RFC1123, the description in
# RFC822 gets the signs wrong, so we can't rely on any such time
# zones.  RFC1123 recommends that numeric timezone indicators be used
# instead of timezone names.
# Atlantic (used in Canada)
# Eastern
# Central
# Mountain
# Pacific
# This happens for whitespace-only input.
# The FWS after the comma after the day-of-week is optional, so search and
# adjust for this.
# There's a dayname here. Skip it
# RFC 850 date, deprecated
# Dummy tz
# Some non-compliant MUAs use '.' to separate time elements.
# Check for a yy specified in two-digit format, then convert it to the
# appropriate four-digit format, according to the POSIX standard. RFC 822
# calls for a two-digit yy, but RFC 2822 (which obsoletes RFC 822)
# mandates a 4-digit yy. For more information, see the documentation for
# the time module.
# The year is between 1969 and 1999 (inclusive).
# The year is between 2000 and 2068 (inclusive).
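# The POSIX rule described above, as a standalone sketch (helper name
# hypothetical; email.utils applies this inside parsedate_tz):

```python
def fix_two_digit_year(yy: int) -> int:
    # 69-99 map to 1969-1999; 0-68 map to 2000-2068 (POSIX rule).
    if 0 <= yy <= 68:
        return yy + 2000
    if 69 <= yy <= 99:
        return yy + 1900
    return yy  # already a four-digit year
```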
# Convert a timezone offset into seconds; -0500 -> -18000
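# For example, the conversion works like this (hypothetical helper;
# email.utils.parsedate_tz performs the equivalent internally):

```python
def tz_offset_to_seconds(tz: str) -> int:
    # '-0500' -> -(5*3600 + 0*60) == -18000
    sign = -1 if tz[0] == '-' else 1
    hours = int(tz[1:3])
    minutes = int(tz[3:5])
    return sign * (hours * 3600 + minutes * 60)
```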
# Daylight Saving Time flag is set to -1, since DST is unknown.
# No zone info, so localtime is better assumption than GMT
# Delay the import, since mktime_tz is rarely used
# Note that RFC 2822 now specifies `.' as obs-phrase, meaning that it
# is obsolete syntax.  RFC 2822 requires that we recognize obsolete
# syntax, so allow dots in phrases.
# Bad email address technically, no domain.
# email address is just an addrspec
# this isn't very efficient since we start over
# address is a group
# Address is a phrase then a route addr
# Invalid domain, return an empty address instead of returning a
# local part to denote failed parsing.
# bpo-34155: Don't parse domains with two `@` like
# `a@malicious.org@important.com`.
# have already advanced pos from getcomment
# Set union
# Set union, in-place
# Set difference
# Set difference, in-place
# Make indexing, slices, and 'in' work
# Build a mapping of octets to the expansion of that octet.  Since we're only
# going to have 256 of these things, this isn't terribly inefficient
# space-wise.  Remember that headers and bodies have different sets of safe
# characters.  Initialize both maps with the full expansion, and then override
# the safe bytes with the more compact form.
# Safe header bytes which need no encoding.
# Headers have one other special encoding; spaces become underscores.
# Safe body bytes which need no encoding.
# Return empty headers as an empty string.
# Iterate over every byte, encoding if necessary.
# Now add the RFC chrome to each encoded chunk and glue the chunks
# together.
# quote special characters
# leave space for the '=' at the end of a line
# break up the line into pieces no longer than maxlinelen - 1
# make sure we don't break up an escape sequence
# handle rest of line, special case if line ends in whitespace
# It's a whitespace character at end-of-line, and we have room
# for the three-character quoted encoding.
# There's room for the whitespace character and a soft break.
# There's room only for a soft break.  The quoted whitespace
# will be the only content on the subsequent line.
# add back final newline if present
# BAW: I'm not sure if the intent was for the signature of this function to be
# the same as base64MIME.decode() or not...
# BAW: see comment in encode() above.  Again, we're building up the
# decoded string with string concatenation, which could be done much more
# efficiently.
# Otherwise, c == "=".  Are we at the end of the line?  If so, add
# a soft line break.
# Decode if in form =AB
# Otherwise, not in form =AB, pass literally
# Special case if original string did not end with eol
# Header decoding is done a bit differently
# This check is based on the fact that unless there are surrogates, utf8
# (Python's default encoding) can encode any string.  This is the fastest
# way to check for surrogates, see bpo-11454 (moved to gh-55663) for timings.
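# The check amounts to the following (sketch; the stdlib keeps an
# equivalent private helper in email.utils):

```python
def has_surrogates(s: str) -> bool:
    # utf-8 can encode any str unless it contains surrogate code
    # points, so a failed encode implies surrogates are present.
    try:
        s.encode('utf-8')
        return False
    except UnicodeEncodeError:
        return True
```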
# How to deal with a string containing bytes before handing it to the
# application through the 'normal' interface.
# Turn any escaped bytes into unicode 'unknown' char.  If the escaped
# bytes happen to be utf-8 they will instead get decoded, even if they
# were invalid in the charset the source was supposed to be in.  This
# seems like it is not a bad thing; a defect was still registered.
# The address MUST (per RFC) be ascii, so raise a UnicodeError if it isn't.
# lazy import to improve module import time
# If strict is true, if the resulting list of parsed addresses is greater
# than the number of fieldvalues in the input list, a parsing error has
# occurred and consequently a list containing a single empty 2-tuple [('',
# '')] is returned in its place. This is done to avoid invalid output.
# Malformed input: getaddresses(['alice@example.com <bob@example.com>'])
# Invalid output: [('', 'alice@example.com'), ('', 'bob@example.com')]
# Safe output: [('', '')]
# Treat output as invalid if the number of addresses is not equal to the
# expected number of addresses.
# When a comma is used in the Real Name part it is not a delimiter.
# So strip those out before counting the commas.
# Expected number of addresses: 1 + number of commas
# Ignore parenthesis in quoted real names.
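# A sketch of that counting heuristic (helper name hypothetical; the real
# logic lives inside email.utils.getaddresses when strict parsing is on):

```python
import re

def expected_address_count(field: str) -> int:
    # Strip double-quoted real names so their commas (and parentheses)
    # don't inflate the count, then count the remaining commas.
    stripped = re.sub(r'"[^"]*"', '', field)
    return 1 + stripped.count(',')
```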
# The parser would have parsed a correctly formatted domain-literal
# The existence of an [ after parsing indicates a parsing failure
# Note: we cannot use strftime() because that honors the locale and RFC
# 2822 requires that day and month names be the English abbreviations.
# Lazy imports to speedup module import time
# (no other functions in email.utils need these modules)
# rfc822.unquote() doesn't properly de-backslash-ify in Python pre-2.3.
# RFC2231-related functions - parameter encoding and decoding
# Map parameter's name to a list of continuations.  The values are a
# 3-tuple of the continuation number, the string value, and a flag
# specifying whether a particular segment is %-encoded.
# Sort by number, treating None as 0 if there is no 0,
# and ignore it if there is already a 0.
# And now append all values in numerical order, converting
# %-encodings for the encoded segments.  If any of the
# continuation names ends in a *, then the entire string, after
# decoding segments and concatenating, must have the charset and
# language specifiers at the beginning of the string.
# Decode as "latin-1", so the characters in s directly
# represent the percent-encoded octet values.
# collapse_rfc2231_value treats this as an octet sequence.
# While value comes to us as a unicode string, we need it to be a bytes
# object.  We do not want bytes()'s normal utf-8 encoding; we want a straight
# interpretation of the string as character bytes.
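# That straight byte interpretation amounts to a latin-1 round-trip: each
# code point 0-255 maps to the byte of the same value (sketch, helper name
# hypothetical):

```python
import urllib.parse

def rfc2231_segment_to_bytes(segment: str) -> bytes:
    # Decode %-escapes as latin-1 so every octet survives unchanged,
    # then re-encode as latin-1 to recover the raw bytes.
    text = urllib.parse.unquote(segment, encoding='latin-1')
    return text.encode('latin-1')
```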
# Issue 17369: if charset/lang is None, decode_rfc2231 couldn't parse
# the value, so use the fallback_charset.
# charset is not a known codec.
# datetime doesn't provide a localtime function yet, so provide one.  Code
# adapted from the patch in issue 9527.  This may not be perfect, but it is
# better than not having it.
# An encoded word looks like this:
# for more information about charset see the charset module.  Here it is one
# of the preferred MIME charset names (hopefully; you never know when parsing).
# cte (Content Transfer Encoding) is either 'q' or 'b' (ignoring case).  In
# theory other letters could be used for other encodings, but in practice this
# (almost?) never happens.  There could be a public API for adding entries
# to the CTE tables, but YAGNI for now.  'q' is Quoted Printable, 'b' is
# Base64.  The meaning of encoded_string should be obvious.  'lang' is optional
# as indicated by the brackets (they are not part of the syntax) but is almost
# never encountered in practice.
# The general interface for a CTE decoder is that it takes the encoded_string
# as its argument, and returns a tuple (cte_decoded_string, defects).  The
# cte_decoded_string is the original binary that was encoded using the
# specified cte.  'defects' is a list of MessageDefect instances indicating any
# problems encountered during conversion.  'charset' and 'lang' are the
# corresponding strings extracted from the EW, case preserved.
# The general interface for a CTE encoder is that it takes a binary sequence
# as input and returns the cte_encoded_string, which is an ascii-only string.
# Each encoder must also supply a length function that takes the binary
# sequence as its argument and returns the length of the resulting encoded
# string.
# The main API functions for the module are decode, which calls the decoder
# referenced by the cte specifier, and encode, which adds the appropriate
# RFC 2047 "chrome" to the encoded string, and can optionally automatically
# select the shortest possible encoding.  See their docstrings below for
# details.
# Quoted Printable
# regex based decoder.
# dict mapping bytes to their encoded form
# In headers spaces are mapped to '_'.
# First try encoding with validate=True, fixing the padding if needed.
# This will succeed only if encoded includes no invalid characters.
# Since we had correct padding, this is likely an invalid char error.
# The non-alphabet characters are ignored as far as padding
# goes, but we don't know how many there are.  So try without adding
# padding to see if it works.
# Add as much padding as could possibly be necessary (extra padding
# is ignored).
# This only happens when the encoded string's length is 1 more
# than a multiple of 4, which is invalid.
# bpo-27397: Just return the encoded string since there's no
# way to decode.
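# The recovery strategy reads roughly like this (a simplified sketch of the
# idea behind email._encoded_words.decode_b, minus the defect objects):

```python
import base64
import binascii

def decode_b(encoded: bytes):
    """Return (decoded_bytes, had_errors); fall back to the raw input
    when the string cannot be decoded at all (cf. bpo-27397)."""
    pad_err = len(encoded) % 4
    missing_padding = b'==='[:4 - pad_err] if pad_err else b''
    try:
        # Succeeds only if there are no invalid characters.
        return (base64.b64decode(encoded + missing_padding, validate=True),
                bool(pad_err))
    except binascii.Error:
        # Padding was plausible, so this is likely an invalid-char error:
        # retry ignoring non-alphabet characters, then with extra padding.
        try:
            return base64.b64decode(encoded, validate=False), True
        except binascii.Error:
            try:
                return base64.b64decode(encoded + b'==', validate=False), True
            except binascii.Error:
                # len % 4 == 1 case: no way to decode, return as-is.
                return encoded, True
```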
# Recover the original bytes and do CTE decoding.
# Turn the CTE decoded bytes into unicode.
# Bias toward q.  5 is arbitrary.
# These are parsing defects which the parser was able to work around.
# XXX: backward compatibility, just in case (it was never emitted).
# These errors are specific to header parsing.
# This defect only occurs during unicode parsing, not when
# parsing messages decoded from binary.
# Do not include _structure() since it's part of the debugging API.
# This function will become a method of the Message class
# These two functions are imported into the Iterators.py interface module.
# Copyright (C) 2002-2006 Python Software Foundation
# Initialise _payload to an empty list as the Message superclass's
# implementation of is_multipart assumes that _payload is a list for
# multipart messages.
# Author: Keith Dart
# Originally from the imghdr module.
# It's convenient to use this base class method.  We need to do it
# this way or we'll get an exception
# And be sure our default type is set correctly
# If no _charset was specified, check to see if there are non-ascii
# characters present. If not, use 'us-ascii', otherwise use utf-8.
# XXX: This can be removed once #7304 is fixed.
# Author: Anthony Baxter
# Originally from the sndhdr module.
# There are others in sndhdr that don't have MIME types. :(
# Additional ones to be added to sndhdr? midi, mp3, realaudio, wma??
# Try to identify a sound file type.
# sndhdr.what() had a pretty cruddy interface, unfortunately.  This is why
# we re-do it here.  It would be easier to reverse engineer the Unix 'file'
# command and use the standard 'magic' file, as shipped with a modern Unix.
# 'RIFF' <len> 'WAVE' 'fmt ' <len>
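# That header layout can be sketched as a check like this (helper name
# hypothetical):

```python
def looks_like_wav(header: bytes) -> bool:
    # 'RIFF' <4-byte len> 'WAVE' 'fmt ' ...
    return (len(header) >= 16
            and header[:4] == b'RIFF'
            and header[8:12] == b'WAVE'
            and header[12:16] == b'fmt ')
```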
# The public API prohibits attaching multiple subparts to MIMEBase
# derived subtypes since none of them are, by definition, of content
# type multipart/*
# Special case: '/.' on PATH_INFO doesn't get stripped,
# because we don't strip the last element of PATH_INFO
# if there's only one path part left.  Instead of fixing this
# above, we fix it here so that PATH_INFO gets normalized to
# an empty string in the environ.
# Set up base environment
# skip content length, type, etc.
# comma-separate multiple headers
# backpointer for logging
# serve one request, then exit
# (c) 2005 Ian Bicking and contributors; written for Paste (http://pythonpaste.org)
# Licensed under the MIT license: https://opensource.org/licenses/mit-license.php
# Also licensed under the Apache License, 2.0: https://opensource.org/licenses/apache2.0.php
# We use this to check if the application returns without
# calling start_response:
# We want to make sure __iter__ is called
# Extension, we don't care about its type
# @@: these need filling out:
# Implicitly check that we can turn it into an integer:
# @@: need one more person to verify this interpretation of RFC 2616
# More exc_info checks?
# Technically a bytestring is legal, which is why it's a really bad
# idea, because it may cause the response to be returned
# character-by-character
# Weekday and month names for HTTP date/time formatting; always English!
# Dummy so we can use 1-based month numbers
# Take the basic environment from native-unicode os.environ. Attempt to
# fix up the variables that come from the HTTP request to compensate for
# the bytes->unicode decoding step that will already have taken place.
# On win32, the os.environ is natively Unicode. Different servers
# decode the request bytes using different encodings.
# On IIS, the HTTP request will be decoded as UTF-8 as long
# as the input is a valid UTF-8 sequence. Otherwise it is
# decoded using the system code page (mbcs), with no way to
# detect this has happened. Because UTF-8 is the more likely
# encoding, and mbcs is inherently unreliable (an mbcs string
# that happens to be valid UTF-8 will not be decoded as mbcs)
# always recreate the original bytes as UTF-8.
# Apache mod_cgi writes bytes-as-unicode (as if ISO-8859-1) direct
# to the Unicode environ. No modification needed.
# Python 3's http.server.CGIHTTPRequestHandler decodes
# using the urllib.unquote default of UTF-8, amongst other issues.
# For other servers, guess that they have written bytes to
# the environ using stdio byte-oriented interfaces, ending up
# with the system code page.
# Recover bytes from unicode environ, using surrogate escapes
# where available (Python 3.1+).
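# The recreate-bytes step can be sketched as follows (function name
# hypothetical; wsgiref.handlers.read_environ does the equivalent):

```python
def fix_environ_value(value: str, source_encoding: str) -> str:
    # Recover the original request bytes from the (mis)decoded string,
    # then re-expose them as ISO-8859-1 with surrogate escapes, the
    # form PEP 3333 expects for environ values.
    raw = value.encode(source_encoding, 'surrogateescape')
    return raw.decode('iso-8859-1', 'surrogateescape')
```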
# Configuration parameters; can override per-subclass or per-instance
# We are transmitting direct to client
# Version that should be used for response
# String name of server software, if any
# os_environ is used to supply configuration from the OS environment:
# by default it's a copy of 'os.environ' as of import time, but you can
# override this in e.g. your __init__ method.
# Collaborator classes
# set to None to disable
# must be a Headers-like class
# Error handling (also per-subclass or per-instance)
# Print entire traceback to self.get_stderr()
# State variables (don't mess with these)
# Note to self: don't move the close()!  Asynchronous servers shouldn't
# call close() from finish_response(), so if you close() anywhere but
# the double-error branch here, you'll break asynchronous servers by
# prematurely closing.  Async servers must return from 'run()' without
# closing if there might still be output to iterate over.
# We expect the client to close the connection abruptly from time
# to time.
# If we get an error handling an error, just give up already!
# ...and let the actual server figure it out.
# Call close() on the iterable returned by the WSGI application
# in case of an exception.
# We only call close() when no exception is raised, because it
# will set status, result, headers, and environ fields to None.
# See bpo-29183 for more details.
# XXX Try for chunked encoding if origin server and client is 1.1
# avoid dangling circular ref
# Before the first output, send the stored headers
# make sure we know content-length
# XXX check Content-Length and truncate if too many bytes written?
# No platform-specific transmission by default
# Only zero Content-Length if not set by the application (so
# that HEAD requests can be satisfied properly, see #3839)
# XXX check if content-length was too short?
# XXX else: attempt advanced recovery techniques for HTML or text?
# Pure abstract methods; *must* be overridden in subclasses
# Do not allow os.environ to leak between requests in Google App Engine
# and other multi-run CGI use cases.  This is not easily testable.
# See http://bugs.python.org/issue7250
# By default, IIS gives a PATH_INFO that duplicates the SCRIPT_NAME at
# the front, causing problems for WSGI applications that wish to implement
# routing. This handler strips any such duplicated path.
# IIS can be configured to pass the correct PATH_INFO, but this causes
# another bug where PATH_TRANSLATED is wrong. Luckily this variable is
# rarely used and is not guaranteed by WSGI. On IIS<7, though, the
# setting can only be made on a vhost level, affecting all other script
# mappings, many of which break when exposed to the PATH_TRANSLATED bug.
# For this reason IIS<7 is almost never deployed with the fix. (Even IIS7
# rarely uses it because there is still no UI for it.)
# There is no way for CGI code to tell whether the option was set, so a
# separate handler class is provided.
# Optional: def close(self) -> object: ...
# 00 00 -- -- - utf-32-be
# 00 XX -- -- - utf-16-be
# XX 00 00 00 - utf-32-le
# XX 00 00 XX - utf-16-le
# XX 00 XX -- - utf-16-le
# 00 XX - utf-16-be
# XX 00 - utf-16-le
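# The null-byte pattern table above translates into roughly this
# (a simplified sketch of json.detect_encoding that ignores BOMs and
# inputs shorter than four bytes):

```python
def sniff_utf(data: bytes) -> str:
    if len(data) >= 4:
        if data[0] == 0 and data[1] == 0:
            return 'utf-32-be'      # 00 00 -- --
        if data[0] == 0:
            return 'utf-16-be'      # 00 XX -- --
        if data[1] == 0 and data[2] == 0 and data[3] == 0:
            return 'utf-32-le'      # XX 00 00 00
        if data[1] == 0:
            return 'utf-16-le'      # XX 00 00 XX / XX 00 XX --
    return 'utf-8'
```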
#ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,))
#return '\\u%04x' % (n,)
# Check for specials.  Note that this type of test is processor
# and/or platform-specific, so do tests which don't depend on the
# internals.
# Subclasses of int/float may override __repr__, but we still
# want to encode them as integers/floats in JSON. One example
# within the standard library is IntEnum.
# see comment above for int
# JavaScript is weakly typed for these, so it makes sense to
# also allow them.  Many encoders seem to do something like this.
# see comment for int/float in _make_iterencode
# Note that this exception is used from _json
#msg = "Invalid control character %r at" % (terminator,)
# Use speedup if available
# Use a slice to prevent IndexError from being raised, the following
# check will raise a more specific ValueError if the string is empty
# Normally we expect nextchar == '"'
# To skip some function call overhead we optimize the fast paths where
# the JSON key separator is ": " or just ":".
# anything left in actual is unexpected
# elements need not be hashable
# elements must be hashable
# IsolatedAsyncioTestCase will be imported lazily.
# Lazy import of IsolatedAsyncioTestCase from .async_case
# It imports asyncio, which is relatively heavy, but most tests
# do not need it.
# let unexpected exceptions pass through
# assertNoLogs
# assertLogs
# Pretend it's signal.default_int_handler instead.
# Not quite the same thing as SIG_IGN, but the closest we
# can make it: do nothing.
# if we aren't the installed handler, then delegate immediately
# to the default handler
# Names intentionally have a long prefix
# to reduce a chance of clashing with user-defined attributes
# from inherited test case
# The class doesn't call loop.run_until_complete(self.setUp()) and family
# but uses a different approach:
# 1. create a long-running task that reads an awaitable (e.g. self.setUp())
# 2. await the awaitable object passed in and set the result into a future
# 3. outer code puts the awaitable and the future object into a queue and
#    waits on the future
# The trick is necessary because every run_until_complete() call
# creates a new task with embedded ContextVar context.
# To share contextvars between setUp(), test and tearDown() we need to execute
# them inside the same task.
# Note: the test case modifies event loop policy if the policy was not instantiated
# yet, unless loop_factory=asyncio.EventLoop is set.
# asyncio.get_event_loop_policy() creates a default policy on demand but never
# returns None
# This is unlikely to be an issue in user-level tests, but Python's own
# test suite should reset the policy in every test module
# by calling asyncio.set_event_loop_policy(None) in tearDownModule()
# or set loop_factory=asyncio.EventLoop
# A trivial trampoline to addCleanup()
# the function exists because it has a different semantics
# and signature:
# addCleanup() accepts regular functions
# but addAsyncCleanup() accepts coroutines
# We intentionally don't add inspect.iscoroutinefunction() check
# for func argument because there is no way
# to check for async function reliably:
# 1. It can be "async def func()" itself
# 2. Class can implement "async def __call__()" method
# 3. Regular "def func()" that returns awaitable object
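The three cases above can be illustrated directly; `inspect.iscoroutinefunction()` only detects the first, which is why the check is omitted (function names below are hypothetical):

```python
import inspect

async def plain():            # 1. an "async def" function
    return 1

class Callable:               # 2. a class with "async def __call__"
    async def __call__(self):
        return 2

def factory():                # 3. a regular "def" returning an awaitable
    return plain()

assert inspect.iscoroutinefunction(plain)
assert not inspect.iscoroutinefunction(Callable())  # case 2 is missed
assert not inspect.iscoroutinefunction(factory)     # case 3 is missed
```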
# Force loop to be initialized and set as the current loop
# so that setUp functions can use get_event_loop() and get the
# correct loop instance.
# on Linux / Mac OS X 'foo.PY' is not importable, but on
# Windows it is. Simpler to do a case insensitive match
# a better check would be to check that the name is a
# valid Python module name.
# on Windows both '\' and '/' are used as path
# separators. Better to replace both than rely on os.path.sep
# defaults for testing
# even if DeprecationWarnings are ignored by default
# print them anyway unless other warnings settings are
# specified by the warnings arg or the -W python flag
# here self.warnings is set either to the value passed
# to the warnings args or to None.
# If the user didn't pass a value self.warnings will
# be None. This means that the behavior is unchanged
# and depends on the values passed to -W.
# this allows "python -m unittest -v" to still work for
# test discovery.
# to support python -m unittest ...
# createTests will load tests from self.module
# handle command line args for test discovery
# for testing
# didn't accept the tb_locals or durations argument
# didn't accept the verbosity, buffer or failfast arguments
# it is assumed to be a TestRunner instance
# what about .pyc (etc)
# we would need to avoid loading the same tests multiple times
# from '.py', *and* '.pyc'
# Tracks packages which we have called into via load_tests, to
# avoid infinite re-entrancy.
# We don't load any tests from base types that should not be loaded.
# Last error so we can give it to the user if needed.
# Even the top level import failed: report that error.
# We can't traverse some part of the name.
# This is a package (no __path__ per importlib docs), and we
# encountered an error importing something. We cannot tell
# the difference between package.WrongNameTestClass and
# package.wrong_module_name so we just report the
# ImportError - it is more informative.
# Otherwise, we signal that an AttributeError has occurred.
# static methods follow a different path
# make top_level_dir optional if called from load_tests in a package
# all test modules must be importable from the top level directory
# should we *unconditionally* put the start directory in first
# in sys.path to minimise likelihood of conflicts between installed
# modules and development versions?
# support for discovery from dotted module names
# builtin module
# here we have been given a module rather than a package - so
# all we can do is search the *same* directory the module is in
# should an exception be raised instead?
# override this method to use alternative matching strategy
# Handle the __init__ in this package
# name is '.' when start_dir == top_level_dir (and top_level_dir is by
# definition not a package).
# name is in self._loading_packages while we have called into
# loadTestsFromModule with name.
# Either an error occurred, or load_tests was used by the
# package.
# Handle the contents.
# we found a package that didn't use load_tests.
# valid Python identifiers only
# if the test file matches, load it
# Mark this package as being in load_tests (possibly ;))
# loadTestsFromModule(package) has loaded tests for us.
# support for suite implementations that have overridden self._tests
# Some unittest tests add non TestCase/TestSuite objects to
# the suite.
# test may actually be a function
# so its class will be a builtin-type
# Inspired by the ErrorHolder from Twisted:
# http://twistedmatrix.com/trac/browser/trunk/twisted/trial/runner.py
# attribute used by TestResult._exc_info_to_string
# could call result.addError(...) - but this test-like object
# shouldn't be run anyway
# By default, we don't do anything with successful subtests, but
# more sophisticated test results might want to record them.
# support for a TextTestRunner using an old TestResult class
# Pass test repr and not the test object itself to avoid resources leak
# The hasattr check is for test_result's OldResult test.  That
# way this method works on objects that lack the attribute.
# (where would such result instances come from? old stored pickles?)
# Detect loops in chained exceptions.
# Skip test runner traceback levels
# Skip assert*() traceback levels
# mock.py
# Test tools for mocking and patching.
# Maintained by Michael Foord
# Backport for other versions of Python available from
# https://pypi.org/project/mock
# Workaround for issue #12370
# Without this, the __class__ properties wouldn't be set correctly
# can't use isinstance on Mock objects because they override __class__
# The base class for all mocks is NonCallableMock
# Autospecced functions will return a FunctionType with "mock" attribute
# which is the actual mock object that needs to be used.
# If it's a type and should be modelled as a type, use __init__.
# Skip the `self` argument in __init__
# Skip the `cls` argument of a class method
# Use the original decorated method to extract the correct function signature
# If we really want to model an instance of the passed type,
# __call__ should be looked up, not __init__.
# Certain callable types are not supported by inspect.signature()
# we explicitly don't copy func.__dict__ into this copy as it would
# expose original attributes that should be mocked
# checks for list or tuples
# XXXX badly named!
# already an instance
# *could* be broken by a class overriding __mro__ or __dict__ via
# a metaclass
# creates a function with signature (*args, **kwargs) that delegates to a
# mock. It still does signature checking by calling a lambda with the same
# signature as the original.
# creates an async function with signature (*args, **kwargs) that delegates to a
# Mock is not configured yet so the attributes are set
# to a function and then the corresponding mock helper function
# is called when the helper is accessed similar to _setup_func.
# setattr(mock, attribute, wrapper) causes late binding
# hence attribute will always be the last value in the loop
# Use partial(wrapper, attribute) to ensure the attribute is bound
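The late-binding pitfall and the `partial()` fix described above can be sketched in isolation (the loop and names here are illustrative, not mock internals):

```python
from functools import partial

funcs = {}
for attribute in ("a", "b"):
    # Late binding: the lambda looks up `attribute` at call time, so every
    # closure created in the loop sees the final loop value
    funcs[attribute + "_late"] = lambda: attribute
    # partial() freezes the current value of `attribute` instead
    funcs[attribute + "_bound"] = partial(lambda attr: attr, attribute)

assert funcs["a_late"]() == "b"   # both late closures report "b"
assert funcs["a_bound"]() == "a"  # partial captured "a" at bind time
assert funcs["b_bound"]() == "b"
```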
# Without this help(unittest.mock) raises an exception
# setting a mock (value) as a child or return value of itself
# should not modify the mock
# Internal class to identify if we wrapped an iterator object or not.
# Store a mutex as a class attribute in order to protect concurrent access
# to mock attributes. Using a class attribute allows all NonCallableMock
# instances to share the mutex for simplicity.
# See https://github.com/python/cpython/issues/98624 for why this is
# every instance has its own class
# so we can create magic methods on the
# class without stomping on other mocks
# Check if spec is an async object or function
# we sort on the number of dots so that
# attributes are set before we set attributes on
# XXXX should we get the attribute without triggering code
# execution?
# property setters go through here
# only set _new_name and not name so that mock_calls is tracked
# but not method calls
# for magic methods that are still MagicProxy objects and
# not set on the instance itself
# If an autospecced object is attached using attach_mock the
# child would be a function with mock object as attribute from
# which signature has to be derived.
# Any asynchronous magic becomes an AsyncMock
# Any synchronous method on AsyncMock becomes a MagicMock
# Denylist for forbidden attribute names in safe mode
# XXXX backwards compatibility
# but this will blow up on first call - so maybe we should fail early?
# stub method that can be replaced with one with a specific signature
# can't use self in-case a function / method we are mocking uses self
# in the signature
# handle call_args
# needs to be set here so assertions on call arguments pass before
# execution in the case of awaited calls
# initial stuff for method_calls:
# initial stuff for mock_calls:
# follow up the chain of mocks:
# handle method_calls:
# handle mock_calls:
# follow the parental chain:
# separate from _increment_mock_call so that awaited functions are
# executed separately from their call, also AsyncMock overrides this method
# _check_spec_arg_typos takes kwargs from commands like patch and checks that
# they don't contain common misspellings of arguments related to autospeccing.
# NB. Keep the method in sync with decorate_async_callable()
# NB. Keep the method in sync with decorate_callable()
# normalise False to None
# set spec to the object we are replacing
# If we're patching out a class and there is a spec
# Determine the Klass to use
# add a name to mocks
# we can only tell if the instance should be callable if the
# spec is not a list
# spec is ignored, new *must* be default, spec_set is treated
# as a boolean. Should we check spec is not None and that spec_set
# is a bool?
# can't set keyword args when we aren't creating the mock
# XXXX If new is a Mock we could call new.configure_mock(**kwargs)
# needed for proxy objects like django settings
# If the patch hasn't been started this will fail
# need to wrap in a list for python 3, where items is a view
# support any argument supported by dict(...) constructor
# dict like object with no copy method
# must support iteration over keys
# dict like object with no update method
# we added divmod and rdivmod here instead of numerics
# because there is no idivmod
# not including __prepare__, __instancecheck__, __subclasscheck__
# (as they are metaclass methods)
# __del__ is not supported at all as it causes problems if it exists
# Magic methods used for async `with` statements
# Magic methods that are only used with async calls but are synchronous functions themselves
# if ret_val was already an iterator, then calling iter on it should
# return the iterator unchanged
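The iterator-protocol property relied on here is that `iter()` applied to an object that is already an iterator returns that same object:

```python
it = iter([1, 2, 3])
assert iter(it) is it        # calling iter() on an iterator is a no-op
lst = [1, 2, 3]
assert iter(lst) is not lst  # an iterable yields a fresh iterator instead
```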
# make magic work for kwargs in init
# fix magic broken by upper level init
# remove unneeded magic methods
# don't overwrite existing attributes if called a second time
# Don't reset return values for magic methods,
# otherwise `m.__str__` will start
# to return `MagicMock` instances, instead of `str` instances.
# iscoroutinefunction() checks _is_coroutine property to say if an
# object is a coroutine. Without this check it looks to see if it is a
# function/method, which in this case it is not (since it is an
# AsyncMock).
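The `_is_coroutine` marker mentioned above is what lets `asyncio.iscoroutinefunction()` (which performs an identity check against asyncio's marker object) recognize an `AsyncMock`; a plain `MagicMock` lacks the marker. A small sketch:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

assert asyncio.iscoroutinefunction(AsyncMock())
# MagicMock auto-creates an _is_coroutine attribute on access, but it is
# not *the* marker object, so the identity check fails as intended
assert not asyncio.iscoroutinefunction(MagicMock())
```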
# It is set through __dict__ because when spec_set is True, this
# attribute is likely undefined.
# This is nearly just like super(), except for special handling
# of coroutines
# It is impossible to propagate a StopIteration
# through coroutines because of PEP 479
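The PEP 479 behavior referenced above applies to generators as well as coroutines: a `StopIteration` raised inside the frame is converted to `RuntimeError` rather than silently terminating iteration. A generator-based sketch:

```python
def gen():
    yield 1
    raise StopIteration  # PEP 479: no longer a silent stop

g = gen()
assert next(g) == 1
try:
    next(g)
    converted = False
except RuntimeError:     # the StopIteration was converted
    converted = True
assert converted
```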
# could be (name, args) or (name, kwargs) or (args, kwargs)
# this order is important for ANY to work!
# can't pass a list instance to the mock constructor as it will be
# interpreted as a list of strings
# None we mock with a normal mock without a spec
# for a top level object no _new_name should be set
# descriptors don't have a spec
# because we don't know what type they return
# should only happen at the top level because we don't
# recurse for functions
# Pop wraps from kwargs because it must not be passed to configure_mock.
# MagicMock already does the useful magic methods for us
# XXXX do we need a better way of getting attributes without
# triggering code execution (?) Probably not - we need the actual
# object to mock it so we would rather trigger a property than mock
# the property descriptor. Likewise we want to mock out dynamically
# provided attributes.
# XXXX what about attributes that raise exceptions other than
# AttributeError on being fetched?
# we could be resilient against it, or catch and propagate the
# exception when the attribute is fetched from the mock
# Wrap child attributes also.
# so functions created with _set_signature become instance attributes,
# *plus* their underlying mock exists in _mock_children of the parent
# mock. Adding to _mock_children may be unnecessary where we are also
# setting as an instance attribute?
# kwargs are passed with respect to the parent mock so, they are not used
# for creating return_value of the parent mock. So, this condition
# should be true only for the parent mock if kwargs are given.
# instance attribute - shouldn't skip
# Normal method => skip if looked up on type
# (if looked up on instance, self is already skipped)
# function is a dynamically provided attribute
# python function
# instance method
# Only reset the side effect if the user hasn't overridden it.
# Event for any call
# Events for each of the calls
# We change sys.argv[0] to make help message more useful
# use executable without path, unquoted
# (it's just a hint anyway)
# (if you have spaces in your executable you get what you deserve!)
# text-mode streams translate to \r\n if needed
# didn't accept the durations argument
# if self.warnings is set, use it to filter all the warnings
# explicitly break a reference cycle:
# exc_info -> frame -> exc_info
# Swallows all but first exception. If a multi-exception handler
# gets written we should use that here instead.
# bpo-23890: manually break a reference cycle
# store exception, without traceback, for later retrieval
# The __warningregistry__'s need to be in a pristine state for tests
# to work properly.
# store warning for later retrieval
# Now we simply try to choose a helpful failure message
# If a string is longer than _diffThreshold, use normal comparison instead
# of difflib.  See #11763.
# Attribute used by TestSuite for classSetUp
# we allow instantiation with no explicit method name
# but not an *incorrect* or missing method name
# Map types to custom assertEqual functions that will compare
# instances of said type in more detail to generate a more useful
# error message.
# If the test is expecting a failure, we really want to
# stop now and register the expected failure.
# We need to pass an actual exception and traceback to addFailure,
# otherwise the legacy result can choke.
# If the class or method was skipped.
# explicitly break reference cycle:
# outcome.expectedFailure -> frame -> outcome -> outcome.expectedFailure
# clear the outcome, no more needed
# return this for backwards compatibility
# even though we no longer use it internally
# don't switch to '{}' formatting in Python 2.X
# it changes the way unicode input is handled
# Lazy import to avoid importing logging if it is not needed.
# NOTE(gregory.p.smith): I considered isinstance(first, type(second))
# and vice versa.  I opted for the conservative approach in case
# subclasses are not intended to be compared in detail to their super
# class instances using a type equality func.  This means testing
# subtypes won't automagically use the detailed comparison.  Callers
# should use their type specific assertSpamEqual method to compare
# subclasses if the detailed comparison is desired and appropriate.
# See the discussion in http://bugs.python.org/issue2578.
# shortcut
# The sequences are the same, but have differing types.
# Handle case with unhashable elements
# Don't use difflib if the strings are too long
# Append \n to both strings if either is missing the \n.
# This allows the final ndiff to show the \n difference. The
# exception here is if the string is empty, in which case no
# \n should be added
# Generate the message and diff, then raise the exception
# _formatMessage ensures the longMessage option is respected
# find_library(name) returns the pathname of a library, or None.
# This function was copied from Lib/distutils/msvccompiler.py
# I don't think paths are affected by minor version in version 6
# else we don't know what version of the compiler this is
# better be safe than sorry
# CRT is no longer directly loadable. See issue23606 for the
# discussion about alternative approaches.
# If Python was built in debug mode
# See MSDN for the REAL search order.
# AIX has two styles of storing shared libraries
# GNU auto_tools refer to these as svr4 and aix
# svr4 (System V Release 4) is a regular file, often with .so as suffix
# AIX style uses an archive (suffix .a) with members (e.g., shr.o, libssl.so)
# see issue#26439 and _aix.py for more details
# Andreas Degert's find functions, using gcc, /sbin/ldconfig, objdump
# Run GCC's linker with the -t (aka --trace) option and examine the
# library name it prints out. The GCC command will fail because we
# haven't supplied a proper program with main(), but that does not
# matter.
# No C compiler available, give up
# E.g. bad executable
# Raised if the file was already removed, which is the normal
# behaviour of GCC if linking fails
# Check if the given file is an elf file: gcc can report
# some files that are linker scripts and not actual
# shared objects. See bpo-41976 for more details
# use /usr/ccs/bin/dump on solaris
# E.g. command not found
# assuming GNU binutils / ELF
# objdump is not available, give up
# "libxyz.so.MAJOR.MINOR" => [ MAJOR, MINOR ]
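The soname-to-version mapping noted above can be sketched with a simple regular expression (the pattern and soname here are illustrative, not the exact one used by ctypes.util):

```python
import re

soname = "libxyz.so.1.2"  # hypothetical library name
m = re.search(r"\.so\.(\d+)(?:\.(\d+))?$", soname)
version = [int(part) for part in m.groups() if part is not None]
assert version == [1, 2]
```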
# XXX assuming GLIBC's ldconfig (with option -p)
# See issue #9998 for why this is needed
# result will be None
# See issue #9998
# test code
# find and load_version
# load
# issue-26439 - fix broken test call for AIX
# librpm.so is only available as 32-bit shared library
# check _OTHER_ENDIAN attribute (present if typ is primitive type)
# if typ is array
# if typ is structure or union
# Note: The Structure metaclass checks for the *presence* (not the
# value!) of a _swappedbytes_ attribute to determine the bit order in
# structures containing bit fields.
# The most useful windows datatypes
#UCHAR = ctypes.c_uchar
# in the windows header files, these are structures.
# WPARAM is defined as UINT_PTR (unsigned type)
# LPARAM is defined as LONG_PTR (signed type)
# HANDLE types
# in the header files: void *
# Some important structure definitions
# Pointer types
# On OS X 10.3, we use RTLD_GLOBAL as default mode
# because RTLD_LOCAL does not work at least on some
# libraries.  OS X 10.3 is Darwin 7, so we check for
# that.
# WINOLEAPI -> HRESULT
# WINOLEAPI_(type)
# STDMETHODCALLTYPE
# STDMETHOD(name)
# STDMETHOD_(type, name)
# STDAPICALLTYPE
# Alias to create_string_buffer() for backward compatibility
# docstring set later (very similar to CFUNCTYPE.__doc__)
# Check sizeof(ctypes_type) against struct.calcsize.  This
# should protect somewhat against a misconfigured libffi.
# Most _type_ codes are the same as used in struct
# if int and long have the same size, make c_int an alias for c_long
# if long and long long have the same size, make c_longlong an alias for c_long
#### backward compatibility:
##c_uchar = c_ubyte
# backwards compatibility (to a bug)
# _SimpleCData.c_wchar_p_from_param
# _SimpleCData.c_char_p_from_param
# UTF-16 requires a surrogate pair (2 wchar_t) for non-BMP
# characters (outside [U+0000; U+FFFF] range). +1 for trailing
# NUL character.
# 32-bit wchar_t (1 wchar_t per Unicode character). +1 for
# trailing NUL character.
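The two sizing rules above (UTF-16 surrogate pairs vs. 32-bit `wchar_t`) are what `create_unicode_buffer()` applies when given a string; a sketch that reproduces the arithmetic on either kind of platform:

```python
import ctypes

s = "a\U0001F600"  # includes a character outside the BMP
if ctypes.sizeof(ctypes.c_wchar) == 2:
    # UTF-16: non-BMP characters need a surrogate pair, +1 for trailing NUL
    expected = sum(2 if ord(c) > 0xFFFF else 1 for c in s) + 1
else:
    # 32-bit wchar_t: one unit per character, +1 for trailing NUL
    expected = len(s) + 1

buf = ctypes.create_unicode_buffer(s)
assert len(buf) == expected
```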
# default values for repr
# If the filename that has been provided is an iOS/tvOS/watchOS
# .fwork file, dereference the location to the true origin of the
# binary.
# XXX Hm, what about HRESULT as normal parameter?
# Mustn't it derive from c_long then?
# _check_retval_ is called with the function's result when it
# is used as restype.  It checks for the FAILED bit, and
# raises an OSError if it is set.
# The _check_retval_ method is implemented in C, so that the
# method definition itself is not included in the traceback
# when it raises an error - that is what we want (and Python
# doesn't have a way to raise an exception in the caller's
# frame).
## void *memmove(void *, const void *, size_t);
## void *memset(void *, int, size_t)
# COM stuff
# CLASS_E_CLASSNOTAVAILABLE
# S_OK
# Fill in specifically-sized types
# Executable bit size - 32 or 64
# Used to filter the search in an archive by size, e.g., -X64
# "libxyz.so.MAJOR.MINOR" => [MAJOR, MINOR]
# nested function, but placed at module level
# as an ld_header was found, return known paths, archives and members
# these lines start with a digit
# blank line (separator), consume line and end for loop
# get_ld_headers parsing:
# 1. Find a line that starts with /, ./, or ../ - set as ld_header
# 2. If "INDEX" occurs in a following line - return ld_header
# 3. get info (lines starting with [0-9])
# be sure to read to the end-of-file - getting all entries
# potential member lines contain "["
# otherwise, no processing needed
# Strip off trailing colon (:)
# member names in the ld_headers output are between square brackets
# additional processing to deal with AIX legacy names for 64-bit members
# AIX 64-bit member is one of shr64.o, shr_64.o, or shr4_64.o
# 32-bit legacy names - both shr.o and shr4.o exist.
# shr.o is the preferred name so we look for shr.o first
# the expression ending for versions must start as
# '.so.[0-9]', i.e., *.so.[at least one digit]
# while multiple, more specific expressions could be specified
# to search for .so.X, .so.X.Y and .so.X.Y.Z
# after the first required 'dot' digit
# any combination of additional 'dot' digits pairs are accepted
# anything more than libFOO.so.digits.digits.digits
# should be seen as a member name outside normal expectations
# look first for a generic match - prepend lib and append .so
# since an exact match with .so as suffix was not found
# look for a versioned name
# If a versioned name is not found, look for AIX legacy member name
# the second (optional) argument is PATH if it includes a /
# /lib is a symbolic link to /usr/lib, skip it
# "lib" is prefixed to emulate compiler name resolution,
# e.g., -lc to libc
# To get here, a member in an archive has not been found
# In other words, either:
# a) a .a file was not found
# b) a .a file did not have a suitable member
# So, look for a .so file
# Check libpaths for .so file
# Note, the installation must prepare a link from a .so
# to a versioned file
# This is common practice by GNU libtool on other platforms
# if we are here, we have not found anything plausible
# These are the defaults as per man dyld(1)
# If DYLD_FRAMEWORK_PATH is set and this dylib_name is a
# framework name, use the first file that exists in the framework
# path if any.  If there is none go on to search the DYLD_LIBRARY_PATH
# if any.
# If DYLD_LIBRARY_PATH is set then use the first file that exists
# in the path.  If none use the original name.
# If we haven't done any searching and found a library and the
# dylib_name starts with "@executable_path/" then construct the
# library name.
# Autogenerated by Sphinx on Thu Aug 14 13:12:07 2025
# as part of the release process.
# A classification of schemes.
# The empty string classifies URLs with no scheme specified,
# being the default value returned by “urlsplit” and “urlparse”.
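The empty-string default mentioned above is observable directly from `urlsplit`:

```python
from urllib.parse import urlsplit

# No scheme present: urlsplit reports the empty string
assert urlsplit("//example.com/path").scheme == ""
assert urlsplit("https://example.com/path").scheme == "https"
```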
# These are not actually used anymore, but should stay for backwards
# compatibility.  (They are undocumented, but have a public-looking name.)
# Characters valid in scheme names
# Leading and trailing C0 control and space to be stripped per WHATWG spec.
# == "".join([chr(i) for i in range(0, 0x20 + 1)])
# Unsafe bytes to be removed per WHATWG spec
# Helpers for bytes handling
# For 3.2, we deliberately require applications that
# handle improperly quoted URLs to do their own
# decoding and encoding. If valid use cases are
# presented, we may relax this by using latin-1
# decoding internally for 3.3
# Invokes decode if necessary to create str args
# and returns the coerced inputs along with
# an appropriate result coercion function
# We special-case the empty string to support the
# "scheme=''" default argument to some functions
# Result objects are more helpful than simple tuples
# Scoped IPv6 address may have zone info, which must not be lowercased
# like http://[fe80::822a:a8ff:fe49:470c%tESt]:1234/keys
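The zone-preservation rule above means the `hostname` property lowercases only the address part and leaves the zone identifier's case intact (sketch on a recent Python; the scope id `Eth0` is hypothetical):

```python
from urllib.parse import urlsplit

host = urlsplit("http://[FE80::1%Eth0]:1234/keys").hostname
assert host == "fe80::1%Eth0"  # address lowercased, zone id case kept
```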
# For backwards compatibility, alias _NetlocResultMixinStr
# ResultBase is no longer part of the documented API, but it is
# retained since deprecating it isn't worth the hassle
# Structured result objects for string data
# Structured result objects for bytes data
# Set up the encode/decode result pairs
# position of end of domain part of url, default is end
# look for delimiters; the order is NOT important
# find first of this delim
# if found
# use earliest delim position
# return (domain, rest)
# looking for characters like \u2100 that expand to 'a/c'
# IDNA uses NFKC equivalence, so normalize for this check
# ignore characters already included
# but not the surrounding text
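The expansion referenced above is visible with `unicodedata`: U+2100 (ACCOUNT OF) normalizes to the three characters `a/c` under NFKC, which is why the check must normalize before looking for unsafe characters:

```python
import unicodedata

# U+2100 expands to 'a/c' under NFKC, smuggling in a '/'
assert unicodedata.normalize("NFKC", "\u2100") == "a/c"
```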
# Note that this function must mirror the splitting
# done in NetlocResultMixins._hostinfo().
# No data is allowed before a bracket.
# No data is allowed after the bracket but before the port delimiter.
# Valid bracketed hosts are defined in
# https://www.rfc-editor.org/rfc/rfc3986#page-49 and https://url.spec.whatwg.org/
# Raises ValueError if not a valid IPv6 or IPv4 address
# typed=True avoids BytesWarnings being emitted during cache key
# comparison since this API supports both bytes and str input.
# Only lstrip url as some applications rely on preserving trailing space.
# (https://url.spec.whatwg.org/#concept-basic-url-parser would strip both)
# the last item is not a directory, so will not be taken into account
# in resolving the relative path
# for RFC 3986, ignore the base path entirely if the path starts at the root ('/')
# filter out elements that would cause redundant slashes on re-joining
# the resolved_path
# ignore any .. segments that would otherwise cause an IndexError
# when popped from resolved_path if resolving for rfc3986
# do some post-processing here. if the last segment was a relative dir,
# then we need to append the trailing '/'
# Note: strings are encoded as UTF-8. This is only an issue if it contains
# unescaped non-ASCII characters, which URIs should not.
# Is it a string-like object?
# Non-ASCII
# The ascii_match[1] group == string[start:end].
# Non-ASCII tail
# Use memoryview() to reject integers and iterables,
# acceptable by the bytes constructor.
# If max_num_fields is defined then check that the number of fields
# is less than max_num_fields. This prevents a memory exhaustion DOS
# attack via post bodies with many fields.
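The `max_num_fields` guard described above rejects over-long query strings before any parsing work is done:

```python
from urllib.parse import parse_qs

assert parse_qs("a=1&b=2", max_num_fields=2) == {"a": ["1"], "b": ["2"]}
try:
    parse_qs("a=1&b=2&c=3", max_num_fields=2)  # too many fields
    raised = False
except ValueError:
    raised = True
assert raised
```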
# Keeps a cache internally, via __missing__, for efficiency (lookups
# of cached keys don't call Python code at all).
# Handle a cache miss. Store quoted string in cache and return.
# Check if ' ' in string, where string may either be a str or bytes.  If
# there are no spaces, the regular quote will produce the right answer.
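The space check above is the core of `quote_plus`: with no spaces present it can defer to `quote` unchanged, otherwise it substitutes `+`:

```python
from urllib.parse import quote, quote_plus

assert quote("a b") == "a%20b"          # quote percent-encodes spaces
assert quote_plus("a b") == "a+b"       # quote_plus uses '+' instead
assert quote_plus("ab") == quote("ab")  # no spaces: identical results
```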
# Expectation: A typical program is unlikely to create more than 5 of these.
# Normalize 'safe' by converting to bytes and removing non-ASCII chars
# List comprehensions are faster than generator expressions.
# This saves memory - https://github.com/python/cpython/issues/95865
# It's a bother at times that strings and string-like objects are sequences.
# non-sequence items should not work with len()
# non-empty strings will fail this
# Zero-length sequences of all types will get here and succeed,
# but that's a minor nit.  Since the original implementation
# allowed empty dicts that type of behavior probably should be
# preserved for consistency
# Is this a sufficient test for sequence-ness?
# not a sequence
# loop over the sequence
# Most URL schemes require ASCII. If that changes, the conversion
# can be relaxed.
# XXX get rid of to_bytes()
# splittag('/path#tag') --> '/path', 'tag'
# XXX Add a method to expose the timeout on the underlying socket?
# Keep reference around as this was part of the original API.
# XXX issues:
# If an authentication error handler that tries to perform
# authentication for some reason but fails, how should the error be
# signalled?  The client needs to know the HTTP error code.  But if
# the handler knows what the problem was, e.g., that it didn't know
# the hash algorithm requested in the challenge, it would be good to
# pass that information along to the client, too.
# ftp errors aren't handled cleanly
# check digest against correct (i.e. non-apache) implementation
# Possible extensions:
# complex proxies  XXX not sure what exactly was meant by this
# abstract factory for opener
# check for SSL
# Legacy interface
# used in User-Agent header sent
# Just return the local path and the "headers" for file://
# URLs. No sense in performing a copy unless requested.
# Handle temporary file setup.
# copied from cookielib.py
# unwrap('<URL:type://host/path>') --> 'type://host/path'
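The unwrapping described above is exposed as `urllib.parse.unwrap`, which strips the `<URL:...>` wrapper and is a no-op on already-bare URLs:

```python
from urllib.parse import unwrap

assert unwrap("<URL:http://host/path>") == "http://host/path"
assert unwrap("http://host/path") == "http://host/path"  # no-op otherwise
```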
# issue 16464
# if we change data we need to remove content-length header
# (cause it's most probably calculated for previous value)
# useful for something like authentication
# will not be added to a redirected request
# self.handlers is retained only for backward compatibility
# manage the individual handlers
# oops, coincidental match
# Only exists for backwards compatibility.
# Handlers raise an exception if no one else should try to handle
# the request, or return None if they can't but another handler
# could.  Otherwise, they return the response.
# accept a URL or a Request object
# pre-process request
# post-process response
# XXX http[s] protocols are special-cased
# https is not different than http
# YUCK!
# XXX probably also want an abstract factory that knows when it makes
# sense to skip a superclass in favor of a subclass and when it might
# make sense to include both
# Only exists for backwards compatibility
# Try to preserve the old behavior of having custom classes
# inserted after default ones (works only for custom user
# classes which are not aware of handler_order).
# after all other processing
# According to RFC 2616, "2xx" code indicates that the client's
# request was successfully received, understood, and accepted.
# maximum number of redirections to any single URL
# this is needed because of the state that cookies introduce
# maximum total number of redirections (regardless of URL) before
# assuming we're in a loop
# Strictly (according to RFC 2616), 301 or 302 in response to
# a POST MUST NOT cause a redirection without confirmation
# from the user (of urllib.request, in this case).  In practice,
# essentially all clients do redirect in this case, so we do the same.
# Be lenient with URIs containing a space.  This is mainly
# redundant with the more complete encoding done in http_error_302(),
# but it is kept for compatibility with other callers.
# Implementation note: To avoid the server sending us into an
# infinite loop, the request object needs to track what URLs we
# have already seen.  Do this by adding a handler-specific
# attribute to the Request object.
# fix a possible malformed URL
# For security reasons we don't allow redirection to anything other
# than http, https or ftp.
# http.client.parse_headers() decodes as ISO-8859-1.  Recover the
# original bytes and percent-encode non-ASCII bytes, and any special
# characters such as the space.
# XXX Probably want to forget about the state of the current
# request, although that might interact poorly with other
# handlers that also use handler-specific request attributes
# loop detection
# .redirect_dict has a key url if url was previously visited.
# Don't close the fp until we are sure that we won't use it
# with HTTPError.
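The loop-detection scheme described above can be sketched as follows. The names (`check_redirect`, the plain-dict request state, the limits) are illustrative stand-ins, not urllib's actual API:

```python
# Illustrative limits, mirroring the two described above.
MAX_REPEATS = 4         # maximum redirections to any single URL
MAX_REDIRECTIONS = 10   # maximum total redirections before assuming a loop

def check_redirect(req_state, newurl):
    """Record a redirect to newurl; raise if a loop is suspected.

    req_state stands in for the handler-specific attribute stored on
    the Request object; .redirect_dict has a key for each URL visited.
    """
    visited = req_state.setdefault("redirect_dict", {})
    visited[newurl] = visited.get(newurl, 0) + 1
    if (visited[newurl] > MAX_REPEATS
            or len(visited) >= MAX_REDIRECTIONS):
        raise RuntimeError("redirect loop detected")

state = {}
check_redirect(state, "http://example.com/a")  # first visit is fine
```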
# authority
# URL
# We have an authority, so for RFC 3986-compliant URLs (RFC 3986,
# ss. 3 and 3.3), the path is empty or starts with '/'
# Proxies must be in front
# let other handlers take care of it
# need to start over, because the other handlers don't
# grok the proxy's URL type
# e.g. if we have a constructor arg proxies like so:
# {'http': 'ftp://proxy.example.com'}, we may end up turning
# a request for http://acme.example.com/a into one for
# ftp://proxy.example.com/a
# uri could be a single URI or a sequence
# note HTTP URLs do not have a userinfo component
# URI
# host or host:port
# Add a default for prior auth requests
# XXX this allows for multiple auth-schemes, but will stupidly pick
# the last one with a realm specified.
# allow for double- and single-quoted realm values
# (single quotes are a violation of the RFC, but appear in the wild)
# start of the string or ','
# optional whitespaces
# scheme like "Basic"
# mandatory whitespaces
# realm=xxx
# realm='xxx'
# realm="xxx"
# XXX could pre-emptively send auth info already accepted (RFC 2617,
# end of section 2, and section 1.2 immediately after "credentials"
# production).
# parse WWW-Authenticate header: accept multiple challenges per header
# host may be an authority (without userinfo) or a URL with an authority component
# no header found
# Use the first matching Basic challenge.
# Ignore following challenges even if they use the Basic
# scheme.
# http_error_auth_reqed requires that there is no userinfo component in
# authority.  Assume there isn't one, since urllib.request does not (and
# should not, RFC 3986 s. 3.2.1) support requests for URLs containing
# userinfo.
# Return n random bytes.
# Digest authentication is specified in RFC 2617.
# XXX The client does not inspect the Authentication-Info header
# in a successful response.
# XXX It should be possible to test this implementation against
# a mock server that just generates a static set of challenges.
# XXX qop="auth-int" supports is shaky
# Don't fail endlessly - if we failed once, we'll probably
# fail a second time. Hm. Unless the Password Manager is
# prompting for the information. Crap. This isn't great
# but it's better than the current 'repeat until recursion
# depth exceeded' approach <wink>
# The cnonce-value is an opaque
# quoted string value provided by the client and used by both client
# and server to avoid chosen plaintext attacks, to provide mutual
# authentication, and to provide some message integrity protection.
# This isn't a fabulous effort, but it's probably Good Enough.
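A minimal sketch of that cnonce construction: mix a counter, the server nonce, the current time, and a few random bytes, then hash. `os.urandom` stands in here for the helper that returns n random bytes:

```python
import hashlib
import os
import time

def get_cnonce(nonce_count, nonce):
    # Not a fabulous effort, but probably Good Enough: the value only
    # needs to be opaque and hard for an attacker to predict.
    s = "%s:%s:%s:" % (nonce_count, nonce, time.ctime())
    b = s.encode("ascii") + os.urandom(8)   # os.urandom stands in for the
    return hashlib.sha1(b).hexdigest()[:16] # "return n random bytes" helper

cnonce = get_cnonce(1, "abc123")
```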
# mod_digest doesn't send an opaque, even though it isn't
# supposed to be optional
# XXX selector: what about proxies and full urls
# NOTE: As per RFC 2617, when the server sends "auth,auth-int", the client may use either `auth` or `auth-int`
# XXX MD5-sess
# before Basic auth
# will parse host:port
# TODO(jhylton): Should this be redesigned to handle
# persistent connections?
# We want to make an HTTP/1.1 request, but the addinfourl
# class isn't prepared to deal with a persistent connection.
# It will try to read all remaining data from the socket,
# which will block while the server waits for the next request.
# So make sure the connection gets closed after the (only) request.
# Proxy-Authorization should not be sent to origin
# server.
# timeout error
# If the server does not send us a 'Connection: close' header,
# HTTPConnection assumes the socket should be left open. Manually
# mark the socket to be closed when this response object goes away.
# This line replaces the .msg attribute of the HTTPResponse
# with .headers, because urllib clients expect the response to
# have the reason in .msg.  It would be good to mark this
# attribute as deprecated and get them to use info() or
# .headers.
# append last part
# Use local file or FTP depending on form of URL
# names for the localhost
# not entirely sure what the rules are here
# username/password handling
# XXX would be nice to have pluggable cache strategies
# XXX this stuff is definitely not thread safe
# first check for old ones
# then check the size
# data URLs as specified in RFC 2397.
# ignores POSTed data
# syntax (RFC 2397):
# dataurl   := "data:" [ mediatype ] [ ";base64" ] "," data
# mediatype := [ type "/" subtype ] *( ";" parameter )
# data      := *urlchar
# even base64 encoded data URLs might be quoted so unquote in any case:
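The data: URL handling described above can be sketched as a standalone parser (illustrative helper, not urllib's DataHandler itself):

```python
import base64
from urllib.parse import unquote_to_bytes

def parse_data_url(url):
    """Parse a data: URL per RFC 2397; POSTed data would be ignored."""
    scheme, sep, rest = url.partition(":")
    if scheme != "data" or not sep:
        raise ValueError("not a data URL")
    header, sep, data = rest.partition(",")
    if not sep:
        raise ValueError("missing ',' in data URL")
    # even base64-encoded data URLs might be quoted, so unquote in any case
    payload = unquote_to_bytes(data)
    if header.endswith(";base64"):
        payload = base64.decodebytes(payload)
    return header, payload

mediatype, body = parse_data_url("data:text/plain;base64,SGVsbG8=")
```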
# Code move from the old urllib module
# Trim the ftp cache beyond this size
# Helper for non-unix systems
# URL has an empty authority section, so the path begins on the
# third character.
# Add explicitly empty authority to avoid interpreting the path
# as authority.
# Constructor
# See cleanup()
# Undocumented feature: if you assign {} to tempcache,
# it is used to cache files retrieved with
# self.retrieve().  This is not enabled by default
# since it does not work for changing documents (and I
# haven't got the logic to check expiration headers
# yet).
# Undocumented feature: you can use a different
# ftp cache by assigning to the .ftpcache member;
# in case you want logically independent URL openers
# XXX This is not threadsafe.  Bah.
# This code sometimes runs when the rest of this module
# has already been deleted, so it can't use any globals
# or import anything.
# External interface
# Signal special case to open_*()
# raise exception if actual size does not match content-length header
# Each method named open_<type> knows how to open that type of URL
# check whether the proxy contains authorization information
# now we proceed with the url we want to obtain
# Add Connection:close as we don't support persistent connections yet.
# This helps in closing the socket and avoiding ResourceWarning
# something went wrong with the HTTP status line
# First check if there's a specific handler for this error
# cert and key file means the user wants to authenticate.
# enable TLS 1.3 PHA implicitly even for custom contexts.
# XXX thread unsafe!
# Prune the cache, rather arbitrarily
# ignore POSTed data
# XXX is this encoding/decoding ok?
#f.fileno = None     # needed for addinfourl
# In case the server sent a relative URL, join with original:
# For security reasons, we don't allow redirection to anything other
# than http, https and ftp.
# We are using the newer HTTPError with the older redirect_internal method
# This older method will get deprecated in 3.3
# Try to retrieve as a file
# Set transfer mode to ASCII!
# Try a directory listing. Verify that directory exists.
# Pass back both a suitably decorated object and a retrieval length
# Proxy handling
# in order to prefer lowercase variables, process environment in
# two passes: first matches any, second pass matches lowercase only
# select only environment variables whose lowercased name ends in _proxy
# fast check of the underscore position before more expensive case-folding
# CVE-2016-1000110 - If we are running as CGI script, forget HTTP_PROXY
# (non-all-lowercase) as it may be set from the web server by a "Proxy:"
# header from the client
# If "proxy" is lowercase, it will still be used thanks to the next block
# not case-folded, checking here for lower-case env vars only
# don't bypass, if no_proxy isn't specified
# '*' is special case for always bypass
# strip port off host
# check if the host ends with any of the DNS suffixes
# otherwise, don't bypass
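The bypass rules above amount to the following sketch (`bypass_proxy` is an illustrative helper, simplified from the environment-based bypass check):

```python
def bypass_proxy(host, no_proxy):
    """Return True if host should bypass the proxy, per no_proxy."""
    if not no_proxy:                # don't bypass if no_proxy isn't specified
        return False
    if no_proxy == "*":             # '*' is the special always-bypass case
        return True
    hostonly = host.partition(":")[0]   # strip port off host
    for name in (n.strip() for n in no_proxy.split(",")):
        if name:
            name = name.lstrip(".")     # treat ".example.com" like "example.com"
            # match the host itself or any subdomain (DNS suffix)
            if hostonly == name or hostonly.endswith("." + name):
                return True
    return False                    # otherwise, don't bypass

bypass_proxy("www.example.com", "example.com")
```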
# This code tests an OSX specific data structure but is testable on all platforms
# Check for simple host names:
# Items in the list are strings like these: *.local, 169.254/16
# System libraries ignore invalid prefix lengths
# Same as _proxy_bypass_macosx_sysconf, testable on all platforms
# "<local>" should bypass the proxy server for all intranet addresses
# Std module, so should be around - but you never know!
# Returned as Unicode but problems if not converted to ASCII
# Use one setting for all protocols.
# See if address has a type:// prefix
# Add type:// prefix to address without specifying type
# The default proxy type of Windows is HTTP
# Use SOCKS proxy for HTTP(S) protocols
# The default SOCKS proxy type of Windows is SOCKS4
# Either registry key not found etc, or the value in an
# unexpected format.
# proxies already set up to be empty so nothing to do
# Std modules, so should be around - but you never know!
# ^^^^ Returned as Unicode but problems if not converted to ASCII
# By default use environment variables
# the default entry is considered last
# the first default entry wins
# states:
# remove optional comment and strip line
# before trying to convert to int we need to make
# sure that robots.txt has valid syntax otherwise
# it will crash
# check if all values are sane
# According to http://www.sitemaps.org/protocol.html
# "This directive is independent of the user-agent line,
# Therefore we do not change the state of the parser.
# Until the robots.txt file has been read or found not
# to exist, we must assume that no url is allowable.
# This prevents false positives when a user erroneously
# calls can_fetch() before calling read().
# search for given user agent matches
# the first match counts
# try the default entry last
# agent not found ==> access granted
# an empty value means allow all
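The lookup order described above can be sketched like this. The dict-based entries are a simplification; the real parser keeps Entry objects with an `applies_to()` method:

```python
def can_fetch(entries, default_entry, useragent):
    # search the entries for a user agent match; the first match counts,
    # the default entry is tried last, and an unknown agent is granted
    # access
    for entry in entries:
        if entry["agent"] in useragent.lower():
            return entry["allow"]
    if default_entry is not None:
        return default_entry["allow"]
    return True     # agent not found ==> access granted

entries = [{"agent": "badbot", "allow": False},
           {"agent": "badbot/2", "allow": True}]
# "BadBot/2.0" hits the first entry even though the second is more
# specific: the first match counts.
```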
# split the name token and make it lower case
# we have the catch-all agent
# URLError is a sub-type of OSError, but it doesn't share any of
# the implementation.  need to override __init__ and __str__.
# It sets self.args for compatibility with other OSError
# subclasses, but args doesn't have the typical format with errno in
# slot 0 and strerror in slot 1.  This may be better than nothing.
# since URLError specifies a .reason attribute, HTTPError should also provide one.
# Copyright 2001-2022 by Vinay Sajip. All Rights Reserved.
# Permission to use, copy, modify, and distribute this software and its
# documentation for any purpose and without fee is hereby granted,
# provided that the above copyright notice appear in all copies and that
# both that copyright notice and this permission notice appear in
# supporting documentation, and that the name of Vinay Sajip
# not be used in advertising or publicity pertaining to distribution
# of the software without specific, written prior permission.
# VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING
# ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
# VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR
# ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER
# IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
# The following module attributes are no longer updated.
#---------------------------------------------------------------------------
#_startTime is used as the base when calculating the relative time of events
#raiseExceptions is used to see if exceptions during handling should be
#propagated
# If you don't want threading information in the log, set this to False
# If you don't want multiprocessing information in the log, set this to False
# If you don't want process information in the log, set this to False
# If you don't want asyncio task information in the log, set this to False
# Default levels and level names, these can be replaced with any positive set
# of values having corresponding names. There is a pseudo-level, NOTSET, which
# is only really there as a lower limit for user-defined levels. Handlers and
# loggers are initialized with NOTSET so that they will log all messages, even
# at user-defined levels.
# See Issues #22386, #27937 and #29220 for why it's this way
#pragma: no cover
# _srcfile is used when walking the stack to check when we've got the first
# caller stack frame, by skipping frames whose filename is that of this
# module's source. It therefore should contain the filename of this module's
# source file.
# Ordinarily we would use __file__ for this, but frozen modules don't always
# have __file__ set, for some reason (see Issue #21736). Thus, we get the
# filename from a handy code object from a function defined in this module.
# (There's no particular reason for picking addLevelName.)
# _srcfile is only used in conjunction with sys._getframe().
# Setting _srcfile to None will prevent findCaller() from being called. This
# way, you can avoid the overhead of fetching caller information.
# The following is based on warnings._is_internal_frame. It makes sure that
# frames of the import mechanism are skipped when logging at module level and
# using a stacklevel value greater than one.
#_lock is used to serialize access to shared data structures in this module.
#This needs to be an RLock because fileConfig() creates and configures
#Handlers, and so might arbitrary user threads. Since Handler code updates the
#shared dictionary _handlers, it needs to acquire the lock. But if configuring,
#the lock would already have been acquired - so we need an RLock.
#The same argument applies to Loggers and Manager.loggerDict.
# Wrap the lock acquisition in a try-except to prevent the lock from being
# abandoned in the event of an asynchronous exception. See gh-106238.
# Prevent a held logging lock from blocking a child from logging.
# Windows and friends.
# no-op when os.register_at_fork does not exist.
# A collection of instances with a _at_fork_reinit method (logging.Handler)
# to be called in the child after forking.  The weakref avoids us keeping
# discarded Handler instances alive.
# _prepareFork() was called in the parent before forking.
# The lock is reinitialized to unlocked state.
# The following statement allows passing of a dictionary as a sole
# argument, so that you can do something like
# Suggested by Stefan Behnel.
# Note that without the test for args[0], we get a problem because
# during formatting, we test to see if the arg is present using
# 'if self.args:'. If the event being logged is e.g. 'Value is %d'
# and if the passed arg fails 'if self.args:' then no formatting
# is done. For example, logger.warning('Value is %d', 0) would log
# 'Value is %d' instead of 'Value is 0'.
# For the use case of passing a dictionary, this should not be a
# problem.
# Issue #21172: a request was made to relax the isinstance check
# to hasattr(args[0], '__getitem__'). However, the docs on string
# formatting still seem to suggest a mapping object is required.
# Thus, while not removing the isinstance check, it does now look
# for collections.abc.Mapping rather than, as before, dict.
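The single-mapping special case described above can be sketched as a standalone helper (not the actual LogRecord constructor):

```python
import collections.abc

def normalize_args(args):
    # If the sole argument is a non-empty mapping, use it directly so
    # that '%(key)s'-style formatting works.  An empty dict is left
    # alone, and a falsy non-mapping arg like 0 stays wrapped in its
    # tuple, so 'if self.args:' still triggers formatting correctly.
    if (args and len(args) == 1
            and isinstance(args[0], collections.abc.Mapping)
            and args[0]):
        return args[0]
    return args

formatted = "Value is %(v)d" % normalize_args(({"v": 0},))
```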
# used to cache the traceback text
# ns to float seconds
# Get the number of whole milliseconds (0-999) in the fractional part of seconds.
# Eg: 1_677_903_920_999_998_503 ns --> 999_998_503 ns--> 999 ms
# Convert to float by adding 0.0 for historical reasons. See gh-89047
# ns -> sec conversion can round up, e.g:
# 1_677_903_920_999_999_900 ns --> 1_677_903_921.0 sec
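The millisecond arithmetic and its round-up guard, as described above, amount to the following standalone sketch:

```python
def split_time(ct_ns):
    """Split a time in ns into (float seconds, whole milliseconds)."""
    created = ct_ns / 1e9                               # ns to float seconds
    # whole milliseconds (0-999) in the fractional part of the second,
    # e.g. 1_677_903_920_999_998_503 ns -> 999_998_503 ns -> 999 ms;
    # + 0.0 converts to float for historical reasons (gh-89047)
    msecs = (ct_ns % 1_000_000_000) // 1_000_000 + 0.0
    if msecs == 999.0 and int(created) != ct_ns // 1_000_000_000:
        # ns -> sec conversion rounded the seconds up, e.g.
        # 1_677_903_920_999_999_900 ns -> 1_677_903_921.0 sec
        msecs = 0.0
    return created, msecs

split_time(1_677_903_920_999_998_503)
```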
# Errors may occur if multiprocessing has not finished loading
# yet - e.g. if a custom import hook causes third-party code
# to run when multiprocessing calls import. See issue 8200
# for an example
# See issues #9427, #1553375. Commented out for now.
#if getattr(self, 'fullstack', False):
# Cache the traceback text to avoid converting it multiple times
# (it's constant anyway)
# assume callable - will raise if not
#map of handler names to handlers
# added to allow handlers to be removed in reverse of order initialized
# This function can be called during module teardown, when globals are
# set to None. It can also be called from another thread. So we need to
# pre-emptively grab the necessary globals and check if they're None,
# to prevent race conditions and failures during interpreter shutdown.
# Add the handler to the global _handlerList (for cleanup on shutdown)
#get the module data lock, as we're updating a shared structure.
# see issue 13807
# Walk the stack frame up until we're out of logging,
# so as to print the calling context.
# couldn't find the right stack frame, for some reason
# Issue 18671: output logging message and arguments
# See issue 36272
# see issue 5971
# issue 35046: merged two stream.writes into one.
# Issue #27493: add support for Path objects to be passed in
#keep the absolute path, otherwise derived classes which use this
#may come a cropper when the current directory changes
# bpo-26789: FileHandler keeps a reference to the builtin open()
# function to be able to open or reopen the file during Python
#We don't open the stream, but we still need to call the
#Handler constructor to set level, formatter, lock etc.
# Issue #19523: call unconditionally to
# prevent a handler leak when delay is set
# Also see Issue #42378: we also rely on
# self._closed being set to True there
#The if means ... if not c.parent.name.startswith(nm)
#On some versions of IronPython, currentframe() returns None if
#IronPython isn't run with -X:Frames.
## We've got options here.
## If we want to use the last (deepest) frame:
## If we want to mimic the warnings module:
#return ("sys", 1, "(unknown function)", None)
## If we want to be pedantic:
#raise ValueError("call stack is not deep enough")
#IronPython doesn't track Python frames, so findCaller raises an
#exception on some versions of IronPython. We trap it here so that
#IronPython can use logging.
#break out
# exclude PlaceHolders - the last check is to ensure that lower-level
# descendants aren't returned - if there are placeholders, a logger's
# parent field might point to a grandparent or ancestor thereof.
# Boilerplate convenience methods
# Configuration classes and functions
# Add thread safety in case someone mistakenly calls
# basicConfig() from multiple threads
# Utility functions at module level.
# Basically delegate everything to the root logger.
#errors might occur, for example, if files are locked
#we just ignore them if raiseExceptions is not set
# MemoryHandlers might not want to be flushed on close,
# but circular imports prevent us scoping this to just
# those handlers.  Hence the default of True.
# Ignore errors which might be caused
# because handlers have been closed but
# references to them are still around at
# application exit.
# ignore everything, as we're shutting down
#else, swallow
#Let's try and shutdown automatically on application exit...
# Null handler
# Warnings integration
# bpo-46557: Log str(s) as msg instead of logger.warning("%s", s)
# since some log aggregation tools group logs by the msg arg
# Copyright 2001-2023 by Vinay Sajip. All Rights Reserved.
# critical section
# Handlers add themselves to logging._handlers
#for inter-handler references
#the target handler may not be loaded yet, so keep for later...
#now all handlers are loaded, fixup inter-handler references...
# configure the root first
#and now the others...
#we don't want to lose the existing loggers,
#since other threads may have pointers to them.
#existing is set to contain all existing loggers,
#and as we go through the new configuration we
#remove any which are configured. At the end,
#what's left in existing is the set of loggers
#which were in the previous configuration but
#which are not in the new configuration.
#The list needs to be sorted so that we can
#avoid disabling child loggers of explicitly
#named loggers. With a sorted list it is easier
#to find the child loggers.
#We'll keep the list of existing loggers
#which are children of named loggers here...
#now set up the new ones...
# start with the entry after qn
#Disable any old loggers. There's no point deleting
#them as other threads may continue to hold references
#and by disabling them, you stop them doing any logging.
#However, don't disable children of named loggers, as that's
#probably not what was intended by the user.
#for log in existing:
#If the converted value is different, save for next time
# Can't replace a tuple entry.
#print d, rest
#rest should be empty
# str for py3k
# defer importing multiprocessing as much as possible
# Depending on the multiprocessing start context, we cannot create
# a multiprocessing.managers.BaseManager instance 'mm' to get the
# runtime type of mm.Queue() or mm.JoinableQueue() (see gh-119819).
# Since we only need an object implementing the Queue API, we only
# do a protocol check, but we do not use typing.runtime_checkable()
# and typing.Protocol to reduce import time (see gh-121723).
# Ideally, we would have wanted to simply use strict type checking
# instead of a protocol-based type checking since the latter does
# not check the method signatures.
# Note that only 'put_nowait' and 'get' are required by the logging
# queue handler and queue listener (see gh-124653) and that other
# methods are either optional or unused.
# Do formatters first - they don't refer to anything else
# Next, do filters - they don't refer to anything else, either
# Next, do handlers - they refer to formatters and filters
# As handlers can refer to other handlers, sort the keys
# to allow a deterministic order of configuration
# Now do any that were deferred
# Next, do loggers - they refer to handlers and filters
# look after name
# And finally, do the root logger
# for use in exception handler
# logging.Formatter and its subclasses expect the `fmt`
# parameter instead of `format`. Retry passing configuration
# with `fmt`.
# Add defaults only if it exists.
# Prevents TypeError in custom formatter callables that do not
# accept it.
# A TypeError would be raised if "validate" key is passed in with a formatter callable
# that does not accept "validate" as a parameter
# if user hasn't mentioned it, the default will be fine
# unbounded
# for restoring in case of error
# Special case for handler which refers to another handler
# restore for deferred cfg
# Another special case for handler which refers to other handlers
# if 'handlers' not in config:
# raise ValueError('No handlers specified for a QueueHandler')
#The argument name changed from strm to stream
#Retry with old name.
#This is so that code can be used with older Python versions
#(e.g. by Django)
#Remove any existing handlers
# verified, can process
#Apply new configuration.
# Copyright 2001-2021 by Vinay Sajip. All Rights Reserved.
# Some constants...
# number of seconds in a day
# Issue 18940: A file may not have been created if delay is True.
# If rotation/rollover is wanted, it doesn't make sense to use another
# mode. If for example 'w' were specified, then if there were multiple
# runs of the calling application, the logs from previous runs would be
# lost if the 'w' is respected, because the log file would be truncated
# on each run.
# delay was set...
# are we rolling over?
# gh-116263: Never rollover an empty file
# See bpo-45401: Never rollover anything other than regular files
# Calculate the real rollover interval, which is just the number of
# seconds between rollovers.  Also set the filename suffix used when
# a rollover occurs.  Current 'when' events supported:
# S - Seconds
# M - Minutes
# H - Hours
# D - Days
# midnight - roll over at midnight
# W{0-6} - roll over on a certain day; 0 - Monday
# Case of the 'when' specifier is not important; lower or upper case
# will work.
# one second
# one minute
# one hour
# one day
# one week
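The 'when' to base-interval mapping above can be sketched as (`base_interval` is an illustrative helper name):

```python
def base_interval(when):
    """Seconds between rollovers for a given 'when' specifier."""
    # case of the 'when' specifier is not important
    when = when.upper()
    if when == "S":
        return 1                     # one second
    if when == "M":
        return 60                    # one minute
    if when == "H":
        return 60 * 60               # one hour
    if when == "D" or when == "MIDNIGHT":
        return 60 * 60 * 24          # one day
    if when.startswith("W"):         # W0-W6; 0 is Monday
        return 60 * 60 * 24 * 7      # one week
    raise ValueError("Invalid rollover interval specified: %s" % when)

base_interval("h")
```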
# extMatch is a pattern for matching a datetime suffix in a file name.
# After custom naming, it is no longer guaranteed to be separated by
# periods from other parts of the filename.  The lookaround assertions
# (?<!\d) and (?!\d) ensure that the datetime suffix (which itself
# starts and ends with digits) is not preceded or followed by digits.
# This reduces the number of false matches and improves performance.
# multiply by units requested
# The following line added because the filename passed in could be a
# path object (see Issue #27493), but self.baseFilename will be a string
# If we are rolling over at midnight or weekly, then the interval is already known.
# What we need to figure out is WHEN the next interval is.  In other words,
# if you are rolling over at midnight, then your base interval is 1 day,
# but you want to start that one day clock at midnight, not now.  So, we
# have to fudge the rolloverAt value in order to trigger the first rollover
# at the right time.  After that, the regular interval will take care of
# the rest.  Note that this code doesn't care about leap seconds. :)
# This could be done with less code, but I wanted it to be clear
# r is the number of seconds left between now and the next rotation
# Rotate time is before the current time (for example when
# self.rotateAt is 13:45 and it is now 14:15), rotation is
# tomorrow.
# If we are rolling over on a certain day, add in the number of days until
# the next rollover, but offset by 1 since we just calculated the time
# until the next day starts.  There are three cases:
# Case 1) The day to rollover is today; in this case, do nothing
# Case 2) The day to rollover is further in the interval (i.e., today is
#         day 2 (Wednesday) and rollover is on day 6 (Sunday))
# Case 3) The day to rollover is behind us in the interval (i.e., today
#         is day 5 (Saturday) and rollover is on day 3 (Thursday))
# The calculations described in 2) and 3) above need to have a day added.
# This is because the above time calculation takes us to midnight on this
# day, i.e. the start of the next day.
# 0 is Monday
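The three cases can be sketched as a standalone helper that mirrors the arithmetic (not the handler's actual code):

```python
def days_to_wait(today, rollover_day):
    # 0 is Monday.  Returns the number of whole days to add on top of
    # the time already computed to reach the next midnight.
    if today == rollover_day:
        return 0                              # Case 1: do nothing
    if today < rollover_day:
        # Case 2: e.g. today is Wednesday (2), rollover on Sunday (6):
        # 6 - 2 - 1 = 3 days away, plus the day already consumed by
        # reaching the next midnight.
        return rollover_day - today
    # Case 3: e.g. today is Saturday (5), rollover on Thursday (3):
    # 6 - 5 + 3 = 4 days away, plus the consumed day.
    return 6 - today + rollover_day + 1

days_to_wait(2, 6)   # Wednesday -> Sunday
```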
# DST kicks in before next rollover, so we need to deduct an hour
# DST bows out before next rollover, so we need to add an hour
# See #89564: Never rollover anything other than regular files
# The file is not a regular file, so do not rollover, but do
# set the next rollover time to avoid repeated checks.
# Our files could be just about anything after custom naming,
# but they should contain the datetime suffix.
# Try to find the datetime suffix in the file name and verify
# that the file name can be generated by this handler.
# get the time that this sequence started at and make it a TimeTuple
# Already rolled over.
# Reduce the chance of race conditions by stat'ing by path only
# once and then fstat'ing our new fd if we opened a new log stream.
# See issue #14632: Thanks to John Mulligan for the problem report
# and patch.
# stat the file by path, checking for existence
# compare file system stat with that of our stream file handle
# we have an open file handle, clean it up
# See Issue #21742: _open() might fail.
# open a new file handle and get new stat info from that fd
# Exponential backoff parameters.
# Issue 19182
# Either retryTime is None, in which case this
# is the first time back after a disconnect, or
# we've waited long enough.
# next time, no delay before trying
#Creation failed, so set the retry time and return.
#self.sock can be None either because we haven't reached the retry
#time yet, or because we have reached the retry time and retried,
#but are still unable to connect.
# so we can call createSocket next time
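The backoff policy described above can be sketched separately from the socket code (`RetryPolicy` and its defaults are illustrative):

```python
class RetryPolicy:
    """Exponential backoff between reconnection attempts."""

    def __init__(self, start=1.0, factor=2.0, maximum=30.0):
        self.retryStart = start      # initial delay after a disconnect
        self.retryFactor = factor    # delay multiplier per failed attempt
        self.retryMax = maximum      # upper bound on the delay
        self.retryPeriod = None      # None: first failure after a disconnect

    def next_delay(self):
        # Either retryPeriod is None, in which case this is the first
        # time back after a disconnect, or we grow the previous delay,
        # capped at retryMax.
        if self.retryPeriod is None:
            self.retryPeriod = self.retryStart
        else:
            self.retryPeriod = min(self.retryPeriod * self.retryFactor,
                                   self.retryMax)
        return self.retryPeriod

    def reset(self):
        # connection succeeded: next time, no delay before trying
        self.retryPeriod = None
```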
# just to get traceback text into record.exc_text ...
# See issue #14436: If msg or args are objects, they may not be
# available on the receiving end. So we convert the msg % args
# to a string, save it as msg and zap the args.
# Issue #25685: delete 'message' if present: redundant with 'msg'
#try to reconnect next time
# from <linux/sys/syslog.h>:
# ======================================================================
# priorities/facilities are encoded into a single 32-bit quantity, where
# the bottom 3 bits are the priority (0-7) and the top 28 bits are the
# facility (0-big number). Both the priorities and the facilities map
# roughly one-to-one to strings in the syslogd(8) source code.  This
# mapping is included in this file.
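The encoding described above boils down to a shift and an OR. The facility values here are the unshifted codes used by Python's SysLogHandler (in C's syslog.h the facility constants are pre-shifted):

```python
LOG_ERR = 3    # priority: error conditions
LOG_USER = 1   # facility: random user-level messages (unshifted)

def encode_priority(facility, priority):
    # bottom 3 bits carry the priority (0-7); the remaining high bits
    # carry the facility
    return (facility << 3) | priority

pri = encode_priority(LOG_USER, LOG_ERR)
```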
# priorities (these are ordered)
# Originally added to work around GH-43683. Unnecessary since GH-50043 but kept
# for backwards compatibility.
# it worked, so set self.socktype to the used type
# user didn't specify falling back, so fail
# Syslog server may be unavailable during handler initialisation.
# C's openlog() function also ignores connection errors.
# Moreover, we ignore these errors while logging, so it's not worse
# to ignore it also here.
# prepended to all messages
# some old syslog daemons expect a NUL terminator
# We need to convert record level to lowercase, maybe this will
# change in the future.
# Message is a string. Convert to bytes as required by RFC 5424
# Administrative privileges are required to add a source to the registry.
# This may not be available for a user that just wants to add to an
# existing source - handle this specific case.
# This will probably be a pywintypes.error. Only raise if it's not
# an "access denied" error, else let it pass
# not access denied
#self._welu.RemoveSourceFromRegistry(self.appname, self.logtype)
# support multiple hosts on one IP address...
# need to strip optional :port from host, if present
# See issue #30904: putrequest call above already adds this header
# on Python 3.x.
# h.putheader("Host", host)
#can't do anything with the result
# See Issue #26559 for why this has been added
# will be set to listener if configured via dictConfig()
# The format operation gets traceback text into record.exc_text
# (if there's exception data), and also returns the formatted
# message. We can then use this to replace the original
# msg + args, as these might be unpickleable. We also zap the
# exc_info, exc_text and stack_info attributes, as they are no longer
# needed and, if not None, will typically not be pickleable.
# bpo-35726: make copy of record to avoid affecting other handlers in the chain.
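The preparation described above can be sketched as follows (`make_pickle` is a simplified stand-in, not the handler's exact method):

```python
import logging
import pickle

def make_pickle(record):
    """Turn a LogRecord into bytes safe to send over a socket."""
    if record.exc_info:
        # formatting gets traceback text into record.exc_text
        logging.Formatter().format(record)
    d = dict(record.__dict__)       # work on a copy, so other handlers
                                    # in the chain are unaffected
    d["msg"] = record.getMessage()  # pre-merge msg % args: the originals
    d["args"] = None                # may not be picklable remotely
    d["exc_info"] = None            # ditto for exception objects
    d.pop("message", None)          # redundant with 'msg' (issue #25685)
    return pickle.dumps(d, 1)

rec = logging.LogRecord("demo", logging.INFO, "demo.py", 1,
                        "value is %d", (7,), None)
payload = make_pickle(rec)
```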
# see gh-114706 - allow calling this more than once
# XXX TO DO:
# - popup menu
# - support partial or total redisplay
# - key bindings (instead of quick-n-dirty bindings on Canvas):
# - more doc strings
# - add icons for "file", "module", "class", "method"; better "python" icon
# - callback for selection???
# - multiple-item selection
# - tooltips
# - redo geometry without magic numbers
# - keep track of object ids to allow more careful cleaning
# - optimize tree redraw after expand of subnode
# Look for Icons subdirectory in the same directory as this module
# cache of PhotoImage instances for icons
# XXX could be more subtle
# XXX This hard-codes too many geometry constants!
# draw children
# _IsExpandable() was mistaken; that's allowed
# XXX This leaks bindings until canvas is deleted:
##stipple="gray50",     # XXX Seems broken in Tk 8.0.x
# XXX .lower(id) before Python 1.5.2
# padding carefully selected (on Windows) to match Entry widget:
# For the first row the dy value doesn't matter; just measure its
# size to get the value of the subsequent dy
# Example application
# XXX wish there was a "file" icon
# A canvas widget with scroll bars and some useful bindings
#if isinstance(master, Toplevel) or isinstance(master, Tk):
# htest #
# XXX would be nice to inherit from Delegator
# Could go away if inheriting from Delegator
# Perhaps rename to pushfilter()?
# XXX Perhaps should only support popfilter()?
# .pyw is for Windows; .pyi is for typing stub files.
# The extension order is needed for iomenu open/save dialogs.
# Fix for HiDPI screens on Windows.  CALL BEFORE ANY TK OPERATIONS!
# URL for arguments for the ...Awareness call below.
# https://msdn.microsoft.com/en-us/library/windows/desktop/dn280512(v=vs.85).aspx
# Called in pyshell and turtledemo.
# Int required.
# Method...Item and Class...Item use this.
# Normally pyshell.flist.open, but there is no pyshell.flist for htest.
# The browser depends on pyclbr and importlib which do not support .pyi files.
# Use list since values should already be sorted.
# If obj.name != key, it has already been suffixed.
# This class is also the base class for pathbrowser.PathBrowser.
# Init and close are inherited, other methods are overridden.
# PathBrowser.__init__ does not call __init__ below.
# create top
# place dialog below parent if running htest
# create scrolled canvas
# If a file is passed on the command line.
# Add nested objects for htest.
# If a file is passed on the command line, unittest fails.
# Default for testing; defaults to True in main() for running.
# python execution server on localhost loopback
# someday pass in host, port for remote debug capability
# In case IDLE started with -n.
# In case python started with -S.
# Override warnings module to write to warning_stream.  Initialize to send IDLE
# internal warnings to the console.  ScriptBinding.check_syntax() will
# temporarily redirect the stream to the shell window to display warnings when
# checking user's code.
# None, at least on Windows, if no console.
# if file (probably __stderr__) is invalid, skip warning.
# Patch linecache.checkcache():
#TODO: don't read/write this from/to .idlerc when testing
# whenever a file is changed, restore breakpoints
# possible due to update in restore_file_breaks
# only add if missing, i.e. do once
# update the subprocess debugger
# but debugger may not be active right now....
# XXX 13 Dec 2002 KBK Currently the file must be saved before it can
# this enables setting "BREAK" tags to be visible
# can happen if IDLE closes due to the .update() call
# XXX 13 Dec 2002 KBK Not used currently
# override FileList's class variable, instances return PyShellEditorWindow
# instead of EditorWindow when new edit windows are created.
# Don't remove shell color tags before "iomark"
# Temporarily monkey-patch the delegate's .insert() method to
# always use the "stdin" tag.  This is needed for undo-ing
# deletions to preserve the "stdin" tag, because UndoDelegator
# doesn't preserve tags for deleted text.
# See bpo-38141.
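The temporary monkey-patch described above can be sketched as follows. This is an illustrative stand-alone version, not IDLE's actual UndoDelegator API: `stdin_tagged_inserts` and `RecordingDelegator` are hypothetical names.

```python
import contextlib

@contextlib.contextmanager
def stdin_tagged_inserts(delegator, tag="stdin"):
    """Temporarily monkey-patch delegator.insert() so every insertion
    carries the given tag, preserving it across undo/redo of deletions.
    Hypothetical sketch of the technique, not IDLE's real API.
    """
    original = delegator.insert
    def insert(index, chars, tags=None):
        # Force the tag regardless of what the caller passed.
        original(index, chars, (tag,))
    delegator.insert = insert
    try:
        yield
    finally:
        delegator.insert = original

class RecordingDelegator:
    """Minimal stand-in for a text delegator, for demonstration only."""
    def __init__(self):
        self.calls = []
    def insert(self, index, chars, tags=None):
        self.calls.append((index, chars, tags))
```

Inside the `with` block every insert is tagged; outside it, the original behavior is restored.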
# Remove ' ='.
# gh-127060: Disable traceback colors
# Maybe IDLE is installed and is being accessed via sys.path,
# or maybe it's not installed and the idle.py script is being
# run from the IDLE source directory.
# GUI makes several attempts to acquire socket, listens for connection
# if PORT was 0, system will assign an 'ephemeral' port. Find it out:
# if PORT was not 0, probably working with a remote execution server
# To allow reconnection within the 2MSL wait (cf. Stevens TCP
# V1, 18.6),  set SO_REUSEADDR.  Note that this can be problematic
# on Windows since the implementation allows two active sockets on
# the same address!
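A minimal sketch of the listening-socket setup these comments describe: SO_REUSEADDR to allow rebinding within the 2MSL TIME_WAIT window, and port 0 to let the system assign an ephemeral port that is then read back. `make_listener` is a hypothetical helper name for illustration.

```python
import socket

def make_listener(port=0, host="127.0.0.1"):
    """Create a loopback listening socket that allows quick rebinding."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow reconnection while the old socket is still in TIME_WAIT.
    # Note: on Windows this can permit two active sockets on the
    # same address, as the comments above warn.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen(1)
    # If port was 0, the system assigned an ephemeral port; find it out.
    return s, s.getsockname()[1]
```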
#time.sleep(20) # test to simulate GUI not accepting connection
# Accept the connection from the Python execution server
# close only the subprocess debugger
# Only close subprocess debugger, don't unregister gui_adap!
# Kill subprocess, spawn a new one, accept connection.
# annotate restart in shell window and mark it
# restart subprocess debugger
# Restarted debugger connects to current instance of debug GUI
# reload remote debugger breakpoints for all PyShellEditWindows
# no socket
# process already terminated
# Issue 13506
# include Current Working Directory
# lost connection or subprocess terminated itself, restart
# [the KBI is from rpc.SocketIO.handle_EOF()]
# we received a response to the currently active seq number:
# shell may have closed
# Reschedule myself
# XXX Should GC the remote tree when closing the window
# at the moment, InteractiveInterpreter expects str
# InteractiveInterpreter.runsource() calls its runcode() method,
# which is overridden (see below)
#mark end of offending line
# Iterate over a list copy because the loop mutates the cache.
# The code better not raise an exception!
# Override classes
# Override menus
# Extend right-click context menu
# New classes
# initialized below
# indentwidth must be 8 when using tabs.  See note in EditorWindow:
# Changes when debug active
# page help() text to shell.
# import must be done here to capture i/o rebinding.
# XXX KBK 27Dec07 use text viewer someday, but must work w/o subproc
# millisec
# Insert UserInputTaggingDelegator at the top of the percolator,
# but make calls to text.insert() skip it.  This causes only insert
# events generated in Tcl/Tk to go through this delegator.
# Menu options "View Last Restart" and "Restart Shell" are disabled
# Should not be possible.
# No selection, do nothing.
# All we need is the variable
# Restore std streams
# Break cycles
# User code should use separate default Tk root window
# no nested mainloop to exit.
# nested mainloop()
# may be EOF if we quit our mainloop with Ctrl-C
# Active selection -- always use default binding
# exit the nested mainloop() in readline()
# Let the default binding (delete next char) take over
# Insert a linefeed without entering anything (still autoindented)
# Let the default binding (insert '\n') take over
# If some text is selected, recall the selection
# (but only if this is before the I/O mark)
# If we're strictly before the line containing iomark, recall
# the current line, less a leading prompt, less leading or
# trailing whitespace
# Check if there's a relevant stdin range -- if so, use it.
# Note: "stdin" blocks may include several successive statements,
# so look for "console" tags on the newline before each statement
# (and possibly on prompts).
# The following is needed to handle empty statements.
# No stdin mark -- just get the current line, less any prompt
# If we're between the beginning of the line and the iomark, i.e.
# in the prompt area, move to the end of the prompt
# If we're in the current input and there's only whitespace
# beyond the cursor, erase that whitespace first
# If we're in the current input before its last line,
# insert a newline right at the insert point
# We're in the last line; append a newline and submit it
# Break out of recursive mainloop()
# remove leading and trailing empty or whitespace lines
# replace orig base indentation with new indentation
# Strip off last newline and surrounding whitespace.
# (To allow you to hit return twice to end a statement.)
# -n mode only
###pass  # ### 11Aug07 KBK if we are expecting exceptions
# let's find out what they are and be specific.
# no selection, so the index 'sel.first' doesn't exist
# bool value
# process sys.argv and sys.path:
# check the IDLE settings configuration (but command line overrides)
# Setup root.  Don't break user code run in IDLE process.
# Don't change environment when testing.
# set application icon
# start editor and/or shell windows:
# filename is actually a directory; ignore it
# couldn't open shell
# On OSX: when the user has double-clicked on a file that causes
# IDLE to be launched the shell window will open just in front of
# the file she wants to see. Lower the interpreter window when
# there are open files.
# Handle remaining options. If any of these are set, enable_shell
# was set also, so shell must be true to reach here.
# If there is a shell window and no cmd or script in progress,
# check for problematic issues and print warning message(s) in
# the IDLE shell window; this is less intrusive than always
# opening a separate window.
# Warn if the "Prefer tabs when opening documents" system
# preference is set to "Always".
# keep IDLE running while files are open.
# Make sure turned off; see issue 18081
# Set True by test.test_idle.
# testing
# AutoComplete, fetch_encodings
# Calltip
# start_debugger
# remote_object_tree_item
# encoding
# multiple objects
# StackTreeItem
# Use tcl and, if startup fails, messagebox.
# Undo modifications of tkinter by idlelib imports; see bpo-25507.
# Avoid AttributeError if run again; see bpo-37038.
# In case subprocess started with -S (maybe in future).
# gh-121008: When testing IDLE, don't create a Tk object to avoid side
# effects such as installing a PyOS_InputHook hook.
# Thread shared globals: Establish a queue between a subthread (which handles
# the socket) and the main thread (which runs user code), plus global
# completion, exit and interruptible (the main thread) flags:
#time.sleep(15) # test subprocess not responding
# exiting but got an extra KBI? Try again!
# Issue 32207: calling handle_tk_events here adds spurious
# queue.Empty traceback to event handling exceptions.
# Link didn't work, print same exception to __stderr__
# A single request only
# 3.10+ hints are not directly accessible from python (#44026).
# found an exclude, break for: and delete tb[0]
# no excludes, have left RPC code, break while:
# exception was in IDLE internals, don't prune!
# see: bpo-26806
# mimic the original sys.setrecursionlimit()'s input handling
# add the delta to the default recursion limit, to compensate
# Pseudofiles for shell-remote communication (also used in pyshell)
# GH-78889: accessing unpickleable attributes freezes Shell.
# IDLE only needs methods; allow 'width' for possible use.
# import must be done here to capture i/o binding
# Keep a reference to stdin so that it won't try to exit IDLE if
# sys.stdin gets changed from within IDLE's shell. See issue17838.
# SystemExit called with an argument.
# Return to the interactive prompt.
# For testing, hook, viewer.
# For testing.
# Make sure turned off; see bpo-18081.
# Lives in PYTHON subprocess
# Lives in IDLE process
# Reason last statement is continued (or C_NONE if it's not).
# Find what looks like the start of a popular statement.
# Match blank line or non-indenting comment line.
# Match any flavor of string; the terminating quote is optional
# so that we're robust in the face of incomplete program text.
# Match a line that starts with something interesting;
# used to find the first item of a bracket structure.
# Match start of statements that should be followed by a dedent.
# Chew up non-special chars as quickly as possible.  If match is
# successful, m.end() less 1 is the index of the last boring char
# matched.  If match is unsuccessful, the string starts with an
# interesting char.
# Calling this triples access time; see bpo-32940
# ord('x')
# Map all ascii to 120 to avoid __missing__ call, then replace some.
# open brackets => '(';
# close brackets => ')'.
# Keep these.
# Peek back from the end for a good place to start,
# but don't try too often; pos will be left None, or
# bumped to a legitimate synch point.
# start of colon line (-1+1=0)
# Nothing looks like a block-opener, or stuff does
# but is_char_in_string keeps returning true; most likely
# we're in or near a giant string, the colorizer hasn't
# caught up enough to be helpful, or there simply *aren't*
# any interesting stmts.  In any of these cases we're
# going to have to parse the whole thing to be sure, so
# give it one last try from the start, but stop wasting
# time here regardless of the outcome.
# Peeking back worked; look forward until _synchre no longer
# matches.
# Map all uninteresting characters to "x", all open brackets
# to "(", all close brackets to ")", then collapse runs of
# uninteresting characters.  This can cut the number of chars
# by a factor of 10-40, and so greatly speed the following loop.
# Replacing x\n with \n would be incorrect because
# x may be preceded by a backslash.
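The run-collapsing step above can be sketched like this, operating on an already-translated program image (boring characters rendered as `'x'`). The function name is illustrative; note the asymmetry the comments warn about: `'\nx'` is safe to shrink, `'x\n'` is not.

```python
import re

def collapse_runs(translated):
    """Collapse runs of the filler char 'x'.  This can cut the number
    of chars by a factor of 10-40, speeding the scan that follows.
    """
    # Long runs of boring chars become a single 'x'.
    translated = re.sub(r'x+', 'x', translated)
    # A boring char right after a newline is droppable too ...
    translated = translated.replace('\nx', '\n')
    # ... but 'x\n' must stay: the 'x' may be preceded by a backslash,
    # and deleting it would fabricate an apparent line continuation.
    return translated
```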
# March over the squashed version of the program, accumulating
# the line numbers of non-continued stmts, and determining
# whether & why the last stmt is a continuation.
# level is nesting level; lno is line number
# cases are checked in decreasing order of frequency
# else we're in an unclosed bracket structure
# else the program is invalid, but we can't complain
# consume the string
# unterminated single-quoted string
# else comment char or paren inside string
# didn't break out of the loop, so we're still
# inside a string
# before the previous \n in code, we were in the first
# line of the string
# with outer loop
# consume the comment
# The last stmt may be continued for all 3 reasons.
# String continuation takes precedence over bracket
# continuation, which beats backslash continuation.
# Push the final line number as a sentinel value, regardless of
# whether it's continued.
# Set p and q to slice indices of last interesting stmt.
# Index of newest line.
# End of goodlines[i]
# Make p be the index of the stmt at line number goodlines[i].
# Move p back to the stmt at line number goodlines[i-1].
# tricky: sets p to 0 if no preceding newline
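The "tricky" idiom above relies on `str.rfind()` returning -1 when nothing is found, so that adding 1 yields 0, the start-of-buffer index:

```python
code = "first stmt\nsecond stmt"

# Move p back to the start of the statement's line.  rfind() returns
# -1 when no newline precedes the position, and -1 + 1 == 0, which is
# exactly the start-of-buffer index we want.
p = code.rfind('\n', 0, 5) + 1    # no newline before offset 5 -> 0
q = code.rfind('\n', 0, 15) + 1   # newline at offset 10 -> 11
```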
# The stmt code[p:q] isn't a continuation, but may be blank
# or a non-indenting comment line.
# nothing but junk!
# Analyze this stmt, to find the last open bracket (if any)
# and last interesting character (if any).
# stack of open bracket indices
# suck up all except ()[]{}'"#\\
# we skipped at least one boring char
# back up over totally boring whitespace
# index of last boring char
# consume string
# Note that study1 did this with a Python loop, but
# we use a regexp here; the reason is speed in both
# cases; the string may be huge, but study1 pre-squashed
# strings to a couple of characters per line.  study1
# also needed to keep track of newlines, and we don't
# have to.
# consume comment and trailing newline
# beyond backslash
# the program is invalid, but can't complain
# beyond escaped char
# end while p < q:
# one beyond open bracket
# find first list item; set i to start of its line
# index of first interesting char
# this line is junk; advance to next line
# nothing interesting follows the bracket;
# reproduce the bracket line's indentation + a level
# See whether the initial line starts an assignment stmt; i.e.,
# look for an = operator
# This line is unreachable because the # makes a comment of
# everything after it.
# found a legit =, but it may be the last interesting
# thing on the line
# move beyond the =
# oh well ... settle for moving beyond the first chunk
# of non-whitespace chars
# Used in _init_tk_type, changed by test.
## Define functions that query the Mac graphics type.
## _tk_type and its initializer are private to this section.
# When running IDLE, GUI is present, test/* may not be.
# When running tests, test/* is present, GUI may not be.
# If not, guess most common.  Does not matter for testing.
## Fix the menu and related functions.
# The command below is a hook in aquatk that is called whenever the app
# receives a file open event. The callback can have multiple arguments,
# one for every file that should be opened.
# Some versions of the Tk framework don't have a console object
# The menu that is attached to the Tk root (".") is also used by AquaTk for
# all windows that don't specify a menu of their own. The default menubar
# contains a number of menus, none of which are appropriate for IDLE. The
# most annoying of those is an 'About Tcl/Tk...' menu in the application
# menu.
# This function replaces the default menubar with a mostly empty one; it
# should only contain the correct application menu and the window menu.
# Due to a (mis-)feature of TkAqua the user will also see an empty Help menu.
# Remove the last 3 items of the file menu: a separator, close window and
# quit. Close window will be reinserted just above the save item, where
# it should be according to the HIG. Quit is in the application menu.
# Remove the 'About' entry from the help menu; it is in the application
# menu
# Remove the 'Configure Idle' entry from the options menu; it is in the
# application menu as 'Preferences'
# Synchronize with editor.EditorWindow.about_dialog.
# Synchronize with editor.EditorWindow.config_dialog.
# Ensure that the root object has an instance_dict attribute,
# mirrors code in EditorWindow (although that sets the attribute
# on an EditorWindow instance that is then passed as the first
# argument to ConfigDialog)
# Synchronize with editor.EditorWindow.help_dialog.
# The binding above doesn't reliably work on all versions of Tk
# on macOS. Adding command definition below does seem to do the
# right thing for now.
# for Carbon AquaTk, replace the default Tk apple menu
# replace default About dialog with About IDLE one
# replace default "Help" item in Help menu
# remove redundant "IDLE Help" from menu
# Modified keyword list is used in fetch_completions.
# In builtins.
# Context keywords.
# Two types of completions; defined here for autocomplete_w import below.
# Tuples passed to open_completions.
# Control-Space.
# '.' for attributes.
# '/' in quotes for file name.
# This string includes all chars that may be in an identifier.
# TODO Update this here and elsewhere.
# not in subprocess or no-gui test
# id of delayed call, and the index of the text insert when
# the delayed call was issued. If _delayed_completion_id is
# None, there is no delayed call.
# Makes mocking easier.
# A modifier was pressed along with the tab or
# there is only previous whitespace on this line, so tab.
# Cancel another delayed call, if it exists.
# Find the beginning of the string.
# fetch_completions will look at the file system to determine
# whether the string value constitutes an actual file name
# XXX could consider raw strings here and unescape the string
# value if it's not raw.
# Find last separator or string start
# Find string start
# Need object with attributes.
# Main module names.
# Cached values for maximized window dimensions, one for each set
# of screen dimensions.
# Can't zoom/restore window height for windows not in the 'normal'
# state, e.g. maximized and full-screen windows.
# Maximize the window's height.
# Restore the window's height.
# .wm_geometry('') makes the window revert to the size requested
# by the widgets it contains.
# Get window geometry info for maximized windows.
# The 'zoomed' state is not supported by some esoteric WMs,
# such as Xvfb.
# On Windows, the returned Y coordinate is the one before
# maximizing, so we use 0 which is correct unless a user puts
# their dock on the top of the screen (very rare).
# Get the "root y" coordinate for non-maximized windows with their
# y coordinate set to that of maximized windows.  This is needed
# to properly handle different title bar heights for non-maximized
# vs. maximized windows, as seen e.g. in Windows 10.
# Adjust the maximum window height to account for the different
# title bar heights of non-maximized vs. maximized windows.
# Add htest?
# Only find next match if replace succeeded.
# A bad re can cause it to fail.
# XXX ought to replace circular instead of top-to-bottom when wrapping
# mock undo delegator methods
# Open is inherited from SDBase.
# TODO - why is this here and not in a create_command_buttons?
# Sometimes, destroy() is called twice
# If this is Idle's last window then quit the mainloop
# (Needed for clean exit on Windows 98)
# Subclass can override
# This can happen when the Window menu was torn off.
# Simply ignore it.
# replace in subclasses
# not in Find in Files
# row 0 (and maybe 1), cols 0, 1
# next row, cols 0, 1
# col 2, all rows
# Look for start of next paragraph if the index passed in is a blank line
# Once start line found, search for end of paragraph (a blank line)
# Search back to beginning of paragraph (first blank line before)
# This should perhaps be replaced with textwrap.wrap
# XXX Should take double space after period (etc.) into account
# Can happen when line ends in whitespace
# XXX Should reformat remaining paragraphs as well
# Remove header from the comment lines
# Reformat to maxformatwidth chars or a 20 char width,
# whichever is greater.
# re-split and re-insert the comment header.
# If the block ends in a \n, we don't want the comment prefix
# inserted after it. (I'm not sure it makes sense to reformat a
# comment block that is not made of complete lines, but whatever!)
# Can't think of a clean solution, so we hack away
# Copied from editor.py; importing it would cause an import cycle.
# Try to prevent inconsistent indentation.
# User must change indent width manually after using tabs.
# "Strip Trailing Whitespace" on "Format" menu.
# Since text.delete() marks file as changed, even if not,
# only call it when needed to actually delete something.
# File ends with at least 1 newline;
# & is not Shell.
# Delete extra user endlines.
# Stop if file empty.
# Because tk indexes are slice indexes and never raise,
# a file with only newlines will be emptied.
# patchcheck.py does the same.
# The default tab setting for a Text widget, in average-width characters.
# TODO remove unneeded function since .chm no longer installed
# for file names
# Delay import: runscript imports pyshell imports EditorWindow.
# look for html docs in a couple of standard places
# "python2" rpm
# standard location
# Windows only, block only executed once.
# documentation may be stored inside a python framework
# Safari requires real file:-URLs
#self.top.instance_dict makes flist.inversedict available to
#configdialog.py so it can access all EditorWindow instances
# keys: Tkinter event names
# values: Tkinter variable instances
# Override in PyShell
# new in 8.5
# Command-W on editor windows doesn't work without this.
# Some OS X systems have only one mouse button, so use
# control-click for popup context menus there. For two
# buttons, AquaTk defines <2> as the right button, not <3>.
# Elsewhere, use right-click for popup menus.
# self.fregion used in smart_indent_event to access indent_region.
# usetabs true  -> literal tab characters are used by indent and
# Although use-spaces=0 can be configured manually in config-main.def,
# configuration of tabs v. spaces is not supported in the configuration
# dialog.  IDLE promotes the preferred Python indentation: use spaces!
# tabwidth is the display width of a literal tab character.
# CAUTION:  telling Tk to use anything other than its default
# tab setting causes it to use an entirely different tabbing algorithm,
# treating tab stops as fixed distances from the left margin.
# Nobody expects this, so for now tabwidth should never be changed.
# must remain 8 until Tk is fixed.
# indentwidth is the number of screen characters per indent level.
# The recommended Python indentation is four spaces.
# Store the current value of the insertofftime now so we can restore
# it if needed.
# When searching backwards for a reliable place to begin parsing,
# first start num_context_lines[0] lines back, then
# num_context_lines[1] lines back if that didn't work, and so on.
# The last value should be huge (larger than the # of lines in a
# conceivable file).
# Making the initial values larger slows things down more often.
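The widening backward search described above can be sketched as a loop over successively larger context windows, stopping at the first window that yields a synch point. The names here are hypothetical illustrations of the strategy, not editor.py's actual code.

```python
def find_parse_start(lines, looks_like_synch_point,
                     num_context_lines=(50, 500, 5000000)):
    """Search backwards for a reliable place to begin parsing: first
    within the last 50 lines, then 500, then effectively the whole
    file.  Returns the index of the first synch line found, or None.
    """
    for context in num_context_lines:
        start = max(0, len(lines) - context)
        # Scan the window from the end backwards.
        for i in range(len(lines) - 1, start - 1, -1):
            if looks_like_synch_point(lines[i]):
                return i
    return None
```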
# IOBinding implements file I/O and printing functionality
# initialized below in self.ResetColorizer
# optionally initialized later below
# Some abstractions so IDLE extensions are cross-IDE
# Add pseudoevents for former extension fixed keys.
# (This probably needs to be done once in the process.)
# Former extension bindings depend on frame.text being packed
# (called from self.ResetColorizer()).
#refresh-calltip must come after paren-closed to work right
# Divide the width of the Text widget by the font width,
# which is taken to be the width of '0' (zero).
# http://www.tcl.tk/man/tcl8.6/TkCmd/text.htm#M21
# state&4==Control. If <Control-Home>, use the Tk binding.
# In Shell on input line, go to just after prompt
# shift was not pressed
# there was no previous selection
# extend back
# extend forward
# Insert some padding to avoid obscuring some of the statusbar
# by the resize widget.
# Insert the application menu
# see issue1207589
# ("Label", "<<virtual-event>>", "statefuncname"), ...
# Example
# Synchronize with macosx.overrideRootMenu.about_dialog.
# Synchronize with macosx.overrideRootMenu.config_dialog.
# Synchronize with macosx.overrideRootMenu.help_dialog.
# There is no selection, so do nothing and maybe interrupt.
# no shift(==1) or control(==4) pressed
# XXX This, open_module_browser, and open_path_browser
# would fit better in iomenu.IOBinding.
# can add more colorizers here...
# Called from self.filename_change_hook and from configdialog.py
# error at line end
# Restore the original value
# Called from configdialog.py
# Update the code context widget first, since its height affects
# the height of the text widget.  This avoids double re-rendering.
# Next, update the line numbers widget, since its width affects
# the width of the text widget.
# Finally, update the main text widget.
# Called from configdialog.deactivate_current_config.
# Called from configdialog.activate_config_changes.
# Update menu accelerators.
# Skip empty menus
# First delete the extra help entries, if any.
# Then rebuild them.
# And update the menu dictionary.
# TODO: move to iomenu.
# move to top
# clean and save the recent files list
# for each edit window instance, construct the recent files menu
# clear, and rebuild:
# zap \n
# Don't use both values on macOS because
# that doesn't match platform conventions.
# Add a proxy icon to the window title
# Maintain the modification status for the window
# Geometry manager hasn't run yet
# bpo-35379: close called twice
# unless override: unregister from flist, terminate if last window
# Map built-in config-extension section names to file names.
# Create a Tkinter variable object.
# Tk implementations of "virtual text methods" -- each platform
# reusing IDLE's support code needs to define these for its GUI's
# flavor of widget.
# Is character at text_index in a Python string?  Return 0 for
# "guaranteed no", true for anything else.  This info is expensive
# to compute ab initio, but is probably already known by the
# platform's colorizer.
# Return true iff colorizer hasn't (re)gotten this far
# yet, or the character is tagged as being in a string
# The colorizer is missing: assume the worst
# If a selection is defined in the text widget, return (start,
# end) as Tkinter text indices, otherwise return (None, None)
# Return the text widget's current view of what a tab stop means
# (equivalent width in spaces).
# Set the text widget's current view of what a tab stop means.
# Set text widget tab width
### begin autoindent code ###
# Delete whitespace left, until hitting a real char or closest
# preceding virtual tab stop.
# easy: delete preceding newline
# at start of buffer
# easy: delete preceding real char
# Ick.  It may require *inserting* spaces if we back up over a
# tab character!  This is written to be clear, not fast.
# Debug prompt is multiline....
# if intraline selection:
# elif multiline selection:
# only whitespace to the left
# tab to the next 'stop' within or to right of line's text:
# Close undo block and expose new line in finally clause.
# Count leading whitespace for indent size.
# The cursor is in or at leading indentation in a continuation
# line; just inject an empty line at the start.
# Strip whitespace before insert point unless it's in the prompt.
# Strip whitespace after insert point.
# Insert new line.
# Adjust indentation for continuations and block open/close.
# First need to find the last statement.
# The current statement hasn't ended yet.
# After the first line of a string do not indent at all.
# Inside a string which started before this line;
# just mimic the current indent.
# Line up with the first (if any) element of the
# last open bracket structure; else indent one
# level beyond the indent of the line with the
# last open bracket.
# If more than one line in this statement already, just
# mimic the current indent; else if initial line
# has a start on an assignment stmt, indent to
# beyond leftmost =; else to beyond first chunk of
# non-whitespace on initial line.
# This line starts a brand new statement; indent relative to
# indentation of initial line of closest preceding
# interesting statement.
# Our editwin provides an is_char_in_string function that works
# with a Tk text index, but PyParse only knows about offsets into
# a string. This builds a function for PyParse that accepts an
# offset.
# XXX this isn't bound to anything -- see tabwidth comments
# Make string that displays as n leading blanks.
# Delete from beginning of line to insert point, then reinsert
# column logical (meaning use tabs if appropriate) spaces.
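Building a string that displays as n leading blanks, using tabs where appropriate, can be sketched with `divmod()`. The function name is illustrative, not necessarily editor.py's:

```python
def make_blanks(n, usetabs, tabwidth):
    """Return a string that displays as n leading blanks: literal tabs
    plus a remainder of spaces when usetabs is true, else all spaces.
    """
    if usetabs:
        ntabs, nspaces = divmod(n, tabwidth)
        return '\t' * ntabs + ' ' * nspaces
    return ' ' * n
```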
# Guess indentwidth from text content.
# Return guessed indentwidth.  This should not be believed unless
# it's in a reasonable range (e.g., it will be 0 if no indented
# blocks are found).
# "line.col" -> line, as an int
# Stopping the tokenizer early can trigger spurious errors.
### end autoindent code ###
# issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5
# if not keylist:
# Convert strings of the form -singlelowercase to -singleuppercase.
# Convert certain keynames to their symbol.
# Remove Key- from string.
# Convert Cancel to Ctrl-Break.
# dscherer@cmu.edu
# Convert Control to Ctrl-.
# Change - to +.
# Change >< to space.
# Remove <.
# Remove >.
# On Windows, tcl/tk breaks 'words' only on spaces, as in Command Prompt.
# We want Motif style everywhere. See #21474, msg218992 and followup.
# make sure word.tcl is loaded
# error if close master window first - timer event, after script
# text.bind("<<close-all-windows>>", edit.close_event)
# Does not stop error, neither does following
# edit.text.bind("<<close-window>>", edit.close_event)
# != after cursor move
# will be decremented
# will be incremented
# abort history_next
# abort history_prev
# avoid duplicates
# the event type constants, which define the meaning of mc_type
# the modifier state constants, which define the meaning of mc_state
# define the list of modifiers, to be used in complex event types.
# a dictionary to map a modifier name into its number
# In 3.4, if no shell window is ever open, the underlying Tk widget is
# destroyed before .__del__ methods here are called.  The following
# is used to selectively ignore shutdown exceptions to avoid
# 'Exception ignored' messages.  See http://bugs.python.org/issue20167
# A binder is a class which binds functions to one type of event. It has two
# methods: bind and unbind, which get a function and a parsed sequence, as
# returned by _parse_sequence(). There are two types of binders:
# _SimpleBinder handles event types with no modifiers and no detail.
# No Python functions are called when no events are bound.
# _ComplexBinder handles event types with modifiers and a detail.
# A Python function is called each time an event is generated.
# An int in range(1 << len(_modifiers)) represents a combination of modifiers
# (if the least significant bit is on, _modifiers[0] is on, and so on).
# _state_subsets gives for each combination of modifiers, or *state*,
# a list of the states which are a subset of it. This list is ordered by the
# number of modifiers in the state - the most specific state comes first.
# _state_codes gives for each state, the portable code to be passed as mc_state
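The modifier-subset ordering described above (every subset of a modifier bitmask, most specific first) can be sketched directly; `state_subsets` is an illustrative stand-in for multicall's private table builder:

```python
def state_subsets(state, nbits=4):
    """Return every subset of the modifier bitmask `state`, ordered by
    the number of modifiers, descending: the most specific state first.
    If the least significant bit is on, modifier 0 is on, and so on.
    """
    subsets = [s for s in range(1 << nbits) if (s & state) == s]
    subsets.sort(key=lambda s: bin(s).count('1'), reverse=True)
    return subsets
```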
# This class binds many functions, and only unbinds them when it is deleted.
# self.handlerids is the list of seqs and ids of bound handler functions.
# The bound functions sit in a dictionary of lists of lists, which maps
# a detail (or None) and a state into a list of functions.
# When a new detail is discovered, handlers for all the possible states
# are bound.
# Call all functions in doafterhandler and remove them from list
# we don't want to change the lists of functions while a handler is
# running - that would mess up the loop, and in any case we usually want
# the change to take effect from the next event. So we keep a list of
# functions for the handler to run after it finishes calling the bound
# functions. It calls them only once.
# ishandlerrunning is a list: empty means no, non-empty means yes.
# A list is used so that the flag is mutable.
# define the list of event types to be handled by MultiEvent. the order is
# compatible with the definition of event type constants.
# which binder should be used for every event type?
# A dictionary to map a type name into its number
# _ComplexBinder
# a dictionary which maps a virtual event to a tuple with:
#print("bind(%s, %s, %s)" % (sequence, func, add),
#print("event_add(%s, %s)" % (repr(virtual), repr(sequences)),
#print("Tkinter event_add(%s)" % seq, file=sys.__stderr__)
#print("Tkinter event_delete: %s" % seq, file=sys.__stderr__)
# Reload changed options in the following classes.
# Each theme element key is its display name.
# The first value of the tuple is the sample area tag name.
# The second value is the display name list sort index.
# XXX Decide whether to keep or delete these key bindings.
# Key bindings for this dialog.
# self.bind('<Escape>', self.Cancel) #dismiss dialog, no save
# self.bind('<Alt-a>', self.Apply) #apply changes, save
# self.bind('<F1>', self.Help) #context help
# Attach callbacks after loading config to avoid calling them.
# Changing the default padding on OSX results in unreadable
# text in the buttons.
# Add space above buttons.
# class TabPage(Frame):  # A template for Page classes.
# Define frames and widgets.
# frame_font.
# frame_sample.
# Grid and pack widgets:
# Set sorted no-duplicate editor font selection list and font_name.
# Set font size dropdown.
# Set font weight.
# Display-name: internal-config-tag-name.
# Create widgets:
# body frame and section frames.
# frame_custom.
# Prevent perhaps invisible selection of word or slice.
# event.widget.winfo_top_level().highlight_target.set(elem)
#, command=self.set_highlight_targetBinding
# frame_theme.
# Pack widgets:
# body.
# Set current theme type radiobutton.
# Set current theme.
# Load available theme option menus.
# Default theme selected.
# User theme selected.
# Load theme element option menu.
# User didn't cancel and they chose a new color.
# Current theme is a built-in.
# User cancelled custom theme creation.
# Create new custom theme based on previously active theme.
# Current theme is user defined.
# Apply any of the old theme's unsaved changes to the new theme.
# Save the new theme.
# Change GUI over to the new theme.
# bg not possible
# Both fg and bg can be set.
# Set the color sample area.
# Default theme
# User theme
# Cursor sample needs special painting.
# Handle any unsaved changes to this theme.
# Make testing easier.  Could change implementation.
# Remove theme from changes, config, and file.
# Reload user theme list.
# Revert to default theme.
# User can't back out of these changes, they must be applied now.
# body and section frames.
# frame_key_sets.
# frame_target.
# Set current keys type radiobutton.
# Set current keys.
# Load available keyset option menus.
# User key set selected.
# Load keyset element list.
# Event is an extension binding.
# unsaved changes
# Current key set is a built-in.
# User cancelled custom key set creation.
# Create new custom key set based on previously active key set.
# Add key set to changed items.
# Trim off the angle brackets.
# Handle any unsaved changes to prev key set.
# Save the new key set.
# Change GUI over to the new key set.
# 'set' is dict mapping virtual event to list of key events.
# Trim double angle brackets.
# Handle any unsaved changes to this key set.
# Remove key set from changes, config, and file.
# Reload user key set list.
# Revert to default key set.
# Integer values need StringVar because int('') raises.
# frame_run.
# frame_win_size.
# frame_cursor.
# frame_autocomplete.
# frame_paren.
# frame_format.
# Set variables for all windows.
# Frame_shell.
# Frame_editor.
# Spacer -- better solution?
# frame_auto_squeeze_min_lines
# frame_save.
# frame_line_numbers_default.
# frame_context.
# Set variables for shell windows.
# Set variables for editor windows.
# Requires extension names.
# TEMPORARY
# Create the frame holding controls for each extension.
# Spacer.  Replace with config?
# Former built-in extensions are already filtered out.
# Bring 'enable' options to the beginning of the list.
# Need this until .Get fixed.
# Bad values overwritten by entry.
# Create an entry for each configuration option.
# Create a row with a label and entry/checkbutton.
# type == 'str'
# Limit size to fit non-expanding space with larger font.
# if self.defaultCfg.has_section(section):
# Currently, always true; if not, indent to return.
# Set the option.
# self = frame_help in dialog (until ExtPage class).
# Pack frame_help.
# No entries in list.
# Some entries.
# There currently is a selection.
# There currently is not a selection.
# Selected will be un-selected
# Set additional help sources.
# Call after all tests in a module to avoid memory leaks.
# Create a canvas object and a vertical scrollbar for scrolling it.
# Reset the view.
# Create a frame inside the canvas which will be scrolled with it.
# Track changes to the canvas and frame width and sync them,
# also updating the scrollbar.
# Update the scrollbars to match the size of the inner frame.
# Update the inner frame's width to fill the canvas.
# object browser
# - for classes/modules, add "open source" to object browser
# TODO return sorted(self.object)
# We want the restore event to be called before the usual return and
# backspace events.
# Bind the check-restore event to the function restore_event,
# so that we can then use activate_restore (which calls event_add)
# and deactivate_restore (which calls event_delete).
# If user bound non-closer to <<paren-closed>>, quit.
# Allow calltips to see ')'
# self.create_tag(indices)
# self.set_timeout()
# disable the last timer, if there is one.
# any one of the create_tag_XXX methods can be used depending on
# the style
# any one of the set_timeout_XXX methods can be used depending on
# After CHECK_DELAY, call a function which disables the "paren" tag
# if the event is for the most recent timer and the insert has changed,
# or schedules another call for itself.
# associate a counter with an event; only disable the "paren"
# tag if the event is for the most recent timer.
# An instance of Debugger or proxy thereof.
# When closing debugger window with [x] in 3.x
# Skip this frame.
# catch both idlelib/debugger.py and idlelib/debugger_r.py
# on both Posix and Windows
# If passed, a proxy of remote instance.
# Deal with the scenario where we've already got a program running
# in the debugger and we want to start another. If that is the case,
# our second 'run' was invoked from an event dispatched not from
# the main event loop, but from the nested event loop in 'interaction'
# below. So our stack looks something like this:
# This kind of nesting of event loops causes all kinds of problems
# (see e.g. issue #24455) especially when dealing with running as a
# subprocess, where there's all kinds of extra stuff happening in
# there - insert a traceback.print_stack() to check it out.
# By this point, we've already called restart_subprocess() in
# ScriptBinding. However, we also need to unwind the stack back to
# that outer event loop.  To accomplish this, we:
# That leaves us back at the outer main event loop, at which point our
# after event can fire, and we'll come back to this routine with a
# clean stack.
# Clean up pyshell if user clicked debugger control close widget.
# (Causes a harmless extra cycle through close_debugger() if user
# toggled debugger from pyshell Debug menu)
# Now close the debugger control window....
# TODO redo entire section, tries not needed.
# Nested main loop: Tkinter's main loop is not reentrant, so use
# Tcl's vwait facility, which reenters the event loop until an
# event handler sets the variable we're waiting on.
# lineno is stackitem[1]
# At least with the stock AquaTk version on OS X 10.4, you'll
# get a shaking GUI that eventually kills IDLE if the width
# argument is specified.
# XXX odict never passed.
# XXX 20 == observed height of Entry widget
# Needed for initial comparison below.
#names = sorted(dict)
# Because of (temporary) limitations on the dict_keys type (not yet
# public or pickleable), have the subprocess send a list of
# keys, not a dict_keys object.  sorted() will take a dict_keys
# (no subprocess) or a list.
# There is also an obscure bug in sorted(dict) where the
# interpreter gets into a loop requesting non-existing dict[0],
# dict[1], dict[2], etc from the debugger_r.DictProxy.
# TODO recheck above; see debugger_r 159ff, debugobj 60.
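The picklability limitation behind this workaround is easy to demonstrate; sending a plain list sidesteps it, and `sorted()` accepts either form:

```python
import pickle

d = {'b': 1, 'a': 2}
try:
    pickle.dumps(d.keys())  # dict_keys still cannot be pickled
except TypeError:
    pass
payload = pickle.dumps(list(d.keys()))  # so send a plain list instead
assert sorted(d.keys()) == sorted(pickle.loads(payload)) == ['a', 'b']
```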
# repr(value)
# Strip extra quotes caused by calling repr on the (already)
# repr'd value sent across the RPC interface:
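A minimal illustration of the double-repr effect and the quote-stripping it requires (simplistic; a real implementation must also cope with escape sequences):

```python
value = repr(repr('spam'))  # the value arrives repr'd twice over RPC
# Strip the outer quotes added by the second repr.
if value and value[0] == value[-1] and value[0] in ('"', "'"):
    value = value[1:-1]
assert value == "'spam'"
```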
# XXX Could we use a <Configure> callback for the following?
# Alas!
# TODO: htest?
# Enable running IDLE with idlelib in a non-standard location.
# This was once used to run development versions of IDLE.
# Because PEP 434 declared idle.py a public interface,
# removal should require deprecation.
# This is subject to change
# Warn we cycled around
# search backwards through words before
# search onwards through words after
# Provide instance variables referenced by debugger
# XXX This should be done differently
# cli_args is list of strings that extends sys.argv
# Workaround for macOS 11 Uni2; see bpo-42508.
# XXX: tabnanny should work on binary files as well
# The error messages from tabnanny are too confusing...
# If successful, return the compiled code
# User cancelled.
# XXX KBK 03Jul04 When run w/o subprocess, runtime warnings still
# XXX This should really be a function of EditorWindow...
# vertical scrollbar
# horizontal scrollbar - only when wrap is set to NONE
# Place dialog below parent if running htest.
# We need to bind events beyond <Key> so that the function will be called
# before the default event-specific IDLE function.
# The widget (Text) on which we place the AutoCompleteWindow
# Tags to mark inserted text with
# The widgets we create
# The default foreground and background of a selection. Saved because
# they are changed to the regular colors of list items when the
# completion start is not a prefix of the selected completion
# The list of completions
# A list with more completions, or None
# The completion mode, either autocomplete.ATTRS or .FILES.
# The current completion start, on the text box (a string)
# The index of the start of the completion
# The last typed start, used so that when the selection changes,
# the new start will be as close as possible to the last typed one.
# Do we have an indication that the user wants the completion window
# (for example, a click on the list)?
# event ids
# Flag set if last keypress was a tab
# Flag set to avoid recursive <Configure> callback invocations.
# There is not even one completion of which s is a prefix.
# Find the end of the range of completions of which s is a prefix.
# only one possible completion
# We should return the longest common prefix of first and last.
# start is a prefix of the selected completion
# If there are more completions, show them, and call me again.
# Handle the start we already have
# There is exactly one matching completion
# Prevent grabbing focus on macOS.
#acw.update_idletasks() # Need for tk8.6.8 on macOS: #40128.
# work around bug in Tk 8.5.18+ (issue #24570)
# Initialize the listbox selection
# bind events
# Avoid running on recursive <Configure> callback invocations.
# Since the <Configure> event may occur after the completion window is gone,
# catch potential TclError exceptions when accessing acw.  See: bpo-41611.
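The recursion-guard flag mentioned above is a general pattern; here is a hedged sketch as a decorator (illustrative only, not acw's actual code):

```python
import functools

def non_reentrant(func):
    """Ignore calls made while a previous call is still running, e.g. a
    <Configure> handler whose own work triggers more <Configure> events."""
    running = False

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal running
        if running:
            return None  # recursive invocation: do nothing
        running = True
        try:
            return func(*args, **kwargs)
        finally:
            running = False
    return wrapper
```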
# Position the completion list window
# On Windows an update() call is needed for the completion
# list window to be created, so that we can fetch its width
# and height.  However, this is not needed on other platforms
# (tested on Ubuntu and macOS) but at one point began
# causing freezes on macOS.  See issues 37849 and 41611.
# enough height below
# not enough height above
# place acw below current line
# place acw above current line
# See issue 15786.  On Windows, Tk misbehaves and calls winconfig_event
# multiple times; we need to prevent this, otherwise double-clicking
# with the mouse will not work.
# See issue 734176: when the user clicks on a menu, acw.focus_get()
# raises KeyError.
# Hide the autocomplete list if it exists and neither it nor the
# widget/text area has focus after a mouse click.
# On Windows, the acw.focus_get() check must be delayed after a click
# on acw; otherwise it returns None and the window is closed.
# The ButtonPress event is bound only to self.widget.
# Put the selected completion in the text, and close the list
# Normal editing of text
# keysym == "BackSpace"
# If start is a prefix of the selection, but is not '' when
# completing file names, insert the whole selected completion.
# Either way, close the list.
# Move the selection in the listbox
# two tabs in a row; insert current selection and close acw
# first tab; let AutoComplete handle the completion
# A modifier key, so ignore
# Regular character with a non-length-1 keycode
# Unknown event, close the window and let it through.
# If we didn't catch an event which moved the insert, close window
# The selection doesn't change.
# unbind events
# Re-focusOn frame.text (See issue #15786)
# destroy widgets
# TODO: autocomplete/w htest here
# + StringVar, Button
#Set the default value
# Only module without unittests because of intention to replace.
# TODOs added Oct 2014, tjr
# This is currently '' when testing.
# TODO Use default as fallback, at least if not None
# Should also print Warning(file, section, option).
# Currently may raise ValueError
#return a default value
# TODO use to select userCfg vs defaultCfg
# See https://bugs.python.org/issue4630#msg356516 for following.
# self.blink_off_time = <first editor text>['insertofftime']
# expanduser() found user home dir
# still no path to home!
# traditionally IDLE has defaulted to os.getcwd(), is this adequate?
# TODO continue without userDIr instead of exit
#returning default, print warning
# Provide foreground and background colors for each theme
# element (other than cursor) even though some values are not
# yet used by idle, to allow for their use in the future.
# Default values are generally black and white.
# TODO copy theme from a class attribute.
#cursor (only foreground can be set)
#shell window
# Skip warning for new elements.
# Print warning that will return a default color
#user has added own extension
# specific exclusions because we are storing config for mainlined old
# extensions in config-extensions.def for backward compatibility
#the extension is enabled
# TODO both True contradict
# TODO return here?
#add the non-configurable bindings
#trim off the angle brackets
# macOS (OS X) Tk variants do not support the "Alt"
# keyboard modifier.  Replace it with "Option".
# TODO (Ned?): the "Option" modifier does not work properly
#the extension defines keybindings
#the binding is already in use
#disable this binding
#add binding
# TODO make keyBindings a file or class attribute used for test above
# and copied in function below.
# TODO: = dict(sorted([(v-event, keys), ...]))?
# virtual-event: list of key events.
# Otherwise return default in keyBindings.
#malformed config entry with no ';'
#make these empty
#so value won't be added to list
#config entry contains ';' as expected
#neither are empty strings
# if font in pixels, ignore actual size
#same keys
# List of unhashable dicts.
# Make sure we use a string.
# The setting equals a default setting, remove it from user cfg.
# If we got here, set the option.
# Remove it for replacement.
# Save these even if unchanged!
# ConfigDialog caller must add the following call
# self.save_all_changed_extensions()  # Uses a different mechanism.
# TODO Revise test output, write expanded unittest
# htest # (not really, but ignore in coverage)
#print('***', line, crc, '***')  # Uncomment for diagnosis.
# Cfg has variable '0xnnnnnnnn' address.
# Run revised _dump() (700+ lines) as htest?  More sorting.
# Perhaps as window with tabs for textviews, making it config viewer.
# underscore prefixes character to underscore
# Currently always true in Shell.
# Process the normal chars up to tab or newline.
# Deal with tab or newline.
# Avoid the `current_column == 0` edge-case, and while we're
# at it, don't bother adding 0.
# If the current column was exactly linewidth, divmod
# would give (1,0), even though a new line hadn't yet
# been started. The same is true if length is any exact
# multiple of linewidth. Therefore, subtract 1 before
# dividing a non-empty line.
# If a tab passes the end of the line, consider the entire
# tab as being on the next line.
# After the tab or newline.
# Process remaining chars (no more tabs or newlines).
# Avoid divmod(-1, linewidth).
# Text ended with newline; don't count an extra line after it.
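Ignoring tabs for simplicity, the divmod edge-cases described above reduce to this sketch (`count_display_lines` is a hypothetical name, not Squeezer's method):

```python
def count_display_lines(text, linewidth=80):
    """Count the display lines of text when wrapped at linewidth columns."""
    lines = 0
    for logical in text.split('\n'):
        n = len(logical)
        # Subtract 1 before dividing a non-empty line, so that a line of
        # exactly linewidth chars counts as one display line, not two.
        lines += 1 + (n - 1) // linewidth if n else 1
    # split('\n') yields a trailing '' when text ends with a newline;
    # don't count an extra line after it.
    if text.endswith('\n'):
        lines -= 1
    return lines
```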
# The base Text widget is needed to change text before iomark.
# AquaTk defines <2> as the right button, not <3>.
# X windows only.
# Item structure: (label, method_name).
# Get the base Text widget of the PyShell object, used to change
# text before the iomark. PyShell deliberately disables changing
# text before the iomark via its 'text' attribute, which is
# actually a wrapper for the actual Text widget. Squeezer,
# however, needs to make such changes.
# Twice the text widget's border width and internal padding;
# pre-calculated here for the get_line_width() method.
# Replace the PyShell instance's write method with a wrapper,
# which inserts an ExpandingButton instead of a long text.
# Only auto-squeeze text which has just the "stdout" tag.
# Only auto-squeeze text with at least the minimum
# configured number of lines.
# First, a very quick check to skip very short texts.
# Now the full line-count check.
# Create an ExpandingButton instance.
# Insert the ExpandingButton into the Text widget.
# Add the ExpandingButton to the Squeezer's list.
# Set tag_name to the first valid tag found on the "insert" cursor.
# The insert cursor doesn't have a "stdout" or "stderr" tag.
# Find the range to squeeze.
# If the last char is a newline, remove it from the range.
# Delete the text.
# Prepare an ExpandingButton.
# insert the ExpandingButton to the Text
# Insert the ExpandingButton to the list of ExpandingButtons,
# while keeping the list ordered according to the position of
# the buttons in the Text widget.
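Keeping the list ordered by position is a standard bisect insertion; in this sketch, (line, column) tuples stand in for Text widget indices, and `insert_ordered` is an illustrative helper, not Squeezer's code:

```python
import bisect

def insert_ordered(buttons, button, key):
    """Insert button into buttons, keeping them sorted by key(button)."""
    positions = [key(b) for b in buttons]
    buttons.insert(bisect.bisect(positions, key(button)), button)
```

For example, inserting a button at line 3 between buttons at lines 1 and 5 keeps the list in widget order.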
# Add htest.
# order of patterns matters
# Win filename, maybe starting with spaces
# filename or path, ltrim
# Win abs path with embedded spaces, ltrim
# Our own right-button menu
# Customize EditorWindow
# Act as output file
# Try the previous line.  This is handy e.g. in tracebacks,
# where you tend to right-click on the displayed source line
# These classes are currently not used but might come in handy
# XXX Should use IdlePrefs.ColorPrefs
# Extend the format menu.
# show no border on the top level window
# This command is only needed and available on Tk >= 8.4.0 for OSX.
# Without it, call tips intrude on the typing process by grabbing
# the focus.
# Needed on MacOS -- see #34275.
# The tip window must be completely outside the anchor widget;
# otherwise when the mouse enters the tip window we get
# a leave event and it disappears, and then we get an enter
# event and it reappears, and so on forever :-(
# Note: This is a simplistic implementation; sub-classes will likely
# want to override this.
# See ToolTip for an example
# Note: This is called by __del__, so be careful when overriding/extending.
# Dialog title for invalid key sequence
# Set self.modifiers, self.modifier_label.
# Make testing easier.  Replace in #30751.
# Basic entry key sequence.
# Basic entry controls.
# Basic entry modifiers.
# Basic entry help text.
# Basic entry key list.
# Advanced entry key sequence.
# Advanced entry help text.
# Switch between basic and advanced.
# Short name.
# Hide while setting geometry.
# Needed for winfo_reqwidth().
# Center dialog over parent (or below htest box).
# Geometry set, unhide.
# IDLE passes 'None' to select pickle.DEFAULT_PROTOCOL.
#----------------- end class RPCServer --------------------
# XXX Check for other types -- not currently needed
# this thread does all reading of requests or responses
# wait for notification from socket handling thread
# meaning: 0 => reading count; 1 => reading data
# send queued response if there is one available
# poll for message on link
# socket not ready
# process or queue a request
# don't acknowledge the 'queue' request!
# return if completed message transaction
# must be a response for a different thread:
# response involving unknown sequence number is discarded,
# probably intended for prior incarnation of server
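The two reading states ("reading count", then "reading data") correspond to simple length-prefixed framing. This sketch uses a 4-byte big-endian prefix for illustration; it is not rpc.py's actual wire format:

```python
import struct

def pack_message(payload: bytes) -> bytes:
    """Prefix payload with a 4-byte big-endian length (the 'count')."""
    return struct.pack('>I', len(payload)) + payload

def unpack_messages(buffer: bytes):
    """Yield complete messages; a partial trailing message is left unread."""
    while len(buffer) >= 4:
        (count,) = struct.unpack('>I', buffer[:4])
        if len(buffer) < 4 + count:
            break  # still reading data: wait for more bytes
        yield buffer[4:4 + count]
        buffer = buffer[4 + count:]
```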
# call our (possibly overridden) exit function
#----------------- end class SocketIO --------------------
# Token mix-in class
# Server
## cgt xxx
# Client
# Requests coming from the client are odd numbered
# Helper to get a list of methods from an object
# Adds names to dictionary argument 'methods'
# XXX KBK 09Sep03  We need a proper unit test for this module.  Previously
# Set '_' to None to avoid recursion
# Use ASCII while the utf8-bmp codec is not available.
# Query and Section name result from splitting GetCfgSectionNameDialog
# of configSectionNameDialog.py (temporarily config_sec.py) into
# generic and specific parts.  3.6 only, July 2016.
# ModuleName.entry_ok came from editor.EditorWindow.load_module.
# HelpSource was extracted from configHelpSourceEdit.py (temporarily
# config_help.py), with darwin code moved from ok to path_ok.
# Platform is set for one test.
# Needed for Font call.
# Hide while configuring, especially geometry.
# Otherwise fail when directly run unittest.
# Need here for winfo_reqwidth below.
# Unhide now that geometry set.
# Do not replace.
# Bind to self the widgets needed for entry_ok or unittest.
# Display or blank error by setting ['text'] =.
# Override to add widgets.
#self.bell(displayof=self)
# Example: usually replace.
# [Ok] moves focus.  (<Return> does not.)  Move it back.
# Used in ConfigDialog.GetNewKeysName, .GetNewThemeName (837)
# Used in open_module (editor.EditorWindow until move to iobinding).
# XXX Ought to insert current file's directory in front of path.
# Some special modules require this (e.g. os.path)
# Used in editor.EditorWindow.goto_line_event.
# Used in ConfigDialog.HelpListItemAdd/Edit, (941/9)
# Extracted from browse_file so can mock for unittests.
# Cannot unittest as cannot simulate button clicks.
# Test by running htest, such as by running this file.
# localize for test override
#no path specified
# for Mac Safari
# Used in runscript.run_custom_event
# N.B. this import overridden in PyShellFileList.
# For EditorWindow.getrawvar (shared Tcl variables)
# This can happen when bad filename is passed on command line:
# Don't create window, perform 'action', e.g. open in same window
# TODO check and convert to htest
## About IDLE ##
## IDLE Help ##
# Text widget we're rendering into.
# Current block level text tags to apply.
# Current character level text tags.
# Exclude html header links.
# Track indentation level.
# Displaying preformatted text?
# Heading prefix (like '25.5'?) to remove.
# In a nested <dl>?
# In a simple list (no double spacing)?
# Pair headers with text indexes for toc.
# Text within header tags for toc.
# Previous tag info (opener?, tag).
# Begin a new block for <p> tags after a closed tag.
# Avoid extra lines, e.g. after <pre> tags.
# Avoid extra line.
# Lines average 4/3 of editor line height.
# Only expand the text widget.
# Try copy_strip, present message.
# all ASCII chars that may be in an identifier
# all ASCII chars that may be the first char of an identifier
# lookup table for whether 7-bit ASCII chars are valid in a Python identifier
# lookup table for whether 7-bit ASCII chars are valid as the first
# char in a Python identifier
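The lookup tables can be built like this (essentially what the comments describe; the names are illustrative):

```python
import string

# All ASCII chars that may appear in an identifier, and the subset
# that is valid as the first char.
ID_CHARS = string.ascii_letters + string.digits + '_'
ID_FIRST_CHARS = string.ascii_letters + '_'

# Lookup tables indexed by 7-bit character code.
IS_ID_CHAR = [chr(x) in ID_CHARS for x in range(128)]
IS_ID_FIRST_CHAR = [chr(x) in ID_FIRST_CHARS for x in range(128)]
```

Indexing a list by character code is faster than a set lookup in the common all-ASCII case, which is why the tables are precomputed.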
# We add the newline because PyParse requires a newline at end.
# We add a space so that index won't be at the end of the line,
# so that its status will be the same as that of the char before it.
# We add the newline because PyParse requires it. We add a space
# so that index won't be at the end of the line, so that its
# status will be the same as that of the char before it.
# We want what the parser has, minus the last newline and space.
# Parser.code apparently preserves the statement we are in, so
# that stopatindex can be used to synchronize the string with
# the text box indices.
# find which pairs of bracketing are openers. These always
# correspond to a character of rawtext.
# find the rightmost bracket to which index belongs
# The bracket to which we belong should be an opener.
# If it's an opener, it has to have a character.
# We are after a real char, so it is a ')' and we give the
# index before it.
# the set of built-in identifiers which are also keywords,
# i.e. keyword.iskeyword() returns True for them
# Start at the end (pos) and work backwards.
# Go backwards as long as the characters are valid ASCII
# identifier characters. This is an optimization, since it
# is faster in the common case where most of the characters
# are ASCII.
# If the above loop ended due to reaching a non-ASCII
# character, continue going backwards using the most generic
# test for whether a string contains only valid identifier
# characters.
# The identifier candidate starts here. If it isn't a valid
# identifier, don't eat anything. At this point that is only
# possible if the first character isn't a valid first
# character for an identifier.
# All characters in str[i:pos] are valid ASCII identifier
# characters, so it is enough to check that the first is
# valid as the first character of an identifier.
# All keywords are valid identifiers, but should not be
# considered identifiers here, except for True, False and None.
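Put together, the backward scan these comments describe might look like this sketch (`preceding_identifier` is a hypothetical helper, not hyperparser's API):

```python
import keyword

def preceding_identifier(s, pos):
    """Return the identifier ending just before s[pos], or ''."""
    i = pos
    # Fast path: back up over ASCII identifier characters.
    while i > 0 and s[i-1].isascii() and (s[i-1].isalnum() or s[i-1] == '_'):
        i -= 1
    # Generic path: keep going while the prefix is still an identifier
    # (handles non-ASCII identifier characters).
    while i > 0 and (s[i-1] + s[i:pos]).isidentifier():
        i -= 1
    candidate = s[i:pos]
    if not candidate.isidentifier():
        return ''  # not a valid identifier: don't eat anything
    if keyword.iskeyword(candidate) and candidate not in ('True', 'False', 'None'):
        return ''  # keywords are not identifiers here
    return candidate
```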
# This string includes all chars that may occur in whitespace.
# Eat whitespace, comments, and, if postdot_phase is False, a dot.
# Eat whitespace.
# Eat a dot
# The next line will fail if we are *inside* a comment,
# but we shouldn't be.
# Eat a comment
# If we didn't eat anything, quit.
# We didn't find a dot, so the expression ends at the
# last identifier position.
# There is an identifier to eat
# Now, to continue the search, we must find a dot.
# (the loop continues now)
# We are at a bracketing limit. If it is a closing
# bracket, eat the bracket, otherwise, stop the search.
# We were not at the end of a closing bracket
# [] and () may be used after an identifier, so we
# continue. postdot_phase is True, so we don't allow a dot.
# We can't continue after other types of brackets
# Scan a string prefix
# We've found an operator or something.
#=======================================
# In the PYTHON subprocess:
# calls rpc.SocketIO.remotecall() via run.MyHandler instance
# pass frame and traceback object IDs instead of the objects themselves
#----------called by an IdbProxy----------
#----------called by a FrameProxy----------
#----------called by a CodeProxy----------
#----------called by a DictProxy----------
#### Needed until dict_keys type is finished and pickleable.
# xxx finished. pickleable?
### Will probably need to extend rpc.py:SocketIO._proxify at that time.
# Can't pickle module 'builtins'.
#----------end class IdbAdapter----------
# In the IDLE process:
### 'temporary' until dict_keys is a pickleable built-in type
##print("*** Failed DictProxy.__getattr__:", name)
##print("*** Interaction: (%s, %s, %s)" % (message, fid, modified_info))
##print("*** IdbProxy.call %s %s %s" % (methodname, args, kwargs))
##print("*** IdbProxy.call %s returns %r" % (methodname, value))
# Ignores locals on purpose!
# passing frame and traceback IDs, not the objects themselves
# loadfile encoding.
# One instance per editor Window so methods know which to save, close.
# Open returns focus to self.editwin if aborted.
# EditorWindow.open_module, others, belong here.
# Undo command bindings
# Save in case parent window is closed (ie, during askopenfile()).
# If editFile is valid and already open, flist.open will
# shift focus to its existing window.
# If the current window exists and is a fresh unnamed,
# unmodified editor window (not an interpreter shell),
# pass self.loadfile to flist.open so it will load the file
# in the current window (if the file is not already open)
# instead of a new window.
# Code for use outside IDLE:
# Wait for the editor window to appear
# If the file does not contain line separators, it is None.
# If the file contains mixed line separators, it is a tuple.
# We need to save the conversion results first
# before being able to execute the code
# may be a PyShell
# Saving shell.
# Changes 'end-1c' value.
# This is either plain ASCII, or Tk was returning mixed-encoding
# text to us. Don't try to guess further.
# Preserve a BOM that might have been present on opening
# See whether there is anything non-ASCII in it.
# If not, no need to figure out the encoding.
# Check if there is an encoding declared
# Fallback: save as UTF-8, with BOM - ignoring the incorrect
# declared encoding
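The fallback chain these comments describe - plain ASCII first, then a declared PEP 263 encoding, then UTF-8 with a BOM - can be sketched as follows (a simplification for illustration, not IOBinding's actual logic):

```python
import codecs
import re

CODING_RE = re.compile(r'coding[:=]\s*([-\w.]+)')

def encode_for_save(text):
    """Encode text for saving, falling back as described above."""
    try:
        return text.encode('ascii')  # nothing non-ASCII: nothing to decide
    except UnicodeEncodeError:
        pass
    # Look for an encoding declaration in the first two lines (PEP 263).
    head = '\n'.join(text.split('\n', 2)[:2])
    match = CODING_RE.search(head)
    if match:
        try:
            return text.encode(match.group(1))
        except (LookupError, UnicodeEncodeError):
            pass  # incorrect declared encoding: ignore it
    # Fallback: save as UTF-8, with BOM.
    return codecs.BOM_UTF8 + text.encode('utf-8')
```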
# Shell undo is reset after every prompt; it looks saved but probably isn't.
#posix platform
#win32 platform
#no printing for this platform
#we can try to print for this platform
# things can get ugly on NT if there is no printer available.
# Note: The Text widget will be accessible as self.anchor_widget
# align to left of window
# Only called in calltip.Calltip, where lines are truncated
# If the event was triggered by the same event that unbound
# this function, the function will be called nevertheless,
# so do nothing in this case.
# Hide the call-tip if the insertion cursor moves outside of the
# parenthesis.
# Not hiding the call-tip.
# Re-schedule this function to be called again in a short while.
# See the explanation in checkhide_event.
# ValueError may be raised by MultiCall
# Rename to stackbrowser or possibly consolidate with browser.
# Titlecase names are overrides.
# self.object not necessarily dict.
# to obtain a traceback object
# dismiss dialog
# License, copyright, and credits are of type _sitebuiltins._Printer
# Encode CREDITS.txt to utf-8 for proper version of Loewis.
# Specify others as ascii until need utf-8, so catch errors.
# Cache is used to only remove added attributes
# when changing the delegate.
# May raise AttributeError
# Function is really about resetting delegator dict
# to original state.  Cache is just a means to that end.
# TODO: use also in codecontext.py
# All values are passed through getint(), since some
# values may be pixel objects, which can't simply be added to ints.
# Ensure focus is always redirected to the main editor text widget.
# Redirect mouse scrolling to the main editor text widget.
# Note that without this, scrolling with the mouse only scrolls
# the line numbers.
# Redirect mouse button events to the main editor text widget,
# except for the left mouse button (1).
# Note: X11 sends Button-4 and Button-5 events for the scroll wheel.
# Convert double- and triple-click events to normal click events,
# since event_generate() doesn't allow generating such events.
# start_line is set upon <Button-1> to allow selecting a range of rows
# by dragging.  It is cleared upon <ButtonRelease-1>.
# last_y is initially set upon <B1-Leave> and is continuously updated
# upon <B1-Motion>, until <B1-Enter> or the mouse button is released.
# It is used in text_auto_scroll(), which is called repeatedly and
# does have a mouse event available.
# auto_scrolling_after_id is set whenever text_auto_scroll is
# scheduled via .after().  It is used to stop the auto-scrolling
# upon <B1-Enter>, as well as to avoid scheduling the function several
# times in parallel.
# On mouse up, we're no longer dragging.  Set the shared persistent
# variables to None to represent this.
# i.e. if not currently dragging
# See: https://github.com/tcltk/tk/blob/064ff9941b4b80b85916a8afe86a6c21fd388b54/library/text.tcl#L670
# Schedule the initial call to text_auto_scroll(), if not already
# scheduled.
# Cancel the scheduling of text_auto_scroll(), if it exists.
# Insert the delegator after the undo delegator, so that line numbers
# are properly updated after undo and redo actions.
# no need to update the sidebar
# Insert the TextChangeDelegator after the last delegator, so that
# the sidebar reflects final changes to the text widget contents.
# Create top frame, with scrollbar and listbox
# Tie listbox and scrollbar together
# Bind events to the list box
# Mark as empty
# Methods to override for specific actions
# subprocess and test
# See __init__ for usage
# If not inside parentheses, no calltip.
# If a calltip is shown for the current parentheses, do nothing.
# No expression before the opening parenthesis, e.g.
# because it's in a string or the opener for a tuple:
# At this point, the current index is after an opening
# parenthesis, in a section of code, preceded by a valid
# expression. If there is a calltip shown, it's not for the
# same index and should be closed.
# Simple, fast heuristic: If the preceding expression includes
# an opening parenthesis, it likely includes a function call.
# Only protect user code.
# An uncaught exception closes idle, and eval can raise any
# exception, especially if user classes are involved.
# The following are used in get_argspec and some in tests
# enough for bytes
# for wrapped signatures
# Determine function object fob to inspect.
# Buggy user object could raise anything.
# No popup for non-callables.
# For Get_argspecTest.test_buggy_getattr_class, CallA() & CallB().
# Initialize argspec and wrap it to get lines.
# If fob has no argument, use default callable argspec.
# Augment lines from docstring, if any, and join to get argspec.
# This creates a cycle that persists until root is deleted.
# needed by report_error()
# search pattern
# regular expression?
# match case?
# match whole word?
# wrap around buffer?
# search backwards?
# Access methods
# Higher level access methods
# called only in search.py: 66
# if True, see setcookedpat
# Derived class could override this with something fancier
# Compilation failed -- stop
# m.start(), m.end() == match slice indexes
# Fails on invalid index
# Frame imported in ...Base
# Importing OutputWindow here fails due to import loop
# EditorWindow -> GrepDialog -> OutputWindow -> EditorWindow
# leave here!
# Tk window has been closed, OutputWindow.text = None,
# so in OW.write, OW.text.insert fails.
# tkinter import not needed because module does not create widgets,
# although many methods operate on text widget arguments.
#$ event <<redo>>
#$ win <Control-y>
#$ unix <Alt-z>
#$ event <<undo>>
#$ win <Control-z>
#$ unix <Control-z>
#$ event <<dump-undo-state>>
#$ win <Control-backslash>
#$ unix <Control-backslash>
# or a CommandSequence instance
# Clients should call undo_block_start() and undo_block_stop()
# around a sequence of editing cmds to be treated as a unit by
# undo & redo.  Nested matching calls are OK, and the inner calls
# then act like nops.  OK too if no editing cmds, or only one
# editing cmd, is issued in between:  if no cmds, the whole
# sequence has no effect; and if only one cmd, that cmd is entered
# directly into the undo list, as if undo_block_xxx hadn't been
# called.  The intent of all that is to make this scheme easy
# to use:  all the client has to worry about is making sure each
# _start() call is matched by a _stop() call.
# no need to wrap a single cmd
# this blk of cmds, or single cmd, has already
# been done, so don't execute it again
##print "truncating undo list"
# Base class for Undoable commands
# Undoable insert command
# Insert before the final newline
##sys.__stderr__.write("do: %s\n" % self)
##sys.__stderr__.write("redo: %s\n" % self)
##sys.__stderr__.write("undo: %s\n" % self)
# Undoable delete command
# Don't delete the final newline
# Wrapper for a sequence of undoable cmds to be undone/redone
# as a unit
# Calculate the border width and horizontal padding required to
# align the context with the text in the main Text widget.
# Calculate the required horizontal padding and border width.
# Don't request more than we get.
# Get the current context and initiate the recurring update event.
# Grid the context widget above the text widget.
# The indentation level we are currently in.
# For a line to be interesting, it must begin with a block opening
# keyword, and have less indentation than lastindent.
# Also show the if statement.
# Haven't scrolled.
# Retain only context info applicable to the region
# between topvisible and new_topvisible.
# self.topvisible > new_topvisible: # Scroll up.
# Retain only context info associated
# with lines above new_topvisible.
# Last context_depth context lines.
# Update widget.
# No context lines are showing.
# Line number clicked.
# Lines not displayed due to maxlines.
# not followed by ...
# a character which means it can't be a
# pattern-matching statement
# a keyword
# a lone underscore
# pattern-matching case
# Called from htest, TextFrame, Editor, and Turtledemo.
# Not automatic because ColorDelegator does not know 'text'.
# No delegate - stop any colorizing.
# "hit" is used by ReplaceDialog to mark matches. It shouldn't be changed by Colorizer, but
# that currently isn't technically possible. This should be moved elsewhere in the future
# when fixing the "hit" tag's visibility, or when the replace dialog is replaced with a
# non-modal alternative.
##print head, "get", mark, next, "->", repr(line)
# We're in an inconsistent state, and the call to
# update may tell us to stop.  It may also change
# the correct value for "next" (since this is a
# line.col string, not a true mark).  So leave a
# crumb telling the next invocation to resume here
# in case update tells us to leave.
# widget instance
# widget's root
# widget's (full) Tk pathname
# Rename the Tcl command within Tcl:
# Create a new Tcl command whose name is the widget's pathname, and
# whose action is to dispatch on the operation passed to the widget:
# Restore the original widget Tcl command.
# Should not be needed
# if instance is deleted after close, as in Percolator.
# can be a Tcl_Obj
# redundant with self.redir
# These two could be deleted after checking recipient code.
# guess the type of an existing database, if not creating a new one
# db doesn't exist or 'n' flag was specified to create a new db
# file doesn't exist and the new flag was used so use default type
# db type cannot be determined
# Check for ndbm first -- this has a .pag and a .dir file
# some dbm emulations based on Berkeley DB generate a .db file
# some do not, but they should be caught by the bsd checks
# guarantee we can actually open the file using dbm
# kind of overkill, but since we are dealing with emulations
# it seems like a prudent step
# Check for dumbdbm next -- this has a .dir and a .dat file
# First check for presence of files
# dumbdbm files with no keys are empty
# See if the file exists, return None if not
# Read the start of the file -- the magic number
# Return "" if not at least 4 bytes
# Check for SQLite3 header string.
# Convert to 4-byte int in native byte order -- return "" if impossible
# Check for GNU dbm
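A minimal sketch of the magic-number sniffing described above (the SQLite header string and the GNU dbm magic values are taken from `dbm.whichdb`; the helper name is hypothetical):

```python
import struct

def sniff_magic(prefix: bytes) -> str:
    # Inspect the first bytes of a database file, as whichdb() does.
    if len(prefix) < 4:
        return ""                                # not even a magic number's worth
    # Check for the SQLite3 header string (16 bytes, NUL-terminated).
    if prefix[:16] == b"SQLite format 3\x00":
        return "dbm.sqlite3"
    # Convert to a 4-byte int in native byte order.
    (magic,) = struct.unpack("=l", prefix[:4])
    # Check for GNU dbm magic numbers.
    if magic in (0x13579ACE, 0x13579ACD, 0x13579ACF):
        return "dbm.gnu"
    return ""
```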
# Later versions of Berkeley db hash file have a 12-byte pad in
# front of the file type
# The on-disk directory and data files can remain in mutually
# inconsistent states for an arbitrarily long time (see comments
# at the end of __setitem__).  This is only repaired when _commit()
# gets called.  One place _commit() gets called is from __del__(),
# and if that occurs at program shutdown time, module globals may
# already have gotten rebound to None.  Since it's crucial that
# _commit() finish successfully, we can't ignore shutdown races
# here, and _commit() must not reference any globals.
# for _commit()
# The directory file is a text file.  Each line looks like
# where key is the string key, pos is the offset into the dat
# file of the associated value's first byte, and siz is the number
# of bytes in the associated value.
# The data file is a binary file pointed into by the directory
# file, and holds the values associated with keys.  Each value
# begins at a _BLOCKSIZE-aligned byte offset, and is a raw
# binary 8-bit string value.
# The index is an in-memory dict, mirroring the directory file.
# maps keys to (pos, siz) pairs
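A minimal sketch (hypothetical helper names) of the directory-file line format described above, assuming each line pairs a key with its `(pos, siz)` tuple in `repr` form so it can be parsed back safely:

```python
import ast

def format_dir_line(key: str, pos: int, siz: int) -> str:
    # One directory line: the key and its (pos, siz) pair, written with
    # repr() so the line is a valid Python literal expression.
    return "%r, %r\n" % (key, (pos, siz))

def parse_dir_line(line: str):
    # Parse a line back into (key, (pos, siz)) without using eval().
    key, pos_siz = ast.literal_eval(line)
    return key, pos_siz
```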
# Handle the creation
# Mod by Jack: create data file if needed
# Read directory file into the in-memory index dict.
# Write the index dict to the directory file.  The original directory
# file (if any) is renamed with a .bak extension first.  If a .bak
# file currently exists, it's deleted.
# CAUTION:  It's vital that _commit() succeed, and _commit() can
# be called from __del__().  Therefore we must never reference a
# global in this routine.
# Use Latin-1 since it has no qualms with any value in any
# position; UTF-8, though, does care sometimes.
# may raise KeyError
# Append val to the data file, starting at a _BLOCKSIZE-aligned
# offset.  The data file is first padded with NUL bytes (if needed)
# to get to an aligned offset.  Return pair
# Write val to the data file, starting at offset pos.  The caller
# is responsible for ensuring that there's enough room starting at
# pos to hold val, without overwriting some other value.  Return
# pair (pos, len(val)).
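A minimal sketch of the _BLOCKSIZE-aligned append described above, using an in-memory buffer in place of the data file (names are hypothetical):

```python
_BLOCKSIZE = 512

def aligned_append(buf: bytearray, val: bytes):
    # Pad with NUL bytes (if needed) up to the next _BLOCKSIZE boundary,
    # then append val.  Return the (pos, siz) pair the directory records.
    pos = len(buf)
    npad = (-pos) % _BLOCKSIZE   # bytes needed to reach alignment
    buf.extend(b"\0" * npad)
    pos = len(buf)
    buf.extend(val)
    return pos, len(val)
```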
# key is a new key whose associated value starts in the data file
# at offset pos and with length siz.  Add an index record to
# the in-memory index dict, and append one to the directory file.
# See whether the new value is small enough to fit in the
# (padded) space currently occupied by the old value.
# The new value doesn't fit in the (padded) space used
# by the old value.  The blocks used by the old value are
# forever lost.
# Note that _index may be out of synch with the directory
# file now:  _setval() and _addval() don't update the directory
# file.  This also means that the on-disk directory and data
# files are in a mutually inconsistent state, and they'll
# remain that way until _commit() is called.  Note that this
# is a disaster (for the database) if the program crashes
# (so that _commit() never gets called).
# The blocks used by the associated value are lost.
# XXX It's unclear why we do a _commit() here (the code always
# XXX has, so I'm not changing it).  __setitem__ doesn't try to
# XXX keep the directory file in synch.  Why should we?  Or
# XXX why shouldn't __setitem__?
# Modify mode depending on the umask
# Turn off any bits that are set in the umask
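A small sketch of clearing umask bits from a requested mode (the helper name is hypothetical); note `os.umask()` has no read-only getter, so the mask is set and immediately restored:

```python
import os

def mode_with_umask(mode: int) -> int:
    # Read the process umask by setting a dummy value and restoring it,
    # then turn off any bits that are set in the umask.
    um = os.umask(0)
    os.umask(um)
    return mode & ~um
```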
# We use the URI format when opening the database.
# This is an optimization only; it's ok if it fails.
# Directory of system wheel packages. Some Linux distribution packaging
# policies recommend against bundling dependencies. For example, Fedora
# installs wheel packages in the /usr/share/python-wheels/ directory and doesn't
# install the ensurepip._bundled package.
# NOTE: The compile-time `WHEEL_PKG_DIR` is unset so there is no place
# NOTE: for looking up the wheels.
# NOTE: `WHEEL_PKG_DIR` does not contain any wheel files for `pip`.
# Prefer pip from the wheel package directory, if present.
# Extract '21.2.4' from 'pip-21.2.4-py3-none-any.whl'
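A one-line sketch of that extraction, relying on the wheel filename convention `name-version-pytag-abitag-platform.whl` (the helper name is hypothetical):

```python
def wheel_version(wheel_name: str) -> str:
    # The version is the second dash-separated field of a wheel filename.
    return wheel_name.split("-")[1]
```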
# Run the bootstrapping in a subprocess to avoid leaking any state that happens
# after pip has executed. Particularly, this avoids the case when pip holds onto
# the files in *additional_paths*, preventing us from removing them at the end of the
# invocation.
# run code in isolated mode if currently running isolated
# We deliberately ignore all pip environment variables
# when invoking pip
# See http://bugs.python.org/issue19734 for details
# We also ignore the settings in the default pip configuration file
# See http://bugs.python.org/issue20053 for details
# Discard the return value
# By default, installing pip installs all of the
# following scripts (X.Y == running Python version):
# pip 1.5+ allows ensurepip to request that some of those be left out
# omit pip, pipX
# omit pip
# Put our bundled wheels into a temporary directory and construct the
# additional paths that need to be added to sys.path
# Construct the arguments to be passed to the pip command
# Nothing to do if pip was never installed, or has been removed
# If the installed pip version doesn't match the available one,
# leave it alone
# drive exists but is not accessible
# fix for bpo-35306
# broken symlink pointing to itself
# EBADF - guard against macOS `stat` throwing EBADF
# The `_raw_path` slot stores a joined string path. This is set in the
# `__init__()` method.
# The '_resolving' slot stores a boolean indicating whether the path
# is being processed by `PathBase.resolve()`. This prevents duplicate
# work from occurring when `resolve()` calls `stat()` or `readlink()`.
# If the suffix is non-empty, we can't make the stem empty.
# If the stem is empty, we can't make the suffix non-empty.
# Maximum number of symlinks to follow in resolve()
# Convenience functions for querying the stat results
# Non-encodable path
# Path doesn't exist or is a broken symlink
# (see http://web.archive.org/web/20200623061726/https://bitbucket.org/pitrou/pathlib/issues/12/ )
# Need to exist and be a dir
# Path doesn't exist
# Junctions are a Windows-only feature, not present in POSIX nor the
# majority of virtual filesystems. There is no cross-platform idiom
# to check for junctions (using stat().st_mode).
# type-check for the buffer interface before truncating the file
# The user has expressed a case sensitivity choice, but we don't
# know the case sensitivity of the underlying filesystem, so we
# must use scandir() for everything, including non-wildcard parts.
# We call 'absolute()' rather than using 'os.getcwd()' directly to
# enable users to replace the implementation of 'absolute()' in a
# subclass and benefit from the new behaviour here. This works because
# os.path.abspath('.') == os.getcwd().
# If the user has *not* overridden the `readlink()` method, then symlinks are unsupported
# and (in non-strict mode) we can improve performance by not calling `stat()`.
# Delete '..' segment immediately following root
# Delete '..' segment and its predecessor
# Like Linux and macOS, raise OSError(errno.ELOOP) if too many symlinks are
# encountered during resolution.
# If the symlink target is absolute (like '/etc/hosts'), set the current
# path to its uppermost parent (like '/').
# Add the symlink target's reversed tail parts (like ['hosts', 'etc']) to
# the stack of unresolved path parts.
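The lexical part of the `..` handling described above can be sketched as follows, assuming `parts` holds the path components after the root (the real `resolve()` additionally follows symlinks as it walks):

```python
def eliminate_dotdot(parts):
    # Lexical '..' elimination: a '..' removes its predecessor; a '..'
    # immediately following the root (i.e. with nothing left to pop)
    # is simply deleted.
    out = []
    for part in parts:
        if part == "..":
            if out:
                out.pop()       # delete '..' and its predecessor
            # else: '..' immediately follows root -- delete it
        else:
            out.append(part)
    return out
```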
# The `_raw_paths` slot stores unnormalized string paths. This is set
# in the `__init__()` method.
# The `_drv`, `_root` and `_tail_cached` slots store parsed and
# normalized parts of the path. They are set when any of the `drive`,
# `root` or `_tail` properties are accessed for the first time. The
# three-part division corresponds to the result of
# `os.path.splitroot()`, except that the tail is further split on path
# separators (i.e. it is a list of strings), and that the root and
# tail are normalized.
# The `_str` slot stores the string representation of the path,
# computed from the drive, root and tail when `__str__()` is called
# for the first time. It's used to implement `_str_normcase`
# The `_str_normcase_cached` slot stores the string path with
# normalized case. It is set when the `_str_normcase` property is
# accessed for the first time. It's used to implement `__eq__()`
# `__hash__()`, and `_parts_normcase`
# The `_parts_normcase_cached` slot stores the case-normalized
# string path after splitting on path separators. It's set when the
# `_parts_normcase` property is accessed for the first time. It's used
# to implement comparison methods like `__lt__()`.
# The `_hash` slot stores the hash of the case-normalized string
# path. It's set when `__hash__()` is called for the first time.
# GH-103631: Convert separators for backwards compatibility.
# Avoid calling super().__init__, as an optimisation
# String with normalized case, for hashing and equality checks
# Cached parts with normalized case, for comparisons.
# e.g. //server/share
# e.g. //?/unc/server/share
# The value of this property should not be cached on the path object,
# as doing so would introduce a reference cycle.
# Optimization: work with raw paths on POSIX.
# It's a path on a local drive => 'file:///c:/a/b'
# It's a path on a network drive => 'file://host/share/a/b'
# It's a posix path => 'file:///etc/hosts'
# The string representation of an empty path is a single dot ('.'). Empty
# paths shouldn't match wildcards, so we change it to the empty string.
# Subclassing os.PathLike makes isinstance() checks slower,
# which in turn makes Path construction slower. Register instead!
# Call io.text_encoding() here to ensure any warning is raised at an
# appropriate stack level.
# GH-65238: pathlib doesn't preserve trailing slash. Add it back.
# Normalize results
# There is a CWD on each drive-letter drive.
# Fast path for "empty" paths, e.g. Path("."), Path("") or Path().
# We pass only one argument to with_segments() to avoid the cost
# of joining, and we exploit the fact that getcwd() returns a
# fully-normalized string by storing it in _str. This is used to
# implement Path.cwd().
# First try to bump modification time
# Implementation note: GNU touch uses the UTIME_NOW option of
# the utimensat() / futimens() functions.
# Avoid exception chaining
# Remove empty authority
# Remove 'localhost' authority
# Remove slash before DOS device/UNC path
# Replace bar with colon in DOS drive
# We may need its compression method
# Pre-3.2 compatibility names
# constants for Zip file compression methods
# Other ZIP compression methods not supported
# we recognize (but not necessarily support) all features up to that version
# Below are some formats and associated data for reading/writing headers using
# the struct module.  The names and structures of headers/records are those used
# in the PKWARE description of the ZIP file format:
# (URL valid as of January 2008)
# The "end of central directory" structure, magic number, size, and indices
# (section V.I in the format document)
# These last two indices are not part of the structure as defined in the
# spec, but they are used internally by this module as a convenience
# The "central directory" structure, magic number, size, and indices
# of entries in the structure (section V.F in the format document)
# indexes of entries in the central directory structure
# General purpose bit flags
# Zip Appnote: 4.4.4 general purpose bit flag: (2 bytes)
# Bits 1 and 2 have different meanings depending on the compression used.
# _MASK_COMPRESS_OPTION_2 = 1 << 2
# _MASK_USE_DATA_DESCRIPTOR: If set, crc-32, compressed size and uncompressed
# size are zero in the local header and the real values are written in the data
# descriptor immediately following the compressed data.
# Bit 4: Reserved for use with compression method 8, for enhanced deflating.
# _MASK_RESERVED_BIT_4 = 1 << 4
# _MASK_UNUSED_BIT_7 = 1 << 7
# _MASK_UNUSED_BIT_8 = 1 << 8
# _MASK_UNUSED_BIT_9 = 1 << 9
# _MASK_UNUSED_BIT_10 = 1 << 10
# Bit 12: Reserved by PKWARE for enhanced compression.
# _MASK_RESERVED_BIT_12 = 1 << 12
# _MASK_ENCRYPTED_CENTRAL_DIR = 1 << 13
# Bit 14, 15: Reserved by PKWARE
# _MASK_RESERVED_BIT_14 = 1 << 14
# _MASK_RESERVED_BIT_15 = 1 << 15
# The "local file header" structure, magic number, size, and indices
# (section V.A in the format document)
# The "Zip64 end of central directory locator" structure, magic number, and size
# The "Zip64 end of central directory" record, magic number, size, and indices
# (section V.G in the format document)
# use memoryview for zero-copy slices
# file has correct magic number
# If the seek fails, the file is not large enough to contain a ZIP64
# end-of-archive record, so just return the end record we were given.
# Assume no 'zip64 extensible data'
# Update the original endrec using data from the ZIP64 record
# Determine file size
# Check to see if this is ZIP file with no archive comment (the
# "end of central directory" structure should be the last item in the
# file if this is the case).
# the signature is correct and there's no comment, unpack structure
# Append a blank comment and record start offset
# Try to read the "Zip64 end of central directory" structure
# Either this is not a ZIP file, or it is a ZIP file with an archive
# comment.  Search the end of the file for the "end of central directory"
# record signature. The comment is the last item in the ZIP file and may be
# up to 64K long.  It is assumed that the "end of central directory" magic
# number does not appear in the comment.
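A minimal sketch of that end-of-central-directory search (hypothetical helper name), scanning only the region where the record can legally start: the fixed 22-byte record plus a comment of at most 64 KiB:

```python
EOCD_SIG = b"PK\x05\x06"
EOCD_SIZE = 22           # fixed portion of the end-of-central-dir record
MAX_COMMENT = 1 << 16    # the comment length field is only 16 bits

def find_eocd(data: bytes):
    # Search backwards from the end of the file; rfind() picks the last
    # occurrence, since the record is the final item in the archive.
    tail = data[-(EOCD_SIZE + MAX_COMMENT):]
    pos = tail.rfind(EOCD_SIG)
    if pos < 0:
        return None      # no valid end-of-central-directory signature
    return len(data) - len(tail) + pos
```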
# found the magic number; attempt to unpack and interpret
# Zip file is corrupted.
# as claimed by the zip file
# Unable to find a valid end of central directory structure
# Terminate the file name at the first null byte.  Null bytes in file
# names are used as tricks by viruses in archives.
# This is used to ensure paths in generated ZIP files always use
# forward slashes as the directory separator, as required by the
# ZIP format specification.
# Original file name in archive
# Terminate the file name at the first null byte and
# ensure paths always use forward slashes as the directory separator.
# Normalized file name
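Both normalization steps can be sketched in a few lines (hypothetical helper name): truncate at the first NUL byte, then convert backslashes to the forward slashes the ZIP format requires:

```python
def sanitize_name(name: str) -> str:
    # Terminate the name at the first null byte -- a trick used by
    # viruses in archives -- then normalize the directory separator.
    null = name.find("\x00")
    if null >= 0:
        name = name[:null]
    return name.replace("\\", "/")
```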
# year, month, day, hour, min, sec
# Standard values:
# Type of compression for the file
# Level for the compressor
# Comment for each file
# ZIP extra data
# System which created ZIP archive
# Assume everything else is unix-y
# Version which created ZIP archive
# Version needed to extract archive
# Must be zero
# ZIP flag bits
# Volume number of file header
# External file attributes
# Size of the compressed file
# Size of the uncompressed file
# Start of the next local header or central directory
# Other attributes are set by class ZipFile:
# header_offset         Byte offset to the file header
# CRC                   CRC-32 of the uncompressed file
# Maintain backward compatibility with the old protected attribute name.
# Set these to zero because we write them after the file data
# We always explicitly pass zip64 within this module. This
# remains for anyone using ZipInfo.FileHeader as a public API.
# Try to decode the extra field.
# ZIP64 extension (large files and/or large archives)
# Unicode Path Extra Field
# Create ZipInfo instance to store file information
# The ZIP format specification requires the use of forward slashes
# as the directory separator, but in practice some ZIP files
# created on Windows can use backward slashes.  For compatibility
# with the extraction code which already handles this:
# ZIP encryption uses the CRC32 one-byte primitive for scrambling some
# internal keys. We noticed that a direct implementation is faster than
# relying on binascii.crc32().
# ZIP supports a password-based form of encryption. Even though known
# plaintext attacks have been found against it, it is still useful
# to be able to get data out of such a file.
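The one-byte CRC-32 primitive mentioned above can be sketched with the standard reflected-CRC table (polynomial 0xEDB88320); the helper names are hypothetical:

```python
def _gen_crc_table():
    # Build the standard 256-entry CRC-32 lookup table.
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ 0xEDB88320 if crc & 1 else crc >> 1
        table.append(crc)
    return table

_CRC_TABLE = _gen_crc_table()

def crc32_update_byte(crc: int, byte: int) -> int:
    # Fold a single byte into a running CRC-32 value -- the primitive
    # the legacy ZIP cipher uses for updating its internal keys.
    return (crc >> 8) ^ _CRC_TABLE[(crc ^ byte) & 0xFF]
```

Fed a whole buffer with the usual pre/post-inversion, this byte-at-a-time primitive reproduces `binascii.crc32()`.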
# compresslevel is ignored for ZIP_LZMA
# Provide the tell method for unseekable stream
# Max size supported by decompressor.
# Read from compressed files in 4k blocks.
# Chunk size to read during seek
# compare against the file type from extended local headers
# compare against the CRC otherwise
# The first 12 bytes of the cipher stream are an encryption header
# Shortcut common case - newline found in buffer.
# Return up to 512 bytes to reduce allocation overhead for tight loops.
# Update the CRC using the given data.
# No need to compute the CRC if we don't have a reference value
# Check the CRC if we're at the end of the file
# Read up to n compressed bytes with at most one read() system call,
# decrypt and decompress them.
# Read from file.
## Handle unconsumed data.
# Just move the _offset index if the new position is in the _readbuffer
# Fast seek uncompressed unencrypted file
# disable CRC checking after first seeking - it would be invalid
# seek actual file taking already buffered data into account
# flush read buffer
# Position is before the current position. Reset the ZipExtFile
# Accept any data that supports the buffer protocol
# Flush any data from the compressor, and update header info
# Write updated header info
# Write CRC and file sizes after the file data
# Seek backwards and write file header (which will now include
# correct CRC and file sizes)
# Preserve current position in file
# Successfully written: Add file to our caches
# Level of printing: 0 through 3
# Find file info given name
# List of ZipInfo instances for archive
# Method of compression
# Check that we don't try to write with nonconforming codecs
# Check if we were passed a file-like object
# No, it's a filename
# set the modified flag so central directory gets written
# even if no files are added to the archive
# Some file-like objects can provide tell() but not seek()
# See if file is a zip file
# seek to start of directory and overwrite
# file is not a zip file, just append
# bytes in central directory
# offset of central directory
# archive comment
# "concat" is zero, unless zip was concatenated to another file
# If Zip64 extension structures are present, account for them
# self.start_dir:  Position of start of central directory
# Convert date/time code to (year, month, day, hour, min, sec)
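That conversion unpacks the MS-DOS packed format: 7 bits of year-since-1980, 4 of month and 5 of day in the date word; 5/6/5 bits with 2-second resolution in the time word. A sketch (hypothetical helper name):

```python
def decode_dos_datetime(dosdate: int, dostime: int):
    # Unpack the two 16-bit MS-DOS date/time words into a 6-tuple.
    year   = ((dosdate >> 9) & 0x7F) + 1980
    month  = (dosdate >> 5) & 0x0F
    day    = dosdate & 0x1F
    hour   = (dostime >> 11) & 0x1F
    minute = (dostime >> 5) & 0x3F
    second = (dostime & 0x1F) * 2     # stored with 2-second resolution
    return (year, month, day, hour, minute, second)
```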
# update total bytes read from central directory
# Read by chunks, to avoid an OverflowError or a
# MemoryError with very large embedded files.
# Check CRC-32
# check for valid comment length
# Make sure we have an info object
# 'name' is already an info object
# Get info object for name
# Open for reading:
# Skip the file header:
# Zip 2.7: compressed patched data
# strong encryption
# UTF-8 filename
# check for encrypted flag & handle password
# Size and CRC are overwritten with correct data after processing the file
# Compressed data includes an end-of-stream (EOS) marker
# permissions: ?rw-------
# Compressed size can be larger than uncompressed size
# remove trailing dots and spaces
# rejoin, removing empty parts.
# build the destination pathname, replacing
# interpret absolute pathname as relative, remove drive letter or
# UNC path, redundant separators, "." and ".." components.
# filter illegal characters on Windows
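The per-component cleanup described above can be sketched as follows (hypothetical helper name): strip trailing dots and spaces, which are illegal in Windows filenames, then rejoin while dropping components that became empty:

```python
def sanitize_windows_parts(arcname: str) -> str:
    # Remove trailing dots and spaces from each path component,
    # then rejoin, removing empty parts.
    parts = (p.rstrip(". ") for p in arcname.split("/"))
    return "/".join(p for p in parts if p)
```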
# Create all upper directories if necessary.
# drwxrwxr-x
# ?rw-------
# Uncompressed size
# Start of header bytes
# write ending records
# write central directory
# Append a ZIP64 field to the extra's
# Write end-of-zip-archive record
# Need to write the ZIP64 end-of-archive records
# This is a package directory, add it
# Add all *.py files and package subdirectories
# Recursive call
# This is NOT a package directory, add its files at top level
# legacy mode: use whatever file is present
# Use .pyc file.
# Use the __pycache__/*.pyc file, but write it to the legacy pyc
# file name in the archive.
# Compile py into PEP 3147 pyc file.
# new mode: use given optimization level
# else: ignore
# used privately for tests
# Only allow for FastLookup when supplied zipfile is read-only
# compute stack level so that the caller of the caller sees any warning.
# PyPy no longer special cased after 7.3.19 (or maybe 7.3.18)
# See jaraco/zipp#143
# Text mode:
# see bpo-38901
# See issue 24875. We need system_site_packages to be False
# until after pip is installed.
# We had set it to False before, now
# restore it and rewrite the configuration
# gh-90329: Don't display a warning for short/long names
# see gh-96861
# Always create the simplest name in the venv. It will either be a
# link back to the executable, or a copy of the appropriate launcher
# Issue 21197: create lib64 as a symlink to lib on 64-bit non-OS X POSIX
# Issue #21643
# Assign and update the command to use when launching the newly created
# environment, in case it isn't simply the executable script (e.g. bpo-45337)
# bpo-45337: Fix up env_exec_cmd to account for file system redirections.
# Some redirects only apply to CreateFile and not CreateProcess
# can't link to itself!
# may need to use a more specific exception
# Issue 18807: make copies if
# symlinks are not wanted
# For symlinking, we need all the DLLs to be available alongside
# the executables.
# copy init.tcl
# gh-98251: We do not want to just use '-I' because that masks
# legitimate user preferences (such as not writing bytecode). All we
# really need is to ensure that the path variables do not overrule
# normal venv handling.
# gh-124651: need to quote the template strings properly
# fallbacks to POSIX shell compliant quote
# at top-level, remove irrelevant dirs
# ignore files in top level
# Some constants, most notably the ACS_* ones, are only added to the C
# _curses module's dictionary after initscr() is called.  (Some
# versions of SGI's curses don't define values for those constants
# until initscr() has been called.)  This wrapper function calls the
# underlying C initscr(), and then copies the constants from the
# _curses module to the curses package's dictionary.  Don't do 'from
# curses import *' if you'll be needing the ACS_* constants.
# we call setupterm() here because it raises an error
# instead of calling exit() in error cases.
# This is a similar wrapper for start_color(), which adds the COLORS and
# COLOR_PAIRS variables which are only available after start_color() is
# called.
# Import Python has_key() implementation if _curses doesn't contain has_key()
# Wrapper for the entire curses-based application.  Runs a function which
# should be the rest of your curses-based application.  If the application
# raises an exception, wrapper() will restore the terminal to a sane state so
# you can read the resulting traceback.
# Initialize curses
# Turn off echoing of keys, and enter cbreak mode,
# where no buffering is performed on keyboard input
# In keypad mode, escape sequences for special keys
# (like the cursor keys) will be interpreted and
# a special value like curses.KEY_LEFT will be returned
# Start color, too.  Harmless if the terminal doesn't have
# color; user can test with has_color() later on.  The try/catch
# works around a minor bit of over-conscientiousness in the curses
# module -- the error return from C start_color() is ignorable.
# Set everything back to normal
# The try-catch ignores the error we trigger from some curses
# versions by trying to write into the lowest-rightmost spot
# in the window.
# Remember where to put the cursor back since we are in insert_mode
# ^a
# ^d
# ^e
# ^f
# ^g
# ^j
# ^k
# first undo the effect of self._end_of_line
# ^l
# ^n
# ^o
# ^p
# ^@
# ^A
# ^B
# ^C
# ^D
# ^E
# ^F
# ^G
# ^H
# ^I
# ^J
# ^K
# ^L
# ^M
# ^N
# ^O
# ^P
# ^Q
# ^R
# ^S
# ^T
# ^U
# ^V
# ^W
# ^X
# ^Y
# ^Z
# ^[
# ^\
# ^]
# ^^
# ^_
# space
# Emulation of has_key() function for platforms that don't use ncurses
# Table mapping curses keys to the terminfo capability name
# Figure out the correct capability name for the keycode.
# Check the current terminal description for that capability;
# if present, return true, else return false.
# Compare the output of this implementation and the ncurses has_key,
# on platforms where has_key is already available
# Written by Brian Quinlan (brian@sweetapp.com).
# Based on code written by Fredrik Lundh.
# decorator factory
# generate response
# wrap response in a singleton tuple
# Instance can implement _listMethods to return a list of methods
# if the instance has a _dispatch method then we
# don't have enough information to provide a list
# of methods
# See http://xmlrpc.usefulinc.com/doc/sysmethodsig.html
# Instance can implement _methodHelp to return help for a method
# don't have enough information to provide help
# Note that we aren't checking that the method is actually
# a callable object of some kind
# XXX A marshalling error in any response will fail the entire
# multicall. If someone cares they should fix this.
# call the matching registered function
# call the `_dispatch` method on the instance
# call the instance's method directly
# Class attribute listing the accessible path components;
# paths not on this list will result in a 404 error.
# if not None, encode responses larger than this, if possible
# a common MTU
# Override from StreamRequestHandler: full buffering of output
# and no Nagle.
# a re to match a gzip Accept-Encoding
# If .rpc_paths is empty, just assume all paths are legal
# Check that the path is legal
# Get arguments by reading body of request.
# We read this in chunks to avoid straining
# socket.read(); around the 10 or 15Mb mark, some platforms
# begin to have problems (bug #792570).
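The chunked body read described above can be sketched like this (hypothetical helper name and chunk size), bounding each read so no single `socket.read()` has to return tens of megabytes at once:

```python
import io

def read_body(stream, size: int, max_chunk: int = 10 * 1024 * 1024) -> bytes:
    # Read up to `size` bytes in bounded chunks and join them at the end.
    chunks = []
    while size > 0:
        chunk = stream.read(min(size, max_chunk))
        if not chunk:
            break            # peer closed early; return what we have
        chunks.append(chunk)
        size -= len(chunk)
    return b"".join(chunks)
```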
# response has been sent
# In previous versions of SimpleXMLRPCServer, _dispatch
# could be overridden in this class, instead of in
# SimpleXMLRPCDispatcher. To maintain backwards compatibility,
# check to see if a subclass implements _dispatch and dispatch
# using that method if present.
# This should only happen if the module is buggy
# internal error, report as HTTP server error
# Send information about the exception if requested
# support gzip encoding of request
# Report a 404 error
# Warning: this is for debugging purposes only! Never set this to True in
# production code, as it will send out sensitive information (exception
# and stack trace details) when exceptions are raised inside
# SimpleXMLRPCRequestHandler.do_POST
# report low level exception back to server
# (each dispatcher should have handled its own
# exceptions)
# POST data is normally available through stdin
# Self documenting XML-RPC Server.
# XXX Note that this regular expression does not allow for the
# hyperlinking of arbitrary strings being used as method
# names. Only methods with names consisting of word characters
# and '.'s are hyperlinked.
# setup variables used for HTML documentation
# argspec, documentation
# XML-RPC CLIENT LIBRARY
# an XML-RPC client interface for Python.
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
# this version is designed to work with Python 2.1 or newer.
# 1999-01-14 fl  Created
# 1999-01-15 fl  Changed dateTime to use localtime
# 1999-01-16 fl  Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl  Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl  Fixed dateTime constructor, etc.
# 1999-02-02 fl  Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl  Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl  Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl  Changed boolean to check the truth value of its argument
# 2001-02-24 fl  Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl  Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl  Make sure response tuple is a singleton
# 2001-03-29 fl  Don't require empty params element (from Nicholas Riley)
# 2001-06-10 fl  Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl  Base xmlrpclib.Error on built-in Exception (from Paul Prescod)
# 2001-09-03 fl  Allow Transport subclass to override getparser
# 2001-09-10 fl  Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl  Remove containers from memo cache when done with them
# 2001-10-01 fl  Use faster escape method (80% dumps speedup)
# 2001-10-02 fl  More dumps microtuning
# 2001-10-04 fl  Make sure import expat gets a parser (from Guido van Rossum)
# 2001-10-10 sm  Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm  Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl  Use repr() to marshal doubles (from Paul Felix)
# 2002-03-17 fl  Avoid buffered read when possible (from James Rucker)
# 2002-04-07 fl  Added pythondoc comments
# 2002-04-16 fl  Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl  Added error constants (from Andrew Kuchling)
# 2002-06-27 fl  Merged with Python CVS version
# 2002-10-22 fl  Added basic authentication (based on code from Phillip Eby)
# 2003-01-22 sm  Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm  Use cStringIO if available
# 2003-04-25 ak  Add support for nil
# 2003-06-15 gn  Add support for time.struct_time
# 2003-07-12 gp  Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
# 2014-12-02 ch/doko  Add workaround for gzip bomb vulnerability
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by Fredrik Lundh.
# info@pythonware.com
# The XML-RPC client interface is
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by Fredrik Lundh
#python can be built without zlib/gzip support
# Internal stuff
# xmlrpc integer limits
# Error constants (from Dan Libby's specification at
# http://xmlrpc-epi.sourceforge.net/specs/rfc.fault_codes.php)
# Ranges of errors
# Specific errors
# Base class for all kinds of client-side errors.
# Indicates an HTTP-level protocol error.  This is raised by the HTTP
# transport layer, if the server returns an error code other than 200
# (OK).
# @param url The target URL.
# @param errcode The HTTP error code.
# @param errmsg The HTTP error message.
# @param headers The HTTP header dictionary.
# Indicates a broken XML-RPC response package.  This exception is
# raised by the unmarshalling layer, if the XML-RPC response is
# malformed.
# Indicates an XML-RPC fault response package.  This exception is
# raised by the unmarshalling layer, if the XML-RPC response contains
# a fault string.  This exception can also be used as a class, to
# generate a fault XML-RPC message.
# @param faultCode The XML-RPC fault code.
# @param faultString The XML-RPC fault string.
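For illustration, a `Fault` instance can be marshalled into a fault response packet with `xmlrpc.client.dumps()`, and the unmarshalling layer raises it again:

```python
import xmlrpc.client

# A Fault is raised by the unmarshaller, but can also be built
# directly to generate a fault response packet.
fault = xmlrpc.client.Fault(4, "Too many parameters.")
packet = xmlrpc.client.dumps(fault)

try:
    xmlrpc.client.loads(packet)
except xmlrpc.client.Fault as f:
    code, msg = f.faultCode, f.faultString
```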
# Special values
# XML-RPC only uses the naive portion of the datetime
# XML-RPC doesn't use '-' separator in the date part
# Get date/time value.
# @return Date/time value, as an ISO 8601 string.
# decode xml element contents into a DateTime structure.
# Wrapper for binary data.  This can be used to transport any kind
# of binary data over XML-RPC, using BASE64 encoding.
# @param data An 8-bit string containing arbitrary data.
# Make a copy of the bytes!
# Get buffer contents.
# @return Buffer contents, as an 8-bit string.
# XXX encoding?!
# decode xml element contents into a Binary structure
# XML parsers
# fast expat parser for Python 2.0 and later.
# XML-RPC marshalling and unmarshalling code
# XML-RPC marshaller.
# @param encoding Default encoding for 8-bit strings.  The default
# @see dumps
# by the way, if you don't understand what's going on in here,
# that's perfectly ok.
# fault instance
# parameter block
# FIXME: the xml-rpc specification allows us to leave out
# the entire <params> block if there are no parameters.
# however, changing this may break older code (including
# old versions of xmlrpclib.py), so this is better left as
# is for now.  See @XMLRPC3 for more information. /F
# check if this object can be marshalled as a structure
# check if this class is a sub-class of a basic type,
# because we don't know how to marshal these types
# (e.g. a string sub-class)
# XXX(twouters): using "_arbitrary_instance" as key as a quick-fix
# for the p3yk merge, this should probably be fixed more neatly.
# backward compatible
# check for special wrappers
# store instance attributes as a struct (really?)
# XML-RPC unmarshaller.
# @see loads
# and again, if you don't understand what's going on in here,
# return response tuple and target method
# FIXME: assert standalone == 1 ???
# prepare to handle this element
# call the appropriate end tag handler
# unknown tag ?
# accelerator support
# dispatch data
# element decoders
# struct keys are always strings
# map arrays to Python lists
# map structs to Python dictionaries
# if we stumble upon a value element with no internal
# elements, treat it as a string element
# no params
## Multicall support
# some lesser magic to store calls made to a MultiCall object
# for batch execution
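A sketch of how `MultiCall` batches calls for a single round trip; `FakeServer` below is a hypothetical stand-in for a real `ServerProxy`, which would forward `system.multicall` over HTTP:

```python
import xmlrpc.client

class FakeServer:
    # Hypothetical stand-in: answers system.multicall locally by
    # echoing each queued method name as a one-item result list.
    class _System:
        def multicall(self, calls):
            return [[c['methodName']] for c in calls]
    system = _System()

mc = xmlrpc.client.MultiCall(FakeServer())
mc.add(2, 3)       # queued, not executed yet
mc.pow(2, 8)       # queued as well
results = list(mc())   # one network round trip on a real server
```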
# convenience functions
# Create a parser object, and connect it to an unmarshalling instance.
# This function picks the fastest available XML parser.
# return A (parser, unmarshaller) tuple.
# Convert a Python tuple or a Fault instance to an XML-RPC packet.
# @def dumps(params, **options)
# @param params A tuple or Fault instance.
# @keyparam methodname If given, create a methodCall request for
# @keyparam methodresponse If given, create a methodResponse packet.
# @keyparam encoding The packet encoding.
# @return A string containing marshalled data.
# utf-8 is default
# standard XML-RPC wrappings
# a method call
# a method response, or a fault structure
# return as is
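A minimal `dumps()`/`loads()` round trip, for reference:

```python
import xmlrpc.client

# Marshal a parameter tuple into a methodCall packet, then unpack it.
packet = xmlrpc.client.dumps((41, 'spam'), methodname='example.echo')
params, method = xmlrpc.client.loads(packet)
```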
# Convert an XML-RPC packet to a Python object.  If the XML-RPC packet
# represents a fault condition, this function raises a Fault exception.
# @param data An XML-RPC packet, given as an 8-bit string.
# @return A tuple containing the unpacked data, and the method name
# @see Fault
# Encode a string using the gzip content encoding as specified by the
# Content-Encoding: gzip
# in the HTTP header, as described in RFC 1952
# @param data the unencoded data
# @return the encoded data
# Decode a string using the gzip content encoding as specified by the
# @param data The encoded data
# @keyparam max_decode Maximum bytes to decode (20 MiB default), use negative
# @return the unencoded data
# @raises ValueError if data is not correctly coded.
# @raises ValueError if max gzipped payload length exceeded
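The encode/decode pair can be approximated with the standard `gzip` module; the `gzip_encode`/`gzip_decode` functions below are a sketch of the behavior described above, including the decompression cap against gzip bombs:

```python
import gzip
import io

def gzip_encode(data):
    # Compress payload bytes as described in RFC 1952.
    return gzip.compress(data)

def gzip_decode(data, max_decode=20 * 1024 * 1024):
    # Cap the decompressed size; a negative max_decode disables the cap.
    with gzip.GzipFile(fileobj=io.BytesIO(data)) as f:
        if max_decode < 0:
            decoded = f.read()
        else:
            decoded = f.read(max_decode + 1)
    if 0 <= max_decode < len(decoded):
        raise ValueError("max gzipped payload length exceeded")
    return decoded

round_trip = gzip_decode(gzip_encode(b'hello' * 10))
```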
# Return a decoded file-like object for the gzip encoding
# as described in RFC 1952.
# @param response A stream supporting a read() method
# @return a file-like object that the decoded data can be read() from
#response doesn't support tell() and read(), required by
#GzipFile
# request dispatcher
# some magic to bind an XML-RPC method to an RPC server.
# supports "nested" methods (e.g. examples.getStateName)
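The nested-method trick works by chaining `__getattr__`; the `_Method` sketch below mirrors the idea, with the `send` callable standing in for the transport:

```python
class _Method:
    # Chained attribute access builds dotted names like
    # "examples.getStateName"; calling sends the full dotted name.
    def __init__(self, send, name):
        self.__send = send
        self.__name = name
    def __getattr__(self, name):
        return _Method(self.__send, "%s.%s" % (self.__name, name))
    def __call__(self, *args):
        return self.__send(self.__name, args)

calls = []
m = _Method(lambda name, args: calls.append((name, args)), "examples")
m.getStateName(41)
```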
# Standard transport class for XML-RPC over HTTP.
# <p>
# You can create custom transports by subclassing this class, and
# overriding selected methods.
# client identifier (may be overridden)
#if true, we'll request gzip encoding
# if positive, encode request using gzip if it exceeds this threshold
# note that many servers will get confused, so only use it if you know
# that they can decode such a request
#None = don't encode
# Send a complete request, and parse the response.
# Retry request if a cached connection has disconnected.
# @param host Target host.
# @param handler Target RPC handler.
# @param request_body XML-RPC request body.
# @param verbose Debugging flag.
# @return Parsed response.
#retry request once if cached connection has gone cold
# issue XML-RPC request
#All unexpected errors leave connection in
# a strange state, so we clear it.
#We got an error response.
#Discard any response data and raise exception
# Create parser.
# @return A 2-tuple containing a parser and an unmarshaller.
# get parser and unmarshaller
# Get authorization info from host parameter
# Host may be a string, or a (host, x509-dict) tuple; if a string,
# it is checked for a "user:pw@host" format, and a "Basic
# Authentication" header is added if appropriate.
# @param host Host descriptor (URL or (URL, x509 info) tuple).
# @return A 3-tuple containing (actual host, extra headers,
# get rid of whitespace
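A sketch of the host parsing described above; `get_host_info` here is a simplified stand-in that handles only the string form, not the `(host, x509)` tuple:

```python
import base64
import urllib.parse

def get_host_info(host):
    # Split "user:pw@host"; add a Basic Authorization header when
    # credentials are present in the host string.
    extra_headers = []
    auth, sep, host = host.rpartition('@')
    if sep:
        auth = urllib.parse.unquote_to_bytes(auth)
        token = base64.b64encode(auth).decode('ascii')
        extra_headers = [("Authorization", "Basic " + token)]
    return host, extra_headers

host, headers = get_host_info("user:secret@example.com")
```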
# Connect to server.
# @return An HTTPConnection object
#return an existing connection if possible.  This allows
#HTTP/1.1 keep-alive.
# create a HTTP connection object from a host descriptor
# Clear any cached connection object.
# Used in the event of socket errors.
# Send HTTP request.
# @param handler Target RPC handler (a path relative to host)
# @param request_body The XML-RPC request body
# @param debug Enable debugging if debug is true.
# @return An HTTPConnection.
# Send request headers.
# This function provides a useful hook for subclassing
# @param connection An HTTPConnection object.
# @param headers list of key,value pairs for HTTP headers
# Send request body.
#optionally encode the request
# Parse response.
# @param file Stream.
# @return Response tuple and target method.
# read response data from httpresponse, and parse it
# Check for new http response object, otherwise it is a file object.
# Standard transport class for XML-RPC over HTTPS.
# FIXME: mostly untested
# create a HTTPS connection object from a host descriptor
# host may be a string, or a (host, x509-dict) tuple
# Standard server proxy.  This class establishes a virtual connection
# to an XML-RPC server.
# This class is available as ServerProxy and Server.  New code should
# use ServerProxy, to avoid confusion.
# @def ServerProxy(uri, **options)
# @param uri The connection point on the server.
# @keyparam transport A transport factory, compatible with the
# @keyparam encoding The default encoding used for 8-bit strings
# @keyparam verbose Use a true value to enable debugging output.
# @see Transport
# establish a "logical" server connection
# get the url
# call a method on the remote server
# magic method dispatcher
# note: to call a remote object with a non-standard name, use
# result = getattr(server, "strange-python-name")(args)
# simple test program (from the XML-RPC specification)
# local server, available from Lib/xmlrpc/server.py
# its documentation for any purpose is hereby granted without fee,
# provided that the above copyright notice appear in all copies and
# that both that copyright notice and this permission notice appear in
# supporting documentation.
# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO
# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL,
# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
# Categories of actions:
# [completion]
# Reader should really be "any reader" but there's too much usage of
# HistoricalReader methods and fields in the code below for us to
# refactor at the moment.
# etc
## this should probably be done
## in a handler for SIGCONT?
# r.posxy = 0, 0  # XXX this is invalid
# we're past the end of the previous line
# move between eols
# this is something of a hack
# We need to copy over the state so that it's consistent between
# console and reader, and console does not overwrite/append stuff
# ------------ start of baudrate definitions ------------
# Add (possibly) missing baudrates (check termios man page) to termios
# Check the termios man page (Line speed) to know where these
# values come from.
# Clean up variables to avoid unintended usage
# ------------ end of baudrate definitions ------------
# this is exactly the minimum necessary to support what we
# do with poll objects
# note: The 'timeout' argument is received as *milliseconds*
# we make sure the cursor is on the screen, and that we're
# using all of the screen if we can
# use hardware scrolling if we have it.
# In macOS terminal we need to deactivate line wrap via ANSI escape code
# hpa doesn't work in windows telnet :-(
# this is frustrating; there's no reason to test (say)
# self.dch1 inside the loop -- but alternative ways of
# structuring this function are equally painful (I'm trying to
# avoid writing code generators these days...)
# reuse the oldline as much as possible, but stop as soon as we
# encounter an ESCAPE, because it might be the start of an escape
# sequence
# if we need to insert a single character right after the first detected change
# if it's a single character change in the middle of the line
# if this is the last character to fit in the line and we edit in the middle of the line
# ANSI escape characters are present, so we can't assume
# anything about the position of the cursor.  Moving the cursor
# to the left margin should work to get to a known position.
# using .get() means that things will blow up
# only if the bps is actually needed (which I'm
# betting is pretty unlikely)
# Mapping of human-readable key names to their terminal-specific codes
# Function keys F1-F20 mapping
# Known CTRL-arrow keycodes
# for xterm, gnome-terminal, xfce terminal, etc.
# for rxvt
# "read_init_file",
# "redisplay",
# "set_pre_input_hook",
# ---- multiline extensions ----
# Class fields
# Instance fields
# don't show error messages by default
# rlcompleter.py seems to not like unicode
# but feed unicode anyway if we have no choice
# emulate the behavior of the standard readline that sorts
# the completions before displaying them.
# --- simplified support for reading multiline Python statements ---
# Force single-line input if we are in raw_input() mode.
# Although there is no direct way to add a \n in this mode,
# multiline buffers can still show up using various
# commands, e.g. navigating the history.
# check if last character before "pos" is a colon, ignoring
# whitespaces and comments.
# ignore whitespaces and comments
# even if we found a non-whitespace character before
# original pos, we keep going back until newline is reached
# to make sure we ignore comments
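The backward scan can be sketched like this (a simplified stand-in for the _pyrepl logic, looking back only within the current line):

```python
def ends_with_colon(buffer, pos):
    # Walk back from pos toward the start of the line.  The first
    # non-whitespace character seen decides the verdict, but hitting a
    # '#' discards everything scanned so far (it was inside a comment),
    # so we keep going back until the newline.
    found = None
    while pos > 0:
        pos -= 1
        ch = buffer[pos]
        if ch == '\n':
            break
        if ch == '#':
            found = None            # chars seen so far were a comment
        elif found is None and not ch.isspace():
            found = (ch == ':')
    return bool(found)

ok = ends_with_colon("if x:  # start block", 20)
```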
# this is needed to hide the completion menu, if visible
# if there are already several lines and the cursor
# is not on the last one, always insert a new \n.
# if there's already a new line before the cursor then
# even if the cursor is followed by whitespace, we assume
# the user is trying to terminate the block
# auto-indent the next line like the previous line
# XXX we don't support parsing GNU-readline-style init files
# multiline extension (really a hack) for the end of lines that
# are actually continuations inside a single multiline_input()
# history item: we use \r\n instead of just \n.  If the history
# file is passed to GNU readline, the extra \r are just ignored.
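Hypothetical helpers showing the `\r\n` trick for a single history item (not the actual _pyrepl readline code):

```python
def encode_history_item(item):
    # Continuation lines inside one multiline_input() item are stored
    # with \r\n, so a plain \n still marks the item boundary in the
    # history file.
    return item.replace("\n", "\r\n") + "\n"

def decode_history_item(line):
    # Strip the item terminator, then turn continuations back into \n.
    return line.rstrip("\n").replace("\r\n", "\n").replace("\r", "\n")

stored = encode_history_item("def f():\n    return 1")
restored = decode_history_item(stored)
```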
# multiline history support
# like readline.c
# Extension
# Internal hook
# Stubs
# don't run _setup twice
# set up namespace in rlcompleter, which requires it to be a bona fide dict
# this is not really what readline.c does.  Better than nothing I guess
# pipes completely broken in Windows
# Escape non-encodable characters to avoid encoding errors later
# We've hereby abandoned whatever text hasn't been written,
# but the pager is still in control of the terminal.
# Ignore broken pipes caused by quitting the pager program.
# Ignore ctl-c like the pager itself does.  Otherwise the pager is
# left running and the terminal is in raw mode and unusable.
# Reference: https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#input-sequences
# syntax classes:
# XXX perhaps should use some unicodedata here?
# was 'end'
# was 'home'
# the entries in the terminfo database for xterms
# seem to be wrong.  this is a less than ideal
# workaround
## state
## cached metadata to speed up screen refreshes
# Enable the use of `insert` without a `prepare` call - necessary to
# facilitate the tab completion hack implemented for
# <https://bugs.python.org/issue25660>.
# Since the last call to calc_screen:
# screen and screeninfo may differ due to a completion menu being shown
# pos and cxy may differ due to edits, cursor movements, or completion menus
# Lines that are above both the old and new cursor position can't have changed,
# unless the terminal has been resized (which might cause reflowing) or we've
# entered or left paste mode (which changes prompts, causing reflowing).
# No need to keep formatting lines.
# The console can't show them.
# Only the first line's prompt can come from the cache
# Takes all of the line plus the newline
# Takes the newline
# -1 because backslash is not in buffer
# +1 because newline is in buffer
# prevent potential future infinite loop
# optimize for the common case: typing at the end of the buffer
# need to remove line-wrapping backslash
# there's a newline in buffer
# this call sets up self.cxy, so call it first.
# We use the same timeout as in readline.c: 100ms
# Keep MyPy happy off Windows
# Virtual-Key Codes: https://learn.microsoft.com/en-us/windows/win32/inputdev/virtual-key-codes
# VK_END
# VK_HOME
# VK_LEFT
# VK_UP
# VK_RIGHT
# VK_DOWN
# VK_DELETE
# VK_F1
# VK_F2
# VK_F3
# VK_F4
# VK_F5
# VK_F6
# VK_F7
# VK_F8
# VK_F9
# VK_F10
# VK_F11
# VK_F12
# VK_F13
# VK_F14
# VK_F15
# VK_F16
# VK_F17
# VK_F18
# VK_F19
# VK_F20
# Virtual terminal output sequences
# Reference: https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#output-sequences
# Check `windows_eventqueue.py` for input sequences
# State of control keys: https://learn.microsoft.com/en-us/windows/console/key-event-record-str
# Save original console modes so we can recover on cleanup.
# Console I/O is redirected, fallback...
# Scroll the buffer when the current input is greater than the visible
# portion of the window.  We need to scroll both the visible portion
# and the entire history
# If we wrapped we want to start at the next line
# Recover to original mode before running REPL
# Only process keys and keydown events
# Make enter unix-like
# Turn backspace directly into the command
# Handle special keys like arrow keys and translate them into the appropriate command
# queue the key, return the meta command
# keymap.py uses this for meta
# If virtual terminal is enabled, scan VT sequences
# Do not swallow characters that have been entered via AltGr:
# Windows internally converts AltGr to CTRL+ALT, see
# https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-vkkeyscanw
# Poor man's Windows select loop
# Windows interop
# kill spaces and tabs at the end, but only if they follow '\n'.
# meant to remove the auto-indentation only (although it would of
# course also remove explicitly-added indentation).
# ooh, look at the hack:
# skip internal commands in history
# Make sure that history does not change because of commands
# too bad, we remove the color
# sort_in_column=False (default)     sort_in_column=True
# "fill" the table with empty words, so we always have the same amount
# of rows for each column
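A sketch of the padding step, using a hypothetical `build_columns` helper:

```python
import math

def build_columns(words, num_columns, sort_in_column=False):
    # Pad with empty strings so every column has the same number of
    # rows, then slice either down columns or across rows.
    rows = math.ceil(len(words) / num_columns)
    padded = words + [""] * (rows * num_columns - len(words))
    if sort_in_column:
        return [padded[i * rows:(i + 1) * rows] for i in range(num_columns)]
    return [padded[i::num_columns] for i in range(num_columns)]

cols = build_columns(["a", "b", "c", "d", "e"], 2, sort_in_column=True)
```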
# this gets somewhat user interface-y, and as a result the logic gets
# very convoluted.
#### Desired behaviour of the completions commands.
# the considerations are:
# (1) how many completions are possible
# (2) whether the last command was a completion
# (3) if we can assume that the completer is going to return the same set of
# if there's no possible completion, beep at the user and point this out.
# this is easy.
# if there's only one possible completion, stick it in.  if the last thing
# user did was a completion, point out that he isn't getting anywhere, but
# only if the ``assume_immutable_completions`` is True.
# now it gets complicated.
# for the first press of a completion key:
# for the second bang on the completion key
# for subsequent bangs, rotate the menu around (if there are sufficient
# choices).
### Class variables
# see the comment for the complete command
# display completions inside []
### Instance variables
# We display the completions menu below the current prompt
# If we're not in the middle of multiline edit, don't append to screeninfo
# since that screws up the position calculation in pos2xy function.
# This is a hack to prevent the cursor jumping
# into the completions menu when pressing left or down arrow.
# XXX how to split?
# In multiline contexts, we're only interested in the current line.
# sanity check, buffer is empty when a special key comes
# escape sequence not recognized by our keymap: propagate it
# outside so that i can be recognized as an M-... key (see also
# the docstring in keymap.py
# sys._baserepl() above does this internally, we do it here
# set sys.{ps1,ps2} just before invoking the interactive interpreter. This
# mimics what CPython does in pythonrun.c
# curses.ascii.ctrl()
# Important: don't add things to this module, as they will end up in the REPL's
# default globals.  Use _pyrepl.main instead.
# Avoid caching this file by linecache and incorrectly report tracebacks.
# See https://github.com/python/cpython/issues/129098.
# remove lengths of any escape sequences
# CTRL-Z on Windows
# Always return a copy of the control characters list to ensure
# there are not any additional references to self.cc
# like r"\C-c"
# like "interrupt"
# (naming modules after builtin functions is not such a hot idea...)
# a KeyTrans instance translates Event objects into Command objects
# hmm, at what level do we want [C-i] and [tab] to be equivalent?
# [meta-a] and [esc a]?  obviously, these are going to be equivalent
# for the UnixConsole, but should they be for PygameConsole?
# it would in any situation seem to be a bad idea to bind, say, [tab]
# and [C-i] to *different* things... but should binding one bind the
# other?
# executive, temporary decision: [tab] and [C-i] are distinct, but
# [meta-key] is identified with [esc key].  We demand that any console
# class does quite a lot towards emulating a unix terminal.
# small optimization:
# Module providing various facilities to other parts of the package
# multiprocessing/util.py
# Copyright (c) 2006-2008, R Oudkerk
# we want threading to install its
# cleanup function before multiprocessing does
# Logging
# XXX multiprocessing should cleanup before logging
# Abstract socket support
# Function returning a temp directory which will be removed on exit
# Maximum length of a socket file path is usually between 92 and 108 [1],
# but Linux is known to use a size of 108 [2]. BSD-based systems usually
# use a size of 104 or 108 and Windows does not create AF_UNIX sockets.
# [1]: https://pubs.opengroup.org/onlinepubs/9799919799/basedefs/sys_un.h.html
# [2]: https://man7.org/linux/man-pages/man7/unix.7.html.
# On Windows platforms, we do not create AF_UNIX sockets.
# current_process() can be None if the finalizer is called
# late during Python finalization
# Most of the time, the default temporary directory is /tmp. Thus,
# listener sockets files "$TMPDIR/pymp-XXXXXXXX/sock-XXXXXXXX" do
# not have a path length exceeding SUN_PATH_MAX.
# If users specify their own temporary directory, we may be unable
# to create those files. Therefore, we fall back to the system-wide
# temporary directory /tmp, assumed to exist on POSIX systems.
# See https://github.com/python/cpython/issues/132124.
# Files created in a temporary directory are suffixed by a string
# generated by tempfile._RandomNameSequence, which, by design,
# is 8 characters long.
# Thus, the length of socket filename will be:
# Fallback to the default system-wide temporary directory.
# This ignores user-defined environment variables.
# On POSIX systems, /tmp MUST be writable by any application [1].
# We however emit a warning if this is not the case to prevent
# obscure errors later in the execution.
# On some legacy systems, /var/tmp and /usr/tmp can be present
# and will be used instead.
# [1]: https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s18.html
# At this point, the system-wide temporary directory is not usable
# but we may assume that the user-defined one is, even if we will
# not be able to write socket files out there.
# at most max(map(len, dirlist)) + 14 + 14 = 36 characters
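A sketch of the length check under the assumed Linux-style limit (`sockname_fits` and the naming pattern are illustrative):

```python
# Portable AF_UNIX path limit: 108 bytes on Linux, 104 on some BSDs.
SUN_PATH_MAX = 108   # assumed Linux-style limit

def sockname_fits(tempdir):
    # "pymp-" plus 8 random chars, then "sock-" plus 8 random chars,
    # as generated by tempfile._RandomNameSequence-style suffixes.
    candidate = f"{tempdir}/pymp-XXXXXXXX/sock-XXXXXXXX"
    return len(candidate.encode()) < SUN_PATH_MAX

fits = sockname_fits("/tmp")
```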
# get name of a temp directory which will be automatically cleaned up
# keep a strong reference to shutil.rmtree(), since the finalizer
# can be called late during Python shutdown
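The same keep-a-strong-reference idea, sketched with the public `weakref.finalize` rather than multiprocessing's internal `Finalize`:

```python
import os
import shutil
import tempfile
import weakref

class TempDir:
    def __init__(self):
        self.path = tempfile.mkdtemp()
        # Passing rmtree explicitly keeps a strong reference to it, so
        # cleanup still works if module globals are cleared at shutdown.
        self._finalizer = weakref.finalize(
            self, self._cleanup, self.path, shutil.rmtree)

    @staticmethod
    def _cleanup(path, rmtree):
        rmtree(path, ignore_errors=True)

d = TempDir()
d._finalizer()                      # run the cleanup explicitly
exists_after = os.path.exists(d.path)
```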
# Support for reinitialization of objects when bootstrapping a child process
# Finalization using weakrefs
# Need to bind these locally because the globals can have
# been cleared at shutdown
# This function may be called after this module's globals are
# destroyed.  See the _exit_function function in this module for more
# notes.
# Careful: _finalizer_registry may be mutated while this function
# is running (either by a GC run or by another thread).
# list(_finalizer_registry) should be atomic, while
# list(_finalizer_registry.items()) is not.
# key may have been removed from the registry
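The snapshot-then-lookup pattern, sketched on a plain dict:

```python
registry = {1: "a", 2: "b", 3: "c"}

# Snapshot the keys first: list(registry) is a single atomic operation
# under CPython, whereas iterating registry.items() directly could
# observe concurrent mutation.
seen = []
for key in list(registry):
    value = registry.pop(key, None)   # key may already be gone
    if value is not None:
        seen.append(value)
```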
# Clean up on exit
# We hold on to references to functions in the arglist due to the
# situation described below, where this function is called after this
# module's globals are destroyed.
# We check if the current process is None here because if
# it's None, any call to ``active_children()`` will raise
# an AttributeError (active_children winds up trying to
# get attributes from util._current_process).  One
# situation where this can happen is if someone has
# manipulated sys.modules, causing this module to be
# garbage collected.  The destructor for the module type
# then replaces all values in the module dict with None.
# For instance, after setuptools runs a test it replaces
# sys.modules with a copy created earlier.  See issues
# #9775 and #15881.  Also related: #4106, #9205, and
# #9207.
# Some fork aware types
# Close fds except those specified
# Close sys.stdin and replace stdin with os.devnull
# Flush standard streams, if any
# Start a program with only specified fds kept open
# cleanup multiprocessing
# Stop the ForkServer process if it's running
# Stop the ResourceTracker process if it's running
# bpo-37421: Explicitly call _run_finalizers() to remove immediately
# temporary directories created by multiprocessing.util.get_temp_dir().
# Wrapper for an fd used while launching a process
# Start child process using a server process
# Keep a duplicate of the data pipe's write end as a sentinel of the
# parent process used by the child process.
# This should not happen usually, but perhaps the forkserver
# process itself got killed
# Package analogous to 'threading.py' but using processes
# multiprocessing/__init__.py
# This package is intended to duplicate the functionality (and much of
# the API) of threading.py but uses processes instead of threads.  A
# subpackage 'multiprocessing.dummy' has the same API but is a simple
# wrapper for 'threading'.
# Copy stuff from default context
# XXX These should not really be documented or public.
# Alias for main module -- will be reset by bootstrapping child processes
# Module which supports allocation of memory from an mmap
# multiprocessing/heap.py
# Inheritable class which wraps an mmap, and from which blocks can be allocated
# We have reopened a preexisting mmap.
# Reopen existing mmap
# XXX Temporarily preventing buildbot failures while determining
# XXX the correct long-term fix. See issue 23060
#assert _winapi.GetLastError() == _winapi.ERROR_ALREADY_EXISTS
# Arena is created anew (if fd != -1, it means we're coming
# from rebuild_arena() below)
# Choose a non-storage backed directory if possible,
# to improve performance
# enough free space?
# Class allowing allocation of chunks of memory from arenas
# Minimum malloc() alignment
# 4 MB
# Current arena allocation size
# A sorted list of available block sizes in arenas
# Free block management:
# - map each block size to a list of `(Arena, start, stop)` blocks
# - map `(Arena, start)` tuple to the `(Arena, start, stop)` block
# - map `(Arena, stop)` tuple to the `(Arena, start, stop)` block
# Map arenas to their `(Arena, start, stop)` blocks in use
# List of pending blocks to free - see comment in free() below
# Statistics
# alignment must be a power of 2
# Create a new arena with at least the given *size*
# We carve larger and larger arenas, for efficiency, until we
# reach a large-ish size (roughly L3 cache-sized)
# Possibly delete the given (unused) arena
# Reusing an existing arena is faster than creating a new one, so
# we only reclaim space if it's large enough.
# returns a large enough block -- it might be much larger
# make block available and try to merge with its neighbours in the arena
# deregister this block so it can be merged with a neighbour
# Arena is entirely free, discard it from this process
# Free all the blocks in the pending list - called with the lock held.
# free a block returned by malloc()
# Since free() can be called asynchronously by the GC, it could happen
# that it's called while self._lock is held: in that case,
# self._lock.acquire() would deadlock (issue #12352). To avoid that, a
# trylock is used instead, and if the lock can't be acquired
# immediately, the block is added to a list of blocks to be freed
# synchronously sometimes later from malloc() or free(), by calling
# _free_pending_blocks() (appending and retrieving from a list is not
# strictly thread-safe but under CPython it's atomic thanks to the GIL).
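A sketch of the trylock pattern described above (simplified; not the real `Heap`):

```python
import threading

class Heap:
    # free() may be triggered by the GC while the lock is already held,
    # so it must never block on acquire.
    def __init__(self):
        self._lock = threading.Lock()
        self._pending_free = []
        self._freed = []

    def free(self, block):
        if self._lock.acquire(blocking=False):
            try:
                self._flush_pending()
                self._freed.append(block)
            finally:
                self._lock.release()
        else:
            # Lock unavailable (possibly held by this same thread):
            # defer the free; list.append is atomic under CPython.
            self._pending_free.append(block)

    def _flush_pending(self):
        # Called with the lock held.
        while self._pending_free:
            self._freed.append(self._pending_free.pop())

h = Heap()
h.free("blk1")
```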
# can't acquire the lock right now, add the block to the list of
# pending blocks to free
# we hold the lock
# return a block of right size (possibly rounded up)
# reinitialize after fork
# allow pending blocks to be marked available
# if the returned block is larger than necessary, mark
# the remainder available
# Class wrapping a block allocated out of a Heap -- can be inherited by child process
# Code used to start processes when using the spawn or forkserver
# start methods.
# multiprocessing/spawn.py
# _python_exe is the assumed path to the python executable.
# People embedding Python want to modify it.
# Figure out whether to initialise main in the subprocess as a module
# or through direct execution (or to leave it alone entirely)
# Prepare current process
# Multiprocessing module helpers to fix up the main module in
# spawned subprocesses
# __main__.py files for packages, directories, zip archives, etc, run
# their "main only" code unconditionally, so we don't even try to
# populate anything in __main__, nor do we make any changes to
# __main__ attributes
# If this process was forked, __main__ may already be populated
# Otherwise, __main__ may contain some non-main code where we need to
# support unpickling it properly. We rerun it as __mp_main__ and make
# the normal __main__ an alias to that
# Unfortunately, the main ipython launch script historically had no
# "if __name__ == '__main__'" guard, so we work around that
# by treating it like a __main__.py file
# See https://github.com/ipython/ipython/issues/4698
# Otherwise, if __file__ already has the setting we expect,
# there's nothing more to do
# If the parent process has sent a path through rather than a module
# name we assume it is an executable script that may contain
# non-main code that needs to be executed
# Module providing manager classes for dealing
# with shared objects
# multiprocessing/managers.py
# Register some things for pickling
# Type for identifying shared objects
# Function for communication with a manager's server process
# Functions for finding the method names of an object
# Server which is run in a process controlled by a manager
# do authentication later
# what about stderr?
# Server.serve_client() calls sys.exit(0) on EOF
# Perhaps include debug info about 'c'?
# Doesn't use (len(self.id_to_obj) - 1) as we shouldn't count ident='0'
# convert to string because xmlrpclib
# only has 32 bit signed integers
# If no external references exist but an internal (to the
# manager) still does and a new external reference is created
# from it, restore the manager's tracking of it from the
# previously stashed internal ref.
# Two-step process in case the object turns out to contain other
# proxy objects (e.g. a managed list of managed lists).
# Otherwise, deleting self.id_to_obj[ident] would trigger the
# deleting of the stored value (another managed object) which would
# in turn attempt to acquire the mutex that is already held here.
# thread-safe
# Class to represent state of a manager
# Mapping from serializer name to Listener and Client types
# Definition of BaseManager
# XXX not final address if eg ('', 0)
# pipe over which we will retrieve address of server
# spawn process which runs a server
# get address of server
# register a finalizer
# bpo-36368: protect server process from KeyboardInterrupt signals
# create server
# inform parent process of the server's address
# run the manager
# isinstance?
# Subclass of set which get cleared after a fork
# Definition of BaseProxy
# Each instance gets a `_serial` number. Unlike `id(...)`, this number
# is never reused.
# self._tls is used to record the connection used by this
# thread to communicate with the manager at token.address
# self._all_serials is a set used to record the identities of all
# shared objects for which the current process owns references and
# which are in the manager at token.address
# Should be set to True only when a proxy object is being created
# on the manager server; primary use case: nested proxy objects.
# RebuildProxy detects when a proxy is being created on the manager
# and sets this value appropriately.
# check whether manager is still alive
# tell manager this process no longer cares about referent
# check whether we can close this thread's connection because
# the process owns no more references to objects for this manager
# the proxy may just be for a manager which has shutdown
# Function used for unpickling
# Functions to create proxies and proxy types
# Types/callables which we will register with SyncManager
# Proxy types used by SyncManager
# Definition of SyncManager
# types returned by methods of PoolProxy
# Definition of SharedMemoryManager and SharedMemoryServer
# The address of Linux abstract namespaces can be bytes
# Unless set up as a shared proxy, don't make shared_memory_context
# a standard part of kwargs.  This makes things easier for supplying
# simple functions.
# bpo-36867: Ensure the resource_tracker is running before
# launching the manager process, so that concurrent
# shared_memory manipulation both in the manager and in the
# current process does not create two resource_tracker
# processes.
# Module providing the `Process` class which emulates `threading.Thread`
# multiprocessing/process.py
# Public functions
# check for processes which have finished
# The `Process` class
# Avoid a refcycle if the target function holds an indirect
# reference to the process object (see bpo-30775)
# delay finalization of the old process object until after
# _run_after_forkers() is executed
# We subclass bytes to avoid accidental transmission of auth keys over network
# Create object representing the parent process
# Create object representing the main process
# Note that some versions of FreeBSD only allow named
# semaphores to have names of up to 14 characters.  Therefore
# we choose a short prefix.
# On MacOSX in a sandbox it may be necessary to use a
# different prefix -- see #19478.
# Everything in self._config will be inherited by descendant
# Give names to some return codes
# For debug and leak testing
# Start child process using fork
# Child process not yet created. See #1731717
# e.errno == errno.ECHILD == 10
# This shouldn't block if wait() returned successfully.
# Module which supports allocation of ctypes objects from shared memory
# multiprocessing/sharedctypes.py
# Functions for pickling/unpickling
# Function to create properties
# Synchronized wrappers
# We use a background thread for sharing fds on Unix, and for sharing sockets on
# Windows.
# A client which wants to pickle a resource registers it with the resource
# sharer and gets an identifier in return.  The unpickling process will connect
# to the resource sharer, sends the identifier and its pid, and then receives
# the resource.
# Start child process using a fresh interpreter
# large enough for pid_t
# Forkserver class
# Method used by unit tests to stop the server
# closing the "alive" file descriptor asks the server to stop
# forkserver was launched before, is it still running?
# dead, launch it again
# all client processes own the write end of the "alive" pipe;
# when they all terminate the read end becomes ready.
# gh-135335: flush stdout/stderr in case any of the preloaded modules
# wrote to them, otherwise children might inherit buffered data
# Dummy signal handler, doesn't do anything
# unblocking SIGCHLD allows the wakeup fd to notify our event loop
# protect the process from ^C
# calling os.write() in the Python signal handler is racy
# map child pids to client fds
# EOF because no more client processes left
# Got SIGCHLD
# exhaust
# Scan for child processes
# Send exit code to client process
# client vanished
# This shouldn't happen really
# Incoming fork request
# Receive fds from client
# Send pid to client process
# close unnecessary stuff and reset signal handlers
# Run process object received over pipe
# Read and write signed numbers
# Module providing the `Pool` class for managing a process pool
# multiprocessing/pool.py
# If threading is available then ThreadPool should be provided.  Therefore
# we avoid top-level imports which are liable to fail on some systems.
# Constants representing the state of a pool
# Miscellaneous
# Code run by worker processes
# Class representing a process pool
# Notify that the cache is empty. This is important because the
# pool keeps maintaining workers until the cache gets drained. This
# eliminates a race condition in which a task is finished after
# the pool's _handle_workers method has entered another iteration of the
# loop. In this situation, the only event that can wake up the pool
# is the cache being emptied (no more tasks available).
# Attributes initialized early to make sure that they exist in
# __del__() if __init__() raises an exception
# The _change_notifier queue exists to wake up self._handle_workers()
# when the cache (self._cache) is empty or when there is a change in
# the _state variable of the thread that runs _handle_workers.
# Copy globals as function locals to make sure that they are available
# during Python shutdown when the Pool is destroyed.
# worker exited
# Keep maintaining workers until the cache gets drained, unless the pool
# is terminated.
# send sentinel to stop workers
# iterating taskseq cannot fail
# tell result handler to finish when cache is empty
# tell workers there is no more work
# If we don't make room available in outqueue then
# attempts to add the sentinel (None) to outqueue may
# block.  There is guaranteed to be no more than 2 sentinels.
# task_handler may be blocked trying to put items on inqueue
# this is guaranteed to only be called once
# Notify that the worker_handler state has been changed so the
# _handle_workers loop can be unblocked (and exited) in order to
# send the finalization sentinel to all the workers.
# We must wait for the worker handler to exit before terminating
# workers because we don't want workers to be restarted behind our back.
# Terminate workers which haven't already finished.
# worker has not yet exited
# Class whose instances are returned by `Pool.apply_async()`
# create alias -- see #17805
# Class whose instances are returned by `Pool.map_async()`
# only store first exception
# only consider the result ready once all jobs are done
# Class whose instances are returned by `Pool.imap()`
# Class whose instances are returned by `Pool.imap_unordered()`
# drain inqueue, and put sentinels at its head to make workers finish
# Base type for contexts. Bound methods of an instance of this type are included in __all__ of __init__.py
# This is undocumented.  In previous versions of multiprocessing
# its only effect was to make socket objects inheritable on Windows.
# Type of default context -- underlying context can be set at most once
# Context types for fixed start method
# process is spawned, nothing to do
# bpo-33725: running arbitrary code after fork() is no longer reliable
# on macOS since macOS 10.14 (Mojave). Use spawn by default instead.
# Force the start method
# Check that the current thread is spawning a child process
# Server process to keep track of unlinked resources (like shared memory
# segments, semaphores etc.) and clean them.
# On Unix we run a server process which keeps track of unlinked
# resources. The server ignores SIGINT and SIGTERM and reads from a
# pipe.  Every other process of the program has a copy of the writable
# end of the pipe, so we get EOF when all other processes have exited.
# Then the server process unlinks any remaining resource names.
# This is important because there may be system limits for such resources: for
# instance, the system only supports a limited number of named semaphores, and
# shared-memory segments live in the RAM. If a python process leaks such a
# resource, this resource will not be removed till the next reboot.  Without
# this resource tracker process, "killall python" would probably leave unlinked
# resources.
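The pipe-based lifetime detection above can be demonstrated with a plain `os.pipe()` (a POSIX-only sketch; `os.dup` stands in for the write end being inherited by each child process):

```python
import os

# The tracker holds the read end; every other process holds a copy
# of the write end.
r, w = os.pipe()

# Simulate two client processes each owning a copy of the write end.
w2 = os.dup(w)

os.close(w)    # first client exits; one writer remains, no EOF yet
os.close(w2)   # last client exits

# With no writers left, read() returns b'' (EOF): at this point the
# real tracker unlinks any resources still registered.
data = os.read(r, 512)
os.close(r)
print(data == b"")
```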
# Dummy resource used in tests
# Use sem_unlink() to clean up named semaphores.
# sem_unlink() may be missing if the Python build process detected the
# absence of POSIX named semaphores. In that case, no named semaphores were
# ever opened, so no cleanup would be necessary.
# gh-109629: this happens if an explicit call to the ResourceTracker
# gets interrupted by a garbage collection, invoking a finalizer (*)
# that itself calls back into ResourceTracker.
# making sure child processes are cleaned up before the ResourceTracker
# is destroyed.
# see https://github.com/python/cpython/issues/88887
# This shouldn't happen (it might when called by a finalizer)
# so we check for it anyway.
# not running
# closing the "alive" file descriptor stops main()
# os.waitstatus_to_exitcode may raise an exception for invalid values
# Clean-up to avoid dangling processes.
# _pid can be None if this process is a child from another
# python process, which has started the resource_tracker.
# The resource_tracker has already been terminated.
# process will outlive us, so no need to wait on pid
# bpo-33613: Register a signal mask that will block the signals.
# This signal mask will be inherited by the child that is going
# to be spawned and will protect the child from a race condition
# that can make the child die before it registers signal handlers
# for SIGINT and SIGTERM. The mask is unregistered after spawning
# The code below is certainly not reentrant-safe, so bail out
# resource tracker was launched before, is it still running?
# message was sent in probe
# We cannot use send here as it calls ensure_running, creating
# a cycle.
# posix guarantees that writes to a pipe of less than PIPE_BUF
# bytes are atomic, and that PIPE_BUF >= 512
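A sketch of how such a message write might look (the `send` helper and its `cmd:name:rtype` message format here are illustrative; the atomicity argument is what matters):

```python
import os

def send(fd, cmd, name, rtype):
    """Write one tracker message in a single syscall.  POSIX guarantees
    that writes of <= PIPE_BUF bytes to a pipe are atomic, and that
    PIPE_BUF >= 512, so messages under 512 bytes from concurrent
    processes never interleave."""
    msg = f"{cmd}:{name}:{rtype}\n".encode("ascii")
    if len(msg) > 512:
        raise ValueError("message too long for guaranteed-atomic pipe write")
    nbytes = os.write(fd, msg)
    assert nbytes == len(msg)

r, w = os.pipe()
send(w, "REGISTER", "/psm_abc123", "shared_memory")
print(os.read(r, 512))
```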
# protect the process from ^C and "killall python" etc
# keep track of registered/unregistered resources
# all processes have terminated; cleanup any remaining resources
# The test 'dummy' resource is expected to leak.
# We skip the warning (and *only* the warning) for it.
# For some reason the process which created and registered this
# resource has failed to unregister it. Presumably it has
# died.  We therefore unlink it.
# Module implementing queues
# multiprocessing/queues.py
# Queue type using a pipe, buffer and thread
# Can raise ImportError (see issues #3770 and #23400)
# For use by concurrent.futures
# unserialize the data after having released the lock
# Raises NotImplementedError on Mac OSX because of broken sem_getvalue()
# Close a Queue on error.
# gh-94777: Prevent queue writing to a pipe which is no longer read.
# gh-107219: Close the connection writer which can unblock
# Queue._feed() if it was stuck in send_bytes().
# Start thread which transfers data from buffer to pipe
# gh-109047: During Python finalization, creating a thread
# can fail with RuntimeError.
# Send sentinel to the thread queue object when garbage collected
# serialize the data before acquiring the lock
# Since this runs in a daemon thread the resources it uses
# may become unusable while the process is cleaning up.
# We ignore errors which happen after the process has
# started to clean up.
# Since the object has not been sent in the queue, we need
# to decrease the size of the queue. The error acts as
# if the object had been silently removed from the queue
# and this step is necessary to have a properly working
# queue.
# A queue type which also supports join() and task_done() methods
# Note that if you do not call task_done() for each finished task then
# eventually the counter's semaphore may overflow causing Bad Things
# to happen.
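The `task_done()`/`join()` contract works the same way in the threading analogue `queue.Queue`, which makes for a process-free illustration:

```python
import queue
import threading

q = queue.Queue()
for item in range(3):
    q.put(item)

results = []

def worker():
    while True:
        item = q.get()
        results.append(item * 2)
        q.task_done()       # exactly one task_done() per completed item

threading.Thread(target=worker, daemon=True).start()
q.join()                    # blocks until every put() has been matched
print(sorted(results))      # [0, 2, 4]
```

Skipping a `task_done()` call would leave `join()` blocked forever; calling it too often raises `ValueError` in the threading version, while the multiprocessing version may eventually overflow the internal semaphore, as the comment warns.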
# Simplified Queue type -- really just a locked pipe
# writes to a message oriented win32 pipe are atomic
# Exit code used by Popen.terminate()
# We define a Popen class similar to the one from subprocess, but
# whose constructor takes a process object as its argument.
# read end of pipe will be duplicated by the child process
# -- see spawn_main() in spawn.py.
# bpo-33929: Previously, the read end of pipe was "stolen" by the child
# process, but it leaked a handle if the child process had been
# terminated before it could steal the handle from the parent process.
# bpo-35797: When running in a venv, we bypass the redirect
# executor and launch our base Python.
# start process
# set attributes of self
# send information to child
# gh-113009: Don't set self.returncode. Even if GetExitCodeProcess()
# returns an exit code different than STILL_ACTIVE, the process can
# still be running. Only set self.returncode once WaitForSingleObject()
# returns WAIT_OBJECT_0 in wait().
# Module which deals with pickling of objects.
# multiprocessing/reduction.py
# Pickler subclass
# Platform specific definitions
# We just duplicate the handle in the current process and
# let the receiving process steal the handle.
# retrieve handle from process which currently owns it
# The handle has already been duplicated for this process.
# We must steal the handle from the process whose pid is self._pid.
# Unix
# On MacOSX we should acknowledge receipt of fds -- see Issue14669
# Try making some callable types picklable
# Make sockets picklable
# A higher level module for using sockets (or Windows named pipes)
# multiprocessing/connection.py
# A very generous timeout when it comes to local connections...
# double check
# Connection classes
# XXX should we use util.Finalize instead of a __del__?
# Get bytesize of arbitrary buffer
# Message can fit in dest
# Interrupt WaitForMultipleObjects() in _send_bytes()
# A connection should only be used by a single thread
# close() was called by another thread while
# WaitForMultipleObjects() was waiting for the overlapped
# operation.
# For wire compatibility with 3.7 and lower
# The payload is large so Nagle's algorithm won't be triggered
# and we'd better avoid the cost of concatenation.
# Issue #20540: concatenate before sending, to avoid delays due
# to Nagle's algorithm on a TCP socket.
# Also note we want to avoid sending a 0-length buffer separately,
# to avoid "broken pipe" errors if the other end closed the pipe.
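A sketch of the framing decision (the `frame` helper is hypothetical; the real logic lives in `Connection._send_bytes`, and the threshold here is illustrative):

```python
import struct

def frame(buf, threshold=16384):
    """Return the chunks a sketch of _send_bytes would pass to send():
    small payloads are concatenated with their 4-byte network-order
    length header to dodge Nagle-induced delays; large payloads fill
    packets on their own, so concatenation would only cost a copy."""
    header = struct.pack("!i", len(buf))
    if len(buf) > threshold:
        return [header, buf]
    # This branch also covers the empty buffer: a 0-length chunk is
    # never sent on its own, avoiding "broken pipe" errors when the
    # other end has already closed.
    return [header + buf]

print(len(frame(b"x" * 10)))     # small payload: 1 combined chunk
print(len(frame(b"x" * 20000)))  # large payload: header and body separate
```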
# default security descriptor: the handle cannot be inherited
# Definitions for connections based on sockets
# SO_REUSEADDR has different semantics on Windows (issue #2550).
# Linux abstract socket namespaces do not need to be explicitly unlinked
# Definitions for connections based on named pipes
# ERROR_NO_DATA can occur if a client has already connected,
# written data and then disconnected -- see Issue 14725.
# Authentication stuff
# MUST be > 20
# multiprocessing.connection Authentication Handshake Protocol Description
# (as documented for reference after reading the existing code)
# On Windows: native pipes with "overlapped IO" are used to send the bytes,
# instead of the length prefix SIZE scheme described below. (ie: the OS deals
# with message sizes for us)
# Protocol error behaviors:
# On POSIX, any failure to receive the length prefix into SIZE, a SIZE greater
# than the requested maxsize to receive, or receiving fewer than SIZE bytes
# results in the connection being closed and auth failing.
# On Windows, receiving too few bytes is never a low level _recv_bytes read
# error; receiving too many will trigger an error only if the receive maxsize
# value was larger than 128 OR if the data arrived in smaller pieces.
# 0.                                  Open a connection on the pipe.
# 1.  Accept connection.
# 2.  Random 20+ bytes -> MESSAGE
# 3.  send 4 byte length (net order)
# 4.                                  Receive 4 bytes, parse as network byte
# 5.                                  Receive min(SIZE, 256) bytes -> M1
# 6.                                  Assert that M1 starts with:
# 7.                                  Strip that prefix from M1 into -> M2
# 7.1.                                Parse M2: if it is exactly 20 bytes in
# 7.2.                                preferred digest is looked up from an
# 7.3.                                Put divined algorithm name in -> D_NAME
# 8.                                  Compute HMAC-D_NAME of AUTHKEY, M2 -> C_DIGEST
# 9.                                  Send 4 byte length prefix (net order)
# 10. Receive 4 or 4+8 byte length
# 11. Receive min(SIZE, 256) -> C_D.
# 11.1. Parse C_D: legacy servers
# 11.2. modern servers check the length
# 11.2.1. "md5" -> D_NAME
# 11.3. longer? expect and parse a "{digest}"
# 11.4. Don't like D_NAME? <- AuthenticationError
# 12. Compute HMAC-D_NAME of AUTHKEY,
# 13. Compare M_DIGEST == C_D:
# 14a: Match? Send length prefix &
# 14b: Mismatch? Send len prefix &
# 15.                                 Receive 4 or 4+8 byte length prefix (net
# 16.                                 Receive min(SIZE, 256) bytes -> M3.
# 17.                                 Compare M3 == b'#WELCOME#':
# 17a.                                Match? <- RETURN
# 17b.                                Mismatch? <- CLOSE & AuthenticationError
# If this RETURNed, the connection remains open: it has been authenticated.
# Length prefixes are used consistently. Even on the legacy protocol, this
# was good fortune and allowed us to evolve the protocol by using the length
# of the opening challenge or length of the returned digest as a signal as
# to which protocol the other end supports.
# Old hmac-md5 only server versions from Python <=3.11 sent a message of this
# length. It happens to not match the length of any supported digest so we can
# use a message of this length to indicate that we should work in backwards
# compatible md5-only mode without a {digest_name} prefix on our response.
# type: (bytes) -> tuple[str, bytes]
# modern message format: b"{digest}payload" longer than 20 bytes
# legacy message format: 16 or 20 byte b"payload"
# Either this was a legacy server challenge, or we're processing
# a reply from a legacy client that sent an unprefixed 16-byte
# HMAC-MD5 response. All messages using the modern protocol will
# be longer than either of these lengths.
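The length-based dispatch can be sketched like this (`parse_message` is a hypothetical helper mirroring the rules above, not the module's actual function):

```python
def parse_message(msg):
    """Return (digest_name, payload).  16- and 20-byte messages are
    the legacy unprefixed HMAC-MD5 forms; modern messages carry a
    b"{name}" prefix and are longer than both legacy lengths."""
    if len(msg) in (16, 20):
        # Legacy server challenge, or a legacy client's raw
        # HMAC-MD5 reply: no digest prefix, assume md5.
        return "md5", msg
    if msg.startswith(b"{"):
        end = msg.index(b"}")
        return msg[1:end].decode("ascii"), msg[end + 1:]
    raise ValueError("unrecognized message format")

print(parse_message(b"A" * 20))
print(parse_message(b"{sha256}" + b"B" * 32))
```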
# The MAC protects the entire message: digest header and payload.
# Legacy server without a {digest} prefix on message.
# Generate a legacy non-prefixed HMAC-MD5 reply.
# HMAC-MD5 is not available (FIPS mode?), fall back to
# HMAC-SHA2-256 modern protocol. The legacy server probably
# doesn't support it and will reject us anyways. :shrug:
# Modern protocol, indicate the digest used in the reply.
# Even when sending a challenge to a legacy client that does not support
# digest prefixes, they'll take the entire thing as a challenge and
# respond to it with a raw HMAC-MD5.
# reject large message
# Support for using xmlrpclib for serialization
# Return ALL handles which are currently signalled.  (Only
# returning the first signalled might create starvation issues.)
# Windows limits WaitForMultipleObjects at 64 handles, and we use a
# few for synchronisation, so we switch to batched waits at 60.
# start an overlapped read of length zero
# If o.fileno() is an overlapped pipe handle and
# err == 0 then there is a zero length message
# in the pipe, but it HAS NOT been consumed...
# ... except on Windows 8 and later, where
# the message HAS been consumed.
# request that overlapped reads stop
# wait for all overlapped reads to stop
# If o.fileno() is an overlapped pipe handle then
# a zero length message HAS been consumed.
# Make connection and socket objects shareable if possible
# FreeBSD (and perhaps other BSDs) limit names to 14 characters.
# Shared memory block name prefix
# number of random bytes to use for name
# Defaults; enables close() and unlink() to run without errors.
# POSIX Shared Memory
# Windows Named Shared Memory
# Create and reserve shared memory block with this name
# until it can be attached to by mmap.
# Dynamically determine the existing named shared memory
# block's size which is likely a multiple of mmap.PAGESIZE.
# The shared memory area is organized as follows:
# - 8 bytes: number of items (N) as a 64-bit integer
# - (N + 1) * 8 bytes: offsets of each element from the start of the
# - K bytes: the data area storing item values (with encoding and size
# - N * 8 bytes: `struct` format string for each element
# - N bytes: index into _back_transforms_mapping for each element
# int, float, bool
# NoneType
# The offsets of each list element into the shared memory's
# data area (0 meaning the start of the data area, not the start
# of the shared memory area).
# Obtains size from offset 0 in buffer.
# - 8 bytes for the list length
# - (N + 1) * 8 bytes for the element offsets
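The metadata layout can be sketched with `struct` (hypothetical helpers; `q` is a signed 64-bit integer, matching the 8-byte fields described above):

```python
import struct

def build_header(offsets):
    """Pack the metadata prefix described above: 8 bytes for the item
    count N, then (N + 1) * 8 bytes of element offsets -- the final
    offset marks the end of the data area."""
    n = len(offsets) - 1
    return struct.pack(f"q{n + 1}q", n, *offsets)

def read_header(buf):
    # The item count is read first, from offset 0; it then tells us
    # how many offsets follow.
    (n,) = struct.unpack_from("q", buf, 0)
    offsets = struct.unpack_from(f"{n + 1}q", buf, 8)
    return n, list(offsets)

header = build_header([0, 8, 16, 24])   # 3 items, 8 bytes each
print(read_header(header))              # (3, [0, 8, 16, 24])
```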
# Module implementing synchronization primitives
# multiprocessing/synchronize.py
# Try to import the mp.synchronize module cleanly, if it fails
# raise ImportError for platforms lacking a working sem_open implementation.
# See issue 3770
# Base class for semaphores and mutexes; wraps `_multiprocessing.SemLock`
# We only get here if we are on Unix with forking
# disabled.  When the object is garbage collected or the
# process shuts down we unlink the semaphore name
# Ensure that deserialized SemLock can be serialized again (gh-108520).
# Semaphore
# Bounded semaphore
# Non-recursive lock
# Recursive lock
# Condition variable
# indicate that this thread is going to sleep
# release lock
# wait for notification or timeout
# indicate that this thread has woken
# reacquire lock
# to take account of timeouts since last notify*() we subtract
# woken_count from sleeping_count and rezero woken_count
# wake up one sleeper
# wait for a sleeper to wake
# rezero wait_semaphore in case some timeouts just happened
# Event
# Barrier
# Support for the API of the multiprocessing package using threads
# multiprocessing/dummy/__init__.py
# Analogue of `multiprocessing.connection` which uses queues instead of sockets
# multiprocessing/dummy/connection.py
# Secret Labs' Regular Expression Engine
# re-compatible interface for the sre matching engine
# Copyright (c) 1998-2001 by Secret Labs AB.  All rights reserved.
# This version of the SRE library can be redistributed under CNRI's
# Python 1.6 license.  For any other use, please contact Secret Labs
# AB (info@pythonware.com).
# Portions of this engine have been developed in cooperation with
# CNRI.  Hewlett-Packard provided funding for 1.6 integration and
# other compatibility work.
# assume ascii "locale"
# ignore case
# assume current 8-bit locale
# assume unicode "locale"
# make anchors look for newline
# make dot match newline
# ignore whitespace and comments
# sre extensions (experimental, don't rely on these)
# dump pattern after compilation
# sre exception
# public interface
# SPECIAL_CHARS
# closing ')', '}' and ']'
# '-' (a range in character set)
# '&', '~', (extended character set operations)
# '#' (comment) and WHITESPACE (ignored) in verbose mode
# internals
# Use the fact that dict keeps the insertion order.
# _cache2 uses the simple FIFO policy which has better latency.
# _cache uses the LRU policy which has better hit rate.
# LRU
# FIFO
# internal: compile pattern
# Item in _cache should be moved to the end if found.
# Drop the least recently used item.
# next(iter(_cache)) is known to have linear amortized time,
# but it is used here to avoid a dependency from using OrderedDict.
# For the small _MAXCACHE value it doesn't make much of a difference.
# Append to the end.
# Drop the oldest item.
# internal: compile replacement pattern
# register myself for pickling
# experimental stuff (see python-dev discussions for details)
# combine phrases into a compound pattern
# convert re-style regular expression to sre pattern
# See the __init__.py file for information on usage and redistribution.
# XXX: show string offset and offending character for all errors
# start of string
# end of string
# standard flags
# Maximal value returned by SubPattern.getwidth().
# Must be larger than MAXREPEAT, MAXCODE and sys.maxsize.
# keeps track of state for parsing
# group 0
# a subpattern, in intermediate form
# member sublanguage
# determine the width (min, max) for this subpattern
# handle escape code inside character class
# hexadecimal escape (exactly two digits)
# unicode escape (exactly four digits)
# unicode escape (exactly eight digits)
# raise ValueError for invalid code
# named unicode escape e.g. \N{EM DASH}
# octal escape (up to three digits)
# handle escape code in expression
# hexadecimal escape
# octal escape
# octal escape *or* decimal group reference (sigh)
# got three octal digits; this is an octal escape
# not an octal escape, so this is a group reference
# parse an alternation: a|b|c
# check if all items share a common prefix
# all subitems start with a common "prefix".
# move it out of the branch
# check next one
# check if the branch can be replaced by a character set
# we can store this as a character set instead of a
# branch (the compiler may optimize this even more)
# parse a simple pattern
# precompute constants into local variables
# end of pattern
# end of subpattern
# character set
### check remaining characters
# potential range
# XXX: <fl> should move set optimization to compiler!
# optimization
# charmap optimization can't be added here because
# global flags still are not known
# repeat previous item
# figure out which item to repeat
# Non-Greedy Match
# Possessive Match (Always Greedy)
# Greedy Match
# options
# python extensions
# named group: skip forward to end of name
# named backreference
# non-capturing group
# lookahead assertions
# lookbehind
# conditional backreference group
# non-capturing, atomic group
# global flags
# parse group contents
# unpack non-capturing groups
# Check and fix flags according to the type of pattern (str or bytes)
# parse 're' pattern into list of (opcode, argument) tuples
# parse 're' replacement string into list of literals and
# group references
# The tokenizer implicitly decodes bytes objects as latin-1, we must
# therefore re-encode the final representation.
# end of replacement string
# group
# convert template to internal format
# Copyright (c) 1997-2001 by Secret Labs AB.  All rights reserved.
# internal: compile a (sub)pattern
# _compile_info(code, p, _combine_flags(flags, add_flags, del_flags))
# Atomic Groups are handled by starting with an Atomic
# Group op code, then putting in the atomic group pattern
# and finally a success op code to tell any repeat
# operations within the Atomic Group to stop eating and
# pop their stack if they reach it
# look ahead
# look behind
# _compile_info(code, av, flags)
# end of branch
# compile charset subprogram
# internal: optimize character set
# IGNORECASE and not LOCALE
# character set contains non-UCS1 character codes
# Character set contains non-BMP character codes.
# For range, all BMP characters in the range are already
# processed.
# For now, IN_UNI_IGNORE+LITERAL and
# IN_UNI_IGNORE+RANGE_UNI_IGNORE work for all non-BMP
# characters, because two characters (at least one of
# which is not in the BMP) match case-insensitively
# if and only if:
# 1) c1.lower() == c2.lower()
# 2) c1.lower() == c2 or c1.lower().upper() == c2
# Also, both c.lower() and c.lower().upper() are single
# characters for every non-BMP character.
# not ASCII
# compress character map
# use literal/range
# if the case was changed or new representation is more compact
# else original character set is good enough
# use bitmap
# To represent a big charset, first a bitmap of all characters in the
# set is constructed. Then, this bitmap is sliced into chunks of 256
# characters, duplicate chunks are eliminated, and each chunk is
# given a number. In the compiled expression, the charset is
# represented by a 32-bit word sequence, consisting of one word for
# the number of different chunks, a sequence of 256 bytes (64 words)
# of chunk numbers indexed by their original chunk position, and a
# sequence of 256-bit chunks (8 words each).
# Compression is normally good: in a typical charset, large ranges of
# Unicode will be either completely excluded (e.g. if only cyrillic
# letters are to be matched), or completely included (e.g. if large
# subranges of Kanji match). These ranges will be represented by
# chunks of all one-bits or all zero-bits.
# Matching can be also done efficiently: the more significant byte of
# the Unicode character is an index into the chunk number, and the
# less significant byte is a bit index in the chunk (just like the
# CHARSET matching).
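A simplified model of the chunking scheme (this sketch uses one byte per character where the real compiler packs bits into 32-bit words, and the helper names are hypothetical):

```python
def compress_charset(codepoints, maxcode=0x10000):
    """Slice the full membership bitmap into 256-character chunks,
    deduplicate identical chunks, and keep a per-position index of
    chunk numbers -- the structure described above, minus the
    bit-packing."""
    chunks, index = {}, []
    members = set(codepoints)
    for base in range(0, maxcode, 256):
        chunk = bytes(1 if (base + i) in members else 0
                      for i in range(256))
        index.append(chunks.setdefault(chunk, len(chunks)))
    return chunks, index

def contains(chunks, index, code):
    # Lookup: the high byte selects the chunk via the index, the low
    # byte is the position inside the chunk.
    inverse = {num: chunk for chunk, num in chunks.items()}
    return inverse[index[code >> 8]][code & 0xFF] == 1

# A contiguous block (Cyrillic) deduplicates to just two distinct
# chunks: all-ones and all-zeros.
chunks, index = compress_charset(range(0x400, 0x500))
print(len(chunks))
print(contains(chunks, index, 0x430), contains(chunks, index, 0x41))
```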
# should be hashable
# Convert block indices to word array
# check if this subpattern is a "simple" operator
# look for literal prefix
# internal: compile an info block.  in the current version,
# this contains min/max pattern width, and an optional literal
# prefix or a character map
# look for a literal prefix
# if no prefix, look for charset prefix
##### add an info block
# literal flag
# pattern length
# add literal prefix
# skip
# generate overlap table
# compile info block
# compile the pattern
# internal: convert pattern list to internal format
# map in either direction
# Auto-generated by Tools/build/generate_re_casefix.py.
# Maps the code of lowercased character to codes of different lowercased
# characters which have the same uppercase.
# LATIN SMALL LETTER I: LATIN SMALL LETTER DOTLESS I
# 'i': 'ı'
# LATIN SMALL LETTER S: LATIN SMALL LETTER LONG S
# 's': 'ſ'
# MICRO SIGN: GREEK SMALL LETTER MU
# 'µ': 'μ'
# LATIN SMALL LETTER DOTLESS I: LATIN SMALL LETTER I
# 'ı': 'i'
# LATIN SMALL LETTER LONG S: LATIN SMALL LETTER S
# 'ſ': 's'
# COMBINING GREEK YPOGEGRAMMENI: GREEK SMALL LETTER IOTA, GREEK PROSGEGRAMMENI
# '\u0345': 'ιι'
# GREEK SMALL LETTER IOTA WITH DIALYTIKA AND TONOS: GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA
# 'ΐ': 'ΐ'
# GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND TONOS: GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND OXIA
# 'ΰ': 'ΰ'
# GREEK SMALL LETTER BETA: GREEK BETA SYMBOL
# 'β': 'ϐ'
# GREEK SMALL LETTER EPSILON: GREEK LUNATE EPSILON SYMBOL
# 'ε': 'ϵ'
# GREEK SMALL LETTER THETA: GREEK THETA SYMBOL
# 'θ': 'ϑ'
# GREEK SMALL LETTER IOTA: COMBINING GREEK YPOGEGRAMMENI, GREEK PROSGEGRAMMENI
# 'ι': '\u0345ι'
# GREEK SMALL LETTER KAPPA: GREEK KAPPA SYMBOL
# 'κ': 'ϰ'
# GREEK SMALL LETTER MU: MICRO SIGN
# 'μ': 'µ'
# GREEK SMALL LETTER PI: GREEK PI SYMBOL
# 'π': 'ϖ'
# GREEK SMALL LETTER RHO: GREEK RHO SYMBOL
# 'ρ': 'ϱ'
# GREEK SMALL LETTER FINAL SIGMA: GREEK SMALL LETTER SIGMA
# 'ς': 'σ'
# GREEK SMALL LETTER SIGMA: GREEK SMALL LETTER FINAL SIGMA
# 'σ': 'ς'
# GREEK SMALL LETTER PHI: GREEK PHI SYMBOL
# 'φ': 'ϕ'
# GREEK BETA SYMBOL: GREEK SMALL LETTER BETA
# 'ϐ': 'β'
# GREEK THETA SYMBOL: GREEK SMALL LETTER THETA
# 'ϑ': 'θ'
# GREEK PHI SYMBOL: GREEK SMALL LETTER PHI
# 'ϕ': 'φ'
# GREEK PI SYMBOL: GREEK SMALL LETTER PI
# 'ϖ': 'π'
# GREEK KAPPA SYMBOL: GREEK SMALL LETTER KAPPA
# 'ϰ': 'κ'
# GREEK RHO SYMBOL: GREEK SMALL LETTER RHO
# 'ϱ': 'ρ'
# GREEK LUNATE EPSILON SYMBOL: GREEK SMALL LETTER EPSILON
# 'ϵ': 'ε'
# CYRILLIC SMALL LETTER VE: CYRILLIC SMALL LETTER ROUNDED VE
# 'в': 'ᲀ'
# CYRILLIC SMALL LETTER DE: CYRILLIC SMALL LETTER LONG-LEGGED DE
# 'д': 'ᲁ'
# CYRILLIC SMALL LETTER O: CYRILLIC SMALL LETTER NARROW O
# 'о': 'ᲂ'
# CYRILLIC SMALL LETTER ES: CYRILLIC SMALL LETTER WIDE ES
# 'с': 'ᲃ'
# CYRILLIC SMALL LETTER TE: CYRILLIC SMALL LETTER TALL TE, CYRILLIC SMALL LETTER THREE-LEGGED TE
# 'т': 'ᲄᲅ'
# CYRILLIC SMALL LETTER HARD SIGN: CYRILLIC SMALL LETTER TALL HARD SIGN
# 'ъ': 'ᲆ'
# CYRILLIC SMALL LETTER YAT: CYRILLIC SMALL LETTER TALL YAT
# 'ѣ': 'ᲇ'
# CYRILLIC SMALL LETTER ROUNDED VE: CYRILLIC SMALL LETTER VE
# 'ᲀ': 'в'
# CYRILLIC SMALL LETTER LONG-LEGGED DE: CYRILLIC SMALL LETTER DE
# 'ᲁ': 'д'
# CYRILLIC SMALL LETTER NARROW O: CYRILLIC SMALL LETTER O
# 'ᲂ': 'о'
# CYRILLIC SMALL LETTER WIDE ES: CYRILLIC SMALL LETTER ES
# 'ᲃ': 'с'
# CYRILLIC SMALL LETTER TALL TE: CYRILLIC SMALL LETTER TE, CYRILLIC SMALL LETTER THREE-LEGGED TE
# 'ᲄ': 'тᲅ'
# CYRILLIC SMALL LETTER THREE-LEGGED TE: CYRILLIC SMALL LETTER TE, CYRILLIC SMALL LETTER TALL TE
# 'ᲅ': 'тᲄ'
# CYRILLIC SMALL LETTER TALL HARD SIGN: CYRILLIC SMALL LETTER HARD SIGN
# 'ᲆ': 'ъ'
# CYRILLIC SMALL LETTER TALL YAT: CYRILLIC SMALL LETTER YAT
# 'ᲇ': 'ѣ'
# CYRILLIC SMALL LETTER UNBLENDED UK: CYRILLIC SMALL LETTER MONOGRAPH UK
# 'ᲈ': 'ꙋ'
# LATIN SMALL LETTER S WITH DOT ABOVE: LATIN SMALL LETTER LONG S WITH DOT ABOVE
# 'ṡ': 'ẛ'
# LATIN SMALL LETTER LONG S WITH DOT ABOVE: LATIN SMALL LETTER S WITH DOT ABOVE
# 'ẛ': 'ṡ'
# GREEK PROSGEGRAMMENI: COMBINING GREEK YPOGEGRAMMENI, GREEK SMALL LETTER IOTA
# 'ι': '\u0345ι'
# GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA: GREEK SMALL LETTER IOTA WITH DIALYTIKA AND TONOS
# 'ΐ': 'ΐ'
# GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND OXIA: GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND TONOS
# 'ΰ': 'ΰ'
# CYRILLIC SMALL LETTER MONOGRAPH UK: CYRILLIC SMALL LETTER UNBLENDED UK
# 'ꙋ': 'ᲈ'
# LATIN SMALL LIGATURE LONG S T: LATIN SMALL LIGATURE ST
# 'ﬅ': 'ﬆ'
# LATIN SMALL LIGATURE ST: LATIN SMALL LIGATURE LONG S T
# 'ﬆ': 'ﬅ'
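The point of the table above is that `str.lower()` alone cannot relate these pairs, since each character is already lowercase; they are linked only through a shared uppercase form. A quick demonstration (the `re` behavior assumes Python 3.11+, where this table is used for literal matching):

```python
import re

# 'ſ' (LATIN SMALL LETTER LONG S) lowercases to itself, yet shares its
# uppercase 'S' with plain 's' -- exactly the relation the table records.
assert 'ſ'.lower() == 'ſ'
assert 'ſ'.upper() == 'S' == 's'.upper()

# With re.IGNORECASE, a literal 's' therefore also matches 'ſ'.
assert re.fullmatch('s', 'ſ', re.IGNORECASE)
```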
# various symbols used by the regular expression engine.
# run this script to update the _sre include files!
# update when constants are added or removed
# SRE standard exception (access as sre.error)
# should this really be here?
# Backward compatibility after renaming in 3.13
# failure=0 success=1 (just because it looks better that way :-)
# The following opcodes occur only in the parser output,
# but not in the compiled code.
# remove MIN_REPEAT and MAX_REPEAT
# positions
# categories
# replacement operations for "ignore case" mode
# case insensitive
# honour system locale
# treat target as multiline string
# treat target as a single string
# use unicode "locale"
# use ascii "locale"
# flags for INFO primitive
# has prefix
# entire pattern is literal (given by prefix)
# pattern starts with character from given set
# Must be done first!
# see https://html.spec.whatwg.org/multipage/parsing.html#numeric-character-reference-end-state
# REPLACEMENT CHARACTER
# CARRIAGE RETURN
# EURO SIGN
# <control>
# SINGLE LOW-9 QUOTATION MARK
# LATIN SMALL LETTER F WITH HOOK
# DOUBLE LOW-9 QUOTATION MARK
# HORIZONTAL ELLIPSIS
# DAGGER
# DOUBLE DAGGER
# MODIFIER LETTER CIRCUMFLEX ACCENT
# PER MILLE SIGN
# LATIN CAPITAL LETTER S WITH CARON
# SINGLE LEFT-POINTING ANGLE QUOTATION MARK
# LATIN CAPITAL LIGATURE OE
# LATIN CAPITAL LETTER Z WITH CARON
# LEFT SINGLE QUOTATION MARK
# RIGHT SINGLE QUOTATION MARK
# LEFT DOUBLE QUOTATION MARK
# RIGHT DOUBLE QUOTATION MARK
# BULLET
# EN DASH
# EM DASH
# SMALL TILDE
# TRADE MARK SIGN
# LATIN SMALL LETTER S WITH CARON
# SINGLE RIGHT-POINTING ANGLE QUOTATION MARK
# LATIN SMALL LIGATURE OE
# LATIN SMALL LETTER Z WITH CARON
# LATIN CAPITAL LETTER Y WITH DIAERESIS
# 0x0001 to 0x0008
# 0x000E to 0x001F
# 0x007F to 0x009F
# 0xFDD0 to 0xFDEF
# numeric charref
# named charref
# find the longest matching name (as defined by the standard)
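Both rules above are observable through `html.unescape`: numeric references in the 0x80-0x9F window are remapped per the WHATWG table, and named references are resolved by the longest known name even without a trailing semicolon:

```python
import html

# numeric charref: C1-range code points are remapped per the spec table
assert html.unescape('&#x80;') == '\u20ac'    # EURO SIGN
assert html.unescape('&#13;') == '\r'         # CARRIAGE RETURN

# named charref: longest matching name wins, ';' not required for
# legacy names such as 'not'
assert html.unescape('&notit;') == '\xacit;'  # '&not' -> NOT SIGN
```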
# This file is based on sgmllib.py, but the API is slightly different.
# XXX There should be a way to distinguish between PCDATA (parsed
# character data -- the normal case), RCDATA (replaceable character
# data -- only char and entity references and end tags are special)
# and CDATA (character data -- only end tags are special).
# Regular expressions used for parsing
# Note:
# see the HTML5 specs section "13.2.5.6 Tag open state",
# "13.2.5.8 Tag name state" and "13.2.5.33 Attribute name state".
# https://html.spec.whatwg.org/multipage/parsing.html#tag-open-state
# https://html.spec.whatwg.org/multipage/parsing.html#tag-name-state
# https://html.spec.whatwg.org/multipage/parsing.html#attribute-name-state
# The following variables are not used, but are temporarily left for
# backward compatibility.
# Character reference processing logic specific to attribute values
# See: https://html.spec.whatwg.org/multipage/parsing.html#named-character-reference-state
# Numeric / hex char refs must always be unescaped
# Named character / entity references must only be unescaped
# if they are an exact match, and they are not followed by an equals sign
# Otherwise do not unescape
# Internal -- handle data as far as reasonable.  May leave state
# and data to be processed by a subsequent call.  If 'end' is
# true, force handling all data as if followed by EOF marker.
# if we can't find the next <, either we are at the end
# or there's more text incoming.  If the latter is the case,
# we can't pass the text to handle_data in case we have
# a charref cut in half at end.  Try to determine if
# this is the case before proceeding by looking for an
# & near the end and seeing if it's followed by a space or ;.
# wait till we get all the text
# < or &
# < + letter
# </ + letter
# bogus comment
# bail by consuming &#
# match.group() will contain at least 2 chars
# not the end of the buffer, and can't be confused
# with some other construct
# end while
# Internal -- parse html declarations, return length or -1 if not terminated
# See w3.org/TR/html5/tokenization.html#markup-declaration-open-state
# See also parse_declaration in _markupbase
# this case is actually already handled in goahead()
# find the closing >
# see https://html.spec.whatwg.org/multipage/parsing.html#comment-start-state
# Internal -- parse bogus comment, return length or -1 if not terminated
# see https://html.spec.whatwg.org/multipage/parsing.html#bogus-comment-state
# Internal -- parse processing instr, return end or -1 if not terminated
# >
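The declaration, bogus-comment and processing-instruction paths above all surface through overridable handlers on `HTMLParser`. A minimal subclass (the class name is invented for the example) shows which handler each construct reaches:

```python
from html.parser import HTMLParser

class DeclDemo(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_comment(self, data):
        self.events.append(('comment', data))
    def handle_decl(self, decl):
        self.events.append(('decl', decl))
    def handle_pi(self, data):
        # data is everything between '<?' and the first '>'
        self.events.append(('pi', data))

p = DeclDemo()
p.feed('<!--hi--><!DOCTYPE html><?php echo 1 ?>')
assert p.events == [('comment', 'hi'),
                    ('decl', 'DOCTYPE html'),
                    ('pi', 'php echo 1 ?')]
```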
# Internal -- handle starttag, return end or -1 if not terminated
# See the HTML5 specs section "13.2.5.8 Tag name state"
# Now parse the data between i+1 and j into a tag and attrs
# XHTML-style empty tag: <span attr="value" />
# Internal -- check to see if we have a complete starttag; return end
# or -1 if incomplete.
# Internal -- parse endtag, return end or -1 if incomplete
# See the HTML5 specs section "13.2.5.7 End tag open state"
# https://html.spec.whatwg.org/multipage/parsing.html#end-tag-open-state
# fast check
# </> is ignored
# "missing-end-tag-name" parser error
# find the name: "13.2.5.8 Tag name state"
# Overridable -- finish processing of start+end tag: <tag.../>
# Overridable -- handle start tag
# Overridable -- handle end tag
# Overridable -- handle character reference
# Overridable -- handle entity reference
# Overridable -- handle data
# Overridable -- handle comment
# Overridable -- handle declaration
# Overridable -- handle processing instruction
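These overridable methods are the intended extension points; a small subclass (name invented for the example) shows the order in which they fire for ordinary markup, and that attribute values arrive as a list of (name, value) tuples with the tag name lowercased:

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(('start', tag, attrs))
    def handle_endtag(self, tag):
        self.events.append(('end', tag))
    def handle_data(self, data):
        self.events.append(('data', data))

p = TagLogger()
p.feed('<A href="/x">hi</A>')
assert p.events == [('start', 'a', [('href', '/x')]),
                    ('data', 'hi'),
                    ('end', 'a')]
```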
# maps HTML4 entity name to the Unicode code point
# latin capital letter AE = latin capital ligature AE, U+00C6 ISOlat1
# latin capital letter A with acute, U+00C1 ISOlat1
# latin capital letter A with circumflex, U+00C2 ISOlat1
# latin capital letter A with grave = latin capital letter A grave, U+00C0 ISOlat1
# greek capital letter alpha, U+0391
# latin capital letter A with ring above = latin capital letter A ring, U+00C5 ISOlat1
# latin capital letter A with tilde, U+00C3 ISOlat1
# latin capital letter A with diaeresis, U+00C4 ISOlat1
# greek capital letter beta, U+0392
# latin capital letter C with cedilla, U+00C7 ISOlat1
# greek capital letter chi, U+03A7
# double dagger, U+2021 ISOpub
# greek capital letter delta, U+0394 ISOgrk3
# latin capital letter ETH, U+00D0 ISOlat1
# latin capital letter E with acute, U+00C9 ISOlat1
# latin capital letter E with circumflex, U+00CA ISOlat1
# latin capital letter E with grave, U+00C8 ISOlat1
# greek capital letter epsilon, U+0395
# greek capital letter eta, U+0397
# latin capital letter E with diaeresis, U+00CB ISOlat1
# greek capital letter gamma, U+0393 ISOgrk3
# latin capital letter I with acute, U+00CD ISOlat1
# latin capital letter I with circumflex, U+00CE ISOlat1
# latin capital letter I with grave, U+00CC ISOlat1
# greek capital letter iota, U+0399
# latin capital letter I with diaeresis, U+00CF ISOlat1
# greek capital letter kappa, U+039A
# greek capital letter lambda, U+039B ISOgrk3
# greek capital letter mu, U+039C
# latin capital letter N with tilde, U+00D1 ISOlat1
# greek capital letter nu, U+039D
# latin capital ligature OE, U+0152 ISOlat2
# latin capital letter O with acute, U+00D3 ISOlat1
# latin capital letter O with circumflex, U+00D4 ISOlat1
# latin capital letter O with grave, U+00D2 ISOlat1
# greek capital letter omega, U+03A9 ISOgrk3
# greek capital letter omicron, U+039F
# latin capital letter O with stroke = latin capital letter O slash, U+00D8 ISOlat1
# latin capital letter O with tilde, U+00D5 ISOlat1
# latin capital letter O with diaeresis, U+00D6 ISOlat1
# greek capital letter phi, U+03A6 ISOgrk3
# greek capital letter pi, U+03A0 ISOgrk3
# double prime = seconds = inches, U+2033 ISOtech
# greek capital letter psi, U+03A8 ISOgrk3
# greek capital letter rho, U+03A1
# latin capital letter S with caron, U+0160 ISOlat2
# greek capital letter sigma, U+03A3 ISOgrk3
# latin capital letter THORN, U+00DE ISOlat1
# greek capital letter tau, U+03A4
# greek capital letter theta, U+0398 ISOgrk3
# latin capital letter U with acute, U+00DA ISOlat1
# latin capital letter U with circumflex, U+00DB ISOlat1
# latin capital letter U with grave, U+00D9 ISOlat1
# greek capital letter upsilon, U+03A5 ISOgrk3
# latin capital letter U with diaeresis, U+00DC ISOlat1
# greek capital letter xi, U+039E ISOgrk3
# latin capital letter Y with acute, U+00DD ISOlat1
# latin capital letter Y with diaeresis, U+0178 ISOlat2
# greek capital letter zeta, U+0396
# latin small letter a with acute, U+00E1 ISOlat1
# latin small letter a with circumflex, U+00E2 ISOlat1
# acute accent = spacing acute, U+00B4 ISOdia
# latin small letter ae = latin small ligature ae, U+00E6 ISOlat1
# latin small letter a with grave = latin small letter a grave, U+00E0 ISOlat1
# alef symbol = first transfinite cardinal, U+2135 NEW
# greek small letter alpha, U+03B1 ISOgrk3
# ampersand, U+0026 ISOnum
# logical and = wedge, U+2227 ISOtech
# angle, U+2220 ISOamso
# latin small letter a with ring above = latin small letter a ring, U+00E5 ISOlat1
# almost equal to = asymptotic to, U+2248 ISOamsr
# latin small letter a with tilde, U+00E3 ISOlat1
# latin small letter a with diaeresis, U+00E4 ISOlat1
# double low-9 quotation mark, U+201E NEW
# greek small letter beta, U+03B2 ISOgrk3
# broken bar = broken vertical bar, U+00A6 ISOnum
# bullet = black small circle, U+2022 ISOpub
# intersection = cap, U+2229 ISOtech
# latin small letter c with cedilla, U+00E7 ISOlat1
# cedilla = spacing cedilla, U+00B8 ISOdia
# cent sign, U+00A2 ISOnum
# greek small letter chi, U+03C7 ISOgrk3
# modifier letter circumflex accent, U+02C6 ISOpub
# black club suit = shamrock, U+2663 ISOpub
# approximately equal to, U+2245 ISOtech
# copyright sign, U+00A9 ISOnum
# downwards arrow with corner leftwards = carriage return, U+21B5 NEW
# union = cup, U+222A ISOtech
# currency sign, U+00A4 ISOnum
# downwards double arrow, U+21D3 ISOamsa
# dagger, U+2020 ISOpub
# downwards arrow, U+2193 ISOnum
# degree sign, U+00B0 ISOnum
# greek small letter delta, U+03B4 ISOgrk3
# black diamond suit, U+2666 ISOpub
# division sign, U+00F7 ISOnum
# latin small letter e with acute, U+00E9 ISOlat1
# latin small letter e with circumflex, U+00EA ISOlat1
# latin small letter e with grave, U+00E8 ISOlat1
# empty set = null set = diameter, U+2205 ISOamso
# em space, U+2003 ISOpub
# en space, U+2002 ISOpub
# greek small letter epsilon, U+03B5 ISOgrk3
# identical to, U+2261 ISOtech
# greek small letter eta, U+03B7 ISOgrk3
# latin small letter eth, U+00F0 ISOlat1
# latin small letter e with diaeresis, U+00EB ISOlat1
# euro sign, U+20AC NEW
# there exists, U+2203 ISOtech
# latin small f with hook = function = florin, U+0192 ISOtech
# for all, U+2200 ISOtech
# vulgar fraction one half = fraction one half, U+00BD ISOnum
# vulgar fraction one quarter = fraction one quarter, U+00BC ISOnum
# vulgar fraction three quarters = fraction three quarters, U+00BE ISOnum
# fraction slash, U+2044 NEW
# greek small letter gamma, U+03B3 ISOgrk3
# greater-than or equal to, U+2265 ISOtech
# greater-than sign, U+003E ISOnum
# left right double arrow, U+21D4 ISOamsa
# left right arrow, U+2194 ISOamsa
# black heart suit = valentine, U+2665 ISOpub
# horizontal ellipsis = three dot leader, U+2026 ISOpub
# latin small letter i with acute, U+00ED ISOlat1
# latin small letter i with circumflex, U+00EE ISOlat1
# inverted exclamation mark, U+00A1 ISOnum
# latin small letter i with grave, U+00EC ISOlat1
# blackletter capital I = imaginary part, U+2111 ISOamso
# infinity, U+221E ISOtech
# integral, U+222B ISOtech
# greek small letter iota, U+03B9 ISOgrk3
# inverted question mark = turned question mark, U+00BF ISOnum
# element of, U+2208 ISOtech
# latin small letter i with diaeresis, U+00EF ISOlat1
# greek small letter kappa, U+03BA ISOgrk3
# leftwards double arrow, U+21D0 ISOtech
# greek small letter lambda, U+03BB ISOgrk3
# left-pointing angle bracket = bra, U+2329 ISOtech
# left-pointing double angle quotation mark = left pointing guillemet, U+00AB ISOnum
# leftwards arrow, U+2190 ISOnum
# left ceiling = apl upstile, U+2308 ISOamsc
# left double quotation mark, U+201C ISOnum
# less-than or equal to, U+2264 ISOtech
# left floor = apl downstile, U+230A ISOamsc
# asterisk operator, U+2217 ISOtech
# lozenge, U+25CA ISOpub
# left-to-right mark, U+200E NEW RFC 2070
# single left-pointing angle quotation mark, U+2039 ISO proposed
# left single quotation mark, U+2018 ISOnum
# less-than sign, U+003C ISOnum
# macron = spacing macron = overline = APL overbar, U+00AF ISOdia
# em dash, U+2014 ISOpub
# micro sign, U+00B5 ISOnum
# middle dot = Georgian comma = Greek middle dot, U+00B7 ISOnum
# minus sign, U+2212 ISOtech
# greek small letter mu, U+03BC ISOgrk3
# nabla = backward difference, U+2207 ISOtech
# no-break space = non-breaking space, U+00A0 ISOnum
# en dash, U+2013 ISOpub
# not equal to, U+2260 ISOtech
# contains as member, U+220B ISOtech
# not sign, U+00AC ISOnum
# not an element of, U+2209 ISOtech
# not a subset of, U+2284 ISOamsn
# latin small letter n with tilde, U+00F1 ISOlat1
# greek small letter nu, U+03BD ISOgrk3
# latin small letter o with acute, U+00F3 ISOlat1
# latin small letter o with circumflex, U+00F4 ISOlat1
# latin small ligature oe, U+0153 ISOlat2
# latin small letter o with grave, U+00F2 ISOlat1
# overline = spacing overscore, U+203E NEW
# greek small letter omega, U+03C9 ISOgrk3
# greek small letter omicron, U+03BF NEW
# circled plus = direct sum, U+2295 ISOamsb
# logical or = vee, U+2228 ISOtech
# feminine ordinal indicator, U+00AA ISOnum
# masculine ordinal indicator, U+00BA ISOnum
# latin small letter o with stroke = latin small letter o slash, U+00F8 ISOlat1
# latin small letter o with tilde, U+00F5 ISOlat1
# circled times = vector product, U+2297 ISOamsb
# latin small letter o with diaeresis, U+00F6 ISOlat1
# pilcrow sign = paragraph sign, U+00B6 ISOnum
# partial differential, U+2202 ISOtech
# per mille sign, U+2030 ISOtech
# up tack = orthogonal to = perpendicular, U+22A5 ISOtech
# greek small letter phi, U+03C6 ISOgrk3
# greek small letter pi, U+03C0 ISOgrk3
# greek pi symbol, U+03D6 ISOgrk3
# plus-minus sign = plus-or-minus sign, U+00B1 ISOnum
# pound sign, U+00A3 ISOnum
# prime = minutes = feet, U+2032 ISOtech
# n-ary product = product sign, U+220F ISOamsb
# proportional to, U+221D ISOtech
# greek small letter psi, U+03C8 ISOgrk3
# quotation mark = APL quote, U+0022 ISOnum
# rightwards double arrow, U+21D2 ISOtech
# square root = radical sign, U+221A ISOtech
# right-pointing angle bracket = ket, U+232A ISOtech
# right-pointing double angle quotation mark = right pointing guillemet, U+00BB ISOnum
# rightwards arrow, U+2192 ISOnum
# right ceiling, U+2309 ISOamsc
# right double quotation mark, U+201D ISOnum
# blackletter capital R = real part symbol, U+211C ISOamso
# registered sign = registered trade mark sign, U+00AE ISOnum
# right floor, U+230B ISOamsc
# greek small letter rho, U+03C1 ISOgrk3
# right-to-left mark, U+200F NEW RFC 2070
# single right-pointing angle quotation mark, U+203A ISO proposed
# right single quotation mark, U+2019 ISOnum
# single low-9 quotation mark, U+201A NEW
# latin small letter s with caron, U+0161 ISOlat2
# dot operator, U+22C5 ISOamsb
# section sign, U+00A7 ISOnum
# soft hyphen = discretionary hyphen, U+00AD ISOnum
# greek small letter sigma, U+03C3 ISOgrk3
# greek small letter final sigma, U+03C2 ISOgrk3
# tilde operator = varies with = similar to, U+223C ISOtech
# black spade suit, U+2660 ISOpub
# subset of, U+2282 ISOtech
# subset of or equal to, U+2286 ISOtech
# n-ary summation, U+2211 ISOamsb
# superset of, U+2283 ISOtech
# superscript one = superscript digit one, U+00B9 ISOnum
# superscript two = superscript digit two = squared, U+00B2 ISOnum
# superscript three = superscript digit three = cubed, U+00B3 ISOnum
# superset of or equal to, U+2287 ISOtech
# latin small letter sharp s = ess-zed, U+00DF ISOlat1
# greek small letter tau, U+03C4 ISOgrk3
# therefore, U+2234 ISOtech
# greek small letter theta, U+03B8 ISOgrk3
# greek small letter theta symbol, U+03D1 NEW
# thin space, U+2009 ISOpub
# latin small letter thorn, U+00FE ISOlat1
# small tilde, U+02DC ISOdia
# multiplication sign, U+00D7 ISOnum
# trade mark sign, U+2122 ISOnum
# upwards double arrow, U+21D1 ISOamsa
# latin small letter u with acute, U+00FA ISOlat1
# upwards arrow, U+2191 ISOnum
# latin small letter u with circumflex, U+00FB ISOlat1
# latin small letter u with grave, U+00F9 ISOlat1
# diaeresis = spacing diaeresis, U+00A8 ISOdia
# greek upsilon with hook symbol, U+03D2 NEW
# greek small letter upsilon, U+03C5 ISOgrk3
# latin small letter u with diaeresis, U+00FC ISOlat1
# script capital P = power set = Weierstrass p, U+2118 ISOamso
# greek small letter xi, U+03BE ISOgrk3
# latin small letter y with acute, U+00FD ISOlat1
# yen sign = yuan sign, U+00A5 ISOnum
# latin small letter y with diaeresis, U+00FF ISOlat1
# greek small letter zeta, U+03B6 ISOgrk3
# zero width joiner, U+200D NEW RFC 2070
# zero width non-joiner, U+200C NEW RFC 2070
# HTML5 named character references
# Generated by Tools/build/parse_html5_entities.py
# from https://html.spec.whatwg.org/entities.json and
# https://html.spec.whatwg.org/multipage/named-characters.html.
# Map HTML5 named character references to the equivalent Unicode character(s).
# maps the Unicode code point to the HTML entity name
# maps the HTML entity name to the character
# (or a character reference if the character is outside the Latin-1 range)
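The three mappings are directly importable from `html.entities`; note that the HTML5 table keys keep the trailing `;`, with semicolon-less entries only for the legacy names:

```python
from html.entities import name2codepoint, codepoint2name, html5

assert name2codepoint['eacute'] == 0xE9      # HTML4 name -> code point
assert codepoint2name[0x26] == 'amp'         # code point -> HTML4 name
# HTML5 names are stored with their ';'; legacy names also appear bare
assert html5['gt;'] == '>' and html5['gt'] == '>'
```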
# If this fails your Python may not be configured for Tk
# set to True to print executed Tcl/Tk commands
# add '\' before special characters and spaces
# undocumented
# widget usually is known
# serial and time are not very interesting
# keysym_num duplicates keysym
# x_root and y_root mostly duplicate x and y
# Delete, so any use of _default_root will immediately raise an exception.
# Rebind before deletion, so repeated calls will not fail.
# check for type of NAME parameter to override weird error message
# raised from Modules/_tkinter.c:SetVar like:
# TypeError: setvar() takes exactly 3 arguments (2 given)
# TODO: Add deprecation warning
# Methods defined on both toplevel and interior widgets
# used for generating child widget names
# XXX font command?
# XXX b/w compat
# XXX b/w compat?
# I'd rather use time.sleep(ms*0.001)
# Required for callable classes (bpo-44404)
# Clipboard handling:
# XXX grab current w/o window argument
# Tcl sometimes returns extra windows, e.g. for
# menus; those need to be skipped
# Missing: (a, c, d, m, o, v, B, R)
# serial field: valid for all events
# number of button: ButtonPress and ButtonRelease events only
# height field: Configure, ConfigureRequest, Create,
# ResizeRequest, and Expose events only
# keycode field: KeyPress and KeyRelease events only
# time field: "valid for events that contain a time field"
# width field: Configure, ConfigureRequest, Create, ResizeRequest,
# and Expose events only
# x field: "valid for events that contain an x field"
# y field: "valid for events that contain a y field"
# keysym as decimal: KeyPress and KeyRelease events only
# x_root, y_root fields: ButtonPress, ButtonRelease, KeyPress,
# KeyRelease, and Motion events
# can be int
# These used to be defined in Widget:
# Pack methods that apply to the master
# Place method that applies to the master
# Grid methods that apply to the master
# new in Tk 8.5
# Support for the "event" command, new in Tk 4.2.
# By Case Roole.
# Image related commands
# Tk needs a list of windows here
# to avoid recursions in the getattr code in case of failure, we
# ensure that self.tk is always _something_.
# Issue #16248: Honor the -E flag to avoid code injection.
# Version sanity checks
# Under unknown circumstances, tcl_version gets coerced to float
# Create and register the tkerror and exit commands
# We need to inline parts of _register here; _register itself
# would register differently-named commands.
# Print executed Tcl/Tk commands.
# Ideally, the classes Pack, Place and Grid disappear, the
# pack/place/grid methods are defined on the Widget class, and
# everybody uses w.pack_whatever(...) instead of Pack.whatever(w,
# ...), with pack(), place() and grid() being short for
# pack_configure(), place_configure() and grid_configure(), and
# forget() being short for pack_forget().  As a practical matter, I'm
# afraid that there is too much code out there that may be using the
# Pack, Place or Grid class, so I leave them intact -- but only as
# backwards compatibility features.  Also note that those methods that
# take a master as argument (e.g. pack_propagate) have been moved to
# the Misc class (which now incorporates all methods common between
# toplevel and interior widgets).  Again, for compatibility, these are
# copied into the Pack, Place or Grid class.
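The compatibility scheme described above can be verified without opening a Tk window: the short method names are the very same function objects as their `*_configure` counterparts, and the master-side methods really do live on `Misc`. (The import is guarded because Python can be built without Tk support.)

```python
try:
    import tkinter
except ImportError:          # Python built without Tk support
    tkinter = None

if tkinter is not None:
    # pack/place/grid are aliases for the *_configure methods
    assert tkinter.Pack.pack is tkinter.Pack.pack_configure
    assert tkinter.Place.place is tkinter.Place.place_configure
    assert tkinter.Grid.grid is tkinter.Grid.grid_configure
    # master-side methods such as pack_propagate now live on Misc
    assert hasattr(tkinter.Misc, 'pack_propagate')
```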
# Thanks to Masazumi Yoshikawa (yosikawa@isi.edu)
# Avoid duplication when calculating names below
# XXX Obsolete -- better use self.tk.call directly!
# TBD: a hack needed because some keys
# are not valid as keyword arguments
# Args: (val, val, ..., cnf={})
# lower, tkraise/lift hide Misc.lower, Misc.tkraise/lift,
# so the preferred name for them is tag_lower, tag_raise
# (similar to tag_bind, and similar to the Text widget);
# unfortunately can't delete the old ones yet (maybe in 1.6)
# Because Checkbutton defaults to a variable with the same name as
# the widget, Checkbutton default names must be globally unique,
# not just unique within the parent widget.
# To avoid collisions with ttk.Checkbutton, use the different
# name template.
# GH-103685.
# Never call the dump command without the -command flag, since the
# output could involve Tcl quoting and would be a pain to parse
# right. Instead just set the command to build a list of triples
# as if we had done the parsing.
## new in tk8.4
# (Image commands are new in 8.0)
# For tests only
# 'command' is the only supported keyword
# tk itself would use image<x>
# May happen if the root was destroyed
# XXX config
# For wantobjects = 0.
# Test:
# The following three commands are needed so the window pops
# up on top on Windows...
# Symbolic constants for Tk
# -anchor and -sticky
# -fill
# -side
# -relief
# -orient
# -tabs
# -wrap
# -align
# -bordermode
# Special tags, marks and insert positions
# e.g. Canvas.delete(ALL)
# Text widget and button states
# Canvas state
# Menu item types
# Selection modes for list boxes
# Activestyle for list boxes
# NONE='none' is also valid
# Various canvas styles
# Arguments to xview/yview
# tk common color chooser dialogue
# this module provides an interface to the native color dialogue
# available in Tk 4.2 and newer.
# written by Fredrik Lundh, May 1997
# fixed initialcolor handling in August 1998
# Assume an RGB triplet.
# Result can be many things: an empty tuple, an empty string, or
# a _tkinter.Tcl_Obj, so this somewhat weird check handles that.
# canceled
# To simplify application code, the color chooser returns
# an RGB tuple together with the Tk color string.
# convenience stuff
# test stuff
# Copy geometry methods of self.frame without overriding Text
# methods -- hack!
# An Introduction to Tkinter
# Copyright (c) 1997 by Fredrik Lundh
# This copyright applies to Dialog, askinteger, askfloat and askstring
# remain invisible for now
# If the parent is not viewable, don't
# make the child transient, or else it
# would be opened withdrawn
# wait for window to appear on screen before calling grab_set
# construction hooks
# standard button semantics
# put focus back
# put focus back to the parent window
# command hooks
# override
# Place a toplevel window at the center of parent or screen
# It is a Python implementation of ::tk::PlaceWindow.
# Remain invisible while we figure out the geometry
# Actualize geometry information
# Avoid the native menu bar which sits on top of everything.
# Become visible at the desired location
# convenience dialogues
# Tkinter font wrapper
# written by Fredrik Lundh, February 1998
# weight/slant
# get actual settings corresponding to the given font
# confirm font exists
# if font config info supplied, apply it
# create new font (raises TclError if the font exists)
# create a font
# XXX Are the following okay for a general audience?
# window needs to be visible for the grab
# Exited by self.quit(how)
# Exit mainloop()
# For the following classes and modules:
# options (all have default values):
# - defaultextension: added to filename if not explicitly given
# - filetypes: sequence of (label, pattern) tuples.  the same pattern
# - initialdir: initial directory.  preserved by dialog instance.
# - initialfile: initial file (ignored by the open dialog).  preserved
# - parent: which window to place the dialog on top of
# - title: dialog title
# - multiple: if true user may select more than one file
# options for the directory chooser:
# - initialdir, parent, title: see above
# - mustexist: if true, user must pick an existing directory
# make sure "filetypes" is a tuple
# keep directory and filename until next time
# convert Tcl path objects to strings
# it already is a string
# file dialogs
# multiple results:
# don't set initialfile or filename, as we have multiple of these
# Need to split result explicitly
# the directory dialog has its own _fix routines.
# keep directory until next time
# FIXME: are the following perhaps a bit too convenient?
# Since the file name may contain non-ASCII characters, we need
# to find an encoding that likely supports the file name, and
# displays correctly on the terminal.
# Start off with UTF-8
# See whether CODESET is defined
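The fallback logic above can be sketched as follows; this is a simplified model of the idea (start from UTF-8, upgrade to the locale's CODESET when the platform exposes it), not the exact implementation:

```python
import locale

# Start off with UTF-8; prefer the locale's CODESET when queryable.
encoding = 'utf-8'
try:
    encoding = locale.nl_langinfo(locale.CODESET) or encoding
except (AttributeError, ValueError):
    # locale.CODESET is not defined on all platforms (e.g. Windows);
    # keep the UTF-8 default in that case.
    pass

assert isinstance(encoding, str) and encoding
```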
# dialog for opening files
# dialog for saving files
# base class for tk common dialogues
# this module provides a base class for accessing the common
# dialogues available in Tk 4.2 and newer.  use filedialog,
# colorchooser, and messagebox to access the individual
# dialogs.
# hook
# update instance options
# The function below is replaced for some tests.
# tk common message boxes
# this module provides an interface to the native message boxes
# - default: which button to make default (one of the reply codes)
# - icon: which icon to display (see below)
# - message: the message to display
# - type: dialog type; that is, which buttons to display (see below)
# icons
# replies
# message dialog class
# Rename _icon and _type options to allow overriding them in options
# In some Tcl installations, yes/no is converted into a boolean.
# In others we get a Tcl_Obj.
# s might be a Tcl index object, so convert it to a string
# dialog.py -- Tkinter interface to the tk_dialog script.
# The factory function
# The class that does the work
# Don't start recursive dnd
# The rest is here for testing and demonstration purposes only!
# where the pointer is relative to the label widget:
# where the widget is relative to the canvas:
# where the corner of the canvas is relative to the screen:
# where the pointer is relative to the canvas widget:
# compensate for initial pointer offset
# Show highlight border
# Hide highlight border
# if caller passes a Tcl script to tk.call, all the values need to
# be grouped into words (arguments to a command in Tcl dialect)
# each value in mapdict is expected to be a sequence, where each item
# is another sequence containing a state (or several) and a value
# E.g. (script=False):
# if it is empty (something that evaluates to False), then
# format it to Tcl code to denote the "normal" state
# group multiple states
# raise TypeError if not str
# define an element based on an image
# first arg should be the default image name
# next args, if any, are statespec/value pairs which is almost
# a mapdict, but we just need the value
# define an element whose visual appearance is drawn using the
# Microsoft Visual Styles API which is responsible for the
# themed styles on Windows XP and Vista.
# Availability: Tk 8.6, Windows XP and Vista.
# clone an element
# it expects a themename and optionally an element to clone from,
# otherwise it will clone {} (empty element)
# theme name
# elementfrom specified
# a script will be generated according to settings passed, which
# will then be evaluated by Tcl
# will format specific keys according to Tcl code
# format 'configure'
# format 'map'
# format 'layout' which may be empty
# could be any other word, but this one makes sense
# format 'element create'
# find where args end, and where kwargs start
# etype was the first one
# this is a Tcl object
# grab name's options
# found next name
# remove the '-' from the option
# option specified without a value, return its value
# some other (single) Tcl object
# will disable the layout ({}, '', etc)
# could be any other word, but this may make sense
# when calling layout(style) later
# Starting with Tk 8.6, checking this global is no longer needed
# since it allows doing self.tk.call(self._name, "theme", "use")
# using "ttk::setTheme" instead of "ttk::style theme use" causes
# the variable currentTheme to be updated; also, ttk::setTheme calls
# "ttk::style theme use" in order to change theme.
# tkinter name compatibility
# The only, and good, difference I see is about mnemonics, which work
# after calling this method. Control-Tab and Shift-Control-Tab always
# work (here at least).
# overrides Pack.forget
# callback not registered yet, do it now
# A sensible method name for reattaching detached items
# position scale and label according to the compound option
# Dummy required to make frame correct height
# update the label as scale or variable changes
# "force" scale redraw
# value outside range, set value back to the last valid one
