Compare commits


10 Commits

Author SHA1 Message Date
Francesco Valla 46d6020c30
Merge 31d4a750d8 into b10e29b8f5 2025-08-29 12:19:24 -04:00
Kyle Schwarz b10e29b8f5 Driver: DXX: Update libredxx 2025-08-28 23:24:04 -04:00
Thomas Stoddard 3c0c0dd44c Docs: Refactor icsneopy examples 2025-08-28 11:40:57 -04:00
Kyle Schwarz cf2cf3e28b CMake: Disable libredxx install 2025-08-28 10:09:30 -04:00
Kyle Schwarz 6926ca8199 CI: Update Linux images 2025-08-25 11:29:46 -04:00
Kyle Schwarz 29dc7b345f Driver: Switch to libredxx
- no more libFTDI
- no more libusb on Linux and macOS
- no more FTDI repack
- no more binary libs
- faster D2XX on Windows (no longer uses COM)
2025-08-25 11:24:03 -04:00
Kyle Schwarz 17285389e3 Tests: Switch to FetchContent 2025-08-21 20:19:22 -04:00
Bryant Jones 328563b7e6 Docs: Add TC10 .py example 2025-08-12 12:36:16 -04:00
Kyle Schwarz 6b60804174 Device: Add com keepalive 2025-08-06 17:17:36 -04:00
Francesco Valla 31d4a750d8 EthernetPacketizer: do a size check on incoming bytestream
An incoming bytestream can be less than 24 bytes, leading to exceptions
when accessing its data (or allocating the vector for its payload).
Perform a size check before trying to decode the bytestream and discard
invalid incoming streams.

Signed-off-by: Francesco Valla <francesco.valla@mta.it>
2025-07-04 10:57:28 +02:00
384 changed files with 1554 additions and 129438 deletions

View File

@@ -67,7 +67,7 @@ unit_test windows/x86:
script:
- apt update -y
- apt upgrade -y
- apt install -y g++ ninja-build cmake libusb-1.0-0-dev libpcap-dev git
- apt install -y g++ ninja-build cmake libpcap-dev git
- sh ci/build-posix.sh
artifacts:
when: always
@@ -82,7 +82,7 @@ unit_test windows/x86:
script:
- apt update -y
- apt upgrade -y
- apt install -y libusb-1.0-0-dev libpcap-dev
- apt install -y libpcap-dev
- build/libicsneo-unit-tests
tags:
- linux-build
@@ -93,7 +93,7 @@ unit_test windows/x86:
script:
- apt update -y
- apt upgrade -y
- apt install -y clang lld ninja-build cmake libusb-1.0-0-dev libpcap-dev git
- apt install -y clang lld ninja-build cmake libpcap-dev git
- CC=clang CXX=clang++ LDFLAGS=-fuse-ld=lld sh ci/build-posix.sh
artifacts:
when: always
@@ -108,36 +108,12 @@ unit_test windows/x86:
script:
- apt update -y
- apt upgrade -y
- apt install -y libusb-1.0-0-dev libpcap-dev
- apt install -y libpcap-dev
- build/libicsneo-unit-tests
tags:
- linux-build
timeout: 5m
build linux/ubuntu/2004/amd64/gcc:
<<: *build_linux_ubuntu_gcc
image: ubuntu:20.04
unit_test linux/ubuntu/2004/amd64/gcc:
<<: *test_linux_ubuntu_gcc
image: ubuntu:20.04
dependencies:
- build linux/ubuntu/2004/amd64/gcc
needs:
- build linux/ubuntu/2004/amd64/gcc
build linux/ubuntu/2004/amd64/clang:
<<: *build_linux_ubuntu_clang
image: ubuntu:20.04
unit_test linux/ubuntu/2004/amd64/clang:
<<: *test_linux_ubuntu_clang
image: ubuntu:20.04
dependencies:
- build linux/ubuntu/2004/amd64/clang
needs:
- build linux/ubuntu/2004/amd64/clang
build linux/ubuntu/2204/amd64/gcc:
<<: *build_linux_ubuntu_gcc
image: ubuntu:22.04
@@ -162,6 +138,30 @@ unit_test linux/ubuntu/2204/amd64/clang:
needs:
- build linux/ubuntu/2204/amd64/clang
build linux/ubuntu/2404/amd64/gcc:
<<: *build_linux_ubuntu_gcc
image: ubuntu:24.04
unit_test linux/ubuntu/2404/amd64/gcc:
<<: *test_linux_ubuntu_gcc
image: ubuntu:24.04
dependencies:
- build linux/ubuntu/2404/amd64/gcc
needs:
- build linux/ubuntu/2404/amd64/gcc
build linux/ubuntu/2404/amd64/clang:
<<: *build_linux_ubuntu_clang
image: ubuntu:24.04
unit_test linux/ubuntu/2404/amd64/clang:
<<: *test_linux_ubuntu_clang
image: ubuntu:24.04
dependencies:
- build linux/ubuntu/2404/amd64/clang
needs:
- build linux/ubuntu/2404/amd64/clang
#-------------------------------------------------------------------------------
# Fedora
#-------------------------------------------------------------------------------
@@ -175,7 +175,7 @@ unit_test linux/ubuntu/2204/amd64/clang:
- echo max_parallel_downloads=10 >>/etc/dnf/dnf.conf
- echo fastestmirror=True >>/etc/dnf/dnf.conf
- dnf upgrade -y
- dnf install -y g++ libpcap-devel cmake ninja-build libusb1-devel git
- dnf install -y g++ libpcap-devel cmake ninja-build git
- sh ci/build-posix.sh
artifacts:
when: always
@@ -194,7 +194,7 @@ unit_test linux/ubuntu/2204/amd64/clang:
- echo max_parallel_downloads=10 >>/etc/dnf/dnf.conf
- echo fastestmirror=True >>/etc/dnf/dnf.conf
- dnf upgrade -y
- dnf install -y libpcap-devel libusb1-devel
- dnf install -y libpcap-devel
- build/libicsneo-unit-tests
tags:
- linux-build
@@ -209,7 +209,7 @@ unit_test linux/ubuntu/2204/amd64/clang:
- echo max_parallel_downloads=10 >>/etc/dnf/dnf.conf
- echo fastestmirror=True >>/etc/dnf/dnf.conf
- dnf upgrade -y
- dnf install -y clang lld libpcap-devel cmake ninja-build libusb1-devel git
- dnf install -y clang lld libpcap-devel cmake ninja-build git
- CC=clang CXX=clang++ LDFLAGS=-fuse-ld=lld sh ci/build-posix.sh
artifacts:
when: always
@@ -228,60 +228,12 @@ unit_test linux/ubuntu/2204/amd64/clang:
- echo max_parallel_downloads=10 >>/etc/dnf/dnf.conf
- echo fastestmirror=True >>/etc/dnf/dnf.conf
- dnf upgrade -y
- dnf install -y libpcap-devel libusb1-devel
- dnf install -y libpcap-devel
- build/libicsneo-unit-tests
tags:
- linux-build
timeout: 5m
build linux/fedora/39/amd64/gcc:
<<: *build_linux_fedora_gcc
image: fedora:39
unit_test linux/fedora/39/amd64/gcc:
<<: *test_linux_fedora_gcc
image: fedora:39
dependencies:
- build linux/fedora/39/amd64/gcc
needs:
- build linux/fedora/39/amd64/gcc
build linux/fedora/39/amd64/clang:
<<: *build_linux_fedora_clang
image: fedora:39
unit_test linux/fedora/39/amd64/clang:
<<: *test_linux_fedora_clang
image: fedora:39
dependencies:
- build linux/fedora/39/amd64/clang
needs:
- build linux/fedora/39/amd64/clang
build linux/fedora/40/amd64/gcc:
<<: *build_linux_fedora_gcc
image: fedora:40
unit_test linux/fedora/40/amd64/gcc:
<<: *test_linux_fedora_gcc
image: fedora:40
dependencies:
- build linux/fedora/40/amd64/gcc
needs:
- build linux/fedora/40/amd64/gcc
build linux/fedora/40/amd64/clang:
<<: *build_linux_fedora_clang
image: fedora:40
unit_test linux/fedora/40/amd64/clang:
<<: *test_linux_fedora_clang
image: fedora:40
dependencies:
- build linux/fedora/40/amd64/clang
needs:
- build linux/fedora/40/amd64/clang
build linux/fedora/41/amd64/gcc:
<<: *build_linux_fedora_gcc
image: fedora:41
@@ -306,6 +258,30 @@ unit_test linux/fedora/41/amd64/clang:
needs:
- build linux/fedora/41/amd64/clang
build linux/fedora/42/amd64/gcc:
<<: *build_linux_fedora_gcc
image: fedora:42
unit_test linux/fedora/42/amd64/gcc:
<<: *test_linux_fedora_gcc
image: fedora:42
dependencies:
- build linux/fedora/42/amd64/gcc
needs:
- build linux/fedora/42/amd64/gcc
build linux/fedora/42/amd64/clang:
<<: *build_linux_fedora_clang
image: fedora:42
unit_test linux/fedora/42/amd64/clang:
<<: *test_linux_fedora_clang
image: fedora:42
dependencies:
- build linux/fedora/42/amd64/clang
needs:
- build linux/fedora/42/amd64/clang
#-------------------------------------------------------------------------------
# Python Module
#-------------------------------------------------------------------------------
@@ -314,19 +290,19 @@ build python/linux/amd64:
stage: build
tags:
- linux-build
image: python:3.12
image: python:3.13
services:
- name: docker:dind
entrypoint: ["env", "-u", "DOCKER_HOST"]
command: ["dockerd-entrypoint.sh"]
variables:
CIBW_BEFORE_ALL: yum install -y flex && sh ci/bootstrap-libpcap.sh && sh ci/bootstrap-libusb.sh
CIBW_BEFORE_ALL: yum install -y flex && sh ci/bootstrap-libpcap.sh
CIBW_BUILD: "*manylinux*" # no musl
CIBW_ARCHS: x86_64
DOCKER_HOST: unix:///var/run/docker.sock
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: ""
CIBW_ENVIRONMENT: CMAKE_PREFIX_PATH=/project/libpcap/install:/project/libusb/install
CIBW_ENVIRONMENT: CMAKE_PREFIX_PATH=/project/libpcap/install
script:
- curl -sSL https://get.docker.com/ | sh
- sh ci/build-wheel-posix.sh
@@ -339,10 +315,10 @@ build python/linux/arm64:
tags:
- arm64-linux-build
variables:
CIBW_BEFORE_ALL: yum install -y flex && sh ci/bootstrap-libpcap.sh && sh ci/bootstrap-libusb.sh
CIBW_BEFORE_ALL: yum install -y flex && sh ci/bootstrap-libpcap.sh
CIBW_BUILD: "*manylinux*" # no musl
CIBW_ARCHS: aarch64
CIBW_ENVIRONMENT: CMAKE_PREFIX_PATH=/project/libpcap/install:/project/libusb/install
CIBW_ENVIRONMENT: CMAKE_PREFIX_PATH=/project/libpcap/install
script:
- sh ci/build-wheel-posix.sh
artifacts:
@@ -354,9 +330,9 @@ build python/macos:
tags:
- macos-arm64
variables:
CIBW_BEFORE_ALL: sh ci/bootstrap-libpcap.sh && sh ci/bootstrap-libusb.sh
CIBW_BEFORE_ALL: sh ci/bootstrap-libpcap.sh
CIBW_ARCHS: arm64
CIBW_ENVIRONMENT: CMAKE_PREFIX_PATH=$CI_PROJECT_DIR/libpcap/install:$CI_PROJECT_DIR/libusb/install
CIBW_ENVIRONMENT: CMAKE_PREFIX_PATH=$CI_PROJECT_DIR/libpcap/install
MACOSX_DEPLOYMENT_TARGET: 10.14
script:
- sh ci/build-wheel-posix.sh
@@ -384,7 +360,7 @@ deploy python/pypi:
TWINE_PASSWORD: $PYPI_TOKEN
tags:
- linux-build
image: python:3.12
image: python:3.13
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
script:

View File

@@ -22,9 +22,8 @@ set(LIBICSNEO_NPCAP_INCLUDE_DIR "" CACHE STRING "Npcap include directory; set to
option(LIBICSNEO_ENABLE_FIRMIO "Enable communication between Linux and CoreMini within the same device" OFF)
option(LIBICSNEO_ENABLE_RAW_ETHERNET "Enable devices which communicate over raw ethernet" ON)
option(LIBICSNEO_ENABLE_CDCACM "Enable devices which communicate over USB CDC ACM" ON)
option(LIBICSNEO_ENABLE_FTDI "Enable devices which communicate over USB FTDI2XX" ON)
option(LIBICSNEO_ENABLE_TCP "Enable devices which communicate over TCP" OFF)
option(LIBICSNEO_ENABLE_FTD3XX "Enable devices which communicate over USB FTD3XX" ON)
option(LIBICSNEO_ENABLE_DXX "Enable devices which communicate over D2XX/D3XX via libredxx" ON)
option(LIBICSNEO_ENABLE_BINDINGS_PYTHON "Enable Python library" OFF)
@@ -109,7 +108,6 @@ if(LIBICSNEO_BUILD_DOCS)
endif()
if(WIN32)
add_definitions(-DWIN32_LEAN_AND_MEAN -DNOMINMAX -D_CRT_SECURE_NO_WARNINGS)
set(PLATFORM_SRC
platform/windows/strings.cpp
platform/windows/registry.cpp
@@ -122,9 +120,9 @@ if(WIN32)
)
endif()
if(LIBICSNEO_ENABLE_CDCACM OR LIBICSNEO_ENABLE_FTDI)
if(LIBICSNEO_ENABLE_CDCACM)
list(APPEND PLATFORM_SRC
platform/windows/vcp.cpp
platform/windows/cdcacm.cpp
)
endif()
else() # Darwin or Linux
@@ -142,12 +140,6 @@ else() # Darwin or Linux
)
endif()
if(LIBICSNEO_ENABLE_FTDI)
list(APPEND PLATFORM_SRC
platform/posix/ftdi.cpp
)
endif()
if(LIBICSNEO_ENABLE_CDCACM)
list(APPEND PLATFORM_SRC
platform/posix/cdcacm.cpp
@@ -171,51 +163,9 @@ else() # Darwin or Linux
endif()
endif()
if(LIBICSNEO_ENABLE_FTD3XX)
if(NOT FTD3XX_ROOT) # allow system override
include(FetchContent)
if(WIN32 AND CMAKE_SIZEOF_VOID_P EQUAL 8)
set(LIBICSNEO_FTD3XX_URL "https://github.com/intrepidcs/libftd3xx-repack/releases/download/24.34.0/libftd3xx-1.3.0.10-win-x64.zip")
set(LIBICSNEO_FTD3XX_URL_HASH "SHA256=459e635496ab47d6069c9d3515fdd6d82cba3d95e7ae34f794d66ffdf336e9d1")
elseif(WIN32 AND CMAKE_SIZEOF_VOID_P EQUAL 4)
set(LIBICSNEO_FTD3XX_URL "https://github.com/intrepidcs/libftd3xx-repack/releases/download/24.34.0/libftd3xx-1.3.0.10-win-i686.zip")
set(LIBICSNEO_FTD3XX_URL_HASH "SHA256=ce4259ae11772d6ede7d217172156fa392f329b29d9455131f4126a2fb89dad1")
elseif(APPLE AND CMAKE_SIZEOF_VOID_P EQUAL 8)
set(LIBICSNEO_FTD3XX_URL "https://github.com/intrepidcs/libftd3xx-repack/releases/download/24.34.0/libftd3xx-1.0.16-macos-universal2.zip")
set(LIBICSNEO_FTD3XX_URL_HASH "SHA256=0904ac5eda8e1dc4b5aac3714383bcc7792b42dfeb585dce6cbfb8b67b8c0c51")
elseif(UNIX)
if(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|amd64|AMD64")
if(CMAKE_SIZEOF_VOID_P EQUAL 8)
set(LIBICSNEO_FTD3XX_URL "https://github.com/intrepidcs/libftd3xx-repack/releases/download/24.34.0/libftd3xx-1.0.16-linux-x64.zip")
set(LIBICSNEO_FTD3XX_URL_HASH "SHA256=cf66bf299fc722f050cdd3c36998a670f1df69f7c0df18afa73707277067114b")
endif()
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "arm.*|aarch64")
if(CMAKE_SIZEOF_VOID_P EQUAL 8)
set(LIBICSNEO_FTD3XX_URL "https://github.com/intrepidcs/libftd3xx-repack/releases/download/24.34.0/libftd3xx-1.0.16-linux-aarch64.zip")
set(LIBICSNEO_FTD3XX_URL_HASH "SHA256=66341b5112b9841e959e81400b51711be96fec91894477c5cbfc29b10a0c00a6")
elseif(CMAKE_SIZEOF_VOID_P EQUAL 4)
set(LIBICSNEO_FTD3XX_URL "https://github.com/intrepidcs/libftd3xx-repack/releases/download/24.34.0/libftd3xx-1.0.16-linux-armhf.zip")
set(LIBICSNEO_FTD3XX_URL_HASH "SHA256=cec1f959b48a11eb6b829ed43c81b6ba1c0bcf3e797bafcc84a6376e5ffc3c47")
endif()
endif()
endif()
if(NOT LIBICSNEO_FTD3XX_URL)
message(FATAL_ERROR "Unsupported platform for FTD3XX driver")
endif()
FetchContent_Declare(
ftdi3xx
URL ${LIBICSNEO_FTD3XX_URL}
URL_HASH ${LIBICSNEO_FTD3XX_URL_HASH}
)
FetchContent_GetProperties(ftdi3xx)
if(NOT ftdi3xx_POPULATED)
FetchContent_Populate(ftdi3xx)
endif()
set(FTD3XX_ROOT "${ftdi3xx_SOURCE_DIR}")
endif()
find_package(FTD3XX REQUIRED)
if(LIBICSNEO_ENABLE_DXX)
list(APPEND PLATFORM_SRC
platform/ftd3xx.cpp
platform/dxx.cpp
)
endif()
@@ -384,12 +334,9 @@ endif()
if(LIBICSNEO_ENABLE_CDCACM)
target_compile_definitions(icsneocpp PRIVATE ICSNEO_ENABLE_CDCACM)
endif()
if(LIBICSNEO_ENABLE_FTDI)
target_compile_definitions(icsneocpp PRIVATE ICSNEO_ENABLE_FTDI)
endif()
if(LIBICSNEO_ENABLE_FTD3XX)
target_compile_definitions(icsneocpp PRIVATE ICSNEO_ENABLE_FTD3XX)
target_link_libraries(icsneocpp PRIVATE FTD3XX::FTD3XX)
if(LIBICSNEO_ENABLE_DXX)
target_compile_definitions(icsneocpp PRIVATE ICSNEO_ENABLE_DXX)
target_link_libraries(icsneocpp PRIVATE libredxx::libredxx)
endif()
if(LIBICSNEO_ENABLE_TCP)
target_compile_definitions(icsneocpp PRIVATE ICSNEO_ENABLE_TCP)
@@ -403,25 +350,16 @@ add_subdirectory(third-party/fatfs)
set_property(TARGET fatfs PROPERTY POSITION_INDEPENDENT_CODE ON)
target_link_libraries(icsneocpp PRIVATE fatfs)
# libftdi
if(LIBICSNEO_ENABLE_FTDI)
if(NOT WIN32)
target_include_directories(icsneocpp PUBLIC third-party/libftdi/src)
set(LIBFTDI_DOCUMENTATION OFF CACHE INTERNAL "")
set(LIBFTDI_BUILD_TESTS OFF CACHE INTERNAL "")
set(LIBFTDI_INSTALL OFF CACHE INTERNAL "")
set(LIBFTDI_PYTHON_BINDINGS OFF CACHE INTERNAL "")
set(LIBFTDI_LINK_PYTHON_LIBRARY OFF CACHE INTERNAL "")
set(FTDIPP OFF CACHE INTERNAL "")
set(FTDI_EEPROM OFF CACHE INTERNAL "")
add_subdirectory(third-party/libftdi)
target_include_directories(icsneocpp PRIVATE ${LIBUSB_INCLUDE_DIR})
set_property(TARGET ftdi1-static PROPERTY POSITION_INDEPENDENT_CODE ON)
target_link_libraries(icsneocpp PUBLIC ftdi1-static)
target_link_libraries(icsneocpp PUBLIC ${CMAKE_THREAD_LIBS_INIT})
endif(NOT WIN32)
endif(LIBICSNEO_ENABLE_FTDI)
# dxx
if(LIBICSNEO_ENABLE_DXX)
include(FetchContent)
FetchContent_Declare(libredxx
GIT_REPOSITORY https://github.com/Zeranoe/libredxx.git
GIT_TAG 3acc754d0af4fe529f05dbc2488b2da77ad9729c
)
set(LIBREDXX_DISABLE_INSTALL ON)
FetchContent_MakeAvailable(libredxx)
endif()
# pcap
if(LIBICSNEO_ENABLE_RAW_ETHERNET)
@@ -507,17 +445,12 @@ add_subdirectory(bindings)
# googletest
if(LIBICSNEO_BUILD_UNIT_TESTS)
if(WIN32)
set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
endif()
if (NOT TARGET gtest)
add_subdirectory(third-party/googletest-master)
endif()
if (CMAKE_VERSION VERSION_LESS 2.8.11)
include_directories("${gtest_SOURCE_DIR}/include")
endif()
include(FetchContent)
FetchContent_Declare(googletest
GIT_REPOSITORY https://github.com/google/googletest.git
GIT_TAG 6986c2b575f77135401a4e1c65a7a42f20e18fef
)
FetchContent_MakeAvailable(googletest)
add_executable(libicsneo-unit-tests
test/unit/main.cpp
@@ -533,6 +466,7 @@ if(LIBICSNEO_BUILD_UNIT_TESTS)
test/unit/ringbuffertest.cpp
test/unit/apperrordecodertest.cpp
test/unit/windowsstrings.cpp
test/unit/periodictest.cpp
)
target_link_libraries(libicsneo-unit-tests gtest gtest_main)

View File

@@ -28,7 +28,7 @@ std::string APIEvent::describe() const noexcept {
ss << *device; // Makes use of device.describe()
else
ss << "API";
Severity severity = getSeverity();
if(severity == Severity::EventInfo) {
ss << " Info: ";
@@ -76,6 +76,7 @@ static constexpr const char* RESTRICTED_ENTRY_FLAG = "Attempted to set a restric
static constexpr const char* NOT_SUPPORTED = "The requested feature is not supported.";
static constexpr const char* FIXED_POINT_OVERFLOW = "Value is too large to convert to fixed point.";
static constexpr const char* FIXED_POINT_PRECISION = "Value is too small for fixed point precision.";
static constexpr const char* SYSCALL_ERROR = "Error returned from syscall, check errno/GetLastError().";
// Device Errors
static constexpr const char* POLLING_MESSAGE_OVERFLOW = "Too many messages have been received for the polling message buffer, some have been lost!";
@@ -116,7 +117,7 @@ static constexpr const char* ATOMIC_OPERATION_RETRIED = "An operation failed to
static constexpr const char* ATOMIC_OPERATION_COMPLETED_NONATOMICALLY = "An ideally-atomic operation was completed nonatomically.";
static constexpr const char* WIVI_STACK_REFRESH_FAILED = "The Wireless neoVI stack encountered a communication error.";
static constexpr const char* WIVI_UPLOAD_STACK_OVERFLOW = "The Wireless neoVI upload stack has encountered an overflow condition.";
static constexpr const char* A2B_MESSAGE_INCOMPLETE_FRAME = "At least one of the frames of the A2B message does not contain samples for each channel and stream.";
static constexpr const char* COREMINI_UPLOAD_VERSION_MISMATCH = "The version of the coremini engine on the device and the script uploaded are not the same.";
static constexpr const char* DISK_NOT_CONNECTED = "The program tried to access a disk that is not connected.";
static constexpr const char* UNEXPECTED_RESPONSE = "Received an unexpected or invalid response from the device.";
@@ -146,41 +147,6 @@ static constexpr const char* ERROR_SETTING_SOCKET_OPTION = "A call to setsockopt
static constexpr const char* GETIFADDRS_ERROR = "A call to getifaddrs() failed.";
static constexpr const char* SEND_TO_ERROR = "A call to sendto() failed.";
// FTD3XX
static constexpr const char* FT_OK = "FTD3XX success.";
static constexpr const char* FT_INVALID_HANDLE = "Invalid FTD3XX handle.";
static constexpr const char* FT_DEVICE_NOT_FOUND = "FTD3XX device not found.";
static constexpr const char* FT_DEVICE_NOT_OPENED = "FTD3XX device not opened.";
static constexpr const char* FT_IO_ERROR = "FTD3XX IO error.";
static constexpr const char* FT_INSUFFICIENT_RESOURCES = "Insufficient resources for FTD3XX.";
static constexpr const char* FT_INVALID_PARAMETER = "Invalid FTD3XX parameter.";
static constexpr const char* FT_INVALID_BAUD_RATE = "Invalid FTD3XX baud rate.";
static constexpr const char* FT_DEVICE_NOT_OPENED_FOR_ERASE = "FTD3XX device not opened for erase.";
static constexpr const char* FT_DEVICE_NOT_OPENED_FOR_WRITE = "FTD3XX not opened for write.";
static constexpr const char* FT_FAILED_TO_WRITE_DEVICE = "FTD3XX failed to write device.";
static constexpr const char* FT_EEPROM_READ_FAILED = "FTD3XX EEPROM read failed.";
static constexpr const char* FT_EEPROM_WRITE_FAILED = "FTD3XX EEPROM write failed.";
static constexpr const char* FT_EEPROM_ERASE_FAILED = "FTD3XX EEPROM erase failed.";
static constexpr const char* FT_EEPROM_NOT_PRESENT = "FTD3XX EEPROM not present.";
static constexpr const char* FT_EEPROM_NOT_PROGRAMMED = "FTD3XX EEPROM not programmed.";
static constexpr const char* FT_INVALID_ARGS = "Invalid FTD3XX arguments.";
static constexpr const char* FT_NOT_SUPPORTED = "FTD3XX not supported.";
static constexpr const char* FT_NO_MORE_ITEMS = "No more FTD3XX items.";
static constexpr const char* FT_TIMEOUT = "FTD3XX timeout.";
static constexpr const char* FT_OPERATION_ABORTED = "FTD3XX operation aborted.";
static constexpr const char* FT_RESERVED_PIPE = "Reserved FTD3XX pipe.";
static constexpr const char* FT_INVALID_CONTROL_REQUEST_DIRECTION = "Invalid FTD3XX control request direction.";
static constexpr const char* FT_INVALID_CONTROL_REQUEST_TYPE = "Invalid FTD3XX control request type.";
static constexpr const char* FT_IO_PENDING = "FTD3XX IO pending.";
static constexpr const char* FT_IO_INCOMPLETE = "FTD3XX IO incomplete.";
static constexpr const char* FT_HANDLE_EOF = "Handle FTD3XX EOF.";
static constexpr const char* FT_BUSY = "FTD3XX busy.";
static constexpr const char* FT_NO_SYSTEM_RESOURCES = "No FTD3XX system resources.";
static constexpr const char* FT_DEVICE_LIST_NOT_READY = "FTD3XX device list not ready.";
static constexpr const char* FT_DEVICE_NOT_CONNECTED = "FTD3XX device not connected.";
static constexpr const char* FT_INCORRECT_DEVICE_PATH = "Incorrect FTD3XX device path.";
static constexpr const char* FT_OTHER_ERROR = "Other FTD3XX error.";
// VSA
static constexpr const char* VSA_BUFFER_CORRUPTED = "VSA data in record buffer is corrupted.";
static constexpr const char* VSA_TIMESTAMP_NOT_FOUND = "Unable to find a VSA record with a valid timestamp.";
@@ -203,6 +169,13 @@ static constexpr const char* SERVD_POLL_ERROR = "Error polling on Servd socket";
static constexpr const char* SERVD_NODATA_ERROR = "No data received from Servd";
static constexpr const char* SERVD_JOIN_MULTICAST_ERROR = "Error joining Servd multicast group";
// DXX
static constexpr const char* DXX_ERROR_SYS = "System error, check errno/GetLastError()";
static constexpr const char* DXX_ERROR_INT = "DXX interrupt called";
static constexpr const char* DXX_ERROR_OVERFLOW = "Overflow in DXX";
static constexpr const char* DXX_ERROR_IO = "I/O failure in DXX";
static constexpr const char* DXX_ERROR_ARG = "Invalid arg passed to DXX";
static constexpr const char* TOO_MANY_EVENTS = "Too many events have occurred. The list has been truncated.";
static constexpr const char* UNKNOWN = "An unknown internal error occurred.";
static constexpr const char* INVALID = "An invalid internal error occurred.";
@@ -250,6 +223,8 @@ const char* APIEvent::DescriptionForType(Type type) {
return FIXED_POINT_OVERFLOW;
case Type::FixedPointPrecision:
return FIXED_POINT_PRECISION;
case Type::SyscallError:
return SYSCALL_ERROR;
// Device Errors
case Type::PollingMessageOverflow:
@@ -379,74 +354,6 @@ const char* APIEvent::DescriptionForType(Type type) {
return DISK_FORMAT_NOT_SUPPORTED;
case Type::DiskFormatInvalidCount:
return DISK_FORMAT_INVALID_COUNT;
// FTD3XX
case Type::FTOK:
return FT_OK;
case Type::FTInvalidHandle:
return FT_INVALID_HANDLE;
case Type::FTDeviceNotFound:
return FT_DEVICE_NOT_FOUND;
case Type::FTDeviceNotOpened:
return FT_DEVICE_NOT_OPENED;
case Type::FTIOError:
return FT_IO_ERROR;
case Type::FTInsufficientResources:
return FT_INSUFFICIENT_RESOURCES;
case Type::FTInvalidParameter:
return FT_INVALID_PARAMETER;
case Type::FTInvalidBaudRate:
return FT_INVALID_BAUD_RATE;
case Type::FTDeviceNotOpenedForErase:
return FT_DEVICE_NOT_OPENED_FOR_ERASE;
case Type::FTDeviceNotOpenedForWrite:
return FT_DEVICE_NOT_OPENED_FOR_WRITE;
case Type::FTFailedToWriteDevice:
return FT_FAILED_TO_WRITE_DEVICE;
case Type::FTEEPROMReadFailed:
return FT_EEPROM_READ_FAILED;
case Type::FTEEPROMWriteFailed:
return FT_EEPROM_WRITE_FAILED;
case Type::FTEEPROMEraseFailed:
return FT_EEPROM_ERASE_FAILED;
case Type::FTEEPROMNotPresent:
return FT_EEPROM_NOT_PRESENT;
case Type::FTEEPROMNotProgrammed:
return FT_EEPROM_NOT_PROGRAMMED;
case Type::FTInvalidArgs:
return FT_INVALID_ARGS;
case Type::FTNotSupported:
return FT_NOT_SUPPORTED;
case Type::FTNoMoreItems:
return FT_NO_MORE_ITEMS;
case Type::FTTimeout:
return FT_TIMEOUT;
case Type::FTOperationAborted:
return FT_OPERATION_ABORTED;
case Type::FTReservedPipe:
return FT_RESERVED_PIPE;
case Type::FTInvalidControlRequestDirection:
return FT_INVALID_CONTROL_REQUEST_DIRECTION;
case Type::FTInvalidControlRequestType:
return FT_INVALID_CONTROL_REQUEST_TYPE;
case Type::FTIOPending:
return FT_IO_PENDING;
case Type::FTIOIncomplete:
return FT_IO_INCOMPLETE;
case Type::FTHandleEOF:
return FT_HANDLE_EOF;
case Type::FTBusy:
return FT_BUSY;
case Type::FTNoSystemResources:
return FT_NO_SYSTEM_RESOURCES;
case Type::FTDeviceListNotReady:
return FT_DEVICE_LIST_NOT_READY;
case Type::FTDeviceNotConnected:
return FT_DEVICE_NOT_CONNECTED;
case Type::FTIncorrectDevicePath:
return FT_INCORRECT_DEVICE_PATH;
case Type::FTOtherError:
return FT_OTHER_ERROR;
// VSA
case Type::VSABufferCorrupted:
@@ -488,6 +395,18 @@ const char* APIEvent::DescriptionForType(Type type) {
case Type::ServdJoinMulticastError:
return SERVD_JOIN_MULTICAST_ERROR;
// DXX
case Type::DXXErrorSys:
return DXX_ERROR_SYS;
case Type::DXXErrorInt:
return DXX_ERROR_INT;
case Type::DXXErrorOverflow:
return DXX_ERROR_OVERFLOW;
case Type::DXXErrorIO:
return DXX_ERROR_IO;
case Type::DXXErrorArg:
return DXX_ERROR_ARG;
// Other Errors
case Type::TooManyEvents:
return TOO_MANY_EVENTS;
@@ -501,7 +420,7 @@ const char* APIEvent::DescriptionForType(Type type) {
bool EventFilter::match(const APIEvent& event) const noexcept {
if(type != APIEvent::Type::Any && type != event.getType())
return false;
if(matchOnDevicePtr && !event.isForDevice(device))
return false;

View File

@@ -1,5 +1,7 @@
//FILE: icsneo40DLLAPI.H
#define WIN32_LEAN_AND_MEAN
#define NOMINMAX
#include <windows.h>
#include "icsneo/icsnVC40.h"

View File

@@ -28,6 +28,9 @@ void init_event(pybind11::module_& m) {
.value("WiVINotSupported", APIEvent::Type::WiVINotSupported)
.value("RestrictedEntryFlag", APIEvent::Type::RestrictedEntryFlag)
.value("NotSupported", APIEvent::Type::NotSupported)
.value("FixedPointOverflow", APIEvent::Type::FixedPointOverflow)
.value("FixedPointPrecision", APIEvent::Type::FixedPointPrecision)
.value("SyscallError", APIEvent::Type::SyscallError)
.value("PollingMessageOverflow", APIEvent::Type::PollingMessageOverflow)
.value("NoSerialNumber", APIEvent::Type::NoSerialNumber)
.value("IncorrectSerialNumber", APIEvent::Type::IncorrectSerialNumber)
@@ -36,8 +39,6 @@ void init_event(pybind11::module_& m) {
.value("SettingsLengthError", APIEvent::Type::SettingsLengthError)
.value("SettingsChecksumError", APIEvent::Type::SettingsChecksumError)
.value("SettingsNotAvailable", APIEvent::Type::SettingsNotAvailable)
.value("DiskFormatNotSupported", APIEvent::Type::DiskFormatNotSupported)
.value("DiskFormatInvalidCount", APIEvent::Type::DiskFormatInvalidCount)
.value("SettingsReadOnly", APIEvent::Type::SettingsReadOnly)
.value("CANSettingsNotAvailable", APIEvent::Type::CANSettingsNotAvailable)
.value("CANFDSettingsNotAvailable", APIEvent::Type::CANFDSettingsNotAvailable)
@@ -86,6 +87,10 @@ void init_event(pybind11::module_& m) {
.value("LINSettingsNotAvailable", APIEvent::Type::LINSettingsNotAvailable)
.value("ModeNotFound", APIEvent::Type::ModeNotFound)
.value("AppErrorParsingFailed", APIEvent::Type::AppErrorParsingFailed)
.value("GPTPNotSupported", APIEvent::Type::GPTPNotSupported)
.value("SettingNotAvaiableDevice", APIEvent::Type::SettingNotAvaiableDevice)
.value("DiskFormatNotSupported", APIEvent::Type::DiskFormatNotSupported)
.value("DiskFormatInvalidCount", APIEvent::Type::DiskFormatInvalidCount)
.value("FailedToRead", APIEvent::Type::FailedToRead)
.value("FailedToWrite", APIEvent::Type::FailedToWrite)
.value("DriverFailedToOpen", APIEvent::Type::DriverFailedToOpen)
@@ -102,39 +107,6 @@ void init_event(pybind11::module_& m) {
.value("GetIfAddrsError", APIEvent::Type::GetIfAddrsError)
.value("SendToError", APIEvent::Type::SendToError)
.value("MDIOMessageExceedsMaxLength", APIEvent::Type::MDIOMessageExceedsMaxLength)
.value("FTOK", APIEvent::Type::FTOK)
.value("FTInvalidHandle", APIEvent::Type::FTInvalidHandle)
.value("FTDeviceNotFound", APIEvent::Type::FTDeviceNotFound)
.value("FTDeviceNotOpened", APIEvent::Type::FTDeviceNotOpened)
.value("FTIOError", APIEvent::Type::FTIOError)
.value("FTInsufficientResources", APIEvent::Type::FTInsufficientResources)
.value("FTInvalidParameter", APIEvent::Type::FTInvalidParameter)
.value("FTInvalidBaudRate", APIEvent::Type::FTInvalidBaudRate)
.value("FTDeviceNotOpenedForErase", APIEvent::Type::FTDeviceNotOpenedForErase)
.value("FTDeviceNotOpenedForWrite", APIEvent::Type::FTDeviceNotOpenedForWrite)
.value("FTFailedToWriteDevice", APIEvent::Type::FTFailedToWriteDevice)
.value("FTEEPROMReadFailed", APIEvent::Type::FTEEPROMReadFailed)
.value("FTEEPROMWriteFailed", APIEvent::Type::FTEEPROMWriteFailed)
.value("FTEEPROMEraseFailed", APIEvent::Type::FTEEPROMEraseFailed)
.value("FTEEPROMNotPresent", APIEvent::Type::FTEEPROMNotPresent)
.value("FTEEPROMNotProgrammed", APIEvent::Type::FTEEPROMNotProgrammed)
.value("FTInvalidArgs", APIEvent::Type::FTInvalidArgs)
.value("FTNotSupported", APIEvent::Type::FTNotSupported)
.value("FTNoMoreItems", APIEvent::Type::FTNoMoreItems)
.value("FTTimeout", APIEvent::Type::FTTimeout)
.value("FTOperationAborted", APIEvent::Type::FTOperationAborted)
.value("FTReservedPipe", APIEvent::Type::FTReservedPipe)
.value("FTInvalidControlRequestDirection", APIEvent::Type::FTInvalidControlRequestDirection)
.value("FTInvalidControlRequestType", APIEvent::Type::FTInvalidControlRequestType)
.value("FTIOPending", APIEvent::Type::FTIOPending)
.value("FTIOIncomplete", APIEvent::Type::FTIOIncomplete)
.value("FTHandleEOF", APIEvent::Type::FTHandleEOF)
.value("FTBusy", APIEvent::Type::FTBusy)
.value("FTNoSystemResources", APIEvent::Type::FTNoSystemResources)
.value("FTDeviceListNotReady", APIEvent::Type::FTDeviceListNotReady)
.value("FTDeviceNotConnected", APIEvent::Type::FTDeviceNotConnected)
.value("FTIncorrectDevicePath", APIEvent::Type::FTIncorrectDevicePath)
.value("FTOtherError", APIEvent::Type::FTOtherError)
.value("VSABufferCorrupted", APIEvent::Type::VSABufferCorrupted)
.value("VSATimestampNotFound", APIEvent::Type::VSATimestampNotFound)
.value("VSABufferFormatError", APIEvent::Type::VSABufferFormatError)
@@ -142,18 +114,32 @@ void init_event(pybind11::module_& m) {
.value("VSAByteParseFailure", APIEvent::Type::VSAByteParseFailure)
.value("VSAExtendedMessageError", APIEvent::Type::VSAExtendedMessageError)
.value("VSAOtherError", APIEvent::Type::VSAOtherError)
.value("ServdBindError", APIEvent::Type::ServdBindError)
.value("ServdNonblockError", APIEvent::Type::ServdNonblockError)
.value("ServdTransceiveError", APIEvent::Type::ServdTransceiveError)
.value("ServdOutdatedError", APIEvent::Type::ServdOutdatedError)
.value("ServdInvalidResponseError", APIEvent::Type::ServdInvalidResponseError)
.value("ServdLockError", APIEvent::Type::ServdLockError)
.value("ServdSendError", APIEvent::Type::ServdSendError)
.value("ServdRecvError", APIEvent::Type::ServdRecvError)
.value("ServdPollError", APIEvent::Type::ServdPollError)
.value("ServdNoDataError", APIEvent::Type::ServdNoDataError)
.value("ServdJoinMulticastError", APIEvent::Type::ServdJoinMulticastError)
.value("DXXErrorSys", APIEvent::Type::DXXErrorSys)
.value("DXXErrorInt", APIEvent::Type::DXXErrorInt)
.value("DXXErrorOverflow", APIEvent::Type::DXXErrorOverflow)
.value("DXXErrorIO", APIEvent::Type::DXXErrorIO)
.value("DXXErrorArg", APIEvent::Type::DXXErrorArg)
.value("NoErrorFound", APIEvent::Type::NoErrorFound)
.value("TooManyEvents", APIEvent::Type::TooManyEvents)
.value("Unknown", APIEvent::Type::Unknown)
.value("FixedPointOverflow", APIEvent::Type::FixedPointOverflow)
.value("FixedPointPrecision", APIEvent::Type::FixedPointPrecision);
pybind11::enum_<APIEvent::Severity>(apiEvent, "Severity")
.value("Unknown", APIEvent::Type::Unknown);
pybind11::enum_<APIEvent::Severity>(apiEvent, "Severity")
.value("Any", APIEvent::Severity::Any)
.value("EventInfo", APIEvent::Severity::EventInfo)
.value("EventWarning", APIEvent::Severity::EventWarning)
.value("Error", APIEvent::Severity::Error);
apiEvent
.def("get_type", &APIEvent::getType)
.def("get_severity", &APIEvent::getSeverity)
@@ -170,5 +156,5 @@ void init_event(pybind11::module_& m) {
.def_readwrite("serial", &EventFilter::serial);
}
} // namespace icsneo

View File

@@ -1,21 +0,0 @@
#!/bin/sh
VERSION="1.0.27"
ROOT="$PWD/libusb"
SOURCE="$ROOT/source"
BUILD="$ROOT/build"
INSTALL="$ROOT/install"
mkdir -p "$ROOT"
cd "$ROOT" || exit 1
curl -LO "https://github.com/libusb/libusb/releases/download/v$VERSION/libusb-$VERSION.tar.bz2" || exit 1
tar -xf "libusb-$VERSION.tar.bz2" || exit 1
mv "libusb-$VERSION" "$SOURCE" || exit 1
mkdir "$BUILD" || exit 1
cd "$BUILD" || exit 1
"$SOURCE/configure" --prefix="$INSTALL" --disable-shared --disable-udev --disable-eventfd --disable-timerfd --with-pic || exit 1
make || exit 1
make install || exit 1


@ -1,22 +0,0 @@
find_path(FTD3XX_INCLUDE_DIR
NAMES ftd3xx.h FTD3XX.h
)
find_library(FTD3XX_LIBRARY
NAMES libftd3xx.a libftd3xx-static.a FTD3XX.lib
PATH_SUFFIXES x64/Static
)
mark_as_advanced(FTD3XX_FOUND FTD3XX_INCLUDE_DIR FTD3XX_LIBRARY)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(FTD3XX
REQUIRED_VARS FTD3XX_INCLUDE_DIR FTD3XX_LIBRARY
)
if(FTD3XX_FOUND AND NOT TARGET FTD3XX::FTD3XX)
add_library(FTD3XX::FTD3XX INTERFACE IMPORTED)
set_target_properties(FTD3XX::FTD3XX PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${FTD3XX_INCLUDE_DIR}"
INTERFACE_LINK_LIBRARIES "${FTD3XX_LIBRARY}"
)
endif()


@ -133,6 +133,10 @@ EthernetPacketizer::EthernetPacket::EthernetPacket(const uint8_t* data, size_t s
int EthernetPacketizer::EthernetPacket::loadBytestream(const std::vector<uint8_t>& bytestream) {
errorWhileDecodingFromBytestream = 0;
if (bytestream.size() < 24) {
errorWhileDecodingFromBytestream = 1;
return errorWhileDecodingFromBytestream;
}
for(size_t i = 0; i < 6; i++)
destMAC[i] = bytestream[i];
for(size_t i = 0; i < 6; i++)
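The guard above rejects bytestreams shorter than the fixed 24-byte packet header before any field is read. The same pattern in a standalone Python sketch (offsets follow the diff; the function name is illustrative):

```python
def load_bytestream(bytestream: bytes):
    # A valid packet header is at least 24 bytes; discard anything shorter
    # so the MAC reads below can never index past the end of the buffer.
    if len(bytestream) < 24:
        return None  # invalid incoming stream, discarded
    dest_mac = bytestream[0:6]   # bytes 0..5: destination MAC
    src_mac = bytestream[6:12]   # bytes 6..11: source MAC
    return dest_mac, src_mac
```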


@ -421,7 +421,8 @@ bool Device::disableLogData() {
}
bool Device::goOnline() {
static constexpr uint32_t onlineTimeoutMs = 5000;
if(!enableNetworkCommunication(true, onlineTimeoutMs))
return false;
auto startTime = std::chrono::system_clock::now();
@ -450,13 +451,19 @@ bool Device::goOnline() {
return false;
}
// (re)start the keeponline
keeponline = std::make_unique<Periodic>([this] { return enableNetworkCommunication(true, onlineTimeoutMs); }, std::chrono::milliseconds(onlineTimeoutMs / 4));
online = true;
forEachExtension([](const std::shared_ptr<DeviceExtension>& ext) { ext->onGoOnline(); return true; });
return true;
}
bool Device::goOffline() {
keeponline.reset();
forEachExtension([](const std::shared_ptr<DeviceExtension>& ext) { ext->onGoOffline(); return true; });
if(isDisconnected()) {
@ -3482,13 +3489,14 @@ bool Device::writeMACsecConfig(const MACsecMessage& message, uint16_t binaryInde
return writeBinaryFile(raw, binaryIndex);
}
bool Device::enableNetworkCommunication(bool enable, uint32_t timeout) {
bool sendMsg = false;
if(!com->driver->enableCommunication(enable, sendMsg)) {
return false;
}
if(sendMsg) {
const uint8_t* i = (uint8_t*)&timeout;
if(!com->sendCommand(Command::EnableNetworkCommunication, {enable, 0, 0, 0, i[0], i[1], i[2], i[3]})) {
return false;
}
}
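The new payload above appends the 32-bit timeout byte-by-byte through a `uint8_t` pointer, so it lands in host byte order (little-endian on the usual targets). The same 8-byte layout sketched in Python (the helper name is an illustration, not library API):

```python
import struct

def build_enable_payload(enable: bool, timeout_ms: int) -> bytes:
    # {enable, 0, 0, 0, i[0], i[1], i[2], i[3]}: one flag byte, three
    # zero pad bytes, then the timeout as a little-endian uint32
    return struct.pack("<B3xI", 1 if enable else 0, timeout_ms)
```

For the 5000 ms online timeout this yields `01 00 00 00 88 13 00 00`.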


@ -16,12 +16,8 @@
#include "icsneo/platform/cdcacm.h"
#endif
#ifdef ICSNEO_ENABLE_DXX
#include "icsneo/platform/dxx.h"
#endif
#ifdef ICSNEO_ENABLE_TCP
@ -83,12 +79,8 @@ std::vector<std::shared_ptr<Device>> DeviceFinder::FindAll() {
CDCACM::Find(newDriverFoundDevices);
#endif
#ifdef ICSNEO_ENABLE_DXX
DXX::Find(newDriverFoundDevices);
#endif
}


@ -7,7 +7,7 @@ VSA08::VSA08(uint8_t* const recordBytes)
{
setType(VSA::Type::AA08);
troubleSramCount.insert(troubleSramCount.end(), recordBytes + 2, recordBytes + 6);
troubleSectors.insert(troubleSectors.end(), reinterpret_cast<uint32_t*>(recordBytes + 6), reinterpret_cast<uint32_t*>(recordBytes + 22));
timestamp = *reinterpret_cast<uint64_t*>(recordBytes + 22) & UINT63_MAX;
checksum = *reinterpret_cast<uint16_t*>(recordBytes + 30);
doChecksum(recordBytes);
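The corrected end pointer above copies all four 32-bit trouble-sector entries (bytes 6..21) instead of stopping two bytes short. The record layout, sketched in Python for clarity (little-endian assumed, matching the `reinterpret_cast` reads on typical targets):

```python
import struct

def parse_vsa08(record: bytes):
    # bytes 6..21: four uint32 trouble sectors (hence the +22 end pointer)
    sectors = list(struct.unpack_from("<4I", record, 6))
    # bytes 22..29: timestamp, masked down to 63 bits
    timestamp = struct.unpack_from("<Q", record, 22)[0] & ((1 << 63) - 1)
    # bytes 30..31: checksum
    checksum = struct.unpack_from("<H", record, 30)[0]
    return sectors, timestamp, checksum
```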


@ -8,7 +8,7 @@ Dependencies
The minimum requirements to build libicsneo are:
- CMake version 3.12 or newer
- A C++17 compiler
- libpcap on Linux and macOS
Building library & examples


@ -0,0 +1,56 @@
===================
CAN Getting Started
===================
Prerequisites
=============
- Python 3.8 or higher
- icsneopy library installed
- CAN hardware device connected
:download:`Download complete example <../../examples/python/can/can_complete_example.py>`
Basic Setup
===========
1. Import the library and discover devices:
.. literalinclude:: ../../examples/python/can/can_complete_example.py
:language: python
:lines: 11-19
2. Configure and open the device:
.. literalinclude:: ../../examples/python/can/can_complete_example.py
:language: python
:lines: 22-37
Transmitting CAN Frames
========================
.. literalinclude:: ../../examples/python/can/can_complete_example.py
:language: python
:lines: 40-53
Receiving CAN Frames
=====================
.. literalinclude:: ../../examples/python/can/can_complete_example.py
:language: python
:lines: 56-72
Cleanup and Resource Management
===============================
.. literalinclude:: ../../examples/python/can/can_complete_example.py
:language: python
:lines: 75-79
Complete Example with Error Handling
====================================
.. literalinclude:: ../../examples/python/can/can_complete_example.py
:language: python
:lines: 82-125


@ -0,0 +1,70 @@
=========================
Ethernet Getting Started
=========================
This guide provides basic instructions for working with Ethernet devices using the icsneopy library.
Prerequisites
=============
- Python 3.8 or higher
- icsneopy library installed
- Intrepid Control Systems Device
:download:`Download complete example <../../examples/python/ethernet/ethernet_complete_example.py>`
Opening the Device
==================
To begin working with an Ethernet device, you must first discover, open, and bring it online:
.. literalinclude:: ../../examples/python/ethernet/ethernet_complete_example.py
:language: python
:lines: 11-37
Transmitting Ethernet Frames
=============================
Transmit Ethernet frames using the EthernetMessage class:
.. literalinclude:: ../../examples/python/ethernet/ethernet_complete_example.py
:language: python
:lines: 61-78
Monitoring Ethernet Status
==========================
Monitor Ethernet link status changes using message callbacks:
.. literalinclude:: ../../examples/python/ethernet/ethernet_complete_example.py
:language: python
:lines: 39-40
DoIP Ethernet Activation Control
================================
Diagnostics over Internet Protocol (DoIP) can be controlled through digital I/O pins:
:download:`Download DoIP example <../../examples/python/doip/doip_activation_control.py>`
.. literalinclude:: ../../examples/python/doip/doip_activation_control.py
:language: python
:lines: 28-44
Network Configuration Reference
===============================
Standard Ethernet network identifiers:
- ``icsneopy.Network.NetID.ETHERNET_01`` - Standard Ethernet Port 1
Automotive Ethernet network identifiers:
- ``icsneopy.Network.NetID.AE_01`` - Automotive Ethernet Port 1
Complete Example with Resource Management
=========================================
.. literalinclude:: ../../examples/python/ethernet/ethernet_complete_example.py
:language: python
:lines: 86-127


@ -3,161 +3,66 @@ Python Examples
===============
Transmit CAN frames on DW CAN 01
=================================
:download:`Download example <../../examples/python/can/can_transmit_basic.py>`
.. literalinclude:: ../../examples/python/can/can_transmit_basic.py
:language: python
Receive CAN frames on DW CAN 01
================================
:download:`Download example <../../examples/python/can/can_receive_basic.py>`
.. literalinclude:: ../../examples/python/can/can_receive_basic.py
:language: python
Complete CAN Example
====================
:download:`Download example <../../examples/python/can/can_complete_example.py>`
.. literalinclude:: ../../examples/python/can/can_complete_example.py
:language: python
Transmit Ethernet frames on Ethernet 01
========================================
:download:`Download example <../../examples/python/ethernet/ethernet_transmit_basic.py>`
.. literalinclude:: ../../examples/python/ethernet/ethernet_transmit_basic.py
:language: python
Monitor Ethernet Status
=======================
:download:`Download example <../../examples/python/ethernet/ethernet_monitor_status.py>`
.. literalinclude:: ../../examples/python/ethernet/ethernet_monitor_status.py
:language: python
TC10 Power Management
=====================
:download:`Download example <../../examples/python/tc10/tc10.py>`
.. literalinclude:: ../../examples/python/tc10/tc10.py
:language: python
DoIP Ethernet Activation
========================
:download:`Download example <../../examples/python/doip/doip_activation_control.py>`
.. literalinclude:: ../../examples/python/doip/doip_activation_control.py
:language: python
Complete Ethernet Example
=========================
:download:`Download example <../../examples/python/ethernet/ethernet_complete_example.py>`
.. literalinclude:: ../../examples/python/ethernet/ethernet_complete_example.py
:language: python


@ -5,6 +5,8 @@ icsneopy
.. toctree::
:maxdepth: 2
can_getting_started
ethernet_getting_started
examples
api
radepsilon


@ -52,7 +52,7 @@ Although the example program will build without successfully completing the step
First, we are going to build the icsneoc library into a .so file that we can later use in order to access the library functions.
1. Install dependencies with `sudo apt update` then `sudo apt install build-essential cmake libpcap0.8-dev`
2. Change directories to `libicsneo-examples/third-party/libicsneo` and create a build directory by running `mkdir -p build`
3. Enter the build directory with `cd build`
4. Run `cmake ..` to generate your Makefile.


@ -52,7 +52,7 @@ Although the example program will build without successfully completing the step
First, we are going to build the icsneoc library into a .so file that we can later use in order to access the library functions.
1. Install dependencies with `sudo apt update` then `sudo apt install build-essential cmake libpcap0.8-dev`
2. Change directories to `libicsneo-examples/third-party/libicsneo` and create a build directory by running `mkdir -p build`
3. Enter the build directory with `cd build`
4. Run `cmake ..` to generate your Makefile.


@ -52,7 +52,7 @@ Although the example program will build without successfully completing the step
First, we are going to build the icsneoc library into a .so file that we can later use in order to access the library functions.
1. Install dependencies with `sudo apt update` then `sudo apt install build-essential cmake libpcap0.8-dev`
2. Change directories to `libicsneo-examples/third-party/libicsneo` and create a build directory by running `mkdir -p build`
3. Enter the build directory with `cd build`
4. Run `cmake ..` to generate your Makefile.


@ -32,7 +32,7 @@ If you haven't done this, `third-party/libicsneo` will be empty and you won't be
### Ubuntu 18.04 LTS
1. Install dependencies with `sudo apt update` then `sudo apt install build-essential cmake libpcap0.8-dev`
2. Change directories to your `libicsneo-examples/libicsneocpp-example` folder and create a build directory by running `mkdir -p build`
3. Enter the build directory with `cd build`
4. Run `cmake ..` to generate your Makefile.


@ -32,7 +32,7 @@ If you haven't done this, `third-party/libicsneo` will be empty and you won't be
### Ubuntu 18.04 LTS
1. Install dependencies with `sudo apt update` then `sudo apt install build-essential cmake libpcap0.8-dev`
2. Change directories to your `libicsneo-examples/libicsneocpp-example` folder and create a build directory by running `mkdir -p build`
3. Enter the build directory with `cd build`
4. Run `cmake ..` to generate your Makefile.


@ -0,0 +1,125 @@
"""
Complete CAN example using icsneopy library.
Demonstrates device setup and CAN frame transmission/reception.
"""
import icsneopy
import time
def setup_device():
"""Initialize CAN device."""
devices = icsneopy.find_all_devices()
if not devices:
raise RuntimeError("No devices found")
device = devices[0]
print(f"Using device: {device}")
return device
def open_device(device):
"""Open device connection."""
try:
if not device.open():
raise RuntimeError("Failed to open device")
if not device.go_online():
device.close()
raise RuntimeError("Failed to go online")
print("Device initialized successfully")
return True
except Exception as e:
print(f"Device setup failed: {e}")
return False
def transmit_can_frame(device, arbid, data):
"""Transmit a CAN frame."""
frame = icsneopy.CANMessage()
frame.network = icsneopy.Network(icsneopy.Network.NetID.DWCAN_01)
frame.arbid = arbid
frame.data = data
success = device.transmit(frame)
if success:
print(f"Frame transmitted: ID=0x{arbid:03X}, Data={list(data)}")
else:
print(f"Failed to transmit frame ID=0x{arbid:03X}")
return success
def setup_can_reception(device):
"""Configure CAN frame reception with callback."""
frame_count = 0
def frame_handler(frame):
nonlocal frame_count
frame_count += 1
print(f"[RX {frame_count}] ID: 0x{frame.arbid:03X}, "
f"Data: {[hex(b) for b in frame.data]}, "
f"Length: {len(frame.data)}")
frame_filter = icsneopy.MessageFilter(icsneopy.Network.NetID.DWCAN_01)
callback = icsneopy.MessageCallback(frame_handler, frame_filter)
device.add_message_callback(callback)
print("CAN frame reception configured")
return 0
def cleanup_device(device):
"""Close device connection."""
if device:
device.close()
print("Device connection closed")
def main():
"""Complete CAN example with proper error handling."""
device = None
try:
# Setup device
device = setup_device()
# Open device
if not open_device(device):
raise RuntimeError("Failed to initialize device")
# Setup frame reception
setup_can_reception(device)
# Transmit test frames
test_frames = [
(0x123, (0x01, 0x02, 0x03, 0x04)),
(0x456, (0x05, 0x06, 0x07, 0x08)),
(0x789, (0x09, 0x0A, 0x0B, 0x0C))
]
for arbid, data in test_frames:
transmit_result = transmit_can_frame(device, arbid, data)
if not transmit_result:
print(f"Warning: Failed to transmit frame ID=0x{arbid:03X}")
time.sleep(0.1)
# Listen for responses
print("Listening for CAN frames for 5 seconds...")
time.sleep(5)
except Exception as e:
print(f"Error: {e}")
return 1
finally:
cleanup_device(device)
return 0
if __name__ == "__main__":
main()


@ -0,0 +1,47 @@
"""
Basic CAN frame reception example using icsneopy library.
Demonstrates how to receive CAN frames on DW CAN 01 using callback handlers.
"""
import icsneopy
import time
def receive_can_frames():
"""Receive CAN frames with callback handling."""
devices = icsneopy.find_all_devices()
if not devices:
raise RuntimeError("No devices found")
device = devices[0]
frame_count = 0
def on_frame(frame):
nonlocal frame_count
frame_count += 1
print(f"[RX {frame_count}] ID: 0x{frame.arbid:03X}, "
f"Data: {[hex(b) for b in frame.data]}")
frame_filter = icsneopy.MessageFilter(icsneopy.Network.NetID.DWCAN_01)
callback = icsneopy.MessageCallback(on_frame, frame_filter)
try:
if not device.open():
raise RuntimeError("Failed to open device")
if not device.go_online():
raise RuntimeError("Failed to go online")
device.add_message_callback(callback)
print("Listening for CAN frames for 10 seconds...")
time.sleep(10)
print(f"Total frames received: {frame_count}")
finally:
device.close()
if __name__ == "__main__":
receive_can_frames()


@ -0,0 +1,41 @@
"""
Basic CAN frame transmission example using icsneopy library.
Demonstrates how to transmit CAN frames on DW CAN 01.
"""
import icsneopy
def transmit_can_frame():
"""Transmit a CAN frame."""
devices = icsneopy.find_all_devices()
if not devices:
raise RuntimeError("No devices found")
device = devices[0]
try:
if not device.open():
raise RuntimeError("Failed to open device")
if not device.go_online():
raise RuntimeError("Failed to go online")
frame = icsneopy.CANMessage()
frame.network = icsneopy.Network(icsneopy.Network.NetID.DWCAN_01)
frame.arbid = 0x123
frame.data = (0x01, 0x02, 0x03, 0x04)
success = device.transmit(frame)
if success:
print(f"Frame transmitted: ID=0x{frame.arbid:03X}")
else:
print("Failed to transmit frame")
finally:
device.close()
if __name__ == "__main__":
transmit_can_frame()


@ -0,0 +1,53 @@
"""
DoIP activation control example using icsneopy library.
Demonstrates DoIP (Diagnostics over Internet Protocol) Ethernet activation
control with comprehensive error handling and state validation.
"""
import icsneopy
import time
def doip_activation_demo():
"""Demonstrate DoIP Ethernet activation control with error handling."""
devices = icsneopy.find_all_devices()
if not devices:
raise RuntimeError("No devices found")
device = devices[0]
print(f"Using {device} for DoIP activation control")
try:
if not device.open():
raise RuntimeError("Failed to open device")
if not device.go_online():
raise RuntimeError("Failed to go online")
initial_state = device.get_digital_io(icsneopy.IO.EthernetActivation, 1)
print(f"Initial DoIP activation state: {initial_state}")
print("Activating DoIP Ethernet...")
device.set_digital_io(icsneopy.IO.EthernetActivation, 1, True)
time.sleep(1)
active_state = device.get_digital_io(icsneopy.IO.EthernetActivation, 1)
print(f"DoIP activated: {active_state}")
time.sleep(2)
print("Deactivating DoIP Ethernet...")
device.set_digital_io(icsneopy.IO.EthernetActivation, 1, False)
time.sleep(1)
final_state = device.get_digital_io(icsneopy.IO.EthernetActivation, 1)
print(f"DoIP deactivated: {final_state}")
except Exception as e:
print(f"DoIP control error: {e}")
finally:
device.close()
if __name__ == "__main__":
doip_activation_demo()


@ -0,0 +1,127 @@
"""
Complete Ethernet example using icsneopy library.
Demonstrates device setup and Ethernet frame transmission/reception.
"""
import icsneopy
import time
def setup_device():
"""Initialize Ethernet device."""
devices = icsneopy.find_all_devices()
if not devices:
raise RuntimeError("No devices found")
device = devices[0]
print(f"Using device: {device}")
return device
def open_device(device):
"""Open device connection."""
try:
if not device.open():
raise RuntimeError("Failed to open device")
if not device.go_online():
device.close()
raise RuntimeError("Failed to go online")
print("Device initialized successfully")
return True
except Exception as e:
print(f"Device setup failed: {e}")
return False
def on_status_message(message):
print(f"info: network: {message.network}, state: {message.state}, speed: {message.speed}, duplex: {message.duplex}, mode: {message.mode}")
def setup_ethernet_reception(device):
"""Configure Ethernet frame reception with callback."""
frame_count = 0
def frame_handler(frame):
nonlocal frame_count
frame_count += 1
print(f"[RX {frame_count}], "
f"Data: {[hex(b) for b in frame.data]}, "
f"Length: {len(frame.data)}")
frame_filter = icsneopy.MessageFilter(icsneopy.Network.NetID.ETHERNET_01)
callback = icsneopy.MessageCallback(frame_handler, frame_filter)
device.add_message_callback(callback)
print("Ethernet frame reception configured")
return 0
def transmit_ethernet_frame(device):
"""Transmit an Ethernet frame."""
frame = icsneopy.EthernetMessage()
frame.network = icsneopy.Network(icsneopy.Network.NetID.ETHERNET_01)
frame.data = [
0x00, 0xFC, 0x70, 0x00, 0x01, 0x02,
0x00, 0xFC, 0x70, 0x00, 0x01, 0x01,
0x08, 0x00,
0x01, 0xC5, 0x01, 0xC5
]
success = device.transmit(frame)
if success:
print("Frame transmitted successfully")
else:
print("Failed to transmit frame")
return success
def cleanup_device(device):
"""Close device connection."""
if device:
device.close()
print("Device connection closed")
def main():
"""Complete Ethernet example"""
device = None
try:
# Initialize device
device = setup_device()
# Open device
if not open_device(device):
raise RuntimeError("Failed to initialize device")
filter = icsneopy.MessageFilter(icsneopy.Message.Type.EthernetStatus)
status_callback = icsneopy.MessageCallback(on_status_message, filter)
device.add_message_callback(status_callback)
#Setup Ethernet Callback
setup_ethernet_reception(device)
# Transmit an Ethernet frame
transmit_result = transmit_ethernet_frame(device)
if not transmit_result:
print("Warning: Failed to transmit frame")
# Monitor for a period
print("Monitoring for 10 seconds...")
time.sleep(10)
print("Monitoring completed.")
except Exception as e:
print(f"Error: {e}")
return 1
finally:
cleanup_device(device)
return 0
if __name__ == "__main__":
main()


@ -0,0 +1,37 @@
"""
Ethernet status monitoring example using icsneopy library.
Demonstrates how to monitor Ethernet link status changes.
"""
import icsneopy
import time
def main():
devices = icsneopy.find_all_devices()
if len(devices) == 0:
print("error: no devices found")
return False
device = devices[0]
print(f"info: monitoring Ethernet status on {device}")
def on_message(message):
print(f"info: network: {message.network}, state: {message.state}, speed: {message.speed}, duplex: {message.duplex}, mode: {message.mode}")
filter = icsneopy.MessageFilter(icsneopy.Message.Type.EthernetStatus)
callback = icsneopy.MessageCallback(on_message, filter)
device.add_message_callback(callback)
if not device.open():
print("error: unable to open device")
return False
if not device.go_online():
print("error: unable to go online")
return False
while True:
time.sleep(1)
main()


@ -0,0 +1,45 @@
"""
Basic Ethernet frame transmission example using icsneopy library.
Demonstrates how to transmit Ethernet frames on Ethernet 01.
"""
import icsneopy
def transmit_ethernet_frame():
"""Transmit an Ethernet frame."""
devices = icsneopy.find_all_devices()
if not devices:
raise RuntimeError("No devices found")
device = devices[0]
try:
if not device.open():
raise RuntimeError("Failed to open device")
if not device.go_online():
raise RuntimeError("Failed to go online")
frame = icsneopy.EthernetMessage()
frame.network = icsneopy.Network(icsneopy.Network.NetID.ETHERNET_01)
frame.data = [
0x00, 0xFC, 0x70, 0x00, 0x01, 0x02,
0x00, 0xFC, 0x70, 0x00, 0x01, 0x01,
0x08, 0x00,
0x01, 0xC5, 0x01, 0xC5
]
success = device.transmit(frame)
if success:
print("Frame transmitted successfully")
else:
print("Failed to transmit frame")
finally:
device.close()
if __name__ == "__main__":
transmit_ethernet_frame()


@ -0,0 +1,85 @@
import icsneopy
import argparse
def main():
parser = get_parser()
args = parser.parse_args()
run_test(args)
def find_device(serial: str) -> icsneopy.Device:
devices = icsneopy.find_all_devices()
for d in devices:
if d.get_serial() == serial:
print(f"opening device {serial}")
return d
return None
def run_test(args):
# find the device
d = find_device(args.serial)
if d is None:
print(f"error: unable to find device {args.serial}")
exit(1)
# open the device
if not d.open():
print(f"error: unable to open device {args.serial}")
exit(1)
# check if TC10 is supported
if not d.supports_tc10():
print(f"error: device does not support TC10 {args.serial}")
exit(1)
# send the request on all networks
for n in args.networks:
net = getattr(icsneopy.Network.NetID, n)
if args.send_wake:
print(f"requesting TC10 wake on network {net}")
if not d.request_tc10_wake(net):
print(f"error: unable to send TC10 wake on device {args.serial}")
exit(1)
elif args.send_sleep:
print(f"requesting TC10 sleep on network {net}")
if not d.request_tc10_sleep(net):
print(f"error: unable to send TC10 sleep on device {args.serial}")
exit(1)
# close the device
print(f"closing device {args.serial}")
d.close()
def get_parser():
parser = argparse.ArgumentParser(description="TC10 wake request")
parser.add_argument(
"serial",
help="The serial number of the device",
)
parser.add_argument(
"--networks",
nargs="+",
help="List of icsneopy networks to use. Multiple networks accepted, e.g. '--networks ETHERNET_01 AE_01'",
required=True,
)
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument(
"--send-wake",
help="Trigger TC10 wake on the selected networks",
action="store_true",
default=False,
)
group.add_argument(
"--send-sleep",
help="Trigger TC10 sleep on the selected networks",
action="store_true",
default=False,
)
return parser
if __name__ == "__main__":
main()


@ -53,6 +53,7 @@ public:
NotSupported = 0x1017,
FixedPointOverflow = 0x1018,
FixedPointPrecision = 0x1019,
SyscallError = 0x1020, // check errno/GetLastError() for details
// Device Events
PollingMessageOverflow = 0x2000,
@ -134,41 +135,6 @@ public:
SendToError = 0x3109,
MDIOMessageExceedsMaxLength = 0x3110,
// VSA
VSABufferCorrupted = 0x5000,
VSATimestampNotFound = VSABufferCorrupted + 1,
@ -191,6 +157,13 @@ public:
ServdNoDataError = ServdBindError + 9,
ServdJoinMulticastError = ServdBindError + 10,
// DXX
DXXErrorSys = 0x6100,
DXXErrorInt = 0x6101,
DXXErrorOverflow = 0x6102,
DXXErrorIO = 0x6103,
DXXErrorArg = 0x6104,
NoErrorFound = 0xFFFFFFFD,
TooManyEvents = 0xFFFFFFFE,
Unknown = 0xFFFFFFFF


@ -20,7 +20,7 @@ public:
EventCallback(std::shared_ptr<EventFilter> f, fn_eventCallback cb) : callback(cb), filter(f) {}
EventCallback(EventFilter f, fn_eventCallback cb) : callback(cb), filter(std::make_shared<EventFilter>(f)) {}
bool callIfMatch(const std::shared_ptr<APIEvent>& event) const {
bool ret = filter->match(*event);
if(ret)
callback(event);


@ -0,0 +1,53 @@
#ifndef __PERIODIC_H__
#define __PERIODIC_H__
#ifdef __cplusplus
#include <condition_variable>
#include <mutex>
#include <functional>
#include <chrono>
#include <thread>
namespace icsneo {
class Periodic {
public:
using Callback = std::function<bool(void)>;
Periodic(Callback&& callback, const std::chrono::milliseconds& period) :
thread(&Periodic::loop, this, std::move(callback), period)
{}
~Periodic() {
{
std::scoped_lock lk(mutex);
stop = true;
}
cv.notify_all();
thread.join();
}
private:
void loop(Callback&& callback, const std::chrono::milliseconds& period) {
while (true) {
{
std::unique_lock lk(mutex);
cv.wait_for(lk, period, [&]{ return stop; });
if(stop) {
break;
}
}
if (!callback()) {
break;
}
}
}
bool stop = false;
std::condition_variable cv;
std::mutex mutex;
std::thread thread;
};
} // icsneo
#endif // __cplusplus
#endif // __PERIODIC_H__
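`Device::goOnline` uses this helper as a com keepalive, re-invoking the enable command every `onlineTimeoutMs / 4` (1250 ms for the 5000 ms timeout). A rough Python analogue of the class, for readers sketching the same pattern outside C++ (not part of the library's API):

```python
import threading

class Periodic:
    """Invoke `callback` every `period` seconds until it returns False or stop()."""

    def __init__(self, callback, period: float):
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop, args=(callback, period), daemon=True)
        self._thread.start()

    def _loop(self, callback, period):
        # Event.wait doubles as an interruptible sleep, mirroring the
        # cv.wait_for(lk, period, ...) call in the C++ version above.
        while not self._stop.wait(period):
            if not callback():
                break

    def stop(self):
        # Analogue of the C++ destructor: signal the loop, then join.
        self._stop.set()
        self._thread.join()
```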


@ -16,6 +16,7 @@
#include <chrono>
#include "icsneo/api/eventmanager.h"
#include "icsneo/api/lifetime.h"
#include "icsneo/api/periodic.h"
#include "icsneo/device/neodevice.h"
#include "icsneo/device/idevicesettings.h"
#include "icsneo/device/nullsettings.h"
@ -1037,7 +1038,10 @@ private:
*/
std::optional<uint64_t> getVSADiskSize();
bool enableNetworkCommunication(bool enable, uint32_t timeout = 0);
// Keeponline (keepalive for online)
std::unique_ptr<Periodic> keeponline;
};
}


@ -11,7 +11,7 @@ namespace icsneo {
class NeoVIFIRE : public Device {
public:
// USB PID is 0x0701, standard driver is FTDI
// USB PID is 0x0701, standard driver is DXX
ICSNEO_FINDABLE_DEVICE_BY_PID(NeoVIFIRE, DeviceType::FIRE, 0x0701);
static const std::vector<Network>& GetSupportedNetworks() {


@ -12,7 +12,7 @@ namespace icsneo {
class NeoVIFIRE2 : public Device {
public:
// Serial numbers start with CY
// USB PID is 0x1000, standard driver is FTDI
// USB PID is 0x1000, standard driver is DXX
// Ethernet MAC allocation is 0x04, standard driver is Raw
ICSNEO_FINDABLE_DEVICE(NeoVIFIRE2, DeviceType::FIRE2, "CY");


@ -12,7 +12,7 @@ namespace icsneo {
class NeoVIION : public Plasion {
public:
// USB PID is 0x0901, standard driver is FTDI
// USB PID is 0x0901, standard driver is DXX
ICSNEO_FINDABLE_DEVICE_BY_PID(NeoVIION, DeviceType::ION, 0x0901);
private:


@ -10,7 +10,7 @@ namespace icsneo {
class NeoVIPLASMA : public Plasion {
public:
// USB PID is 0x0801, standard driver is FTDI
// USB PID is 0x0801, standard driver is DXX
ICSNEO_FINDABLE_DEVICE_BY_PID(NeoVIPLASMA, DeviceType::PLASMA, 0x0801);
private:


@ -15,7 +15,7 @@ namespace icsneo {
class RADA2B : public Device {
public:
// Serial numbers start with AB
// USB PID is 0x0006, standard driver is FTDI
// USB PID is 0x0006, standard driver is DXX
// Ethernet MAC allocation is 0x18, standard driver is Raw
ICSNEO_FINDABLE_DEVICE(RADA2B, DeviceType::RAD_A2B, "AB");


@ -12,7 +12,7 @@ class RADComet : public RADCometBase {
public:
// Serial numbers start with RC
// USB PID is 0x1207, standard driver is FTDI3
// USB PID is 0x1207, standard driver is DXX
// Ethernet MAC allocation is 0x1D, standard driver is Raw
ICSNEO_FINDABLE_DEVICE_BY_SERIAL_RANGE(RADComet, DeviceType::RADComet, "RC0000", "RC0299");


@ -12,7 +12,7 @@ class RADComet2 : public RADCometBase {
public:
// Serial numbers start with RC, Comet2 starts at RC0300
// USB PID is 0x1207, standard driver is FTDI3
// USB PID is 0x1207, standard driver is DXX
// Ethernet MAC allocation is 0x1D, standard driver is Raw
ICSNEO_FINDABLE_DEVICE_BY_SERIAL_RANGE(RADComet2, DeviceType::RADComet, "RC0300", "RCZZZZ");


@ -12,7 +12,7 @@ class RADComet3 : public Device {
public:
// Serial numbers start with C3
// USB PID is 0x1208, standard driver is FTDI3
// USB PID is 0x1208, standard driver is DXX
// Ethernet MAC allocation is 0x20, standard driver is Raw
ICSNEO_FINDABLE_DEVICE(RADComet3, DeviceType::RADComet3, "C3");


@ -14,7 +14,7 @@ namespace icsneo {
class RADGigastar : public Device {
public:
// Serial numbers start with GS
// USB PID is 0x1204, standard driver is FTDI3
// USB PID is 0x1204, standard driver is DXX
// Ethernet MAC allocation is 0x0F, standard driver is Raw
ICSNEO_FINDABLE_DEVICE(RADGigastar, DeviceType::RADGigastar, "GS");


@ -16,7 +16,7 @@ namespace icsneo
{
public:
// Serial numbers start with GT
// USB PID is 0x1210, standard driver is FTDI3
// USB PID is 0x1210, standard driver is DXX
// Ethernet MAC allocation is 0x22, standard driver is Raw
ICSNEO_FINDABLE_DEVICE(RADGigastar2, DeviceType::RADGigastar2, "GT");


@ -14,7 +14,7 @@ namespace icsneo {
class RADMars : public Device {
public:
// Serial numbers start with GL (previously, RAD-Gigalog)
// USB PID is 0x1203, standard driver is FTDI3
// USB PID is 0x1203, standard driver is DXX
// Ethernet MAC allocation is 0x0A, standard driver is Raw
ICSNEO_FINDABLE_DEVICE(RADMars, DeviceType::RADMars, "GL");


@ -11,7 +11,7 @@ namespace icsneo {
class RADMoon2 : public RADMoon2Base {
public:
// Serial numbers start with RM
// USB PID is 0x1202, standard driver is FTDI3
// USB PID is 0x1202, standard driver is DXX
ICSNEO_FINDABLE_DEVICE(RADMoon2, DeviceType::RADMoon2, "RM");
uint8_t getPhyAddrOrPort() const override { return 6; };


@ -12,7 +12,7 @@ class RADMoonT1S : public Device {
public:
// Serial numbers start with MS
// USB PID is 0x1209, standard driver is FTDI3
// USB PID is 0x1209, standard driver is DXX
// Ethernet MAC allocation is 0x21, standard driver is Raw
ICSNEO_FINDABLE_DEVICE(RADMoonT1S, DeviceType::RADMoonT1S, "MS");


@ -12,7 +12,7 @@ namespace icsneo {
class RADStar2 : public Device {
public:
// Serial numbers start with RS
// USB PID is 0x0005, standard driver is FTDI
// USB PID is 0x0005, standard driver is DXX
// Ethernet MAC allocation is 0x05, standard driver is Raw
ICSNEO_FINDABLE_DEVICE(RADStar2, DeviceType::RADStar2, "RS");


@ -12,7 +12,7 @@ namespace icsneo {
class RADSupermoon : public Device {
public:
// Serial numbers start with SM
// USB PID is 0x1201, standard driver is FTDI3
// USB PID is 0x1201, standard driver is DXX
ICSNEO_FINDABLE_DEVICE(RADSupermoon, DeviceType::RADSupermoon, "SM");
enum class SKU {


@ -11,7 +11,7 @@ namespace icsneo {
class ValueCAN3 : public Device {
public:
// USB PID is 0x0601, standard driver is FTDI
// USB PID is 0x0601, standard driver is DXX
ICSNEO_FINDABLE_DEVICE_BY_PID(ValueCAN3, DeviceType::VCAN3, 0x0601);
static const std::vector<Network>& GetSupportedNetworks() {


@ -0,0 +1,39 @@
#ifndef __DXX_H_
#define __DXX_H_
#ifdef __cplusplus
#include "icsneo/communication/driver.h"
#include "icsneo/device/founddevice.h"
#include "libredxx/libredxx.h"
namespace icsneo {
class DXX : public Driver {
public:
static void Find(std::vector<FoundDevice>& found);
DXX(const device_eventhandler_t& err, neodevice_t& forDevice, uint16_t pid, libredxx_device_type type);
bool open() override;
bool isOpen() override;
bool close() override;
private:
void read();
void write();
neodevice_t neodevice;
uint16_t pid;
libredxx_device_type type;
libredxx_opened_device* device = nullptr;
std::thread readThread;
std::thread writeThread;
};
}
#endif // __cplusplus
#endif // __DXX_H_


@ -1,35 +0,0 @@
#ifndef __FTD3XX_H_
#define __FTD3XX_H_
#ifdef __cplusplus
#include <optional>
#include "icsneo/communication/driver.h"
#include "icsneo/device/founddevice.h"
namespace icsneo {
class FTD3XX : public Driver {
public:
static void Find(std::vector<FoundDevice>& foundDevices);
FTD3XX(const device_eventhandler_t& err, neodevice_t& forDevice);
~FTD3XX() override { if(isOpen()) close(); }
bool open() override;
bool isOpen() override;
bool close() override;
bool isEthernet() const override { return false; }
private:
neodevice_t& device;
std::optional<void*> handle;
std::thread readThread, writeThread;
void readTask();
void writeTask();
};
}
#endif // __cplusplus
#endif


@ -1,14 +0,0 @@
#ifndef __FTDI_H_
#define __FTDI_H_
#define INTREPID_USB_VENDOR_ID (0x093c)
#if defined _WIN32
#include "icsneo/platform/windows/ftdi.h"
#elif defined (__unix__) || (defined (__APPLE__) && defined (__MACH__))
#include "icsneo/platform/posix/ftdi.h"
#else
#warning "This platform is not supported by the FTDI driver"
#endif
#endif


@ -1,74 +0,0 @@
#ifndef __FTDI_POSIX_H_
#define __FTDI_POSIX_H_
#ifdef __cplusplus
#include <vector>
#include <memory>
#include <string>
#include <ftdi.h>
#include "icsneo/device/neodevice.h"
#include "icsneo/communication/driver.h"
#include "icsneo/third-party/concurrentqueue/blockingconcurrentqueue.h"
#include "icsneo/api/eventmanager.h"
namespace icsneo {
class FTDI : public Driver {
public:
static void Find(std::vector<FoundDevice>& found);
FTDI(const device_eventhandler_t& err, neodevice_t& forDevice);
~FTDI() { if(isOpen()) close(); }
bool open();
bool close();
bool isOpen() { return ftdi.isOpen(); }
private:
class FTDIContext {
public:
FTDIContext() : context(ftdi_new()) {}
~FTDIContext() {
if(context)
ftdi_free(context); // calls ftdi_deinit and ftdi_close if required
context = nullptr;
}
// A PID of 0 disables filtering by PID
std::pair<int, std::vector< std::pair<std::string, uint16_t> > > findDevices(int pid = 0);
int openDevice(int pid, const char* serial);
bool closeDevice();
bool isOpen() const { return deviceOpen; }
int flush() { return ftdi_usb_purge_buffers(context); }
int reset() { return ftdi_usb_reset(context); }
int read(uint8_t* data, size_t size) { return ftdi_read_data(context, data, (int)size); }
int write(const uint8_t* data, size_t size) { return ftdi_write_data(context, data, (int)size); }
int setBaudrate(int baudrate) { return ftdi_set_baudrate(context, baudrate); }
int setLatencyTimer(uint8_t latency) { return ftdi_set_latency_timer(context, latency); }
bool setReadTimeout(int timeout) { if(context == nullptr) return false; context->usb_read_timeout = timeout; return true; }
bool setWriteTimeout(int timeout) { if(context == nullptr) return false; context->usb_write_timeout = timeout; return true; }
private:
struct ftdi_context* context;
bool deviceOpen = false;
};
FTDIContext ftdi;
static std::vector<std::string> handles;
static bool ErrorIsDisconnection(int errorCode);
std::thread readThread, writeThread;
void readTask();
void writeTask();
bool openable; // Set to false in the constructor if the object has not been found in searchResultDevices
neodevice_t& device;
};
}
#endif // __cplusplus
#endif


@ -4,6 +4,8 @@
#ifdef __cplusplus
#ifdef _WIN32
#define WIN32_LEAN_AND_MEAN
#define NOMINMAX
#include <windows.h>
#include <winsock2.h>
#include <ws2tcpip.h>


@ -3,14 +3,34 @@
#ifdef __cplusplus
#include "icsneo/platform/windows/vcp.h"
#include "icsneo/communication/driver.h"
#include "icsneo/device/founddevice.h"
#define WIN32_LEAN_AND_MEAN
#define NOMINMAX
#include <windows.h>
#include <string>
namespace icsneo {
class CDCACM : public VCP {
class CDCACM : public Driver {
public:
CDCACM(const device_eventhandler_t& err, neodevice_t& forDevice) : VCP(err, forDevice) {}
static void Find(std::vector<FoundDevice>& found) { return VCP::Find(found, { L"usbser" }); }
CDCACM(const device_eventhandler_t& err, const std::wstring& path);
static void Find(std::vector<FoundDevice>& found);
bool open() override;
bool isOpen() override;
bool close() override;
private:
void read();
void write();
std::wstring path;
HANDLE handle = INVALID_HANDLE_VALUE;
std::thread readThread;
std::thread writeThread;
OVERLAPPED readOverlapped = {};
OVERLAPPED writeOverlapped = {};
};
}


@ -1,6 +1,8 @@
#ifndef __DYNAMICLIB_WINDOWS_H_
#define __DYNAMICLIB_WINDOWS_H_
#define WIN32_LEAN_AND_MEAN
#define NOMINMAX
#include <windows.h>
#ifndef ICSNEOC_BUILD_STATIC


@ -1,20 +0,0 @@
#ifndef __FTDI_WINDOWS_H_
#define __FTDI_WINDOWS_H_
#ifdef __cplusplus
#include "icsneo/platform/windows/vcp.h"
namespace icsneo {
class FTDI : public VCP {
public:
FTDI(const device_eventhandler_t& err, neodevice_t& forDevice) : VCP(err, forDevice) {}
static void Find(std::vector<FoundDevice>& found) { return VCP::Find(found, { L"serenum" /*, L"ftdibus" */ }); }
};
}
#endif // __cplusplus
#endif


@ -3,6 +3,8 @@
#ifdef __cplusplus
#define WIN32_LEAN_AND_MEAN
#define NOMINMAX
#include <windows.h>
#include <pcap.h>
#include <memory>


@ -1,49 +0,0 @@
#ifndef __VCP_WINDOWS_H_
#define __VCP_WINDOWS_H_
#ifdef __cplusplus
#include <vector>
#include <string>
#include <thread>
#include <atomic>
#include <chrono>
#include "icsneo/device/neodevice.h"
#include "icsneo/communication/driver.h"
#include "icsneo/api/eventmanager.h"
namespace icsneo {
// Virtual COM Port Communication
class VCP : public Driver {
public:
static void Find(std::vector<FoundDevice>& found, std::vector<std::wstring> driverName);
static bool IsHandleValid(neodevice_handle_t handle);
typedef void(*fn_boolCallback)(bool success);
VCP(const device_eventhandler_t& err, neodevice_t& forDevice);
virtual ~VCP();
bool open() { return open(false); }
void openAsync(fn_boolCallback callback);
bool close();
bool isOpen();
private:
bool open(bool fromAsync);
bool opening = false;
neodevice_t& device;
struct Detail;
std::shared_ptr<Detail> detail;
std::vector<std::shared_ptr<std::thread>> threads;
std::thread readThread, writeThread;
void readTask();
void writeTask();
};
}
#endif // __cplusplus
#endif

platform/dxx.cpp (new file, mode 100644, 194 lines)

@ -0,0 +1,194 @@
#include "icsneo/platform/dxx.h"
#define ICS_USB_VID 0x093C
using namespace icsneo;
static APIEvent::Type eventError(libredxx_status status) {
switch (status) {
case LIBREDXX_STATUS_ERROR_SYS: return APIEvent::Type::DXXErrorSys;
case LIBREDXX_STATUS_ERROR_INTERRUPTED: return APIEvent::Type::DXXErrorSys;
case LIBREDXX_STATUS_ERROR_OVERFLOW: return APIEvent::Type::DXXErrorSys;
case LIBREDXX_STATUS_ERROR_IO: return APIEvent::Type::DXXErrorSys;
case LIBREDXX_STATUS_ERROR_INVALID_ARGUMENT: return APIEvent::Type::DXXErrorSys;
default: return APIEvent::Type::Unknown;
}
}
void DXX::Find(std::vector<FoundDevice>& found) {
libredxx_status status;
static libredxx_find_filter filters[] = {
{ LIBREDXX_DEVICE_TYPE_D2XX, { ICS_USB_VID, 0x0005 } }, // RAD-Star 2
{ LIBREDXX_DEVICE_TYPE_D2XX, { ICS_USB_VID, 0x0006 } }, // RAD-A2B Rev A
{ LIBREDXX_DEVICE_TYPE_D2XX, { ICS_USB_VID, 0x1000 } }, // neoVI FIRE2
{ LIBREDXX_DEVICE_TYPE_D3XX, { ICS_USB_VID, 0x1201 } }, // RAD-SuperMoon
{ LIBREDXX_DEVICE_TYPE_D3XX, { ICS_USB_VID, 0x1202 } }, // RAD-Moon2
{ LIBREDXX_DEVICE_TYPE_D3XX, { ICS_USB_VID, 0x1203 } }, // RAD-Gigalog
{ LIBREDXX_DEVICE_TYPE_D3XX, { ICS_USB_VID, 0x1204 } }, // RAD-Gigastar
{ LIBREDXX_DEVICE_TYPE_D3XX, { ICS_USB_VID, 0x1206 } }, // RAD-A2B Rev B
{ LIBREDXX_DEVICE_TYPE_D3XX, { ICS_USB_VID, 0x1207 } }, // RAD-Comet
{ LIBREDXX_DEVICE_TYPE_D3XX, { ICS_USB_VID, 0x1208 } }, // RAD-Comet3
{ LIBREDXX_DEVICE_TYPE_D3XX, { ICS_USB_VID, 0x1209 } }, // RAD-MoonT1S
{ LIBREDXX_DEVICE_TYPE_D3XX, { ICS_USB_VID, 0x1210 } }, // RAD-Gigastar 2
};
static size_t filterCount = sizeof(filters) / sizeof(filters[0]);
libredxx_found_device** foundDevices = nullptr;
size_t foundDevicesCount;
status = libredxx_find_devices(filters, filterCount, &foundDevices, &foundDevicesCount);
if(status != LIBREDXX_STATUS_SUCCESS) {
EventManager::GetInstance().add(eventError(status), APIEvent::Severity::Error);
return;
}
if(foundDevicesCount == 0) {
return;
}
for(size_t i = 0; i < foundDevicesCount; ++i) {
libredxx_found_device* foundDevice = foundDevices[i];
libredxx_serial serial = {};
status = libredxx_get_serial(foundDevice, &serial);
if(status != LIBREDXX_STATUS_SUCCESS) {
EventManager::GetInstance().add(eventError(status), APIEvent::Severity::Error);
continue;
}
libredxx_device_id id;
status = libredxx_get_device_id(foundDevice, &id);
if(status != LIBREDXX_STATUS_SUCCESS) {
EventManager::GetInstance().add(eventError(status), APIEvent::Severity::Error);
continue;
}
libredxx_device_type type;
status = libredxx_get_device_type(foundDevice, &type);
if(status != LIBREDXX_STATUS_SUCCESS) {
EventManager::GetInstance().add(eventError(status), APIEvent::Severity::Error);
continue;
}
auto& device = found.emplace_back();
std::copy(serial.serial, serial.serial + sizeof(device.serial), device.serial);
device.makeDriver = [id, type](device_eventhandler_t err, neodevice_t& forDevice) {
return std::make_unique<DXX>(err, forDevice, id.pid, type);
};
}
libredxx_free_found(foundDevices);
}
DXX::DXX(const device_eventhandler_t& err, neodevice_t& forDevice, uint16_t pid, libredxx_device_type type) :
Driver(err), neodevice(forDevice), pid(pid), type(type) {
}
bool DXX::open() {
libredxx_status status;
libredxx_find_filter filters[] = {
{ (libredxx_device_type)type, { ICS_USB_VID, pid } }
};
libredxx_found_device** foundDevices = nullptr;
size_t foundDevicesCount;
status = libredxx_find_devices(filters, 1, &foundDevices, &foundDevicesCount);
if(status != LIBREDXX_STATUS_SUCCESS) {
EventManager::GetInstance().add(eventError(status), APIEvent::Severity::Error);
return false;
}
if(foundDevicesCount == 0) {
EventManager::GetInstance().add(APIEvent::Type::DeviceDisconnected, APIEvent::Severity::Error);
return false;
}
libredxx_found_device* foundDevice = nullptr;
for(size_t i = 0; i < foundDevicesCount; ++i) {
libredxx_serial serial = {};
status = libredxx_get_serial(foundDevices[i], &serial);
if(status != LIBREDXX_STATUS_SUCCESS) {
EventManager::GetInstance().add(eventError(status), APIEvent::Severity::EventWarning);
continue;
}
if(strcmp(serial.serial, neodevice.serial) == 0) {
foundDevice = foundDevices[i];
break;
}
}
if(foundDevice == nullptr) {
EventManager::GetInstance().add(APIEvent::Type::DeviceDisconnected, APIEvent::Severity::Error);
libredxx_free_found(foundDevices);
return false;
}
status = libredxx_open_device(foundDevice, &device);
if(status != LIBREDXX_STATUS_SUCCESS) {
EventManager::GetInstance().add(eventError(status), APIEvent::Severity::Error);
libredxx_free_found(foundDevices);
return false;
}
libredxx_free_found(foundDevices);
setIsDisconnected(false);
readThread = std::thread(&DXX::read, this);
writeThread = std::thread(&DXX::write, this);
return true;
}
bool DXX::isOpen() {
return device != nullptr;
}
bool DXX::close() {
setIsClosing(true);
libredxx_close_device(device); // unblock read thread & close
writeQueue.enqueue(WriteOperation{}); // unblock write thread
readThread.join();
writeThread.join();
device = nullptr;
setIsClosing(false);
return true;
}
void DXX::read() {
EventManager::GetInstance().downgradeErrorsOnCurrentThread();
std::vector<uint8_t> buffer(ICSNEO_DRIVER_RINGBUFFER_SIZE);
while(!isDisconnected() && !isClosing()) {
size_t received = buffer.size();
const auto status = libredxx_read(device, buffer.data(), &received);
if(isDisconnected() || isClosing()) {
return;
}
if(status != LIBREDXX_STATUS_SUCCESS) {
EventManager::GetInstance().add(eventError(status), APIEvent::Severity::Error);
setIsDisconnected(true);
return;
}
while(!isDisconnected() && !isClosing()) {
if(pushRx(buffer.data(), received))
break;
}
}
}
void DXX::write() {
EventManager::GetInstance().downgradeErrorsOnCurrentThread();
WriteOperation writeOp;
while(!isDisconnected() && !isClosing()) {
writeQueue.wait_dequeue(writeOp);
if(isDisconnected() || isClosing()) {
return;
}
for(size_t totalWritten = 0; totalWritten < writeOp.bytes.size();) {
size_t size = writeOp.bytes.size() - totalWritten;
const auto status = libredxx_write(device, &writeOp.bytes[totalWritten], &size);
if(isDisconnected() || isClosing()) {
return;
}
if(status != LIBREDXX_STATUS_SUCCESS) {
EventManager::GetInstance().add(eventError(status), APIEvent::Severity::Error);
setIsDisconnected(true);
return;
}
totalWritten += size;
}
}
}


@ -1,182 +0,0 @@
#include <vector>
#include "icsneo/api/eventmanager.h"
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable : 4091)
#endif
#define FTD3XX_STATIC
#include <ftd3xx.h>
#ifdef _MSC_VER
#pragma warning(pop)
#endif
#include "icsneo/platform/ftd3xx.h"
static constexpr auto READ_PIPE_ID = 0x82;
static constexpr auto WRITE_PIPE_ID = 0x02;
using namespace icsneo;
static void addEvent(FT_STATUS status, APIEvent::Severity severity) {
const auto internalEvent = static_cast<uint32_t>(APIEvent::Type::FTOK) + status;
EventManager::GetInstance().add(APIEvent((APIEvent::Type)internalEvent, severity));
}
void FTD3XX::Find(std::vector<FoundDevice>& found) {
DWORD count;
if(const auto ret = FT_CreateDeviceInfoList(&count); ret != FT_OK) {
addEvent(ret, APIEvent::Severity::EventWarning);
return;
}
if(count == 0) {
return;
}
std::vector<FT_DEVICE_LIST_INFO_NODE> devices(count);
if(const auto ret = FT_GetDeviceInfoList(devices.data(), &count); ret != FT_OK) {
addEvent(ret, APIEvent::Severity::EventWarning);
return;
}
for(const auto& dev : devices) {
FoundDevice foundDevice = {};
std::copy(dev.SerialNumber, dev.SerialNumber + sizeof(foundDevice.serial), foundDevice.serial);
foundDevice.makeDriver = [](const device_eventhandler_t& eh, neodevice_t& forDevice) {
return std::unique_ptr<Driver>(new FTD3XX(eh, forDevice));
};
found.push_back(std::move(foundDevice));
}
}
FTD3XX::FTD3XX(const device_eventhandler_t& err, neodevice_t& forDevice) : Driver(err), device(forDevice) {
}
bool FTD3XX::open() {
if(isOpen()) {
report(APIEvent::Type::DeviceCurrentlyOpen, APIEvent::Severity::Error);
return false;
}
void* tmpHandle;
if(const auto ret = FT_Create(device.serial, FT_OPEN_BY_SERIAL_NUMBER, &tmpHandle); ret != FT_OK) {
addEvent(ret, APIEvent::Severity::Error);
return false;
}
handle.emplace(tmpHandle);
setIsClosing(false);
setIsDisconnected(false);
readThread = std::thread(&FTD3XX::readTask, this);
writeThread = std::thread(&FTD3XX::writeTask, this);
return true;
}
bool FTD3XX::isOpen() {
return handle.has_value();
}
bool FTD3XX::close() {
if(!isOpen() && !isDisconnected()) {
report(APIEvent::Type::DeviceCurrentlyClosed, APIEvent::Severity::Error);
return false;
}
setIsClosing(true);
// unblock the read thread
FT_AbortPipe(*handle, READ_PIPE_ID);
if(readThread.joinable())
readThread.join();
if(writeThread.joinable())
writeThread.join();
clearBuffers();
if(const auto ret = FT_Close(*handle); ret != FT_OK) {
addEvent(ret, APIEvent::Severity::EventWarning);
}
handle.reset();
setIsClosing(false);
return true;
}
void FTD3XX::readTask() {
EventManager::GetInstance().downgradeErrorsOnCurrentThread();
std::vector<uint8_t> buffer(2 * 1024 * 1024);
FT_SetStreamPipe(*handle, false, false, READ_PIPE_ID, (ULONG)buffer.size());
// disable timeouts; we will interrupt the read thread with AbortPipe
FT_SetPipeTimeout(*handle, READ_PIPE_ID, 0);
OVERLAPPED overlapped = {};
FT_InitializeOverlapped(*handle, &overlapped);
FT_STATUS status;
ULONG received = 0;
while(!isClosing() && !isDisconnected()) {
received = 0;
#ifdef _WIN32
status = FT_ReadPipe(*handle, READ_PIPE_ID, buffer.data(), (ULONG)buffer.size(), &received, &overlapped);
#else
status = FT_ReadPipeAsync(*handle, 0, buffer.data(), buffer.size(), &received, &overlapped);
#endif
if(FT_FAILED(status)) {
if(status != FT_IO_PENDING) {
addEvent(status, APIEvent::Severity::Error);
setIsDisconnected(true);
break;
}
status = FT_GetOverlappedResult(*handle, &overlapped, &received, true);
if(FT_FAILED(status)) {
addEvent(status, APIEvent::Severity::Error);
setIsDisconnected(true);
break;
}
if(received > 0) {
pushRx(buffer.data(), received);
}
}
}
FT_ReleaseOverlapped(*handle, &overlapped);
}
void FTD3XX::writeTask() {
EventManager::GetInstance().downgradeErrorsOnCurrentThread();
FT_SetPipeTimeout(*handle, WRITE_PIPE_ID, 0);
WriteOperation writeOp;
ULONG sent;
FT_STATUS status;
while(!isClosing() && !isDisconnected()) {
if(!writeQueue.wait_dequeue_timed(writeOp, std::chrono::milliseconds(100)))
continue;
const auto size = static_cast<ULONG>(writeOp.bytes.size());
sent = 0;
#ifdef _WIN32
status = FT_WritePipe(*handle, WRITE_PIPE_ID, writeOp.bytes.data(), size, &sent, nullptr);
#else
status = FT_WritePipe(*handle, WRITE_PIPE_ID, writeOp.bytes.data(), size, &sent, 100);
#endif
if(FT_FAILED(status)) {
addEvent(status, APIEvent::Severity::Error);
setIsDisconnected(true);
break;
}
if(sent != size) {
report(APIEvent::Type::DeviceDisconnected, APIEvent::Severity::Error);
setIsDisconnected(true);
break;
}
}
}


@ -1,242 +0,0 @@
#include "icsneo/platform/ftdi.h"
#include "icsneo/device/founddevice.h"
#include <iostream>
#include <stdio.h>
#include <cstring>
#include <memory>
#include <utility>
#include <cctype>
#include <algorithm>
#include <libusb.h>
using namespace icsneo;
std::vector<std::string> FTDI::handles;
void FTDI::Find(std::vector<FoundDevice>& found) {
constexpr size_t deviceSerialBufferLength = sizeof(device.serial);
static FTDIContext context;
const auto result = context.findDevices();
if(result.first < 0)
return; // TODO Flag an error for the client application, there was an issue with FTDI
for(const auto& [serial, pid] : result.second) {
FoundDevice d;
strncpy(d.serial, serial.c_str(), deviceSerialBufferLength - 1);
d.serial[deviceSerialBufferLength - 1] = '\0'; // strncpy does not write a null terminator if serial is too long
for(size_t i = 0; i < deviceSerialBufferLength - 1; i++)
d.serial[i] = toupper(serial[i]);
std::string devHandle = serial;
auto it = std::find(handles.begin(), handles.end(), devHandle);
size_t foundHandle = SIZE_MAX;
if(it != handles.end()) {
foundHandle = it - handles.begin();
} else {
foundHandle = handles.size();
handles.push_back(devHandle);
}
d.handle = foundHandle;
d.productId = pid;
d.makeDriver = [](const device_eventhandler_t& report, neodevice_t& device) {
return std::unique_ptr<Driver>(new FTDI(report, device));
};
found.push_back(d);
}
}
FTDI::FTDI(const device_eventhandler_t& err, neodevice_t& forDevice) : Driver(err), device(forDevice) {
openable = strlen(forDevice.serial) > 0 && device.handle >= 0 && device.handle < (neodevice_handle_t)handles.size();
}
bool FTDI::open() {
if(isOpen()) {
report(APIEvent::Type::DeviceCurrentlyOpen, APIEvent::Severity::Error);
return false;
}
if(!openable) {
report(APIEvent::Type::InvalidNeoDevice, APIEvent::Severity::Error);
return false;
}
// At this point the handle has been checked to be within the bounds of the handles array
auto& handle = handles[device.handle];
const int openError = ftdi.openDevice(0, handle.c_str());
if(openError == -5) { // Unable to claim device
report(APIEvent::Type::DeviceInUse, APIEvent::Severity::Error);
return false;
} else if(openError != 0) {
report(APIEvent::Type::DriverFailedToOpen, APIEvent::Severity::Error);
return false;
}
ftdi.setReadTimeout(100);
ftdi.setWriteTimeout(1000);
ftdi.reset();
ftdi.setBaudrate(500000);
ftdi.setLatencyTimer(1);
ftdi.flush();
// Create threads
setIsClosing(false);
readThread = std::thread(&FTDI::readTask, this);
writeThread = std::thread(&FTDI::writeTask, this);
return true;
}
bool FTDI::close() {
if(!isOpen() && !isDisconnected()) {
report(APIEvent::Type::DeviceCurrentlyClosed, APIEvent::Severity::Error);
return false;
}
setIsClosing(true);
if(readThread.joinable())
readThread.join();
if(writeThread.joinable())
writeThread.join();
bool ret = true;
if(!isDisconnected()) {
ret = ftdi.closeDevice();
if(!ret)
report(APIEvent::Type::DriverFailedToClose, APIEvent::Severity::Error);
}
clearBuffers();
setIsClosing(false);
setIsDisconnected(false);
return ret;
}
std::pair<int, std::vector< std::pair<std::string, uint16_t> > > FTDI::FTDIContext::findDevices(int pid) {
std::pair<int, std::vector< std::pair<std::string, uint16_t> > > ret;
if(context == nullptr) {
ret.first = -1;
return ret;
}
struct ftdi_device_list* devlist = nullptr;
ret.first = ftdi_usb_find_all(context, &devlist, INTREPID_USB_VENDOR_ID, pid);
if(ret.first < 1) {
// Didn't find anything, maybe got an error
if(devlist != nullptr)
ftdi_list_free(&devlist);
return ret;
}
if(devlist == nullptr) {
ret.first = -4;
return ret;
}
for(struct ftdi_device_list* curdev = devlist; curdev != nullptr; curdev = curdev->next) {
struct libusb_device_descriptor descriptor = {};
// Check against bDeviceClass here as it will be 0 for FTDI devices
// It will be 2 for CDC ACM devices, which we don't want to handle here
if(libusb_get_device_descriptor(curdev->dev, &descriptor) != 0 || descriptor.bDeviceClass != 0)
continue;
char serial[16] = {};
if(ftdi_usb_get_strings(context, curdev->dev, nullptr, 0, nullptr, 0, serial, sizeof(serial)) < 0)
continue;
const auto len = strnlen(serial, sizeof(serial));
if(len > 4 && len < 10)
ret.second.emplace_back(serial, descriptor.idProduct);
}
ret.first = static_cast<int>(ret.second.size());
ftdi_list_free(&devlist);
return ret;
}
int FTDI::FTDIContext::openDevice(int pid, const char* serial) {
if(context == nullptr)
return 1;
if(serial == nullptr)
return 2;
if(serial[0] == '\0')
return 3;
if(deviceOpen)
return 4;
int ret = ftdi_usb_open_desc(context, INTREPID_USB_VENDOR_ID, pid, nullptr, serial);
if(ret == 0 /* all ok */)
deviceOpen = true;
return ret;
}
bool FTDI::FTDIContext::closeDevice() {
if(context == nullptr)
return false;
if(!deviceOpen)
return true;
int ret = ftdi_usb_close(context);
if(ret != 0)
return false;
deviceOpen = false;
return true;
}
bool FTDI::ErrorIsDisconnection(int errorCode) {
return errorCode == LIBUSB_ERROR_NO_DEVICE ||
errorCode == LIBUSB_ERROR_PIPE ||
errorCode == LIBUSB_ERROR_IO;
}
void FTDI::readTask() {
constexpr size_t READ_BUFFER_SIZE = 8;
uint8_t readbuf[READ_BUFFER_SIZE];
EventManager::GetInstance().downgradeErrorsOnCurrentThread();
while(!isClosing() && !isDisconnected()) {
auto readBytes = ftdi.read(readbuf, READ_BUFFER_SIZE);
if(readBytes < 0) {
if(ErrorIsDisconnection(readBytes)) {
if(!isDisconnected()) {
setIsDisconnected(true);
report(APIEvent::Type::DeviceDisconnected, APIEvent::Severity::Error);
}
} else
report(APIEvent::Type::FailedToRead, APIEvent::Severity::EventWarning);
} else
pushRx(readbuf, readBytes);
}
}
void FTDI::writeTask() {
WriteOperation writeOp;
EventManager::GetInstance().downgradeErrorsOnCurrentThread();
while(!isClosing() && !isDisconnected()) {
if(!writeQueue.wait_dequeue_timed(writeOp, std::chrono::milliseconds(100)))
continue;
size_t offset = 0;
while(offset < writeOp.bytes.size()) {
auto writeBytes = ftdi.write(writeOp.bytes.data() + offset, (int)writeOp.bytes.size() - offset);
if(writeBytes < 0) {
if(ErrorIsDisconnection(writeBytes)) {
if(!isDisconnected()) {
setIsDisconnected(true);
report(APIEvent::Type::DeviceDisconnected, APIEvent::Severity::Error);
}
break;
} else
report(APIEvent::Type::FailedToWrite, APIEvent::Severity::EventWarning);
} else
offset += writeBytes;
}
}
}


@ -2,6 +2,8 @@
#include <string_view>
#include <cstdlib>
using namespace icsneo;
#define SERVD_VERSION 1
@ -10,17 +12,25 @@ static const Address SERVD_ADDRESS = Address("127.0.0.1", 26741);
static const std::string SERVD_VERSION_STR = std::to_string(SERVD_VERSION);
bool Servd::Enabled() {
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable : 4996)
#endif
char* enabled = std::getenv("LIBICSNEO_USE_SERVD");
#ifdef _MSC_VER
#pragma warning(pop)
#endif
return enabled ? enabled[0] == '1' : false;
}
std::vector<std::string> split(const std::string_view& str, char delim = ' ')
{
std::vector<std::string> split(const std::string_view& str, char delim = ' ') {
if(str.empty())
return {};
std::vector<std::string> ret;
size_t tail = 0;
size_t head = 0;
while (head < str.size()) {
if (str[head] == delim) {
while(head < str.size()) {
if(str[head] == delim) {
ret.emplace_back(&str[tail], head - tail);
tail = head + 1;
}


@ -0,0 +1,233 @@
#include "icsneo/platform/windows/cdcacm.h"
#include <setupapi.h>
#include <initguid.h>
#include <usbiodef.h>
#include <devpkey.h>
using namespace icsneo;
CDCACM::CDCACM(const device_eventhandler_t& err, const std::wstring& path) : Driver(err), path(path) {
}
bool CDCACM::open() {
handle = CreateFileW(path.c_str(), GENERIC_READ | GENERIC_WRITE, 0, nullptr, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
if(handle == INVALID_HANDLE_VALUE) {
EventManager::GetInstance().add(APIEvent::Type::SyscallError, APIEvent::Severity::Error);
return false;
}
COMMTIMEOUTS timeouts;
timeouts.ReadIntervalTimeout = MAXDWORD;
timeouts.ReadTotalTimeoutMultiplier = MAXDWORD;
timeouts.ReadTotalTimeoutConstant = MAXDWORD - 1;
timeouts.WriteTotalTimeoutMultiplier = 0;
timeouts.WriteTotalTimeoutConstant = 0;
if(!SetCommTimeouts(handle, &timeouts)) {
EventManager::GetInstance().add(APIEvent::Type::SyscallError, APIEvent::Severity::Error);
CloseHandle(handle);
handle = INVALID_HANDLE_VALUE;
return false;
}
DCB comstate;
if(!GetCommState(handle, &comstate)) {
EventManager::GetInstance().add(APIEvent::Type::SyscallError, APIEvent::Severity::Error);
CloseHandle(handle);
handle = INVALID_HANDLE_VALUE;
return false;
}
comstate.BaudRate = 115200;
comstate.ByteSize = 8;
comstate.fRtsControl = RTS_CONTROL_DISABLE;
if(!SetCommState(handle, &comstate)) {
EventManager::GetInstance().add(APIEvent::Type::SyscallError, APIEvent::Severity::Error);
CloseHandle(handle);
handle = INVALID_HANDLE_VALUE;
return false;
}
PurgeComm(handle, PURGE_RXCLEAR);
readOverlapped.hEvent = CreateEventA(nullptr, false, false, nullptr);
writeOverlapped.hEvent = CreateEventA(nullptr, false, false, nullptr);
setIsDisconnected(false);
readThread = std::thread(&CDCACM::read, this);
writeThread = std::thread(&CDCACM::write, this);
return true;
}
bool CDCACM::isOpen() {
return handle != INVALID_HANDLE_VALUE;
}
bool CDCACM::close() {
setIsClosing(true);
SetEvent(readOverlapped.hEvent); // unblock read thread
SetEvent(writeOverlapped.hEvent); // unblock write thread if waiting on COM write
writeQueue.enqueue(WriteOperation{}); // unblock write thread if waiting on write queue pop
readThread.join();
writeThread.join();
CloseHandle(readOverlapped.hEvent);
CloseHandle(writeOverlapped.hEvent);
CloseHandle(handle);
handle = INVALID_HANDLE_VALUE;
setIsClosing(false);
return true;
}
void CDCACM::read() {
EventManager::GetInstance().downgradeErrorsOnCurrentThread();
std::vector<uint8_t> buffer(ICSNEO_DRIVER_RINGBUFFER_SIZE);
while(!isDisconnected() && !isClosing()) {
if(!ReadFile(handle, buffer.data(), (DWORD)buffer.size(), nullptr, &readOverlapped)) {
if(GetLastError() != ERROR_IO_PENDING) {
EventManager::GetInstance().add(APIEvent::Type::SyscallError, APIEvent::Severity::Error);
setIsDisconnected(true);
return;
}
}
DWORD read = 0;
if(!GetOverlappedResult(handle, &readOverlapped, &read, true)) {
EventManager::GetInstance().add(APIEvent::Type::SyscallError, APIEvent::Severity::Error);
setIsDisconnected(true);
return;
}
if(read == 0) {
continue;
}
while(!isDisconnected() && !isClosing()) {
if(pushRx(buffer.data(), read))
break;
}
}
}
void CDCACM::write() {
EventManager::GetInstance().downgradeErrorsOnCurrentThread();
WriteOperation writeOp;
while(!isDisconnected() && !isClosing()) {
writeQueue.wait_dequeue(writeOp);
if(isDisconnected() || isClosing()) {
return;
}
if(!WriteFile(handle, writeOp.bytes.data(), (DWORD)writeOp.bytes.size(), nullptr, &writeOverlapped)) {
if(GetLastError() != ERROR_IO_PENDING) {
EventManager::GetInstance().add(APIEvent::Type::SyscallError, APIEvent::Severity::Error);
setIsDisconnected(true);
return;
}
}
	DWORD written = 0;
if(!GetOverlappedResult(handle, &writeOverlapped, &written, true)) {
EventManager::GetInstance().add(APIEvent::Type::SyscallError, APIEvent::Severity::Error);
setIsDisconnected(true);
return;
}
if(written != writeOp.bytes.size()) {
EventManager::GetInstance().add(APIEvent::Type::FailedToWrite, APIEvent::Severity::Error);
setIsDisconnected(true);
return;
}
}
}
class DeviceInfo {
public:
DeviceInfo() {
mDeviceInfo = SetupDiGetClassDevsW(&GUID_DEVINTERFACE_USB_DEVICE, NULL, NULL, DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);
}
~DeviceInfo() {
SetupDiDestroyDeviceInfoList(mDeviceInfo);
}
operator HDEVINFO() const {
return mDeviceInfo;
}
operator bool() const {
return mDeviceInfo != INVALID_HANDLE_VALUE;
}
private:
HDEVINFO mDeviceInfo;
};
class DeviceInfoData {
public:
DeviceInfoData() {
mDeviceInfoData.cbSize = sizeof(SP_DEVINFO_DATA);
}
operator SP_DEVINFO_DATA*() {
return &mDeviceInfoData;
}
private:
SP_DEVINFO_DATA mDeviceInfoData;
};
static constexpr size_t WSTRING_ELEMENT_SIZE = sizeof(std::wstring::value_type);
void CDCACM::Find(std::vector<FoundDevice>& found) {
DeviceInfoData deviceInfoData;
const std::wstring intrepidUSB(L"USB\\VID_093C");
DeviceInfo deviceInfoSet;
if(!deviceInfoSet) {
EventManager::GetInstance().add(APIEvent::Type::SyscallError, APIEvent::Severity::Error);
return;
}
for(DWORD i = 0; SetupDiEnumDeviceInfo(deviceInfoSet, i, deviceInfoData); ++i) {
DWORD DataT;
DWORD buffersize = 0;
std::wstring wclass;
while(!SetupDiGetDevicePropertyW(deviceInfoSet, deviceInfoData, &DEVPKEY_Device_Class, &DataT, reinterpret_cast<PBYTE>(wclass.data()), static_cast<DWORD>((wclass.size() + 1) * WSTRING_ELEMENT_SIZE), &buffersize, 0)) {
wclass.resize((buffersize - 1) / WSTRING_ELEMENT_SIZE);
}
if(wclass != L"Ports") {
continue;
}
// TODO: is this a bug in Windows? why is this returned size different/wrong? It's like it's not a wstring at all
std::wstring deviceInstanceId;
while(!SetupDiGetDeviceInstanceIdW(deviceInfoSet, deviceInfoData, deviceInstanceId.data(), static_cast<DWORD>(deviceInstanceId.size() + 1), &buffersize)) {
deviceInstanceId.resize(buffersize - 1);
}
if(deviceInstanceId.find(intrepidUSB) != 0) {
continue;
}
std::wstring wserial;
while(!SetupDiGetDevicePropertyW(deviceInfoSet, deviceInfoData, &DEVPKEY_Device_BusReportedDeviceDesc, &DataT, reinterpret_cast<PBYTE>(wserial.data()), static_cast<DWORD>((wserial.size() + 1) * WSTRING_ELEMENT_SIZE), &buffersize, 0)) {
wserial.resize((buffersize - 1) / WSTRING_ELEMENT_SIZE);
}
FoundDevice device;
	if(WideCharToMultiByte(CP_ACP, 0, wserial.c_str(), -1, device.serial, sizeof(device.serial), NULL, NULL) == 0) { // cchWideChar of -1 converts the terminator too, so device.serial is always null-terminated
EventManager::GetInstance().add(APIEvent::Type::SyscallError, APIEvent::Severity::Error);
continue;
}
std::wstring wport;
while(!SetupDiGetCustomDevicePropertyW(deviceInfoSet, deviceInfoData, L"PortName", 0, &DataT, reinterpret_cast<PBYTE>(wport.data()), static_cast<DWORD>((wport.size() + 1) * WSTRING_ELEMENT_SIZE), &buffersize)) {
wport.resize((buffersize - 1) / WSTRING_ELEMENT_SIZE);
}
const std::wstring path(L"\\\\.\\" + wport);
device.makeDriver = [path](device_eventhandler_t err, neodevice_t&) {
return std::make_unique<CDCACM>(err, path);
};
found.emplace_back(std::move(device));
}
}

View File

@@ -1,3 +1,5 @@
#define WIN32_LEAN_AND_MEAN
#define NOMINMAX
#include <windows.h>
#include <winsock2.h>

View File

@@ -1,6 +1,8 @@
#include "icsneo/platform/windows/registry.h"
#include "icsneo/platform/windows/strings.h"
#define WIN32_LEAN_AND_MEAN
#define NOMINMAX
#include <windows.h>
#include <codecvt>
#include <vector>

View File

@@ -1,4 +1,7 @@
#include <string>
#define WIN32_LEAN_AND_MEAN
#define NOMINMAX
#include <Windows.h>
#include <icsneo/platform/windows/strings.h>

View File

@ -1,471 +0,0 @@
#include "icsneo/platform/windows/ftdi.h"
#include "icsneo/platform/windows/strings.h"
#include "icsneo/platform/ftdi.h"
#include "icsneo/platform/registry.h"
#include "icsneo/device/founddevice.h"
#include <windows.h>
#include <iostream>
#include <iomanip>
#include <sstream>
#include <cwctype>
#include <algorithm>
#include <codecvt>
#include <cctype>
#include <limits>
#include <stdio.h>
using namespace icsneo;
static const std::wstring DRIVER_SERVICES_REG_KEY = L"SYSTEM\\CurrentControlSet\\services\\";
static const std::wstring ALL_ENUM_REG_KEY = L"SYSTEM\\CurrentControlSet\\Enum\\";
static constexpr unsigned int RETRY_TIMES = 5;
static constexpr unsigned int RETRY_DELAY = 50;
struct VCP::Detail {
Detail() {
overlappedRead.hEvent = INVALID_HANDLE_VALUE;
overlappedWrite.hEvent = INVALID_HANDLE_VALUE;
overlappedWait.hEvent = INVALID_HANDLE_VALUE;
}
HANDLE handle = INVALID_HANDLE_VALUE;
OVERLAPPED overlappedRead = {};
OVERLAPPED overlappedWrite = {};
OVERLAPPED overlappedWait = {};
};
void VCP::Find(std::vector<FoundDevice>& found, std::vector<std::wstring> driverNames) {
for(auto& driverName : driverNames) {
std::wstringstream regss;
regss << DRIVER_SERVICES_REG_KEY << driverName << L"\\Enum\\";
std::wstring driverEnumRegKey = regss.str();
uint32_t deviceCount = 0;
if(!Registry::Get(driverEnumRegKey, L"Count", deviceCount))
continue;
for(uint32_t i = 0; i < deviceCount; i++) {
FoundDevice device;
device.makeDriver = [](const device_eventhandler_t& reportFn, neodevice_t& device) {
return std::unique_ptr<Driver>(new VCP(reportFn, device));
};
// First we want to look at what devices FTDI is enumerating (inside driverEnumRegKey)
// The entry for a ValueCAN 3 with SN 138635 looks like "FTDIBUS\VID_093C+PID_0601+138635A\0000"
// The entry for a ValueCAN 4 with SN V20227 looks like "USB\VID_093C&PID_1101\V20227"
std::wstringstream ss;
ss << i;
std::wstring entry;
if(!Registry::Get(driverEnumRegKey, ss.str(), entry))
continue;
std::transform(entry.begin(), entry.end(), entry.begin(), std::towupper);
std::wstringstream vss;
			vss << L"VID_" << std::setfill(L'0') << std::setw(4) << std::uppercase << std::hex << INTREPID_USB_VENDOR_ID; // Intrepid Vendor ID
if(entry.find(vss.str()) == std::wstring::npos)
continue;
auto pidpos = entry.find(L"PID_");
if(pidpos == std::wstring::npos)
continue;
// We will later use this and startchar to parse the PID
// Okay, this is a device we want
// Get the serial number
auto startchar = entry.find(L"+", pidpos + 1);
if(startchar == std::wstring::npos)
startchar = entry.find(L"\\", pidpos + 1);
bool conversionError = false;
int sn = 0;
try {
sn = std::stoi(entry.substr(startchar + 1));
}
catch(...) {
conversionError = true;
}
std::wstringstream oss;
if(!sn || conversionError)
oss << entry.substr(startchar + 1, 6); // This is a device with characters in the serial number
else
oss << sn;
device.productId = uint16_t(std::wcstol(entry.c_str() + pidpos + 4, nullptr, 16));
if(!device.productId)
continue;
std::string serial = convertWideString(oss.str());
// The serial number should not have a path slash in it. If it does, that means we don't have the real serial.
if(serial.find_first_of('\\') != std::string::npos) {
// The serial number was not in the first serenum key where we expected it.
// We can try to match the ContainerID with the one in ALL_ENUM\USB and get a serial that way
std::wstringstream uess;
uess << ALL_ENUM_REG_KEY << L"\\USB\\" << vss.str() << L"&PID_" << std::setfill(L'0') << std::setw(4)
<< std::uppercase << std::hex << device.productId << L'\\';
std::wstringstream ciss;
ciss << ALL_ENUM_REG_KEY << entry;
std::wstring containerIDFromEntry, containerIDFromEnum;
if(!Registry::Get(ciss.str(), L"ContainerID", containerIDFromEntry))
continue; // We did not get a container ID. This can happen on Windows XP and before.
if(containerIDFromEntry.empty())
continue; // The container ID was empty?
std::vector<std::wstring> subkeys;
if(!Registry::EnumerateSubkeys(uess.str(), subkeys))
continue; // VID/PID combo was not present at all.
if(subkeys.empty())
continue; // No devices for VID/PID.
std::wstring correctSerial;
for(auto& subkey : subkeys) {
std::wstringstream skss;
skss << uess.str() << L'\\' << subkey;
if(!Registry::Get(skss.str(), L"ContainerID", containerIDFromEnum))
continue;
if(containerIDFromEntry != containerIDFromEnum)
continue;
correctSerial = subkey;
break;
}
if(correctSerial.empty())
continue; // Didn't find the device within the subkeys of the enumeration
sn = 0;
conversionError = false;
try {
sn = std::stoi(correctSerial);
}
catch(...) {
conversionError = true;
}
if(!sn || conversionError) {
// This is a device with characters in the serial number
if(correctSerial.size() != 6)
continue;
serial = convertWideString(correctSerial);
}
else {
std::wstringstream soss;
soss << sn;
serial = convertWideString(soss.str());
}
if(serial.find_first_of('\\') != std::string::npos)
continue;
}
for(char& c : serial)
c = static_cast<char>(toupper(c));
strcpy_s(device.serial, sizeof(device.serial), serial.c_str());
// Serial number is saved, we want the COM port number now
// This will be stored under ALL_ENUM_REG_KEY\entry\Device Parameters\PortName (entry from the FTDI_ENUM)
std::wstringstream dpss;
dpss << ALL_ENUM_REG_KEY << entry << L"\\Device Parameters";
std::wstring port;
Registry::Get(dpss.str(), L"PortName", port); // TODO If error do something else (Plasma maybe?)
std::transform(port.begin(), port.end(), port.begin(), std::towupper);
auto compos = port.find(L"COM");
device.handle = 0;
if(compos != std::wstring::npos) {
try {
device.handle = std::stoi(port.substr(compos + 3));
}
catch(...) {} // In case of this, or any other error, handle has already been initialized to 0
}
bool alreadyFound = false;
FoundDevice* shouldReplace = nullptr;
for(auto& foundDev : found) {
if((foundDev.handle == device.handle || foundDev.handle == 0 || device.handle == 0) && serial == foundDev.serial) {
alreadyFound = true;
if(foundDev.handle == 0)
shouldReplace = &foundDev;
break;
}
}
if(!alreadyFound)
found.push_back(device);
else if(shouldReplace != nullptr)
*shouldReplace = device;
}
}
}
VCP::VCP(const device_eventhandler_t& err, neodevice_t& forDevice) : Driver(err), device(forDevice) {
detail = std::make_shared<Detail>();
}
VCP::~VCP() {
if(isOpen())
close();
}
bool VCP::IsHandleValid(neodevice_handle_t handle) {
if(handle < 1)
return false;
if(handle > 256) // Windows default max COM port is COM256
return false; // TODO Enumerate subkeys of HKLM\HARDWARE\DEVICEMAP\SERIALCOMM as a user might have more serial ports somehow
return true;
}
bool VCP::open(bool fromAsync) {
if(isOpen() || (!fromAsync && opening)) {
report(APIEvent::Type::DeviceCurrentlyOpen, APIEvent::Severity::Error);
return false;
}
if(!IsHandleValid(device.handle)) {
report(APIEvent::Type::DriverFailedToOpen, APIEvent::Severity::Error);
return false;
}
opening = true;
std::wstringstream comss;
comss << L"\\\\.\\COM" << device.handle;
// We're going to attempt to open 5 (RETRY_TIMES) times in a row
for(int i = 0; !isOpen() && i < RETRY_TIMES; i++) {
detail->handle = CreateFileW(comss.str().c_str(), GENERIC_READ | GENERIC_WRITE, 0, nullptr,
OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
if(GetLastError() == ERROR_SUCCESS)
break; // We have the file handle
std::this_thread::sleep_for(std::chrono::milliseconds(RETRY_DELAY));
}
opening = false;
if(!isOpen()) {
report(APIEvent::Type::DriverFailedToOpen, APIEvent::Severity::Error);
return false;
}
// Set the timeouts
COMMTIMEOUTS timeouts;
if(!GetCommTimeouts(detail->handle, &timeouts)) {
close();
report(APIEvent::Type::DriverFailedToOpen, APIEvent::Severity::Error);
return false;
}
// See https://docs.microsoft.com/en-us/windows/desktop/api/winbase/ns-winbase-_commtimeouts#remarks
timeouts.ReadIntervalTimeout = MAXDWORD;
timeouts.ReadTotalTimeoutMultiplier = MAXDWORD;
timeouts.ReadTotalTimeoutConstant = 100;
timeouts.WriteTotalTimeoutConstant = 10000;
timeouts.WriteTotalTimeoutMultiplier = 0;
if(!SetCommTimeouts(detail->handle, &timeouts)) {
close();
report(APIEvent::Type::DriverFailedToOpen, APIEvent::Severity::Error);
return false;
}
// Set the COM state
DCB comstate;
if(!GetCommState(detail->handle, &comstate)) {
close();
report(APIEvent::Type::DriverFailedToOpen, APIEvent::Severity::Error);
return false;
}
comstate.BaudRate = 115200;
comstate.ByteSize = 8;
comstate.Parity = NOPARITY;
	comstate.StopBits = ONESTOPBIT;
comstate.fDtrControl = DTR_CONTROL_ENABLE;
comstate.fRtsControl = RTS_CONTROL_ENABLE;
if(!SetCommState(detail->handle, &comstate)) {
close();
report(APIEvent::Type::DriverFailedToOpen, APIEvent::Severity::Error);
return false;
}
PurgeComm(detail->handle, PURGE_RXCLEAR);
// Set up events so that overlapped IO can work with them
detail->overlappedRead.hEvent = CreateEvent(nullptr, false, false, nullptr);
detail->overlappedWrite.hEvent = CreateEvent(nullptr, false, false, nullptr);
detail->overlappedWait.hEvent = CreateEvent(nullptr, true, false, nullptr);
if (detail->overlappedRead.hEvent == nullptr || detail->overlappedWrite.hEvent == nullptr || detail->overlappedWait.hEvent == nullptr) {
close();
report(APIEvent::Type::DriverFailedToOpen, APIEvent::Severity::Error);
return false;
}
// Set up event so that we will satisfy overlappedWait when a character comes in
if(!SetCommMask(detail->handle, EV_RXCHAR)) {
close();
report(APIEvent::Type::DriverFailedToOpen, APIEvent::Severity::Error);
return false;
}
// TODO Set up some sort of shared memory, save which COM port we have open so we don't try to open it again
// Create threads
readThread = std::thread(&VCP::readTask, this);
writeThread = std::thread(&VCP::writeTask, this);
return true;
}
void VCP::openAsync(fn_boolCallback callback) {
threads.push_back(std::make_shared<std::thread>([&]() {
callback(open(true));
}));
}
bool VCP::close() {
if(!isOpen()) {
report(APIEvent::Type::DeviceCurrentlyClosed, APIEvent::Severity::Error);
return false;
}
setIsClosing(true); // Signal the threads that we are closing
for(auto& t : threads)
t->join(); // Wait for the threads to close
readThread.join();
writeThread.join();
setIsClosing(false);
if(!CloseHandle(detail->handle)) {
report(APIEvent::Type::DriverFailedToClose, APIEvent::Severity::Error);
return false;
}
detail->handle = INVALID_HANDLE_VALUE;
bool ret = true; // If one of the events fails closing, we probably still want to try and close the others
if(detail->overlappedRead.hEvent != INVALID_HANDLE_VALUE) {
if(!CloseHandle(detail->overlappedRead.hEvent))
ret = false;
detail->overlappedRead.hEvent = INVALID_HANDLE_VALUE;
}
if(detail->overlappedWrite.hEvent != INVALID_HANDLE_VALUE) {
if(!CloseHandle(detail->overlappedWrite.hEvent))
ret = false;
detail->overlappedWrite.hEvent = INVALID_HANDLE_VALUE;
}
if(detail->overlappedWait.hEvent != INVALID_HANDLE_VALUE) {
if(!CloseHandle(detail->overlappedWait.hEvent))
ret = false;
detail->overlappedWait.hEvent = INVALID_HANDLE_VALUE;
}
clearBuffers();
if(!ret)
report(APIEvent::Type::DriverFailedToClose, APIEvent::Severity::Error);
// TODO Set up some sort of shared memory, free which COM port we had open so we can try to open it again
return ret;
}
bool VCP::isOpen() {
return detail->handle != INVALID_HANDLE_VALUE;
}
void VCP::readTask() {
constexpr size_t READ_BUFFER_SIZE = 10240;
uint8_t readbuf[READ_BUFFER_SIZE];
IOTaskState state = LAUNCH;
DWORD bytesRead = 0;
EventManager::GetInstance().downgradeErrorsOnCurrentThread();
while(!isClosing() && !isDisconnected()) {
switch(state) {
case LAUNCH: {
COMSTAT comStatus;
unsigned long errorCodes;
ClearCommError(detail->handle, &errorCodes, &comStatus);
bytesRead = 0;
if(ReadFile(detail->handle, readbuf, READ_BUFFER_SIZE, nullptr, &detail->overlappedRead)) {
if(GetOverlappedResult(detail->handle, &detail->overlappedRead, &bytesRead, FALSE)) {
if(bytesRead)
pushRx(readbuf, bytesRead);
}
continue;
}
auto lastError = GetLastError();
if(lastError == ERROR_IO_PENDING)
state = WAIT;
else if(lastError != ERROR_SUCCESS) {
if(lastError == ERROR_ACCESS_DENIED) {
if(!isDisconnected()) {
setIsDisconnected(true);
report(APIEvent::Type::DeviceDisconnected, APIEvent::Severity::Error);
}
} else
report(APIEvent::Type::FailedToRead, APIEvent::Severity::Error);
}
}
break;
case WAIT: {
auto ret = WaitForSingleObject(detail->overlappedRead.hEvent, 100);
if(ret == WAIT_OBJECT_0) {
if(GetOverlappedResult(detail->handle, &detail->overlappedRead, &bytesRead, FALSE)) {
pushRx(readbuf, bytesRead);
state = LAUNCH;
} else
report(APIEvent::Type::FailedToRead, APIEvent::Severity::Error);
}
if(ret == WAIT_ABANDONED || ret == WAIT_FAILED) {
state = LAUNCH;
report(APIEvent::Type::FailedToRead, APIEvent::Severity::Error);
}
}
}
}
}
void VCP::writeTask() {
IOTaskState state = LAUNCH;
VCP::WriteOperation writeOp;
DWORD bytesWritten = 0;
EventManager::GetInstance().downgradeErrorsOnCurrentThread();
while(!isClosing() && !isDisconnected()) {
switch(state) {
case LAUNCH: {
if(!writeQueue.wait_dequeue_timed(writeOp, std::chrono::milliseconds(100)))
continue;
bytesWritten = 0;
if(WriteFile(detail->handle, writeOp.bytes.data(), (DWORD)writeOp.bytes.size(), nullptr, &detail->overlappedWrite))
continue;
auto winerr = GetLastError();
if(winerr == ERROR_IO_PENDING) {
state = WAIT;
}
else if(winerr == ERROR_ACCESS_DENIED) {
if(!isDisconnected()) {
setIsDisconnected(true);
report(APIEvent::Type::DeviceDisconnected, APIEvent::Severity::Error);
}
} else
report(APIEvent::Type::FailedToWrite, APIEvent::Severity::Error);
}
break;
case WAIT: {
auto ret = WaitForSingleObject(detail->overlappedWrite.hEvent, 50);
if(ret == WAIT_OBJECT_0) {
if(!GetOverlappedResult(detail->handle, &detail->overlappedWrite, &bytesWritten, FALSE))
report(APIEvent::Type::FailedToWrite, APIEvent::Severity::Error);
state = LAUNCH;
}
if(ret == WAIT_ABANDONED) {
report(APIEvent::Type::FailedToWrite, APIEvent::Severity::Error);
state = LAUNCH;
}
}
}
}
}

View File

@@ -0,0 +1,62 @@
#include "icsneo/api/periodic.h"
#include "gtest/gtest.h"
#include <chrono>
#include <condition_variable>
#include <mutex>
using namespace icsneo;
// don't wait for a cycle; make sure stopping (destruction) returns promptly
TEST(PeriodicTest, StartStop)
{
const auto start = std::chrono::steady_clock::now();
Periodic p([] { return true; }, std::chrono::milliseconds(1000));
const auto delta = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - start);
EXPECT_LT(delta.count(), 100); // hopefully enough
}
// time single cycle
TEST(PeriodicTest, OneCycle)
{
std::condition_variable cv;
std::mutex mutex;
uint8_t cycles = 0;
const auto start = std::chrono::steady_clock::now();
{
Periodic p([&] {
{
std::scoped_lock lk(mutex);
++cycles;
}
cv.notify_one();
return true;
}, std::chrono::seconds(1));
std::unique_lock<std::mutex> lk(mutex);
cv.wait_for(lk, std::chrono::seconds(2), [&]{ return cycles > 0; });
}
const auto delta = std::chrono::duration_cast<std::chrono::seconds>(std::chrono::steady_clock::now() - start);
EXPECT_EQ(delta.count(), 1);
EXPECT_EQ(cycles, 1);
}
TEST(PeriodicTest, TenCycles)
{
std::condition_variable cv;
std::mutex mutex;
uint8_t cycles = 0;
const auto start = std::chrono::steady_clock::now();
{
Periodic p([&] {
{
std::scoped_lock lk(mutex);
++cycles;
}
cv.notify_one();
return true;
}, std::chrono::milliseconds(100));
std::unique_lock<std::mutex> lk(mutex);
cv.wait_for(lk, std::chrono::seconds(2), [&]{ return cycles >= 10; });
}
const auto delta = std::chrono::duration_cast<std::chrono::seconds>(std::chrono::steady_clock::now() - start);
EXPECT_EQ(delta.count(), 1);
EXPECT_EQ(cycles, 10);
}

View File

@@ -1,4 +0,0 @@
# Run manually to reformat a file:
# clang-format -i --style=file <file>
Language: Cpp
BasedOnStyle: Google

View File

@@ -1,84 +0,0 @@
# Ignore CI build directory
build/
xcuserdata
cmake-build-debug/
.idea/
bazel-bin
bazel-genfiles
bazel-googletest
bazel-out
bazel-testlogs
# python
*.pyc
# Visual Studio files
.vs
*.sdf
*.opensdf
*.VC.opendb
*.suo
*.user
_ReSharper.Caches/
Win32-Debug/
Win32-Release/
x64-Debug/
x64-Release/
# Ignore autoconf / automake files
Makefile.in
aclocal.m4
configure
build-aux/
autom4te.cache/
googletest/m4/libtool.m4
googletest/m4/ltoptions.m4
googletest/m4/ltsugar.m4
googletest/m4/ltversion.m4
googletest/m4/lt~obsolete.m4
googlemock/m4
# Ignore generated directories.
googlemock/fused-src/
googletest/fused-src/
# macOS files
.DS_Store
googletest/.DS_Store
googletest/xcode/.DS_Store
# Ignore cmake generated directories and files.
CMakeFiles
CTestTestfile.cmake
Makefile
cmake_install.cmake
googlemock/CMakeFiles
googlemock/CTestTestfile.cmake
googlemock/Makefile
googlemock/cmake_install.cmake
googlemock/gtest
/bin
/googlemock/gmock.dir
/googlemock/gmock_main.dir
/googlemock/RUN_TESTS.vcxproj.filters
/googlemock/RUN_TESTS.vcxproj
/googlemock/INSTALL.vcxproj.filters
/googlemock/INSTALL.vcxproj
/googlemock/gmock_main.vcxproj.filters
/googlemock/gmock_main.vcxproj
/googlemock/gmock.vcxproj.filters
/googlemock/gmock.vcxproj
/googlemock/gmock.sln
/googlemock/ALL_BUILD.vcxproj.filters
/googlemock/ALL_BUILD.vcxproj
/lib
/Win32
/ZERO_CHECK.vcxproj.filters
/ZERO_CHECK.vcxproj
/RUN_TESTS.vcxproj.filters
/RUN_TESTS.vcxproj
/INSTALL.vcxproj.filters
/INSTALL.vcxproj
/googletest-distribution.sln
/CMakeCache.txt
/ALL_BUILD.vcxproj.filters
/ALL_BUILD.vcxproj

View File

@@ -1,77 +0,0 @@
# Build matrix / environment variable are explained on:
# https://docs.travis-ci.com/user/customizing-the-build/
# This file can be validated on:
# http://lint.travis-ci.org/
sudo: false
language: cpp
# Define the matrix explicitly, manually expanding the combinations of (os, compiler, env).
# It is more tedious, but grants us far more flexibility.
matrix:
include:
- os: linux
sudo: required
before_install: chmod -R +x ./ci/*platformio.sh
install: ./ci/install-platformio.sh
script: ./ci/build-platformio.sh
- os: linux
dist: xenial
compiler: gcc
      sudo: true
install: ./ci/install-linux.sh && ./ci/log-config.sh
script: ./ci/build-linux-bazel.sh
- os: linux
dist: xenial
compiler: clang
      sudo: true
install: ./ci/install-linux.sh && ./ci/log-config.sh
script: ./ci/build-linux-bazel.sh
- os: linux
compiler: gcc
env: BUILD_TYPE=Debug VERBOSE=1 CXX_FLAGS=-std=c++11
- os: linux
compiler: clang
env: BUILD_TYPE=Release VERBOSE=1 CXX_FLAGS=-std=c++11 -Wgnu-zero-variadic-macro-arguments
- os: linux
compiler: clang
env: BUILD_TYPE=Release VERBOSE=1 CXX_FLAGS=-std=c++11 NO_EXCEPTION=ON NO_RTTI=ON COMPILER_IS_GNUCXX=ON
- os: osx
compiler: gcc
env: BUILD_TYPE=Release VERBOSE=1 CXX_FLAGS=-std=c++11 HOMEBREW_LOGS=~/homebrew-logs HOMEBREW_TEMP=~/homebrew-temp
- os: osx
compiler: clang
env: BUILD_TYPE=Release VERBOSE=1 CXX_FLAGS=-std=c++11 HOMEBREW_LOGS=~/homebrew-logs HOMEBREW_TEMP=~/homebrew-temp
# These are the install and build (script) phases for the most common entries in the matrix. They could be included
# in each entry in the matrix, but that is just repetitive.
install:
- ./ci/install-${TRAVIS_OS_NAME}.sh
- . ./ci/env-${TRAVIS_OS_NAME}.sh
- ./ci/log-config.sh
script: ./ci/travis.sh
# For sudo=false builds this section installs the necessary dependencies.
addons:
apt:
# List of whitelisted in travis packages for ubuntu-precise can be found here:
# https://github.com/travis-ci/apt-package-whitelist/blob/master/ubuntu-precise
# List of whitelisted in travis apt-sources:
# https://github.com/travis-ci/apt-source-whitelist/blob/master/ubuntu.json
sources:
- ubuntu-toolchain-r-test
- llvm-toolchain-precise-3.9
packages:
- g++-4.9
- clang-3.9
update: true
homebrew:
packages:
- ccache
- gcc@4.9
- llvm@3.9
update: true
notifications:
email: false

View File

@@ -1,190 +0,0 @@
# Copyright 2017 Google Inc.
# All Rights Reserved.
#
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Bazel Build for Google C++ Testing Framework(Google Test)
load("@rules_cc//cc:defs.bzl", "cc_library", "cc_test")
package(default_visibility = ["//visibility:public"])
licenses(["notice"])
exports_files(["LICENSE"])
config_setting(
name = "windows",
constraint_values = ["@platforms//os:windows"],
)
config_setting(
name = "msvc_compiler",
flag_values = {
"@bazel_tools//tools/cpp:compiler": "msvc-cl",
},
visibility = [":__subpackages__"],
)
config_setting(
name = "has_absl",
values = {"define": "absl=1"},
)
# Library that defines the FRIEND_TEST macro.
cc_library(
name = "gtest_prod",
hdrs = ["googletest/include/gtest/gtest_prod.h"],
includes = ["googletest/include"],
)
# Google Test including Google Mock
cc_library(
name = "gtest",
srcs = glob(
include = [
"googletest/src/*.cc",
"googletest/src/*.h",
"googletest/include/gtest/**/*.h",
"googlemock/src/*.cc",
"googlemock/include/gmock/**/*.h",
],
exclude = [
"googletest/src/gtest-all.cc",
"googletest/src/gtest_main.cc",
"googlemock/src/gmock-all.cc",
"googlemock/src/gmock_main.cc",
],
),
hdrs = glob([
"googletest/include/gtest/*.h",
"googlemock/include/gmock/*.h",
]),
copts = select({
":windows": [],
"//conditions:default": ["-pthread"],
}),
defines = select({
":has_absl": ["GTEST_HAS_ABSL=1"],
"//conditions:default": [],
}),
features = select({
":windows": ["windows_export_all_symbols"],
"//conditions:default": [],
}),
includes = [
"googlemock",
"googlemock/include",
"googletest",
"googletest/include",
],
linkopts = select({
":windows": [],
"//conditions:default": ["-pthread"],
}),
deps = select({
":has_absl": [
"@com_google_absl//absl/debugging:failure_signal_handler",
"@com_google_absl//absl/debugging:stacktrace",
"@com_google_absl//absl/debugging:symbolize",
"@com_google_absl//absl/strings",
"@com_google_absl//absl/types:any",
"@com_google_absl//absl/types:optional",
"@com_google_absl//absl/types:variant",
],
"//conditions:default": [],
}),
)
cc_library(
name = "gtest_main",
srcs = ["googlemock/src/gmock_main.cc"],
features = select({
":windows": ["windows_export_all_symbols"],
"//conditions:default": [],
}),
deps = [":gtest"],
)
# The following rules build samples of how to use gTest.
cc_library(
name = "gtest_sample_lib",
srcs = [
"googletest/samples/sample1.cc",
"googletest/samples/sample2.cc",
"googletest/samples/sample4.cc",
],
hdrs = [
"googletest/samples/prime_tables.h",
"googletest/samples/sample1.h",
"googletest/samples/sample2.h",
"googletest/samples/sample3-inl.h",
"googletest/samples/sample4.h",
],
features = select({
":windows": ["windows_export_all_symbols"],
"//conditions:default": [],
}),
)
cc_test(
name = "gtest_samples",
size = "small",
# All Samples except:
# sample9 (main)
# sample10 (main and takes a command line option and needs to be separate)
srcs = [
"googletest/samples/sample1_unittest.cc",
"googletest/samples/sample2_unittest.cc",
"googletest/samples/sample3_unittest.cc",
"googletest/samples/sample4_unittest.cc",
"googletest/samples/sample5_unittest.cc",
"googletest/samples/sample6_unittest.cc",
"googletest/samples/sample7_unittest.cc",
"googletest/samples/sample8_unittest.cc",
],
linkstatic = 0,
deps = [
"gtest_sample_lib",
":gtest_main",
],
)
cc_test(
name = "sample9_unittest",
size = "small",
srcs = ["googletest/samples/sample9_unittest.cc"],
deps = [":gtest"],
)
cc_test(
name = "sample10_unittest",
size = "small",
srcs = ["googletest/samples/sample10_unittest.cc"],
deps = [":gtest"],
)

View File

@@ -1,32 +0,0 @@
# Note: CMake support is community-based. The maintainers do not use CMake
# internally.
cmake_minimum_required(VERSION 2.8.12)
if (POLICY CMP0048)
cmake_policy(SET CMP0048 NEW)
endif (POLICY CMP0048)
project(googletest-distribution)
set(GOOGLETEST_VERSION 1.11.0)
if (CMAKE_VERSION VERSION_GREATER "3.0.2")
if(NOT CYGWIN AND NOT MSYS AND NOT ${CMAKE_SYSTEM_NAME} STREQUAL QNX)
set(CMAKE_CXX_EXTENSIONS OFF)
endif()
endif()
enable_testing()
include(CMakeDependentOption)
include(GNUInstallDirs)
#Note that googlemock target already builds googletest
option(BUILD_GMOCK "Builds the googlemock subproject" ON)
option(INSTALL_GTEST "Enable installation of googletest. (Projects embedding googletest may want to turn this OFF.)" ON)
if(BUILD_GMOCK)
add_subdirectory( googlemock )
else()
add_subdirectory( googletest )
endif()

View File

@@ -1,130 +0,0 @@
# How to become a contributor and submit your own code
## Contributor License Agreements
We'd love to accept your patches! Before we can take them, we have to jump a
couple of legal hurdles.
Please fill out either the individual or corporate Contributor License Agreement
(CLA).
* If you are an individual writing original source code and you're sure you
own the intellectual property, then you'll need to sign an
[individual CLA](https://developers.google.com/open-source/cla/individual).
* If you work for a company that wants to allow you to contribute your work,
then you'll need to sign a
[corporate CLA](https://developers.google.com/open-source/cla/corporate).
Follow either of the two links above to access the appropriate CLA and
instructions for how to sign and return it. Once we receive it, we'll be able to
accept your pull requests.
## Are you a Googler?
If you are a Googler, please make an attempt to submit an internal change rather
than a GitHub Pull Request. If you are not able to submit an internal change a
PR is acceptable as an alternative.
## Contributing A Patch
1. Submit an issue describing your proposed change to the
[issue tracker](https://github.com/google/googletest/issues).
2. Please don't mix more than one logical change per submittal, because it
makes the history hard to follow. If you want to make a change that doesn't
have a corresponding issue in the issue tracker, please create one.
3. Also, coordinate with team members who are listed on the issue in question.
This ensures that work isn't being duplicated, and communicating your plan
early generally leads to better patches.
4. If your proposed change is accepted, and you haven't already done so, sign a
Contributor License Agreement (see details above).
5. Fork the desired repo, develop and test your code changes.
6. Ensure that your code adheres to the existing style in the sample to which
you are contributing.
7. Ensure that your code has an appropriate set of unit tests which all pass.
8. Submit a pull request.
## The Google Test and Google Mock Communities
The Google Test community exists primarily through the
[discussion group](http://groups.google.com/group/googletestframework) and the
GitHub repository. Likewise, the Google Mock community exists primarily through
their own [discussion group](http://groups.google.com/group/googlemock). You are
definitely encouraged to contribute to the discussion and you can also help us
to keep the effectiveness of the group high by following and promoting the
guidelines listed here.
### Please Be Friendly
Showing courtesy and respect to others is a vital part of the Google culture,
and we strongly encourage everyone participating in Google Test development to
join us in accepting nothing less. Of course, being courteous is not the same as
failing to constructively disagree with each other, but it does mean that we
should be respectful of each other when enumerating the 42 technical reasons
that a particular proposal may not be the best choice. There's never a reason to
be antagonistic or dismissive toward anyone who is sincerely trying to
contribute to a discussion.
Sure, C++ testing is serious business and all that, but it's also a lot of fun.
Let's keep it that way. Let's strive to be one of the friendliest communities in
all of open source.
As always, discuss Google Test in the official GoogleTest discussion group. You
don't have to actually submit code in order to sign up. Your participation
itself is a valuable contribution.
## Style
To keep the source consistent, readable, diffable and easy to merge, we use a
fairly rigid coding style, as defined by the
[google-styleguide](https://github.com/google/styleguide) project. All patches
will be expected to conform to the style outlined
[here](https://google.github.io/styleguide/cppguide.html). Use
[.clang-format](https://github.com/google/googletest/blob/master/.clang-format)
to check your formatting.
## Requirements for Contributors
If you plan to contribute a patch, you need to build Google Test, Google Mock,
and their own tests from a git checkout, which has further requirements:
* [Python](https://www.python.org/) v2.3 or newer (for running some of the
tests and re-generating certain source files from templates)
* [CMake](https://cmake.org/) v2.8.12 or newer
## Developing Google Test and Google Mock
This section discusses how to make your own changes to the Google Test project.
### Testing Google Test and Google Mock Themselves
To make sure your changes work as intended and don't break existing
functionality, you'll want to compile and run Google Test and GoogleMock's own
tests. For that you can use CMake:
    mkdir mybuild
    cd mybuild
    cmake -Dgtest_build_tests=ON -Dgmock_build_tests=ON ${GTEST_REPO_DIR}
To choose between building only Google Test or Google Mock, you may modify your
cmake command to one of the following:

    cmake -Dgtest_build_tests=ON ${GTEST_DIR}  # sets up Google Test tests
    cmake -Dgmock_build_tests=ON ${GMOCK_DIR}  # sets up Google Mock tests
Make sure you have Python installed, as some of Google Test's tests are written
in Python. If the cmake command complains about not being able to find Python
(`Could NOT find PythonInterp (missing: PYTHON_EXECUTABLE)`), try telling it
explicitly where your Python executable can be found:
    cmake -DPYTHON_EXECUTABLE=path/to/python ...
Next, you can build Google Test and / or Google Mock and all desired tests. On
\*nix, this is usually done by
    make
To run the tests, do
    make test
All tests should pass.

@@ -1,63 +0,0 @@
# This file contains a list of people who've made non-trivial
# contribution to the Google C++ Testing Framework project. People
# who commit code to the project are encouraged to add their names
# here. Please keep the list sorted by first names.
Ajay Joshi <jaj@google.com>
Balázs Dán <balazs.dan@gmail.com>
Benoit Sigoure <tsuna@google.com>
Bharat Mediratta <bharat@menalto.com>
Bogdan Piloca <boo@google.com>
Chandler Carruth <chandlerc@google.com>
Chris Prince <cprince@google.com>
Chris Taylor <taylorc@google.com>
Dan Egnor <egnor@google.com>
Dave MacLachlan <dmaclach@gmail.com>
David Anderson <danderson@google.com>
Dean Sturtevant
Eric Roman <eroman@chromium.org>
Gene Volovich <gv@cite.com>
Hady Zalek <hady.zalek@gmail.com>
Hal Burch <gmock@hburch.com>
Jeffrey Yasskin <jyasskin@google.com>
Jim Keller <jimkeller@google.com>
Joe Walnes <joe@truemesh.com>
Jon Wray <jwray@google.com>
Jói Sigurðsson <joi@google.com>
Keir Mierle <mierle@gmail.com>
Keith Ray <keith.ray@gmail.com>
Kenton Varda <kenton@google.com>
Kostya Serebryany <kcc@google.com>
Krystian Kuzniarek <krystian.kuzniarek@gmail.com>
Lev Makhlis
Manuel Klimek <klimek@google.com>
Mario Tanev <radix@google.com>
Mark Paskin
Markus Heule <markus.heule@gmail.com>
Matthew Simmons <simmonmt@acm.org>
Mika Raento <mikie@iki.fi>
Mike Bland <mbland@google.com>
Miklós Fazekas <mfazekas@szemafor.com>
Neal Norwitz <nnorwitz@gmail.com>
Nermin Ozkiranartli <nermin@google.com>
Owen Carlsen <ocarlsen@google.com>
Paneendra Ba <paneendra@google.com>
Pasi Valminen <pasi.valminen@gmail.com>
Patrick Hanna <phanna@google.com>
Patrick Riley <pfr@google.com>
Paul Menage <menage@google.com>
Peter Kaminski <piotrk@google.com>
Piotr Kaminski <piotrk@google.com>
Preston Jackson <preston.a.jackson@gmail.com>
Rainer Klaffenboeck <rainer.klaffenboeck@dynatrace.com>
Russ Cox <rsc@google.com>
Russ Rufer <russ@pentad.com>
Sean Mcafee <eefacm@gmail.com>
Sigurður Ásgeirsson <siggi@google.com>
Sverre Sundsdal <sundsdal@gmail.com>
Takeshi Yoshino <tyoshino@google.com>
Tracy Bialik <tracy@pentad.com>
Vadim Berman <vadimb@google.com>
Vlad Losev <vladl@google.com>
Wolfgang Klier <wklier@google.com>
Zhanyong Wan <wan@google.com>

@@ -1,28 +0,0 @@
Copyright 2008, Google Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@@ -1,140 +0,0 @@
# GoogleTest
### Announcements
#### Live at Head
GoogleTest now follows the
[Abseil Live at Head philosophy](https://abseil.io/about/philosophy#upgrade-support).
We recommend using the latest commit in the `master` branch in your projects.
#### Documentation Updates
Our documentation is now live on GitHub Pages at
https://google.github.io/googletest/. We recommend browsing the documentation on
GitHub Pages rather than directly in the repository.
#### Release 1.10.x
[Release 1.10.x](https://github.com/google/googletest/releases/tag/release-1.10.0)
is now available.
#### Coming Soon
* We are planning to take a dependency on
[Abseil](https://github.com/abseil/abseil-cpp).
* More documentation improvements are planned.
## Welcome to **GoogleTest**, Google's C++ test framework!
This repository is a merger of the formerly separate GoogleTest and GoogleMock
projects. These were so closely related that it makes sense to maintain and
release them together.
### Getting Started
See the [GoogleTest User's Guide](https://google.github.io/googletest/) for
documentation. We recommend starting with the
[GoogleTest Primer](https://google.github.io/googletest/primer.html).
More information about building GoogleTest can be found at
[googletest/README.md](googletest/README.md).
## Features
* An [xUnit](https://en.wikipedia.org/wiki/XUnit) test framework.
* Test discovery.
* A rich set of assertions.
* User-defined assertions.
* Death tests.
* Fatal and non-fatal failures.
* Value-parameterized tests.
* Type-parameterized tests.
* Various options for running the tests.
* XML test report generation.
## Supported Platforms
GoogleTest requires a codebase and compiler compliant with the C++11 standard or
newer.
The GoogleTest code is officially supported on the following platforms.
Operating systems or tools not listed below are community-supported. For
community-supported platforms, patches that do not complicate the code may be
considered.
If you notice any problems on your platform, please file an issue on the
[GoogleTest GitHub Issue Tracker](https://github.com/google/googletest/issues).
Pull requests containing fixes are welcome!
### Operating Systems
* Linux
* macOS
* Windows
### Compilers
* gcc 5.0+
* clang 5.0+
* MSVC 2015+
**macOS users:** Xcode 9.3+ provides clang 5.0+.
### Build Systems
* [Bazel](https://bazel.build/)
* [CMake](https://cmake.org/)
**Note:** Bazel is the build system used by the team internally and in tests.
CMake is supported on a best-effort basis and by the community.
## Who Is Using GoogleTest?
In addition to many internal projects at Google, GoogleTest is also used by the
following notable projects:
* The [Chromium projects](http://www.chromium.org/) (behind the Chrome browser
and Chrome OS).
* The [LLVM](http://llvm.org/) compiler.
* [Protocol Buffers](https://github.com/google/protobuf), Google's data
interchange format.
* The [OpenCV](http://opencv.org/) computer vision library.
## Related Open Source Projects
[GTest Runner](https://github.com/nholthaus/gtest-runner) is a Qt5-based
automated test-runner and Graphical User Interface with powerful features for
Windows and Linux platforms.
[GoogleTest UI](https://github.com/ospector/gtest-gbar) is a test runner that
runs your test binary, allows you to track its progress via a progress bar, and
displays a list of test failures. Clicking on one shows failure text. Google
Test UI is written in C#.
[GTest TAP Listener](https://github.com/kinow/gtest-tap-listener) is an event
listener for GoogleTest that implements the
[TAP protocol](https://en.wikipedia.org/wiki/Test_Anything_Protocol) for test
result output. If your test runner understands TAP, you may find it useful.
[gtest-parallel](https://github.com/google/gtest-parallel) is a test runner that
runs tests from your binary in parallel to provide significant speed-up.
[GoogleTest Adapter](https://marketplace.visualstudio.com/items?itemName=DavidSchuldenfrei.gtest-adapter)
is a VS Code extension that lets you view GoogleTest in a tree view and
run/debug your tests.
[C++ TestMate](https://github.com/matepek/vscode-catch2-test-adapter) is a VS
Code extension that lets you view GoogleTest in a tree view and run/debug your
tests.
[Cornichon](https://pypi.org/project/cornichon/) is a small Gherkin DSL parser
that generates stub code for GoogleTest.
## Contributing Changes
Please read
[`CONTRIBUTING.md`](https://github.com/google/googletest/blob/master/CONTRIBUTING.md)
for details on how to contribute to this project.
Happy testing!

@@ -1,24 +0,0 @@
workspace(name = "com_google_googletest")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "com_google_absl",
urls = ["https://github.com/abseil/abseil-cpp/archive/7971fb358ae376e016d2d4fc9327aad95659b25e.zip"], # 2021-05-20T02:59:16Z
strip_prefix = "abseil-cpp-7971fb358ae376e016d2d4fc9327aad95659b25e",
sha256 = "aeba534f7307e36fe084b452299e49b97420667a8d28102cf9a0daeed340b859",
)
http_archive(
name = "rules_cc",
urls = ["https://github.com/bazelbuild/rules_cc/archive/68cb652a71e7e7e2858c50593e5a9e3b94e5b9a9.zip"], # 2021-05-14T14:51:14Z
strip_prefix = "rules_cc-68cb652a71e7e7e2858c50593e5a9e3b94e5b9a9",
sha256 = "1e19e9a3bc3d4ee91d7fcad00653485ee6c798efbbf9588d40b34cbfbded143d",
)
http_archive(
name = "rules_python",
urls = ["https://github.com/bazelbuild/rules_python/archive/ed6cc8f2c3692a6a7f013ff8bc185ba77eb9b4d2.zip"], # 2021-05-17T00:24:16Z
strip_prefix = "rules_python-ed6cc8f2c3692a6a7f013ff8bc185ba77eb9b4d2",
sha256 = "98b3c592faea9636ac8444bfd9de7f3fb4c60590932d6e6ac5946e3f8dbd5ff6",
)

@@ -1,126 +0,0 @@
#!/bin/bash
#
# Copyright 2020, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
set -euox pipefail
readonly LINUX_LATEST_CONTAINER="gcr.io/google.com/absl-177019/linux_hybrid-latest:20210525"
readonly LINUX_GCC_FLOOR_CONTAINER="gcr.io/google.com/absl-177019/linux_gcc-floor:20201015"
if [[ -z ${GTEST_ROOT:-} ]]; then
GTEST_ROOT="$(realpath $(dirname ${0})/..)"
fi
if [[ -z ${STD:-} ]]; then
STD="c++11 c++14 c++17 c++20"
fi
# Test the CMake build
for cc in /usr/local/bin/gcc /opt/llvm/clang/bin/clang; do
for cmake_off_on in OFF ON; do
time docker run \
--volume="${GTEST_ROOT}:/src:ro" \
--tmpfs="/build:exec" \
--workdir="/build" \
--rm \
--env="CC=${cc}" \
--env="CXX_FLAGS=\"-Werror -Wdeprecated\"" \
${LINUX_LATEST_CONTAINER} \
/bin/bash -c "
cmake /src \
-DCMAKE_CXX_STANDARD=11 \
-Dgtest_build_samples=ON \
-Dgtest_build_tests=ON \
-Dgmock_build_tests=ON \
-Dcxx_no_exception=${cmake_off_on} \
-Dcxx_no_rtti=${cmake_off_on} && \
make -j$(nproc) && \
ctest -j$(nproc) --output-on-failure"
done
done
# Do one test with an older version of GCC
time docker run \
--volume="${GTEST_ROOT}:/src:ro" \
--workdir="/src" \
--rm \
--env="CC=/usr/local/bin/gcc" \
${LINUX_GCC_FLOOR_CONTAINER} \
/usr/local/bin/bazel test ... \
--copt="-Wall" \
--copt="-Werror" \
--copt="-Wno-error=pragmas" \
--keep_going \
--show_timestamps \
--test_output=errors
# Test GCC
for std in ${STD}; do
for absl in 0 1; do
time docker run \
--volume="${GTEST_ROOT}:/src:ro" \
--workdir="/src" \
--rm \
--env="CC=/usr/local/bin/gcc" \
--env="BAZEL_CXXOPTS=-std=${std}" \
${LINUX_LATEST_CONTAINER} \
/usr/local/bin/bazel test ... \
--copt="-Wall" \
--copt="-Werror" \
--define="absl=${absl}" \
--distdir="/bazel-distdir" \
--keep_going \
--show_timestamps \
--test_output=errors
done
done
# Test Clang
for std in ${STD}; do
for absl in 0 1; do
time docker run \
--volume="${GTEST_ROOT}:/src:ro" \
--workdir="/src" \
--rm \
--env="CC=/opt/llvm/clang/bin/clang" \
--env="BAZEL_CXXOPTS=-std=${std}" \
${LINUX_LATEST_CONTAINER} \
/usr/local/bin/bazel test ... \
--copt="--gcc-toolchain=/usr/local" \
--copt="-Wall" \
--copt="-Werror" \
--define="absl=${absl}" \
--distdir="/bazel-distdir" \
--keep_going \
--linkopt="--gcc-toolchain=/usr/local" \
--show_timestamps \
--test_output=errors
done
done

@@ -1,73 +0,0 @@
#!/bin/bash
#
# Copyright 2020, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
set -euox pipefail
if [[ -z ${GTEST_ROOT:-} ]]; then
GTEST_ROOT="$(realpath $(dirname ${0})/..)"
fi
# Test the CMake build
for cmake_off_on in OFF ON; do
BUILD_DIR=$(mktemp -d build_dir.XXXXXXXX)
cd ${BUILD_DIR}
time cmake ${GTEST_ROOT} \
-DCMAKE_CXX_STANDARD=11 \
-Dgtest_build_samples=ON \
-Dgtest_build_tests=ON \
-Dgmock_build_tests=ON \
-Dcxx_no_exception=${cmake_off_on} \
-Dcxx_no_rtti=${cmake_off_on}
time make
time ctest -j$(nproc) --output-on-failure
done
# Test the Bazel build
# If we are running on Kokoro, check for a versioned Bazel binary.
KOKORO_GFILE_BAZEL_BIN="bazel-3.7.0-darwin-x86_64"
if [[ ${KOKORO_GFILE_DIR:-} ]] && [[ -f ${KOKORO_GFILE_DIR}/${KOKORO_GFILE_BAZEL_BIN} ]]; then
BAZEL_BIN="${KOKORO_GFILE_DIR}/${KOKORO_GFILE_BAZEL_BIN}"
chmod +x ${BAZEL_BIN}
else
BAZEL_BIN="bazel"
fi
cd ${GTEST_ROOT}
for absl in 0 1; do
${BAZEL_BIN} test ... \
--copt="-Wall" \
--copt="-Werror" \
--define="absl=${absl}" \
--keep_going \
--show_timestamps \
--test_output=errors
done

@@ -1 +0,0 @@
title: GoogleTest

@@ -1,43 +0,0 @@
nav:
- section: "Get Started"
items:
- title: "Supported Platforms"
url: "/platforms.html"
- title: "Quickstart: Bazel"
url: "/quickstart-bazel.html"
- title: "Quickstart: CMake"
url: "/quickstart-cmake.html"
- section: "Guides"
items:
- title: "GoogleTest Primer"
url: "/primer.html"
- title: "Advanced Topics"
url: "/advanced.html"
- title: "Mocking for Dummies"
url: "/gmock_for_dummies.html"
- title: "Mocking Cookbook"
url: "/gmock_cook_book.html"
- title: "Mocking Cheat Sheet"
url: "/gmock_cheat_sheet.html"
- section: "References"
items:
- title: "Testing Reference"
url: "/reference/testing.html"
- title: "Mocking Reference"
url: "/reference/mocking.html"
- title: "Assertions"
url: "/reference/assertions.html"
- title: "Matchers"
url: "/reference/matchers.html"
- title: "Actions"
url: "/reference/actions.html"
- title: "Testing FAQ"
url: "/faq.html"
- title: "Mocking FAQ"
url: "/gmock_faq.html"
- title: "Code Samples"
url: "/samples.html"
- title: "Using pkg-config"
url: "/pkgconfig.html"
- title: "Community Documentation"
url: "/community_created_documentation.html"

@@ -1,58 +0,0 @@
<!DOCTYPE html>
<html lang="{{ site.lang | default: "en-US" }}">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
{% seo %}
<link rel="stylesheet" href="{{ "/assets/css/style.css?v=" | append: site.github.build_revision | relative_url }}">
<script>
window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)};ga.l=+new Date;
ga('create', 'UA-197576187-1', { 'storage': 'none' });
ga('set', 'referrer', document.referrer.split('?')[0]);
ga('set', 'location', window.location.href.split('?')[0]);
ga('set', 'anonymizeIp', true);
ga('send', 'pageview');
</script>
<script async src='https://www.google-analytics.com/analytics.js'></script>
</head>
<body>
<div class="sidebar">
<div class="header">
<h1><a href="{{ "/" | relative_url }}">{{ site.title | default: "Documentation" }}</a></h1>
</div>
<input type="checkbox" id="nav-toggle" class="nav-toggle">
<label for="nav-toggle" class="expander">
<span class="arrow"></span>
</label>
<nav>
{% for item in site.data.navigation.nav %}
<h2>{{ item.section }}</h2>
<ul>
{% for subitem in item.items %}
<a href="{{subitem.url | relative_url }}">
<li class="{% if subitem.url == page.url %}active{% endif %}">
{{ subitem.title }}
</li>
</a>
{% endfor %}
</ul>
{% endfor %}
</nav>
</div>
<div class="main markdown-body">
<div class="main-inner">
{{ content }}
</div>
<div class="footer">
GoogleTest &middot;
<a href="https://github.com/google/googletest">GitHub Repository</a> &middot;
<a href="https://github.com/google/googletest/blob/master/LICENSE">License</a> &middot;
<a href="https://policies.google.com/privacy">Privacy Policy</a>
</div>
</div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/anchor-js/4.1.0/anchor.min.js" integrity="sha256-lZaRhKri35AyJSypXXs4o6OPFTbTmUoltBbDCbdzegg=" crossorigin="anonymous"></script>
<script>anchors.add('.main h2, .main h3, .main h4, .main h5, .main h6');</script>
</body>
</html>

@@ -1,200 +0,0 @@
// Styles for GoogleTest docs website on GitHub Pages.
// Color variables are defined in
// https://github.com/pages-themes/primer/tree/master/_sass/primer-support/lib/variables
$sidebar-width: 260px;
body {
display: flex;
margin: 0;
}
.sidebar {
background: $black;
color: $text-white;
flex-shrink: 0;
height: 100vh;
overflow: auto;
position: sticky;
top: 0;
width: $sidebar-width;
}
.sidebar h1 {
font-size: 1.5em;
}
.sidebar h2 {
color: $gray-light;
font-size: 0.8em;
font-weight: normal;
margin-bottom: 0.8em;
padding-left: 2.5em;
text-transform: uppercase;
}
.sidebar .header {
background: $black;
padding: 2em;
position: sticky;
top: 0;
width: 100%;
}
.sidebar .header a {
color: $text-white;
text-decoration: none;
}
.sidebar .nav-toggle {
display: none;
}
.sidebar .expander {
cursor: pointer;
display: none;
height: 3em;
position: absolute;
right: 1em;
top: 1.5em;
width: 3em;
}
.sidebar .expander .arrow {
border: solid $white;
border-width: 0 3px 3px 0;
display: block;
height: 0.7em;
margin: 1em auto;
transform: rotate(45deg);
transition: transform 0.5s;
width: 0.7em;
}
.sidebar nav {
width: 100%;
}
.sidebar nav ul {
list-style-type: none;
margin-bottom: 1em;
padding: 0;
&:last-child {
margin-bottom: 2em;
}
a {
text-decoration: none;
}
li {
color: $text-white;
padding-left: 2em;
text-decoration: none;
}
li.active {
background: $border-gray-darker;
font-weight: bold;
}
li:hover {
background: $border-gray-darker;
}
}
.main {
background-color: $bg-gray;
width: calc(100% - #{$sidebar-width});
}
.main .main-inner {
background-color: $white;
padding: 2em;
}
.main .footer {
margin: 0;
padding: 2em;
}
.main table th {
text-align: left;
}
.main .callout {
border-left: 0.25em solid $white;
padding: 1em;
a {
text-decoration: underline;
}
&.important {
background-color: $bg-yellow-light;
border-color: $bg-yellow;
color: $black;
}
&.note {
background-color: $bg-blue-light;
border-color: $text-blue;
color: $text-blue;
}
&.tip {
background-color: $green-000;
border-color: $green-700;
color: $green-700;
}
&.warning {
background-color: $red-000;
border-color: $text-red;
color: $text-red;
}
}
.main .good pre {
background-color: $bg-green-light;
}
.main .bad pre {
background-color: $red-000;
}
@media all and (max-width: 768px) {
body {
flex-direction: column;
}
.sidebar {
height: auto;
position: relative;
width: 100%;
}
.sidebar .expander {
display: block;
}
.sidebar nav {
height: 0;
overflow: hidden;
}
.sidebar .nav-toggle:checked {
& ~ nav {
height: auto;
}
& + .expander .arrow {
transform: rotate(-135deg);
}
}
.main {
width: 100%;
}
}

File diff suppressed because it is too large
@@ -1,5 +0,0 @@
---
---
@import "jekyll-theme-primer";
@import "main";

@@ -1,7 +0,0 @@
# Community-Created Documentation
The following is a list, in no particular order, of links to documentation
created by the Googletest community.
* [Googlemock Insights](https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles/blob/master/googletest/insights.md),
by [ElectricRCAircraftGuy](https://github.com/ElectricRCAircraftGuy)

@@ -1,693 +0,0 @@
# Googletest FAQ
## Why should test suite names and test names not contain underscore?
{: .callout .note}
Note: Googletest reserves underscore (`_`) for special purpose keywords, such as
[the `DISABLED_` prefix](advanced.md#temporarily-disabling-tests), in addition
to the following rationale.
Underscore (`_`) is special, as C++ reserves the following to be used by the
compiler and the standard library:
1. any identifier that starts with an `_` followed by an upper-case letter, and
2. any identifier that contains two consecutive underscores (i.e. `__`)
*anywhere* in its name.
User code is *prohibited* from using such identifiers.
Now let's look at what this means for `TEST` and `TEST_F`.
Currently `TEST(TestSuiteName, TestName)` generates a class named
`TestSuiteName_TestName_Test`. What happens if `TestSuiteName` or `TestName`
contains `_`?
1. If `TestSuiteName` starts with an `_` followed by an upper-case letter (say,
`_Foo`), we end up with `_Foo_TestName_Test`, which is reserved and thus
invalid.
2. If `TestSuiteName` ends with an `_` (say, `Foo_`), we get
`Foo__TestName_Test`, which is invalid.
3. If `TestName` starts with an `_` (say, `_Bar`), we get
`TestSuiteName__Bar_Test`, which is invalid.
4. If `TestName` ends with an `_` (say, `Bar_`), we get
`TestSuiteName_Bar__Test`, which is invalid.
So clearly `TestSuiteName` and `TestName` cannot start or end with `_`
(Actually, `TestSuiteName` can start with `_` -- as long as the `_` isn't
followed by an upper-case letter. But that's getting complicated. So for
simplicity we just say that it cannot start with `_`.).
It may seem fine for `TestSuiteName` and `TestName` to contain `_` in the
middle. However, consider this:
```c++
TEST(Time, Flies_Like_An_Arrow) { ... }
TEST(Time_Flies, Like_An_Arrow) { ... }
```
Now, the two `TEST`s will both generate the same class
(`Time_Flies_Like_An_Arrow_Test`). That's not good.
So for simplicity, we just ask the users to avoid `_` in `TestSuiteName` and
`TestName`. The rule is more constraining than necessary, but it's simple and
easy to remember. It also gives googletest some wiggle room in case its
implementation needs to change in the future.
If you violate the rule, there may not be immediate consequences, but your test
may (just may) break with a new compiler (or a new version of the compiler you
are using) or with a new version of googletest. Therefore it's best to follow
the rule.
## Why does googletest support `EXPECT_EQ(NULL, ptr)` and `ASSERT_EQ(NULL, ptr)` but not `EXPECT_NE(NULL, ptr)` and `ASSERT_NE(NULL, ptr)`?
First of all, you can use `nullptr` with each of these macros, e.g.
`EXPECT_EQ(ptr, nullptr)`, `EXPECT_NE(ptr, nullptr)`, `ASSERT_EQ(ptr, nullptr)`,
`ASSERT_NE(ptr, nullptr)`. This is the preferred syntax in the style guide
because `nullptr` does not have the type problems that `NULL` does.
Due to some peculiarity of C++, it requires some non-trivial template meta
programming tricks to support using `NULL` as an argument of the `EXPECT_XX()`
and `ASSERT_XX()` macros. Therefore we only do it where it's most needed
(otherwise we make the implementation of googletest harder to maintain and more
error-prone than necessary).
Historically, the `EXPECT_EQ()` macro took the *expected* value as its first
argument and the *actual* value as the second, though this argument order is now
discouraged. It was reasonable that someone wanted
to write `EXPECT_EQ(NULL, some_expression)`, and this indeed was requested
several times. Therefore we implemented it.
The need for `EXPECT_NE(NULL, ptr)` wasn't nearly as strong. When the assertion
fails, you already know that `ptr` must be `NULL`, so it doesn't add any
information to print `ptr` in this case. That means `EXPECT_TRUE(ptr != NULL)`
works just as well.
If we were to support `EXPECT_NE(NULL, ptr)`, for consistency we'd have to
support `EXPECT_NE(ptr, NULL)` as well. This means using the template meta
programming tricks twice in the implementation, making it even harder to
understand and maintain. We believe the benefit doesn't justify the cost.
Finally, with the growth of the gMock matcher library, we are encouraging people
to use the unified `EXPECT_THAT(value, matcher)` syntax more often in tests. One
significant advantage of the matcher approach is that matchers can be easily
combined to form new matchers, while the `EXPECT_NE`, etc, macros cannot be
easily combined. Therefore we want to invest more in the matchers than in the
`EXPECT_XX()` macros.
## I need to test that different implementations of an interface satisfy some common requirements. Should I use typed tests or value-parameterized tests?
For testing various implementations of the same interface, either typed tests or
value-parameterized tests can get the job done. It's really up to you, the user, to
decide which is more convenient for you, depending on your particular case. Some
rough guidelines:
* Typed tests can be easier to write if instances of the different
implementations can be created the same way, modulo the type. For example,
if all these implementations have a public default constructor (such that
you can write `new TypeParam`), or if their factory functions have the same
form (e.g. `CreateInstance<TypeParam>()`).
* Value-parameterized tests can be easier to write if you need different code
patterns to create different implementations' instances, e.g. `new Foo` vs
`new Bar(5)`. To accommodate the differences, you can write factory
function wrappers and pass these function pointers to the tests as their
parameters.
* When a typed test fails, the default output includes the name of the type,
which can help you quickly identify which implementation is wrong.
Value-parameterized tests only show the number of the failed iteration by
default. You will need to define a function that returns the iteration name
and pass it as the third parameter to `INSTANTIATE_TEST_SUITE_P` to have more
useful output.
* When using typed tests, you need to make sure you are testing against the
interface type, not the concrete types (in other words, you want to make
sure `implicit_cast<MyInterface*>(my_concrete_impl)` works, not just that
`my_concrete_impl` works). It's less likely to make mistakes in this area
when using value-parameterized tests.
I hope I didn't confuse you more. :-) If you don't mind, I'd suggest you give
both approaches a try. Practice is a much better way to grasp the subtle
differences between the two tools. Once you have some concrete experience, you
can much more easily decide which one to use the next time.
## I got some run-time errors about invalid proto descriptors when using `ProtocolMessageEquals`. Help!
{: .callout .note}
**Note:** `ProtocolMessageEquals` and `ProtocolMessageEquiv` are *deprecated*
now. Please use `EqualsProto`, etc instead.
`ProtocolMessageEquals` and `ProtocolMessageEquiv` were redefined recently and
are now less tolerant of invalid protocol buffer definitions. In particular, if
you have a `foo.proto` that doesn't fully qualify the type of a protocol message
it references (e.g. `message<Bar>` where it should be `message<blah.Bar>`), you
will now get run-time errors like:
```
... descriptor.cc:...] Invalid proto descriptor for file "path/to/foo.proto":
... descriptor.cc:...] blah.MyMessage.my_field: ".Bar" is not defined.
```
If you see this, your `.proto` file is broken and needs to be fixed by making
the types fully qualified. The new definitions of `ProtocolMessageEquals` and
`ProtocolMessageEquiv` just happen to reveal your bug.
## My death test modifies some state, but the change seems lost after the death test finishes. Why?
Death tests (`EXPECT_DEATH`, etc) are executed in a sub-process so that the
expected crash won't kill the test program (i.e. the parent process). As a
result, any in-memory side effects they incur are observable in their respective
sub-processes, but not in the parent process. You can think of them as running
in a parallel universe, more or less.
In particular, if you use mocking and the death test statement invokes some mock
methods, the parent process will think the calls have never occurred. Therefore,
you may want to move your `EXPECT_CALL` statements inside the `EXPECT_DEATH`
macro.
## EXPECT_EQ(htonl(blah), blah_blah) generates weird compiler errors in opt mode. Is this a googletest bug?
Actually, the bug is in `htonl()`.
According to `'man htonl'`, `htonl()` is a *function*, which means it's valid to
use `htonl` as a function pointer. However, in opt mode `htonl()` is defined as
a *macro*, which breaks this usage.
Worse, the macro definition of `htonl()` uses a `gcc` extension and is *not*
standard C++. That hacky implementation has some ad hoc limitations. In
particular, it prevents you from writing `Foo<sizeof(htonl(x))>()`, where `Foo`
is a template that has an integral argument.
The implementation of `EXPECT_EQ(a, b)` uses `sizeof(... a ...)` inside a
template argument, and thus doesn't compile in opt mode when `a` contains a call
to `htonl()`. It is difficult to make `EXPECT_EQ` bypass the `htonl()` bug, as
the solution must work with different compilers on various platforms.
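A common workaround (a sketch, not a googletest API; `ToNetworkOrder` is an illustrative helper) is to evaluate `htonl()` into a plain variable first, so the macro never expands inside `EXPECT_EQ`'s `sizeof()`-based template argument:

```c++
#include <arpa/inet.h>  // htonl()/ntohl(); POSIX
#include <cstdint>

// Expands the htonl() macro here, in an ordinary function body, instead of
// inside EXPECT_EQ's template machinery.
inline uint32_t ToNetworkOrder(uint32_t host) {
  const uint32_t net = htonl(host);
  return net;
}
```

With this, `EXPECT_EQ(ToNetworkOrder(x), expected)` compiles in both debug and opt modes.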
## The compiler complains about "undefined references" to some static const member variables, but I did define them in the class body. What's wrong?
If your class has a static data member:
```c++
// foo.h
class Foo {
...
static const int kBar = 100;
};
```
You also need to define it *outside* of the class body in `foo.cc`:
```c++
const int Foo::kBar; // No initializer here.
```
Otherwise your code is **invalid C++**, and may break in unexpected ways. In
particular, using it in googletest comparison assertions (`EXPECT_EQ`, etc) will
generate an "undefined reference" linker error. The fact that "it used to work"
doesn't mean it's valid. It just means that you were lucky. :-)
If the declaration of the static data member is `constexpr` then it is
implicitly an `inline` definition, and a separate definition in `foo.cc` is not
needed:
```c++
// foo.h
class Foo {
...
static constexpr int kBar = 100; // Defines kBar, no need to do it in foo.cc.
};
```
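In other words, assuming C++17, the `constexpr` member can even be ODR-used (for example, having its address taken, as googletest's comparison assertions do internally) without any `foo.cc` definition. A minimal sketch:

```c++
struct Foo {
  static constexpr int kBar = 100;  // implicitly inline since C++17
};

// ODR-uses kBar by taking its address; links without an out-of-line
// definition when compiled as C++17 or later.
inline const int* AddressOfKBar() { return &Foo::kBar; }
```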
## Can I derive a test fixture from another?
Yes.
Each test fixture has a corresponding, same-named test suite. This means only
one test suite can use a particular fixture. Sometimes, however, multiple test
cases may want to use the same or slightly different fixtures. For example, you
may want to make sure that all of a GUI library's test suites don't leak
important system resources like fonts and brushes.
In googletest, you share a fixture among test suites by putting the shared logic
in a base test fixture, then deriving from that base a separate fixture for each
test suite that wants to use this common logic. You then use `TEST_F()` to write
tests using each derived fixture.
Typically, your code looks like this:
```c++
// Defines a base test fixture.
class BaseTest : public ::testing::Test {
protected:
...
};
// Derives a fixture FooTest from BaseTest.
class FooTest : public BaseTest {
protected:
void SetUp() override {
BaseTest::SetUp(); // Sets up the base fixture first.
... additional set-up work ...
}
void TearDown() override {
... clean-up work for FooTest ...
BaseTest::TearDown(); // Remember to tear down the base fixture
// after cleaning up FooTest!
}
... functions and variables for FooTest ...
};
// Tests that use the fixture FooTest.
TEST_F(FooTest, Bar) { ... }
TEST_F(FooTest, Baz) { ... }
... additional fixtures derived from BaseTest ...
```
If necessary, you can continue to derive test fixtures from a derived fixture.
googletest has no limit on how deep the hierarchy can be.
For a complete example using derived test fixtures, see
[sample5_unittest.cc](https://github.com/google/googletest/blob/master/googletest/samples/sample5_unittest.cc).
## My compiler complains "void value not ignored as it ought to be." What does this mean?
You're probably using an `ASSERT_*()` in a function that doesn't return `void`.
`ASSERT_*()` can only be used in `void` functions, due to exceptions being
disabled by our build system. Please see more details
[here](advanced.md#assertion-placement).
## My death test hangs (or seg-faults). How do I fix it?
In googletest, death tests are run in a child process and the way they work is
delicate. To write death tests you really need to understand how they work—see
the details at [Death Assertions](reference/assertions.md#death) in the
Assertions Reference.
In particular, death tests don't like having multiple threads in the parent
process. So the first thing you can try is to eliminate creating threads outside
of `EXPECT_DEATH()`. For example, you may want to use mocks or fake objects
instead of real ones in your tests.
Sometimes this is impossible as some library you must use may be creating
threads before `main()` is even reached. In this case, you can try to minimize
the chance of conflicts by either moving as many activities as possible inside
`EXPECT_DEATH()` (in the extreme case, you want to move everything inside), or
leaving as few things as possible in it. Also, you can try to set the death test
style to `"threadsafe"`, which is safer but slower, and see if it helps.
If you go with thread-safe death tests, remember that they rerun the test
program from the beginning in the child process. Therefore make sure your
program can run side-by-side with itself and is deterministic.
In the end, this boils down to good concurrent programming. You have to make
sure that there are no race conditions or deadlocks in your program. No silver
bullet - sorry!
## Should I use the constructor/destructor of the test fixture or SetUp()/TearDown()? {#CtorVsSetUp}
The first thing to remember is that googletest does **not** reuse the same test
fixture object across multiple tests. For each `TEST_F`, googletest will create
a **fresh** test fixture object, immediately call `SetUp()`, run the test body,
call `TearDown()`, and then delete the test fixture object.
When you need to write per-test set-up and tear-down logic, you have the choice
between using the test fixture constructor/destructor or `SetUp()/TearDown()`.
The former is usually preferred, as it has the following benefits:
* By initializing a member variable in the constructor, we have the option to
make it `const`, which helps prevent accidental changes to its value and
makes the tests more obviously correct.
* In case we need to subclass the test fixture class, the subclass'
constructor is guaranteed to call the base class' constructor *first*, and
the subclass' destructor is guaranteed to call the base class' destructor
*afterward*. With `SetUp()/TearDown()`, a subclass may make the mistake of
forgetting to call the base class' `SetUp()/TearDown()` or call them at the
wrong time.
You may still want to use `SetUp()/TearDown()` in the following cases:
* C++ does not allow virtual function calls in constructors and destructors.
You can call a method declared as virtual, but it will not use dynamic
dispatch; it will use the definition from the class whose constructor is
currently executing. This is because calling a virtual method before the
derived class constructor has a chance to run is very dangerous - the
virtual method might operate on uninitialized data. Therefore, if you need
to call a method that will be overridden in a derived class, you have to use
`SetUp()/TearDown()`.
* In the body of a constructor (or destructor), it's not possible to use the
`ASSERT_xx` macros. Therefore, if the set-up operation could cause a fatal
test failure that should prevent the test from running, it's necessary to
use `abort` and abort the whole test
executable, or to use `SetUp()` instead of a constructor.
* If the tear-down operation could throw an exception, you must use
`TearDown()` as opposed to the destructor, as throwing in a destructor leads
to undefined behavior and usually will kill your program right away. Note
that many standard libraries (like STL) may throw when exceptions are
enabled in the compiler. Therefore you should prefer `TearDown()` if you
want to write portable tests that work with or without exceptions.
* The googletest team is considering making the assertion macros throw on
platforms where exceptions are enabled (e.g. Windows, Mac OS, and Linux
client-side), which will eliminate the need for the user to propagate
failures from a subroutine to its caller. Therefore, you shouldn't use
googletest assertions in a destructor if your code could run on such a
platform.
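The constructor-dispatch rule in the first bullet can be demonstrated in plain C++, independent of googletest (class names are illustrative):

```c++
#include <string>

struct Base {
  std::string constructed_as;
  Base() { constructed_as = Name(); }  // virtual call in a ctor: static dispatch
  virtual ~Base() = default;
  virtual std::string Name() const { return "Base"; }
};

struct Derived : Base {
  std::string Name() const override { return "Derived"; }
};
```

Constructing a `Derived` records `"Base"`, not `"Derived"`, which is why set-up code that relies on overrides belongs in `SetUp()`.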
## The compiler complains "no matching function to call" when I use ASSERT_PRED*. How do I fix it?
See details for [`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) in the
Assertions Reference.
## My compiler complains about "ignoring return value" when I call RUN_ALL_TESTS(). Why?
Some people had been ignoring the return value of `RUN_ALL_TESTS()`. That is,
instead of
```c++
return RUN_ALL_TESTS();
```
they write
```c++
RUN_ALL_TESTS();
```
This is **wrong and dangerous**. The testing service needs to see the return
value of `RUN_ALL_TESTS()` in order to determine if a test has passed. If your
`main()` function ignores it, your test will be considered successful even if it
has a googletest assertion failure. Very bad.
We have decided to fix this (thanks to Michael Chastain for the idea). Now, your
code will no longer be able to ignore `RUN_ALL_TESTS()` when compiled with
`gcc`. If you do so, you'll get a compiler error.
If you see the compiler complaining about you ignoring the return value of
`RUN_ALL_TESTS()`, the fix is simple: just make sure its value is used as the
return value of `main()`.
But how could we introduce a change that breaks existing tests? Well, in this
case, the code was already broken in the first place, so we didn't break it. :-)
## My compiler complains that a constructor (or destructor) cannot return a value. What's going on?
Due to a peculiarity of C++, in order to support the syntax for streaming
messages to an `ASSERT_*`, e.g.
```c++
ASSERT_EQ(1, Foo()) << "blah blah" << foo;
```
we had to give up using `ASSERT*` and `FAIL*` (but not `EXPECT*` and
`ADD_FAILURE*`) in constructors and destructors. The workaround is to move the
content of your constructor/destructor to a private void member function, or
switch to `EXPECT_*()` if that works. This
[section](advanced.md#assertion-placement) in the user's guide explains it.
## My SetUp() function is not called. Why?
C++ is case-sensitive. Did you spell it as `Setup()`?
Similarly, sometimes people spell `SetUpTestSuite()` as `SetupTestSuite()` and
wonder why it's never called.
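Marking the method `override` (C++11) turns this silent misspelling into a compile error, because the base class has no virtual `Setup()` to override. A sketch, independent of googletest:

```c++
struct FixtureBase {
  bool set_up_ran = false;
  virtual ~FixtureBase() = default;
  virtual void SetUp() {}
};

struct MyFixture : FixtureBase {
  // `override` would reject the misspelling `void Setup() override`.
  void SetUp() override { set_up_ran = true; }
};
```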
## I have several test suites which share the same test fixture logic, do I have to define a new test fixture class for each of them? This seems pretty tedious.
You don't have to. Instead of
```c++
class FooTest : public BaseTest {};
TEST_F(FooTest, Abc) { ... }
TEST_F(FooTest, Def) { ... }
class BarTest : public BaseTest {};
TEST_F(BarTest, Abc) { ... }
TEST_F(BarTest, Def) { ... }
```
you can simply `typedef` the test fixtures:
```c++
typedef BaseTest FooTest;
TEST_F(FooTest, Abc) { ... }
TEST_F(FooTest, Def) { ... }
typedef BaseTest BarTest;
TEST_F(BarTest, Abc) { ... }
TEST_F(BarTest, Def) { ... }
```
## googletest output is buried in a whole bunch of LOG messages. What do I do?
The googletest output is meant to be a concise and human-friendly report. If
your test generates textual output itself, it will mix with the googletest
output, making it hard to read. However, there is an easy solution to this
problem.
Since `LOG` messages go to stderr, we decided to let googletest output go to
stdout. This way, you can easily separate the two using redirection. For
example:
```shell
$ ./my_test > gtest_output.txt
```
## Why should I prefer test fixtures over global variables?
There are several good reasons:
1. It's likely your test needs to change the states of its global variables.
This makes it difficult to keep side effects from escaping one test and
contaminating others, making debugging difficult. By using fixtures, each
test has a fresh set of variables that's different (but with the same
names). Thus, tests are kept independent of each other.
2. Global variables pollute the global namespace.
3. Test fixtures can be reused via subclassing, which cannot be done easily
with global variables. This is useful if many test suites have something in
common.
## What can the statement argument in ASSERT_DEATH() be?
`ASSERT_DEATH(statement, matcher)` (or any death assertion macro) can be used
wherever *`statement`* is valid. So basically *`statement`* can be any C++
statement that makes sense in the current context. In particular, it can
reference global and/or local variables, and can be:
* a simple function call (often the case),
* a complex expression, or
* a compound statement.
Some examples are shown here:
```c++
// A death test can be a simple function call.
TEST(MyDeathTest, FunctionCall) {
ASSERT_DEATH(Xyz(5), "Xyz failed");
}
// Or a complex expression that references variables and functions.
TEST(MyDeathTest, ComplexExpression) {
const bool c = Condition();
ASSERT_DEATH((c ? Func1(0) : object2.Method("test")),
"(Func1|Method) failed");
}
// Death assertions can be used anywhere in a function. In
// particular, they can be inside a loop.
TEST(MyDeathTest, InsideLoop) {
// Verifies that Foo(0), Foo(1), ..., and Foo(4) all die.
for (int i = 0; i < 5; i++) {
EXPECT_DEATH_M(Foo(i), "Foo has \\d+ errors",
::testing::Message() << "where i is " << i);
}
}
// A death assertion can contain a compound statement.
TEST(MyDeathTest, CompoundStatement) {
  // Verifies that at least one of Bar(0), Bar(1), ..., and
// Bar(4) dies.
ASSERT_DEATH({
for (int i = 0; i < 5; i++) {
Bar(i);
}
},
"Bar has \\d+ errors");
}
```
## I have a fixture class `FooTest`, but `TEST_F(FooTest, Bar)` gives me error ``"no matching function for call to `FooTest::FooTest()'"``. Why?
Googletest needs to be able to create objects of your test fixture class, so it
must have a default constructor. Normally the compiler will define one for you.
However, there are cases where you have to define your own:
* If you explicitly declare a non-default constructor for class `FooTest`
(`DISALLOW_EVIL_CONSTRUCTORS()` does this), then you need to define a
default constructor, even if it would be empty.
* If `FooTest` has a const non-static data member, then you have to define the
default constructor *and* initialize the const member in the initializer
list of the constructor. (Early versions of `gcc` don't force you to
initialize the const member. It's a bug that has been fixed in `gcc 4`.)
## Why does ASSERT_DEATH complain about previous threads that were already joined?
With the Linux pthread library, there is no turning back once you cross the line
from a single thread to multiple threads. The first time you create a thread, a
manager thread is created in addition, so you get 3, not 2, threads. Later when
the thread you create joins the main thread, the thread count decrements by 1,
but the manager thread will never be killed, so you still have 2 threads, which
means you cannot safely run a death test.
The new NPTL thread library doesn't suffer from this problem, as it doesn't
create a manager thread. However, if you don't control which machine your test
runs on, you shouldn't depend on this.
## Why does googletest require the entire test suite, instead of individual tests, to be named *DeathTest when it uses ASSERT_DEATH?
googletest does not interleave tests from different test suites. That is, it
runs all tests in one test suite first, and then runs all tests in the next test
suite, and so on. googletest does this because it needs to set up a test suite
before the first test in it is run, and tear it down afterwards. Splitting up
the test case would require multiple set-up and tear-down processes, which is
inefficient and makes the semantics unclean.
If we were to determine the order of tests based on test name instead of test
case name, then we would have a problem with the following situation:
```c++
TEST_F(FooTest, AbcDeathTest) { ... }
TEST_F(FooTest, Uvw) { ... }
TEST_F(BarTest, DefDeathTest) { ... }
TEST_F(BarTest, Xyz) { ... }
```
Since `FooTest.AbcDeathTest` needs to run before `BarTest.Xyz`, and we don't
interleave tests from different test suites, we need to run all tests in the
`FooTest` case before running any test in the `BarTest` case. This contradicts
with the requirement to run `BarTest.DefDeathTest` before `FooTest.Uvw`.
## But I don't like calling my entire test suite \*DeathTest when it contains both death tests and non-death tests. What do I do?
You don't have to, but if you like, you may split up the test suite into
`FooTest` and `FooDeathTest`, where the names make it clear that they are
related:
```c++
class FooTest : public ::testing::Test { ... };
TEST_F(FooTest, Abc) { ... }
TEST_F(FooTest, Def) { ... }
using FooDeathTest = FooTest;
TEST_F(FooDeathTest, Uvw) { ... EXPECT_DEATH(...) ... }
TEST_F(FooDeathTest, Xyz) { ... ASSERT_DEATH(...) ... }
```
## googletest prints the LOG messages in a death test's child process only when the test fails. How can I see the LOG messages when the death test succeeds?
Printing the LOG messages generated by the statement inside `EXPECT_DEATH()`
makes it harder to search for real problems in the parent's log. Therefore,
googletest only prints them when the death test has failed.
If you really need to see such LOG messages, a workaround is to temporarily
break the death test (e.g. by changing the regex pattern it is expected to
match). Admittedly, this is a hack. We'll consider a more permanent solution
after the fork-and-exec-style death tests are implemented.
## The compiler complains about `no match for 'operator<<'` when I use an assertion. What gives?
If you use a user-defined type `FooType` in an assertion, you must make sure
there is an `std::ostream& operator<<(std::ostream&, const FooType&)` function
defined such that we can print a value of `FooType`.
In addition, if `FooType` is declared in a name space, the `<<` operator also
needs to be defined in the *same* name space. See
[Tip of the Week #49](http://abseil.io/tips/49) for details.
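A minimal sketch (`FooType` is illustrative), with the printer in the same namespace so argument-dependent lookup can find it:

```c++
#include <ostream>
#include <sstream>
#include <string>

namespace myns {

struct FooType { int value; };

// Defined in the SAME namespace as FooType so ADL finds it from inside
// googletest's value printer.
inline std::ostream& operator<<(std::ostream& os, const FooType& f) {
  return os << "FooType(" << f.value << ")";
}

inline std::string Print(const FooType& f) {
  std::ostringstream oss;
  oss << f;
  return oss.str();
}

}  // namespace myns
```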
## How do I suppress the memory leak messages on Windows?
Since the statically initialized googletest singleton requires allocations on
the heap, the Visual C++ memory leak detector will report memory leaks at the
end of the program run. The easiest way to avoid this is to use the
`_CrtMemCheckpoint` and `_CrtMemDumpAllObjectsSince` calls to not report any
statically initialized heap objects. See MSDN for more details and additional
heap check/debug routines.
## How can my code detect if it is running in a test?
If you write code that sniffs whether it's running in a test and does different
things accordingly, you are leaking test-only logic into production code and
there is no easy way to ensure that the test-only code paths aren't run by
mistake in production. Such cleverness also leads to
[Heisenbugs](https://en.wikipedia.org/wiki/Heisenbug). Therefore we strongly
advise against the practice, and googletest doesn't provide a way to do it.
In general, the recommended way to cause the code to behave differently under
test is [Dependency Injection](http://en.wikipedia.org/wiki/Dependency_injection). You can inject
different functionality from the test and from the production code. Since your
production code doesn't link in the for-test logic at all (the
[`testonly`](http://docs.bazel.build/versions/master/be/common-definitions.html#common.testonly) attribute for BUILD targets helps to ensure
that), there is no danger in accidentally running it.
However, if you *really*, *really*, *really* have no choice, and if you follow
the rule of ending your test program names with `_test`, you can use the
*horrible* hack of sniffing your executable name (`argv[0]` in `main()`) to know
whether the code is under test.
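If you truly must, the hack can look like this sketch (the helper is hypothetical, not a googletest API):

```c++
#include <string>

// Hypothetical helper: true if argv[0] follows the `_test` naming rule.
inline bool LooksLikeTestBinary(const std::string& argv0) {
  const std::string suffix = "_test";
  return argv0.size() >= suffix.size() &&
         argv0.compare(argv0.size() - suffix.size(), suffix.size(),
                       suffix) == 0;
}
```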
## How do I temporarily disable a test?
If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).
To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag.
## Is it OK if I have two separate `TEST(Foo, Bar)` test methods defined in different namespaces?
Yes.
The rule is **all test methods in the same test suite must use the same fixture
class.** This means that the following is **allowed** because both tests use the
same fixture class (`::testing::Test`).
```c++
namespace foo {
TEST(CoolTest, DoSomething) {
SUCCEED();
}
} // namespace foo
namespace bar {
TEST(CoolTest, DoSomething) {
SUCCEED();
}
} // namespace bar
```
However, the following code is **not allowed** and will produce a runtime error
from googletest because the test methods are using different test fixture
classes with the same test suite name.
```c++
namespace foo {
class CoolTest : public ::testing::Test {}; // Fixture foo::CoolTest
TEST_F(CoolTest, DoSomething) {
SUCCEED();
}
} // namespace foo
namespace bar {
class CoolTest : public ::testing::Test {}; // Fixture: bar::CoolTest
TEST_F(CoolTest, DoSomething) {
SUCCEED();
}
} // namespace bar
```
# gMock Cheat Sheet
## Defining a Mock Class
### Mocking a Normal Class {#MockClass}
Given
```cpp
class Foo {
...
virtual ~Foo();
virtual int GetSize() const = 0;
virtual string Describe(const char* name) = 0;
virtual string Describe(int type) = 0;
virtual bool Process(Bar elem, int count) = 0;
};
```
(note that `~Foo()` **must** be virtual) we can define its mock as
```cpp
#include "gmock/gmock.h"
class MockFoo : public Foo {
...
MOCK_METHOD(int, GetSize, (), (const, override));
MOCK_METHOD(string, Describe, (const char* name), (override));
MOCK_METHOD(string, Describe, (int type), (override));
MOCK_METHOD(bool, Process, (Bar elem, int count), (override));
};
```
To create a "nice" mock, which ignores all uninteresting calls, a "naggy" mock,
which warns on all uninteresting calls, or a "strict" mock, which treats them as
failures:
```cpp
using ::testing::NiceMock;
using ::testing::NaggyMock;
using ::testing::StrictMock;
NiceMock<MockFoo> nice_foo; // The type is a subclass of MockFoo.
NaggyMock<MockFoo> naggy_foo; // The type is a subclass of MockFoo.
StrictMock<MockFoo> strict_foo; // The type is a subclass of MockFoo.
```
{: .callout .note}
**Note:** A mock object is currently naggy by default. We may make it nice by
default in the future.
### Mocking a Class Template {#MockTemplate}
Class templates can be mocked just like any class.
To mock
```cpp
template <typename Elem>
class StackInterface {
...
virtual ~StackInterface();
virtual int GetSize() const = 0;
virtual void Push(const Elem& x) = 0;
};
```
(note that all member functions that are mocked, including `~StackInterface()`,
**must** be virtual).
```cpp
template <typename Elem>
class MockStack : public StackInterface<Elem> {
...
MOCK_METHOD(int, GetSize, (), (const, override));
MOCK_METHOD(void, Push, (const Elem& x), (override));
};
```
### Specifying Calling Conventions for Mock Functions
If your mock function doesn't use the default calling convention, you can
specify it by adding `Calltype(convention)` to `MOCK_METHOD`'s 4th parameter.
For example,
```cpp
MOCK_METHOD(bool, Foo, (int n), (Calltype(STDMETHODCALLTYPE)));
MOCK_METHOD(int, Bar, (double x, double y),
(const, Calltype(STDMETHODCALLTYPE)));
```
where `STDMETHODCALLTYPE` is defined by `<objbase.h>` on Windows.
## Using Mocks in Tests {#UsingMocks}
The typical work flow is:
1. Import the gMock names you need to use. All gMock symbols are in the
`testing` namespace unless they are macros or otherwise noted.
2. Create the mock objects.
3. Optionally, set the default actions of the mock objects.
4. Set your expectations on the mock objects (How will they be called? What
will they do?).
5. Exercise code that uses the mock objects; if necessary, check the result
using googletest assertions.
6. When a mock object is destructed, gMock automatically verifies that all
expectations on it have been satisfied.
Here's an example:
```cpp
using ::testing::Return; // #1
TEST(BarTest, DoesThis) {
MockFoo foo; // #2
ON_CALL(foo, GetSize()) // #3
.WillByDefault(Return(1));
// ... other default actions ...
EXPECT_CALL(foo, Describe(5)) // #4
.Times(3)
.WillRepeatedly(Return("Category 5"));
// ... other expectations ...
EXPECT_EQ(MyProductionFunction(&foo), "good"); // #5
} // #6
```
## Setting Default Actions {#OnCall}
gMock has a **built-in default action** for any function that returns `void`,
`bool`, a numeric value, or a pointer. In C++11, it will additionally return
the default-constructed value, if one exists for the given type.
To customize the default action for functions with return type `T`, use
[`DefaultValue<T>`](reference/mocking.md#DefaultValue). For example:
```cpp
// Sets the default action for return type std::unique_ptr<Buzz> to
// creating a new Buzz every time.
DefaultValue<std::unique_ptr<Buzz>>::SetFactory(
[] { return MakeUnique<Buzz>(AccessLevel::kInternal); });
// When this fires, the default action of MakeBuzz() will run, which
// will return a new Buzz object.
EXPECT_CALL(mock_buzzer_, MakeBuzz("hello")).Times(AnyNumber());
auto buzz1 = mock_buzzer_.MakeBuzz("hello");
auto buzz2 = mock_buzzer_.MakeBuzz("hello");
EXPECT_NE(buzz1, nullptr);
EXPECT_NE(buzz2, nullptr);
EXPECT_NE(buzz1, buzz2);
// Resets the default action for return type std::unique_ptr<Buzz>,
// to avoid interfering with other tests.
DefaultValue<std::unique_ptr<Buzz>>::Clear();
```
To customize the default action for a particular method of a specific mock
object, use [`ON_CALL`](reference/mocking.md#ON_CALL). `ON_CALL` has a similar
syntax to `EXPECT_CALL`, but it is used for setting default behaviors when you
do not require that the mock method is called. See
[Knowing When to Expect](gmock_cook_book.md#UseOnCall) for a more detailed
discussion.
## Setting Expectations {#ExpectCall}
See [`EXPECT_CALL`](reference/mocking.md#EXPECT_CALL) in the Mocking Reference.
## Matchers {#MatcherList}
See the [Matchers Reference](reference/matchers.md).
## Actions {#ActionList}
See the [Actions Reference](reference/actions.md).
## Cardinalities {#CardinalityList}
See the [`Times` clause](reference/mocking.md#EXPECT_CALL.Times) of
`EXPECT_CALL` in the Mocking Reference.
## Expectation Order
By default, expectations can be matched in *any* order. If some or all
expectations must be matched in a given order, you can use the
[`After` clause](reference/mocking.md#EXPECT_CALL.After) or
[`InSequence` clause](reference/mocking.md#EXPECT_CALL.InSequence) of
`EXPECT_CALL`, or use an [`InSequence` object](reference/mocking.md#InSequence).
## Verifying and Resetting a Mock
gMock will verify the expectations on a mock object when it is destructed, or
you can do it earlier:
```cpp
using ::testing::Mock;
...
// Verifies and removes the expectations on mock_obj;
// returns true if and only if successful.
Mock::VerifyAndClearExpectations(&mock_obj);
...
// Verifies and removes the expectations on mock_obj;
// also removes the default actions set by ON_CALL();
// returns true if and only if successful.
Mock::VerifyAndClear(&mock_obj);
```
Do not set new expectations after verifying and clearing a mock after its use.
Setting expectations after code that exercises the mock has undefined behavior.
See [Using Mocks in Tests](gmock_for_dummies.md#using-mocks-in-tests) for more
information.
You can also tell gMock that a mock object can be leaked and doesn't need to be
verified:
```cpp
Mock::AllowLeak(&mock_obj);
```
## Mock Classes
gMock defines a convenient mock class template
```cpp
class MockFunction<R(A1, ..., An)> {
public:
MOCK_METHOD(R, Call, (A1, ..., An));
};
```
See this [recipe](gmock_cook_book.md#using-check-points) for one application of
it.
## Flags
| Flag | Description |
| :----------------------------- | :---------------------------------------- |
| `--gmock_catch_leaked_mocks=0` | Don't report leaked mock objects as failures. |
| `--gmock_verbose=LEVEL` | Sets the default verbosity level (`info`, `warning`, or `error`) of Google Mock messages. |
# Legacy gMock FAQ
### When I call a method on my mock object, the method for the real object is invoked instead. What's the problem?
In order for a method to be mocked, it must be *virtual*, unless you use the
[high-perf dependency injection technique](gmock_cook_book.md#MockingNonVirtualMethods).
### Can I mock a variadic function?
You cannot mock a variadic function (i.e. a function taking ellipsis (`...`)
arguments) directly in gMock.
The problem is that in general, there is *no way* for a mock object to know how
many arguments are passed to the variadic method, and what the arguments' types
are. Only the *author of the base class* knows the protocol, and we cannot look
into his or her head.
Therefore, to mock such a function, the *user* must teach the mock object how to
figure out the number of arguments and their types. One way to do it is to
provide overloaded versions of the function.
Ellipsis arguments are inherited from C and not really a C++ feature. They are
unsafe to use and don't work with arguments that have constructors or
destructors. Therefore we recommend avoiding them in C++ as much as possible.
### MSVC gives me warning C4301 or C4373 when I define a mock method with a const parameter. Why?
If you compile this using Microsoft Visual C++ 2005 SP1:
```cpp
class Foo {
...
virtual void Bar(const int i) = 0;
};
class MockFoo : public Foo {
...
MOCK_METHOD(void, Bar, (const int i), (override));
};
```
You may get the following warning:
```shell
warning C4301: 'MockFoo::Bar': overriding virtual function only differs from 'Foo::Bar' by const/volatile qualifier
```
This is an MSVC bug. The same code compiles fine with gcc, for example. If you
use Visual C++ 2008 SP1, you would get the warning:
```shell
warning C4373: 'MockFoo::Bar': virtual function overrides 'Foo::Bar', previous versions of the compiler did not override when parameters only differed by const/volatile qualifiers
```
In C++, if you *declare* a function with a `const` parameter, the `const`
modifier is ignored. Therefore, the `Foo` base class above is equivalent to:
```cpp
class Foo {
...
virtual void Bar(int i) = 0; // int or const int? Makes no difference.
};
```
In fact, you can *declare* `Bar()` with an `int` parameter, and define it with a
`const int` parameter. The compiler will still match them up.
Since making a parameter `const` is meaningless in the method declaration, we
recommend removing it in both `Foo` and `MockFoo`. That should work around the
VC bug.
Note that we are talking about the *top-level* `const` modifier here. If the
function parameter is passed by pointer or reference, declaring the pointee or
referee as `const` is still meaningful. For example, the following two
declarations are *not* equivalent:
```cpp
void Bar(int* p); // Neither p nor *p is const.
void Bar(const int* p); // p is not const, but *p is.
```
### I can't figure out why gMock thinks my expectations are not satisfied. What should I do?
You might want to run your test with `--gmock_verbose=info`. This flag lets
gMock print a trace of every mock function call it receives. By studying the
trace, you'll gain insights on why the expectations you set are not met.
If you see the message "The mock function has no default action set, and its
return type has no default value set.", then try
[adding a default action](gmock_cheat_sheet.md#OnCall). Due to a known issue,
unexpected calls on mocks without default actions don't print out a detailed
comparison between the actual arguments and the expected arguments.
### My program crashed and `ScopedMockLog` spit out tons of messages. Is it a gMock bug?
gMock and `ScopedMockLog` are likely doing the right thing here.
When a test crashes, the failure signal handler will try to log a lot of
information (the stack trace, and the address map, for example). The messages
are compounded if you have many threads with deep stacks. When `ScopedMockLog`
intercepts these messages and finds that they don't match any expectations, it
prints an error for each of them.
You can learn to ignore the errors, or you can rewrite your expectations to make
your test more robust, for example, by adding something like:
```cpp
using ::testing::AnyNumber;
using ::testing::Not;
...
// Ignores any log not done by us.
EXPECT_CALL(log, Log(_, Not(EndsWith("/my_file.cc")), _))
.Times(AnyNumber());
```
### How can I assert that a function is NEVER called?
```cpp
using ::testing::_;
...
EXPECT_CALL(foo, Bar(_))
.Times(0);
```
### I have a failed test where gMock tells me TWICE that a particular expectation is not satisfied. Isn't this redundant?
When gMock detects a failure, it prints relevant information (the mock function
arguments, the state of relevant expectations, etc.) to help the user debug.
If another failure is detected, gMock will do the same, including printing the
state of relevant expectations.
Sometimes an expectation's state didn't change between two failures, and you'll
see the same description of the state twice. They are however *not* redundant,
as they refer to *different points in time*. The fact they are the same *is*
interesting information.
### I get a heapcheck failure when using a mock object, but using a real object is fine. What can be wrong?
Does the class (hopefully a pure interface) you are mocking have a virtual
destructor?
Whenever you derive from a base class, make sure its destructor is virtual.
Otherwise Bad Things will happen. Consider the following code:
```cpp
class Base {
public:
// Not virtual, but should be.
~Base() { ... }
...
};
class Derived : public Base {
public:
...
private:
std::string value_;
};
...
Base* p = new Derived;
...
delete p; // Surprise! ~Base() will be called, but ~Derived() will not
// - value_ is leaked.
```
By changing `~Base()` to virtual, `~Derived()` will be correctly called when
`delete p` is executed, and the heap checker will be happy.
### The "newer expectations override older ones" rule makes writing expectations awkward. Why does gMock do that?
When people complain about this, often they are referring to code like:
```cpp
using ::testing::Return;
...
// foo.Bar() should be called twice, return 1 the first time, and return
// 2 the second time. However, I have to write the expectations in the
// reverse order. This sucks big time!!!
EXPECT_CALL(foo, Bar())
.WillOnce(Return(2))
.RetiresOnSaturation();
EXPECT_CALL(foo, Bar())
.WillOnce(Return(1))
.RetiresOnSaturation();
```
The problem is that they didn't pick the **best** way to express the test's
intent.
By default, expectations don't have to be matched in *any* particular order. If
you want them to match in a certain order, you need to be explicit. This is
gMock's (and jMock's) fundamental philosophy: it's easy to accidentally
over-specify your tests, and we want to make it harder to do so.
There are two better ways to write the test spec. You could either put the
expectations in sequence:
```cpp
using ::testing::Return;
...
// foo.Bar() should be called twice, return 1 the first time, and return
// 2 the second time. Using a sequence, we can write the expectations
// in their natural order.
{
InSequence s;
EXPECT_CALL(foo, Bar())
.WillOnce(Return(1))
.RetiresOnSaturation();
EXPECT_CALL(foo, Bar())
.WillOnce(Return(2))
.RetiresOnSaturation();
}
```
or you can put the sequence of actions in the same expectation:
```cpp
using ::testing::Return;
...
// foo.Bar() should be called twice, return 1 the first time, and return
// 2 the second time.
EXPECT_CALL(foo, Bar())
.WillOnce(Return(1))
.WillOnce(Return(2))
.RetiresOnSaturation();
```
Back to the original questions: why does gMock search the expectations (and
`ON_CALL`s) from back to front? Because this allows a user to set up a mock's
behavior for the common case early (e.g. in the mock's constructor or the test
fixture's set-up phase) and customize it with more specific rules later. If
gMock searches from front to back, this very useful pattern won't be possible.
### gMock prints a warning when a function without EXPECT_CALL is called, even if I have set its behavior using ON_CALL. Would it be reasonable not to show the warning in this case?
When choosing between being neat and being safe, we lean toward the latter. So
the answer is that we think it's better to show the warning.
Often people write `ON_CALL`s in the mock object's constructor or `SetUp()`, as
the default behavior rarely changes from test to test. Then in the test body
they set the expectations, which are often different for each test. Having an
`ON_CALL` in the set-up part of a test doesn't mean that the calls are expected.
If there's no `EXPECT_CALL` and the method is called, it's possibly an error. If
we quietly let the call go through without notifying the user, bugs may creep in
unnoticed.
If, however, you are sure that the calls are OK, you can write
```cpp
using ::testing::_;
...
EXPECT_CALL(foo, Bar(_))
.WillRepeatedly(...);
```
instead of
```cpp
using ::testing::_;
...
ON_CALL(foo, Bar(_))
.WillByDefault(...);
```
This tells gMock that you do expect the calls and no warning should be printed.
Also, you can control the verbosity by specifying `--gmock_verbose=error`. Other
values are `info` and `warning`. If you find the output too noisy when
debugging, just choose a less verbose level.
### How can I delete the mock function's argument in an action?
If your mock function takes a pointer argument and you want to delete that
argument, you can use `testing::DeleteArg<N>()` to delete the N-th (zero-indexed)
argument:
```cpp
using ::testing::_;
...
MOCK_METHOD(void, Bar, (X* x, const Y& y));
...
EXPECT_CALL(mock_foo_, Bar(_, _))
.WillOnce(testing::DeleteArg<0>());
```
### How can I perform an arbitrary action on a mock function's argument?
If you find yourself needing to perform some action that's not supported by
gMock directly, remember that you can define your own actions using
[`MakeAction()`](#NewMonoActions) or
[`MakePolymorphicAction()`](#NewPolyActions), or you can write a stub function
and invoke it using [`Invoke()`](#FunctionsAsActions).
```cpp
using ::testing::_;
using ::testing::Invoke;
...
MOCK_METHOD(void, Bar, (X* p));
...
EXPECT_CALL(mock_foo_, Bar(_))
.WillOnce(Invoke(MyAction(...)));
```
### My code calls a static/global function. Can I mock it?
You can, but you need to make some changes.
In general, if you find yourself needing to mock a static function, it's a sign
that your modules are too tightly coupled (and less flexible, less reusable,
less testable, etc.). You are probably better off defining a small interface and
calling the function through that interface, which can then be easily mocked.
It's a bit of work initially, but usually pays for itself quickly.
This Google Testing Blog
[post](https://testing.googleblog.com/2008/06/defeat-static-cling.html) says it
excellently. Check it out.
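A minimal sketch of that refactoring, with hypothetical names (`sys::ReadConfig` stands in for the static or global function your code calls):

```cpp
#include <cassert>
#include <string>

// The free function the production code used to call directly.
namespace sys {
inline std::string ReadConfig() { return "prod"; }
}

// Small interface that callers depend on instead.
class ConfigReader {
 public:
  virtual ~ConfigReader() = default;
  virtual std::string Read() = 0;
};

// Production implementation: just forwards to the free function.
class RealConfigReader : public ConfigReader {
 public:
  std::string Read() override { return sys::ReadConfig(); }
};

// In tests, substitute a mock (MOCK_METHOD(std::string, Read, (), (override)))
// or a simple fake like this one.
class FakeConfigReader : public ConfigReader {
 public:
  std::string Read() override { return "test"; }
};

// Code under test now takes the interface, not the free function.
std::string Describe(ConfigReader& reader) { return "config=" + reader.Read(); }
```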
### My mock object needs to do complex stuff. It's a lot of pain to specify the actions. gMock sucks!
I know it's not a question, but you get an answer for free anyway. :-)
With gMock, you can create mocks in C++ easily. And people might be tempted to
use them everywhere. Sometimes they work great, and sometimes you may find them,
well, a pain to use. So, what's wrong in the latter case?
When you write a test without using mocks, you exercise the code and assert that
it returns the correct value or that the system is in an expected state. This is
sometimes called "state-based testing".
Mocks are great for what some call "interaction-based" testing: instead of
checking the system state at the very end, mock objects verify that they are
invoked the right way and report an error as soon as it arises, giving you a
handle on the precise context in which the error was triggered. This is often
more effective and economical to do than state-based testing.
If you are doing state-based testing and using a test double just to simulate
the real object, you are probably better off using a fake. Using a mock in this
case causes pain, as it's not a strong point for mocks to perform complex
actions. If you experience this and think that mocks suck, you are just not
using the right tool for your problem. Or, you might be trying to solve the
wrong problem. :-)
### I got a warning "Uninteresting function call encountered - default action taken." Should I panic?
By all means, NO! It's just an FYI. :-)
What it means is that you have a mock function, you haven't set any expectations
on it (by gMock's rule this means that you are not interested in calls to this
function and therefore it can be called any number of times), and it is called.
That's OK - you didn't say it's not OK to call the function!
What if you actually meant to disallow this function to be called, but forgot to
write `EXPECT_CALL(foo, Bar()).Times(0)`? While one can argue that it's the
user's fault, gMock tries to be nice and prints you a note.
So, when you see the message and believe that there shouldn't be any
uninteresting calls, you should investigate what's going on. To make your life
easier, gMock dumps the stack trace when an uninteresting call is encountered.
From that you can figure out which mock function it is, and how it is called.
### I want to define a custom action. Should I use Invoke() or implement the ActionInterface interface?
Either way is fine - you want to choose the one that's more convenient for your
circumstance.
Usually, if your action is for a particular function type, defining it using
`Invoke()` should be easier; if your action can be used in functions of
different types (e.g. if you are defining `Return(*value*)`),
`MakePolymorphicAction()` is easiest. Sometimes you want precise control on what
types of functions the action can be used in, and implementing `ActionInterface`
is the way to go here. See the implementation of `Return()` in
`testing/base/public/gmock-actions.h` for an example.
### I use SetArgPointee() in WillOnce(), but gcc complains about "conflicting return type specified". What does it mean?
You got this error as gMock has no idea what value it should return when the
mock method is called. `SetArgPointee()` says what the side effect is, but
doesn't say what the return value should be. You need `DoAll()` to chain a
`SetArgPointee()` with a `Return()` that provides a value appropriate to the API
being mocked.
See this [recipe](gmock_cook_book.md#mocking-side-effects) for more details and
an example.
### I have a huge mock class, and Microsoft Visual C++ runs out of memory when compiling it. What can I do?
We've noticed that when the `/clr` compiler flag is used, Visual C++ uses 5-6
times as much memory when compiling a mock class. We suggest avoiding `/clr`
when compiling native C++ mocks.

# gMock for Dummies
## What Is gMock?
When you write a prototype or test, often it's not feasible or wise to rely on
real objects entirely. A **mock object** implements the same interface as a real
object (so it can be used as one), but lets you specify at run time how it will
be used and what it should do (which methods will be called? in which order? how
many times? with what arguments? what will they return? etc).
It is easy to confuse the term *fake objects* with mock objects. Fakes and mocks
actually mean very different things in the Test-Driven Development (TDD)
community:
* **Fake** objects have working implementations, but usually take some
shortcut (perhaps to make the operations less expensive), which makes them
not suitable for production. An in-memory file system would be an example of
a fake.
* **Mocks** are objects pre-programmed with *expectations*, which form a
specification of the calls they are expected to receive.
If all this seems too abstract for you, don't worry - the most important thing
to remember is that a mock allows you to check the *interaction* between itself
and code that uses it. The difference between fakes and mocks shall become much
clearer once you start to use mocks.
**gMock** is a library (sometimes we also call it a "framework" to make it sound
cool) for creating mock classes and using them. It does to C++ what
jMock/EasyMock does to Java (well, more or less).
When using gMock,
1. first, you use some simple macros to describe the interface you want to
mock, and they will expand to the implementation of your mock class;
2. next, you create some mock objects and specify their expectations and
behavior using an intuitive syntax;
3. then you exercise code that uses the mock objects. gMock will catch any
violation to the expectations as soon as it arises.
## Why gMock?
While mock objects help you remove unnecessary dependencies in tests and make
them fast and reliable, using mocks manually in C++ is *hard*:
* Someone has to implement the mocks. The job is usually tedious and
error-prone. No wonder people go to great lengths to avoid it.
* The quality of those manually written mocks is a bit, uh, unpredictable. You
may see some really polished ones, but you may also see some that were
hacked up in a hurry and have all sorts of ad hoc restrictions.
* The knowledge you gained from using one mock doesn't transfer to the next
one.
In contrast, Java and Python programmers have some fine mock frameworks (jMock,
EasyMock, etc), which automate the creation of mocks. As a result, mocking is a
proven effective technique and widely adopted practice in those communities.
Having the right tool absolutely makes the difference.
gMock was built to help C++ programmers. It was inspired by jMock and EasyMock,
but designed with C++'s specifics in mind. It is your friend if any of the
following problems is bothering you:
* You are stuck with a sub-optimal design and wish you had done more
prototyping before it was too late, but prototyping in C++ is by no means
"rapid".
* Your tests are slow as they depend on too many libraries or use expensive
resources (e.g. a database).
* Your tests are brittle as some resources they use are unreliable (e.g. the
network).
* You want to test how your code handles a failure (e.g. a file checksum
error), but it's not easy to cause one.
* You need to make sure that your module interacts with other modules in the
right way, but it's hard to observe the interaction; therefore you resort to
observing the side effects at the end of the action, but it's awkward at
best.
* You want to "mock out" your dependencies, except that they don't have mock
implementations yet; and, frankly, you aren't thrilled by some of those
hand-written mocks.
We encourage you to use gMock as
* a *design* tool, for it lets you experiment with your interface design early
and often. More iterations lead to better designs!
* a *testing* tool to cut your tests' outbound dependencies and probe the
interaction between your module and its collaborators.
## Getting Started
gMock is bundled with googletest.
## A Case for Mock Turtles
Let's look at an example. Suppose you are developing a graphics program that
relies on a [LOGO](http://en.wikipedia.org/wiki/Logo_programming_language)-like
API for drawing. How would you test that it does the right thing? Well, you can
run it and compare the screen with a golden screen snapshot, but let's admit it:
tests like this are expensive to run and fragile (What if you just upgraded to a
shiny new graphics card that has better anti-aliasing? Suddenly you have to
update all your golden images.). It would be too painful if all your tests are
like this. Fortunately, you learned about
[Dependency Injection](http://en.wikipedia.org/wiki/Dependency_injection) and know the right thing
to do: instead of having your application talk to the system API directly, wrap
the API in an interface (say, `Turtle`) and code to that interface:
```cpp
class Turtle {
...
virtual ~Turtle() {}
virtual void PenUp() = 0;
virtual void PenDown() = 0;
virtual void Forward(int distance) = 0;
virtual void Turn(int degrees) = 0;
virtual void GoTo(int x, int y) = 0;
virtual int GetX() const = 0;
virtual int GetY() const = 0;
};
```
(Note that the destructor of `Turtle` **must** be virtual, as is the case for
**all** classes you intend to inherit from - otherwise the destructor of the
derived class will not be called when you delete an object through a base
pointer, and you'll get corrupted program states like memory leaks.)
You can control whether the turtle's movement will leave a trace using `PenUp()`
and `PenDown()`, and control its movement using `Forward()`, `Turn()`, and
`GoTo()`. Finally, `GetX()` and `GetY()` tell you the current position of the
turtle.
Your program will normally use a real implementation of this interface. In
tests, you can use a mock implementation instead. This allows you to easily
check what drawing primitives your program is calling, with what arguments, and
in which order. Tests written this way are much more robust (they won't break
because your new machine does anti-aliasing differently), easier to read and
maintain (the intent of a test is expressed in the code, not in some binary
images), and run *much, much faster*.
## Writing the Mock Class
If you are lucky, the mocks you need to use have already been implemented by
some nice people. If, however, you find yourself in the position to write a mock
class, relax - gMock turns this task into a fun game! (Well, almost.)
### How to Define It
Using the `Turtle` interface as example, here are the simple steps you need to
follow:
* Derive a class `MockTurtle` from `Turtle`.
* Take a *virtual* function of `Turtle` (while it's possible to
[mock non-virtual methods using templates](gmock_cook_book.md#MockingNonVirtualMethods),
it's much more involved).
* In the `public:` section of the child class, write `MOCK_METHOD();`
* Now comes the fun part: you take the function signature, cut-and-paste it
into the macro, and add two commas - one between the return type and the
name, another between the name and the argument list.
* If you're mocking a const method, add a 4th parameter containing `(const)`
(the parentheses are required).
* Since you're overriding a virtual method, we suggest adding the `override`
keyword. For const methods the 4th parameter becomes `(const, override)`,
for non-const methods just `(override)`. This isn't mandatory.
* Repeat until all virtual functions you want to mock are done. (It goes
without saying that *all* pure virtual methods in your abstract class must
be either mocked or overridden.)
After the process, you should have something like:
```cpp
#include "gmock/gmock.h" // Brings in gMock.
class MockTurtle : public Turtle {
public:
...
MOCK_METHOD(void, PenUp, (), (override));
MOCK_METHOD(void, PenDown, (), (override));
MOCK_METHOD(void, Forward, (int distance), (override));
MOCK_METHOD(void, Turn, (int degrees), (override));
MOCK_METHOD(void, GoTo, (int x, int y), (override));
MOCK_METHOD(int, GetX, (), (const, override));
MOCK_METHOD(int, GetY, (), (const, override));
};
```
You don't need to define these mock methods somewhere else - the `MOCK_METHOD`
macro will generate the definitions for you. It's that simple!
### Where to Put It
When you define a mock class, you need to decide where to put its definition.
Some people put it in a `_test.cc`. This is fine when the interface being mocked
(say, `Foo`) is owned by the same person or team. Otherwise, when the owner of
`Foo` changes it, your test could break. (You can't really expect `Foo`'s
maintainer to fix every test that uses `Foo`, can you?)
So, the rule of thumb is: if you need to mock `Foo` and it's owned by others,
define the mock class in `Foo`'s package (better, in a `testing` sub-package
such that you can clearly separate production code and testing utilities), put
it in a `.h` and a `cc_library`. Then everyone can reference them from their
tests. If `Foo` ever changes, there is only one copy of `MockFoo` to change, and
only tests that depend on the changed methods need to be fixed.
Another way to do it: you can introduce a thin layer `FooAdaptor` on top of
`Foo` and code to this new interface. Since you own `FooAdaptor`, you can absorb
changes in `Foo` much more easily. While this is more work initially, carefully
choosing the adaptor interface can make your code easier to write and more
readable (a net win in the long run), as you can choose `FooAdaptor` to fit your
specific domain much better than `Foo` does.
## Using Mocks in Tests
Once you have a mock class, using it is easy. The typical work flow is:
1. Import the gMock names from the `testing` namespace such that you can use
them unqualified (You only have to do it once per file). Remember that
namespaces are a good idea.
2. Create some mock objects.
3. Specify your expectations on them (How many times will a method be called?
With what arguments? What should it do? etc.).
4. Exercise some code that uses the mocks; optionally, check the result using
googletest assertions. If a mock method is called more than expected or with
wrong arguments, you'll get an error immediately.
5. When a mock is destructed, gMock will automatically check whether all
expectations on it have been satisfied.
Here's an example:
```cpp
#include "path/to/mock-turtle.h"
#include "gmock/gmock.h"
#include "gtest/gtest.h"
using ::testing::AtLeast; // #1
TEST(PainterTest, CanDrawSomething) {
MockTurtle turtle; // #2
EXPECT_CALL(turtle, PenDown()) // #3
.Times(AtLeast(1));
Painter painter(&turtle); // #4
EXPECT_TRUE(painter.DrawCircle(0, 0, 10)); // #5
}
```
As you might have guessed, this test checks that `PenDown()` is called at least
once. If the `painter` object didn't call this method, your test will fail with
a message like this:
```text
path/to/my_test.cc:119: Failure
Actual function call count doesn't match this expectation:
Actually: never called;
Expected: called at least once.
Stack trace:
...
```
**Tip 1:** If you run the test from an Emacs buffer, you can hit `<Enter>` on
the line number to jump right to the failed expectation.
**Tip 2:** If your mock objects are never deleted, the final verification won't
happen. Therefore it's a good idea to turn on the heap checker in your tests
when you allocate mocks on the heap. You get that automatically if you use the
`gtest_main` library already.
**Important note:** gMock requires expectations to be set **before** the mock
functions are called, otherwise the behavior is **undefined**. Do not alternate
between calls to `EXPECT_CALL()` and calls to the mock functions, and do not set
any expectations on a mock after passing the mock to an API.
This means `EXPECT_CALL()` should be read as expecting that a call will occur
*in the future*, not that a call has occurred. Why does gMock work like that?
Well, specifying the expectation beforehand allows gMock to report a violation
as soon as it arises, when the context (stack trace, etc.) is still available.
This makes debugging much easier.
Admittedly, this test is contrived and doesn't do much. You can easily achieve
the same effect without using gMock. However, as we shall reveal soon, gMock
allows you to do *so much more* with the mocks.
## Setting Expectations
The key to using a mock object successfully is to set the *right expectations*
on it. If you set the expectations too strict, your test will fail as the result
of unrelated changes. If you set them too loose, bugs can slip through. You want
to do it just right such that your test can catch exactly the kind of bugs you
intend it to catch. gMock provides the necessary means for you to do it "just
right."
### General Syntax
In gMock we use the `EXPECT_CALL()` macro to set an expectation on a mock
method. The general syntax is:
```cpp
EXPECT_CALL(mock_object, method(matchers))
.Times(cardinality)
.WillOnce(action)
.WillRepeatedly(action);
```
The macro has two arguments: first the mock object, and then the method and its
arguments. Note that the two are separated by a comma (`,`), not a period (`.`).
(Why a comma? It was necessary for technical reasons.)
If the method is not overloaded, the macro can also be called without matchers:
```cpp
EXPECT_CALL(mock_object, non-overloaded-method)
.Times(cardinality)
.WillOnce(action)
.WillRepeatedly(action);
```
This syntax allows the test writer to specify "called with any arguments"
without explicitly specifying the number or types of arguments. To avoid
unintended ambiguity, this syntax may only be used for methods that are not
overloaded.
Either form of the macro can be followed by some optional *clauses* that provide
more information about the expectation. We'll discuss how each clause works in
the coming sections.
This syntax is designed to make an expectation read like English. For example,
you can probably guess that
```cpp
using ::testing::Return;
...
EXPECT_CALL(turtle, GetX())
.Times(5)
.WillOnce(Return(100))
.WillOnce(Return(150))
.WillRepeatedly(Return(200));
```
says that the `turtle` object's `GetX()` method will be called five times, it
will return 100 the first time, 150 the second time, and then 200 every time.
Some people like to call this style of syntax a Domain-Specific Language (DSL).
{: .callout .note}
**Note:** Why do we use a macro to do this? Well it serves two purposes: first
it makes expectations easily identifiable (either by `grep` or by a human
reader), and second it allows gMock to include the source file location of a
failed expectation in messages, making debugging easier.
### Matchers: What Arguments Do We Expect?
When a mock function takes arguments, we may specify what arguments we are
expecting, for example:
```cpp
// Expects the turtle to move forward by 100 units.
EXPECT_CALL(turtle, Forward(100));
```
Oftentimes you do not want to be too specific. Remember that talk about tests
being too rigid? Over-specification leads to brittle tests and obscures the
intent of tests. Therefore we encourage you to specify only what's necessary: no
more, no less. If you aren't interested in the value of an argument, write `_`
as the argument, which means "anything goes":
```cpp
using ::testing::_;
...
// Expects that the turtle jumps to somewhere on the x=50 line.
EXPECT_CALL(turtle, GoTo(50, _));
```
`_` is an instance of what we call **matchers**. A matcher is like a predicate
and can test whether an argument is what we'd expect. You can use a matcher
inside `EXPECT_CALL()` wherever a function argument is expected. `_` is a
convenient way of saying "any value".
In the above examples, `100` and `50` are also matchers; implicitly, they are
the same as `Eq(100)` and `Eq(50)`, which specify that the argument must be
equal (using `operator==`) to the matcher argument. There are many
[built-in matchers](reference/matchers.md) for common types (as well as
[custom matchers](gmock_cook_book.md#NewMatchers)); for example:
```cpp
using ::testing::Ge;
...
// Expects the turtle moves forward by at least 100.
EXPECT_CALL(turtle, Forward(Ge(100)));
```
If you don't care about *any* arguments, rather than specify `_` for each of
them you may instead omit the parameter list:
```cpp
// Expects the turtle to move forward.
EXPECT_CALL(turtle, Forward);
// Expects the turtle to jump somewhere.
EXPECT_CALL(turtle, GoTo);
```
This works for all non-overloaded methods; if a method is overloaded, you need
to help gMock resolve which overload is expected by specifying the number of
arguments and possibly also the
[types of the arguments](gmock_cook_book.md#SelectOverload).
### Cardinalities: How Many Times Will It Be Called?
The first clause we can specify following an `EXPECT_CALL()` is `Times()`. We
call its argument a **cardinality** as it tells *how many times* the call should
occur. It allows us to repeat an expectation many times without actually writing
it as many times. More importantly, a cardinality can be "fuzzy", just like a
matcher can be. This allows a user to express the intent of a test exactly.
An interesting special case is when we say `Times(0)`. You may have guessed - it
means that the function shouldn't be called with the given arguments at all, and
gMock will report a googletest failure whenever the function is (wrongfully)
called.
We've seen `AtLeast(n)` as an example of fuzzy cardinalities earlier. For the
list of built-in cardinalities you can use, see
[here](gmock_cheat_sheet.md#CardinalityList).
The `Times()` clause can be omitted. **If you omit `Times()`, gMock will infer
the cardinality for you.** The rules are easy to remember:
* If **neither** `WillOnce()` **nor** `WillRepeatedly()` is in the
`EXPECT_CALL()`, the inferred cardinality is `Times(1)`.
* If there are *n* `WillOnce()`'s but **no** `WillRepeatedly()`, where *n* >=
1, the cardinality is `Times(n)`.
* If there are *n* `WillOnce()`'s and **one** `WillRepeatedly()`, where *n* >=
0, the cardinality is `Times(AtLeast(n))`.
**Quick quiz:** what do you think will happen if a function is expected to be
called twice but actually called four times?
### Actions: What Should It Do?
Remember that a mock object doesn't really have a working implementation? We as
users have to tell it what to do when a method is invoked. This is easy in
gMock.
First, if the return type of a mock function is a built-in type or a pointer,
the function has a **default action** (a `void` function will just return, a
`bool` function will return `false`, and other functions will return 0). In
addition, in C++11 and above, a mock function whose return type is
default-constructible (i.e. has a default constructor) has a default action of
returning a default-constructed value. If you don't say anything, this behavior
will be used.
Second, if a mock function doesn't have a default action, or the default action
doesn't suit you, you can specify the action to be taken each time the
expectation matches using a series of `WillOnce()` clauses followed by an
optional `WillRepeatedly()`. For example,
```cpp
using ::testing::Return;
...
EXPECT_CALL(turtle, GetX())
.WillOnce(Return(100))
.WillOnce(Return(200))
.WillOnce(Return(300));
```
says that `turtle.GetX()` will be called *exactly three times* (gMock inferred
this from how many `WillOnce()` clauses we've written, since we didn't
explicitly write `Times()`), and will return 100, 200, and 300 respectively.
```cpp
using ::testing::Return;
...
EXPECT_CALL(turtle, GetY())
.WillOnce(Return(100))
.WillOnce(Return(200))
.WillRepeatedly(Return(300));
```
says that `turtle.GetY()` will be called *at least twice* (gMock knows this as
we've written two `WillOnce()` clauses and a `WillRepeatedly()` while having no
explicit `Times()`), will return 100 and 200 respectively the first two times,
and 300 from the third time on.
Of course, if you explicitly write a `Times()`, gMock will not try to infer the
cardinality itself. What if the number you specified is larger than there are
`WillOnce()` clauses? Well, after all the `WillOnce()`s are used up, gMock will
perform the *default* action for the function every time (unless, of course, you
have a `WillRepeatedly()`).
What can we do inside `WillOnce()` besides `Return()`? You can return a
reference using `ReturnRef(*variable*)`, or invoke a pre-defined function, among
[others](gmock_cook_book.md#using-actions).
**Important note:** The `EXPECT_CALL()` statement evaluates the action clause
only once, even though the action may be performed many times. Therefore you
must be careful about side effects. The following may not do what you want:
```cpp
using ::testing::Return;
...
int n = 100;
EXPECT_CALL(turtle, GetX())
.Times(4)
.WillRepeatedly(Return(n++));
```
Instead of returning 100, 101, 102, ..., consecutively, this mock function will
always return 100 as `n++` is only evaluated once. Similarly, `Return(new Foo)`
will create a new `Foo` object when the `EXPECT_CALL()` is executed, and will
return the same pointer every time. If you want the side effect to happen every
time, you need to define a custom action, which we'll teach in the
[cook book](gmock_cook_book.md).
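The pitfall can be reproduced in plain C++, outside of gMock: an argument passed by value is evaluated once, when the "expectation" is set, while a callable re-evaluates its body on every call. This is an illustration under hypothetical names, not gMock code:

```cpp
#include <functional>

// Mimics Return(n++): the argument expression is evaluated exactly once,
// at set-up time, so every later call yields the same captured value.
std::function<int()> return_value(int v) {
  return [v] { return v; };
}

// Mimics a custom action: the lambda body runs on every call, so the
// side effect (incrementing n) happens each time.
std::function<int()> return_and_increment(int& n) {
  return [&n] { return n++; };
}
```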
Time for another quiz! What do you think the following means?
```cpp
using ::testing::Return;
...
EXPECT_CALL(turtle, GetY())
.Times(4)
.WillOnce(Return(100));
```
Obviously `turtle.GetY()` is expected to be called four times. But if you think
it will return 100 every time, think twice! Remember that one `WillOnce()`
clause will be consumed each time the function is invoked and the default action
will be taken afterwards. So the right answer is that `turtle.GetY()` will
return 100 the first time, but **return 0 from the second time on**, as
returning 0 is the default action for `int` functions.
### Using Multiple Expectations {#MultiExpectations}
So far we've only shown examples where you have a single expectation. More
realistically, you'll specify expectations on multiple mock methods which may be
from multiple mock objects.
By default, when a mock method is invoked, gMock will search the expectations in
the **reverse order** they are defined, and stop when an active expectation that
matches the arguments is found (you can think of it as "newer rules override
older ones."). If the matching expectation cannot take any more calls, you will
get an upper-bound-violated failure. Here's an example:
```cpp
using ::testing::_;
...
EXPECT_CALL(turtle, Forward(_)); // #1
EXPECT_CALL(turtle, Forward(10)) // #2
.Times(2);
```
If `Forward(10)` is called three times in a row, the third time it will be an
error, as the last matching expectation (#2) has been saturated. If, however,
the third `Forward(10)` call is replaced by `Forward(20)`, then it would be OK,
as now #1 will be the matching expectation.
{: .callout .note}
**Note:** Why does gMock search for a match in the *reverse* order of the
expectations? The reason is that this allows a user to set up the default
expectations in a mock object's constructor or the test fixture's set-up phase
and then customize the mock by writing more specific expectations in the test
body. So, if you have two expectations on the same method, you want to put the
one with more specific matchers **after** the other, or the more specific rule
would be shadowed by the more general one that comes after it.
{: .callout .tip}
**Tip:** It is very common to start with a catch-all expectation for a method
and `Times(AnyNumber())` (omitting arguments, or with `_` for all arguments, if
overloaded). This makes any calls to the method expected. This is not necessary
for methods that are not mentioned at all (these are "uninteresting"), but is
useful for methods that have some expectations, but for which other calls are
ok. See
[Understanding Uninteresting vs Unexpected Calls](gmock_cook_book.md#uninteresting-vs-unexpected).
### Ordered vs Unordered Calls {#OrderedCalls}
By default, an expectation can match a call even though an earlier expectation
hasn't been satisfied. In other words, the calls don't have to occur in the
order the expectations are specified.
Sometimes, you may want all the expected calls to occur in a strict order. To
say this in gMock is easy:
```cpp
using ::testing::InSequence;
...
TEST(FooTest, DrawsLineSegment) {
...
{
InSequence seq;
EXPECT_CALL(turtle, PenDown());
EXPECT_CALL(turtle, Forward(100));
EXPECT_CALL(turtle, PenUp());
}
Foo();
}
```
By creating an object of type `InSequence`, all expectations in its scope are
put into a *sequence* and have to occur *sequentially*. Since we are just
relying on the constructor and destructor of this object to do the actual work,
its name is really irrelevant.
In this example, we test that `Foo()` calls the three expected functions in the
order as written. If a call is made out-of-order, it will be an error.
(What if you care about the relative order of some of the calls, but not all of
them? Can you specify an arbitrary partial order? The answer is ... yes! The
details can be found [here](gmock_cook_book.md#OrderedCalls).)
### All Expectations Are Sticky (Unless Said Otherwise) {#StickyExpectations}
Now let's do a quick quiz to see how well you can use this mock stuff already.
How would you test that the turtle is asked to go to the origin *exactly twice*
(you want to ignore any other instructions it receives)?
After you've come up with your answer, take a look at ours and compare notes
(solve it yourself first - don't cheat!):
```cpp
using ::testing::_;
using ::testing::AnyNumber;
...
EXPECT_CALL(turtle, GoTo(_, _)) // #1
.Times(AnyNumber());
EXPECT_CALL(turtle, GoTo(0, 0)) // #2
.Times(2);
```
Suppose `turtle.GoTo(0, 0)` is called three times. In the third time, gMock will
see that the arguments match expectation #2 (remember that we always pick the
last matching expectation). Now, since we said that there should be only two
such calls, gMock will report an error immediately. This is basically what we've
told you in the [Using Multiple Expectations](#MultiExpectations) section above.
This example shows that **expectations in gMock are "sticky" by default**, in
the sense that they remain active even after we have reached their invocation
upper bounds. This is an important rule to remember, as it affects the meaning
of the spec, and is **different** from how it's done in many other mocking
frameworks. (Why did we do that? Because we think our rule makes the common
cases easier to express and understand.)
Simple? Let's see if you've really understood it: what does the following code
say?
```cpp
using ::testing::Return;
...
for (int i = n; i > 0; i--) {
EXPECT_CALL(turtle, GetX())
.WillOnce(Return(10*i));
}
```
If you think it says that `turtle.GetX()` will be called `n` times and will
return 10, 20, 30, ..., consecutively, think twice! The problem is that, as we
said, expectations are sticky. So, the second time `turtle.GetX()` is called,
the last (latest) `EXPECT_CALL()` statement will match, and will immediately
lead to an "upper bound violated" error - this piece of code is not very useful!
One correct way of saying that `turtle.GetX()` will return 10, 20, 30, ..., is
to explicitly say that the expectations are *not* sticky. In other words, they
should *retire* as soon as they are saturated:
```cpp
using ::testing::Return;
...
for (int i = n; i > 0; i--) {
EXPECT_CALL(turtle, GetX())
.WillOnce(Return(10*i))
.RetiresOnSaturation();
}
```
And, there's a better way to do it: in this case, we expect the calls to occur
in a specific order, and we line up the actions to match the order. Since the
order is important here, we should make it explicit using a sequence:
```cpp
using ::testing::InSequence;
using ::testing::Return;
...
{
InSequence s;
for (int i = 1; i <= n; i++) {
EXPECT_CALL(turtle, GetX())
.WillOnce(Return(10*i))
.RetiresOnSaturation();
}
}
```
By the way, the other situation where an expectation may *not* be sticky is when
it's in a sequence - as soon as another expectation that comes after it in the
sequence has been used, it automatically retires (and will never be used to
match any call).
### Uninteresting Calls
A mock object may have many methods, and not all of them are that interesting.
For example, in some tests we may not care about how many times `GetX()` and
`GetY()` get called.
In gMock, if you are not interested in a method, just don't say anything about
it. If a call to this method occurs, you'll see a warning in the test output,
but it won't be a failure. This is called "naggy" behavior; to change it, see
[The Nice, the Strict, and the Naggy](gmock_cook_book.md#NiceStrictNaggy).
# GoogleTest User's Guide
## Welcome to GoogleTest!
GoogleTest is Google's C++ testing and mocking framework. This user's guide has
the following contents:
* [GoogleTest Primer](primer.md) - Teaches you how to write simple tests using
GoogleTest. Read this first if you are new to GoogleTest.
* [GoogleTest Advanced](advanced.md) - Read this when you've finished the
Primer and want to utilize GoogleTest to its full potential.
* [GoogleTest Samples](samples.md) - Describes some GoogleTest samples.
* [GoogleTest FAQ](faq.md) - Have a question? Want some tips? Check here
first.
* [Mocking for Dummies](gmock_for_dummies.md) - Teaches you how to create mock
objects and use them in tests.
* [Mocking Cookbook](gmock_cook_book.md) - Includes tips and approaches to
common mocking use cases.
* [Mocking Cheat Sheet](gmock_cheat_sheet.md) - A handy reference for
matchers, actions, invariants, and more.
* [Mocking FAQ](gmock_faq.md) - Contains answers to some mocking-specific
questions.
## Using GoogleTest from various build systems
GoogleTest comes with pkg-config files that can be used to determine all
necessary flags for compiling and linking to GoogleTest (and GoogleMock).
A pkg-config file is a standardised plain-text format containing
* the includedir (-I) path
* necessary macro (-D) definitions
* further required flags (-pthread)
* the library (-L) path
* the library (-l) to link to
All current build systems support pkg-config in one way or another. For all
examples here we assume you want to compile the sample
`samples/sample3_unittest.cc`.
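Outside of any build system, the same information can be consumed directly on the command line. Assuming `gtest_main.pc` is visible to pkg-config, compiling the sample could look like:

```
g++ $(pkg-config --cflags gtest_main) -o testapp \
    samples/sample3_unittest.cc $(pkg-config --libs gtest_main)
```

Note that the `--libs` output goes after the source file, since linkers resolve libraries left to right.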
### CMake
Using `pkg-config` in CMake is fairly easy:
```cmake
cmake_minimum_required(VERSION 3.0)
cmake_policy(SET CMP0048 NEW)
project(my_gtest_pkgconfig VERSION 0.0.1 LANGUAGES CXX)
find_package(PkgConfig)
pkg_search_module(GTEST REQUIRED gtest_main)
add_executable(testapp samples/sample3_unittest.cc)
target_link_libraries(testapp ${GTEST_LDFLAGS})
target_compile_options(testapp PUBLIC ${GTEST_CFLAGS})
include(CTest)
add_test(first_and_only_test testapp)
```
It is generally recommended that you use `target_compile_options` + `_CFLAGS`
over `target_include_directories` + `_INCLUDE_DIRS` as the former includes not
just -I flags (GoogleTest might require a macro indicating to internal headers
that all libraries have been compiled with threading enabled. In addition,
GoogleTest might also require `-pthread` in the compiling step, and as such
splitting the pkg-config `Cflags` variable into include dirs and macros for
`target_compile_definitions()` might still miss this). The same recommendation
goes for using `_LDFLAGS` over the more commonplace `_LIBRARIES`, which happens
to discard `-L` flags and `-pthread`.
### Help! pkg-config can't find GoogleTest!
Let's say you have a `CMakeLists.txt` along the lines of the one in this
tutorial and you try to run `cmake`. It is very possible that you get a failure
along the lines of:
```
-- Checking for one of the modules 'gtest_main'
CMake Error at /usr/share/cmake/Modules/FindPkgConfig.cmake:640 (message):
None of the required 'gtest_main' found
```
These failures are common if you installed GoogleTest yourself and have not
sourced it from a distro or other package manager. If so, you need to tell
pkg-config where it can find the `.pc` files containing the information. Say you
installed GoogleTest to `/usr/local`, then it might be that the `.pc` files are
installed under `/usr/local/lib64/pkgconfig`. If you set
```
export PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig
```
pkg-config will also try to look in `PKG_CONFIG_PATH` to find `gtest_main.pc`.
### Using pkg-config in a cross-compilation setting
Pkg-config can be used in a cross-compilation setting too. To do this, let's
assume the final prefix of the cross-compiled installation will be `/usr`, and
your sysroot is `/home/MYUSER/sysroot`. Configure and install GTest using
```
mkdir build && cd build && cmake -DCMAKE_INSTALL_PREFIX=/usr ..
```
Install into the sysroot using `DESTDIR`:
```
make -j install DESTDIR=/home/MYUSER/sysroot
```
Before we continue, it is recommended to **always** define the following two
variables for pkg-config in a cross-compilation setting:
```
export PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=yes
export PKG_CONFIG_ALLOW_SYSTEM_LIBS=yes
```
otherwise `pkg-config` will filter `-I` and `-L` flags against standard prefixes
such as `/usr` (see https://bugs.freedesktop.org/show_bug.cgi?id=28264#c3 for
reasons why this stripping needs to occur usually).
If you look at the generated pkg-config file, it will look something like
```
libdir=/usr/lib64
includedir=/usr/include
Name: gtest
Description: GoogleTest (without main() function)
Version: 1.10.0
URL: https://github.com/google/googletest
Libs: -L${libdir} -lgtest -lpthread
Cflags: -I${includedir} -DGTEST_HAS_PTHREAD=1 -lpthread
```
Notice that the sysroot is not included in `libdir` and `includedir`! If you try
to run `pkg-config` with the correct
`PKG_CONFIG_LIBDIR=/home/MYUSER/sysroot/usr/lib64/pkgconfig` against this `.pc`
file, you will get
```
$ pkg-config --cflags gtest
-DGTEST_HAS_PTHREAD=1 -lpthread -I/usr/include
$ pkg-config --libs gtest
-L/usr/lib64 -lgtest -lpthread
```
which is obviously wrong and points to the `CBUILD` and not `CHOST` root. In
order to use this in a cross-compilation setting, we need to tell pkg-config to
inject the actual sysroot into `-I` and `-L` variables. Let us now tell
pkg-config about the actual sysroot
```
export PKG_CONFIG_DIR=
export PKG_CONFIG_SYSROOT_DIR=/home/MYUSER/sysroot
export PKG_CONFIG_LIBDIR=${PKG_CONFIG_SYSROOT_DIR}/usr/lib64/pkgconfig
```
and running `pkg-config` again we get
```
$ pkg-config --cflags gtest
-DGTEST_HAS_PTHREAD=1 -lpthread -I/home/MYUSER/sysroot/usr/include
$ pkg-config --libs gtest
-L/home/MYUSER/sysroot/usr/lib64 -lgtest -lpthread
```
which contains the correct sysroot now. For a more comprehensive guide to also
including `${CHOST}` in build system calls, see the excellent tutorial by Diego
Elio Pettenò: <https://autotools.io/pkgconfig/cross-compiling.html>
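The sysroot injection performed by `PKG_CONFIG_SYSROOT_DIR` can be modeled as a small transformation over the flag list. This is an illustrative sketch of the behavior, not pkg-config's actual implementation:

```cpp
#include <string>
#include <vector>

// Illustrative model of PKG_CONFIG_SYSROOT_DIR: prepend the sysroot to
// every -I and -L path; all other flags pass through untouched.
std::vector<std::string> inject_sysroot(const std::vector<std::string>& flags,
                                        const std::string& sysroot) {
  std::vector<std::string> out;
  for (const std::string& f : flags) {
    if (f.rfind("-I", 0) == 0)
      out.push_back("-I" + sysroot + f.substr(2));
    else if (f.rfind("-L", 0) == 0)
      out.push_back("-L" + sysroot + f.substr(2));
    else
      out.push_back(f);
  }
  return out;
}
```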
# Supported Platforms
GoogleTest requires a codebase and compiler compliant with the C++11 standard or
newer.
The GoogleTest code is officially supported on the following platforms.
Operating systems or tools not listed below are community-supported. For
community-supported platforms, patches that do not complicate the code may be
considered.
If you notice any problems on your platform, please file an issue on the
[GoogleTest GitHub Issue Tracker](https://github.com/google/googletest/issues).
Pull requests containing fixes are welcome!
### Operating systems
* Linux
* macOS
* Windows
### Compilers
* gcc 5.0+
* clang 5.0+
* MSVC 2015+
**macOS users:** Xcode 9.3+ provides clang 5.0+.
### Build systems
* [Bazel](https://bazel.build/)
* [CMake](https://cmake.org/)
Bazel is the build system used by the team internally and in tests. CMake is
supported on a best-effort basis and by the community.
# Googletest Primer
## Introduction: Why googletest?
*googletest* helps you write better C++ tests.
googletest is a testing framework developed by the Testing Technology team with
Google's specific requirements and constraints in mind. Whether you work on
Linux, Windows, or a Mac, if you write C++ code, googletest can help you. And it
supports *any* kind of tests, not just unit tests.
So what makes a good test, and how does googletest fit in? We believe:
1. Tests should be *independent* and *repeatable*. It's a pain to debug a test
that succeeds or fails as a result of other tests. googletest isolates the
tests by running each of them on a different object. When a test fails,
googletest allows you to run it in isolation for quick debugging.
2. Tests should be well *organized* and reflect the structure of the tested
code. googletest groups related tests into test suites that can share data
and subroutines. This common pattern is easy to recognize and makes tests
easy to maintain. Such consistency is especially helpful when people switch
projects and start to work on a new code base.
3. Tests should be *portable* and *reusable*. Google has a lot of code that is
platform-neutral; its tests should also be platform-neutral. googletest
works on different OSes, with different compilers, with or without
exceptions, so googletest tests can work with a variety of configurations.
4. When tests fail, they should provide as much *information* about the problem
as possible. googletest doesn't stop at the first test failure. Instead, it
only stops the current test and continues with the next. You can also set up
tests that report non-fatal failures after which the current test continues.
Thus, you can detect and fix multiple bugs in a single run-edit-compile
cycle.
5. The testing framework should liberate test writers from housekeeping chores
and let them focus on the test *content*. googletest automatically keeps
track of all tests defined, and doesn't require the user to enumerate them
in order to run them.
6. Tests should be *fast*. With googletest, you can reuse shared resources
across tests and pay for the set-up/tear-down only once, without making
tests depend on each other.
Since googletest is based on the popular xUnit architecture, you'll feel right
at home if you've used JUnit or PyUnit before. If not, it will take you about 10
minutes to learn the basics and get started. So let's go!
## Beware of the nomenclature
{: .callout .note}
_Note:_ There might be some confusion arising from different definitions of the
terms _Test_, _Test Case_ and _Test Suite_, so beware of misunderstanding these.
Historically, googletest started to use the term _Test Case_ for grouping
related tests, whereas current publications, including International Software
Testing Qualifications Board ([ISTQB](http://www.istqb.org/)) materials and
various textbooks on software quality, use the term
_[Test Suite][istqb test suite]_ for this.
The related term _Test_, as it is used in googletest, corresponds to the term
_[Test Case][istqb test case]_ of ISTQB and others.
The term _Test_ is commonly used in a broad enough sense to include ISTQB's
definition of _Test Case_, so it's not much of a problem here. But the term
_Test Case_ as it was used in Google Test contradicts that sense and is thus
confusing.
googletest recently started replacing the term _Test Case_ with _Test Suite_.
The preferred API is *TestSuite*. The older TestCase API is being slowly
deprecated and refactored away.
So please be aware of the different definitions of the terms:
Meaning | googletest Term | [ISTQB](http://www.istqb.org/) Term
:----------------------------------------------------------------------------------- | :---------------------- | :----------------------------------
Exercise a particular program path with specific input values and verify the results | [TEST()](#simple-tests) | [Test Case][istqb test case]
[istqb test case]: http://glossary.istqb.org/en/search/test%20case
[istqb test suite]: http://glossary.istqb.org/en/search/test%20suite
## Basic Concepts
When using googletest, you start by writing *assertions*, which are statements
that check whether a condition is true. An assertion's result can be *success*,
*nonfatal failure*, or *fatal failure*. If a fatal failure occurs, it aborts the
current function; otherwise the program continues normally.
*Tests* use assertions to verify the tested code's behavior. If a test crashes
or has a failed assertion, then it *fails*; otherwise it *succeeds*.
A *test suite* contains one or many tests. You should group your tests into test
suites that reflect the structure of the tested code. When multiple tests in a
test suite need to share common objects and subroutines, you can put them into a
*test fixture* class.
A *test program* can contain multiple test suites.
We'll now explain how to write a test program, starting at the individual
assertion level and building up to tests and test suites.
## Assertions
googletest assertions are macros that resemble function calls. You test a class
or function by making assertions about its behavior. When an assertion fails,
googletest prints the assertion's source file and line number location, along
with a failure message. You may also supply a custom failure message which will
be appended to googletest's message.
The assertions come in pairs that test the same thing but have different effects
on the current function. `ASSERT_*` versions generate fatal failures when they
fail, and **abort the current function**. `EXPECT_*` versions generate nonfatal
failures, which don't abort the current function. Usually `EXPECT_*` are
preferred, as they allow more than one failure to be reported in a test.
However, you should use `ASSERT_*` if it doesn't make sense to continue when the
assertion in question fails.
Since a failed `ASSERT_*` returns from the current function immediately,
possibly skipping clean-up code that comes after it, it may cause a space leak.
Depending on the nature of the leak, it may or may not be worth fixing - so keep
this in mind if you get a heap checker error in addition to assertion errors.
To provide a custom failure message, simply stream it into the macro using the
`<<` operator or a sequence of such operators. See the following example, using
the [`ASSERT_EQ` and `EXPECT_EQ`](reference/assertions.md#EXPECT_EQ) macros to
verify value equality:
```c++
ASSERT_EQ(x.size(), y.size()) << "Vectors x and y are of unequal length";
for (int i = 0; i < x.size(); ++i) {
EXPECT_EQ(x[i], y[i]) << "Vectors x and y differ at index " << i;
}
```
Anything that can be streamed to an `ostream` can be streamed to an assertion
macro--in particular, C strings and `string` objects. If a wide string
(`wchar_t*`, `TCHAR*` in `UNICODE` mode on Windows, or `std::wstring`) is
streamed to an assertion, it will be translated to UTF-8 when printed.
GoogleTest provides a collection of assertions for verifying the behavior of
your code in various ways. You can check Boolean conditions, compare values
based on relational operators, verify string values, floating-point values, and
much more. There are even assertions that enable you to verify more complex
states by providing custom predicates. For the complete list of assertions
provided by GoogleTest, see the [Assertions Reference](reference/assertions.md).
## Simple Tests
To create a test:
1. Use the `TEST()` macro to define and name a test function. These are
ordinary C++ functions that don't return a value.
2. In this function, along with any valid C++ statements you want to include,
use the various googletest assertions to check values.
3. The test's result is determined by the assertions; if any assertion in the
test fails (either fatally or non-fatally), or if the test crashes, the
entire test fails. Otherwise, it succeeds.
```c++
TEST(TestSuiteName, TestName) {
... test body ...
}
```
`TEST()` arguments go from general to specific. The *first* argument is the name
of the test suite, and the *second* argument is the test's name within the test
suite. Both names must be valid C++ identifiers, and they should not contain
any underscores (`_`). A test's *full name* consists of its containing test suite and
its individual name. Tests from different test suites can have the same
individual name.
For example, let's take a simple integer function:
```c++
int Factorial(int n); // Returns the factorial of n
```
A test suite for this function might look like:
```c++
// Tests factorial of 0.
TEST(FactorialTest, HandlesZeroInput) {
EXPECT_EQ(Factorial(0), 1);
}
// Tests factorial of positive numbers.
TEST(FactorialTest, HandlesPositiveInput) {
EXPECT_EQ(Factorial(1), 1);
EXPECT_EQ(Factorial(2), 2);
EXPECT_EQ(Factorial(3), 6);
EXPECT_EQ(Factorial(8), 40320);
}
```
googletest groups the test results by test suites, so logically related tests
should be in the same test suite; in other words, the first argument to their
`TEST()` should be the same. In the above example, we have two tests,
`HandlesZeroInput` and `HandlesPositiveInput`, that belong to the same test
suite `FactorialTest`.
When naming your test suites and tests, you should follow the same convention as
for
[naming functions and classes](https://google.github.io/styleguide/cppguide.html#Function_Names).
**Availability**: Linux, Windows, Mac.
## Test Fixtures: Using the Same Data Configuration for Multiple Tests {#same-data-multiple-tests}
If you find yourself writing two or more tests that operate on similar data, you
can use a *test fixture*. This allows you to reuse the same configuration of
objects for several different tests.
To create a fixture:
1. Derive a class from `::testing::Test` . Start its body with `protected:`, as
we'll want to access fixture members from sub-classes.
2. Inside the class, declare any objects you plan to use.
3. If necessary, write a default constructor or `SetUp()` function to prepare
the objects for each test. A common mistake is to spell `SetUp()` as
**`Setup()`** with a small `u` - Use `override` in C++11 to make sure you
spelled it correctly.
4. If necessary, write a destructor or `TearDown()` function to release any
resources you allocated in `SetUp()` . To learn when you should use the
constructor/destructor and when you should use `SetUp()/TearDown()`, read
the [FAQ](faq.md#CtorVsSetUp).
5. If needed, define subroutines for your tests to share.
When using a fixture, use `TEST_F()` instead of `TEST()` as it allows you to
access objects and subroutines in the test fixture:
```c++
TEST_F(TestFixtureName, TestName) {
... test body ...
}
```
Like `TEST()`, the first argument is the test suite name, but for `TEST_F()`
this must be the name of the test fixture class. You've probably guessed: `_F`
is for fixture.
Unfortunately, the C++ macro system does not allow us to create a single macro
that can handle both types of tests. Using the wrong macro causes a compiler
error.
Also, you must first define a test fixture class before using it in a
`TEST_F()`, or you'll get the compiler error "`virtual outside class
declaration`".
For each test defined with `TEST_F()`, googletest will create a *fresh* test
fixture at runtime, immediately initialize it via `SetUp()`, run the test,
clean up by calling `TearDown()`, and then delete the test fixture. Note that
different tests in the same test suite have different test fixture objects, and
googletest always deletes a test fixture before it creates the next one.
googletest does **not** reuse the same test fixture for multiple tests. Any
changes one test makes to the fixture do not affect other tests.
As an example, let's write tests for a FIFO queue class named `Queue`, which has
the following interface:
```c++
template <typename E> // E is the element type.
class Queue {
public:
Queue();
void Enqueue(const E& element);
E* Dequeue(); // Returns NULL if the queue is empty.
size_t size() const;
...
};
```
First, define a fixture class. By convention, you should give it the name
`FooTest` where `Foo` is the class being tested.
```c++
class QueueTest : public ::testing::Test {
protected:
void SetUp() override {
q1_.Enqueue(1);
q2_.Enqueue(2);
q2_.Enqueue(3);
}
// void TearDown() override {}
Queue<int> q0_;
Queue<int> q1_;
Queue<int> q2_;
};
```
In this case, `TearDown()` is not needed since we don't have to clean up after
each test, other than what's already done by the destructor.
Now we'll write tests using `TEST_F()` and this fixture.
```c++
TEST_F(QueueTest, IsEmptyInitially) {
EXPECT_EQ(q0_.size(), 0);
}
TEST_F(QueueTest, DequeueWorks) {
int* n = q0_.Dequeue();
EXPECT_EQ(n, nullptr);
n = q1_.Dequeue();
ASSERT_NE(n, nullptr);
EXPECT_EQ(*n, 1);
EXPECT_EQ(q1_.size(), 0);
delete n;
n = q2_.Dequeue();
ASSERT_NE(n, nullptr);
EXPECT_EQ(*n, 2);
EXPECT_EQ(q2_.size(), 1);
delete n;
}
```
The above uses both `ASSERT_*` and `EXPECT_*` assertions. The rule of thumb is
to use `EXPECT_*` when you want the test to continue to reveal more errors after
the assertion failure, and use `ASSERT_*` when continuing after failure doesn't
make sense. For example, the second assertion in the `Dequeue` test is
`ASSERT_NE(n, nullptr)`, as we need to dereference the pointer `n` later, which
would lead to a segfault when `n` is `NULL`.
When these tests run, the following happens:
1. googletest constructs a `QueueTest` object (let's call it `t1`).
2. `t1.SetUp()` initializes `t1`.
3. The first test (`IsEmptyInitially`) runs on `t1`.
4. `t1.TearDown()` cleans up after the test finishes.
5. `t1` is destructed.
6. The above steps are repeated on another `QueueTest` object, this time
running the `DequeueWorks` test.
**Availability**: Linux, Windows, Mac.
## Invoking the Tests
`TEST()` and `TEST_F()` implicitly register their tests with googletest. So,
unlike with many other C++ testing frameworks, you don't have to re-list all
your defined tests in order to run them.
After defining your tests, you can run them with `RUN_ALL_TESTS()`, which
returns `0` if all the tests are successful, or `1` otherwise. Note that
`RUN_ALL_TESTS()` runs *all tests* in your link unit--they can be from
different test suites, or even different source files.
When invoked, the `RUN_ALL_TESTS()` macro:
* Saves the state of all googletest flags.
* Creates a test fixture object for the first test.
* Initializes it via `SetUp()`.
* Runs the test on the fixture object.
* Cleans up the fixture via `TearDown()`.
* Deletes the fixture.
* Restores the state of all googletest flags.
* Repeats the above steps for the next test, until all tests have run.
If a fatal failure happens, the subsequent steps will be skipped.
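The per-test lifecycle above can be sketched as a plain-C++ driver loop. These are hypothetical names modeling the behavior, not googletest's real internals:

```cpp
#include <memory>
#include <string>
#include <vector>

// Minimal model of the RUN_ALL_TESTS() loop: each test gets a fresh
// fixture, SetUp()/TearDown() bracket the test body, and the fixture is
// destroyed before the next test's fixture is created.
struct Fixture {
  std::vector<std::string>* log = nullptr;
  void SetUp() { log->push_back("SetUp"); }
  void RunTest() { log->push_back("Run"); }
  void TearDown() { log->push_back("TearDown"); }
};

void run_all_tests(int num_tests, std::vector<std::string>& log) {
  for (int i = 0; i < num_tests; ++i) {
    auto fixture = std::make_unique<Fixture>();  // fresh fixture per test
    fixture->log = &log;
    fixture->SetUp();
    fixture->RunTest();
    fixture->TearDown();
  }  // fixture deleted here, before the next one is created
}
```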
{: .callout .important}
> IMPORTANT: You must **not** ignore the return value of `RUN_ALL_TESTS()`, or
> you will get a compiler error. The rationale for this design is that the
> automated testing service determines whether a test has passed based on its
> exit code, not on its stdout/stderr output; thus your `main()` function must
> return the value of `RUN_ALL_TESTS()`.
>
> Also, you should call `RUN_ALL_TESTS()` only **once**. Calling it more than
> once conflicts with some advanced googletest features (e.g., thread-safe
> [death tests](advanced.md#death-tests)) and thus is not supported.
**Availability**: Linux, Windows, Mac.
## Writing the main() Function
Most users should _not_ need to write their own `main` function and instead link
with `gtest_main` (as opposed to with `gtest`), which defines a suitable entry
point. See the end of this section for details. The remainder of this section
should only apply when you need to do something custom before the tests run that
cannot be expressed within the framework of fixtures and test suites.
If you write your own `main` function, it should return the value of
`RUN_ALL_TESTS()`.
You can start from this boilerplate:
```c++
#include "this/package/foo.h"
#include "gtest/gtest.h"
namespace my {
namespace project {
namespace {
// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
protected:
// You can remove any or all of the following functions if their bodies would
// be empty.
FooTest() {
// You can do set-up work for each test here.
}
~FooTest() override {
// You can do clean-up work that doesn't throw exceptions here.
}
// If the constructor and destructor are not enough for setting up
// and cleaning up each test, you can define the following methods:
void SetUp() override {
// Code here will be called immediately after the constructor (right
// before each test).
}
void TearDown() override {
// Code here will be called immediately after each test (right
// before the destructor).
}
// Class members declared here can be used by all tests in the test suite
// for Foo.
};
// Tests that the Foo::Bar() method does Abc.
TEST_F(FooTest, MethodBarDoesAbc) {
const std::string input_filepath = "this/package/testdata/myinputfile.dat";
const std::string output_filepath = "this/package/testdata/myoutputfile.dat";
Foo f;
EXPECT_EQ(f.Bar(input_filepath, output_filepath), 0);
}
// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
// Exercises the Xyz feature of Foo.
}
} // namespace
} // namespace project
} // namespace my
int main(int argc, char **argv) {
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
```
The `::testing::InitGoogleTest()` function parses the command line for
googletest flags, and removes all recognized flags. This allows the user to
control a test program's behavior via various flags, which we'll cover in
the [AdvancedGuide](advanced.md). You **must** call this function before calling
`RUN_ALL_TESTS()`, or the flags won't be properly initialized.
On Windows, `InitGoogleTest()` also works with wide strings, so it can be used
in programs compiled in `UNICODE` mode as well.
But maybe you think that writing all those `main` functions is too much work? We
agree with you completely, and that's why Google Test provides a basic
implementation of `main()`. If it fits your needs, then just link your test with
the `gtest_main` library and you are good to go.
{: .callout .note}
NOTE: `ParseGUnitFlags()` is deprecated in favor of `InitGoogleTest()`.
## Known Limitations
* Google Test is designed to be thread-safe. The implementation is thread-safe
on systems where the `pthreads` library is available. It is currently
_unsafe_ to use Google Test assertions from two threads concurrently on
other systems (e.g. Windows). In most tests this is not an issue as usually
the assertions are done in the main thread. If you want to help, you can
volunteer to implement the necessary synchronization primitives in
`gtest-port.h` for your platform.
# Quickstart: Building with Bazel
This tutorial aims to get you up and running with GoogleTest using the Bazel
build system. If you're using GoogleTest for the first time or need a refresher,
we recommend this tutorial as a starting point.
## Prerequisites
To complete this tutorial, you'll need:
* A compatible operating system (e.g. Linux, macOS, Windows).
* A compatible C++ compiler that supports at least C++11.
* [Bazel](https://bazel.build/), the preferred build system used by the
GoogleTest team.
See [Supported Platforms](platforms.md) for more information about platforms
compatible with GoogleTest.
If you don't already have Bazel installed, see the
[Bazel installation guide](https://docs.bazel.build/versions/master/install.html).
{: .callout .note}
Note: The terminal commands in this tutorial show a Unix shell prompt, but the
commands work on the Windows command line as well.
## Set up a Bazel workspace
A
[Bazel workspace](https://docs.bazel.build/versions/master/build-ref.html#workspace)
is a directory on your filesystem that you use to manage source files for the
software you want to build. Each workspace directory has a text file named
`WORKSPACE` which may be empty, or may contain references to external
dependencies required to build the outputs.
First, create a directory for your workspace:
```
$ mkdir my_workspace && cd my_workspace
```
Next, you'll create the `WORKSPACE` file to specify dependencies. A common and
recommended way to depend on GoogleTest is to use a
[Bazel external dependency](https://docs.bazel.build/versions/master/external.html)
via the
[`http_archive` rule](https://docs.bazel.build/versions/master/repo/http.html#http_archive).
To do this, in the root directory of your workspace (`my_workspace/`), create a
file named `WORKSPACE` with the following contents:
```
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "com_google_googletest",
urls = ["https://github.com/google/googletest/archive/609281088cfefc76f9d0ce82e1ff6c30cc3591e5.zip"],
strip_prefix = "googletest-609281088cfefc76f9d0ce82e1ff6c30cc3591e5",
)
```
The above configuration declares a dependency on GoogleTest which is downloaded
as a ZIP archive from GitHub. In the above example,
`609281088cfefc76f9d0ce82e1ff6c30cc3591e5` is the Git commit hash of the
GoogleTest version to use; we recommend updating the hash often to point to the
latest version.
Bazel also needs a dependency on the
[`rules_cc` repository](https://github.com/bazelbuild/rules_cc) to build C++
code, so add the following to the `WORKSPACE` file:
```
http_archive(
name = "rules_cc",
urls = ["https://github.com/bazelbuild/rules_cc/archive/40548a2974f1aea06215272d9c2b47a14a24e556.zip"],
strip_prefix = "rules_cc-40548a2974f1aea06215272d9c2b47a14a24e556",
)
```
Now you're ready to build C++ code that uses GoogleTest.
## Create and run a binary
With your Bazel workspace set up, you can now use GoogleTest code within your
own project.
As an example, create a file named `hello_test.cc` in your `my_workspace`
directory with the following contents:
```cpp
#include <gtest/gtest.h>
// Demonstrate some basic assertions.
TEST(HelloTest, BasicAssertions) {
// Expect two strings not to be equal.
EXPECT_STRNE("hello", "world");
// Expect equality.
EXPECT_EQ(7 * 6, 42);
}
```
GoogleTest provides [assertions](primer.md#assertions) that you use to test the
behavior of your code. The above sample includes the main GoogleTest header file
and demonstrates some basic assertions.
To build the code, create a file named `BUILD` in the same directory with the
following contents:
```
load("@rules_cc//cc:defs.bzl", "cc_test")
cc_test(
name = "hello_test",
size = "small",
srcs = ["hello_test.cc"],
deps = ["@com_google_googletest//:gtest_main"],
)
```
This `cc_test` rule declares the C++ test binary you want to build, and links to
GoogleTest (`//:gtest_main`) using the prefix you specified in the `WORKSPACE`
file (`@com_google_googletest`). For more information about Bazel `BUILD` files,
see the
[Bazel C++ Tutorial](https://docs.bazel.build/versions/master/tutorial/cpp.html).
Now you can build and run your test:
<pre>
<strong>my_workspace$ bazel test --test_output=all //:hello_test</strong>
INFO: Analyzed target //:hello_test (26 packages loaded, 362 targets configured).
INFO: Found 1 test target...
INFO: From Testing //:hello_test:
==================== Test output for //:hello_test:
Running main() from gmock_main.cc
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from HelloTest
[ RUN ] HelloTest.BasicAssertions
[ OK ] HelloTest.BasicAssertions (0 ms)
[----------] 1 test from HelloTest (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (0 ms total)
[ PASSED ] 1 test.
================================================================================
Target //:hello_test up-to-date:
bazel-bin/hello_test
INFO: Elapsed time: 4.190s, Critical Path: 3.05s
INFO: 27 processes: 8 internal, 19 linux-sandbox.
INFO: Build completed successfully, 27 total actions
//:hello_test PASSED in 0.1s
INFO: Build completed successfully, 27 total actions
</pre>
Congratulations! You've successfully built and run a test binary using
GoogleTest.
## Next steps
* [Check out the Primer](primer.md) to start learning how to write simple
tests.
* [See the code samples](samples.md) for more examples showing how to use a
variety of GoogleTest features.
# Quickstart: Building with CMake
This tutorial aims to get you up and running with GoogleTest using CMake. If
you're using GoogleTest for the first time or need a refresher, we recommend
this tutorial as a starting point. If your project uses Bazel, see the
[Quickstart for Bazel](quickstart-bazel.md) instead.
## Prerequisites
To complete this tutorial, you'll need:
* A compatible operating system (e.g. Linux, macOS, Windows).
* A compatible C++ compiler that supports at least C++11.
* [CMake](https://cmake.org/) and a compatible build tool for building the
project.
* Compatible build tools include
[Make](https://www.gnu.org/software/make/),
[Ninja](https://ninja-build.org/), and others - see
[CMake Generators](https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html)
for more information.
See [Supported Platforms](platforms.md) for more information about platforms
compatible with GoogleTest.
If you don't already have CMake installed, see the
[CMake installation guide](https://cmake.org/install).
{: .callout .note}
Note: The terminal commands in this tutorial show a Unix shell prompt, but the
commands work on the Windows command line as well.
## Set up a project
CMake uses a file named `CMakeLists.txt` to configure the build system for a
project. You'll use this file to set up your project and declare a dependency on
GoogleTest.
First, create a directory for your project:
```
$ mkdir my_project && cd my_project
```
Next, you'll create the `CMakeLists.txt` file and declare a dependency on
GoogleTest. There are many ways to express dependencies in the CMake ecosystem;
in this quickstart, you'll use the
[`FetchContent` CMake module](https://cmake.org/cmake/help/latest/module/FetchContent.html).
To do this, in your project directory (`my_project`), create a file named
`CMakeLists.txt` with the following contents:
```cmake
cmake_minimum_required(VERSION 3.14)
project(my_project)
# GoogleTest requires at least C++11
set(CMAKE_CXX_STANDARD 11)
include(FetchContent)
FetchContent_Declare(
googletest
URL https://github.com/google/googletest/archive/609281088cfefc76f9d0ce82e1ff6c30cc3591e5.zip
)
# For Windows: Prevent overriding the parent project's compiler/linker settings
set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(googletest)
```
The above configuration declares a dependency on GoogleTest which is downloaded
from GitHub. In the above example, `609281088cfefc76f9d0ce82e1ff6c30cc3591e5` is
the Git commit hash of the GoogleTest version to use; we recommend updating the
hash often to point to the latest version.
For more information about how to create `CMakeLists.txt` files, see the
[CMake Tutorial](https://cmake.org/cmake/help/latest/guide/tutorial/index.html).
## Create and run a binary
With GoogleTest declared as a dependency, you can use GoogleTest code within
your own project.
As an example, create a file named `hello_test.cc` in your `my_project`
directory with the following contents:
```cpp
#include <gtest/gtest.h>
// Demonstrate some basic assertions.
TEST(HelloTest, BasicAssertions) {
// Expect two strings not to be equal.
EXPECT_STRNE("hello", "world");
// Expect equality.
EXPECT_EQ(7 * 6, 42);
}
```
GoogleTest provides [assertions](primer.md#assertions) that you use to test the
behavior of your code. The above sample includes the main GoogleTest header file
and demonstrates some basic assertions.
To build the code, add the following to the end of your `CMakeLists.txt` file:
```cmake
enable_testing()
add_executable(
hello_test
hello_test.cc
)
target_link_libraries(
hello_test
gtest_main
)
include(GoogleTest)
gtest_discover_tests(hello_test)
```
The above configuration enables testing in CMake, declares the C++ test binary
you want to build (`hello_test`), and links it to GoogleTest (`gtest_main`). The
last two lines enable CMake's test runner to discover the tests included in the
binary, using the
[`GoogleTest` CMake module](https://cmake.org/cmake/help/git-stage/module/GoogleTest.html).
Now you can build and run your test:
<pre>
<strong>my_project$ cmake -S . -B build</strong>
-- The C compiler identification is GNU 10.2.1
-- The CXX compiler identification is GNU 10.2.1
...
-- Build files have been written to: .../my_project/build
<strong>my_project$ cmake --build build</strong>
Scanning dependencies of target gtest
...
[100%] Built target gmock_main
<strong>my_project$ cd build && ctest</strong>
Test project .../my_project/build
Start 1: HelloTest.BasicAssertions
1/1 Test #1: HelloTest.BasicAssertions ........ Passed 0.00 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.01 sec
</pre>
Congratulations! You've successfully built and run a test binary using
GoogleTest.
## Next steps
* [Check out the Primer](primer.md) to start learning how to write simple
tests.
* [See the code samples](samples.md) for more examples showing how to use a
variety of GoogleTest features.
# Actions Reference
[**Actions**](../gmock_for_dummies.md#actions-what-should-it-do) specify what a
mock function should do when invoked. This page lists the built-in actions
provided by GoogleTest. All actions are defined in the `::testing` namespace.
## Returning a Value
| | |
| :-------------------------------- | :-------------------------------------------- |
| `Return()` | Return from a `void` mock function. |
| `Return(value)` | Return `value`. If the type of `value` is different to the mock function's return type, `value` is converted to the latter type <i>at the time the expectation is set</i>, not when the action is executed. |
| `ReturnArg<N>()` | Return the `N`-th (0-based) argument. |
| `ReturnNew<T>(a1, ..., ak)` | Return `new T(a1, ..., ak)`; a different object is created each time. |
| `ReturnNull()` | Return a null pointer. |
| `ReturnPointee(ptr)` | Return the value pointed to by `ptr`. |
| `ReturnRef(variable)` | Return a reference to `variable`. |
| `ReturnRefOfCopy(value)` | Return a reference to a copy of `value`; the copy lives as long as the action. |
| `ReturnRoundRobin({a1, ..., ak})` | Each call will return the next `ai` in the list, starting at the beginning when the end of the list is reached. |
## Side Effects
| | |
| :--------------------------------- | :-------------------------------------- |
| `Assign(&variable, value)` | Assign `value` to variable. |
| `DeleteArg<N>()` | Delete the `N`-th (0-based) argument, which must be a pointer. |
| `SaveArg<N>(pointer)` | Save the `N`-th (0-based) argument to `*pointer`. |
| `SaveArgPointee<N>(pointer)` | Save the value pointed to by the `N`-th (0-based) argument to `*pointer`. |
| `SetArgReferee<N>(value)` | Assign `value` to the variable referenced by the `N`-th (0-based) argument. |
| `SetArgPointee<N>(value)` | Assign `value` to the variable pointed by the `N`-th (0-based) argument. |
| `SetArgumentPointee<N>(value)` | Same as `SetArgPointee<N>(value)`. Deprecated. Will be removed in v1.7.0. |
| `SetArrayArgument<N>(first, last)` | Copies the elements in source range [`first`, `last`) to the array pointed to by the `N`-th (0-based) argument, which can be either a pointer or an iterator. The action does not take ownership of the elements in the source range. |
| `SetErrnoAndReturn(error, value)` | Set `errno` to `error` and return `value`. |
| `Throw(exception)` | Throws the given exception, which can be any copyable value. Available since v1.1.0. |
## Using a Function, Functor, or Lambda as an Action
In the following, by "callable" we mean a free function, `std::function`,
functor, or lambda.
| | |
| :---------------------------------- | :------------------------------------- |
| `f` | Invoke `f` with the arguments passed to the mock function, where `f` is a callable. |
| `Invoke(f)` | Invoke `f` with the arguments passed to the mock function, where `f` can be a global/static function or a functor. |
| `Invoke(object_pointer, &class::method)` | Invoke the method on the object with the arguments passed to the mock function. |
| `InvokeWithoutArgs(f)` | Invoke `f`, which can be a global/static function or a functor. `f` must take no arguments. |
| `InvokeWithoutArgs(object_pointer, &class::method)` | Invoke the method on the object, which takes no arguments. |
| `InvokeArgument<N>(arg1, arg2, ..., argk)` | Invoke the mock function's `N`-th (0-based) argument, which must be a function or a functor, with the `k` arguments. |
The return value of the invoked function is used as the return value of the
action.
When defining a callable to be used with `Invoke*()`, you can declare any unused
parameters as `Unused`:
```cpp
using ::testing::Invoke;
double Distance(Unused, double x, double y) { return sqrt(x*x + y*y); }
...
EXPECT_CALL(mock, Foo("Hi", _, _)).WillOnce(Invoke(Distance));
```
`Invoke(callback)` and `InvokeWithoutArgs(callback)` take ownership of
`callback`, which must be permanent. The type of `callback` must be a base
callback type instead of a derived one, e.g.
```cpp
BlockingClosure* done = new BlockingClosure;
... Invoke(done) ...; // This won't compile!
Closure* done2 = new BlockingClosure;
... Invoke(done2) ...; // This works.
```
In `InvokeArgument<N>(...)`, if an argument needs to be passed by reference,
wrap it inside `std::ref()`. For example,
```cpp
using ::testing::InvokeArgument;
...
InvokeArgument<2>(5, string("Hi"), std::ref(foo))
```
calls the mock function's argument #2, passing to it `5` and `string("Hi")` by
value, and `foo` by reference.
## Default Action
| Action | Description |
| :------------ | :----------------------------------------------------- |
| `DoDefault()` | Do the default action (specified by `ON_CALL()` or the built-in one). |
{: .callout .note}
**Note:** due to technical reasons, `DoDefault()` cannot be used inside a
composite action - trying to do so will result in a run-time error.
## Composite Actions
| | |
| :----------------------------- | :------------------------------------------ |
| `DoAll(a1, a2, ..., an)` | Do all actions `a1` to `an` and return the result of `an` in each invocation. The first `n - 1` sub-actions must return void and will receive a readonly view of the arguments. |
| `IgnoreResult(a)` | Perform action `a` and ignore its result. `a` must not return void. |
| `WithArg<N>(a)` | Pass the `N`-th (0-based) argument of the mock function to action `a` and perform it. |
| `WithArgs<N1, N2, ..., Nk>(a)` | Pass the selected (0-based) arguments of the mock function to action `a` and perform it. |
| `WithoutArgs(a)` | Perform action `a` without any arguments. |
## Defining Actions
| | |
| :--------------------------------- | :-------------------------------------- |
| `ACTION(Sum) { return arg0 + arg1; }` | Defines an action `Sum()` to return the sum of the mock function's argument #0 and #1. |
| `ACTION_P(Plus, n) { return arg0 + n; }` | Defines an action `Plus(n)` to return the sum of the mock function's argument #0 and `n`. |
| `ACTION_Pk(Foo, p1, ..., pk) { statements; }` | Defines a parameterized action `Foo(p1, ..., pk)` to execute the given `statements`. |
The `ACTION*` macros cannot be used inside a function or class.