Local Testing
Back-End Testing
Pytest
This library is used for unit testing and is an alternative to the standard unittest package. The biggest difference between the two is that pytest can run tests in parallel (via plugins such as pytest-xdist) to save time. It supports fixtures and testing asynchronous functions. Most importantly, it works almost out of the box with FastAPI.
Pytest can be run on specific files or on a whole directory by passing them as arguments.
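For example, using the test files from this project:

$ pytest tests/test_1_users.py   # run a single test file
$ pytest tests/                  # run every test in the directory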
In addition, tests occasionally print information (custom messages) about the current test. By default, pytest captures and hides this output; passing the -s argument disables the capturing.
For example:
$ pytest -s
=============================== test session starts ================================
platform linux -- Python 3.11.0, pytest-7.2.0, pluggy-1.0.0
rootdir: /home/mpisman/Documents/PollingApp, configfile: pytest.ini, testpaths: tests
plugins: trio-0.8.0, anyio-3.6.2, cov-4.0.0, Faker-15.3.2, asyncio-0.20.2
asyncio: mode=Mode.AUTO
collected 10 items
tests/test_1_users.py
TESTING: Registering new user: agilbert@ucmerced.edu
INFO: User 639e94f54c8775804e127b92 has registered.
SUCCESS: New user has been registered: agilbert@ucmerced.edu with id: 639e94f54c8775804e127b92
.
TESTING: Attempting to register a new user with an existing email: agilbert@ucmerced.edu
SUCCESS: The new user failed to register because email already exists: agilbert@ucmerced.edu
.
TESTING: Attempting to login with incorrect email
SUCCESS: The new user received a 404 "LOGIN_BAD_CREDENTIALS" error
.
TESTING: Attempting to login with incorrect password
SUCCESS: The new user received a 404 "LOGIN_BAD_CREDENTIALS" error
.
Test User functionality
Test to see if the user can create an account (sketched in code after this list)
1) Try to register a new user (Success)
2) Validate: status is 201
3) Validate: id, email, first name, last name
Test to see if the user can register with an existing email
1) Try to register a new user with the same email (Fail)
2) Validate: status is 400
3) Validate: details = "REGISTER_USER_ALREADY_EXISTS"
Test to see if the new user can login with the incorrect email
1) Try to login with the incorrect email (Fail)
2) Validate: status is 400
3) Validate: details = "LOGIN_BAD_CREDENTIALS"
Test to see if the new user can login with the incorrect password
1) Try to login with the incorrect password (Fail)
2) Validate: status is 400
3) Validate: details = "LOGIN_BAD_CREDENTIALS"
Test to see if the new user can login with the correct credentials
1) Try to login with the correct credentials (Success)
2) Validate: status is 200
3) Validate: token_type = "bearer"
4) Validate: the access_token field exists in the response and is not empty
5) Set the temporary user token to the access_token to be used in the next tests
Test to see if the user can get their own information
1) Try to get the user information with the token passed in the header (Success)
2) Validate: status is 200
3) Validate: id, email, first name, last name, is_active, is_superuser, is_verified
Test to see if User can delete their own account
1) Find the user by id to make sure it exists (Success)
2) Try to delete the user
3) Validate: status is 204 (Success)
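As an illustration, here is a minimal sketch of what the registration test could look like. The route path, payload fields, and the client fixture name are assumptions based on common fastapi-users conventions, not the project's exact code:

async def test_register_new_user(client):
    # `client` is the asynchronous client fixture from tests/conftest.py (assumed name)
    payload = {
        "email": "agilbert@ucmerced.edu",
        "password": "s3cret-pass",
        "first_name": "A",
        "last_name": "Gilbert",
    }
    response = await client.post("/auth/register", json=payload)  # assumed route
    assert response.status_code == 201
    data = response.json()
    assert data["email"] == payload["email"]
    assert "id" in data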
Test Group functionality
These tests rely on the user functionality working correctly and reuse functions from the previous test file.
Test to see if the user can create a group
1) Create a temporary user (the group owner) for the test and log in (Success)
2) The new user requests the list of groups, which should be empty (Success)
3) Validate: status is 200
4) Validate: the list of groups is empty
5) The user creates a group (Success)
6) Validate: status is 201
7) Validate: id, name
8) The user gets the list of groups (Success)
9) Validate: status is 200
10) Validate: the length of the list of groups is 1
11) Validate: the only group has the name of the group created in step 5 and the role listed as owner
12) The user gets the group information by id (Success)
13) Validate: name, description, owner name and email
14) The user gets the list of admins (Success)
15) Validate: status is 200
16) Validate: the length of the list of admins is 0 (the owner is not counted)
17) The user gets the list of users (members with the user role) (Success)
18) Validate: status is 200
19) Validate: the length of the list of users is 0 (the owner is not counted)
20) The user gets the list of members (any member) (Success)
21) Validate: status is 200
22) Validate: the length of the list of members is 1
23) Validate: the email and role of the member match the owner
24) The user gets the owner information (Success)
25) Validate: status is 200
26) Validate: email and name of the owner
Test to see if the user can add other users to the group
1) Register 13 new users, storing 3 in one list (admins) and the rest in a separate list (regular users) (Success)
2) The owner sends a request to add members from the payload, which is a dictionary of user emails and their roles (Success)
3) Validate: status is 201
4) The owner gets the list of members (Success)
5) Validate: status is 200
6) Traverse the list of members and validate that all users have been added (Success)
7) The owner gets the list of users (Success)
8) Validate: status is 200
9) Validate: the length of the list of users is 10
10) Validate: the list of users contains all the correct email addresses
11) The owner gets the list of admins (Success)
12) Validate: status is 200
13) Validate: the length of the list of admins is 3
14) Validate: the list of admins contains all the correct email addresses
Test to see if the user can delete a non-existing group
1) The owner sends a request to delete a group that does not exist (Fail)
2) Validate: status is 404
3) Validate: details = "Group with id {id} not found"
Test to see if the user can delete a group they own
1) The owner sends a request to get the group information (Success)
2) Validate: status is 200
3) Validate: name
4) The owner sends a request to delete the group (Success)
5) Validate: status is 204
6) The owner sends a request to get the group information (Fail)
7) Validate: status is 404
Naming convention
For pytest to recognize test files, the file name must have the prefix test_. By default, pytest runs test files in alphabetical order, so to make the tests run in a specific order, the easiest solution is to name them test_ORDER_NAME, where ORDER is an integer and NAME is a short description of the test.
For instance:
./tests/test_1_users.py
./tests/test_2_groups.py
Pytest configuration
The small configuration can be found in ./pytest.ini:
- Sets pytest to use asyncio
- Adds all files inside ./tests/ for testing
- Specifies some common warnings which should not be displayed
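A rough sketch of what this file might look like; the warning filter below is only an example, and the real entries are project-specific:

[pytest]
# Run asynchronous tests without explicit markers
asyncio_mode = auto
# Collect tests from ./tests/
testpaths = tests
# Example filter only; the actual list depends on the project
filterwarnings =
    ignore::DeprecationWarning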
Fixtures
Another file, ./tests/conftest.py, provides fixtures for database use and an asynchronous client to make requests in the same session.
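For illustration, a minimal sketch of such a client fixture, assuming the FastAPI instance lives in app/app.py (the fixture name and import path are assumptions):

# tests/conftest.py (sketch)
import pytest_asyncio
from httpx import AsyncClient

from app.app import app  # assumed location of the FastAPI instance


@pytest_asyncio.fixture
async def client():
    # Requests go straight to the ASGI app; no real network is involved.
    async with AsyncClient(app=app, base_url="http://test") as ac:
        yield ac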
Pytest Coverage
We use pytest-cov to provide additional information about test coverage in terms of code lines. Essentially, pytest keeps track of which lines of source code were executed during the tests. Ideally, the coverage should be near 100% on all files, which would indicate that all code has been tested in some way.
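If it is not already enabled in the configuration, coverage for the app package can be requested explicitly:

$ pytest --cov=app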
Here is an example of the coverage report provided at the end of each test run:
---------- coverage: platform linux, python 3.11.0-final-0 -----------
Name Stmts Miss Cover
------------------------------------------------
app/__init__.py 0 0 100%
app/app.py 24 0 100%
app/config.py 12 0 100%
app/exceptions/__init__.py 1 0 100%
app/exceptions/group.py 30 11 63%
app/exceptions/user.py 19 9 53%
------------------------------------------------
TOTAL 474 105 78%
Faker
This library is used for generating random data such as names, emails, and addresses. It is useful for testing.
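For example, a few of the generators it provides (which fields a project actually needs will vary):

from faker import Faker

fake = Faker()
print(fake.email())       # e.g. "agilbert@example.org"
print(fake.first_name())  # e.g. "Amanda"
print(fake.last_name())   # e.g. "Gilbert"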
Flake8
Flake8 is a combination of tools that checks that files follow the PEP 8 style guide. The configuration can be specified inside pyproject.toml. The only change I made from the default configuration is the maximum line length: the default value of 79 is too small for modern widescreens, so we use 120 instead. Otherwise, the formatting becomes too tedious.
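For reference, the relevant fragment might look like this (note that stock flake8 does not read pyproject.toml on its own; a plugin such as flake8-pyproject is assumed here):

[tool.flake8]
max-line-length = 120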
After running, the tool outputs the location of each problem along with a short description of the error; therefore, a successful run produces no output.
Example of an erroneous run:
$ flake8 app --max-line-length=120
app/app.py:13:1: W293 blank line contains whitespace
app/app.py:54:1: E302 expected 2 blank lines, found 0
app/app.py:67:6: W292 no newline at end of file
app/config.py:9:51: W291 trailing whitespace
app/config.py:10:1: W293 blank line contains whitespace
app/config.py:13:1: W293 blank line contains whitespace
app/config.py:14:1: E302 expected 2 blank lines, found 1
app/config.py:16:22: W292 no newline at end of file
app/exceptions/__init__.py:1:1: F401 'app.exceptions.group' imported but unused
app/exceptions/__init__.py:1:1: F401 'app.exceptions.user' imported but unused
app/exceptions/__init__.py:1:39: W292 no newline at end of file
app/models/user_manager.py:22:1: W293 blank line contains whitespace
Mypy
There are many good reasons to use typing in a Python application. One of the bigger benefits comes from using Pydantic models. By defining types, Pydantic automatically validates values when you try to set or update them; if the type of a value differs from the one you defined, Pydantic raises an error. FastAPI extends this functionality by using Pydantic models as schemas for HTTP requests and responses. This means that if a user tries to send data that does not follow the schema, they will get a "422 Unprocessable Entity" error. Conversely, if the server tries to send a response that does not follow the schema, it will most likely fail with an exception. For this reason, typing is crucial in a FastAPI application.
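As a small illustration of that validation (the model and its fields here are hypothetical, not the project's actual schema):

from pydantic import BaseModel, ValidationError


class UserOut(BaseModel):  # hypothetical schema
    id: str
    email: str
    is_active: bool


try:
    UserOut(id="639e94f5...", email="agilbert@ucmerced.edu", is_active="not-a-bool")
except ValidationError as err:
    print(err)  # the mistyped field is rejected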
Mypy is a static type checker for Python, which helps catch the issues described above. Coupled with the Pydantic plugin, it checks all of the models and functions, and it handles nested types very well.
On a successful run, mypy reports that no issues were found. In case of any errors, it provides a detailed output:
$ mypy app
app/utils/colored_dbg.py:1: error: Library stubs not installed for "colorama" [import]
app/utils/colored_dbg.py:1: note: Hint: "python3 -m pip install types-colorama"
app/utils/colored_dbg.py:1: note: (or run "mypy --install-types" to install all missing stub packages)
app/utils/colored_dbg.py:1: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
app/mongo_db.py:2: error: Skipping analyzing "motor.motor_asyncio": module is installed, but missing library stubs or py.typed marker [import]
app/mongo_db.py:2: error: Skipping analyzing "motor": module is installed, but missing library stubs or py.typed marker [import]
app/routes/group.py:19: error: Module "app.schemas.user" has no attribute "UserID" [attr-defined]
app/tests/test_users.py:51: error: "Response" has no attribute "get" [attr-defined]
app/tests/test_groups.py:123: error: Value of type "Response" is not indexable [index]
At times, third-party packages do a poor job of specifying types for function parameters and return objects. In case of an error or warning that cannot be fixed, you can place a # type: ignore comment on the offending line to skip the type check.
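For instance, the motor import flagged above could be silenced like this:

import motor.motor_asyncio  # type: ignore[import]  # motor ships without type stubs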
Tox
tox aims to automate and standardize testing in Python. It is a generic virtual environment management and test command-line tool. tox allows us to create multiple virtual environments with different Python versions and run multiple checks, such as pytest, flake8, and mypy. Coupled with tox-gh-actions, it becomes a powerful tool that can test the project on various systems automatically using GitHub Actions.
The configuration can be found in tox.ini.
We use 2 Python versions (3.10 and 3.11) with the latest Ubuntu image (this also has to be specified in the workflows). However, other versions of Python, as well as other operating systems, can easily be added. Tox creates a virtual environment for each Python version and runs all tests inside of it: first pytest, then flake8 and mypy.
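A rough sketch of what tox.ini could contain; the dependency lists and exact commands are assumptions:

[tox]
envlist = py310, py311, flake8, mypy

[testenv]
# assumed requirements file name
deps = -r requirements.txt
commands = pytest

[testenv:flake8]
deps = flake8
commands = flake8 app --max-line-length=120

[testenv:mypy]
deps = mypy
commands = mypy app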
At the end of all tests, you should see a message indicating that tox has successfully run all tests using 2 virtual environments (Python 3.10 and 3.11), and that it also ran mypy and flake8. In case of any errors, tox will let us know.
py310: OK (18.02=setup[5.09]+cmd[12.93] seconds)
py311: OK (17.10=setup[4.27]+cmd[12.83] seconds)
flake8: OK (3.95=setup[3.77]+cmd[0.18] seconds)
mypy: OK (4.19=setup[3.78]+cmd[0.41] seconds)
congratulations :) (43.31 seconds)
Automated testing with GitHub Actions
GitHub Actions allows us to automatically run tests, update documentation, release new builds, and more.
In our case, we use GitHub Actions to run the tox tests on every pull request to the Development branch. This is a great way to ensure that the code is always in a working state before releasing a new version.
The configuration can be found in .github/workflows/python-app.yml. The workflow will fail if any of the tests fail. As an indication of success or failure, you will see a green checkmark or a red cross next to the commit, as well as a badge at the top of the README file in the Development branch.
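A condensed sketch of what that workflow could look like; the workflow name, job name, and action versions are assumptions:

name: Tests
on:
  pull_request:
    branches: [Development]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11"]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install tox tox-gh-actions
      - run: tox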
Postman Testing
Postman is a great tool for front-end developers: it helps with testing the API without having to learn the back-end. It also provides a GUI for chaining API requests, called Flows.
More testing with Postman coming soon...