This guide outlines standards and best practices for automated testing of GitLab CE and EE.
It is meant to be an extension of the thoughtbot testing styleguide. If this guide defines a rule that contradicts the thoughtbot guide, this guide takes precedence. Some guidelines may be repeated verbatim to stress their importance.
Formal definition: https://en.wikipedia.org/wiki/Unit_testing
These kinds of tests ensure that a single unit of code (a method) works as expected (given an input, it has a predictable output). These tests should be as isolated as possible. For example, model methods that don't do anything with the database shouldn't need a DB record. Classes that don't need database records should use stubs/doubles as much as possible.
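For instance, here is a minimal sketch of an isolated unit test; the class, its collaborator, and the spec are hypothetical and only illustrate the idea of swapping a collaborator for a double:

```ruby
# spec/lib/branch_name_validator_spec.rb (hypothetical)
describe BranchNameValidator do
  describe '#valid?' do
    it 'rejects names containing spaces' do
      # The collaborating repository is replaced with a double,
      # so no database record or Git repository is needed.
      repository = double('repository', branch_exists?: false)
      validator = described_class.new(repository)

      expect(validator.valid?('my branch')).to be(false)
    end
  end
end
```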
Code path | Tests path | Testing engine | Notes |
---|---|---|---|
`app/finders/` | `spec/finders/` | RSpec | |
`app/helpers/` | `spec/helpers/` | RSpec | |
`db/{post_,}migrate/` | `spec/migrations/` | RSpec | More details at `spec/migrations/README.md`. |
`app/policies/` | `spec/policies/` | RSpec | |
`app/presenters/` | `spec/presenters/` | RSpec | |
`app/routing/` | `spec/routing/` | RSpec | |
`app/serializers/` | `spec/serializers/` | RSpec | |
`app/services/` | `spec/services/` | RSpec | |
`app/tasks/` | `spec/tasks/` | RSpec | |
`app/uploaders/` | `spec/uploaders/` | RSpec | |
`app/views/` | `spec/views/` | RSpec | |
`app/workers/` | `spec/workers/` | RSpec | |
`app/assets/javascripts/` | `spec/javascripts/` | Karma | More details in the JavaScript section. |
Formal definition: https://en.wikipedia.org/wiki/Integration_testing
These kinds of tests ensure that individual parts of the application work well together, without the overhead of the actual app environment (i.e. the browser). These tests should assert at the request/response level: status code, headers, body. They're useful for testing permissions, redirections, which view is rendered, and so on.
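For instance, a minimal sketch of a controller spec asserting at the response level; the controller, factory trait, and Rails 4-style params are assumptions for illustration:

```ruby
describe ProjectsController do
  describe 'GET #show' do
    it 'renders the show template for a public project' do
      project = create(:empty_project, :public)

      # Assert on the response itself, not on the page contents.
      get :show, namespace_id: project.namespace.to_param, id: project.to_param

      expect(response).to have_http_status(200)
      expect(response).to render_template(:show)
    end
  end
end
```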
Code path | Tests path | Testing engine | Notes |
---|---|---|---|
`app/controllers/` | `spec/controllers/` | RSpec | |
`app/mailers/` | `spec/mailers/` | RSpec | |
`lib/api/` | `spec/requests/api/` | RSpec | |
`lib/ci/api/` | `spec/requests/ci/api/` | RSpec | |
`app/assets/javascripts/` | `spec/javascripts/` | Karma | More details in the JavaScript section. |
In an ideal world, controllers should be thin. However, when this is not the case, it's acceptable to write a system/feature test without JavaScript instead of a controller test. The reason is that testing a fat controller usually involves a lot of stubbing, such as:

```ruby
controller.instance_variable_set(:@user, user)
```

and the use of methods which are deprecated in Rails 5 (#23768).
As you may have noticed, Karma is both in the Unit tests and the Integration tests category. That's because Karma is a tool that provides an environment to run JavaScript tests, so you can either run unit tests (e.g. test a single JavaScript method), or integration tests (e.g. test a component that is composed of multiple components).
Formal definition: https://en.wikipedia.org/wiki/System_testing.
These kinds of tests ensure the application works as expected from a user's point of view (aka black-box testing). These tests should cover the happy path for a given page or set of pages, and a test case should be added for any regression that couldn't have been caught at lower levels with better tests (i.e. if a regression is found, regression tests should be added at the lowest level possible).
Tests path | Testing engine | Notes |
---|---|---|
`spec/features/` | Capybara + RSpec | If your spec has the `:js` metadata, the browser driver will be Poltergeist, otherwise it's using RackTest. |
`features/` | Spinach | Spinach tests are deprecated, you shouldn't add new Spinach tests. |
For instance, to verify that a record was created, assert that its attributes are displayed on the page rather than that `Model.count` increased by one.

If we're confident that the low-level components work well (and we should be if we have enough Unit & Integration tests), we shouldn't need to duplicate their thorough testing at the System test level.
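To illustrate the point about asserting on what the user sees rather than on record counts, here is a hedged sketch (the button label and page content are assumptions):

```ruby
# Indirect: asserts against the database from a feature spec.
expect { click_button 'Submit issue' }.to change { Issue.count }.by(1)

# Preferred at the system level: assert on the rendered page.
click_button 'Submit issue'
expect(page).to have_content('My issue title')
```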
It's very easy to add tests, but a lot harder to remove or improve them, so take care not to introduce too many (slow and duplicated) specs.
The main reason to follow these best practices is speed: system tests exercise the whole application stack in a headless browser, and they're even slower when they need a JavaScript driver.
GitLab consists of multiple pieces such as GitLab Shell, GitLab Workhorse, Gitaly, GitLab Pages, GitLab Runner, and GitLab Rails. All these pieces are configured and packaged by GitLab Omnibus.
GitLab QA is a tool that makes it possible to test that all these pieces integrate well together, by building a Docker image for a given version of GitLab Rails and running feature tests (i.e. using Capybara) against it.
The actual test scenarios and steps are part of GitLab Rails so that they're always in-sync with the codebase.
As with many things in life, deciding what to test at each level of testing is a trade-off between confidence and cost.
Another way to see it is to think about the "cost of tests"; this is well explained in this article, and the basic idea is that the cost of a test includes the time it takes to write, run, understand, and fix it.
Please consult the dedicated "Frontend testing" guide.
- Use a single, top-level `describe ClassName` block.
- Use `described_class` instead of repeating the class name being described (this is enforced by RuboCop).
- Use `.method` to describe class methods and `#method` to describe instance methods (see the sketch after this list).
- Use `context` to test branching logic.
- Use multi-line `do...end` blocks for `before` and `after`, even when it would fit on a single line.
- Don't `describe` symbols (see Gotchas).
- Don't supply the `:each` argument to hooks since it's the default.
- Prefer `not_to` to `to_not` (this is enforced by RuboCop).
- Use `Gitlab.config.gitlab.host` rather than hard coding `'localhost'`.
- On `before` and `after` hooks, prefer it scoped to `:context` over `:all`.
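A short sketch tying several of these rules together (the class and methods are hypothetical):

```ruby
describe MarkdownRenderer do
  describe '.render' do        # class method
    context 'when the text contains a reference' do
      it 'links the reference' do
        expect(described_class.render('See #1')).to include('href')
      end
    end
  end

  describe '#cached?' do       # instance method
    it 'returns false for a fresh renderer' do
      expect(described_class.new('text').cached?).to be(false)
    end
  end
end
```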
GitLab's RSpec suite has made extensive use of `let` variables to reduce duplication. However, this sometimes comes at the cost of clarity, so we need to set some guidelines for their use going forward:
- `let` variables are preferable to instance variables. Local variables are preferable to `let` variables.
- Use `let` to reduce duplication throughout an entire spec file.
- Don't use `let` to define variables used by a single test; define them as local variables inside the test's `it` block (see the sketch after this list).
- Don't define a `let` variable inside the top-level `describe` block that's only used in a more deeply-nested `context` or `describe` block. Keep the definition as close as possible to where it's used.
- Try to avoid overriding the definition of one `let` variable with another.
- Don't define a `let` variable that's only used by the definition of another. Use a helper method instead.
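A sketch of the preferred scoping (factory names and attributes are assumptions): a value shared across the file lives in a `let`, while a value used by a single example stays local to it.

```ruby
describe Issue do
  # Shared by many examples throughout this file.
  let(:project) { create(:empty_project) }

  describe '#closed?' do
    it 'returns true for a closed issue' do
      # Used by this example only, so a local variable is enough.
      issue = build(:issue, project: project, state: :closed)

      expect(issue).to be_closed
    end
  end
end
```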
In some cases there is no need to recreate the same object for each example. For example, if a project is needed to test issues in the same project, one project will do for the entire file. This can be achieved by using `set` in the same way you would use `let`.

`rspec-set` only works on ActiveRecord objects; before new examples it reloads or recreates the model, but only if needed, that is, when you've changed its properties or destroyed the object.

There is one gotcha: you can't reference a model defined in a `let` block in a `set` block.
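A minimal sketch of `set` usage (assuming the `rspec-set` gem is loaded and an `:empty_project` factory exists): the project is created once for the whole file rather than once per example.

```ruby
describe Issue do
  set(:project) { create(:empty_project) }

  it 'belongs to the shared project' do
    issue = create(:issue, project: project)

    expect(issue.project).to eq(project)
  end
end
```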
Timecop is available in our Ruby-based tests for verifying things that are time-sensitive. Any test that exercises or verifies something time-sensitive should make use of Timecop to prevent transient test failures.
Example:
```ruby
it 'is overdue' do
  issue = build(:issue, due_date: Date.tomorrow)

  Timecop.freeze(3.days.from_now) do
    expect(issue).to be_overdue
  end
end
```
- Feature specs should be named `ROLE_ACTION_spec.rb`, such as `user_changes_password_spec.rb`.
- Use only one `feature` block per feature spec file.
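A hedged sketch of such a spec, `user_changes_password_spec.rb` (the field labels, path helper, sign-in helper, and flash message are assumptions):

```ruby
feature 'User changes password' do
  let(:user) { create(:user) }

  before do
    sign_in(user) # hypothetical helper
    visit edit_profile_password_path
  end

  scenario 'with a valid new password' do
    fill_in 'New password', with: 'new-secret-password'
    fill_in 'Password confirmation', with: 'new-secret-password'
    click_button 'Save password'

    expect(page).to have_content('Password was successfully updated')
  end
end
```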
Custom matchers should be created to clarify the intent and/or hide the complexity of RSpec expectations. They should be placed under `spec/support/matchers/`. Matchers can be placed in a subfolder if they apply to a certain type of specs only (e.g. features, requests, etc.) but shouldn't be if they apply to multiple types of specs.
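For example, a hypothetical matcher under `spec/support/matchers/` that hides a multi-part expectation behind an intention-revealing name:

```ruby
# spec/support/matchers/be_assigned_to.rb (hypothetical)
RSpec::Matchers.define :be_assigned_to do |user|
  match do |issue|
    issue.assignee == user
  end

  failure_message do |issue|
    "expected issue ##{issue.iid} to be assigned to #{user.username}"
  end
end

# In a spec: expect(issue).to be_assigned_to(user)
```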
All shared contexts should be placed under `spec/support/shared_contexts/`. Shared contexts can be placed in a subfolder if they apply to a certain type of specs only (e.g. features, requests, etc.) but shouldn't be if they apply to multiple types of specs.

Each file should include only one context and have a descriptive name, e.g. `spec/support/shared_contexts/controllers/githubish_import_controller_shared_context.rb`.
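A minimal sketch of what such a file could contain (the context name and its body are hypothetical):

```ruby
# spec/support/shared_contexts/controllers/githubish_import_controller_shared_context.rb
shared_context 'a GitHub-ish import controller' do
  let(:user) { create(:user) }

  before do
    sign_in(user) # hypothetical helper
  end
end

# Pulled into a spec with: include_context 'a GitHub-ish import controller'
```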
All shared examples should be placed under `spec/support/shared_examples/`. Shared examples can be placed in a subfolder if they apply to a certain type of specs only (e.g. features, requests, etc.) but shouldn't be if they apply to multiple types of specs.

Each file should include only one context and have a descriptive name, e.g. `spec/support/shared_examples/controllers/githubish_import_controller_shared_example.rb`.
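A minimal sketch (the example group name, helper, and file path are hypothetical):

```ruby
# spec/support/shared_examples/requires_authentication_shared_example.rb (hypothetical)
shared_examples 'an endpoint requiring authentication' do
  it 'returns 401 for anonymous users' do
    # `endpoint_path` is expected to be defined by the including spec.
    get endpoint_path

    expect(response).to have_http_status(401)
  end
end

# In a request spec: it_behaves_like 'an endpoint requiring authentication'
```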
Helpers are usually modules that provide some methods to hide the complexity of specific RSpec examples. You can define helpers in RSpec files if they're not intended to be shared with other specs. Otherwise, they should be placed under `spec/support/helpers/`. Helpers can be placed in a subfolder if they apply to a certain type of specs only (e.g. features, requests, etc.) but shouldn't be if they apply to multiple types of specs.

Helpers should follow the Rails naming / namespacing convention. For instance, `spec/support/helpers/cycle_analytics_helpers.rb` should define:
```ruby
module Spec
  module Support
    module Helpers
      module CycleAnalyticsHelpers
        def create_commit_referencing_issue(issue, branch_name: random_git_name)
          project.repository.add_branch(user, branch_name, 'master')
          create_commit("Commit for ##{issue.iid}", issue.project, user, branch_name)
        end
      end
    end
  end
end
```
Helpers should not change the RSpec config. For instance, the helpers module described above should not include:
```ruby
RSpec.configure do |config|
  config.include Spec::Support::Helpers::CycleAnalyticsHelpers
end
```
GitLab uses `factory_girl` as a test fixture replacement.

- Factory definitions live in `spec/factories/`, named using the pluralization of their corresponding model (`User` factories are defined in `users.rb`).
- FactoryGirl methods are mixed in to all RSpec groups, so you can (and should) call `create(...)` instead of `FactoryGirl.create(...)`.
- Factories don't have to be limited to `ActiveRecord` objects. See example.
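A sketch of a factory definition in `spec/factories/issues.rb`; the attribute values and associations are assumptions for illustration:

```ruby
FactoryGirl.define do
  factory :issue do
    title 'Fix the flaky spec'
    project                            # implicit association to the :project factory
    association :author, factory: :user
  end
end
```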
All fixtures should be placed under `spec/fixtures/`.
RSpec config files are files that change the RSpec config (i.e. `RSpec.configure do |config|` blocks). They should be placed under `spec/support/config/`.
Each file should be related to a specific domain, e.g. `spec/support/config/capybara.rb`, `spec/support/config/carrierwave.rb`, etc.
Helpers can be included in the `spec/support/config/rspec.rb` file. If a helpers module applies only to a certain kind of specs, it should add modifiers to the `config.include` call. For instance, if `spec/support/helpers/cycle_analytics_helpers.rb` applies to `:lib` and `type: :model` specs only, you would write the following:
```ruby
RSpec.configure do |config|
  config.include Spec::Support::Helpers::CycleAnalyticsHelpers, :lib
  config.include Spec::Support::Helpers::CycleAnalyticsHelpers, type: :model
end
```
To make testing Rake tasks a little easier, there is a helper that can be included in lieu of the standard spec helper. Instead of `require 'spec_helper'`, use `require 'rake_helper'`. The helper includes `spec_helper` for you, and configures a few other things to make testing Rake tasks easier.
At a minimum, requiring the Rake helper will redirect `stdout`, include the runtime task helpers, and include the `RakeHelpers` spec support module.
The `RakeHelpers` module exposes a `run_rake_task(<task>)` method to make executing tasks simple. See `spec/support/rake_helpers.rb` for all available methods.
Example:
```ruby
require 'rake_helper'

describe 'gitlab:shell rake tasks' do
  before do
    Rake.application.rake_require 'tasks/gitlab/shell'

    stub_warn_user_is_not_gitlab
  end

  describe 'install task' do
    it 'invokes create_hooks task' do
      expect(Rake::Task['gitlab:shell:create_hooks']).to receive(:invoke)

      run_rake_task('gitlab:shell:install')
    end
  end
end
```
GitLab has a massive test suite that, without parallelization, can take hours to run. It's important that we make an effort to write tests that are accurate and effective as well as fast.
Here are some things to keep in mind regarding test performance:
- `double` and `spy` are faster than `FactoryGirl.build(...)`.
- `FactoryGirl.build(...)` and `.build_stubbed` are faster than `.create`.
- Don't `create` an object when `build`, `build_stubbed`, `attributes_for`, `spy`, or `double` will do. Database persistence is slow! (See the sketch after this list.)
- Use `create(:empty_project)` instead of `create(:project)` when you don't need the underlying Git repository. Filesystem operations are slow!
- Don't mark a feature as requiring JavaScript (through `@javascript` in Spinach or `:js` in RSpec) unless it's actually required for the test to be valid. Headless browser testing is slow!
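A short sketch of the difference, using the factory names mentioned above:

```ruby
# Slow: persists records and initializes a Git repository on disk.
project = create(:project)

# Faster: persists records but skips the Git repository.
project = create(:empty_project)

# Fastest: builds the object graph in memory only; nothing touches the database.
project = build_stubbed(:empty_project)
```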
Our current CI parallelization setup is as follows:

1. The `knapsack` job in the prepare stage is supposed to ensure we have a `knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file:
   - The `knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file is fetched from S3; if it's not there, we initialize the file with `{}`.
2. Each `rspec x y` job is run with `knapsack rspec` and should have an evenly distributed share of tests:
   - This works because the jobs have access to the `knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file, since the "artifacts from all previous stages are passed by default". ^1
   - The jobs set their own report path to `KNAPSACK_REPORT_PATH=knapsack/${CI_PROJECT_NAME}/${JOB_NAME[0]}_node_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json`.
   - If knapsack is doing its job, the test files that are run should be listed under `Report specs`, not under `Leftover specs`.
3. The `update-knapsack` job takes all the `knapsack/${CI_PROJECT_NAME}/${JOB_NAME[0]}_node_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json` files from the `rspec x y` jobs and merges them into a single `knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file that is then uploaded to S3.

After that, the next pipeline will use the up-to-date `knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file. The same strategy is used for Spinach tests as well.
The GitLab test suite is monitored for the `master` branch, and any branch that includes `rspec-profile` in its name.
A public dashboard is available for everyone to see. Feel free to look at the slowest test files and try to improve them.
The test suite is additionally run against MySQL for `master`, and any branch that includes `mysql` in the name.
in the name.GitLab moved from Cucumber to Spinach for its feature/integration tests in September 2012.
As of March 2016, we are trying to avoid adding new Spinach tests going forward, opting for RSpec feature specs.
Adding new Spinach scenarios is acceptable only if the new scenario requires no more than one new step definition. If more than that is required, the test should be re-implemented using RSpec instead.