#1 Dirinstall gating test for Anaconda
Closed 5 years ago by rvykydal. Opened 5 years ago by rvykydal.
rpms/rvykydal/anaconda master-gating-tests into master

file added
+56
@@ -0,0 +1,56 @@ 

+ [Anaconda](https://github.com/rhinstaller/anaconda) installer gating tests.

+ 

+ In addition to the tests (`tests*.yml`), this repository also contains playbooks for running the tests from localhost on a remote test runner. The runner can be provisioned by `linchpin`. See the [run_tests_remotely.sh](run_tests_remotely.sh) script for an example of the playbooks' usage.
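
+ For example, a typical invocation of the script (the artifacts directory is an arbitrary local path):

+ ```sh

+ # Check the configuration only (dependencies, linchpin credentials, ssh keys)

+ ./run_tests_remotely.sh -c

+ # Provision a runner, run the tests, and fetch the artifacts to a local directory

+ ./run_tests_remotely.sh -a /tmp/gating-artifacts

+ ```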

+ 

+ Running the tests remotely

+ --------------------------

+ 

+ ### Test runner

+ 

+ The remote test runner can be provided in any way. To be usable by the playbooks:

+ 

+ * It has to allow ssh access as a remote user that is allowed to become root, using the ssh key configured by `private_key_file` in the [ansible config](remote_config/ansible.cfg). By default the `remote_user` for running the tests is `root`.

+ * The test runner host name / IP should be configured for the playbooks in the `gating_test_runner` group of [remote_config/inventory](remote_config/inventory) (see the sketch below).
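
+ A minimal sketch of populating the inventory manually (the IP address and the file name are hypothetical):

+ ```sh

+ # The playbooks read the inventory from the remote_config/inventory directory

+ cat > remote_config/inventory/manual.inventory <<'EOF'

+ [gating_test_runner]

+ 10.0.0.5

+ EOF

+ ```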

+ 

+ The runner can be provisioned in a cloud by linchpin, as in the [script](run_tests_remotely.sh) (an invocation sketch follows this list):

+ 

+ * The cloud credentials need to be configured in the file and profile referred to by the `credentials` variable of the [topology](linchpin/topologies/gating-test.yml). So the credentials file [`clouds.yml`](linchpin/credentials/clouds.yml) should contain the profile `ci-rhos`. The file can be placed in the `~/.config/linchpin` directory, or the directory containing the file can be set with the `linchpin` `--creds-path` option.

+ * The ssh key is set by the `keypair` value of the linchpin [topology](linchpin/topologies/gating-test.yml) file. It should correspond to the key defined in the [ansible config](remote_config/ansible.cfg). The [topology](linchpin/topologies/gating-test.yml) file also defines the image to be used for the test runner.

+ * The script populates the [inventory](remote_config/inventory) for the playbooks with the inventory generated by linchpin from the [layout](linchpin/layouts/gating-test.yml).

+ * The script tries to find out which remote user should be used (`root`, `fedora`, or `cloud-user`) and updates the [ansible config](remote_config/ansible.cfg) with the value.
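
+ For reference, the provisioning and teardown commands used by the [script](run_tests_remotely.sh):

+ ```sh

+ # Provision the test runner defined by the topology

+ linchpin -v --workspace linchpin -p linchpin/PinFile -c linchpin/linchpin.conf up

+ # Destroy the test runner when the tests are done

+ linchpin -v --workspace linchpin -p linchpin/PinFile -c linchpin/linchpin.conf destroy

+ ```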

+ 

+ ### Test runner environment

+ 

+ The test runner environment is prepared by the [`prepare-test-runner.yml`](prepare-test-runner.yml) playbook:

+ 

+ * It is possible to add repositories to the runner by defining the [`test_runner_repos`](roles/prepare-test-runner/defaults/main.yml) variable (see the sketch after this list). This can be useful, for example, for adding a repository with a scratch build to be tested, or for adding repositories with test dependencies missing on the remote runner.

+ * An empty directory for storing test artifacts is created on the test runner, based on the [`artifacts`](roles/prepare-test-runner/vars/main.yml) variable.
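
+ A minimal sketch of adding a repository this way (the repository name and URL are hypothetical, following the commented example in the [defaults](roles/prepare-test-runner/defaults/main.yml)):

+ ```sh

+ ansible-playbook prepare-test-runner.yml \

+     --extra-vars='{"test_runner_repos": {"scratch-build": {"name": "scratch-build", "source": "baseurl=http://example.com/scratch/x86_64/os/"}}}'

+ ```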

+ 

+ ### Test playbooks configuration

+ 

+ #### Running on the remote runner:

+ 

+ Normally the testing system runs all the `tests*.yml` playbooks.

+ 

+ **WARNING:**

+ The test playbooks are run on `localhost` (the test runner provided by the testing system). They change the test runner environment (e.g. install packages), so you most probably don't want to run them as they are, i.e. on your local host.

+ The [script](run_tests_remotely.sh) updates the `hosts` value of the test playbooks to use the remote host from the [`gating_test_runner`](remote_config/inventory/hosts) group as the test runner (using a [playbook](set_tests_to_run_on_remote.yml)).

+ If you want to run the test playbooks separately, make sure the `hosts` value in the test playbook is set to the remote test runner (e.g. `gating_test_runner`).
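
+ The same update can be done from the command line:

+ ```sh

+ # Rewrites "- hosts: localhost" to "- hosts: gating_test_runner" in all tests*.yml playbooks

+ ansible-playbook set_tests_to_run_on_remote.yml

+ ```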

+ 

+ The test playbooks need the [`artifacts`](roles/prepare-test-runner/vars/main.yml) variable supplied, as can be seen in the [script](run_tests_remotely.sh). (Normally the testing system takes care of this.)
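
+ For example, the way the [script](run_tests_remotely.sh) runs them:

+ ```sh

+ # Use the ansible configuration for running tests on the remote host

+ export ANSIBLE_CONFIG=remote_config/ansible.cfg

+ # Supply the artifacts variable (normally the testing system's job)

+ ansible-playbook --extra-vars="artifacts=./artifacts" tests.yml

+ ```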

+ 

+ #### Installation repositories:

+ 

+ Repositories (base and additional) used for the installation test are defined in the [repos](roles/installation-repos/defaults/main.yml) configuration. Their URLs can either be defined explicitly or looked up in specified repositories of the test runner.
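
+ A sketch of overriding the repositories explicitly from the command line (the URL is hypothetical; the variable names come from the [defaults](roles/installation-repos/defaults/main.yml)):

+ ```sh

+ ansible-playbook --extra-vars='{"artifacts": "./artifacts", "base_repo_command": "url --url=http://example.com/fedora/x86_64/os/", "repo_commands": []}' tests.yml

+ ```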

+ 

+ #### dirinstall test

+ 

+ There are text and vnc variants of the dirinstall test. Both run all the kickstarts found in [roles/dirinstall/templates/kickstarts](roles/dirinstall/templates/kickstarts).
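
+ The variants are tagged in [tests.yml](tests.yml), so a single variant can be selected by skipping the other one:

+ ```sh

+ # Run only the text variant by skipping the vnc one

+ ansible-playbook --skip-tags dirinstall-vnc --extra-vars="artifacts=./artifacts" tests.yml

+ ```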

+ 

+ 

+ ### The results

+ 

+ The results and logs are fetched from the remote host by another [playbook](clean-test-runner.yml), which uses the [`local_artifacts`](roles/clean-test-runner/defaults/main.yml) variable to set the target directory on the local host. This value can also be passed to the [script](run_tests_remotely.sh) with the `-a` option.
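
+ For example (`/tmp/artifacts` is the default; the directory below is arbitrary):

+ ```sh

+ ansible-playbook --extra-vars="local_artifacts=/tmp/my-artifacts" clean-test-runner.yml

+ ```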

@@ -0,0 +1,25 @@ 

+ ---

+ # Check if remote_user is reachable by ansible and set ansible.cfg

+ # if so.

+ 

+ - hosts: gating_test_runner

+   become: True

+   gather_facts: False

+   remote_user: "{{ remote_user }}"

+ 

+   tasks:

+   - name: Try a raw command as a check

+     raw: echo "CHECK OK"

+     register: result

+ 

+   - debug:

+       msg: "{{ result }}"

+ 

+   - name: Set ansible.cfg remote user to "{{ remote_user }}"

+     become: no

+     local_action:

+       module: lineinfile

+       path: ./remote_config/ansible.cfg

+       regexp: ^remote_user

+       line: "remote_user = {{ remote_user }}"

+     when: result.stdout_lines[0] == "CHECK OK"

@@ -0,0 +1,11 @@ 

+ ---

+ # get artifacts from test-runner

+ - hosts: gating_test_runner

+   become: true

+   vars_files:

+     # Needed to get artifacts location on test runner

+     - roles/prepare-test-runner/vars/main.yml

+ 

+   roles:

+     - role: clean-test-runner

+ 

@@ -0,0 +1,4 @@ 

+ ---

+ gating-test:

+   topology: gating-test.yml

+   layout: gating-test.yml

@@ -0,0 +1,7 @@ 

+ clouds:

+   ci-rhos:

+     auth:

+       auth_url:

+       project_name:

+       username:

+       password:

@@ -0,0 +1,1 @@ 

+ [gating_test_runner]

@@ -0,0 +1,11 @@ 

+ ---

+ inventory_layout:

+   #inventory_file: "{% raw -%}{{ workspace }}/inventories/gating-test.inventory{%- endraw%}"

+   vars:

+     hostname: __IP__

+   hosts:

+     gating_test_runner:

+       count: 1

+       host_groups:

+         - gating_test_runner

+ 

@@ -0,0 +1,295 @@ 

+ # This is a well-documented, (mostly) commented-out file, which

+ # covers the configuration options available in LinchPin

+ #

+ # Used to override default configuration settings for LinchPin

+ # Defaults exist in linchpin/linchpin.constants file

+ #

+ # Uncommented options enable features found in v1.5.1 or newer and

+ # can be turned off by commenting them out.

+ #

+ # structured in INI style

+ # use %% to allow code interpolation

+ # use % to use config interpolation

+ #

+ 

+ [DEFAULT]

+ # name of the python package (Redundant, but easier than programmatically

+ # obtaining the value. It's very unlikely to change.)

+ pkg = linchpin

+ 

+ # Useful for storing the RunDB or other components like global credentials

+ # travis-ci doesn't like ~/.config/linchpin, use /tmp

+ #default_config_path = ~/.config/linchpin

+ 

+ # When creating a provider not already included in LinchPin, this path

+ # extends where LinchPin will look to run the appropriate playbooks

+ #external_providers_path = %(default_config_path)s/linchpin-x

+ 

+ # When adding anything to the lp section, it should be general for

+ # the entire application.

+ [lp]

+ 

+ # load custom ansible modules from here

+ #module_folder = library

+ 

+ # rundb tracks provisioning transactions

+ # If you add a new one, rundb/drivers.py needs to be updated to match

+ # rundb_conn is the location of the run database.

+ # A common reason to move it is to use the rundb centrally across

+ # the entire system, or in a central db on a shared filesystem.

+ # System-wide RunDB: rundb_conn = ~/.config/linchpin/rundb-::mac::.json

+ #rundb_conn = {{ workspace }}/.rundb/rundb-::mac::.json

+ rundb_conn = ~/.config/linchpin/rundb-::mac::.json

+ 

+ # name the type of Run database. Currently only TinyRunDB exists

+ #rundb_type = TinyRunDB

+ 

+ # How to connect to the RunDB, if it's on a separate server,

+ # it may be tcp or ssh

+ #rundb_conn_type = file

+ 

+ # The schema is used because TinyDB is a NoSQL db. Another DB

+ # may use this as a way to manage fields in a specific table.

+ #rundb_schema = {"action": "",

+ #                "inputs": [],

+ #                "outputs": [],

+ #                "start": "",

+ #                "end": "",

+ #                "rc": 0,

+ #                "uhash": ""}

+ 

+ # each entry in the RunDB contains a unique-ish hash (uhash). This

+ # sets the hashing mechanism used to generate the uhash.

+ #rundb_hash = sha256

+ 

+ # The default dateformat used in LinchPin. Specifically used in the

+ # RunDB for recording start and end dates, but also used elsewhere.

+ #dateformat = %%m/%%d/%%Y %%I:%%M:%%S %%p

+ 

+ # The name of the pinfile. Someone could adjust this and use TopFile

+ # or somesuch. The ramifications of this would mean that the file in

+ # the workspace that linchpin reads would change to this value.

+ #default_pinfile = PinFile

+ 

+ # By default, whenever linchpin performs an action

+ # (linchpin up/linchpin destroy), the data is read from the PinFile.

+ # Enabling 'use_rundb_for_actions' will allow destroy and certain up

+ # actions (specifically when using --run-id or --tx-id) to pull data

+ # from the RunDB instead.

+ #use_rundb_for_actions = False

+ use_rundb_for_actions = True

+ 

+ # A user can request specific data distilled from the RunDB. This flag

+ # enables the Context Distiller.

+ # NOTE: This flag requires generate_resources = False.

+ #distill_data = False

+ distill_data = True

+ 

+ # If desired, enabling distill_on_error will distill any successfully (and

+ # possibly failed) provisioned resources. This is predicated on the data

+ # being written to the RunDB (usually means _async tasks may never record

+ # data upon failure).

+ distill_on_error = False

+ 

+ # User can make linchpin use the actual return codes for the final return code:

+ # if enabled (True), even if one target provision is successful linchpin

+ # returns exit code zero, else it returns the sum of all the return codes

+ # use_actual_rcs = False

+ 

+ # LinchPin sets several extra_vars (evars) that are passed to the playbooks.

+ # This section controls those items.

+ [evars]

+ 

+ # enables the ansible --check option

+ # _check_mode = False

+ 

+ # enables the ansible async ability. For some providers, it allows multiple

+ # provisioning tasks to happen at once, then will collect the data afterward.

+ # The default is perform the provision actions in serial.

+ #_async = False

+ 

+ # How long to wait before failing (in seconds) for an async task.

+ #async_timeout = 1000

+ 

+ # the uhash value will still exist, but will not be added to

+ # instances or the inventory_path

+ #enable_uhash = False

+ enable_uhash = True

+ 

+ # in older versions of linchpin (<v1.0.4), a resources folder exists, which

+ # dumped the data that is now stored in the RunDB. To disable the resources

+ # output, set the value to False.

+ #generate_resources = True

+ generate_resources = False

+ 

+ # default paths in playbooks

+ #

+ # lp_path = <src_dir>/linchpin

+ # determined in the load_config method of linchpin.cli.LinchpinCliContext

+ 

+ # Each of the following items controls the path (usually along with the

+ # default values below) to the corresponding item.

+ 

+ # In the workspace (generally), this is the location of the layouts and

+ # topologies looked up by the PinFile. If either of these change, the

+ # value in linchpin/templates must also change.

+ #layouts_folder = layouts

+ #topologies_folder = topologies

+ 

+ # The relative location for hooks

+ #hooks_folder = hooks

+ 

+ # The relative location for provider roles

+ #roles_folder = roles

+ 

+ # The relative location for storing inventories

+ #inventories_folder = inventories

+ 

+ # The relative location for resources output (deprecated)

+ #resources_folder = resources

+ 

+ # The relative location to find schemas (deprecated)

+ #schemas_folder = schemas

+ 

+ # The relative location to find playbooks

+ #playbooks_folder = provision

+ 

+ # The default path to schemas for validation (deprecated)

+ #default_schemas_path = {{ lp_path }}/defaults/%(schemas_folder)s

+ 

+ # The default path to topologies if they aren't in the workspace

+ #default_topologies_path = {{ lp_path }}/defaults/%(topologies_folder)s

+ 

+ # The default path to inventory layouts if they aren't in the workspace

+ #default_layouts_path = {{ lp_path }}/defaults/%(layouts_folder)s

+ 

+ # The default path for outputting ansible static inventories

+ #default_inventories_path = {{ lp_path }}/defaults/%(inventories_folder)s

+ 

+ # The default path to the ansible roles which control the providers

+ #default_roles_path = {{ lp_path }}/%(playbooks_folder)s/%(roles_folder)s

+ 

+ # In older versions (<1.2.x), the schema was held here. These schemas are

+ # deprecated.

+ #schema_v3 = %(default_schemas_path)s/schema_v3.json

+ #schema_v4 = %(default_schemas_path)s/schema_v4.json

+ 

+ # The location where default credentials data would exist. This path doesn't

+ # automatically exist

+ #default_credentials_path = %(default_config_path)s

+ 

+ # If desired, one could overwrite the location of the generated inventory path

+ #inventory_path = {{ workspace }}/{{inventories_folder}}/happy.inventory

+ 

+ # Libvirt images can be stored almost anywhere (not /tmp).

+ # Unprivileged users need not set up sudo to manage a path to which they have rights.

+ # The following are specific settings to manage libvirt images and instances

+ 

+ # the location to store generated ssh keys and the like

+ #default_ssh_key_path = ~/.ssh

+ 

+ # Where to store the libvirt images for copying/booting instances

+ #libvirt_image_path = /var/lib/libvirt/images/

+ 

+ # What user to use to access libvirt.

+ # Using root means sudo without password must be set up

+ #libvirt_user = root

+ 

+ # When using root or any privileged user, this must be set to yes.

+ # sudo without password must also be set up

+ #libvirt_become = yes

+ 

+ # This section covers settings for the `linchpin init` command

+ #[init]

+ 

+ # source path of files generated by linchpin init

+ #source = templates/

+ 

+ # formal name of the generated PinFile. Can be changed as desired.

+ #pinfile = PinFile

+ 

+ # This section covers logging setup

+ [logger]

+ 

+ # Turns off and on the logger functionality

+ #enable = True

+ 

+ # Full path to the location of the linchpin log file

+ file = ~/.config/linchpin/linchpin.log

+ 

+ # Log format used. See https://docs.python.org/2/howto/logging-cookbook.html

+ #format = %%(levelname)s %%(asctime)s %%(message)s

+ 

+ # Date format used. See https://docs.python.org/2/howto/logging-cookbook.html

+ #dateformat = %%m/%%d/%%Y %%I:%%M:%%S %%p

+ 

+ # Level of logging provided

+ #level = logging.DEBUG

+ 

+ # Logging to the console via STDERR

+ #[console]

+ 

+ # logging to the console should also be possible

+ # NOTE: Placeholder only, cannot disable.

+ #enable = True

+ 

+ # Log format used. See https://docs.python.org/2/howto/logging-cookbook.html

+ #format = %%(message)s

+ 

+ # Level of logging provided

+ #level = logging.INFO

+ 

+ # LinchPin hooks have several states depending on the action. Currently, there

+ # are three hook states relating to tasks being completed.

+ # * up - when performing the up (provision) action

+ # * destroy - when performing the destroy (teardown) action

+ # * inv - when performing the internal inventory generation action

+ #   (currently unimplemented)

+ #[hookstates]

+ 

+ # when performing the up action, these hooks states are run

+ #up = pre,post,inv

+ 

+ # when performing the inv action, these hooks states are run

+ #inv = post

+ 

+ # when performing the destroy action, these hooks states are run

+ #destroy = pre,post

+ 

+ # This section covers file extensions for generating or looking

+ # up specific files

+ #[extensions]

+ 

+ # When looking for provider playbooks, use this extension

+ #playbooks = .yml

+ 

+ # When generating inventory files, use this extension

+ #inventory = .inventory

+ 

+ # This section controls the ansible settings for display or other settings

+ #[ansible]

+ 

+ # If set to true, this enables verbose output automatically to the screen.

+ # This is equivalent of passing `-v` to the linchpin command line shell.

+ #console = False

+ 

+ # When linchpin is run, certain states are called at certain points along the

+ # execution timeline. These STATES are defined below.

+ #[states]

+ # in future each state will have comma separated substates

+ 

+ # The name of the state which occurs before (pre) provisioning (up)

+ #preup = preup

+ 

+ # The name of the state which occurs before (pre) teardown (destroy)

+ #predestroy = predestroy

+ 

+ # The name of the state which occurs after (post) provisioning (up)

+ #postup = postup

+ 

+ # The name of the state which occurs after (post) teardown (destroy)

+ #postdestroy = postdestroy

+ 

+ # The name of the state which occurs after (post) inventory is generated (inv)

+ #postinv = inventory

+ 

@@ -0,0 +1,19 @@ 

+ ---

+ topology_name: gating-test

+ resource_groups:

+     - resource_group_name: gating-test

+       resource_group_type: openstack

+       resource_definitions:

+         - name: "gating_test_runner"

+           role: os_server

+           flavor: m1.small

+           #image: Fedora-Cloud-Base-28-1.1

+           image: Fedora-Cloud-Base-28-compose-latest

+           count: 1

+           keypair: kstests

+           fip_pool: 10.8.240.0

+           networks:

+             - installer-jenkins-priv-network

+       credentials:

+         filename: clouds.yml

+         profile: ci-rhos

@@ -0,0 +1,7 @@ 

+ ---

+ # prepare test runner

+ - hosts: gating_test_runner

+   become: true

+ 

+   roles:

+     - prepare-test-runner

@@ -0,0 +1,5 @@ 

+ [defaults]

+ inventory = inventory

+ remote_user = root

+ host_key_checking = False

+ private_key_file = /path/to/private_key

@@ -0,0 +1,3 @@ 

+ [gating_test_runner]

+ [gating_test_runner:vars]

+ ansible_python_interpreter=/usr/bin/python3

@@ -0,0 +1,1 @@ 

+ local_artifacts: /tmp/artifacts

@@ -0,0 +1,23 @@ 

+ ---

+ 

+ 

+ - name: Clean local artifacts dir

+   become: no

+   local_action:

+     module: file

+     path: "{{ local_artifacts }}"

+     state: "{{ item }}"

+   with_items:

+     - absent

+ 

+ - name: Make sure rsync (required to fetch artifacts) is installed

+   dnf:

+     name:

+       - rsync

+ 

+ - name: Fetch artifacts

+   synchronize:

+     mode: pull

+     delete: yes

+     src: "{{ artifacts }}"

+     dest: "{{ local_artifacts }}"

@@ -0,0 +1,74 @@ 

+ ---

+ 

+ - set_fact:

+     kickstart: "{{ kickstart_template | basename }}"

+ 

+ - set_fact:

+     test_name_with_ks: "{{ test_name }}.{{ kickstart }}"

+ 

+ - debug:

+     msg: "Running '{{ test_name }}' with kickstart '{{ kickstart }}'"

+ 

+ - name: Copy installation kickstart

+   template:

+     src: "templates/kickstarts/{{ kickstart }}"

+     dest: "{{ kickstart_dest }}"

+     mode: 0755

+ 

+ - name: Clean target directory

+   file:

+     path: "{{ install_dir }}/"

+     state: absent

+ 

+ - name: Clean installation logs

+   file:

+     path: "/tmp/{{ item }}"

+     state: absent

+   with_items: "{{ installation_logs }}"

+ 

+ - name: Run dirinstall

+   shell: anaconda --dirinstall {{ install_dir }} --kickstart {{ kickstart_dest }} {{ method }} --noninteractive 2>&1

+   register: result

+ 

+ - debug:

+     msg: "{{ result }}"

+ 

+ - set_fact:

+     result_str: "FAIL"

+ 

+ - set_fact:

+     result_str: "PASS"

+   when: result.rc == 0

+ 

+ - name: Update global test.log

+   lineinfile:

+     path: "{{ artifacts }}/test.log"

+     line: "{{ result_str }} {{ test_name_with_ks }}"

+     create: yes

+     insertafter: EOF

+ 

+ - name: Create this test log

+   copy:

+     content: "{{ result.stdout }}"

+     dest: "{{ artifacts }}/{{ result_str }}_{{ test_name_with_ks }}.log"

+ 

+ - name: Create installation logs dir in artifacts

+   file:

+     path: "{{ artifacts }}/{{ test_name_with_ks }}"

+     state: directory

+ 

+ - name: Copy input kickstart to artifacts

+   copy:

+     remote_src: True

+     src: "{{ kickstart_dest }}"

+     dest: "{{ artifacts }}/{{ test_name_with_ks }}/{{ kickstart_dest | basename }}"

+ 

+ - name: Copy installation logs to artifacts

+   copy:

+     remote_src: True

+     src: "/tmp/{{ item }}"

+     dest: "{{ artifacts }}/{{ test_name_with_ks }}/{{ item }}"

+   with_items: "{{ installation_logs }}"

+   ignore_errors: True

+ 

+ 

@@ -0,0 +1,13 @@ 

+ ---

+ - name: Install vnc install dependencies

+   dnf:

+     name:

+       - metacity

+     state: latest

+   when: method == "--vnc"

+ 

+ - include_tasks: ks-run.yml

+   with_fileglob:

+     - templates/kickstarts/*

+   loop_control:

+     loop_var: kickstart_template

@@ -0,0 +1,18 @@ 

+ {{ base_repo_command }}

+ {{ "\n".join(repo_commands) }}

+ {{ "\n".join(additional_repo_commands) }}

+ lang en_US.UTF-8

+ keyboard --vckeymap=us --xlayouts='us'

+ rootpw --plaintext redhat

+ #firstboot --reconfig

+ timezone --utc Europe/Prague

+ 

+ #bootloader --location=mbr --boot-drive=vda --driveorder=vda

+ #clearpart --all --drives=vda

+ #ignoredisk --only-use=vda

+ #autopart

+ 

+ shutdown

+ 

+ %packages

+ %end

@@ -0,0 +1,5 @@ 

+ ---

+ 

+ install_dir: "/root/installdir"

+ kickstart_dest: "/root/ks.dirinstall.cfg"

+ 

@@ -0,0 +1,41 @@ 

+ ---

+ 

+ ### Base repository

+ 

+ # Base repository command for kickstart

+ 

+ #base_repo_command: "url --url=http://download.englab.brq.redhat.com/pub/fedora/development-rawhide/Everything/x86_64/os/"

+ 

+ # If base_repo_command is not defined, look for base repo url

+ # in [base_repo_from_runner.repo] repository of

+ # /etc/yum.repos.d/base_repo_from_runner.file on test runner

+ 

+ base_repo_from_runner:

+   file: fedora.repo

+   repo: fedora

+ 

+ 

+ ### Additional repositories

+ 

+ # Additional repo commands for kickstart:

+ # - undefine to allow detecting of repos from test runner by

+ #   repos_from_runner variable

+ # - set to [] for no additional repositories

+ 

+ #repo_commands: []

+ #repo_commands:

+ #  - "repo --baseurl=http://download.englab.brq.redhat.com/pub/fedora/development-rawhide/Everything/x86_64/os/"

+ 

+ # If repo_commands is not defined, look for additional repositories

+ # in the [repo] repository of /etc/yum.repos.d/<file> on the test runner.

+ # Multiple repositories can be defined here.

+ 

+ #repos_from_runner:

+ #  - file: fedora.repo

+ #    repo: fedora

+ 

+ 

+ # Additional repo commands to be used in any case,

+ # i.e. even in the case when additional repos are detected by repos_from_runner

+ 

+ additional_repo_commands: []

@@ -0,0 +1,87 @@ 

+ ---

+ 

+ ### Set up local facts from system repositories

+ 

+ - name: Create facts directory for repository custom facts

+   file:

+     state: directory

+     recurse: yes

+     path: /etc/ansible/facts.d

+ 

+ - name: Install base repository facts

+   copy:

+     remote_src: yes

+     src: "/etc/yum.repos.d/{{ base_repo_from_runner.file }}"

+     dest: "/etc/ansible/facts.d/{{ base_repo_from_runner.file }}.fact"

+   when: base_repo_command is not defined and base_repo_from_runner is defined

+ 

+ - name: Install additional repositories facts

+   copy:

+     remote_src: yes

+     src: "/etc/yum.repos.d/{{ item.file }}"

+     dest: "/etc/ansible/facts.d/{{ item.file }}.fact"

+   with_items: "{{ repos_from_runner }}"

+   when: repo_commands is not defined and repos_from_runner is defined

+ 

+ - name: Setup repository facts

+   setup:

+     filter: ansible_local

+ 

+ ### Base repository

+ 

+ - name: Set base installation repository from system base metalink repository

+   set_fact:

+     base_repo_command: "url --metalink={{ ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['metalink'] }}"

+   when: ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['metalink'] is defined

+ 

+ - name: Set base installation repository from system base mirrorlist repository

+   set_fact:

+     base_repo_command: "url --mirrorlist={{ ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['mirrorlist'] }}"

+   when: ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['mirrorlist'] is defined

+ 

+ - name: Set base installation repository from system base url repository

+   set_fact:

+     base_repo_command: "url --url={{ ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['baseurl'] }}"

+   when: ansible_local[base_repo_from_runner.file][base_repo_from_runner.repo]['baseurl'] is defined

+ 

+ ### Additional repositories

+ 

+ - name: Look for system metalink repositories

+   set_fact:

+     repos_metalink: "{{ repos_metalink | default([]) + [ 'repo --name=' + item.repo + ' --metalink=' + ansible_local[item.file][item.repo]['metalink'] ] }}"

+     #ignore_errors: true

+   with_items: "{{ repos_from_runner }}"

+   when: repo_commands is not defined and ansible_local[item.file][item.repo]['metalink'] is defined

+ 

+ - name: Look for system mirrorlist repositories

+   set_fact:

+     repos_mirrorlist: "{{ repos_mirrorlist | default([]) + [ 'repo --name=' + item.repo + ' --mirrorlist=' + ansible_local[item.file][item.repo]['mirrorlist'] ] }}"

+     #ignore_errors: true

+   with_items: "{{ repos_from_runner }}"

+   when: repo_commands is not defined and ansible_local[item.file][item.repo]['mirrorlist'] is defined

+ 

+ - name: Look for system baseurl repositories

+   set_fact:

+     repos_baseurl: "{{ repos_baseurl | default([]) + [ 'repo --name=' + item.repo + ' --baseurl=' + ansible_local[item.file][item.repo]['baseurl'] ] }}"

+     #ignore_errors: true

+   with_items: "{{ repos_from_runner }}"

+   when: repo_commands is not defined and ansible_local[item.file][item.repo]['baseurl'] is defined

+ 

+ 

+ - name: Set additional metalink installation repositories from system

+   set_fact:

+     repo_commands: "{{ repo_commands | default([]) + [ item ] }}"

+   with_items: "{{ repos_metalink }}"

+   when: repos_metalink is defined

+ 

+ - name: Set additional mirrorlist installation repositories from system

+   set_fact:

+     repo_commands: "{{ repo_commands | default([]) + [ item ] }}"

+   with_items: "{{ repos_mirrorlist }}"

+   when: repos_mirrorlist is defined

+ 

+ - name: Set additional baseurl installation repositories from system

+   set_fact:

+     repo_commands: "{{ repo_commands | default([]) + [ item ] }}"

+   with_items: "{{ repos_baseurl }}"

+   when: repos_baseurl is defined

@@ -0,0 +1,6 @@ 

+ ---

+ - name: Prepare testing environment

+   dnf:

+     name:

+       - anaconda

+     state: latest

@@ -0,0 +1,8 @@ 

+ ---

+ 

+ # Additional repos added to the test runner, e.g. a repo with builds to be tested

+ #test_runner_repos:

+ #  latest-build:

+ #    name: latest-build

+ #    source: "baseurl=http://example.com/x86_64/os/"

+ test_runner_repos: []

@@ -0,0 +1,17 @@ 

+ ---

+ 

+ - name: Add repositories

+   template:

+     src: repo.j2

+     dest: "/etc/yum.repos.d/{{ test_runner_repos[item]['name'] }}.repo"

+   with_items: "{{ test_runner_repos }}"

+ 

+ - name: Create empty artifacts directory

+   file:

+     path: "{{ artifacts }}/"

+     state: "{{ item }}"

+     mode: 0755

+   with_items:

+     - absent

+     - directory

+ 

@@ -0,0 +1,5 @@ 

+ [{{ test_runner_repos[item]['name'] }}]

+ name={{ test_runner_repos[item]['name'] }}

+ {{ test_runner_repos[item]['source'] }}

+ enabled=1

+ gpgcheck=0

@@ -0,0 +1,2 @@ 

+ ---

+ artifacts: "./artifacts"

@@ -0,0 +1,163 @@ 

+ #!/bin/bash

+ 

+ usage () {

+     cat <<HELP_USAGE

+ 

+     $0  [-c] [-a <ARTIFACTS DIR>]

+ 

+     Run gating tests on test runners provisioned by linchpin and deployed with ansible,

+     syncing artifacts to localhost.

+ 

+     -c  Run configuration check only.

+     -a  Local host directory for fetching artifacts from the test runner.

+ HELP_USAGE

+ }

+ 

+ CHECK_ONLY="no"

+ ARTIFACTS_VAR=""

+ 

+ while getopts "ca:" opt; do

+     case $opt in

+         c)

+             # Run only configuration check

+             CHECK_ONLY="yes"

+             ;;

+         a)

+             # Set up directory for fetching artifacts

+             ARTIFACTS_VAR="local_artifacts=${OPTARG}"

+             ;;

+         *)

+             echo "Usage:"

+             usage

+             exit 1

+             ;;

+     esac

+ done

+ 

+ DEFAULT_CRED_FILENAME="clouds.yml"

+ CRED_DIR="${HOME}/.config/linchpin"

+ CRED_FILE_PATH=${CRED_DIR}/${DEFAULT_CRED_FILENAME}

+ TOPOLOGY_FILE_PATH="linchpin/topologies/gating-test.yml"

+ ANSIBLE_CFG_PATH="remote_config/ansible.cfg"

+ 

+ CHECK_RESULT=0

+ 

+ 

+ ############################## Check the configuration

+ 

+ echo

+ echo "========= Dependencies are installed"

+ echo "linchpin and ansible are required to be installed."

+ echo "For linchpin installation instructions see:"

+ echo "https://linchpin.readthedocs.io/en/latest/installation.html"

+ echo

+ 

+ if ! type ansible &> /dev/null; then

+     echo "=> FAILED: ansible package is not installed"

+     CHECK_RESULT=1

+ else

+     echo "=> OK: ansible is installed"

+ fi

+ 

+ if ! type linchpin &> /dev/null; then

+     echo "=> FAILED: linchpin is not installed"

+     CHECK_RESULT=1

+ else

+     echo "=> OK: linchpin is installed"

+ fi

+ 

+ 

+ echo

+ echo "========= Linchpin cloud credentials configuration"

+ echo "The credentials file for linchpin provisioner should be in ${CRED_DIR}"

+ echo "The name of the file and the profile to be used is defined by"

+ echo "   resource_groups.credentials variables in the topology file"

+ echo "   (${TOPOLOGY_FILE_PATH})"

+ echo

+ 

+ config_changed=0

+ if [[ -f ${TOPOLOGY_FILE_PATH} ]]; then

+     grep -q 'filename:.*'${DEFAULT_CRED_FILENAME} ${TOPOLOGY_FILE_PATH}

+     config_changed=$?

+ fi

+ 

+ if [[ ${config_changed} -eq 0 ]]; then

+     if [[ -f ${CRED_FILE_PATH} ]]; then

+         echo "=> OK: ${CRED_FILE_PATH} exists"

+     else

+         echo "=> FAILED: ${CRED_FILE_PATH} does not exist"

+         CHECK_RESULT=1

+     fi

+ else

+     echo "=> NOT CHECKING: seems like this has been configured in a different way"

+ fi

+ 

+ 

+ echo

+ echo "========== Deployment ssh key configuration"

+ echo "The ssh key used for deployment with ansible has to be defined by"

+ echo "private_key_file variable in ${ANSIBLE_CFG_PATH}"

+ echo "and match the key used for provisioning of the machines with linchpin"

+ echo "which is defined by resource_groups.resource_definitions.keypair variable"

+ echo "in topology file (${TOPOLOGY_FILE_PATH})."

+ echo

+ 

+ 

+ deployment_key_defined_line=$(grep 'private_key_file.*=.*[^\S]' ${ANSIBLE_CFG_PATH})

+ if [[ -n "${deployment_key_defined_line}" ]]; then

+     echo "=> OK: ${ANSIBLE_CFG_PATH}: ${deployment_key_defined_line}"

+ else

+     echo "=> FAILED: deployment ssh key not defined in ${ANSIBLE_CFG_PATH}"

+     CHECK_RESULT=1

+ fi

+ 

+ linchpin_keypair=$(grep "keypair:" ${TOPOLOGY_FILE_PATH} | uniq)

+ echo "=> INFO: should be the same key as ${TOPOLOGY_FILE_PATH}: ${linchpin_keypair}"

+ 

+ 

+ if [[ ${CHECK_RESULT} -ne 0 ]]; then

+     echo

+     echo "=> Configuration check FAILED, see FAILED messages above."

+     echo

+ fi

+ 

+ if [[ ${CHECK_ONLY} == "yes" || ${CHECK_RESULT} -ne 0 ]]; then

+     exit ${CHECK_RESULT}

+ fi

+ 

+ 

+ ############################## Run the tests

+ 

+ set -x

+ 

+ ### Clean the linchpin generated inventory

+ rm -rf linchpin/inventories/*.inventory

+ 

+ ### Provision test runner

+ linchpin -v --workspace linchpin -p linchpin/PinFile -c linchpin/linchpin.conf up

+ 

+ ### Pass inventory generated by linchpin to ansible

+ cp linchpin/inventories/*.inventory remote_config/inventory/linchpin.inventory

+ 

+ ### Use remote hosts in tests playbooks

+ ansible-playbook set_tests_to_run_on_remote.yml

+ 

+ ### Use the ansible configuration for running tests on remote host

+ export ANSIBLE_CONFIG=${ANSIBLE_CFG_PATH}

+ 

+ ### Configure remote user for playbooks

+ # By default root is used but it can be fedora or cloud-user for cloud images

+ for USER in root fedora cloud-user; do

+   ansible-playbook --extra-vars="remote_user=$USER" check_and_set_remote_user.yml

+ done

+ 

+ ### Prepare test runner

+ ansible-playbook prepare-test-runner.yml

+ ### Run test on test runner (supply artifacts variable which is testing system's job)

+ ansible-playbook --extra-vars="artifacts=./artifacts" tests.yml

+ ### Gather artifacts (into /tmp/artifacts by default)

+ ansible-playbook --extra-vars="${ARTIFACTS_VAR}" clean-test-runner.yml

+ 

+ ### Destroy the test runner

+ linchpin -v --workspace linchpin -p linchpin/PinFile -c linchpin/linchpin.conf destroy

+ 

@@ -0,0 +1,17 @@ 

+ ---

+ # Replace hosts in test playbooks to run on remote host instead of localhost

+ 

+ - hosts: localhost

+   become: False

+   gather_facts: False

+ 

+   tasks:

+   - name: Replace hosts in tests*.yml playbooks

+     lineinfile:

+       path: "{{ item }}"

+       regexp: "- hosts: localhost\\S*"

+       line: "- hosts: gating_test_runner"

+       backrefs: yes

+     with_fileglob:

+       - tests*.yml

+ 

Is this repeatable, or do we always need to do a git clean before the next run?

file added
+26
@@ -0,0 +1,26 @@ 

+ ---

+ # test anaconda

+ - hosts: localhost

+   become: true

+   tags:

+     - classic

+   vars_files:

+     - vars_tests.yml

+ 

+   roles:

+     - role: prepare-env

+       tags:

+         - prepare-env

+     - role: installation-repos

+     - role: dirinstall

+       vars:

+         method: "--text"

+         test_name: dirinstall-text

+       tags:

+         - dirinstall-text

+     - role: dirinstall

+       vars:

+         method: "--vnc"

+         test_name: dirinstall-vnc

+       tags:

+         - dirinstall-vnc

file added
+11
@@ -0,0 +1,11 @@ 

+ ---

+ # variables for tests.yml

+ installation_logs:

+   - anaconda.log

+   - dbus.log

+   - dnf.librepo.log

+   - hawkey.log

+   - ifcfg.log

+   - packaging.log

+   - program.log

+   - storage.log

We may want to squash the commits before eventually pushing.
Before enabling the test, https://bugzilla.redhat.com/show_bug.cgi?id=1616214 should be resolved.

3 new commits added

  • Gather repositories from runner facts instead of local repo files
  • Ansible packages are not required to be installed on remote runner.
  • Add optional repo after test env requirements are installed
5 years ago

Fixed detection of installation repositories from test runner system

"allowed to become root": fix the two spaces error, please.

I would change this to be more explicit about what linchpin does. Something like:
The runner can be provisioned in a cloud by linchpin...

Is this repeatable, or do we always need to do a git clean before the next run?

Please fix typo in [invenotory].

I don't think this line should be here.

I wrote a few minor things and questions there, but overall really great work!!
You have my ACK.

4 new commits added

  • Don't turn off selinux for the test.
  • Minor cleanups
  • Ansible packages are not required to be installed on remote runner.
  • Add optional repo after test env requirements are installed
5 years ago

Added 2 new commits (the third and fourth were actually already there and were already reviewed).
- One commit for Jirka's review.
- One commit removing disabling of selinux as it was fixed in Anaconda code.

As for the set_tests_to_run_on_remote.yml question from the review: the playbook should be idempotent, so there is no need for a git clean before the next run on localhost.

Looks good to me now. Thanks!

Thank you, I've opened a new pull request with all the commits squashed:
https://src.fedoraproject.org/rpms/anaconda/pull-request/2
It seems that for this PR, Fedora CI has problems with the ordering of the patches in the generated patch used for the CI test.

Pull-Request has been closed by rvykydal

5 years ago