path: root/xen
2020-11-30  libxl: Introduce basic virtio-mmio support on Arm  (Julien Grall)

This patch creates a specific device node in the guest device-tree with an allocated MMIO range and SPI interrupt if the 'virtio' property is present in the domain config.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-Id: <1606732298-22107-23-git-send-email-olekstysh@gmail.com>
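For illustration, here is a minimal sketch of how such a node can be emitted with libfdt's sequential-write API, following the virtio-mmio device-tree binding (compatible string "virtio,mmio", a reg range, and a GIC SPI specifier). The helper name, node name, and IRQ encoding are illustrative assumptions, not the actual libxl code:

    #include <libfdt.h>
    #include <stdint.h>

    /* Hypothetical helper: emit a virtio-mmio node for the guest DT. */
    static int make_virtio_mmio_node(void *fdt, uint64_t base, uint64_t size,
                                     uint32_t spi)
    {
        /* "reg" is <base size>, stored big-endian per the DT spec. */
        uint64_t reg[2] = { cpu_to_fdt64(base), cpu_to_fdt64(size) };
        /* GIC interrupt specifier: <GIC_SPI(0), spi, edge-rising(1)>. */
        uint32_t irq[3] = { cpu_to_fdt32(0), cpu_to_fdt32(spi),
                            cpu_to_fdt32(1) };
        int res;

        if ( (res = fdt_begin_node(fdt, "virtio")) < 0 )
            return res;
        if ( (res = fdt_property_string(fdt, "compatible", "virtio,mmio")) < 0 )
            return res;
        if ( (res = fdt_property(fdt, "reg", reg, sizeof(reg))) < 0 )
            return res;
        if ( (res = fdt_property(fdt, "interrupts", irq, sizeof(irq))) < 0 )
            return res;

        return fdt_end_node(fdt);
    }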
2020-11-30  xen/arm: Add mapcache invalidation handling  (Oleksandr Tyshchenko)

We need to send a mapcache invalidation request to qemu/demu every time a page gets removed from a guest. At the moment, the Arm code doesn't explicitly remove the existing mapping before inserting the new mapping. Instead, this is done implicitly by __p2m_set_entry(). So we need to recognize the case when the old entry is a RAM page *and* the new MFN is different in order to set the corresponding flag. The most suitable place to do this is p2m_free_entry(), where we can find the correct leaf type. The invalidation request will be sent in do_trap_hypercall() later on.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-22-git-send-email-olekstysh@gmail.com>
2020-11-30  xen/ioreq: Make x86's send_invalidate_req() common  (Oleksandr Tyshchenko)

As IOREQ is a common feature now and we also need to invalidate the qemu/demu mapcache on Arm when the required condition occurs, this patch moves the function to the common code (and renames it to ioreq_signal_mapcache_invalidate). It also moves the per-domain qemu_mapcache_invalidate variable out of the arch sub-struct (and drops the "qemu" prefix). The subsequent patch will add mapcache invalidation handling on Arm.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-21-git-send-email-olekstysh@gmail.com>
2020-11-30  xen/arm: io: Abstract sign-extension  (Oleksandr Tyshchenko)

In order to avoid code duplication (both handle_read() and handle_ioserv() contain the same code for the sign-extension), move this code into a common helper to be used by both.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
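For reference, a self-contained sketch of the sign-extension logic such a helper performs; the name and parameter types are illustrative, not the actual Xen helper:

    #include <stdint.h>
    #include <stdbool.h>

    /* Sign-extend a value read with the given access width (in bits)
     * to the full 64-bit register width, if the access was signed. */
    static uint64_t sign_extend(uint64_t data, unsigned int bits, bool sign)
    {
        if ( sign && bits < 64 )
        {
            uint64_t m = UINT64_C(1) << (bits - 1);

            /* Two's-complement trick; assumes bits above 'bits' are zero. */
            data = (data ^ m) - m;
        }

        return data;
    }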
2020-11-30  xen/dm: Introduce xendevicemodel_set_irq_level DM op  (Julien Grall)

This patch adds the ability for the device emulator to notify the other end (some entity running in the guest) using an SPI, and implements the Arm-specific bits for it. The proposed interface allows the emulator to set the logical level of one of a domain's IRQ lines. We can't reuse the existing DM op (xen_dm_op_set_isa_irq_level) to inject an interrupt, as the "isa_irq" field is only 8-bit and able to cover IRQs 0 - 255, whereas we need a wider range (0 - 1020).

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-Id: <1606732298-22107-19-git-send-email-olekstysh@gmail.com>
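A sketch of how an emulator might drive an edge on a guest SPI with the new op, assuming the libxendevicemodel call takes (handle, domid, irq, level); treat the helper as illustrative:

    #include <xendevicemodel.h>

    /* Hypothetical helper: pulse a guest SPI high, then low again. */
    int pulse_spi(xendevicemodel_handle *dmod, domid_t domid, uint32_t spi)
    {
        int rc = xendevicemodel_set_irq_level(dmod, domid, spi, 1); /* assert */

        if ( rc )
            return rc;

        return xendevicemodel_set_irq_level(dmod, domid, spi, 0);   /* deassert */
    }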
2020-11-30  xen/ioreq: Introduce domain_has_ioreq_server()  (Oleksandr Tyshchenko)

This patch introduces a helper whose main purpose is to check whether a domain is using IOREQ server(s). On Arm the current benefit is to avoid calling vcpu_ioreq_handle_completion() (which implies iterating over all possible IOREQ servers anyway) on every return in leave_hypervisor_to_guest() if there are no active servers for the particular domain. This helper will also be used by one of the subsequent patches on Arm. This involves adding an extra per-domain variable to store the count of servers in use.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
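The helper's shape, per the description above; the counter field name is an assumption:

    /* Cheap check: a per-domain count of IOREQ servers in use,
     * maintained on server creation/destruction. */
    static inline bool domain_has_ioreq_server(const struct domain *d)
    {
        return d->ioreq_server.nr_servers; /* hypothetical field name */
    }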
2020-11-30  xen/mm: Properly handle reference in set_foreign_p2m_entry() on Arm  (Oleksandr Tyshchenko)

This patch implements reference counting of foreign entries in set_foreign_p2m_entry() on Arm. This is a mandatory action if we want to run an emulator (IOREQ server) in a domain other than dom0, as we can't trust it to do the right thing if it is not running in dom0. So we need to grab a reference on the page to avoid it disappearing. It is valid to always pass the "p2m_map_foreign_rw" type to guest_physmap_add_entry() since the current and foreign domains will always be different; the case when they are equal would be rejected by rcu_lock_remote_domain_by_id(). Besides a similar comment in the code, a respective ASSERT() is added to catch incorrect usage in the future. It was tested with the IOREQ feature to confirm that all the pages given to this function belong to a domain, so we can use the same approach as for the XENMAPSPACE_gmfn_foreign handling in xenmem_add_to_physmap_one(). This involves adding an extra parameter for the foreign domain to set_foreign_p2m_entry() and a helper to indicate whether the arch supports the reference counting of foreign entries, so that the restriction for the hardware domain in the common code can be skipped where it does.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-17-git-send-email-olekstysh@gmail.com>
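A simplified sketch of the reference-counting step described here (not the exact Xen code; error handling trimmed, and 'fd'/'gpfn' stand in for the foreign domain and local guest frame):

    struct page_info *page;
    p2m_type_t p2mt;
    int rc;

    /* Take a reference on the foreign page so it can't disappear
     * while mapped in the local domain's p2m. */
    page = get_page_from_gfn(fd, gfn, &p2mt, P2M_ALLOC);
    if ( !page )
        return -EINVAL;

    /* Current and foreign domains are guaranteed to differ here. */
    rc = guest_physmap_add_entry(d, _gfn(gpfn), page_to_mfn(page), 0,
                                 p2m_map_foreign_rw);
    if ( rc )
        put_page(page);   /* drop the reference on failure */

    return rc;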
2020-11-30  xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed  (Oleksandr Tyshchenko)

This patch adds proper handling of the return value of vcpu_ioreq_handle_completion(), which involves using a loop in leave_hypervisor_to_guest(). The reason to use an unbounded loop here is that the vCPU shouldn't continue until the I/O has completed. In Xen's case, if an I/O never completes then it most likely means that something went horribly wrong with the Device Emulator, and it is most likely not safe to continue. So letting the vCPU spin forever if the I/O never completes is safer than letting it continue with the guest in an unclear state, and is the best we can do for now. This isn't an issue for Xen itself, as do_softirq() is called on every loop iteration. In case of failure, the guest will crash and the vCPU will be unscheduled.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
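Schematically, the loop looks like this (a simplified sketch of the control flow described above, not the verbatim exit-path code):

    /* Stay in the exit path until the pending I/O has completed;
     * running softirqs keeps the pCPU responsive meanwhile. */
    while ( !vcpu_ioreq_handle_completion(current) )
        do_softirq();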
2020-11-30  arm/ioreq: Introduce arch specific bits for IOREQ/DM features  (Julien Grall)

This patch adds basic IOREQ/DM support on Arm. The subsequent patches will improve functionality and add the remaining bits. The IOREQ/DM features are supposed to be built with the IOREQ_SERVER option enabled, which is disabled by default on Arm for now. Please note, the "PIO handling" TODO is expected to be left unaddressed for the current series. It is not a big issue for now, while Xen doesn't have support for vPCI on Arm. On Arm64 PIOs are only used for PCI I/O BARs, and we would probably want to expose them to the emulator as PIO accesses to make a DM completely arch-agnostic. So "PIO handling" should be implemented when we add support for vPCI.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-Id: <1606732298-22107-15-git-send-email-olekstysh@gmail.com>
2020-11-30  xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()  (Oleksandr Tyshchenko)

The cmpxchg() in ioreq_send_buffered() operates on memory shared with the emulator domain (and the target domain if the legacy interface is used). In order to be on the safe side we need to switch to guest_cmpxchg64() to prevent a domain from DoSing Xen on Arm. As there is no plan to support the legacy interface on Arm, the page will be mapped in a single domain at a time, so we can use s->emulator in guest_cmpxchg64() safely. Thankfully the only user of the legacy interface is x86 so far, and there is no concern regarding the atomic operations there. Please note that the legacy interface *must* not be used on Arm without revisiting the code.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-14-git-send-email-olekstysh@gmail.com>
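The substance of the change, schematically (surrounding code elided; the operand names follow the description above and should be treated as a sketch):

    /* Before: a plain cmpxchg() on a page the emulator can write,
     * letting a malicious emulator stall Xen in the retry loop.    */
    cmpxchg(&pg->ptrs.full, old.full, new.full);

    /* After: bound the retries against a hostile mapping by
     * attributing the access to the (single) emulator domain.      */
    guest_cmpxchg64(s->emulator, &pg->ptrs.full, old.full, new.full);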
2020-11-30  xen/ioreq: Remove "hvm" prefixes from involved function names  (Oleksandr Tyshchenko)

This patch removes "hvm" prefixes and infixes from IOREQ-related function names in the common code and performs a renaming where appropriate, according to the more consistent new naming scheme:
- IOREQ server functions should start with "ioreq_server_"
- IOREQ functions should start with "ioreq_"
A few function names are clarified to better fit their purposes:
- handle_hvm_io_completion -> vcpu_ioreq_handle_completion
- hvm_io_pending -> vcpu_ioreq_pending
- hvm_ioreq_init -> ioreq_domain_init
- hvm_alloc_ioreq_mfn -> ioreq_server_alloc_mfn
- hvm_free_ioreq_mfn -> ioreq_server_free_mfn

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-13-git-send-email-olekstysh@gmail.com>
2020-11-30  xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu  (Oleksandr Tyshchenko)

IOREQ is a common feature now and these fields will be used on Arm as-is. Move them to the common struct vcpu as part of a new struct vcpu_io and drop the duplicating "io" prefixes. Also move enum hvm_io_completion to xen/sched.h and remove the "hvm" prefixes. This patch completely removes the layering violation in the common code.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-12-git-send-email-olekstysh@gmail.com>
2020-11-30  xen/mm: Make x86's XENMEM_resource_ioreq_server handling common  (Julien Grall)

As the x86 implementation of XENMEM_resource_ioreq_server can be re-used on Arm later on, this patch makes it common and removes arch_acquire_resource as unneeded. Also re-order the #include-s alphabetically. This support is going to be used on Arm to be able to run a device emulator outside of the Xen hypervisor.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-Id: <1606732298-22107-11-git-send-email-olekstysh@gmail.com>
2020-11-30  xen/dm: Make x86's DM feature common  (Julien Grall)

As a lot of x86 code can be re-used on Arm later on, this patch splits devicemodel support into common and arch-specific parts. The common DM feature is supposed to be built with the IOREQ_SERVER option enabled (as well as the IOREQ feature), which is selected for x86's config HVM for now. Also update the XSM code a bit to let the DM op be used on Arm. This support is going to be used on Arm to be able to run a device emulator outside of the Xen hypervisor.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-Id: <1606732298-22107-10-git-send-email-olekstysh@gmail.com>
2020-11-30  xen/ioreq: Move x86's ioreq_server to struct domain  (Oleksandr Tyshchenko)

IOREQ is a common feature now and this struct will be used on Arm as-is. Move it to the common struct domain. This also significantly reduces the layering violation in the common code (*arch.hvm* usage). We don't move ioreq_gfn since it is not used in the common code (the "legacy" mechanism is x86-specific).

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-9-git-send-email-olekstysh@gmail.com>
2020-11-30  xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common  (Oleksandr Tyshchenko)

IOREQ is a common feature now and these structs will be used on Arm as-is. Move them to xen/ioreq.h and remove the "hvm" prefixes.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-8-git-send-email-olekstysh@gmail.com>
2020-11-30  xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common  (Oleksandr Tyshchenko)

IOREQ is a common feature now and these helpers will be used on Arm as-is. Move them to xen/ioreq.h and replace the "hvm" prefixes with "ioreq".

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Paul Durrant <paul@xen.org>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-7-git-send-email-olekstysh@gmail.com>
2020-11-30  xen/ioreq: Make x86's hvm_ioreq_needs_completion() common  (Oleksandr Tyshchenko)

IOREQ is a common feature now and this helper will be used on Arm as-is. Move it to xen/ioreq.h and remove the "hvm" prefix. Although PIO handling on Arm is not introduced with the current series (it will be implemented when we add support for vPCI), technically PIOs exist on Arm (they are simply accessed the same way as MMIO) and it would be better not to diverge now.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Paul Durrant <paul@xen.org>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-6-git-send-email-olekstysh@gmail.com>
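For context, the helper's logic as reconstructed from the moved code (treat as illustrative rather than authoritative): an IOREQ needs a completion pass unless it carried a data pointer or was a PIO write, whose result the vCPU never consumes.

    static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
    {
        return ioreq->state == STATE_IOREQ_READY &&
               !ioreq->data_is_ptr &&
               (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
    }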
2020-11-30  xen/ioreq: Make x86's IOREQ feature common  (Oleksandr Tyshchenko)

As a lot of x86 code can be re-used on Arm later on, this patch moves the previously prepared IOREQ support to the common code (the code movement is a verbatim copy). The "legacy" mechanism of mapping magic pages for the IOREQ servers remains x86-specific and is not exposed to the common code. The common IOREQ feature is supposed to be built with the IOREQ_SERVER option enabled, which is selected for x86's config HVM for now. In order to avoid having a gigantic patch here, the subsequent patches will update the remaining bits in the common code step by step:
- Make IOREQ-related structs/materials common
- Drop the "hvm" prefixes and infixes
- Remove the layering violation by moving corresponding fields out of *arch.hvm* or abstracting away accesses to them
Also include <xen/domain_page.h>, which will be needed on Arm, to avoid touching the common code again when introducing the Arm-specific bits. This support is going to be used on Arm to be able to run a device emulator outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-5-git-send-email-olekstysh@gmail.com>
2020-11-30  x86/ioreq: Provide out-of-line wrapper for handle_mmio()  (Oleksandr Tyshchenko)

IOREQ is about to become a common feature and Arm will have its own implementation. But the name of the function is pretty generic and can be confusing on Arm (we already have a try_handle_mmio()). In order not to rename the function (which is used for a varying set of purposes on x86) globally, and to get a non-confusing variant on Arm, provide a wrapper ioreq_complete_mmio() to be used in common and Arm code.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-4-git-send-email-olekstysh@gmail.com>
2020-11-30  x86/ioreq: Add IOREQ_STATUS_* #define-s and update code for moving  (Oleksandr Tyshchenko)

This patch continues the preparation of x86/hvm/ioreq.c before moving it to the common code. Add the IOREQ_STATUS_* #define-s and update the candidates for moving, since X86EMUL_* shouldn't be exposed to the common code in that form. This support is going to be used on Arm to be able to run a device emulator outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-3-git-send-email-olekstysh@gmail.com>
2020-11-30  x86/ioreq: Prepare IOREQ feature for making it common  (Oleksandr Tyshchenko)

As a lot of x86 code can be re-used on Arm later on, this patch makes some preparation in x86/hvm/ioreq.c before moving it to the common code. This way we will get a verbatim copy for the code movement in a subsequent patch. This patch mostly introduces specific hooks to abstract arch-specific materials, taking into account the requirement to leave the "legacy" mechanism of mapping magic pages for the IOREQ servers x86-specific and not expose it to the common code. These hooks are named according to the more consistent new naming scheme right away (including dropping the "hvm" prefixes and infixes):
- IOREQ server functions should start with "ioreq_server_"
- IOREQ functions should start with "ioreq_"
Other functions will be renamed in subsequent patches. It is worth mentioning that the code which checks the return value of p2m_set_ioreq_server() in hvm_map_mem_type_to_ioreq_server() was folded into arch_ioreq_server_map_mem_type() for a clean split, so p2m_change_entry_type_global() is called with the ioreq_server lock held. Also re-order the #include-s alphabetically. This support is going to be used on Arm to be able to run a device emulator outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Message-Id: <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
2020-11-27  xen/pci: solve compilation error on ARM with HAS_PCI enabled  (Rahul Singh)

If the mem-sharing, mem-paging, or log-dirty functionality is not enabled for an architecture when HAS_PCI is enabled, the compiler will throw an error. Move the code to an x86-specific file to fix the compilation error. Also, modify the code to use likely() in place of unlikely() for each condition to make the code more optimized. No functional change intended.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
2020-11-27  xen/pci: move x86 specific code to x86 directory  (Rahul Singh)

The passthrough/pci.c file is common to all architectures, but there is x86-specific code in this file. Move the x86-specific code to the drivers/passthrough/io.c file to avoid compilation errors for other architectures. As drivers/passthrough/io.c is compiled only for x86, move it to the x86 directory and rename it to hvm.c. No functional change intended.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
2020-11-27  iommu: stop calling IOMMU page tables 'p2m tables'  (Paul Durrant)

It's confusing and not consistent with the terminology introduced with 'dfn_t'. Just call them IOMMU page tables. Also remove a pointless check of the 'acpi_drhd_units' list in vtd_dump_page_table_level(). If the list is empty then IOMMU mappings would not have been enabled for the domain in the first place.

NOTE: All calls to printk() have also been removed from iommu_dump_page_tables(); the implementation specific code is now responsible for all output. The check for the global 'iommu_enabled' has also been replaced by an ASSERT since iommu_dump_page_tables() is not registered as a key handler unless IOMMU mappings are enabled. Error messages are now prefixed with the name of the function.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
2020-11-27  iommu: remove the share_p2m operation  (Paul Durrant)

Sharing of HAP tables is now VT-d specific so the operation is never defined for AMD IOMMU any more. There's also no need to pro-actively set vtd.pgd_maddr when using shared EPT as it is straightforward to simply define a helper function to return the appropriate value in the shared and non-shared cases.

NOTE: This patch also modifies unmap_vtd_domain_page() to take a const pointer since the only thing it calls, unmap_domain_page(), also takes a const pointer.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
2020-11-25  evtchn: double per-channel locking can't hit identical channels  (Jan Beulich)

Inter-domain channels can't possibly be bound to themselves, there's always a 2nd channel involved, even when this is a loopback into the same domain. As a result we can drop one conditional each from the two involved functions. With this, the number of evtchn_write_lock() invocations can also be shrunk by half, swapping the two incoming function arguments instead.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
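The resulting shape of the pairwise locking, per the description (a sketch; SWAP() stands for a generic pointer-swap helper):

    static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
    {
        ASSERT(lchn != rchn);        /* never the same channel */

        if ( lchn > rchn )           /* fixed order avoids ABBA deadlock */
            SWAP(lchn, rchn);

        evtchn_write_lock(lchn);     /* one call site per lock, not two */
        evtchn_write_lock(rchn);
    }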
2020-11-25  mm: check for truncation in vmalloc_type()  (Jan Beulich)

While it's currently implied from the checking xmalloc_array() does, let's make this more explicit in the function itself. As a result both involved local variables don't need to have size_t type anymore. This brings them in line with the rest of the code in this file.

Requested-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
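Illustratively, the kind of truncation check meant here (a sketch, assuming a size_t byte count narrowed to an unsigned int page count via PFN_UP()):

    /* 'pages' is deliberately unsigned int, matching the rest of the
     * file; explicitly reject sizes whose page count doesn't fit,
     * instead of relying on xmalloc_array()'s internal checks.       */
    unsigned int pages = PFN_UP(size);

    if ( pages != PFN_UP(size) )   /* narrowing lost bits */
        return NULL;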
2020-11-25  x86: replace open-coded occurrences of sizeof_field()...  (Paul Durrant)

... with macro evaluations, now that it is available. A recent patch imported the sizeof_field() macro from Linux. This patch makes use of it in places where the construct is currently open-coded.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
2020-11-25  xen/include: import sizeof_field() macro from Linux stddef.h  (Paul Durrant)

Co-locate it with the definition of offsetof() (since this is also in stddef.h in the Linux kernel source). This macro will be needed in a subsequent patch.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
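The macro in question, as defined in Linux's include/linux/stddef.h:

    /* Size of a struct member, without needing an instance of the struct. */
    #define sizeof_field(TYPE, MEMBER) sizeof(((TYPE *)0)->MEMBER)

For example, sizeof_field(struct vcpu, vcpu_id) evaluates at compile time to the size of the vcpu_id member, replacing open-coded sizeof(((struct vcpu *)0)->vcpu_id) constructs.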
2020-11-25  xen/arm: Add workaround for Cortex-A55 erratum #1530923  (Bertrand Marquis)

On the Cortex A55, TLB entries can be allocated by a speculative AT instruction. If this is happening during a guest context switch with an inconsistent page table state in the guest, TLBs with wrong values might be allocated. The ARM64_WORKAROUND_AT_SPECULATE workaround is used as for erratum 1165522 on Cortex A76 or Neoverse N1. This change is also introducing the MIDR identifier for the Cortex-A55.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Julien Grall <jgrall@amazon.com>
2020-11-24  memory: fix off-by-one in XSA-346 change  (Jan Beulich)

The comparison against ARRAY_SIZE() needs to be >= in order to avoid overrunning the pages[] array.

This is XSA-355.

Fixes: 5777a3742d88 ("IOMMU: hold page ref until after deferred TLB flush")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
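The general shape of such an off-by-one (a generic sketch, not the literal patch hunk):

    /* 'idx > ARRAY_SIZE(pages)' lets idx == ARRAY_SIZE(pages) through,
     * writing one slot past the end; '>=' is the correct bound.        */
    if ( idx >= ARRAY_SIZE(pages) )
        break;
    pages[idx++] = page;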
2020-11-24  ns16550: drop stray "#ifdef CONFIG_HAS_PCI"  (Jan Beulich)

There's no point wrapping the function invocation when
- the function body is already suitably wrapped,
- the function itself is unconditionally available.

Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
2020-11-24ns16550: "com<N>=" command line options are x86-specificJan Beulich
Pure code motion (plus the addition of "#ifdef CONFIG_X86); no functional change intended. Reported-by: Julien Grall <julien@xen.org> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Rahul Singh <rahul.singh@arm.com>
2020-11-24  ns16550: move PCI arrays next to the function using them  (Jan Beulich)

Pure code motion; no functional change intended.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
2020-11-24  x86/DMI: fix table mapping when one lives above 1Mb  (Jan Beulich)

Use of __acpi_map_table() is kind of an abuse here, and doesn't work anymore for the majority of cases if any of the tables lives outside the low first Mb. Keep this (ab)use only prior to reaching SYS_STATE_boot, primarily to avoid needing to audit whether any of the calls here can happen this early in the first place; quite likely this isn't necessary at all - at least dmi_scan_machine() gets called late enough.

For the "normal" case, call __vmap() directly, despite effectively duplicating acpi_os_map_memory(). There's one difference though: We shouldn't need to establish UC- mappings, WP or r/o WB mappings ought to be fine, as the tables are going to live in either RAM or ROM. Short of having PAGE_HYPERVISOR_WP and wanting to map the tables r/o anyway, use the latter of the two options. The r/o mapping implies some constification of code elsewhere in the file. For code touched anyway also switch to void (where possible) or uint8_t.

Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
2020-11-24  x86/ACPI: fix mapping of FACS  (Jan Beulich)

acpi_fadt_parse_sleep_info() runs when the system is already in SYS_STATE_boot. Hence its direct call to __acpi_map_table() won't work anymore. This call should probably have been replaced long ago already, as the layering violation hasn't been necessary for quite some time.

Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
2020-11-24  x86/DMI: fix SMBIOS pointer range check  (Jan Beulich)

Forever since its introduction this has been using an inverted relation operator.

Fixes: 54057a28f22b ("x86: support SMBIOS v3")
Signed-off-by: Jan Beulich <JBeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
2020-11-24  xen/events: access last_priority and last_vcpu_id together  (Juergen Gross)

The queue for a fifo event depends on the vcpu_id and the priority of the event. When sending an event it might happen that the event needs to change queues, and the old queue needs to be kept for keeping the links between queue elements intact. For this purpose the event channel contains last_priority and last_vcpu_id elements for being able to identify the old queue. In order to avoid races, always access last_priority and last_vcpu_id with a single atomic operation, avoiding any inconsistencies.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
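A sketch of the technique (the union layout and names are an assumption, and Xen uses its own read_atomic()/write_atomic() helpers rather than the GCC builtin shown):

    #include <stdint.h>

    /* Overlay both sub-fields with one 32-bit word so a single atomic
     * load/store replaces two independent, raceable accesses. */
    union evtchn_fifo_lastq {
        uint32_t raw;
        struct {
            uint8_t  last_priority;
            uint16_t last_vcpu_id;
        };
    };

    static void set_lastq(volatile uint32_t *slot, uint8_t prio,
                          uint16_t vcpu)
    {
        union evtchn_fifo_lastq q = {
            .last_priority = prio,
            .last_vcpu_id  = vcpu,
        };

        /* Both values become visible in one write. */
        __atomic_store_n(slot, q.raw, __ATOMIC_RELAXED);
    }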
2020-11-20  amd-iommu: Fix Guest CR3 Table following c/s 3a7947b6901  (Andrew Cooper)

"amd-iommu: use a bitfield for DTE" renamed iommu_dte_set_guest_cr3()'s gcr3 parameter to gcr3_mfn but ended up with an off-by-PAGE_SIZE error when extracting bits from the address.

get_guest_cr3_from_dte() and iommu_dte_set_guest_cr3() are (almost) getters and setters for the same field, so should live together. Rename them to dte_{get,set}_gcr3_table() to specifically avoid 'guest_cr3' in the name. This field actually points to a table in memory containing an array of guest CR3 values. As these functions are used for different logical indirections, they shouldn't use gfn/mfn terminology for their parameters. Switch them to use straight uint64_t full addresses.

Fixes: 3a7947b6901 ("amd-iommu: use a bitfield for DTE")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
2020-11-20  AMD/IOMMU: avoid UB in guest CR3 retrieval  (Jan Beulich)

Found by looking for patterns similar to the one Julien did spot in pci_vtd_quirks(). (Not that it matters much here, considering the code is dead right now.)

Fixes: 3a7947b69011 ("amd-iommu: use a bitfield for DTE")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
2020-11-20  SVM: avoid UB in intercept mask definitions  (Jan Beulich)

Found by looking for patterns similar to the one Julien did spot in pci_vtd_quirks().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
2020-11-20  x86/nmi: avoid UB for P4-era watchdogs  (Jan Beulich)

Found by looking for patterns similar to the one Julien did spot in pci_vtd_quirks().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
2020-11-20  lib: split _ctype[] into its own object, under lib/  (Jan Beulich)

This is, besides tidying, in preparation for starting to use an archive rather than an object file for generic library code which arches (or even specific configurations within a single arch) may or may not need.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
2020-11-19  xen/arm: acpi: Allow Xen to boot with ACPI 5.1  (Julien Grall)

At the moment Xen requires the FADT ACPI table to be at least version 6.0, apparently because of some reliance on other ACPI v6.0 features. But actually this is overzealous, and Xen now works fine with ACPI v5.1. Let's relax the version check for the FADT table to allow QEMU to run the hypervisor with ACPI.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
2020-11-19  xen/arm: gic: acpi: Use the correct length for the GICC structure  (Julien Grall)

The length of the GICC structure in the MADT ACPI table differs between version 5.1 and 6.0, although there are no other relevant differences. Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to overcome this issue.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
2020-11-19  xen/arm: gic: acpi: Guard helpers to build the MADT with CONFIG_ACPI  (Julien Grall)

gic_make_hwdom_madt() and gic_get_hwdom_madt_size() are ACPI specific. While they build fine today, this will change in a follow-up patch. Rather than trying to fix the build on ACPI, it is best to avoid compiling the helpers and the associated callbacks when CONFIG_ACPI=n.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
2020-11-18  xen/arm: Add workaround for Cortex-A76/Neoverse-N1 erratum #1286807  (Michal Orzel)

On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a virtual address for a cacheable mapping of a location is being accessed by a core while another core is remapping the virtual address to a new physical page using the recommended break-before-make sequence, then under very rare circumstances TLBI+DSB completes before a read using the translation being invalidated has been observed by other observers. The workaround repeats the TLBI+DSB operation for all the TLB flush operations. While this is strictly not necessary, we don't want to take any risk.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Julien Grall <jgrall@amazon.com>
2020-11-18  xen/x86: issue pci_serr error message via NMI continuation  (Juergen Gross)

Instead of using a softirq, pci_serr_error() can use NMI continuation for issuing an error message.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
2020-11-18  xen/oprofile: use NMI continuation for sending virq to guest  (Juergen Gross)

Instead of calling send_guest_vcpu_virq() from NMI context, use the NMI continuation framework for that purpose. This avoids taking locks in NMI mode.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>