commit fdd2bbd9e18e813faa3880f820c1b674267eec0c
Author: Kyle McMartin <kmcmarti@redhat.com>
Date:   Wed Feb 18 09:49:05 2015 -0500

    fixes for xgene enet

commit ac44fa9c24a21d78e8fff79c0dab3deea490d782
Author: Kyle McMartin <kmcmarti@redhat.com>
Date:   Tue Feb 17 12:04:33 2015 -0500

    fixes for HEAD

commit 7320bdde2a2abd1267f0a138ed1c19791be519c5
Merge: 9d1c60d 796e1c5
Author: Kyle McMartin <kmcmarti@redhat.com>
Date:   Tue Feb 17 11:11:12 2015 -0500

    Merge branch 'master' into devel
    
    Conflicts:
    	arch/arm64/kernel/efi.c
    	arch/arm64/kernel/pci.c
    	arch/arm64/mm/dma-mapping.c
    	arch/x86/pci/mmconfig-shared.c
    	drivers/ata/ahci_xgene.c
    	drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
    	drivers/net/ethernet/apm/xgene/xgene_enet_main.c
    	drivers/tty/serial/8250/8250_dw.c
    	include/linux/pci.h

commit 9d1c60d3f33dd3b331e1365dba5c8c5de50db77c
Author: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Date:   Wed Feb 4 19:06:03 2015 +0200

    firmware: dmi-sysfs: add SMBIOS entry point area attribute
    
    Some utilities, like dmidecode and smbios, need to access the SMBIOS
    entry point area in order to get information like the SMBIOS version,
    table size, etc. Currently this is done via /dev/mem. But when
    /dev/mem usage is disabled, the utilities have to use the dmi sysfs
    interface instead, which does not expose the SMBIOS entry point. So
    this patch adds the SMBIOS area to dmi-sysfs in order to allow the
    utilities in question to work correctly with the dmi sysfs interface.
    
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>

commit 9ede97bc136e217ea00406a3388c6082a6a8d049
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Feb 16 13:46:56 2015 -0500

    ata: ahci_platform: DO NOT UPSTREAM Add HID for AMD seattle platform
    
    Add a HID match to get modular builds working. The class matching
    works for built-in drivers, but not yet for modules.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 794db044849e4388586abda59a262d34f18f20fa
Author: Mark Langsdorf <mlangsdo@redhat.com>
Date:   Thu Nov 13 21:46:59 2014 -0500

    usb: make xhci platform driver use 64 bit or 32 bit DMA
    
    The xhci platform driver needs to work on systems that either only
    support 64-bit DMA or only support 32-bit DMA. Attempt to set a
    coherent dma mask for 64-bit DMA, and attempt again with 32-bit
    DMA if that fails.
    
    Signed-off-by: Mark Langsdorf <mlangsdo@redhat.com>
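
    A minimal sketch of the probe-time fallback described above, using the
    standard DMA API (illustrative rather than the exact hunk):

      /* Prefer a 64-bit coherent DMA mask; retry with 32 bits if the
       * platform or bus cannot address that range. */
      ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
      if (ret)
              ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
      if (ret)
              return ret;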

commit da4d6ecc4b7b5802b7caee5844933abaca6f7de4
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Nov 10 16:31:05 2014 -0500

    iommu/arm-smmu: fix NULL dereference with ACPI PCI devices
    
    Fix a NULL dereference in find_mmu_master which occurs when
    booting with ACPI. In that case, PCI bridges will not have
    an of_node. Add a check for a NULL of_node and bail out if that
    is the case.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>
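
    The guard amounts to the following at the top of the lookup, which
    otherwise walks a tree keyed on of_node (illustrative):

      /* ACPI-enumerated PCI bridges carry no of_node; bail out rather
       * than dereference a NULL pointer during the lookup. */
      if (!dev_node)
              return NULL;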

commit eb81ac50a2193a52251c933c07ddc05a39717c7c
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Nov 10 21:35:11 2014 -0500

    DO NOT UPSTREAM - arm64: fix dma_ops for ACPI and PCI devices
    
    Commit 2189064795dc3fb4101e5:
    
      arm64: Implement set_arch_dma_coherent_ops() to replace bus notifiers
    
    removed the bus notifiers from dma-mapping.c. This patch
    adds the notifier back for ACPI and PCI devices until a
    better permanent solution is worked out.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit efaf1674559c92ed2a862cfc44d51967ccd99928
Author: Mark Salter <msalter@redhat.com>
Date:   Thu Aug 14 12:32:13 2014 -0400

    acpi: add utility to test for device dma coherency
    
    ACPI 5.1 adds a _CCA object to indicate memory coherency
    of a bus master device. It is an integer with zero meaning
    non-coherent and one meaning coherent. This attribute may
    be inherited from a parent device. It may also be missing
    entirely, in which case, an architecture-specific default
    is assumed.
    
    This patch adds a utility function to parse a device handle
    (and its parents) for a _CCA object and return the coherency
    attribute if found.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>
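
    A hedged sketch of such a utility; the function name and return
    convention are illustrative, not necessarily the exact ones added:

      static int acpi_check_coherency(acpi_handle handle, int *val)
      {
              unsigned long long data;

              /* Walk the device and its parents until a _CCA object is
               * found or the namespace root is reached. */
              while (handle) {
                      if (ACPI_SUCCESS(acpi_evaluate_integer(handle, "_CCA",
                                                             NULL, &data))) {
                              *val = data;
                              return 0;
                      }
                      if (ACPI_FAILURE(acpi_get_parent(handle, &handle)))
                              break;
              }
              return -ENODEV; /* no _CCA found; use the arch default */
      }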

commit 907e55f6d08572382856382ea0726c377ced90fc
Author: Donald Dutile <ddutile@redhat.com>
Date:   Sat Nov 22 12:08:53 2014 -0500

    DO NOT UPSTREAM - arm64: kvm: Change vgic resource size error to info
    
    A new check was added upstream to ensure a full kernel page
    was allocated to the vgic. The check fails KVM configuration
    if the condition isn't met. An arm64 kernel with 64K page size
    and certain early firmware will fail this test. Change the
    error to an info message and continue configuration for now.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 7572b5b209ececd0a7a7c94e8104b829b0f22d5e
Author: Wei Huang <wei@redhat.com>
Date:   Sat Nov 22 10:38:45 2014 -0500

    KVM/ACPI: Enable ACPI support for KVM virt GIC
    
    This patch enables ACPI support for the KVM virtual GIC. KVM parses
    the ACPI tables for virtual GIC information when a DT is not
    present. This is done by retrieving the information defined in the
    generic_interrupt entries of the MADT table.
    
    Note: Alexander Spyridakis from Virtual Open Systems posted a
    _very_ similar patch to enable acpi-kvm. This patch borrows some
    ideas from his patch.
    
    Signed-off-by: Wei Huang <wei@redhat.com>
    [combined with subsequent patch to use acpi_disabled]
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit f40884650cda7b3f255a72f5b8ed554b1e3d660e
Author: Wei Huang <wei@redhat.com>
Date:   Sat Nov 22 10:18:57 2014 -0500

    KVM/ACPI: Enable ACPI support for virt arch timer
    
    This patch enables ACPI support for the KVM virtual arch_timer. It
    allows KVM to parse the ACPI tables for the virt arch_timer PPI when
    a DT is not present. This is done by retrieving the information from
    the arch_timer_ppi array in the arm_arch_timer driver.
    
    Signed-off-by: Wei Huang <wei@redhat.com>
    [combined with subsequent patch to use acpi_disabled]
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 541a331980be3eec12baa65040ce7c2b354462af
Author: Mark Salter <msalter@redhat.com>
Date:   Tue Oct 7 12:54:08 2014 -0400

    xgene acpi network - first cut

commit a90c889901a6455730539cb7a0935df27f1a68de
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Feb 16 16:39:38 2015 -0500

    net: phy: amd: DO NOT UPSTREAM: Add support for A0 silicon xgbe phy
    
    From: Tom Lendacky <thomas.lendacky@amd.com>
    Support A0 silicon xgbe phy
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit c72678575de9cfe30cfc2da2fb15b90692fd2068
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Feb 16 16:37:15 2015 -0500

    net: amd: DO NOT UPSTREAM: Add xgbe-a0 driver
    
    From: Tom Lendacky <thomas.lendacky@amd.com>
    Add support for A0 silicon xgbe driver
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 72b6be9435d46fe4a3e09e5e01317f8a0951624d
Author: Graeme Gregory <graeme.gregory@linaro.org>
Date:   Fri Jul 26 17:55:02 2013 +0100

    virtio-mmio: add ACPI probing
    
    Added the match table and pointers for ACPI probing to the driver.
    
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>

commit bcb68db63adaf54843fa5c8a559ba48034886576
Author: Graeme Gregory <graeme.gregory@linaro.org>
Date:   Wed Jul 24 11:29:48 2013 +0100

    net: smc91x: add ACPI probing support.
    
    Add device ID LINA0003 for this device and add the match table.
    
    As it's a platform device it needs no other code and will be probed
    by acpi_platform once the device ID is added.
    
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>

commit 2df0d5a82afba0e0d9c55977ae7e7483e0f4080f
Author: Mark Salter <msalter@redhat.com>
Date:   Sun Sep 14 09:44:44 2014 -0400

    Revert "ahci_xgene: Skip the PHY and clock initialization if already configured by the firmware."
    
    This reverts commit 0bed13bebd6c99d097796d2ca6c4f10fb5b2eabc.
    
    Temporarily revert for backwards compatibility with rh-0.12-1 firmware

commit 52e7166eb5ecf60fcc1d557ecfe8447fd83c55d9
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Aug 11 13:46:43 2014 -0400

    xgene: add support for ACPI-probed serial port

commit 862142c487e37cea64a9fef9fbd37816395de142
Author: Mark Salter <msalter@redhat.com>
Date:   Sat Aug 9 12:01:20 2014 -0400

    sata/xgene: support acpi probing
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 7c26e9688ab62d89c5027127482dc40095a9d5ba
Author: Mark Salter <msalter@redhat.com>
Date:   Thu Sep 18 15:05:23 2014 -0400

    arm64: add sev to parking protocol
    
    Parking protocol wakes secondary cores with an interrupt.
    This patch adds an additional sev() to send an event. This
    is a temporary hack for APM Mustang board and not intended
    for upstream.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit be1439218c1ce70298d976f5fad95b85644ae0fa
Author: Mark Salter <msalter@redhat.com>
Date:   Tue Sep 9 22:59:48 2014 -0400

    arm64: add parking protocol support
    
    This is a first-cut effort at parking protocol support. It is
    very much a work in progress (as is the spec it is based on).
    This code deviates from the current spec in a number of ways
    to work around current firmware issues and issues with kernels
    using 64K page sizes.
    
    caveat utilitor
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 8a0834f58a9484d5983b349779efe2921db23c30
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Oct 13 20:49:43 2014 -0400

    arm64/perf: add ACPI support
    
    Add ACPI support to the perf_event driver. This involves getting
    the irq info from the MADT and registering a platform device
    when booting with ACPI. This requires updated firmware on Mustang
    and FM for the PMU driver to get correct info from ACPI.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 0a8be018986d895852300cb33de2d316e1029f77
Author: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Date:   Mon Feb 9 00:20:04 2015 +0800

    ata: ahci_platform: Add ACPI _CLS matching
    
    This patch adds ACPI support for the AHCI platform driver, which uses
    the _CLS method to match the device.
    
    The following is an example of ASL structure in DSDT for a SATA controller,
    which contains _CLS package to be matched by the ahci_platform driver:
    
      Device (AHC0) // AHCI Controller
      {
        Name(_HID, "AMDI0600")
        Name (_CCA, 1)
        Name (_CLS, Package (3)
        {
          0x01, // Base Class: Mass Storage
          0x06, // Sub-Class: serial ATA
          0x01, // Interface: AHCI
        })
        Name (_CRS, ResourceTemplate ()
        {
          Memory32Fixed (ReadWrite, 0xE0300000, 0x00010000)
          Interrupt (ResourceConsumer, Level, ActiveHigh, Exclusive,,,) { 387 }
        })
      }
    
    Also, since the ATA driver should not require PCI support for ATA_ACPI,
    this patch removes that dependency in drivers/ata/Kconfig.
    
    Acked-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>

commit c89b443a5d643c81be1eef7817ea5e91f9d7e2fd
Author: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Date:   Mon Feb 9 00:20:03 2015 +0800

    ACPI / scan: Add support for ACPI _CLS device matching
    
    Device drivers typically use ACPI _HIDs/_CIDs listed in struct device_driver
    acpi_match_table to match devices. However, for generic drivers, we do
    not want to list _HID for all supported devices, and some device classes
    do not have a _CID (e.g. SATA, USB). Instead, we can leverage ACPI _CLS,
    which specifies the PCI-defined class code (i.e. base class, subclass
    and programming interface).
    
    This patch adds support for matching ACPI devices using the _CLS method.
    
    Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
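
    With this in place, a generic driver can match on the class code
    instead of listing every _HID. A sketch of the driver-side table,
    using the AHCI class from the example in the previous commit (the
    ACPI_DEVICE_CLASS() macro name follows the eventual upstream form of
    this interface; the version in this series may differ):

      static const struct acpi_device_id ahci_acpi_match[] = {
              /* Base class 0x01 (mass storage), subclass 0x06 (SATA),
               * interface 0x01 (AHCI), matched with a full 24-bit mask. */
              { ACPI_DEVICE_CLASS(0x010601, 0xffffff) },
              { }
      };
      MODULE_DEVICE_TABLE(acpi, ahci_acpi_match);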

commit ca42ac9e3bcde0394316ed697a05146730ed8c06
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Sep 8 17:04:28 2014 -0400

    acpi/arm64: NOT FOR UPSTREAM - remove EXPERT dependency
    
    For convenience to keep existing configs working, remove
    CONFIG_EXPERT dependency from ACPI for ARM64. This shouldn't
    go upstream just yet.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 574138b1f1aacdb83c362a44bc20e7bc4c9ddfa3
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Jan 19 18:15:16 2015 -0500

    DO NOT UPSTREAM - pci/xgene: Provide fixup for ACPI MCFG support
    
    Xgene doesn't decode bus bits of mmconfig region and only
    supports devfn 0 of bus 0. For other buses/devices, some
    internal registers need to be poked. This patch provides
    a fixup to support ACPI MCFG tables. This is a horrible
    hack allowing the hardware to be used for PCI testing, but
    it is not intended to be a long term patch.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 2963b36f72691836ffe87beb2a1819dcf46c8b5f
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Jan 19 17:43:54 2015 -0500

    DO NOT UPSTREAM - provide hook for MCFG fixups
    
    This is a temporary mechanism needed by at least one early
    arm64 hardware platform with broken MCFG support. It is
    not intended for upstream and will go away as soon as newer
    hardware with fully compliant ECAM becomes available.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit e1dee689c6b97065d8de205f78218cb348c7f600
Author: Mark Salter <msalter@redhat.com>
Date:   Wed Feb 11 14:46:03 2015 -0500

    arm64/pci/acpi: initial support for ACPI probing of PCI
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 507ba9b44cb9454d45b79f93e2ac4f7e45233f98
Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Date:   Wed Nov 19 17:04:51 2014 +0100

    pci, acpi: Share ACPI PCI config space accessors.
    
    MMCFG can be used by all architectures which support ACPI.
    ACPI mandates MMCFG to describe PCI config space ranges, which means
    we should use the MMCONFIG accessors by default.
    
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
    Tested-by: Hanjun Guo <hanjun.guo@linaro.org>

commit cc00e957e27e141416df5a48cc2bda4437992c67
Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Date:   Wed Nov 19 17:04:50 2014 +0100

    x86, acpi, pci: mmconfig_64.c becomes default implementation for arch agnostic low-level direct PCI config space accessors via MMCONFIG.
    
    Note that 32-bit x86 machines still have their own low-level direct
    PCI config space accessors.
    
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>

commit 18343755a531c6dafdfd390bef7e62f2137e6836
Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Date:   Wed Nov 19 17:04:49 2014 +0100

    x86, acpi, pci: mmconfig_{32,64}.c code refactoring - remove code duplication.
    
    The mmconfig_64.c version is going to be the default implementation
    of the arch-agnostic low-level direct PCI config space accessors via
    MMCONFIG. However, it currently initializes the raw_pci_ext_ops
    pointer, which is used in x86-specific code only; moreover,
    mmconfig_32.c is doing the same thing at the same time.
    
    Move that initialization to mmconfig-shared.c so it becomes common to
    both, and mmconfig_64.c turns out to be purely arch-agnostic.
    
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
    Tested-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 36db151289ae6e5cb09a86c23127a9cd6e10129c
Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Date:   Wed Nov 19 17:04:48 2014 +0100

    x86, acpi, pci: Move PCI config space accessors.
    
    We are going to use the mmio_config_*() naming convention across all
    architectures. Currently these accessors belong to the asm/pci_x86.h
    header, which should be included only by x86-specific files. From now
    on, they live in the asm/pci.h header, which non-architecture code
    can include much more easily.
    
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
    Tested-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 72208d51b9b656ac97fc632b015af87e702bd352
Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Date:   Wed Nov 19 17:04:47 2014 +0100

    x86, acpi, pci: Move arch-agnostic MMCFG code out of arch/x86/ directory
    
    The MMCFG table seems to be architecture-independent, and it makes
    sense to share the common code across all architectures. The functions
    that may need architecture-specific actions get default __weak
    implementations.
    
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
    Tested-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 4620b275253d117ba30e996bfc890bcc472165ba
Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Date:   Wed Nov 19 17:04:46 2014 +0100

    x86, acpi, pci: Reorder logic of pci_mmconfig_insert() function
    
    This patch is the first step of MMCONFIG refactoring process.
    
    Code that uses pci_mmcfg_lock will be moved to a common file and
    become accessible to all architectures. pci_mmconfig_insert() cannot
    be moved so easily, since it mixes generic mmcfg code with x86-specific
    logic inside a mutually exclusive block guarded by pci_mmcfg_lock.
    
    To get rid of that constraint, we reorder actions as follows:
    1. mmconfig entry allocation can be done first; it does not need the lock
    2. insertion into iomem_resource has its own lock; no need to wrap it
    in the mutex
    3. insertion into the mmconfig list can be done as the final stage in
    a separate function (a candidate for further refactoring)
    
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
    Tested-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 1686ea5a6feed1c661173e66a2996fc6286cd7c3
Author: Suravee Suthikulanit <suravee.suthikulpanit@amd.com>
Date:   Tue Jan 20 23:49:46 2015 -0600

    DO NOT UPSTREAM YET: Introducing ACPI support for GICv2m
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit be2b8706d9bf4e69cc3fd63709859a148c0fdb6e
Author: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Date:   Tue Feb 3 12:55:57 2015 -0500

    DO NOT UPSTREAM YET: Clean up GIC irq domain for ACPI
    
    Instead of using the irq_default_domain, define the acpi_irq_domain.
    This still has the same assumption that ACPI only supports a single
    GIC domain.
    
    Also, rename acpi_gic_init() to acpi_irq_init().
    
    From: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    [Add extern declaration for gicv2m_acpi_init()]
    [Do not rename acpi_gic_init()]
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 6eb83df4e91d30b66fbfa16292e09677f683ee93
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Thu Jan 8 17:06:12 2015 +0000

    PCI/MSI: Drop domain field from msi_controller
    
    The only two users of that field are not using the msi_controller
    structure anymore, so drop it altogether.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

commit 8326736fc8b61b9422756c0bf0e4c5dd9dc1ce80
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Thu Jan 8 17:06:11 2015 +0000

    irqchip: gicv3-its: Get rid of struct msi_controller
    
    The GICv3 ITS only uses the msi_controller structure as a way
    to match the PHB with its MSI HW, and thus the msi_domain.
    But now that we can directly associate an msi_domain with a device,
    there is no use keeping this msi_controller around.
    
    Just remove all traces of msi_controller from the driver.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

commit fded15d5b5e0492f64564824111f2f901911b5de
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Thu Jan 8 17:06:10 2015 +0000

    irqchip: GICv2m: Get rid of struct msi_controller
    
    GICv2m only uses the msi_controller structure as a way to match
    the PHB with its MSI HW, and thus the msi_domain. But now that
    we can directly associate an msi_domain with a device, there is
    no use keeping this msi_controller around.
    
    Just remove all traces of msi_controller from the driver.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

commit 4b34322f05a85bb863f3d013776588677e326767
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Thu Jan 8 17:06:09 2015 +0000

    PCI/MSI: Let pci_msi_get_domain use struct device's msi_domain
    
    Now that we can easily find which MSI domain a PCI device is
    using, use dev_get_msi_domain as a way to retrieve the information.
    
    The original code is still used as a fallback.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
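
    The resulting lookup is roughly the following; the fallback shown is
    the legacy per-bridge msi_controller path, which a later patch in
    this series removes:

      static struct irq_domain *pci_msi_get_domain(struct pci_dev *dev)
      {
              struct irq_domain *domain;

              /* Per-device domain, populated at enumeration time. */
              domain = dev_get_msi_domain(&dev->dev);
              if (domain)
                      return domain;

              /* Original per-host-bridge lookup as a fallback. */
              return dev->bus->msi ? dev->bus->msi->domain : NULL;
      }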

commit cc60a2e8d9efe415509b532a6fbb04ce1f43b1c5
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Thu Jan 8 17:06:08 2015 +0000

    PCI/MSI: of: Allow msi_domain lookup using the PHB node
    
    A number of platforms do not need to use the msi-parent property,
    as the host bridge itself provides the MSI controller.
    
    Allow this configuration by performing an irq domain lookup based
    on the PHB node if it doesn't have a valid msi-parent property.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

commit af6b972b6cabcda796b88b52111d2f5883006fbc
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Thu Jan 8 17:06:07 2015 +0000

    PCI/MSI: of: Add support for OF-provided msi_domain
    
    In order to populate the PHB msi_domain, use the "msi-parent"
    attribute to look up a corresponding irq domain. If found,
    this is our MSI domain.
    
    This gets plugged into the core PCI code.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

commit 3e4b7b8d1bc9927144be8e123dfcc790e7ff1f46
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Thu Jan 8 17:06:06 2015 +0000

    PCI/MSI: Add hooks to populate the msi_domain field
    
    In order to be able to populate the device msi_domain field,
    add the necessary hooks to propagate the PHB msi_domain across
    secondary buses to devices.
    
    So far, nobody populates the initial msi_domain.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

commit e44063e72aecf5273d2b2a72afdef227cf34b92e
Author: Marc Zyngier <marc.zyngier@arm.com>
Date:   Thu Jan 8 17:06:05 2015 +0000

    device core: Introduce per-device MSI domain pointer
    
    As MSI-type features are creeping into non-PCI devices, it is
    starting to make sense to give our struct device some form of
    support for this, by allowing a pointer to an MSI irq domain to
    be set/retrieved.
    
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
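
    The accessors themselves are trivial; a sketch (upstream the field is
    conditionally compiled, which is elided here):

      static inline struct irq_domain *dev_get_msi_domain(const struct device *dev)
      {
              return dev->msi_domain;
      }

      static inline void dev_set_msi_domain(struct device *dev,
                                            struct irq_domain *d)
      {
              dev->msi_domain = d;
      }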

commit 2b88e0efc0079e2914b8d29eda7178d319bc451d
Author: Al Stone <al.stone@linaro.org>
Date:   Mon Feb 2 20:45:49 2015 +0800

    arm64: ACPI: additions of ACPI documentation for arm64
    
    Two more documentation files are also being added:
    (1) A verbatim copy of the "Why ACPI on ARM?" blog posting by Grant Likely,
        which is also summarized in arm-acpi.txt, and
    
    (2) A section by section review of the ACPI spec (acpi_object_usage.txt)
        to note recommendations and prohibitions on the use of the numerous
        ACPI tables and objects.  This sets out Linux's current expectations
        of the firmware very explicitly (or as explicitly as I can, for
        now).
    
    CC: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    CC: Yi Li <phoenix.liyi@huawei.com>
    CC: Mark Langsdorf <mlangsdo@redhat.com>
    CC: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Al Stone <al.stone@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 11898311c598c099fff3be2fc80296ba0ce04760
Author: Graeme Gregory <graeme.gregory@linaro.org>
Date:   Mon Feb 2 20:45:48 2015 +0800

    Documentation: ACPI for ARM64
    
    Add documentation for the guidelines of how to use ACPI
    on ARM64.
    
    Reviewed-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Reviewed-by: Yi Li <phoenix.liyi@huawei.com>
    Reviewed-by: Mark Langsdorf <mlangsdo@redhat.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
    Signed-off-by: Al Stone <al.stone@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 15c86f7418a04b2d6f10f6c7c22c09b2eb4f184a
Author: Graeme Gregory <graeme.gregory@linaro.org>
Date:   Mon Feb 2 20:45:47 2015 +0800

    ARM64 / ACPI: Enable ARM64 in Kconfig
    
    Add Kconfigs to build ACPI on ARM64, and make ACPI available on ARM64.
    
    The acpi_idle driver is x86/IA64-dependent now, so make
    CONFIG_ACPI_PROCESSOR depend on X86 || IA64, and implement it on ARM64
    in the future.
    
    CC: Rafael J. Wysocki <rjw@rjwysocki.net>
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Grant Likely <grant.likely@linaro.org>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
    Signed-off-by: Al Stone <al.stone@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 7f6b100b64905fcb75d6daadf794a20f6dce2f1c
Author: Mark Salter <msalter@redhat.com>
Date:   Tue Feb 3 10:51:16 2015 -0500

    acpi: fix acpi_os_ioremap for arm64
    
    The acpi_os_ioremap() function may be used to map normal RAM or IO
    regions. The current implementation simply uses ioremap_cache(). This
    will work for some architectures, but arm64 ioremap_cache() cannot be
    used to map IO regions which don't support caching. So for arm64, use
    ioremap() for non-RAM regions.
    
    CC: Rafael J Wysocki <rjw@rjwysocki.net>
    Signed-off-by: Mark Salter <msalter@redhat.com>
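
    The arm64 override therefore looks roughly like this, with
    page_is_ram() standing in for whatever RAM test the patch actually
    uses:

      static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
                                                  acpi_size size)
      {
              /* Device memory must not be mapped cacheable on arm64, so
               * reserve ioremap_cache() for real RAM (e.g. ACPI tables). */
              if (!page_is_ram(phys >> PAGE_SHIFT))
                      return ioremap(phys, size);

              return ioremap_cache(phys, size);
      }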

commit 49e6c38cd04dbc34d1582209cf54ed6d0810f10b
Author: Al Stone <al.stone@linaro.org>
Date:   Mon Feb 2 20:45:46 2015 +0800

    ARM64 / ACPI: Select ACPI_REDUCED_HARDWARE_ONLY if ACPI is enabled on ARM64
    
    ACPI reduced hardware mode is disabled by default, but ARM64
    can only run properly in ACPI hardware reduced mode, so select
    ACPI_REDUCED_HARDWARE_ONLY if ACPI is enabled on ARM64.
    
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Grant Likely <grant.likely@linaro.org>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Al Stone <al.stone@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 2b40359067fcd5b12fcd9c0b50064bcc6d4ed8a6
Author: Hanjun Guo <hanjun.guo@linaro.org>
Date:   Mon Feb 2 20:45:45 2015 +0800

    clocksource / arch_timer: Parse GTDT to initialize arch timer
    
    Use the information presented by the GTDT (Generic Timer Description
    Table) to initialize the arch timer (the non-memory-mapped one).
    
    CC: Daniel Lezcano <daniel.lezcano@linaro.org>
    Originally-by: Amit Daniel Kachhap <amit.daniel@samsung.com>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 19b24b7ca4887f2e6d600d380a59d32478545499
Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Date:   Mon Feb 2 20:45:44 2015 +0800

    irqchip: Add GICv2 specific ACPI boot support
    
    An ACPI kernel uses the MADT table for proper GIC initialization. It
    needs to parse the GIC-related subtables, collect the CPU interface
    and distributor addresses, and call the driver initialization function
    (which is hardware-abstraction agnostic). FDT initializes GICv1/2 in
    a similar way.
    
    NOTE: This commit only allows basic GICv1/2 functionality to be
    initialized. While a simple GICv2 init call is used for now, any
    further GIC features require generic infrastructure for proper ACPI
    irqchip initialization. That mechanism, and stacked irqdomains to
    support the GICv2 MSI/virtualization extensions, GICv3/4 and its ITS,
    are considered next steps.
    
    CC: Jason Cooper <jason@lakedaemon.net>
    CC: Marc Zyngier <marc.zyngier@arm.com>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 5a8765560eac0563826a7abf83f5bca2ce158ec7
Author: Hanjun Guo <hanjun.guo@linaro.org>
Date:   Mon Feb 2 20:45:43 2015 +0800

    ARM64 / ACPI: Introduce ACPI_IRQ_MODEL_GIC and register device's gsi
    
    Introduce ACPI_IRQ_MODEL_GIC, which is needed for ARM64 since the GIC
    is used, and then register a device's gsi with the core IRQ subsystem.
    
    acpi_register_gsi() is similar to the DT-based irq_of_parse_and_map();
    since a gsi is unique in the system, use the hwirq number directly
    for the mapping.
    
    We are going to implement stacked domains when GICv2m, GICv3 and ITS
    support are added.
    
    CC: Marc Zyngier <marc.zyngier@arm.com>
    Originally-by: Amit Daniel Kachhap <amit.daniel@samsung.com>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit e81cf7fc6beaad3f9532a7db22d91e43b7fc1ae5
Author: Hanjun Guo <hanjun.guo@linaro.org>
Date:   Mon Feb 2 20:45:42 2015 +0800

    ACPI / processor: Make it possible to get CPU hardware ID via GICC
    
    Introduce a new function map_gicc_mpidr() to allow MPIDRs to be obtained
    from the GICC structure introduced by ACPI 5.1.
    
    The MPIDR is the CPU hardware ID, as the local APIC ID is on x86
    platforms, so we use the MPIDR, not the GIC CPU interface ID, to
    identify CPUs.
    
    A further step would be to typedef a phys_id_t in arch code (with an
    appropriate size and a corresponding invalid value, say ~0) and use
    that instead of an int in drivers/acpi/processor_core.c to store
    phys_id; then there would be no need for MPIDR packing.
    
    CC: Rafael J. Wysocki <rjw@rjwysocki.net>
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 1887acdfd84b02c68bf17db11742a2c6b1461d14
Author: Hanjun Guo <hanjun.guo@linaro.org>
Date:   Mon Feb 2 20:45:41 2015 +0800

    ARM64 / ACPI: Parse MADT for SMP initialization
    
    The MADT contains the MPIDR information which is essential for
    SMP initialization. Parse the GIC CPU interface structures to
    get the MPIDR values and map them to cpu_logical_map(), and add
    each enabled CPU with a valid MPIDR to cpu_possible_map.
    
    ACPI 5.1 only has two explicit methods to boot SMP, PSCI and the
    Parking protocol, but the Parking protocol is currently only
    specified for ARMv7, so make PSCI the only SMP boot protocol until
    the ACPI spec or the Parking protocol spec is updated.
    
    Parking protocol patches for SMP boot will be sent upstream when
    the new version of the Parking protocol is ready.
    
    CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    CC: Mark Rutland <mark.rutland@arm.com>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>

commit ce773743a3ea42fa3fcdb8a19c8883b0555abc87
Author: Hanjun Guo <hanjun.guo@linaro.org>
Date:   Mon Feb 2 20:45:40 2015 +0800

    ACPI / table: Print GIC information when MADT is parsed
    
    When MADT is parsed, print GIC information to make the boot
    log look pretty:
    
    ACPI: GICC (acpi_id[0x0000] address[00000000e112f000] MPIDR[0x0] enabled)
    ACPI: GICC (acpi_id[0x0001] address[00000000e112f000] MPIDR[0x1] enabled)
    ...
    ACPI: GICC (acpi_id[0x0201] address[00000000e112f000] MPIDR[0x201] enabled)
    
    This information is very helpful when bringing up early systems, to
    see whether acpi_id and MPIDR match as the spec defines.
    
    CC: Rafael J. Wysocki <rjw@rjwysocki.net>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>

commit 093da4679d8c27c8e688ad527c63cf9cf908e21c
Author: Graeme Gregory <graeme.gregory@linaro.org>
Date:   Mon Feb 2 20:45:39 2015 +0800

    ARM64 / ACPI: Get PSCI flags in FADT for PSCI init
    
    There are two flags: PSCI_COMPLIANT and PSCI_USE_HVC. When set,
    the former signals to the OS that the firmware is PSCI compliant.
    The latter selects the appropriate conduit for PSCI calls by
    toggling between Hypervisor Calls (HVC) and Secure Monitor Calls
    (SMC).
    
    In ACPI 5.1 the FADT contains this information; the FADT is parsed
    during ACPI table init and copied into struct acpi_gbl_FADT, so use
    the flags in struct acpi_gbl_FADT for PSCI init.
    
    ACPI 5.1 doesn't support self-defined PSCI function IDs, which means
    that only PSCI 0.2+ is supported in ACPI.
    
    CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 029a9833a4e8a876bdac71fe7560718bca3a7312
Author: Graeme Gregory <graeme.gregory@linaro.org>
Date:   Mon Feb 2 20:45:38 2015 +0800

    ARM64 / ACPI: If we chose to boot from acpi then disable FDT
    
    If the early boot methods of ACPI are happy that we have valid ACPI
    tables and acpi=force has been passed, then do not unflatten the
    devicetree, effectively disabling further hardware probing from DT.
    
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 1d9af820ab2bc576e3d1ddc6e1797049f46d32ba
Author: Hanjun Guo <hanjun.guo@linaro.org>
Date:   Mon Feb 2 20:45:37 2015 +0800

    ARM64 / ACPI: Disable ACPI if FADT revision is less than 5.1
    
    The FADT Major.Minor version was introduced in ACPI 5.1; it is the
    same as the ACPI version.
    
    ACPI 5.1 fixed some major gaps for ARM, such as the updates in the
    MADT table for GIC and SMP init. Without those updates we cannot get
    the MPIDR for SMP init or the GICv2/3-related init information, so we
    can't boot arm64 ACPI properly with table versions predating 5.1.
    
    If the firmware provides ACPI tables with an ACPI version less than
    5.1, the OS has no way to retrieve the configuration data necessary
    to init the SMP boot protocol and the GIC properly, so disable ACPI
    if we get an FADT table with a version less than 5.1.
    
    CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
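
    The check reduces to comparing the FADT header revision and minor
    revision against 5.1; a hedged sketch of the table-parse callback,
    where "table" is the FADT header and helpers such as disable_acpi()
    are illustrative:

      struct acpi_table_fadt *fadt = (struct acpi_table_fadt *)table;

      /* From ACPI 5.1 on, the FADT revision tracks the ACPI
       * major.minor version; older tables lack the ARM boot data. */
      if (table->revision < 5 ||
          (table->revision == 5 && fadt->minor_revision < 1)) {
              pr_err("Unsupported FADT revision %d.%d, should be 5.1+\n",
                     table->revision, fadt->minor_revision);
              disable_acpi();
              return -EINVAL;
      }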

commit 1bcb26c31cbc9db1645a62a71b8bce9302d778fc
Author: Hanjun Guo <hanjun.guo@linaro.org>
Date:   Mon Feb 2 20:45:36 2015 +0800

    dt / chosen: Add linux,uefi-stub-generated-dtb property
    
    When a system supports both DT and ACPI but the firmware provides
    no dtb, we can use this linux,uefi-stub-generated-dtb property
    to let the kernel know that it can try the ACPI configuration data
    even if no "acpi=force" is passed in the early parameters.
    
    CC: Mark Rutland <mark.rutland@arm.com>
    CC: Jonathan Corbet <corbet@lwn.net>
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    CC: Leif Lindholm <leif.lindholm@linaro.org>
    CC: Grant Likely <grant.likely@linaro.org>
    CC: Matt Fleming <matt.fleming@intel.com>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit db0a9f442cefc5eb655fd25baaf725c7f69440c7
Author: Al Stone <al.stone@linaro.org>
Date:   Mon Feb 2 20:45:35 2015 +0800

    ARM64 / ACPI: Introduce early_param for "acpi" and pass acpi=force to enable ACPI
    
    Introduce two early parameter values, "off" and "force", for "acpi".
    acpi=off is the default behavior for ARM64, so introduce acpi=force
    to enable ACPI on ARM64.
    
    ACPI is disabled before the early parameters are parsed, and enabled
    only when "acpi=force" is passed by people who want to use ACPI on
    ARM64. This ensures DT is the preferred source if both ACPI tables
    and a DT are provided, for the moment.
    
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    CC: Rafael J. Wysocki <rjw@rjwysocki.net>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Al Stone <al.stone@linaro.org>
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
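
    A minimal sketch of the parameter hook (the enable/disable helpers
    are illustrative):

      static int __init parse_acpi(char *arg)
      {
              if (!arg)
                      return -EINVAL;

              /* "acpi=off" keeps the ARM64 default; only "acpi=force"
               * enables ACPI and lets it take precedence over DT. */
              if (strcmp(arg, "off") == 0)
                      disable_acpi();
              else if (strcmp(arg, "force") == 0)
                      enable_acpi();
              else
                      return -EINVAL;

              return 0;
      }
      early_param("acpi", parse_acpi);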

commit 4d8a88fa316ced237776e33106aaf11696f458a0
Author: Hanjun Guo <hanjun.guo@linaro.org>
Date:   Mon Feb 2 20:45:34 2015 +0800

    ARM64 / ACPI: Introduce PCI stub functions for ACPI
    
    CONFIG_ACPI depends on CONFIG_PCI on x86 and ia64. In the ARM64
    server world we will have PCIe in most cases, but some systems may
    not, so making CONFIG_ACPI depend on CONFIG_PCI on ARM64 will
    satisfy both.
    
    In that case, we need some arch-dependent PCI functions to
    access the config space before the PCI root bridge is created, and
    pci_acpi_scan_root() to create the PCI root bus. So introduce
    some stub functions here to make the ACPI core compile, and revisit
    them later when they are implemented on ARM64.
    
    CC: Liviu Dudau <Liviu.Dudau@arm.com>
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit b74ac8899a0e305d32808f735953b465991b8f17
Author: Graeme Gregory <graeme.gregory@linaro.org>
Date:   Mon Feb 2 20:45:33 2015 +0800

    ACPI / sleep: Introduce sleep_arm.c
    
    ACPI 5.1 does not currently support S states for ARM64 hardware, but
    ACPI code will call acpi_target_system_state() for device power
    management, so introduce sleep_arm.c to allow other drivers to function
    until S states are defined.
    
    CC: Rafael J. Wysocki <rjw@rjwysocki.net>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit bfbe33ec40d51e51d0544fa1273ecf3dd4142a92
Author: Al Stone <al.stone@linaro.org>
Date:   Mon Feb 2 20:45:32 2015 +0800

    ARM64 / ACPI: Get RSDP and ACPI boot-time tables
    
    As we want to get ACPI tables to parse and then use the information
    for system initialization, we should get the RSDP (Root System
    Description Pointer) first; it then locates the Extended System
    Description Table (XSDT), which contains all the 64-bit physical
    addresses that point to the other boot-time tables.
    
    Introduce acpi.c and its related header files in this patch to provide
    the fundamental extern variables and functions needed by the ACPI
    core, and then get the boot-time tables as needed.
      - asm/acenv.h for the arch-specific ACPICA environment and
        implementation; it is needed unconditionally by the ACPI core;
      - asm/acpi.h for arch-specific variables and functions needed by
        the ACPI driver core;
      - acpi.c for the ARM64-related ACPI implementation for the ACPI
        driver core.
    
    acpi_boot_table_init() is introduced to get the RSDP and the boot-time
    tables; it will be called in setup_arch() before paging_init(), so we
    should use the early_memremap() mechanism here to get the RSDP and all
    the table pointers.
    
    CC: Catalin Marinas <catalin.marinas@arm.com>
    CC: Will Deacon <will.deacon@arm.com>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Yijing Wang <wangyijing@huawei.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Tested-by: Timur Tabi <timur@codeaurora.org>
    Signed-off-by: Al Stone <al.stone@linaro.org>
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
    Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit db306fcca88084b45084a72ac7a9491994e3a1eb
Author: Mark Salter <msalter@redhat.com>
Date:   Mon Feb 2 20:45:31 2015 +0800

    arm64: allow late use of early_ioremap
    
    Commit 0e63ea48b4d8 (arm64/efi: add missing call to early_ioremap_reset())
    added a missing call to early_ioremap_reset(). This triggers a BUG if
    code tries to use early_ioremap() after the early_ioremap_reset(). This
    is a problem for some ACPI code which needs short-lived temporary
    mappings after paging_init() but before acpi_early_init() in
    start_kernel(). This patch adds definitions for __late_set_fixmap() and
    __late_clear_fixmap(), which avoid the BUG by allowing later use of
    early_ioremap().
    
    CC: Leif Lindholm <leif.lindholm@linaro.org>
    CC: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: Jon Masters <jcm@redhat.com>
    Signed-off-by: Mark Salter <msalter@redhat.com>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit df7dbcea6c185ceb3d62626e1e98024c0742b658
Author: Graeme Gregory <graeme.gregory@linaro.org>
Date:   Mon Feb 2 20:45:29 2015 +0800

    acpi: add arm64 to the platforms that use ioremap
    
    Now, with the base changes to the arm memory mapping, it is safe
    to convert to using ioremap to map in the tables after
    acpi_gbl_permanent_mmap is set.
    
    CC: Rafael J Wysocki <rjw@rjwysocki.net>
    Signed-off-by: Al Stone <al.stone@linaro.org>
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
    Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>

commit 3f61bd4dc0405314173e5b4e129e79745ec994b9
Author: Mark Salter <msalter@redhat.com>
Date:   Tue Sep 30 17:19:24 2014 -0400

    arm64: avoid need for console= to enable serial console
    
    Tell the kernel to prefer one of the serial ports on platforms with
    pl011, 8250, or SBSA UARTs. console= on the command line will
    override these assumed preferences. This is just a hack to get the
    behavior we want from SPCR table support. Once SPCR is supported,
    we can drop this.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 830714ea6390e53a92751b338dea8863aabaf81f
Author: Mark Salter <msalter@redhat.com>
Date:   Fri Nov 21 23:21:30 2014 -0500

    DO NOT UPSTREAM - tty/pl011: make ttyAMA0 the active console device
    
    The pl011 uart driver doesn't register itself as a console
    until device_initcall time. This allows the virtual console
    driver to register the active console if no console= is
    given on the cmdline. This patch allows ttyAMA0 to take
    over the active console device role from any existing
    console device if no console= is given on the cmdline.
    
    This is just a temporary hack until SPCR table is supported.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit a1c79144bc02623b27edafa950eacb304e1b6548
Author: Mark Salter <msalter@redhat.com>
Date:   Wed Nov 19 10:08:29 2014 -0500

    tty/sbsauart: DO NOT UPSTREAM - make ttySBSA the active console device
    
    The sbsauart driver doesn't register itself as a console
    until module_initcall time. This allows the virtual console
    driver to register the active console if no console= is
    given on the cmdline. This patch allows ttySBSA to take
    over the active console device role from any existing
    console device if no console= is given on the cmdline.
    
    This is just a temporary hack until the SPCR table is supported.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 92c29f8cd602b302e6b56823ce056df05e9116cb
Author: Graeme Gregory <graeme.gregory@linaro.org>
Date:   Wed Aug 13 13:47:18 2014 +0100

    tty: SBSA compatible UART
    
    This is a subset of the pl011 UART which does not support DMA or baud
    rate changing.  It does, however, provide earlycon support (i.e., using
    "earlycon=ttySBSA" on the kernel command line).
    
    It is specified in the Server Base System Architecture document from
    ARM.
    
    Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>

commit 6c38227252804421b94861cd06473fd608d44276
Author: Mark Salter <msalter@redhat.com>
Date:   Sat Nov 8 22:25:48 2014 -0500

    arm64: use UEFI for reboot
    
    Wire in support for UEFI reboot. We want UEFI reboot to have
    highest priority for capsule support.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 116f11ce2a8538873fb787d42e69518e4dfb07fa
Author: Mark Salter <msalter@redhat.com>
Date:   Sat Nov 8 15:25:41 2014 -0500

    arm64: use UEFI as last resort for poweroff
    
    Wire in support for poweroff via UEFI.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>

commit 5dfa57078318e948a199b9f00855c0ae2dd7e2ca
Author: Mark Salter <msalter@redhat.com>
Date:   Thu Jul 17 13:34:50 2014 -0400

    ahci_xgene: add errata workaround for ATA_CMD_SMART
    
    commit 2a0bdff6b958d1b2:
    
     ahci_xgene: fix the dma state machine lockup for the IDENTIFY DEVICE PIO mode command.
    
    added a workaround for an X-Gene AHCI controller erratum. This was done
    for all ATA_CMD_ID_ATA commands. The erratum also appears to affect
    ATA_CMD_SMART commands. This was discovered when running
    smartd or just smartctl -x. This patch adds a DMA engine restart for
    ATA_CMD_SMART commands, which clears up the issues seen with smartd.
    
    Signed-off-by: Mark Salter <msalter@redhat.com>
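
    The change is essentially one more command in the existing restart
    condition in the driver's qc_issue path (field and helper names
    follow the driver, but treat this as a sketch rather than the exact
    hunk):

      /* X-Gene erratum: restart the DMA engine after ATA_CMD_SMART,
       * exactly as already done for ATA_CMD_ID_ATA. */
      if (unlikely((ctx->last_cmd[ap->port_no] == ATA_CMD_ID_ATA) ||
                   (ctx->last_cmd[ap->port_no] == ATA_CMD_SMART)))
              xgene_ahci_restart_engine(ap);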

commit 8dc5d6c4d782adc8a5f83f02ce7fd848eff129ca
Author: Kyle McMartin <kmcmarti@redhat.com>
Date:   Tue May 13 22:25:26 2014 -0400

    arm64: don't set READ_IMPLIES_EXEC for EM_AARCH64 ELF objects
    
    Currently, we're accidentally ending up with executable stacks on
    AArch64 when the ABI says we shouldn't be, and relying on glibc to fix
    things up for us when we're loaded. However, SELinux will deny us
    mucking with the stack, and hit us with execmem AVCs.
    
    The reason this is happening is somewhat complex:
    
    fs/binfmt_elf.c:load_elf_binary()
     - initializes executable_stack = EXSTACK_DEFAULT implying the
       architecture should make up its mind.
     - does a pile of loading goo
     - runs through the program headers, looking for PT_GNU_STACK
       and setting (or unsetting) executable_stack if it finds it.
    
       This is our first problem, we won't generate these unless an
       executable stack is explicitly requested.
    
     - more ELF loading goo
     - sets whether we're a compat task or not (TIF_32BIT) based on compat.h
     - for compat reasons (pre-GNU_STACK) checks if the READ_IMPLIES_EXEC
       flag should be set for ancient toolchains
    
       Here's our second problem, we test if read_implies_exec based on
       stk != EXSTACK_DISABLE_X, which is true since stk == EXSTACK_DEFAULT.
    
       So we set current->personality |= READ_IMPLIES_EXEC like a broken
       legacy toolchain would want.
    
     - Now we call setup_arg_pages to set up the stack...
    
    fs/exec.c:setup_arg_pages()
     - lots of magic happens here
     - vm_flags gets initialized to VM_STACK_FLAGS
    
       Here's our third problem, VM_STACK_FLAGS on arm64 is
       VM_DEFAULT_DATA_FLAG which tests READ_IMPLIES_EXEC and sets VM_EXEC
       if it's true. So we end up with an executable stack mapping, since we
       don't have executable_stack set (it's still EXSTACK_DEFAULT at this
       point) to unset it anywhere.
    
    Bang. execstack AVC when the program starts running.
    
    The easiest way I can see to fix this is to test if we're a legacy task
    and fix it up there. But that's not as simple as it sounds, because
    the 32-bit ABI depends on what revision of the CPU we've enabled (not
    that it matters since we're ARMv8...) Regardless, in the compat case,
    set READ_IMPLIES_EXEC if we've found a GNU_STACK header which explicitly
    requested it as in arch/arm/kernel/elf.c:arm_elf_read_implies_exec().
    
    Signed-off-by: Kyle McMartin <kmcmarti@redhat.com>
    Signed-off-by: Donald Dutile <ddutile@redhat.com>
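
    In effect the fix reduces to something like the following in
    arch/arm64/include/asm/elf.h (a sketch; the real patch may split the
    native and compat cases differently):

      /* Native AArch64 binaries never get READ_IMPLIES_EXEC by default;
       * compat (32-bit) tasks get it only when PT_GNU_STACK explicitly
       * requested an executable stack, mirroring
       * arm_elf_read_implies_exec() on 32-bit ARM. */
      #define elf_read_implies_exec(ex, stk) \
              (is_compat_task() && (stk) == EXSTACK_ENABLE_X)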

 Documentation/ABI/testing/sysfs-firmware-dmi     |   10 +
 Documentation/arm/uefi.txt                       |    3 +
 Documentation/arm64/acpi_object_usage.txt        |  592 +++++
 Documentation/arm64/arm-acpi.txt                 |  506 ++++
 Documentation/arm64/why_use_acpi.txt             |  231 ++
 Documentation/kernel-parameters.txt              |    3 +-
 arch/arm64/Kconfig                               |   10 +
 arch/arm64/include/asm/acenv.h                   |   18 +
 arch/arm64/include/asm/acpi.h                    |  120 +
 arch/arm64/include/asm/cpu_ops.h                 |    1 +
 arch/arm64/include/asm/elf.h                     |    3 +-
 arch/arm64/include/asm/fixmap.h                  |    3 +
 arch/arm64/include/asm/pci.h                     |   66 +
 arch/arm64/include/asm/psci.h                    |    3 +-
 arch/arm64/include/asm/smp.h                     |   10 +-
 arch/arm64/kernel/Makefile                       |    4 +-
 arch/arm64/kernel/acpi.c                         |  397 +++
 arch/arm64/kernel/cpu_ops.c                      |    6 +-
 arch/arm64/kernel/efi.c                          |   37 +
 arch/arm64/kernel/pci.c                          |  424 +++-
 arch/arm64/kernel/perf_event.c                   |  102 +
 arch/arm64/kernel/psci.c                         |   78 +-
 arch/arm64/kernel/setup.c                        |   80 +-
 arch/arm64/kernel/smp.c                          |    2 +-
 arch/arm64/kernel/smp_parking_protocol.c         |  110 +
 arch/arm64/kernel/time.c                         |    7 +
 arch/arm64/mm/dma-mapping.c                      |    6 +
 arch/x86/include/asm/pci.h                       |   42 +
 arch/x86/include/asm/pci_x86.h                   |   72 -
 arch/x86/pci/Makefile                            |    5 +-
 arch/x86/pci/acpi.c                              |    1 +
 arch/x86/pci/init.c                              |    1 +
 arch/x86/pci/mmconfig-shared.c                   |  198 +-
 arch/x86/pci/mmconfig_32.c                       |   11 +-
 arch/x86/pci/mmconfig_64.c                       |  153 --
 drivers/acpi/Kconfig                             |    3 +-
 drivers/acpi/Makefile                            |    5 +
 drivers/acpi/bus.c                               |    4 +
 drivers/acpi/mmconfig.c                          |  414 +++
 drivers/acpi/osl.c                               |    6 +-
 drivers/acpi/processor_core.c                    |   37 +
 drivers/acpi/scan.c                              |   75 +-
 drivers/acpi/sleep_arm.c                         |   28 +
 drivers/acpi/tables.c                            |   43 +
 drivers/acpi/utils.c                             |   26 +
 drivers/ata/Kconfig                              |    2 +-
 drivers/ata/ahci_platform.c                      |   12 +
 drivers/ata/ahci_xgene.c                         |   27 +-
 drivers/clocksource/arm_arch_timer.c             |  135 +-
 drivers/firmware/dmi-sysfs.c                     |   42 +
 drivers/firmware/dmi_scan.c                      |   26 +
 drivers/firmware/efi/libstub/fdt.c               |    8 +
 drivers/iommu/arm-smmu.c                         |    8 +-
 drivers/irqchip/irq-gic-v2m.c                    |  148 +-
 drivers/irqchip/irq-gic-v3-its.c                 |   33 +-
 drivers/irqchip/irq-gic-v3.c                     |   10 +
 drivers/irqchip/irq-gic.c                        |  125 +-
 drivers/irqchip/irqchip.c                        |    3 +
 drivers/net/ethernet/amd/Makefile                |    1 +
 drivers/net/ethernet/amd/xgbe-a0/Makefile        |    8 +
 drivers/net/ethernet/amd/xgbe-a0/xgbe-common.h   | 1142 +++++++++
 drivers/net/ethernet/amd/xgbe-a0/xgbe-dcb.c      |  269 ++
 drivers/net/ethernet/amd/xgbe-a0/xgbe-debugfs.c  |  373 +++
 drivers/net/ethernet/amd/xgbe-a0/xgbe-desc.c     |  636 +++++
 drivers/net/ethernet/amd/xgbe-a0/xgbe-dev.c      | 2964 ++++++++++++++++++++++
 drivers/net/ethernet/amd/xgbe-a0/xgbe-drv.c      | 2204 ++++++++++++++++
 drivers/net/ethernet/amd/xgbe-a0/xgbe-ethtool.c  |  616 +++++
 drivers/net/ethernet/amd/xgbe-a0/xgbe-main.c     |  643 +++++
 drivers/net/ethernet/amd/xgbe-a0/xgbe-mdio.c     |  312 +++
 drivers/net/ethernet/amd/xgbe-a0/xgbe-ptp.c      |  284 +++
 drivers/net/ethernet/amd/xgbe-a0/xgbe.h          |  868 +++++++
 drivers/net/ethernet/apm/xgene/xgene_enet_hw.c   |   69 +-
 drivers/net/ethernet/apm/xgene/xgene_enet_main.c |  111 +-
 drivers/net/ethernet/apm/xgene/xgene_enet_main.h |    4 +-
 drivers/net/ethernet/smsc/smc91x.c               |   10 +
 drivers/net/phy/Makefile                         |    1 +
 drivers/net/phy/amd-xgbe-phy-a0.c                | 1829 +++++++++++++
 drivers/pci/host/pci-xgene.c                     |  248 ++
 drivers/pci/msi.c                                |    3 +-
 drivers/pci/of.c                                 |   20 +
 drivers/pci/pci-acpi.c                           |   36 +
 drivers/pci/probe.c                              |   33 +
 drivers/tty/Kconfig                              |    6 +
 drivers/tty/Makefile                             |    1 +
 drivers/tty/sbsauart.c                           |  358 +++
 drivers/tty/serial/8250/8250_dw.c                |    6 +-
 drivers/tty/serial/amba-pl011.c                  |    8 +
 drivers/usb/host/xhci-plat.c                     |   15 +-
 drivers/virtio/virtio_mmio.c                     |   12 +-
 include/acpi/acnames.h                           |    1 +
 include/acpi/acpi_bus.h                          |    2 +
 include/acpi/acpi_io.h                           |    3 +
 include/asm-generic/vmlinux.lds.h                |    7 +
 include/kvm/arm_vgic.h                           |   20 +-
 include/linux/acpi.h                             |   26 +
 include/linux/clocksource.h                      |    6 +
 include/linux/device.h                           |   21 +
 include/linux/dmi.h                              |    3 +
 include/linux/irqchip/arm-gic-acpi.h             |   31 +
 include/linux/irqchip/arm-gic.h                  |    7 +
 include/linux/mmconfig.h                         |   86 +
 include/linux/mod_devicetable.h                  |    6 +
 include/linux/msi.h                              |    4 +-
 include/linux/pci-acpi.h                         |    3 +
 include/linux/pci.h                              |   11 +-
 kernel/irq/msi.c                                 |   24 +
 virt/kvm/arm/arch_timer.c                        |  107 +-
 virt/kvm/arm/vgic-v2.c                           |   86 +-
 virt/kvm/arm/vgic-v3.c                           |    8 +-
 virt/kvm/arm/vgic.c                              |   32 +-
 110 files changed, 17386 insertions(+), 733 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-firmware-dmi b/Documentation/ABI/testing/sysfs-firmware-dmi
index c78f9ab..3a9ffe8 100644
--- a/Documentation/ABI/testing/sysfs-firmware-dmi
+++ b/Documentation/ABI/testing/sysfs-firmware-dmi
@@ -12,6 +12,16 @@ Description:
 		cannot ensure that the data as exported to userland is
 		without error either.
 
+		The firmware provides DMI structures as a packed list of
+		data referenced by an SMBIOS table entry point. The SMBIOS
+		entry point contains general information, such as the
+		SMBIOS version and the DMI table size. The structure,
+		content and size of the SMBIOS entry point depend on the
+		SMBIOS version, which is why the SMBIOS entry point is
+		exposed in dmi sysfs as a raw attribute, accessible via
+		/sys/firmware/dmi/smbios_raw_header. The format of the
+		SMBIOS entry point header is described in the SMBIOS
+		specification.
+
 		DMI is structured as a large table of entries, where
 		each entry has a common header indicating the type and
 		length of the entry, as well as a firmware-provided
diff --git a/Documentation/arm/uefi.txt b/Documentation/arm/uefi.txt
index d60030a..5f86eae 100644
--- a/Documentation/arm/uefi.txt
+++ b/Documentation/arm/uefi.txt
@@ -60,5 +60,8 @@ linux,uefi-mmap-desc-ver  | 32-bit | Version of the mmap descriptor format.
 --------------------------------------------------------------------------------
 linux,uefi-stub-kern-ver  | string | Copy of linux_banner from build.
 --------------------------------------------------------------------------------
+linux,uefi-stub-generated-dtb  | bool | Indicates that no DTB was provided
+			       |      | by the firmware.
+--------------------------------------------------------------------------------
 
 For verbose debug messages, specify 'uefi_debug' on the kernel command line.
diff --git a/Documentation/arm64/acpi_object_usage.txt b/Documentation/arm64/acpi_object_usage.txt
new file mode 100644
index 0000000..2c4f733
--- /dev/null
+++ b/Documentation/arm64/acpi_object_usage.txt
@@ -0,0 +1,592 @@
+ACPI Tables
+-----------
+The expectations of individual ACPI tables are discussed in the list that
+follows.
+
+If a section number is used, it refers to a section number in the ACPI
+specification where the object is defined.  If "Signature Reserved" is used,
+the table signature (the first four bytes of the table) is the only portion
+of the table recognized by the specification, and the actual table is defined
+outside of the UEFI Forum (see Section 5.2.6 of the specification).
+
+For ACPI on arm64, tables also fall into the following categories:
+
+	-- Required: DSDT, FADT, GTDT, MADT, MCFG, RSDP, SPCR, XSDT
+
+	-- Recommended: BERT, EINJ, ERST, HEST, SSDT
+
+	-- Optional: BGRT, CPEP, CSRT, DRTM, ECDT, FACS, FPDT, MCHI, MPST,
+	   MSCT, RASF, SBST, SLIT, SPMI, SRAT, TCPA, TPM2, UEFI
+
+	-- Not supported: BOOT, DBG2, DBGP, DMAR, ETDT, HPET, IBFT, IVRS,
+	   LPIT, MSDM, RSDT, SLIC, WAET, WDAT, WDRT, WPBT
+
+
+Table	Usage for ARMv8 Linux
+-----	----------------------------------------------------------------
+BERT	Section 18.3 (signature == "BERT")
+	== Boot Error Record Table ==
+	Must be supplied if RAS support is provided by the platform.  It
+	is recommended this table be supplied.
+
+BOOT	Signature Reserved (signature == "BOOT")
+	== simple BOOT flag table ==
+	Microsoft only table, will not be supported.
+
+BGRT	Section 5.2.22 (signature == "BGRT")
+	== Boot Graphics Resource Table ==
+	Optional, not currently supported, with no real use-case for an
+	ARM server.
+
+CPEP	Section 5.2.18 (signature == "CPEP")
+	== Corrected Platform Error Polling table ==
+	Optional, not currently supported, and not recommended until such
+	time as ARM-compatible hardware is available, and the specification
+	suitably modified.
+
+CSRT	Signature Reserved (signature == "CSRT")
+	== Core System Resources Table ==
+	Optional, not currently supported.
+
+DBG2	Signature Reserved (signature == "DBG2")
+	== DeBuG port table 2 ==
+	Microsoft only table, will not be supported.
+
+DBGP	Signature Reserved (signature == "DBGP")
+	== DeBuG Port table ==
+	Microsoft only table, will not be supported.
+
+DSDT	Section 5.2.11.1 (signature == "DSDT")
+	== Differentiated System Description Table ==
+	A DSDT is required; see also SSDT.
+
+	ACPI tables contain only one DSDT but can contain one or more SSDTs,
+	which are optional.  Each SSDT can only add to the ACPI namespace,
+	but cannot modify or replace anything in the DSDT.
+
+DMAR	Signature Reserved (signature == "DMAR")
+	== DMA Remapping table ==
+	x86 only table, will not be supported.
+
+DRTM	Signature Reserved (signature == "DRTM")
+	== Dynamic Root of Trust for Measurement table ==
+	Optional, not currently supported.
+
+ECDT	Section 5.2.16 (signature == "ECDT")
+	== Embedded Controller Description Table ==
+	Optional, not currently supported, but could be used on ARM if and
+	only if one uses the GPE_BIT field to represent an IRQ number, since
+	there are no GPE blocks defined in hardware reduced mode.  This would
+	need to be modified in the ACPI specification.
+
+EINJ	Section 18.6 (signature == "EINJ")
+	== Error Injection table ==
+	This table is very useful for testing platform response to error
+	conditions; it allows one to inject an error into the system as
+	if it had actually occurred.  However, this table should not be
+	shipped with a production system; it should be dynamically loaded
+	and executed with the ACPICA tools only during testing.
+
+ERST	Section 18.5 (signature == "ERST")
+	== Error Record Serialization Table ==
+	Must be supplied if RAS support is provided by the platform.  It
+	is recommended this table be supplied.
+
+ETDT	Signature Reserved (signature == "ETDT")
+	== Event Timer Description Table ==
+	Obsolete table, will not be supported.
+
+FACS	Section 5.2.10 (signature == "FACS")
+	== Firmware ACPI Control Structure ==
+	It is unlikely that this table will be terribly useful.  If it is
+	provided, the Global Lock will NOT be used since it is not part of
+	the hardware reduced profile, and only 64-bit address fields will
+	be considered valid.
+
+FADT	Section 5.2.9 (signature == "FACP")
+	== Fixed ACPI Description Table ==
+	Required for arm64.
+
+	The HW_REDUCED_ACPI flag must be set.  All of the fields that are
+	to be ignored when HW_REDUCED_ACPI is set are expected to be set to
+	zero.
+
+	If an FACS table is provided, the X_FIRMWARE_CTRL field is to be
+	used, not FIRMWARE_CTRL.
+
+	If PSCI is used (as is recommended), make sure that ARM_BOOT_ARCH is
+	filled in properly -- that the PSCI_COMPLIANT flag is set and that
+	PSCI_USE_HVC is set or unset as needed (see table 5-37).
+
+	For the DSDT that is also required, the X_DSDT field is to be used,
+	not the DSDT field.
+
+FPDT	Section 5.2.23 (signature == "FPDT")
+	== Firmware Performance Data Table ==
+	Optional, not currently supported.
+
+GTDT	Section 5.2.24 (signature == "GTDT")
+	== Generic Timer Description Table ==
+	Required for arm64.
+
+HEST	Section 18.3.2 (signature == "HEST")
+	== Hardware Error Source Table ==
+	Until further error source types are defined, use only types 6 (AER
+	Root Port), 7 (AER Endpoint), 8 (AER Bridge), or 9 (Generic Hardware
+	Error Source).  Firmware first error handling is possible if and only
+	if Trusted Firmware is being used on arm64.
+
+	Must be supplied if RAS support is provided by the platform.  It
+	is recommended this table be supplied.
+
+HPET	Signature Reserved (signature == "HPET")
+	== High Precision Event timer Table ==
+	x86 only table, will not be supported.
+
+IBFT	Signature Reserved (signature == "IBFT")
+	== iSCSI Boot Firmware Table ==
+	Microsoft defined table, support TBD.
+
+IVRS	Signature Reserved (signature == "IVRS")
+	== I/O Virtualization Reporting Structure ==
+	x86_64 (AMD) only table, will not be supported.
+
+LPIT	Signature Reserved (signature == "LPIT")
+	== Low Power Idle Table ==
+	x86 only table as of ACPI 5.1; future versions have been adapted for
+	use with ARM and will be recommended in order to support ACPI power
+	management.
+
+MADT	Section 5.2.12 (signature == "APIC")
+	== Multiple APIC Description Table ==
+	Required for arm64.  Only the GIC interrupt controller structures 
+	should be used (types 0xA - 0xE).
+
+MCFG	Signature Reserved (signature == "MCFG")
+	== Memory-mapped ConFiGuration space ==
+	If the platform supports PCI/PCIe, an MCFG table is required.
+
+MCHI	Signature Reserved (signature == "MCHI")
+	== Management Controller Host Interface table ==
+	Optional, not currently supported.
+
+MPST	Section 5.2.21 (signature == "MPST")
+	== Memory Power State Table ==
+	Optional, not currently supported.
+
+MSDM	Signature Reserved (signature == "MSDM")
+	== Microsoft Data Management table ==
+	Microsoft only table, will not be supported.
+
+MSCT	Section 5.2.19 (signature == "MSCT")
+	== Maximum System Characteristic Table ==
+	Optional, not currently supported.
+
+RASF	Section 5.2.20 (signature == "RASF")
+	== RAS Feature table ==
+	Optional, not currently supported.
+
+RSDP	Section 5.2.5 (signature == "RSD PTR")
+	== Root System Description PoinTeR ==
+	Required for arm64.
+
+RSDT	Section 5.2.7 (signature == "RSDT")
+	== Root System Description Table ==
+	Since this table can only provide 32-bit addresses, it is deprecated
+	on arm64, and will not be used.
+
+SBST	Section 5.2.14 (signature == "SBST")
+	== Smart Battery Subsystem Table ==
+	Optional, not currently supported.
+
+SLIC	Signature Reserved (signature == "SLIC")
+	== Software LIcensing table ==
+	Microsoft only table, will not be supported.
+
+SLIT	Section 5.2.17 (signature == "SLIT")
+	== System Locality distance Information Table ==
+	Optional in general, but required for NUMA systems.
+
+SPCR	Signature Reserved (signature == "SPCR")
+	== Serial Port Console Redirection table ==
+	Required for arm64.
+
+SPMI	Signature Reserved (signature == "SPMI")
+	== Server Platform Management Interface table ==
+	Optional, not currently supported.
+
+SRAT	Section 5.2.16 (signature == "SRAT")
+	== System Resource Affinity Table ==
+	Optional, but if used, only the GICC Affinity structures are read.
+	To support NUMA, this table is required.
+
+SSDT	Section 5.2.11.2 (signature == "SSDT")
+	== Secondary System Description Table ==
+	These tables are a continuation of the DSDT; these are recommended
+	for use with devices that can be added to a running system, but can
+	also serve the purpose of dividing up device descriptions into more
+	manageable pieces.
+
+	An SSDT can only ADD to the ACPI namespace.  It cannot modify or
+	replace existing device descriptions already in the namespace.
+
+	These tables are optional, however.  ACPI tables should contain only
+	one DSDT but can contain many SSDTs.
+
+TCPA	Signature Reserved (signature == "TCPA")
+	== Trusted Computing Platform Alliance table ==
+	Optional, not currently supported, and may need changes to fully
+	interoperate with arm64.
+
+TPM2	Signature Reserved (signature == "TPM2")
+	== Trusted Platform Module 2 table ==
+	Optional, not currently supported, and may need changes to fully
+	interoperate with arm64.
+
+UEFI	Signature Reserved (signature == "UEFI")
+	== UEFI ACPI data table ==
+	Optional, not currently supported.  No known use case for arm64,
+	at present.
+
+WAET	Signature Reserved (signature == "WAET")
+	== Windows ACPI Emulated devices Table ==
+	Microsoft only table, will not be supported.
+
+WDAT	Signature Reserved (signature == "WDAT")
+	== Watch Dog Action Table ==
+	Microsoft only table, will not be supported.
+
+WDRT	Signature Reserved (signature == "WDRT")
+	== Watch Dog Resource Table ==
+	Microsoft only table, will not be supported.
+
+WPBT	Signature Reserved (signature == "WPBT")
+	== Windows Platform Binary Table ==
+	Microsoft only table, will not be supported.
+
+XSDT	Section 5.2.8 (signature == "XSDT")
+	== eXtended System Description Table ==
+	Required for arm64.
+
+
+ACPI Objects
+------------
+The expectations on individual ACPI objects are discussed in the list that
+follows:
+
+Name	Section		Usage for ARMv8 Linux
+----	------------	-------------------------------------------------
+_ADR	6.1.1		Use as needed.
+
+_BBN	6.5.5		Use as needed; PCI-specific.
+
+_BDN	6.5.3		Optional; not likely to be used on arm64.
+
+_CCA	6.2.17		This method should be defined for all bus masters
+			on arm64.  While cache coherency is assumed, making
+			it explicit ensures the kernel will set up DMA as
+			it should.
+
+_CDM	6.2.1		Optional, to be used only for processor devices.
+
+_CID	6.1.2		Use as needed.
+
+_CLS	6.1.3		Use as needed.
+
+_CRS	6.2.2		Required on arm64.
+
+_DCK	6.5.2		Optional; not likely to be used on arm64.
+
+_DDN	6.1.4		This field can be used for a device name.  However,
+			it is meant for DOS device names (e.g., COM1), so be
+			careful of its use across OSes.
+
+_DEP	6.5.8		Use as needed.
+
+_DIS	6.2.3		Optional, for power management use.
+
+_DLM	5.7.5		Optional.
+
+_DMA	6.2.4		Optional.
+
+_DSD	6.2.5		To be used with caution.  If this object is used, try
+			to use it within the constraints already defined by the
+			Device Properties UUID.  Only in rare circumstances
+			should it be necessary to create a new _DSD UUID.
+
+			In either case, submit the _DSD definition along with
+			any driver patches for discussion, especially when
+			device properties are used.  A driver will not be 
+			considered complete without a corresponding _DSD
+			description.  Once approved by kernel maintainers,
+			the UUID or device properties must then be registered
+			with the UEFI Forum; this may cause some iteration as
+			more than one OS will be registering entries.
+
+_DSM			Do not use this method.  It is not standardized, the
+			return values are not well documented, and it is
+			currently a frequent source of error.
+
+_DSW	7.2.1		Use as needed; power management specific.
+
+_EDL	6.3.1		Optional.
+
+_EJD	6.3.2		Optional.
+
+_EJx	6.3.3		Optional.
+
+_FIX	6.2.7		x86 specific, not used on arm64.
+
+\_GL	5.7.1		This object is not to be used in hardware reduced
+			mode, and therefore should not be used on arm64.
+
+_GLK	6.5.7		This object requires a global lock be defined; there
+			is no global lock on arm64 since it runs in hardware
+			reduced mode.  Hence, do not use this object on arm64.
+
+\_GPE	5.3.1		This namespace is for x86 use only.  Do not use it
+			on arm64.
+
+_GSB	6.2.7		Optional.
+
+_HID	6.1.5		Use as needed.  This is the primary object to use in
+			device probing, though _CID and _CLS may also be used.
+
+_HPP	6.2.8		Optional, PCI specific.
+
+_HPX	6.2.9		Optional, PCI specific.
+
+_HRV	6.1.6		Optional, use as needed to clarify device behavior; in
+			some cases, this may be easier to use than _DSD.
+
+_INI	6.5.1		Not required, but can be useful in setting up devices
+			when UEFI leaves them in a state that may not be what
+			the driver expects before it starts probing.
+
+_IRC	7.2.15		Use as needed; power management specific.
+
+_LCK	6.3.4		Optional.
+
+_MAT	6.2.10		Optional; see also the MADT.
+
+_MLS	6.1.7		Optional, but highly recommended for use in 
+			internationalization.
+
+_OFF	7.1.2		It is recommended to define this method for any device
+			that can be turned on or off.
+
+_ON	7.1.3		It is recommended to define this method for any device
+			that can be turned on or off.
+
+\_OS	5.7.3		This method will return "Linux" by default (this is
+			the value of the macro ACPI_OS_NAME on Linux).  The
+			command line parameter acpi_os=<string> can be used
+			to set it to some other value.
+
+_OSC	6.2.11		This method can be a global method in ACPI (i.e.,
+			\_SB._OSC), or it may be associated with a specific
+			device (e.g., \_SB.DEV0._OSC), or both.  When used
+			as a global method, only capabilities published in
+			the ACPI specification are allowed.  When used as
+			a device-specific method, the process described for
+			using _DSD MUST be used to create an _OSC definition;
+			out-of-process use of _OSC is not allowed.  That is,
+			submit the device-specific _OSC usage description as
+			part of the kernel driver submission, get it approved
+			by the kernel community, then register it with the
+			UEFI Forum.
+
+\_OSI	5.7.2		Deprecated on ARM64.  Any invocation of this method
+			will print a warning on the console and return false.
+			That is, as far as ACPI firmware is concerned, _OSI
+			cannot be used to determine what sort of system is
+			being used or what functionality is provided.  The
+			_OSC method is to be used instead.
+
+_OST	6.3.5		Optional.
+
+_PDC	8.4.1		Deprecated, do not use on arm64.
+
+\_PIC	5.8.1		The method should not be used.  On arm64, the only
+			interrupt model available is GIC.
+
+_PLD	6.1.8		Optional.
+
+\_PR	5.3.1		This namespace is for x86 use only on legacy systems.
+			Do not use it on arm64.
+
+_PRS	6.2.12		Optional.
+
+_PRT	6.2.13		Required as part of the definition of all PCI root
+			devices.
+
+_PRW	7.2.13		Use as needed; power management specific.
+
+_PRx	7.2.8-11	Use as needed; power management specific.  If _PR0 is
+			defined, _PR3 must also be defined. 
+
+_PSC	7.2.6		Use as needed; power management specific.
+
+_PSE	7.2.7		Use as needed; power management specific.
+
+_PSW	7.2.14		Use as needed; power management specific.
+
+_PSx	7.2.2-5		Use as needed; power management specific.  If _PS0 is
+			defined, _PS3 must also be defined.  If clocks or
+			regulators need adjusting to be consistent with power
+			usage, change them in these methods.
+
+\_PTS	7.3.1		Use as needed; power management specific.
+
+_PXM	6.2.14		Optional.
+
+_REG	6.5.4		Use as needed.
+
+\_REV	5.7.4		Always returns the latest version of ACPI supported.
+
+_RMV	6.3.6		Optional.
+
+\_SB	5.3.1		Required on arm64; all devices must be defined in this
+			namespace.
+
+_SEG	6.5.6		Use as needed; PCI-specific.
+
+\_SI	5.3.1,		Optional.
+	9.1
+
+_SLI	6.2.15		Optional; recommended when SLIT table is in use.
+
+_STA	6.3.7,		It is recommended to define this method for any device
+	7.1.4		that can be turned on or off.
+
+_SRS	6.2.16		Optional; see also _PRS.
+
+_STR	6.1.10		Recommended for conveying device names to end users;
+			this is preferred over using _DDN.
+
+_SUB	6.1.9		Use as needed; _HID or _CID are preferred.
+
+_SUN	6.1.11		Optional.
+
+\_Sx	7.3.2		Use as needed; power management specific.
+
+_SxD	7.2.16-19	Use as needed; power management specific.
+
+_SxW	7.2.20-24	Use as needed; power management specific.
+
+_SWS	7.3.3		Use as needed; power management specific; this may
+			require specification changes for use on arm64.
+
+\_TTS	7.3.4		Use as needed; power management specific.
+
+\_TZ	5.3.1		Optional.
+
+_UID	6.1.12		Recommended for distinguishing devices of the same
+			class; define it if at all possible.
+
+\_WAK	7.3.5		Use as needed; power management specific.
+
+
+ACPI Event Model
+----------------
+Do not use GPE block devices; these are not supported in the hardware reduced
+profile used by arm64.  Since there are no GPE blocks defined for use on ARM
+platforms, GPIO-signaled interrupts should be used for creating system events.
+
+
+ACPI Processor Control
+----------------------
+Section 8 of the ACPI specification is currently undergoing change that
+should be completed in the 6.0 version of the specification.  Processor
+performance control will be handled differently for arm64 at that point
+in time.  Processor aggregator devices (section 8.5) will not be used,
+for example; another, similar mechanism will be used instead.
+
+While UEFI constrains what we can say until the release of 6.0, it is
+recommended that CPPC (8.4.5) be used as the primary model.  This will
+still be useful into the future.  C-states and P-states will still be
+provided, but most of the current design work appears to favor CPPC.
+
+Further, it is essential that the ARMv8 SoC provide a fully functional
+implementation of PSCI; this will be the only mechanism supported by ACPI
+to control CPU power state (including secondary CPU booting).
+
+More details will be provided on the release of the ACPI 6.0 specification.
+
+
+ACPI System Address Map Interfaces
+----------------------------------
+In Section 15 of the ACPI specification, several methods are mentioned as
+possible mechanisms for conveying memory resource information to the kernel.
+For arm64, we will only support UEFI for booting with ACPI, hence the UEFI
+GetMemoryMap() boot service is the only mechanism that will be used.
+
+
+ACPI Platform Error Interfaces (APEI)
+-------------------------------------
+The APEI tables supported are described above.
+
+APEI requires the equivalent of an SCI and an NMI on ARMv8.  The SCI is used
+to notify the OSPM of errors that have occurred but can be corrected and the
+system can continue correct operation, even if possibly degraded.  The NMI is
+used to indicate fatal errors that cannot be corrected, and require immediate
+attention.
+
+Since there is no direct equivalent of the x86 SCI or NMI, arm64 handles
+these slightly differently.  The SCI is handled as a normal GPIO-signaled
+interrupt; given that these are corrected (or correctable) errors being
+reported, this is sufficient.  The NMI is emulated as the highest priority
+GPIO-signaled interrupt possible.  This implies some caution must be used
+since there could be interrupts at higher privilege levels or even interrupts
+at the same priority as the emulated NMI.  In Linux, this should not be the
+case but one should be aware it could happen.
+
+
+ACPI Objects Not Supported on ARM64
+-----------------------------------
+While this may change in the future, there are several classes of objects
+that can be defined, but are not currently of general interest to ARM servers.
+
+These are not supported:
+
+	-- Section 9.2: ambient light sensor devices
+
+	-- Section 9.3: battery devices
+
+	-- Section 9.4: lids (e.g., laptop lids)
+
+	-- Section 9.8.2: IDE controllers
+
+	-- Section 9.9: floppy controllers
+
+	-- Section 9.10: GPE block devices
+
+	-- Section 9.15: PC/AT RTC/CMOS devices
+
+	-- Section 9.16: user presence detection devices
+
+	-- Section 9.17: I/O APIC devices; all GICs must be enumerable via MADT
+
+	-- Section 9.18: time and alarm devices (see 9.15)
+
+
+ACPI Objects Not Yet Implemented
+--------------------------------
+While these objects have x86 equivalents, and they do make some sense in ARM
+servers, there is either no hardware available at present, or in some cases
+there may not yet be an ARM implementation.  Hence, they are currently not
+implemented though that may change in the future.
+
+Not yet implemented are:
+
+	-- Section 10: power source and power meter devices
+
+	-- Section 11: thermal management
+
+	-- Section 12: embedded controllers interface
+
+	-- Section 13: SMBus interfaces
+
+	-- Section 17: NUMA support (prototypes have been submitted for
+	   review)
+
diff --git a/Documentation/arm64/arm-acpi.txt b/Documentation/arm64/arm-acpi.txt
new file mode 100644
index 0000000..275524e
--- /dev/null
+++ b/Documentation/arm64/arm-acpi.txt
@@ -0,0 +1,506 @@
+ACPI on ARMv8 Servers
+---------------------
+ACPI can be used for ARMv8 general purpose servers designed to follow
+the ARM SBSA (Server Base System Architecture) [0] and SBBR (Server 
+Base Boot Requirements) [1] specifications.  Please note that the SBBR
+can be retrieved simply by visiting [1], but the SBSA is currently only
+available to those with an ARM login due to ARM IP licensing concerns.
+
+The ARMv8 kernel implements the reduced hardware model of ACPI version
+5.1 or later.  Links to the specification and all external documents
+it refers to are managed by the UEFI Forum.  The specification is
+available at http://www.uefi.org/specifications and documents referenced
+by the specification can be found via http://www.uefi.org/acpi.
+
+If an ARMv8 system does not meet the requirements of the SBSA and SBBR,
+or cannot be described using the mechanisms defined in the required ACPI
+specifications, then ACPI may not be a good fit for the hardware.
+
+While the documents mentioned above set out the requirements for building
+industry-standard ARMv8 servers, they also apply to more than one operating
+system.  The purpose of this document is to describe the interaction between
+ACPI and Linux only, on an ARMv8 system -- that is, what Linux expects of
+ACPI and what ACPI can expect of Linux.
+
+
+Why ACPI on ARM?
+----------------
+Before examining the details of the interface between ACPI and Linux, it is
+useful to understand why ACPI is being used.  Several technologies already
+exist in Linux for describing non-enumerable hardware, after all.  In this
+section we summarize a blog post [2] from Grant Likely that outlines the
+reasoning behind ACPI on ARMv8 servers.  In fact, a good portion of the
+summary text below is borrowed almost directly from that post.
+
+The short form of the rationale for ACPI on ARM is:
+
+-- ACPI’s bytecode (AML) allows the platform to encode hardware behavior,
+   while DT explicitly does not support this.  For hardware vendors, being
+   able to encode behavior is a key tool used in supporting operating 
+   system releases on new hardware.
+
+-- ACPI’s OSPM defines a power management model that constrains what the 
+   platform is allowed to do into a specific model, while still providing
+   flexibility in hardware design.
+
+-- In the enterprise server environment, ACPI has established bindings (such
+   as for RAS) which are currently used in production systems.  DT does not.
+   Such bindings could be defined in DT at some point, but doing so means ARM
+   and x86 would end up using completely different code paths in both firmware
+   and the kernel.
+
+-- Choosing a single interface to describe the abstraction between a platform
+   and an OS is important.  Hardware vendors would not be required to implement
+   both DT and ACPI if they want to support multiple operating systems.  And,
+   agreeing on a single interface instead of being fragmented into per OS
+   interfaces makes for better interoperability overall.
+
+-- The new ACPI governance process works well and Linux is now at the same
+   table as hardware vendors and other OS vendors.  In fact, there is no
+   longer any reason to feel that ACPI belongs only to Windows or that
+   Linux is in any way secondary to Microsoft in this arena.  The move of
+   ACPI governance into the UEFI forum has significantly opened up the
+   specification development process, and currently, a large portion of the
+   changes being made to ACPI is being driven by Linux.
+
+Key to the use of ACPI is the support model.  For servers in general, the
+responsibility for hardware behaviour cannot solely be the domain of the
+kernel, but rather must be split between the platform and the kernel, in
+order to allow for orderly change over time.  ACPI frees the OS from needing
+to understand all the minute details of the hardware so that the OS doesn’t
+need to be ported to each and every device individually.  It allows the
+hardware vendors to take responsibility for power management behaviour without
+depending on an OS release cycle which is not under their control.
+
+ACPI is also important because hardware and OS vendors have already worked
+out the mechanisms for supporting a general purpose computing ecosystem.  The
+infrastructure is in place, the bindings are in place, and the processes are
+in place.  DT does exactly what Linux needs it to when working with vertically
+integrated devices, but there are no good processes for supporting what the
+server vendors need.  Linux could potentially get there with DT, but doing so
+really just duplicates something that already works.  ACPI already does what
+the hardware vendors need, Microsoft won’t collaborate on DT, and hardware
+vendors would still end up providing two completely separate firmware
+interfaces -- one for Linux and one for Windows.
+
+
+Kernel Compatibility
+--------------------
+One of the primary motivations for ACPI is standardization, and using that
+to provide backward compatibility for Linux kernels.  In the server market,
+software and hardware are often used for long periods.  ACPI allows the
+kernel and firmware to agree on a consistent abstraction that can be
+maintained over time, even as hardware or software change.  As long as the
+abstraction is supported, systems can be updated without necessarily having
+to replace the kernel.
+
+When a Linux driver or subsystem is first implemented using ACPI, it by
+definition ends up requiring a specific version of the ACPI specification
+-- its baseline.  ACPI firmware must continue to work, even though it may
+not be optimal, with the earliest kernel version that first provides support
+for that baseline version of ACPI.  There may be a need for additional drivers,
+but adding new functionality (e.g., CPU power management) should not break
+older kernel versions.  Further, ACPI firmware must also work with the most
+recent version of the kernel.
+
+
+Relationship with Device Tree
+-----------------------------
+ACPI support in drivers and subsystems for ARMv8 should never be mutually
+exclusive with DT support at compile time.
+
+At boot time the kernel will only use one description method depending on
+parameters passed from the bootloader (including kernel bootargs).
+
+Regardless of whether DT or ACPI is used, the kernel must always be capable
+of booting with either scheme (in kernels with both schemes enabled at compile
+time).
+
+
+Booting using ACPI tables
+-------------------------
+The only defined method for passing ACPI tables to the kernel on ARMv8
+is via the UEFI system configuration table.  Just so it is explicit, this
+means that ACPI is only supported on platforms that boot via UEFI.
+
+When an ARMv8 system boots, it can either have DT information, ACPI tables,
+or in some very unusual cases, both.  If no command line parameters are used,
+the kernel will try to use DT for device enumeration; if there is no DT
+present, the kernel will try to use ACPI tables, but only if they are present.
+If neither is available, the kernel will not boot.  If acpi=force is used
+on the command line, the kernel will attempt to use ACPI tables first, but
+fall back to DT if there are no ACPI tables present.  The basic idea is that
+the kernel will not fail to boot unless it absolutely has no other choice.
+
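+As a purely illustrative example, a platform shipping both ACPI tables
+and a DT could be asked to prefer the ACPI tables with a kernel command
+line such as the following (the console and root parameters here are
+hypothetical; only acpi=force is the relevant part):
+
+    console=ttyAMA0 root=/dev/vda2 acpi=force
+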
+Processing of ACPI tables may be disabled by passing acpi=off on the kernel
+command line; this is the default behavior.
+
+In order for the kernel to load and use ACPI tables, the UEFI implementation
+MUST set the ACPI_20_TABLE_GUID to point to the RSDP table (the table with
+the ACPI signature "RSD PTR ").  If this pointer is incorrect and acpi=force
+is used, the kernel will disable ACPI and try to use DT to boot instead; the
+kernel has, in effect, determined that ACPI tables are not present at that
+point.
+
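+To sketch how that lookup works (this is simplified, not the literal
+kernel code, and rsdp_phys is just a local name for illustration), the
+ACPI core scans the UEFI configuration table entries for this GUID and
+records the address it finds:
+
+	/* for each efi_config_table_t entry 't' in the UEFI system table */
+	if (!efi_guidcmp(t->guid, ACPI_20_TABLE_GUID))
+		rsdp_phys = t->table;	/* physical address of the RSDP */
+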
+If the pointer to the RSDP table is correct, the table will be mapped into
+the kernel by the ACPI core, using the address provided by UEFI.
+
+The ACPI core will then locate and map in all other ACPI tables provided by
+using the addresses in the RSDP table to find the XSDT (eXtended System
+Description Table).  The XSDT in turn provides the addresses to all other
+ACPI tables provided by the system firmware; the ACPI core will then traverse
+this table and map in the tables listed.
+
+The ACPI core will ignore any provided RSDT (Root System Description Table).
+RSDTs have been deprecated and are ignored on arm64 since they only allow
+for 32-bit addresses.
+
+Further, the ACPI core will only use the 64-bit address fields in the FADT
+(Fixed ACPI Description Table).  Any 32-bit address fields in the FADT will
+be ignored on arm64.
+
+Hardware reduced mode (see Section 4.1 of the ACPI 5.1 specification) will
+be enforced by the ACPI core on arm64.  Doing so allows the ACPI core to
+run less complex code since it no longer has to provide support for legacy
+hardware from other architectures.  Any fields that are not to be used for
+hardware reduced mode must be set to zero.
+
+For the ACPI core to operate properly, and in turn provide the information
+the kernel needs to configure devices, it expects to find the following
+tables (all section numbers refer to the ACPI 5.1 specification):
+
+    -- RSDP (Root System Description Pointer), section 5.2.5
+
+    -- XSDT (eXtended System Description Table), section 5.2.8
+
+    -- FADT (Fixed ACPI Description Table), section 5.2.9
+
+    -- DSDT (Differentiated System Description Table), section
+       5.2.11.1
+
+    -- MADT (Multiple APIC Description Table), section 5.2.12
+
+    -- GTDT (Generic Timer Description Table), section 5.2.24
+
+    -- If PCI is supported, the MCFG (Memory mapped ConFiGuration
+       Table), section 5.2.6, specifically Table 5-31.
+
+If the above tables are not all present, the kernel may or may not be
+able to boot properly since it may not be able to configure all of the
+devices available.
+
+
+ACPI Detection
+--------------
+Drivers should determine their probe() type by checking for a null
+value for ACPI_HANDLE, by checking .of_node, or by other information in
+the device structure.  This is detailed further in the "Driver
+Recommendations" section.
+
+In non-driver code, if the presence of ACPI needs to be detected at
+runtime, then check the value of acpi_disabled. If CONFIG_ACPI is not
+set, acpi_disabled will always be 1.
+
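+A minimal sketch of the non-driver case (the function name here is
+hypothetical):
+
+#include <linux/init.h>
+#include <linux/acpi.h>
+
+static int __init example_subsys_init(void)
+{
+	/* true when CONFIG_ACPI is not set or ACPI tables are not in use */
+	if (acpi_disabled)
+		return -ENODEV;
+
+	/* safe to touch ACPI-specific state from here on */
+	return 0;
+}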
+
+Device Enumeration
+------------------
+Device descriptions in ACPI should use standard recognized ACPI interfaces.
+These may contain less information than is typically provided via a Device
+Tree description for the same device.  This is also one of the reasons that
+ACPI can be useful -- the driver takes into account that it may have less
+detailed information about the device and uses sensible defaults instead.
+If done properly in the driver, the hardware can change and improve over
+time without the driver having to change at all.
+
+Clocks provide an excellent example.  In DT, clocks need to be specified
+and the drivers need to take them into account.  In ACPI, the assumption
+is that UEFI will leave the device in a reasonable default state, including
+any clock settings.  If for some reason the driver needs to change a clock
+value, this can be done in an ACPI method; all the driver needs to do is
+invoke the method and not concern itself with what the method needs to do
+to change the clock.  Changing the hardware can then take place over time
+by changing what the ACPI method does, and not the driver.
+
+In DT, the parameters needed by the driver to set up clocks as in the example
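+As a sketch of what invoking such a method looks like from a driver
+(the method name "CLKS" is purely hypothetical -- real names come from
+the platform's ASL -- and 'dev' is assumed to be the driver's struct
+device):
+
+	acpi_status status;
+
+	/*
+	 * Ask the firmware to reconfigure the device clock; how it
+	 * does so is entirely the platform's concern.
+	 */
+	status = acpi_evaluate_object(ACPI_HANDLE(dev), "CLKS", NULL, NULL);
+	if (ACPI_FAILURE(status))
+		dev_err(dev, "CLKS method evaluation failed\n");
+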
+above are known as "bindings"; in ACPI, these are known as "Device Properties"
+and provided to a driver via the _DSD object.
+
+ACPI tables are described with a formal language called ASL, the ACPI
+Source Language (section 19 of the specification).  This means that there
+are always multiple ways to describe the same thing -- including device
+properties.  For example, device properties could use an ASL construct
+that looks like this: Name(KEY0, "value0").  An ACPI device driver would
+then retrieve the value of the property by evaluating the KEY0 object.
+However, using Name() this way has multiple problems: (1) ACPI limits
+names ("KEY0") to four characters unlike DT; (2) there is no industry
+wide registry that maintains a list of names, minimizing re-use; (3)
+there is also no registry for the definition of property values ("value0"),
+again making re-use difficult; and (4) how does one maintain backward
+compatibility as new hardware comes out?  The _DSD method was created
+to solve precisely these sorts of problems; Linux drivers should ALWAYS
+use the _DSD method for device properties and nothing else.
+
+The _DSM object (ACPI Section 9.14.1) could also be used for conveying
+device properties to a driver.  Linux drivers should only expect it to
+be used if _DSD cannot represent the data required, and there is no way
+to create a new UUID for the _DSD object.  Note that there is even less
+regulation of the use of _DSM than there is of _DSD.  Drivers that depend
+on the contents of _DSM objects will be more difficult to maintain over
+time because of this; as of this writing, the use of _DSM is the cause
+of quite a few firmware problems and is not recommended.
+
+Drivers should look for device properties in the _DSD object ONLY; the _DSD
+object is described in the ACPI specification section 6.2.5, but this only
+describes how to define the structure of an object returned via _DSD, and
+how specific data structures are defined by specific UUIDs.  Linux should
+only use the _DSD Device Properties UUID [5]:
+
+   -- UUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301
+   
+   -- http://www.uefi.org/sites/default/files/resources/_DSD-device-properties-UUID.pdf
+
+The UEFI Forum provides a mechanism for registering device properties [4]
+so that they may be used across all operating systems supporting ACPI.
+Device properties that have not been registered with the UEFI Forum should
+not be used.
+
+Before creating new device properties, check to be sure that they have not
+been defined before and either registered in the Linux kernel documentation
+as DT bindings, or the UEFI Forum as device properties.  While we do not want
+to simply move all DT bindings into ACPI device properties, we can learn from
+what has been previously defined.
+
+If it is necessary to define a new device property, or if it makes sense to
+synthesize the definition of a binding so it can be used in any firmware,
+both DT bindings and ACPI device properties for device drivers have review
+processes.  Use them both.  When the driver itself is submitted for review
+to the Linux mailing lists, the device property definitions needed must be
+submitted at the same time.  A driver that supports ACPI and uses device
+properties will not be considered complete without their definitions.  Once
+the device property has been accepted by the Linux community, it must be
+registered with the UEFI Forum [4], which will review it again for consistency
+within the registry.  This may require iteration.  The UEFI Forum, though,
+will always be the canonical site for device property definitions.
+
+It may make sense to provide notice to the UEFI Forum that there is the
+intent to register a previously unused device property name as a means of
+reserving the name for later use.  Other operating system vendors will
+also be submitting registration requests and this may help smooth the
+process.
+
+Once registration and review have been completed, the kernel provides an
+interface for looking up device properties in a manner independent of
+whether DT or ACPI is being used.  This API should be used [6]; it can
+eliminate some duplication of code paths in driver probing functions and
+discourage divergence between DT bindings and ACPI device properties.
+
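+A brief sketch of that interface in use (the "queue-depth" property and
+the probe function are hypothetical):
+
+#include <linux/platform_device.h>
+#include <linux/property.h>
+
+static int example_probe(struct platform_device *pdev)
+{
+	u32 depth;
+
+	/*
+	 * Works whether the value comes from a DT binding or from an
+	 * ACPI _DSD device property; fall back to a sane default if
+	 * the property is absent.
+	 */
+	if (device_property_read_u32(&pdev->dev, "queue-depth", &depth))
+		depth = 16;
+
+	/* continue with generic setup using 'depth' */
+	return 0;
+}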
+
+Programmable Power Control Resources
+------------------------------------
+Programmable power control resources include such resources as voltage/current
+providers (regulators) and clock sources.
+
+With ACPI, the kernel clock and regulator framework is not expected to be used
+at all.
+
+The kernel assumes that power control of these resources is represented with
+Power Resource Objects (ACPI section 7.1).  The ACPI core will then handle
+correctly enabling and disabling resources as they are needed.  In order to
+get that to work, ACPI assumes each device has defined D-states and that these
+can be controlled through the optional ACPI methods _PS0, _PS1, _PS2, and _PS3;
+in ACPI, _PS0 is the method to invoke to turn a device full on, and _PS3 is for
+turning a device full off.
+
+There are two options for using those Power Resources.  They can:
+
+   -- be managed in a _PSx method which gets called on entry to power
+      state Dx.
+
+   -- be declared separately as power resources with their own _ON and _OFF
+      methods.  They are then tied back to D-states for a particular device
+      via _PRx which specifies which power resources a device needs to be on
+      while in Dx.  The kernel then tracks the number of devices using a
+      power resource and calls _ON/_OFF as needed.
+
+The kernel ACPI code will also assume that the _PSx methods follow the normal
+ACPI rules for such methods:
+
+   -- If either _PS0 or _PS3 is implemented, then the other method must also
+      be implemented.
+
+   -- If a device requires usage or setup of a power resource when on, the ASL
+      should arrange for it to be allocated/enabled in the _PS0 method.
+
+   -- Resources allocated or enabled in the _PS0 method should be disabled
+      or de-allocated in the _PS3 method.
+
+   -- Firmware will leave the resources in a reasonable state before handing
+      over control to the kernel.
+
+Such code in _PSx methods will of course be very platform specific.  But,
+this allows the driver to abstract out the interface for operating the device
+and avoid having to read special non-standard values from ACPI tables. Further,
+abstracting the use of these resources allows the hardware to change over time
+without requiring updates to the driver.
+
+
+Clocks
+------
+ACPI makes the assumption that clocks are initialized by the firmware -- 
+UEFI, in this case -- to some working value before control is handed over
+to the kernel.  This has implications for devices such as UARTs, or SoC-driven
+LCD displays, for example.
+
+When the kernel boots, the clocks are assumed to be set to reasonable
+working values.  If for some reason the frequency needs to change -- e.g.,
+throttling for power management -- the device driver should expect that 
+process to be abstracted out into some ACPI method that can be invoked 
+(please see the ACPI specification for further recommendations on standard
+methods to be expected).  The only exceptions to this are CPU clocks where
+CPPC provides a much richer interface than ACPI methods.  If the clocks
+are not set, there is no direct way for Linux to control them.
+
+If an SoC vendor wants to provide fine-grained control of the system clocks,
+they could do so by providing ACPI methods that could be invoked by Linux
+drivers.  However, this is NOT recommended and Linux drivers should NOT use
+such methods, even if they are provided.  Such methods are not currently
+standardized in the ACPI specification, and using them could tie a kernel
+to a very specific SoC, or tie an SoC to a very specific version of the
+kernel, both of which we are trying to avoid.
+
+
+Driver Recommendations
+----------------------
+DO NOT remove any DT handling when adding ACPI support for a driver.  The
+same device may be used on many different systems.
+
+DO try to structure the driver so that it is data-driven.  That is, set up
+a struct containing internal per-device state based on defaults and whatever
+else must be discovered by the driver probe function.  Then, have the rest
+of the driver operate off of the contents of that struct.  Doing so should
+allow most divergence between ACPI and DT functionality to be kept local to
+the probe function instead of being scattered throughout the driver.  For
+example:
+
+static int device_probe_dt(struct platform_device *pdev)
+{
+	/* DT specific functionality */
+	...
+}
+
+static int device_probe_acpi(struct platform_device *pdev)
+{
+	/* ACPI specific functionality */
+	...
+}
+
+static int device_probe(struct platform_device *pdev)
+{
+	...
+	struct device_node *node = pdev->dev.of_node;
+	...
+
+	if (node)
+		ret = device_probe_dt(pdev);
+	else if (ACPI_HANDLE(&pdev->dev))
+		ret = device_probe_acpi(pdev);
+	else
+		/* other initialization */
+		...
+	/* Continue with any generic probe operations */
+	...
+}
+
+DO keep the MODULE_DEVICE_TABLE entries together in the driver to make it
+clear the different names the driver is probed for, both from DT and from
+ACPI:
+
+static const struct of_device_id virtio_mmio_match[] = {
+        { .compatible = "virtio,mmio", },
+        { }
+};
+MODULE_DEVICE_TABLE(of, virtio_mmio_match);
+
+static const struct acpi_device_id virtio_mmio_acpi_match[] = {
+        { "LNRO0005", },
+        { }
+};
+MODULE_DEVICE_TABLE(acpi, virtio_mmio_acpi_match);
+
+
+ASWG
+----
+The ACPI specification changes regularly.  During the year 2014, for instance,
+version 5.1 was released and version 6.0 substantially completed, with most of
+the changes being driven by ARM-specific requirements.  Proposed changes are
+presented and discussed in the ASWG (ACPI Specification Working Group) which
+is a part of the UEFI Forum.
+
+Participation in this group is open to all UEFI members.  Please see
+http://www.uefi.org/workinggroup for details on group membership.
+
+It is the intent of the ARMv8 ACPI kernel code to follow the ACPI specification
+as closely as possible, and to only implement functionality that complies with
+the released standards from UEFI ASWG.  As a practical matter, there will be
+vendors that provide bad ACPI tables or violate the standards in some way.
+If this is because of errors, quirks and fixups may be necessary, but will
+be avoided if possible.  If there are features missing from ACPI that preclude
+it from being used on a platform, ECRs (Engineering Change Requests) should be
+submitted to ASWG and go through the normal approval process; for those that
+are not UEFI members, many other members of the Linux community are and would
+likely be willing to assist in submitting ECRs.
+
+
+Linux Code
+----------
+Individual items specific to Linux on ARM, contained in the Linux
+source code, are in the list that follows:
+
+ACPI_OS_NAME		This macro defines the string to be returned when
+			an ACPI method invokes the _OS method.  On ARM64
+			systems, this macro will be "Linux" by default.
+			The command line parameter acpi_os=<string>
+			can be used to set it to some other value.  The
+			default value for other architectures is "Microsoft
+			Windows NT", for example.
+
+ACPI Objects
+------------
+Detailed expectations for ACPI tables and objects are listed in the file
+Documentation/arm64/acpi_object_usage.txt.
+
+
+References
+----------
+[0] http://silver.arm.com -- document ARM-DEN-0029, or newer
+    "Server Base System Architecture", version 2.3, dated 27 Mar 2014
+
+[1] http://infocenter.arm.com/help/topic/com.arm.doc.den0044a/Server_Base_Boot_Requirements.pdf
+    Document ARM-DEN-0044A, or newer: "Server Base Boot Requirements, System
+    Software on ARM Platforms", dated 16 Aug 2014
+
+[2] http://www.secretlab.ca/archives/151, 10 Jan 2015, Copyright (c) 2015,
+    Linaro Ltd., written by Grant Likely.  A copy of the verbatim text (apart
+    from formatting) is also in Documentation/arm64/why_use_acpi.txt.
+
+[3] AMD ACPI for Seattle platform documentation:
+    http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/Seattle_ACPI_Guide.pdf 
+
+[4] http://www.uefi.org/acpi -- please see the link for the "ACPI _DSD Device
+    Property Registry Instructions"
+
+[5] http://www.uefi.org/acpi -- please see the link for the "_DSD (Device
+    Specific Data) Implementation Guide"
+
+[6] Kernel code for the unified device property interface can be found in
+    include/linux/property.h and drivers/base/property.c.
+
+
+Authors
+-------
+Al Stone <al.stone@linaro.org>
+Graeme Gregory <graeme.gregory@linaro.org>
+Hanjun Guo <hanjun.guo@linaro.org>
+
+Grant Likely <grant.likely@linaro.org>, for the "Why ACPI on ARM?" section
+
diff --git a/Documentation/arm64/why_use_acpi.txt b/Documentation/arm64/why_use_acpi.txt
new file mode 100644
index 0000000..9bb583e
--- /dev/null
+++ b/Documentation/arm64/why_use_acpi.txt
@@ -0,0 +1,231 @@
+Why ACPI on ARM?
+----------------
+Copyright (c) 2015, Linaro, Ltd.
+Author: Grant Likely <grant.likely@linaro.org>
+
+Why are we doing ACPI on ARM? That question has been asked many times, but
+we haven’t yet had a good summary of the most important reasons for wanting
+ACPI on ARM. This article is an attempt to state the rationale clearly.
+
+During an email conversation late last year, Catalin Marinas asked for
+a summary of exactly why we want ACPI on ARM, Dong Wei replied with the 
+following list:
+> 1. Support multiple OSes, including Linux and Windows
+> 2. Support device configurations
+> 3. Support dynamic device configurations (hot add/removal)
+> 4. Support hardware abstraction through control methods
+> 5. Support power management
+> 6. Support thermal management
+> 7. Support RAS interfaces
+
+The above list is certainly true in that all of them need to be supported.
+However, that list doesn’t give the rationale for choosing ACPI. We already
+have DT mechanisms for doing most of the above, and can certainly create
+new bindings for anything that is missing. So, if it isn’t an issue of
+functionality, then how does ACPI differ from DT and why is ACPI a better
+fit for general purpose ARM servers?
+
+The difference is in the support model. To explain what I mean, I’m first
+going to expand on each of the items above and discuss the similarities and
+differences between ACPI and DT. Then, with that as the groundwork, I’ll
+discuss how ACPI is a better fit for the general purpose hardware support
+model.
+
+
+Device Configurations
+---------------------
+2. Support device configurations
+3. Support dynamic device configurations (hot add/removal)
+
+From day one, DT was about device configurations. There isn’t any significant
+difference between ACPI & DT here. In fact, the majority of ACPI tables are
+completely analogous to DT descriptions. With the exception of the DSDT and
+SSDT tables, most ACPI tables are merely flat data used to describe hardware.
+
+DT platforms have also supported dynamic configuration and hotplug for years.
+There isn’t a lot here that differentiates between ACPI and DT. The biggest
+difference is that dynamic changes to the ACPI namespace can be triggered by
+ACPI methods, whereas for DT, changes are received as messages from firmware
+and have been very much platform specific (e.g., IBM pSeries does this).
+
+
+Power Management
+----------------
+4. Support hardware abstraction through control methods
+5. Support power management
+6. Support thermal management
+
+Power, thermal, and clock management can all be dealt with as a group. ACPI
+defines a power management model (OSPM) that both the platform and the OS
+conform to. The OS implements the OSPM state machine, but the platform can
+provide state change behaviour in the form of bytecode methods. Methods can
+access hardware directly or hand off PM operations to a coprocessor. The OS
+really doesn’t have to care about the details as long as the platform obeys
+the rules of the OSPM model.
+
+With DT, the kernel has device drivers for each and every component in the
+platform, and configures them using DT data. DT itself doesn’t have a PM model.
+Rather the PM model is an implementation detail of the kernel. Device drivers
+use DT data to decide how to handle PM state changes. We have clock, pinctrl,
+and regulator frameworks in the kernel for working out runtime PM. However,
+this only works when all the drivers and support code have been merged into
+the kernel. When the kernel’s PM model doesn’t work for new hardware, then we
+change the model. This works very well for mobile/embedded because the vendor
+controls the kernel. We can change things when we need to, but we also struggle
+with getting board support mainlined.
+
+This difference has a big impact when it comes to OS support. Engineers from
+hardware vendors, Microsoft, and most vocally Red Hat have all told me bluntly
+that rebuilding the kernel doesn’t work for enterprise OS support. Their model
+is based around a fixed OS release that ideally boots out-of-the-box. It may
+still need additional device drivers for specific peripherals/features, but
+from a system view, the OS works. When additional drivers are provided 
+separately, those drivers fit within the existing OSPM model for power 
+management. This is where ACPI has a technical advantage over DT. The ACPI
+OSPM model and its bytecode give the HW vendors a level of abstraction
+under their control, not the kernel’s. When the hardware behaves differently
+from what the OS expects, the vendor is able to change the behaviour without
+changing the HW or patching the OS.
+
+At this point you’d be right to point out that it is harder to get the whole
+system working correctly when behaviour is split between the kernel and the
+platform. The OS must trust that the platform doesn’t violate the OSPM model.
+All manner of bad things happen if it does. That is exactly why the DT model
+doesn’t encode behaviour: It is easier to make changes and fix bugs when
+everything is within the same code base. We don’t need a platform/kernel 
+split when we can modify the kernel.
+
+However, the enterprise folks don’t have that luxury. The platform/kernel
+split isn’t a design choice. It is a characteristic of the market. Hardware
+and OS vendors each have their own product timetables, and they don’t line
+up. The timeline for getting patches into the kernel and flowing through into
+OS releases puts OS support far downstream from the actual release of hardware.
+Hardware vendors simply cannot wait for OS support to come online to be able to
+release their products. They need to be able to work with available releases,
+and make their hardware behave in the way the OS expects. The advantage of ACPI
+OSPM is that it defines behaviour and limits what the hardware is allowed to do
+without involving the kernel.
+
+What remains is sorting out how we make sure everything works. How do we make
+sure there is enough cross platform testing to ensure new hardware doesn’t 
+ship broken and that new OS releases don’t break on old hardware? Those are
+the reasons why a UEFI/ACPI firmware summit is being organized, it’s why the
+UEFI forum holds plugfests 3 times a year, and it is why we’re working on 
+FWTS and LuvOS.
+
+
+Reliability, Availability & Serviceability (RAS)
+------------------------------------------------
+7. Support RAS interfaces
+
+This isn’t a question of whether or not DT can support RAS. Of course it can.
+Rather it is a matter of RAS bindings already existing for ACPI, including a
+usage model. We’ve barely begun to explore this on DT. This item doesn’t make
+ACPI technically superior to DT, but it certainly makes it more mature.
+
+
+Multiplatform Support
+---------------------
+1. Support multiple OSes, including Linux and Windows
+
+I’m tackling this item last because I think it is the most contentious for
+those of us in the Linux world. I wanted to get the other issues out of the
+way before addressing it.
+
+The separation between hardware vendors and OS vendors in the server market
+is new for ARM. For the first time ARM hardware and OS release cycles are
+completely decoupled from each other, and neither are expected to have specific
+knowledge of the other (ie. the hardware vendor doesn’t control the choice of
+OS). ARM and their partners want to create an ecosystem of independent OSes
+and hardware platforms that don’t explicitly require the former to be ported
+to the latter.
+
+Now, one could argue that Linux is driving the potential market for ARM
+servers, and therefore Linux is the only thing that matters, but hardware
+vendors don’t see it that way. For hardware vendors it is in their best
+interest to support as wide a choice of OSes as possible in order to catch
+the widest potential customer base. Even if the majority choose Linux, some
+will choose BSD, some will choose Windows, and some will choose something
+else. Whether or not we think this is foolish is beside the point; it isn’t
+something we have influence over.
+
+During early ARM server planning meetings between ARM, its partners and other
+industry representatives (myself included) we discussed this exact point.
+Before us were two options, DT and ACPI. As one of the Linux people in the
+room, I advised that ACPI’s closed governance model was a show stopper for
+Linux and that DT is the working interface. Microsoft on the other hand made
+it abundantly clear that ACPI was the only interface that they would support.
+For their part, the hardware vendors stated the platform abstraction behaviour
+of ACPI is a hard requirement for their support model and that they would not
+close the door on either Linux or Windows.
+
+However, the one thing that all of us could agree on was that supporting
+multiple interfaces doesn’t help anyone: It would require twice as much 
+effort on defining bindings (once for Linux-DT and once for Windows-ACPI)
+and it would require firmware to describe everything twice. Eventually we
+reached the compromise to use ACPI, but on the condition of opening the
+governance process to give Linux engineers equal influence over the
+specification. The fact that we now have a much better seat at the ACPI
+table, for both ARM and x86, is a direct result of these early ARM server
+negotiations. We are no longer second class citizens in the ACPI world and
+are actually driving much of the recent development.
+
+I know that this line of thought is more about market forces rather than a
+hard technical argument between ACPI and DT, but it is an equally significant
+one. Agreeing on a single way of doing things is important. The ARM server
+ecosystem is better for the agreement to use the same interface for all
+operating systems. This is what is meant by being standards compliant. The standard
+is a codification of the mutually agreed interface. It provides confidence
+that all vendors are using the same rules for interoperability.
+
+
+Summary
+-------
+To summarize, here is the short form rationale for ACPI on ARM:
+
+-- ACPI’s bytecode allows the platform to encode behaviour. DT explicitly
+   does not support this. For hardware vendors, being able to encode behaviour
+   is an important tool for supporting operating system releases on new
+   hardware.
+
+-- ACPI’s OSPM defines a power management model that constrains what the
+   platform is allowed to do, while still leaving flexibility in hardware
+   design.
+
+-- For enterprise use-cases, ACPI has established bindings, such as for RAS,
+   which are used in production. DT does not have equivalent bindings. Yes,
+   we can define those bindings, but doing so means ARM and x86 will use
+   completely different code paths in both firmware and the kernel.
+
+-- Choosing a single interface for platform/OS abstraction is important. It
+   is not reasonable to require vendors to implement both DT and ACPI if they
+   want to support multiple operating systems. Agreeing on a single interface
+   instead of being fragmented into per-OS interfaces makes for better
+   interoperability overall.
+
+-- The ACPI governance process works well and we’re at the same table as HW
+   vendors and other OS vendors. In fact, there is no longer any reason to
+   feel that ACPI is a Windows thing or that we are playing second fiddle to
+   Microsoft. The move of ACPI governance into the UEFI forum has significantly
+   opened up the processes, and currently, a large portion of the changes being
+   made to ACPI is being driven by Linux.
+
+At the beginning of this article I made the statement that the difference
+is in the support model. For servers, responsibility for hardware behaviour
+cannot be purely the domain of the kernel, but rather is split between the
+platform and the kernel. ACPI frees the OS from needing to understand all
+the minute details of the hardware, so that the OS doesn’t need to be ported
+to each and every device individually. It allows the hardware vendors to take
+responsibility for PM behaviour without depending on an OS release cycle that
+is not under their control.
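+
+As a concrete (if simplified) sketch of what this buys us, consider a driver
+powering on its device. The _PS0 method and the helper below are hypothetical,
+but the pattern is real: the kernel asks the platform’s AML to do the work
+instead of poking the registers itself:
+
+	#include <linux/acpi.h>
+
+	/*
+	 * Power the device up by evaluating its firmware-provided _PS0
+	 * method; the register sequence lives in the platform's AML.
+	 */
+	static int example_power_on(struct acpi_device *adev)
+	{
+		acpi_status status;
+
+		status = acpi_evaluate_object(adev->handle, "_PS0",
+					      NULL, NULL);
+		return ACPI_FAILURE(status) ? -ENODEV : 0;
+	}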
+
+ACPI is also important because hardware and OS vendors have already worked
+out how to use it to support the general purpose ecosystem. The infrastructure
+is in place, the bindings are in place, and the process is in place. DT does
+exactly what we need it to when working with vertically integrated devices,
+but we don’t have good processes for supporting what the server vendors need.
+We could potentially get there with DT, but doing so doesn’t buy us anything.
+ACPI already does what the hardware vendors need, Microsoft won’t collaborate
+with us on DT, and the hardware vendors would still need to provide two
+completely separate firmware interfaces: one for Linux and one for Windows.
+
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index bfcb1a6..d6c35a7 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -165,7 +165,7 @@ multipliers 'Kilo', 'Mega', and 'Giga', equalling 2^10, 2^20, and 2^30
 bytes respectively. Such letter suffixes can also be entirely omitted.
 
 
-	acpi=		[HW,ACPI,X86]
+	acpi=		[HW,ACPI,X86,ARM64]
 			Advanced Configuration and Power Interface
 			Format: { force | off | strict | noirq | rsdt }
 			force -- enable ACPI if default was off
@@ -175,6 +175,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 				strictly ACPI specification compliant.
 			rsdt -- prefer RSDT over (default) XSDT
 			copy_dsdt -- copy DSDT to memory
+			For ARM64, only "acpi=off" and "acpi=force" are available.
 
 			See also Documentation/power/runtime_pm.txt, pci=noacpi
 
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 676454a..6ef7874 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1,5 +1,6 @@
 config ARM64
 	def_bool y
+	select ACPI_REDUCED_HARDWARE_ONLY if ACPI
 	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
 	select ARCH_HAS_GCOV_PROFILE_ALL
@@ -194,6 +195,10 @@ config PCI_DOMAINS_GENERIC
 config PCI_SYSCALL
 	def_bool PCI
 
+config PCI_MMCONFIG
+	def_bool y
+	depends on PCI && ACPI
+
 source "drivers/pci/Kconfig"
 source "drivers/pci/pcie/Kconfig"
 source "drivers/pci/hotplug/Kconfig"
@@ -384,6 +389,9 @@ config SMP
 
 	  If you don't know what to do here, say N.
 
+config ARM_PARKING_PROTOCOL
+	def_bool y if SMP
+
 config SCHED_MC
 	bool "Multi-core scheduler support"
 	depends on SMP
@@ -658,6 +666,8 @@ source "drivers/Kconfig"
 
 source "drivers/firmware/Kconfig"
 
+source "drivers/acpi/Kconfig"
+
 source "fs/Kconfig"
 
 source "arch/arm64/kvm/Kconfig"
diff --git a/arch/arm64/include/asm/acenv.h b/arch/arm64/include/asm/acenv.h
new file mode 100644
index 0000000..b49166f
--- /dev/null
+++ b/arch/arm64/include/asm/acenv.h
@@ -0,0 +1,18 @@
+/*
+ * ARM64 specific ACPICA environments and implementation
+ *
+ * Copyright (C) 2014, Linaro Ltd.
+ *   Author: Hanjun Guo <hanjun.guo@linaro.org>
+ *   Author: Graeme Gregory <graeme.gregory@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _ASM_ACENV_H
+#define _ASM_ACENV_H
+
+/* It is required unconditionally by ACPI core, update it when needed. */
+
+#endif /* _ASM_ACENV_H */
diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
new file mode 100644
index 0000000..b4d1971
--- /dev/null
+++ b/arch/arm64/include/asm/acpi.h
@@ -0,0 +1,120 @@
+/*
+ *  Copyright (C) 2013-2014, Linaro Ltd.
+ *	Author: Al Stone <al.stone@linaro.org>
+ *	Author: Graeme Gregory <graeme.gregory@linaro.org>
+ *	Author: Hanjun Guo <hanjun.guo@linaro.org>
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License version 2 as
+ *  published by the Free Software Foundation;
+ */
+
+#ifndef _ASM_ACPI_H
+#define _ASM_ACPI_H
+
+#include <linux/irqchip/arm-gic-acpi.h>
+
+#include <linux/mm.h>
+#include <asm/smp_plat.h>
+
+/* Basic configuration for ACPI */
+#ifdef	CONFIG_ACPI
+#define acpi_strict 1	/* No out-of-spec workarounds on ARM64 */
+extern int acpi_disabled;
+extern int acpi_noirq;
+extern int acpi_pci_disabled;
+
+/* 1 to indicate PSCI 0.2+ is implemented */
+static inline bool acpi_psci_present(void)
+{
+	return acpi_gbl_FADT.arm_boot_flags & ACPI_FADT_PSCI_COMPLIANT;
+}
+
+/* 1 to indicate HVC must be used instead of SMC as the PSCI conduit */
+static inline bool acpi_psci_use_hvc(void)
+{
+	return acpi_gbl_FADT.arm_boot_flags & ACPI_FADT_PSCI_USE_HVC;
+}
+
+static inline void disable_acpi(void)
+{
+	acpi_disabled = 1;
+	acpi_pci_disabled = 1;
+	acpi_noirq = 1;
+}
+
+static inline void enable_acpi(void)
+{
+	acpi_disabled = 0;
+	acpi_pci_disabled = 0;
+	acpi_noirq = 0;
+}
+
+/*
+ * The MPIDR value provided in the GICC structure is 64 bits, but the
+ * existing phys_id (CPU hardware ID) used in the ACPI processor
+ * driver is 32-bit. To conform to the same datatype we need
+ * to repack the GICC structure MPIDR.
+ *
+ * Bits other than the following 32 bits are defined as 0, so no
+ * information is lost after repacking.
+ *
+ * Bits [0:7] Aff0;
+ * Bits [8:15] Aff1;
+ * Bits [16:23] Aff2;
+ * Bits [32:39] Aff3;
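+ *
+ * For example, MPIDR 0x0000000100000203 (Aff3 = 1) packs to 0x01000203.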
+ */
+static inline u32 pack_mpidr(u64 mpidr)
+{
+	return (u32) ((mpidr & 0xff00000000) >> 8) | mpidr;
+}
+
+/*
+ * The ACPI processor driver in the ACPI core code needs this macro
+ * to find out whether this cpu was already mapped (mapping from CPU
+ * hardware ID to CPU logical ID) or not.
+ *
+ * cpu_logical_map(cpu) is the mapping between the logical cpu and its
+ * MPIDR, and the MPIDR is the cpu hardware ID we need to pack.
+ */
+#define cpu_physical_id(cpu) pack_mpidr(cpu_logical_map(cpu))
+
+/*
+ * This is used by the ACPI core when kdump boots a UP system with an
+ * SMP kernel; with this check the ACPI core will neither override the
+ * CPU index obtained from the GICC with 0 nor print an error message.
+ * Since the MADT must provide at least one GICC structure for GIC
+ * initialization, a CPU will always be available in the MADT on ARM64.
+ */
+static inline bool acpi_has_cpu_in_madt(void)
+{
+	return true;
+}
+
+static inline void arch_fix_phys_package_id(int num, u32 slot) { }
+void __init acpi_init_cpus(void);
+
+extern int acpi_get_cpu_parked_address(int cpu, u64 *addr);
+
+#else
+static inline void disable_acpi(void) { }
+static inline void enable_acpi(void) { }
+static inline bool acpi_psci_present(void) { return false; }
+static inline bool acpi_psci_use_hvc(void) { return false; }
+static inline void acpi_init_cpus(void) { }
+static inline int acpi_get_cpu_parked_address(int cpu, u64 *addr) { return -EOPNOTSUPP; }
+#endif /* CONFIG_ACPI */
+
+/*
+ * ACPI table mapping
+ */
+static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
+					    acpi_size size)
+{
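+	/*
+	 * Tables living in RAM are mapped cacheable; anything outside
+	 * RAM falls back to a device mapping via ioremap().
+	 */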
+	if (!page_is_ram(phys >> PAGE_SHIFT))
+		return ioremap(phys, size);
+
+	return ioremap_cache(phys, size);
+}
+#define acpi_os_ioremap acpi_os_ioremap
+
+#endif /*_ASM_ACPI_H*/
diff --git a/arch/arm64/include/asm/cpu_ops.h b/arch/arm64/include/asm/cpu_ops.h
index da301ee..5a31d67 100644
--- a/arch/arm64/include/asm/cpu_ops.h
+++ b/arch/arm64/include/asm/cpu_ops.h
@@ -66,5 +66,6 @@ struct cpu_operations {
 extern const struct cpu_operations *cpu_ops[NR_CPUS];
 int __init cpu_read_ops(struct device_node *dn, int cpu);
 void __init cpu_read_bootcpu_ops(void);
+const struct cpu_operations *cpu_get_ops(const char *name);
 
 #endif /* ifndef __ASM_CPU_OPS_H */
diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 1f65be3..c0f89a0 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -114,7 +114,8 @@ typedef struct user_fpsimd_state elf_fpregset_t;
  */
 #define elf_check_arch(x)		((x)->e_machine == EM_AARCH64)
 
-#define elf_read_implies_exec(ex,stk)	(stk != EXSTACK_DISABLE_X)
+#define elf_read_implies_exec(ex,stk)	(test_thread_flag(TIF_32BIT) \
+					 ? (stk == EXSTACK_ENABLE_X) : 0)
 
 #define CORE_DUMP_USE_REGSET
 #define ELF_EXEC_PAGESIZE	PAGE_SIZE
diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index defa0ff9..f196e40 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -62,6 +62,9 @@ void __init early_fixmap_init(void);
 
 #define __early_set_fixmap __set_fixmap
 
+#define __late_set_fixmap __set_fixmap
+#define __late_clear_fixmap(idx) __set_fixmap((idx), 0, FIXMAP_PAGE_CLEAR)
+
 extern void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot);
 
 #include <asm-generic/fixmap.h>
diff --git a/arch/arm64/include/asm/pci.h b/arch/arm64/include/asm/pci.h
index 872ba93..c47baa4 100644
--- a/arch/arm64/include/asm/pci.h
+++ b/arch/arm64/include/asm/pci.h
@@ -24,6 +24,12 @@
  */
 #define PCI_DMA_BUS_IS_PHYS	(0)
 
+static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
+{
+	/* no legacy IRQ on arm64 */
+	return -ENODEV;
+}
+
 extern int isa_dma_bridge_buggy;
 
 #ifdef CONFIG_PCI
@@ -33,5 +39,65 @@ static inline int pci_proc_domain(struct pci_bus *bus)
 }
 #endif  /* CONFIG_PCI */
 
+struct acpi_device;
+
+struct pci_sysdata {
+	int		domain;		/* PCI domain */
+	int		node;		/* NUMA node */
+	struct acpi_device *companion;	/* ACPI companion device */
+	void		*iommu;		/* IOMMU private data */
+};
+
+
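+/*
+ * Config space here is only accessed with aligned 32-bit operations:
+ * sub-word reads and writes are emulated by shifting and masking
+ * within the containing 32-bit word.
+ */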
+static inline unsigned char mmio_config_readb(void __iomem *pos)
+{
+	int offset = (__force unsigned long)pos & 3;
+	int shift = offset * 8;
+
+	return readl(pos - offset) >> shift;
+}
+
+static inline unsigned short mmio_config_readw(void __iomem *pos)
+{
+	int offset = (__force unsigned long)pos & 3;
+	int shift = offset * 8;
+
+	return readl(pos - offset) >> shift;
+}
+
+static inline unsigned int mmio_config_readl(void __iomem *pos)
+{
+	return readl(pos);
+}
+
+static inline void mmio_config_writeb(void __iomem *pos, u8 val)
+{
+	int offset = (__force unsigned long)pos & 3;
+	int shift = offset * 8;
+	int mask = ~(0xff << shift);
+	u32 v;
+
+	pos -= offset;
+	v = readl(pos) & mask;
+	writel(v | (val << shift), pos);
+}
+
+static inline void mmio_config_writew(void __iomem *pos, u16 val)
+{
+	int offset = (__force unsigned long)pos & 3;
+	int shift = offset * 8;
+	int mask = ~(0xffff << shift);
+	u32 v;
+
+	pos -= offset;
+	v = readl(pos) & mask;
+	writel(v | (val << shift), pos);
+}
+
+static inline void mmio_config_writel(void __iomem *pos, u32 val)
+{
+	writel(val, pos);
+}
+
 #endif  /* __KERNEL__ */
 #endif  /* __ASM_PCI_H */
diff --git a/arch/arm64/include/asm/psci.h b/arch/arm64/include/asm/psci.h
index e5312ea..2454bc5 100644
--- a/arch/arm64/include/asm/psci.h
+++ b/arch/arm64/include/asm/psci.h
@@ -14,6 +14,7 @@
 #ifndef __ASM_PSCI_H
 #define __ASM_PSCI_H
 
-int psci_init(void);
+int psci_dt_init(void);
+int psci_acpi_init(void);
 
 #endif /* __ASM_PSCI_H */
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 780f82c..3411561 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -39,9 +39,10 @@ extern void show_ipi_list(struct seq_file *p, int prec);
 extern void handle_IPI(int ipinr, struct pt_regs *regs);
 
 /*
- * Setup the set of possible CPUs (via set_cpu_possible)
+ * Discover the set of possible CPUs and determine their
+ * SMP operations.
  */
-extern void smp_init_cpus(void);
+extern void of_smp_init_cpus(void);
 
 /*
  * Provide a function to raise an IPI cross call on CPUs in callmap.
@@ -51,6 +52,11 @@ extern void set_smp_cross_call(void (*)(const struct cpumask *, unsigned int));
 extern void (*__smp_cross_call)(const struct cpumask *, unsigned int);
 
 /*
+ * Provide a function to signal a parked secondary CPU.
+ */
+extern void set_smp_boot_wakeup_call(void (*)(int cpu));
+
+/*
  * Called from the secondary holding pen, this is the secondary CPU entry point.
  */
 asmlinkage void secondary_start_kernel(void);
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index bef04af..f484339 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -23,7 +23,8 @@ arm64-obj-$(CONFIG_COMPAT)		+= sys32.o kuser32.o signal32.o 	\
 					   ../../arm/kernel/opcodes.o
 arm64-obj-$(CONFIG_FUNCTION_TRACER)	+= ftrace.o entry-ftrace.o
 arm64-obj-$(CONFIG_MODULES)		+= arm64ksyms.o module.o
-arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o topology.o
+arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o topology.o \
+					   smp_parking_protocol.o
 arm64-obj-$(CONFIG_PERF_EVENTS)		+= perf_regs.o
 arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
 arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
@@ -34,6 +35,7 @@ arm64-obj-$(CONFIG_KGDB)		+= kgdb.o
 arm64-obj-$(CONFIG_EFI)			+= efi.o efi-stub.o efi-entry.o
 arm64-obj-$(CONFIG_PCI)			+= pci.o
 arm64-obj-$(CONFIG_ARMV8_DEPRECATED)	+= armv8_deprecated.o
+arm64-obj-$(CONFIG_ACPI)		+= acpi.o
 
 obj-y					+= $(arm64-obj-y) vdso/
 obj-m					+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
new file mode 100644
index 0000000..56127e9
--- /dev/null
+++ b/arch/arm64/kernel/acpi.c
@@ -0,0 +1,397 @@
+/*
+ *  ARM64 Specific Low-Level ACPI Boot Support
+ *
+ *  Copyright (C) 2013-2014, Linaro Ltd.
+ *	Author: Al Stone <al.stone@linaro.org>
+ *	Author: Graeme Gregory <graeme.gregory@linaro.org>
+ *	Author: Hanjun Guo <hanjun.guo@linaro.org>
+ *	Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
+ *	Author: Naresh Bhat <naresh.bhat@linaro.org>
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License version 2 as
+ *  published by the Free Software Foundation.
+ */
+
+#define pr_fmt(fmt) "ACPI: " fmt
+
+#include <linux/acpi.h>
+#include <linux/bootmem.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
+#include <linux/memblock.h>
+#include <linux/smp.h>
+#include <linux/of.h>
+
+#include <asm/cputype.h>
+#include <asm/cpu_ops.h>
+#include <asm/smp_plat.h>
+
+int acpi_noirq;			/* skip ACPI IRQ initialization */
+int acpi_disabled;
+EXPORT_SYMBOL(acpi_disabled);
+
+int acpi_pci_disabled;		/* skip ACPI PCI scan and IRQ initialization */
+EXPORT_SYMBOL(acpi_pci_disabled);
+
+static int enabled_cpus;	/* Processors (GICC) with enabled flag in MADT */
+static int total_cpus;		/* Processors (GICC) listed in MADT, incl. disabled */
+
+static char *boot_method;
+static u64 parked_address[NR_CPUS];
+
+/*
+ * Since we're on ARM, the default interrupt routing model
+ * clearly has to be GIC.
+ */
+enum acpi_irq_model_id acpi_irq_model = ACPI_IRQ_MODEL_GIC;
+
+/*
+ * __acpi_map_table() will be called before paging_init(), so early_ioremap()
+ * or early_memremap() should be called here for ACPI table mapping.
+ */
+char *__init __acpi_map_table(unsigned long phys, unsigned long size)
+{
+	if (!phys || !size)
+		return NULL;
+
+	return early_memremap(phys, size);
+}
+
+void __init __acpi_unmap_table(char *map, unsigned long size)
+{
+	if (!map || !size)
+		return;
+
+	early_memunmap(map, size);
+}
+
+/**
+ * acpi_map_gic_cpu_interface - generate a logical cpu number
+ * and map it to the MPIDR represented by the GICC structure
+ * @mpidr: CPU's hardware id to register, MPIDR represented in MADT
+ * @parked_addr: parking protocol mailbox address for this CPU
+ * @enabled: whether this cpu is enabled or not
+ *
+ * Returns the logical cpu number which maps to the MPIDR
+ */
+static int __init acpi_map_gic_cpu_interface(u64 mpidr, u64 parked_addr, u8 enabled)
+{
+	int cpu;
+
+	if (mpidr == INVALID_HWID) {
+		pr_info("Skip MADT cpu entry with invalid MPIDR\n");
+		return -EINVAL;
+	}
+
+	total_cpus++;
+	if (!enabled)
+		return -EINVAL;
+
+	if (enabled_cpus >=  NR_CPUS) {
+		pr_warn("NR_CPUS limit of %d reached, Processor %d/0x%llx ignored.\n",
+			NR_CPUS, total_cpus, mpidr);
+		return -EINVAL;
+	}
+
+	/* No need to check duplicate MPIDRs for the first CPU */
+	if (enabled_cpus) {
+		/*
+		 * Duplicate MPIDRs are a recipe for disaster. Scan
+		 * all initialized entries and check for
+		 * duplicates. If any is found just ignore the CPU.
+		 */
+		for_each_possible_cpu(cpu) {
+			if (cpu_logical_map(cpu) == mpidr) {
+				pr_err("Firmware bug, duplicate CPU MPIDR: 0x%llx in MADT\n",
+				       mpidr);
+				return -EINVAL;
+			}
+		}
+
+		/* allocate a logical cpu id for the newcomer */
+		cpu = cpumask_next_zero(-1, cpu_possible_mask);
+	} else {
+		/*
+		 * The first GICC entry must be the BSP, as the ACPI
+		 * spec says in section 5.2.12.15.
+		 */
+		if  (cpu_logical_map(0) != mpidr) {
+			pr_err("First GICC entry with MPIDR 0x%llx is not BSP\n",
+			       mpidr);
+			return -EINVAL;
+		}
+
+		/*
+		 * boot_cpu_init() already holds bit 0 in cpu_possible_mask
+		 * for the BSP; no need to allocate it again.
+		 */
+		cpu = 0;
+	}
+
+	if (!boot_method)
+		return -EOPNOTSUPP;
+
+	parked_address[cpu] = parked_addr;
+	cpu_ops[cpu] = cpu_get_ops(boot_method);
+	/* CPU 0 was already initialized */
+	if (cpu) {
+		if (!cpu_ops[cpu])
+			return -EINVAL;
+
+		if (cpu_ops[cpu]->cpu_init(NULL, cpu))
+			return -EOPNOTSUPP;
+
+		/* map the logical cpu id to cpu MPIDR */
+		cpu_logical_map(cpu) = mpidr;
+
+		set_cpu_possible(cpu, true);
+	}
+
+	enabled_cpus++;
+	return cpu;
+}
+
+static int __init
+acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
+				const unsigned long end)
+{
+	struct acpi_madt_generic_interrupt *processor;
+
+	processor = (struct acpi_madt_generic_interrupt *)header;
+
+	if (BAD_MADT_ENTRY(processor, end))
+		return -EINVAL;
+
+	acpi_table_print_madt_entry(header);
+
+	acpi_map_gic_cpu_interface(processor->arm_mpidr & MPIDR_HWID_BITMASK,
+		processor->parked_address, processor->flags & ACPI_MADT_ENABLED);
+
+	return 0;
+}
+
+/* Parse GIC cpu interface entries in MADT for SMP init */
+void __init acpi_init_cpus(void)
+{
+	int count;
+
+	/*
+	 * do a partial walk of MADT to determine how many CPUs
+	 * we have, including disabled CPUs, and get the information
+	 * we need for SMP init.
+	 */
+	count = acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_INTERRUPT,
+			acpi_parse_gic_cpu_interface, 0);
+
+	if (!count) {
+		pr_err("No GIC CPU interface entries present\n");
+		return;
+	} else if (count < 0) {
+		pr_err("Error parsing GIC CPU interface entry\n");
+		return;
+	}
+
+	/* Make boot-up look pretty */
+	pr_info("%d CPUs enabled, %d CPUs total\n", enabled_cpus, total_cpus);
+}
+
+static struct irq_domain *acpi_irq_domain;
+
+int acpi_gsi_to_irq(u32 gsi, unsigned int *irq)
+{
+	*irq = irq_find_mapping(acpi_irq_domain, gsi);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(acpi_gsi_to_irq);
+
+
+/*
+ * success: return IRQ number (>0)
+ * failure: return <= 0
+ */
+int acpi_register_gsi(struct device *dev, u32 gsi, int trigger, int polarity)
+{
+	int irq;
+	unsigned int irq_type;
+	struct of_phandle_args args;
+	struct irq_data *d;
+
+	/*
+	 * ACPI has no bindings to indicate SPI or PPI, so we
+	 * use mappings different from those used with DT.
+	 *
+	 * For FDT
+	 * PPI interrupt: in the range [0, 15];
+	 * SPI interrupt: in the range [0, 987];
+	 *
+	 * For ACPI, GSIs are unique, so we use
+	 * the hwirq directly for the mapping:
+	 * PPI interrupt: in the range [16, 31];
+	 * SPI interrupt: in the range [32, 1019];
+	 */
+
+	if (trigger == ACPI_EDGE_SENSITIVE &&
+				polarity == ACPI_ACTIVE_LOW)
+		irq_type = IRQ_TYPE_EDGE_FALLING;
+	else if (trigger == ACPI_EDGE_SENSITIVE &&
+				polarity == ACPI_ACTIVE_HIGH)
+		irq_type = IRQ_TYPE_EDGE_RISING;
+	else if (trigger == ACPI_LEVEL_SENSITIVE &&
+				polarity == ACPI_ACTIVE_LOW)
+		irq_type = IRQ_TYPE_LEVEL_LOW;
+	else if (trigger == ACPI_LEVEL_SENSITIVE &&
+				polarity == ACPI_ACTIVE_HIGH)
+		irq_type = IRQ_TYPE_LEVEL_HIGH;
+	else
+		irq_type = IRQ_TYPE_NONE;
+
+	BUG_ON(!acpi_irq_domain);
+
+	args.np = acpi_irq_domain->of_node;
+	args.args_count = 3;
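+	/* Assumes an SPI: the GIC xlate takes { 0 (SPI), hwirq - 32, flags } */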
+	args.args[0] = 0;
+	args.args[1] = gsi - 32;
+	args.args[2] = irq_type;
+
+	irq = __irq_domain_alloc_irqs(acpi_irq_domain, -1, 1,
+				      dev_to_node(dev), &args, false);
+	if (irq < 0)
+		return -ENOSPC;
+
+	d = irq_domain_get_irq_data(acpi_irq_domain, irq);
+	if (!d)
+		return -EFAULT;
+
+	d->chip->irq_set_type(d, irq_type);
+
+	return irq;
+}
+EXPORT_SYMBOL_GPL(acpi_register_gsi);
+
+void acpi_unregister_gsi(u32 gsi)
+{
+}
+EXPORT_SYMBOL_GPL(acpi_unregister_gsi);
+
+static int __init acpi_parse_fadt(struct acpi_table_header *table)
+{
+	struct acpi_table_fadt *fadt = (struct acpi_table_fadt *)table;
+
+	/*
+	 * The revision in the table header is the FADT major revision,
+	 * and ACPI 5.1 introduced a minor revision of the FADT. We only
+	 * deal with ACPI 5.1 or newer revisions to get GIC and SMP boot
+	 * protocol configuration data; otherwise ACPI is disabled.
+	 */
+	if (table->revision > 5 ||
+	    (table->revision == 5 && fadt->minor_revision >= 1)) {
+		/*
+		 * ACPI 5.1 has only two explicit methods to boot SMP:
+		 * PSCI and the parking protocol. The parking protocol is
+		 * currently only specified for ARMv7, so prefer PSCI and
+		 * fall back to the parking protocol only when it has been
+		 * explicitly enabled.
+		 */
+		if (acpi_psci_present())
+			boot_method = "psci";
+		else if (IS_ENABLED(CONFIG_ARM_PARKING_PROTOCOL))
+			boot_method = "parking-protocol";
+
+		if (!boot_method) {
+			pr_warn("No PSCI support, will not bring up secondary CPUs\n");
+			return -EOPNOTSUPP;
+		}
+		return 0;
+	}
+
+	pr_warn("Unsupported FADT revision %d.%d, should be 5.1+, will disable ACPI\n",
+		table->revision, fadt->minor_revision);
+	disable_acpi();
+
+	return -EINVAL;
+}
+
+/*
+ * acpi_boot_table_init() called from setup_arch(), always.
+ *	1. find RSDP and get its address, and then find XSDT
+ *	2. extract all tables and checksum them all
+ *	3. check ACPI FADT revision
+ *
+ * We can parse ACPI boot-time tables such as MADT after
+ * this function is called.
+ */
+void __init acpi_boot_table_init(void)
+{
+	/* If acpi_disabled, bail out */
+	if (acpi_disabled)
+		return;
+
+	/* Initialize the ACPI boot-time table parser. */
+	if (acpi_table_init()) {
+		disable_acpi();
+		return;
+	}
+
+	if (acpi_table_parse(ACPI_SIG_FADT, acpi_parse_fadt)) {
+		/* disable ACPI if no FADT is found */
+		disable_acpi();
+		pr_err("Can't find FADT\n");
+	}
+}
+
+void __init acpi_gic_init(void)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	acpi_size tbl_size;
+	int err;
+
+	if (acpi_disabled)
+		return;
+
+	status = acpi_get_table_with_size(ACPI_SIG_MADT, 0, &table, &tbl_size);
+	if (ACPI_FAILURE(status)) {
+		const char *msg = acpi_format_exception(status);
+
+		pr_err("Failed to get MADT table, %s\n", msg);
+		return;
+	}
+
+	err = gic_v2_acpi_init(table, &acpi_irq_domain);
+	if (err || !acpi_irq_domain)
+		pr_err("Failed to initialize GIC IRQ controller");
+
+	early_acpi_os_unmap_memory((char *)table, tbl_size);
+}
+
+/*
+ * Parked Address in ACPI GIC structure will be used as the CPU
+ * release address
+ */
+int acpi_get_cpu_parked_address(int cpu, u64 *addr)
+{
+	if (!addr || !parked_address[cpu])
+		return -EINVAL;
+
+	*addr = parked_address[cpu];
+
+	return 0;
+}
+
+static int __init parse_acpi(char *arg)
+{
+	if (!arg)
+		return -EINVAL;
+
+	/* "acpi=off" disables both ACPI table parsing and interpreter */
+	if (strcmp(arg, "off") == 0)
+		disable_acpi();
+	else if (strcmp(arg, "force") == 0) /* force ACPI to be enabled */
+		enable_acpi();
+	else
+		return -EINVAL;	/* Core will print when we return error */
+
+	return 0;
+}
+early_param("acpi", parse_acpi);
diff --git a/arch/arm64/kernel/cpu_ops.c b/arch/arm64/kernel/cpu_ops.c
index cce9524..c50ca8f 100644
--- a/arch/arm64/kernel/cpu_ops.c
+++ b/arch/arm64/kernel/cpu_ops.c
@@ -23,6 +23,7 @@
 #include <linux/string.h>
 
 extern const struct cpu_operations smp_spin_table_ops;
+extern const struct cpu_operations smp_parking_protocol_ops;
 extern const struct cpu_operations cpu_psci_ops;
 
 const struct cpu_operations *cpu_ops[NR_CPUS];
@@ -30,12 +31,15 @@ const struct cpu_operations *cpu_ops[NR_CPUS];
 static const struct cpu_operations *supported_cpu_ops[] __initconst = {
 #ifdef CONFIG_SMP
 	&smp_spin_table_ops,
+#ifdef CONFIG_ARM_PARKING_PROTOCOL
+	&smp_parking_protocol_ops,
+#endif
 #endif
 	&cpu_psci_ops,
 	NULL,
 };
 
-static const struct cpu_operations * __init cpu_get_ops(const char *name)
+const struct cpu_operations * __init cpu_get_ops(const char *name)
 {
 	const struct cpu_operations **ops = supported_cpu_ops;
 
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index b42c7b4..a92be8e 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -354,3 +354,40 @@ void efi_virtmap_unload(void)
 	efi_set_pgd(current->active_mm);
 	preempt_enable();
 }
+
+/*
+ * If nothing else is handling pm_power_off, use EFI
+ *
+ * When Guenter Roeck's power-off handler call chain patches land,
+ * we just need to return true unconditionally.
+ */
+bool efi_poweroff_required(void)
+{
+	return pm_power_off == NULL;
+}
+
+static int arm64_efi_restart(struct notifier_block *this,
+			   unsigned long mode, void *cmd)
+{
+	efi_reboot(reboot_mode, cmd);
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block arm64_efi_restart_nb = {
+	.notifier_call = arm64_efi_restart,
+	.priority = INT_MAX,
+};
+
+static int __init arm64_register_efi_restart(void)
+{
+	int ret = 0;
+
+	if (efi_enabled(EFI_RUNTIME_SERVICES)) {
+		ret = register_restart_handler(&arm64_efi_restart_nb);
+		if (ret)
+			pr_err("%s: cannot register restart handler, %d\n",
+			       __func__, ret);
+	}
+	return ret;
+}
+late_initcall(arm64_register_efi_restart);
diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c
index 6f93c24..c870fa4 100644
--- a/arch/arm64/kernel/pci.c
+++ b/arch/arm64/kernel/pci.c
@@ -10,14 +10,17 @@
  *
  */
 
+#include <linux/acpi.h>
 #include <linux/init.h>
 #include <linux/io.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/of_pci.h>
 #include <linux/of_platform.h>
+#include <linux/of_address.h>
 #include <linux/slab.h>
-
+#include <linux/pci-acpi.h>
+#include <linux/mmconfig.h>
 #include <asm/pci-bridge.h>
 
 /*
@@ -37,12 +40,429 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
 	return res->start;
 }
 
+int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
+{
+	struct pci_sysdata *sd;
+
+	if (!acpi_disabled) {
+		sd = bridge->bus->sysdata;
+		ACPI_COMPANION_SET(&bridge->dev, sd->companion);
+	}
+	return 0;
+}
+
 /*
  * Try to assign the IRQ number from DT when adding a new device
  */
 int pcibios_add_device(struct pci_dev *dev)
 {
-	dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+	if (acpi_disabled)
+		dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+
+	return 0;
+}
+
+void pcibios_add_bus(struct pci_bus *bus)
+{
+	if (!acpi_disabled)
+		acpi_pci_add_bus(bus);
+}
+
+void pcibios_remove_bus(struct pci_bus *bus)
+{
+	if (!acpi_disabled)
+		acpi_pci_remove_bus(bus);
+}
 
+int pcibios_enable_irq(struct pci_dev *dev)
+{
+	if (!acpi_disabled && !pci_dev_msi_enabled(dev))
+		acpi_pci_irq_enable(dev);
 	return 0;
 }
+
+int pcibios_disable_irq(struct pci_dev *dev)
+{
+	if (!acpi_disabled && !pci_dev_msi_enabled(dev))
+		acpi_pci_irq_disable(dev);
+	return 0;
+}
+
+int pcibios_enable_device(struct pci_dev *dev, int bars)
+{
+	int err;
+
+	err = pci_enable_resources(dev, bars);
+	if (err < 0)
+		return err;
+
+	if (!pci_dev_msi_enabled(dev))
+		return pcibios_enable_irq(dev);
+	return 0;
+}
+
+static int __init pcibios_assign_resources(void)
+{
+	struct pci_bus *root_bus;
+
+	if (acpi_disabled)
+		return 0;
+
+	list_for_each_entry(root_bus, &pci_root_buses, node) {
+		pcibios_resource_survey_bus(root_bus);
+		pci_assign_unassigned_root_bus_resources(root_bus);
+	}
+	return 0;
+}
+
+/*
+ * fs_initcall comes after subsys_initcall, so we know acpi scan
+ * has run.
+ */
+fs_initcall(pcibios_assign_resources);
+
+#ifdef CONFIG_ACPI
+
+static int pci_read(struct pci_bus *bus, unsigned int devfn, int where,
+		    int size, u32 *value)
+{
+	return raw_pci_read(pci_domain_nr(bus), bus->number,
+			    devfn, where, size, value);
+}
+
+static int pci_write(struct pci_bus *bus, unsigned int devfn, int where,
+		     int size, u32 value)
+{
+	return raw_pci_write(pci_domain_nr(bus), bus->number,
+			     devfn, where, size, value);
+}
+
+struct pci_ops pci_root_ops = {
+	.read = pci_read,
+	.write = pci_write,
+};
+
+struct pci_root_info {
+	struct acpi_device *bridge;
+	char name[16];
+	unsigned int res_num;
+	struct resource *res;
+	resource_size_t *res_offset;
+	struct pci_sysdata sd;
+	u16 segment;
+	u8 start_bus;
+	u8 end_bus;
+};
+
+static acpi_status resource_to_addr(struct acpi_resource *resource,
+				    struct acpi_resource_address64 *addr)
+{
+	acpi_status status;
+
+	memset(addr, 0, sizeof(*addr));
+	switch (resource->type) {
+	case ACPI_RESOURCE_TYPE_ADDRESS16:
+	case ACPI_RESOURCE_TYPE_ADDRESS32:
+	case ACPI_RESOURCE_TYPE_ADDRESS64:
+		status = acpi_resource_to_address64(resource, addr);
+		if (ACPI_SUCCESS(status) &&
+		    (addr->resource_type == ACPI_MEMORY_RANGE ||
+		    addr->resource_type == ACPI_IO_RANGE) &&
+		    addr->address.address_length > 0) {
+			return AE_OK;
+		}
+		break;
+	}
+	return AE_ERROR;
+}
+
+static acpi_status count_resource(struct acpi_resource *acpi_res, void *data)
+{
+	struct pci_root_info *info = data;
+	struct acpi_resource_address64 addr;
+	acpi_status status;
+
+	status = resource_to_addr(acpi_res, &addr);
+	if (ACPI_SUCCESS(status))
+		info->res_num++;
+	return AE_OK;
+}
+
+static acpi_status setup_resource(struct acpi_resource *acpi_res, void *data)
+{
+	struct pci_root_info *info = data;
+	struct resource *res;
+	struct acpi_resource_address64 addr;
+	acpi_status status;
+	unsigned long flags;
+	u64 start, end;
+
+	status = resource_to_addr(acpi_res, &addr);
+	if (!ACPI_SUCCESS(status))
+		return AE_OK;
+
+	if (addr.resource_type == ACPI_MEMORY_RANGE) {
+		flags = IORESOURCE_MEM;
+		if (addr.info.mem.caching == ACPI_PREFETCHABLE_MEMORY)
+			flags |= IORESOURCE_PREFETCH;
+	} else if (addr.resource_type == ACPI_IO_RANGE) {
+		flags = IORESOURCE_IO;
+	} else
+		return AE_OK;
+
+	start = addr.address.minimum + addr.address.translation_offset;
+	end = addr.address.maximum + addr.address.translation_offset;
+
+	res = &info->res[info->res_num];
+	res->name = info->name;
+	res->flags = flags;
+	res->start = start;
+	res->end = end;
+
+	if (flags & IORESOURCE_IO) {
+		unsigned long port;
+		int err;
+
+		err = pci_register_io_range(start, addr.address.address_length);
+		if (err)
+			return AE_OK;
+
+		port = pci_address_to_pio(start);
+		if (port == (unsigned long)-1) {
+			res->start = -1;
+			res->end = -1;
+			return AE_OK;
+		}
+
+		res->start = port;
+		res->end = res->start + addr.address.address_length - 1;
+
+		if (pci_remap_iospace(res, start) < 0)
+			return AE_OK;
+
+		info->res_offset[info->res_num] = port - addr.address.minimum;
+	} else
+		info->res_offset[info->res_num] = addr.address.translation_offset;
+
+	info->res_num++;
+
+	return AE_OK;
+}
+
+static void coalesce_windows(struct pci_root_info *info, unsigned long type)
+{
+	int i, j;
+	struct resource *res1, *res2;
+
+	for (i = 0; i < info->res_num; i++) {
+		res1 = &info->res[i];
+		if (!(res1->flags & type))
+			continue;
+
+		for (j = i + 1; j < info->res_num; j++) {
+			res2 = &info->res[j];
+			if (!(res2->flags & type))
+				continue;
+
+			/*
+			 * I don't like throwing away windows because then
+			 * our resources no longer match the ACPI _CRS, but
+			 * the kernel resource tree doesn't allow overlaps.
+			 */
+			if (resource_overlaps(res1, res2)) {
+				res2->start = min(res1->start, res2->start);
+				res2->end = max(res1->end, res2->end);
+				dev_info(&info->bridge->dev,
+					 "host bridge window expanded to %pR; %pR ignored\n",
+					 res2, res1);
+				res1->flags = 0;
+			}
+		}
+	}
+}
+
+static void add_resources(struct pci_root_info *info,
+			  struct list_head *resources)
+{
+	int i;
+	struct resource *res, *root, *conflict;
+
+	coalesce_windows(info, IORESOURCE_MEM);
+	coalesce_windows(info, IORESOURCE_IO);
+
+	for (i = 0; i < info->res_num; i++) {
+		res = &info->res[i];
+
+		if (res->flags & IORESOURCE_MEM)
+			root = &iomem_resource;
+		else if (res->flags & IORESOURCE_IO)
+			root = &ioport_resource;
+		else
+			continue;
+
+		conflict = insert_resource_conflict(root, res);
+		if (conflict)
+			dev_info(&info->bridge->dev,
+				 "ignoring host bridge window %pR (conflicts with %s %pR)\n",
+				 res, conflict->name, conflict);
+		else
+			pci_add_resource_offset(resources, res,
+						info->res_offset[i]);
+	}
+}
+
+static void free_pci_root_info_res(struct pci_root_info *info)
+{
+	kfree(info->res);
+	info->res = NULL;
+	kfree(info->res_offset);
+	info->res_offset = NULL;
+	info->res_num = 0;
+}
+
+static void __release_pci_root_info(struct pci_root_info *info)
+{
+	int i;
+	struct resource *res;
+
+	for (i = 0; i < info->res_num; i++) {
+		res = &info->res[i];
+
+		if (!res->parent)
+			continue;
+
+		if (!(res->flags & (IORESOURCE_MEM | IORESOURCE_IO)))
+			continue;
+
+		release_resource(res);
+	}
+
+	free_pci_root_info_res(info);
+
+	kfree(info);
+}
+
+static void release_pci_root_info(struct pci_host_bridge *bridge)
+{
+	struct pci_root_info *info = bridge->release_data;
+
+	__release_pci_root_info(info);
+}
+
+static void probe_pci_root_info(struct pci_root_info *info,
+				struct acpi_device *device,
+				int busnum, int domain)
+{
+	size_t size;
+
+	sprintf(info->name, "PCI Bus %04x:%02x", domain, busnum);
+	info->bridge = device;
+
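+	/* Two passes over _CRS: first count the windows, then fill them in */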
+	info->res_num = 0;
+	acpi_walk_resources(device->handle, METHOD_NAME__CRS, count_resource,
+				info);
+	if (!info->res_num)
+		return;
+
+	size = sizeof(*info->res) * info->res_num;
+	info->res = kzalloc_node(size, GFP_KERNEL, info->sd.node);
+	if (!info->res) {
+		info->res_num = 0;
+		return;
+	}
+
+	size = sizeof(*info->res_offset) * info->res_num;
+	info->res_num = 0;
+	info->res_offset = kzalloc_node(size, GFP_KERNEL, info->sd.node);
+	if (!info->res_offset) {
+		kfree(info->res);
+		info->res = NULL;
+		return;
+	}
+
+	acpi_walk_resources(device->handle, METHOD_NAME__CRS, setup_resource,
+				info);
+}
+
+/* Root bridge scanning */
+struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+{
+	struct acpi_device *device = root->device;
+	struct pci_mmcfg_region *mcfg;
+	struct pci_root_info *info;
+	int domain = root->segment;
+	int busnum = root->secondary.start;
+	LIST_HEAD(resources);
+	struct pci_bus *bus;
+	struct pci_sysdata *sd;
+	int node;
+
+	/* we need mmconfig */
+	mcfg = pci_mmconfig_lookup(domain, busnum);
+	if (!mcfg) {
+		pr_err("pci_bus %04x:%02x has no MCFG table\n",
+		       domain, busnum);
+		return NULL;
+	}
+
+	/* temporary hack */
+	if (mcfg->fixup)
+		(*mcfg->fixup)(root, mcfg);
+
+	if (domain && !pci_domains_supported) {
+		pr_warn("PCI %04x:%02x: multiple domains not supported.\n",
+			domain, busnum);
+		return NULL;
+	}
+
+	node = NUMA_NO_NODE;
+
+	info = kzalloc_node(sizeof(*info), GFP_KERNEL, node);
+	if (!info) {
+		pr_warn("PCI %04x:%02x: ignored (out of memory)\n",
+			domain, busnum);
+		return NULL;
+	}
+	info->segment = domain;
+	info->start_bus = busnum;
+	info->end_bus = root->secondary.end;
+
+	sd = &info->sd;
+	sd->domain = domain;
+	sd->node = node;
+	sd->companion = device;
+
+	probe_pci_root_info(info, device, busnum, domain);
+
+	/* insert the bus number resource first */
+	pci_add_resource(&resources, &root->secondary);
+
+	/* then _CRS resources */
+	add_resources(info, &resources);
+
+	bus = pci_create_root_bus(NULL, busnum, &pci_root_ops, sd, &resources);
+	if (bus) {
+		pci_scan_child_bus(bus);
+		pci_set_host_bridge_release(to_pci_host_bridge(bus->bridge),
+					    release_pci_root_info, info);
+	} else {
+		pci_free_resource_list(&resources);
+		__release_pci_root_info(info);
+	}
+
+	/* After the PCI-E bus has been walked and all devices discovered,
+	 * configure any settings of the fabric that might be necessary.
+	 */
+	if (bus) {
+		struct pci_bus *child;
+
+		list_for_each_entry(child, &bus->children, node)
+			pcie_bus_configure_settings(child);
+	}
+
+	if (bus && node != NUMA_NO_NODE)
+		dev_printk(KERN_DEBUG, &bus->dev, "on NUMA node %d\n", node);
+
+	return bus;
+}
+#endif
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 25a5308..1e4fd17 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -29,6 +29,7 @@
 #include <linux/platform_device.h>
 #include <linux/spinlock.h>
 #include <linux/uaccess.h>
+#include <linux/acpi.h>
 
 #include <asm/cputype.h>
 #include <asm/irq.h>
@@ -1310,6 +1311,107 @@ static int __init register_pmu_driver(void)
 }
 device_initcall(register_pmu_driver);
 
+#ifdef CONFIG_ACPI
+struct acpi_pmu_irq {
+	int gsi;
+	int trigger;
+};
+
+static struct acpi_pmu_irq acpi_pmu_irqs[NR_CPUS] __initdata;
+
+static int __init
+acpi_parse_pmu_irqs(struct acpi_subtable_header *header,
+		    const unsigned long end)
+{
+	struct acpi_madt_generic_interrupt *gic;
+	int cpu;
+	u64 mpidr;
+
+	gic = (struct acpi_madt_generic_interrupt *)header;
+	if (BAD_MADT_ENTRY(gic, end))
+		return -EINVAL;
+
+	mpidr = gic->arm_mpidr & MPIDR_HWID_BITMASK;
+
+	for_each_possible_cpu(cpu) {
+		if (cpu_logical_map(cpu) != mpidr)
+			continue;
+
+		acpi_pmu_irqs[cpu].gsi = gic->performance_interrupt;
+		if (gic->flags & ACPI_MADT_PERFORMANCE_IRQ_MODE)
+			acpi_pmu_irqs[cpu].trigger = ACPI_EDGE_SENSITIVE;
+		else
+			acpi_pmu_irqs[cpu].trigger = ACPI_LEVEL_SENSITIVE;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static int __init pmu_acpi_init(void)
+{
+	struct platform_device *pdev;
+	struct acpi_pmu_irq *pirq = acpi_pmu_irqs;
+	struct resource	*res, *r;
+	int err = -ENOMEM;
+	int i, count, irq;
+
+	if (acpi_disabled)
+		return 0;
+
+	count = acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_INTERRUPT,
+				      acpi_parse_pmu_irqs, num_possible_cpus());
+	/* Must have an irq for the boot cpu, at least */
+	if (count <= 0 || pirq->gsi == 0)
+		return -EINVAL;
+
+	irq = acpi_register_gsi(NULL, pirq->gsi, pirq->trigger,
+				ACPI_ACTIVE_HIGH);
+
+	/* acpi_register_gsi() returns a negative errno on failure */
+	if (irq < 0)
+		return irq;
+
+	/* A per-cpu PPI is a single interrupt shared by all CPUs */
+	if (irq_is_percpu(irq))
+		count = 1;
+
+	pdev = platform_device_alloc("arm-pmu", -1);
+	if (!pdev)
+		return err;
+
+	res = kcalloc(count, sizeof(*res), GFP_KERNEL);
+	if (!res)
+		goto err_free_device;
+
+	for (i = 0, r = res; i < count; i++, pirq++, r++) {
+		if (i)
+			irq = acpi_register_gsi(NULL, pirq->gsi, pirq->trigger,
+						ACPI_ACTIVE_HIGH);
+		r->start = r->end = irq;
+		r->flags = IORESOURCE_IRQ;
+		if (pirq->trigger == ACPI_EDGE_SENSITIVE)
+			r->flags |= IORESOURCE_IRQ_HIGHEDGE;
+		else
+			r->flags |= IORESOURCE_IRQ_HIGHLEVEL;
+	}
+
+	err = platform_device_add_resources(pdev, res, count);
+	if (err)
+		goto err_free_res;
+
+	err = platform_device_add(pdev);
+	if (err)
+		goto err_free_res;
+
+	return 0;
+
+err_free_res:
+	kfree(res);
+
+err_free_device:
+	platform_device_put(pdev);
+	return err;
+}
+arch_initcall(pmu_acpi_init);
+
+#endif /* CONFIG_ACPI */
+
 static struct pmu_hw_events *armpmu_get_cpu_events(void)
 {
 	return this_cpu_ptr(&cpu_hw_events);
diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index 3425f31..bab2bea 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -15,6 +15,7 @@
 
 #define pr_fmt(fmt) "psci: " fmt
 
+#include <linux/acpi.h>
 #include <linux/init.h>
 #include <linux/of.h>
 #include <linux/smp.h>
@@ -24,6 +25,7 @@
 #include <linux/slab.h>
 #include <uapi/linux/psci.h>
 
+#include <asm/acpi.h>
 #include <asm/compiler.h>
 #include <asm/cpu_ops.h>
 #include <asm/errno.h>
@@ -304,6 +306,33 @@ static void psci_sys_poweroff(void)
 	invoke_psci_fn(PSCI_0_2_FN_SYSTEM_OFF, 0, 0, 0);
 }
 
+static void __init psci_0_2_set_functions(void)
+{
+	pr_info("Using standard PSCI v0.2 function IDs\n");
+	psci_function_id[PSCI_FN_CPU_SUSPEND] = PSCI_0_2_FN64_CPU_SUSPEND;
+	psci_ops.cpu_suspend = psci_cpu_suspend;
+
+	psci_function_id[PSCI_FN_CPU_OFF] = PSCI_0_2_FN_CPU_OFF;
+	psci_ops.cpu_off = psci_cpu_off;
+
+	psci_function_id[PSCI_FN_CPU_ON] = PSCI_0_2_FN64_CPU_ON;
+	psci_ops.cpu_on = psci_cpu_on;
+
+	psci_function_id[PSCI_FN_MIGRATE] = PSCI_0_2_FN64_MIGRATE;
+	psci_ops.migrate = psci_migrate;
+
+	psci_function_id[PSCI_FN_AFFINITY_INFO] = PSCI_0_2_FN64_AFFINITY_INFO;
+	psci_ops.affinity_info = psci_affinity_info;
+
+	psci_function_id[PSCI_FN_MIGRATE_INFO_TYPE] =
+		PSCI_0_2_FN_MIGRATE_INFO_TYPE;
+	psci_ops.migrate_info_type = psci_migrate_info_type;
+
+	arm_pm_restart = psci_sys_reset;
+
+	pm_power_off = psci_sys_poweroff;
+}
+
 /*
  * PSCI Function IDs for v0.2+ are well defined so use
  * standard values.
@@ -337,29 +366,7 @@ static int __init psci_0_2_init(struct device_node *np)
 		}
 	}
 
-	pr_info("Using standard PSCI v0.2 function IDs\n");
-	psci_function_id[PSCI_FN_CPU_SUSPEND] = PSCI_0_2_FN64_CPU_SUSPEND;
-	psci_ops.cpu_suspend = psci_cpu_suspend;
-
-	psci_function_id[PSCI_FN_CPU_OFF] = PSCI_0_2_FN_CPU_OFF;
-	psci_ops.cpu_off = psci_cpu_off;
-
-	psci_function_id[PSCI_FN_CPU_ON] = PSCI_0_2_FN64_CPU_ON;
-	psci_ops.cpu_on = psci_cpu_on;
-
-	psci_function_id[PSCI_FN_MIGRATE] = PSCI_0_2_FN64_MIGRATE;
-	psci_ops.migrate = psci_migrate;
-
-	psci_function_id[PSCI_FN_AFFINITY_INFO] = PSCI_0_2_FN64_AFFINITY_INFO;
-	psci_ops.affinity_info = psci_affinity_info;
-
-	psci_function_id[PSCI_FN_MIGRATE_INFO_TYPE] =
-		PSCI_0_2_FN_MIGRATE_INFO_TYPE;
-	psci_ops.migrate_info_type = psci_migrate_info_type;
-
-	arm_pm_restart = psci_sys_reset;
-
-	pm_power_off = psci_sys_poweroff;
+	psci_0_2_set_functions();
 
 out_put_node:
 	of_node_put(np);
@@ -412,7 +419,7 @@ static const struct of_device_id psci_of_match[] __initconst = {
 	{},
 };
 
-int __init psci_init(void)
+int __init psci_dt_init(void)
 {
 	struct device_node *np;
 	const struct of_device_id *matched_np;
@@ -427,6 +434,29 @@ int __init psci_init(void)
 	return init_fn(np);
 }
 
+/*
+ * PSCI 0.2+ is used when ACPI is deployed on ARM64, as is
+ * explicitly required by the SBBR.
+ */
+int __init psci_acpi_init(void)
+{
+	if (!acpi_psci_present()) {
+		pr_info("is not implemented in ACPI.\n");
+		return -EOPNOTSUPP;
+	}
+
+	pr_info("probing for conduit method from ACPI.\n");
+
+	if (acpi_psci_use_hvc())
+		invoke_psci_fn = __invoke_psci_fn_hvc;
+	else
+		invoke_psci_fn = __invoke_psci_fn_smc;
+
+	psci_0_2_set_functions();
+
+	return 0;
+}
+
 #ifdef CONFIG_SMP
 
 static int __init cpu_psci_cpu_init(struct device_node *dn, unsigned int cpu)
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index e8420f6..0029b7a 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -17,6 +17,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/acpi.h>
 #include <linux/export.h>
 #include <linux/kernel.h>
 #include <linux/stddef.h>
@@ -62,6 +63,7 @@
 #include <asm/memblock.h>
 #include <asm/psci.h>
 #include <asm/efi.h>
+#include <asm/acpi.h>
 
 unsigned int processor_id;
 EXPORT_SYMBOL(processor_id);
@@ -351,6 +353,29 @@ static void __init request_standard_resources(void)
 	}
 }
 
+static int __init dt_scan_chosen(unsigned long node, const char *uname,
+			int depth, void *data)
+{
+	const char *p;
+
+	if (depth != 1 || !data || (strcmp(uname, "chosen") != 0))
+		return 0;
+
+	p = of_get_flat_dt_prop(node, "linux,uefi-stub-generated-dtb", NULL);
+	*(bool *)data = p ? true : false;
+
+	return 1;
+}
+
+static bool __init is_uefi_stub_generated_dtb(void)
+{
+	bool flag = false;
+
+	of_scan_flat_dt(dt_scan_chosen, &flag);
+
+	return flag;
+}
+
 u64 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
 
 void __init setup_arch(char **cmdline_p)
@@ -369,9 +394,23 @@ void __init setup_arch(char **cmdline_p)
 	early_fixmap_init();
 	early_ioremap_init();
 
+	/*
+	 * Disable ACPI before the early parameters are parsed;
+	 * it will be re-enabled from parse_early_param() if
+	 * "acpi=force" is passed.
+	 */
+	disable_acpi();
+
 	parse_early_param();
 
 	/*
+	 * If the firmware provided no dtb (only a stub-generated one),
+	 * enable ACPI and give the system a chance to boot with ACPI
+	 * configuration data.
+	 */
+	if (is_uefi_stub_generated_dtb() && acpi_disabled)
+		enable_acpi();
+
+	/*
 	 *  Unmask asynchronous aborts after bringing up possible earlycon.
 	 * (Report possible System Errors once we can report this occurred)
 	 */
@@ -380,18 +419,27 @@ void __init setup_arch(char **cmdline_p)
 	efi_init();
 	arm64_memblock_init();
 
+	/* Parse the ACPI tables for possible boot-time configuration */
+	acpi_boot_table_init();
+
 	paging_init();
 	request_standard_resources();
 
 	early_ioremap_reset();
 
-	unflatten_device_tree();
-
-	psci_init();
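+	/* CPU discovery and PSCI setup come from exactly one of DT or ACPI */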
+	if (acpi_disabled) {
+		unflatten_device_tree();
+		psci_dt_init();
+		cpu_read_bootcpu_ops();
+#ifdef CONFIG_SMP
+		of_smp_init_cpus();
+#endif
+	} else {
+		psci_acpi_init();
+		acpi_init_cpus();
+	}
 
-	cpu_read_bootcpu_ops();
 #ifdef CONFIG_SMP
-	smp_init_cpus();
 	smp_build_mpidr_hash();
 #endif
 
@@ -547,3 +595,25 @@ const struct seq_operations cpuinfo_op = {
 	.stop	= c_stop,
 	.show	= c_show
 };
+
+/*
+ * Temporary hack to avoid need for console= on command line
+ */
+static int __init arm64_console_setup(void)
+{
+	/* Allow cmdline to override our assumed preferences */
+	if (console_set_on_cmdline)
+		return 0;
+
+	if (IS_ENABLED(CONFIG_SBSAUART_TTY))
+		add_preferred_console("ttySBSA", 0, "115200");
+
+	if (IS_ENABLED(CONFIG_SERIAL_AMBA_PL011))
+		add_preferred_console("ttyAMA", 0, "115200");
+
+	if (IS_ENABLED(CONFIG_SERIAL_8250))
+		add_preferred_console("ttyS", 0, "115200");
+
+	return 0;
+}
+early_initcall(arm64_console_setup);
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 328b8ce..52998b7 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -322,7 +322,7 @@ void __init smp_prepare_boot_cpu(void)
  * cpu logical map array containing MPIDR values related to logical
  * cpus. Assumes that cpu_logical_map(0) has already been initialized.
  */
-void __init smp_init_cpus(void)
+void __init of_smp_init_cpus(void)
 {
 	struct device_node *dn = NULL;
 	unsigned int i, cpu = 1;
diff --git a/arch/arm64/kernel/smp_parking_protocol.c b/arch/arm64/kernel/smp_parking_protocol.c
new file mode 100644
index 0000000..e1153ce
--- /dev/null
+++ b/arch/arm64/kernel/smp_parking_protocol.c
@@ -0,0 +1,110 @@
+/*
+ * Parking Protocol SMP initialisation
+ *
+ * Based largely on spin-table method.
+ *
+ * Copyright (C) 2013 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/of.h>
+#include <linux/smp.h>
+#include <linux/types.h>
+#include <linux/acpi.h>
+
+#include <asm/cacheflush.h>
+#include <asm/cpu_ops.h>
+#include <asm/cputype.h>
+#include <asm/smp_plat.h>
+
+static phys_addr_t cpu_mailbox_addr[NR_CPUS];
+
+static void (*__smp_boot_wakeup)(int cpu);
+
+void set_smp_boot_wakeup_call(void (*fn)(int cpu))
+{
+	__smp_boot_wakeup = fn;
+}
+
+static int smp_parking_protocol_cpu_init(struct device_node *dn,
+					 unsigned int cpu)
+{
+	/*
+	 * Determine the mailbox address.
+	 */
+	if (!acpi_get_cpu_parked_address(cpu, &cpu_mailbox_addr[cpu])) {
+		pr_info("%s: ACPI parked addr=%llx\n",
+			__func__, cpu_mailbox_addr[cpu]);
+		return 0;
+	}
+
+	pr_err("CPU %d: missing or invalid parking protocol mailbox\n", cpu);
+
+	return -1;
+}
+
+static int smp_parking_protocol_cpu_prepare(unsigned int cpu)
+{
+	return 0;
+}
+
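+/*
+ * Firmware-owned mailbox: a parked CPU waits until its own cpu_id
+ * appears here, then jumps to entry_point (both written LE below).
+ */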
+struct parking_protocol_mailbox {
+	__le32 cpu_id;
+	__le32 reserved;
+	__le64 entry_point;
+};
+
+static int smp_parking_protocol_cpu_boot(unsigned int cpu)
+{
+	struct parking_protocol_mailbox __iomem *mailbox;
+
+	if (!cpu_mailbox_addr[cpu] || !__smp_boot_wakeup)
+		return -ENODEV;
+
+	/*
+	 * The mailbox may or may not be inside the linear mapping.
+	 * As ioremap_cache will either give us a new mapping or reuse the
+	 * existing linear mapping, we can use it to cover both cases. In
+	 * either case the memory will be MT_NORMAL.
+	 */
+	mailbox = ioremap_cache(cpu_mailbox_addr[cpu], sizeof(*mailbox));
+	if (!mailbox)
+		return -ENOMEM;
+
+	/*
+	 * We write the entry point and cpu id as LE regardless of the
+	 * native endianness of the kernel. Therefore, any boot-loaders
+	 * that read this address need to convert it to their own
+	 * endianness before jumping.
+	 */
+	writeq(__pa(secondary_entry), &mailbox->entry_point);
+	writel(cpu, &mailbox->cpu_id);
+	__flush_dcache_area(mailbox, sizeof(*mailbox));
+	__smp_boot_wakeup(cpu);
+
+	/* temp hack for broken firmware */
+	sev();
+
+	iounmap(mailbox);
+
+	return 0;
+}
+
+const struct cpu_operations smp_parking_protocol_ops = {
+	.name		= "parking-protocol",
+	.cpu_init	= smp_parking_protocol_cpu_init,
+	.cpu_prepare	= smp_parking_protocol_cpu_prepare,
+	.cpu_boot	= smp_parking_protocol_cpu_boot,
+};
diff --git a/arch/arm64/kernel/time.c b/arch/arm64/kernel/time.c
index 1a7125c..42f9195 100644
--- a/arch/arm64/kernel/time.c
+++ b/arch/arm64/kernel/time.c
@@ -35,6 +35,7 @@
 #include <linux/delay.h>
 #include <linux/clocksource.h>
 #include <linux/clk-provider.h>
+#include <linux/acpi.h>
 
 #include <clocksource/arm_arch_timer.h>
 
@@ -72,6 +73,12 @@ void __init time_init(void)
 
 	tick_setup_hrtimer_broadcast();
 
+	/*
+	 * Since only one of ACPI or FDT will be available in the system,
+	 * we can safely call acpi_generic_timer_init() here.
+	 */
+	acpi_generic_timer_init();
+
 	arch_timer_rate = arch_timer_get_rate();
 	if (!arch_timer_rate)
 		panic("Unable to initialise architected timer.\n");
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 0a24b9b..af90cdb 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -23,8 +23,14 @@
 #include <linux/genalloc.h>
 #include <linux/dma-mapping.h>
 #include <linux/dma-contiguous.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
 #include <linux/vmalloc.h>
 #include <linux/swiotlb.h>
+#include <linux/amba/bus.h>
+#include <linux/acpi.h>
+#include <linux/pci.h>
 
 #include <asm/cacheflush.h>
 
diff --git a/arch/x86/include/asm/pci.h b/arch/x86/include/asm/pci.h
index 4e370a5..f2b132b 100644
--- a/arch/x86/include/asm/pci.h
+++ b/arch/x86/include/asm/pci.h
@@ -71,6 +71,48 @@ void pcibios_set_master(struct pci_dev *dev);
 struct irq_routing_table *pcibios_get_irq_routing_table(void);
 int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq);
 
+/*
+ * AMD Fam10h CPUs are buggy, and cannot access MMIO config space
+ * on their northbridge except through the %eax register. As such, you MUST
+ * NOT use normal IOMEM accesses, you need to only use the magic mmio-config
+ * accessor functions.
+ * In fact just use pci_config_*, nothing else please.
+ */
+static inline unsigned char mmio_config_readb(void __iomem *pos)
+{
+	u8 val;
+	asm volatile("movb (%1),%%al" : "=a" (val) : "r" (pos));
+	return val;
+}
+
+static inline unsigned short mmio_config_readw(void __iomem *pos)
+{
+	u16 val;
+	asm volatile("movw (%1),%%ax" : "=a" (val) : "r" (pos));
+	return val;
+}
+
+static inline unsigned int mmio_config_readl(void __iomem *pos)
+{
+	u32 val;
+	asm volatile("movl (%1),%%eax" : "=a" (val) : "r" (pos));
+	return val;
+}
+
+static inline void mmio_config_writeb(void __iomem *pos, u8 val)
+{
+	asm volatile("movb %%al,(%1)" : : "a" (val), "r" (pos) : "memory");
+}
+
+static inline void mmio_config_writew(void __iomem *pos, u16 val)
+{
+	asm volatile("movw %%ax,(%1)" : : "a" (val), "r" (pos) : "memory");
+}
+
+static inline void mmio_config_writel(void __iomem *pos, u32 val)
+{
+	asm volatile("movl %%eax,(%1)" : : "a" (val), "r" (pos) : "memory");
+}
 
 #define HAVE_PCI_MMAP
 extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
diff --git a/arch/x86/include/asm/pci_x86.h b/arch/x86/include/asm/pci_x86.h
index fa1195d..42e7332 100644
--- a/arch/x86/include/asm/pci_x86.h
+++ b/arch/x86/include/asm/pci_x86.h
@@ -121,78 +121,6 @@ extern int __init pcibios_init(void);
 extern int pci_legacy_init(void);
 extern void pcibios_fixup_irqs(void);
 
-/* pci-mmconfig.c */
-
-/* "PCI MMCONFIG %04x [bus %02x-%02x]" */
-#define PCI_MMCFG_RESOURCE_NAME_LEN (22 + 4 + 2 + 2)
-
-struct pci_mmcfg_region {
-	struct list_head list;
-	struct resource res;
-	u64 address;
-	char __iomem *virt;
-	u16 segment;
-	u8 start_bus;
-	u8 end_bus;
-	char name[PCI_MMCFG_RESOURCE_NAME_LEN];
-};
-
-extern int __init pci_mmcfg_arch_init(void);
-extern void __init pci_mmcfg_arch_free(void);
-extern int pci_mmcfg_arch_map(struct pci_mmcfg_region *cfg);
-extern void pci_mmcfg_arch_unmap(struct pci_mmcfg_region *cfg);
-extern int pci_mmconfig_insert(struct device *dev, u16 seg, u8 start, u8 end,
-			       phys_addr_t addr);
-extern int pci_mmconfig_delete(u16 seg, u8 start, u8 end);
-extern struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, int bus);
-
-extern struct list_head pci_mmcfg_list;
-
-#define PCI_MMCFG_BUS_OFFSET(bus)      ((bus) << 20)
-
-/*
- * AMD Fam10h CPUs are buggy, and cannot access MMIO config space
- * on their northbrige except through the * %eax register. As such, you MUST
- * NOT use normal IOMEM accesses, you need to only use the magic mmio-config
- * accessor functions.
- * In fact just use pci_config_*, nothing else please.
- */
-static inline unsigned char mmio_config_readb(void __iomem *pos)
-{
-	u8 val;
-	asm volatile("movb (%1),%%al" : "=a" (val) : "r" (pos));
-	return val;
-}
-
-static inline unsigned short mmio_config_readw(void __iomem *pos)
-{
-	u16 val;
-	asm volatile("movw (%1),%%ax" : "=a" (val) : "r" (pos));
-	return val;
-}
-
-static inline unsigned int mmio_config_readl(void __iomem *pos)
-{
-	u32 val;
-	asm volatile("movl (%1),%%eax" : "=a" (val) : "r" (pos));
-	return val;
-}
-
-static inline void mmio_config_writeb(void __iomem *pos, u8 val)
-{
-	asm volatile("movb %%al,(%1)" : : "a" (val), "r" (pos) : "memory");
-}
-
-static inline void mmio_config_writew(void __iomem *pos, u16 val)
-{
-	asm volatile("movw %%ax,(%1)" : : "a" (val), "r" (pos) : "memory");
-}
-
-static inline void mmio_config_writel(void __iomem *pos, u32 val)
-{
-	asm volatile("movl %%eax,(%1)" : : "a" (val), "r" (pos) : "memory");
-}
-
 #ifdef CONFIG_PCI
 # ifdef CONFIG_ACPI
 #  define x86_default_pci_init		pci_acpi_init
diff --git a/arch/x86/pci/Makefile b/arch/x86/pci/Makefile
index 5c6fc35..35c765b 100644
--- a/arch/x86/pci/Makefile
+++ b/arch/x86/pci/Makefile
@@ -1,7 +1,10 @@
 obj-y				:= i386.o init.o
 
 obj-$(CONFIG_PCI_BIOS)		+= pcbios.o
-obj-$(CONFIG_PCI_MMCONFIG)	+= mmconfig_$(BITS).o direct.o mmconfig-shared.o
+obj-$(CONFIG_PCI_MMCONFIG)	+= direct.o mmconfig-shared.o
+ifeq ($(BITS),32)
+obj-$(CONFIG_PCI_MMCONFIG)	+= mmconfig_32.o
+endif
 obj-$(CONFIG_PCI_DIRECT)	+= direct.o
 obj-$(CONFIG_PCI_OLPC)		+= olpc.o
 obj-$(CONFIG_PCI_XEN)		+= xen.o
diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c
index 6ac2738..eae3846 100644
--- a/arch/x86/pci/acpi.c
+++ b/arch/x86/pci/acpi.c
@@ -4,6 +4,7 @@
 #include <linux/irq.h>
 #include <linux/dmi.h>
 #include <linux/slab.h>
+#include <linux/mmconfig.h>
 #include <asm/numa.h>
 #include <asm/pci_x86.h>
 
diff --git a/arch/x86/pci/init.c b/arch/x86/pci/init.c
index adb62aa..b4a55df 100644
--- a/arch/x86/pci/init.c
+++ b/arch/x86/pci/init.c
@@ -1,5 +1,6 @@
 #include <linux/pci.h>
 #include <linux/init.h>
+#include <linux/mmconfig.h>
 #include <asm/pci_x86.h>
 #include <asm/x86_init.h>
 
diff --git a/arch/x86/pci/mmconfig-shared.c b/arch/x86/pci/mmconfig-shared.c
index dd30b7e..ce3d93c 100644
--- a/arch/x86/pci/mmconfig-shared.c
+++ b/arch/x86/pci/mmconfig-shared.c
@@ -18,6 +18,7 @@
 #include <linux/slab.h>
 #include <linux/mutex.h>
 #include <linux/rculist.h>
+#include <linux/mmconfig.h>
 #include <asm/e820.h>
 #include <asm/pci_x86.h>
 #include <asm/acpi.h>
@@ -27,103 +28,11 @@
 /* Indicate if the mmcfg resources have been placed into the resource table. */
 static bool pci_mmcfg_running_state;
 static bool pci_mmcfg_arch_init_failed;
-static DEFINE_MUTEX(pci_mmcfg_lock);
 
-LIST_HEAD(pci_mmcfg_list);
-
-static void __init pci_mmconfig_remove(struct pci_mmcfg_region *cfg)
-{
-	if (cfg->res.parent)
-		release_resource(&cfg->res);
-	list_del(&cfg->list);
-	kfree(cfg);
-}
-
-static void __init free_all_mmcfg(void)
-{
-	struct pci_mmcfg_region *cfg, *tmp;
-
-	pci_mmcfg_arch_free();
-	list_for_each_entry_safe(cfg, tmp, &pci_mmcfg_list, list)
-		pci_mmconfig_remove(cfg);
-}
-
-static void list_add_sorted(struct pci_mmcfg_region *new)
-{
-	struct pci_mmcfg_region *cfg;
-
-	/* keep list sorted by segment and starting bus number */
-	list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) {
-		if (cfg->segment > new->segment ||
-		    (cfg->segment == new->segment &&
-		     cfg->start_bus >= new->start_bus)) {
-			list_add_tail_rcu(&new->list, &cfg->list);
-			return;
-		}
-	}
-	list_add_tail_rcu(&new->list, &pci_mmcfg_list);
-}
-
-static struct pci_mmcfg_region *pci_mmconfig_alloc(int segment, int start,
-						   int end, u64 addr)
-{
-	struct pci_mmcfg_region *new;
-	struct resource *res;
-
-	if (addr == 0)
-		return NULL;
-
-	new = kzalloc(sizeof(*new), GFP_KERNEL);
-	if (!new)
-		return NULL;
-
-	new->address = addr;
-	new->segment = segment;
-	new->start_bus = start;
-	new->end_bus = end;
-
-	res = &new->res;
-	res->start = addr + PCI_MMCFG_BUS_OFFSET(start);
-	res->end = addr + PCI_MMCFG_BUS_OFFSET(end + 1) - 1;
-	res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
-	snprintf(new->name, PCI_MMCFG_RESOURCE_NAME_LEN,
-		 "PCI MMCONFIG %04x [bus %02x-%02x]", segment, start, end);
-	res->name = new->name;
-
-	return new;
-}
-
-static struct pci_mmcfg_region *__init pci_mmconfig_add(int segment, int start,
-							int end, u64 addr)
-{
-	struct pci_mmcfg_region *new;
-
-	new = pci_mmconfig_alloc(segment, start, end, addr);
-	if (new) {
-		mutex_lock(&pci_mmcfg_lock);
-		list_add_sorted(new);
-		mutex_unlock(&pci_mmcfg_lock);
-
-		pr_info(PREFIX
-		       "MMCONFIG for domain %04x [bus %02x-%02x] at %pR "
-		       "(base %#lx)\n",
-		       segment, start, end, &new->res, (unsigned long)addr);
-	}
-
-	return new;
-}
-
-struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, int bus)
-{
-	struct pci_mmcfg_region *cfg;
-
-	list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list)
-		if (cfg->segment == segment &&
-		    cfg->start_bus <= bus && bus <= cfg->end_bus)
-			return cfg;
-
-	return NULL;
-}
+const struct pci_raw_ops pci_mmcfg = {
+	.read =		pci_mmcfg_read,
+	.write =	pci_mmcfg_write,
+};
 
 static const char *__init pci_mmcfg_e7520(void)
 {
@@ -543,7 +452,7 @@ static void __init pci_mmcfg_reject_broken(int early)
 	}
 }
 
-static int __init acpi_mcfg_check_entry(struct acpi_table_mcfg *mcfg,
+int __init acpi_mcfg_check_entry(struct acpi_table_mcfg *mcfg,
 					struct acpi_mcfg_allocation *cfg)
 {
 	int year;
@@ -652,9 +561,10 @@ static void __init __pci_mmcfg_init(int early)
 		}
 	}
 
-	if (pci_mmcfg_arch_init())
+	if (pci_mmcfg_arch_init()) {
+		raw_pci_ext_ops = &pci_mmcfg;
 		pci_probe = (pci_probe & ~PCI_PROBE_MASK) | PCI_PROBE_MMCONF;
-	else {
+	} else {
 		free_all_mmcfg();
 		pci_mmcfg_arch_init_failed = true;
 	}
@@ -731,88 +641,40 @@ int pci_mmconfig_insert(struct device *dev, u16 seg, u8 start, u8 end,
 	if (!(pci_probe & PCI_PROBE_MMCONF) || pci_mmcfg_arch_init_failed)
 		return -ENODEV;
 
-	if (start > end)
+	if (start > end || !addr)
 		return -EINVAL;
 
-	mutex_lock(&pci_mmcfg_lock);
-	cfg = pci_mmconfig_lookup(seg, start);
-	if (cfg) {
-		if (cfg->end_bus < end)
-			dev_info(dev, FW_INFO
-				 "MMCONFIG for "
-				 "domain %04x [bus %02x-%02x] "
-				 "only partially covers this bridge\n",
-				  cfg->segment, cfg->start_bus, cfg->end_bus);
-		mutex_unlock(&pci_mmcfg_lock);
-		return -EEXIST;
-	}
-
-	if (!addr) {
-		mutex_unlock(&pci_mmcfg_lock);
-		return -EINVAL;
-	}
-
 	rc = -EBUSY;
 	cfg = pci_mmconfig_alloc(seg, start, end, addr);
 	if (cfg == NULL) {
 		dev_warn(dev, "fail to add MMCONFIG (out of memory)\n");
-		rc = -ENOMEM;
+		return -ENOMEM;
 	} else if (!pci_mmcfg_check_reserved(dev, cfg, 0)) {
 		dev_warn(dev, FW_BUG "MMCONFIG %pR isn't reserved\n",
 			 &cfg->res);
-	} else {
-		/* Insert resource if it's not in boot stage */
-		if (pci_mmcfg_running_state)
-			tmp = insert_resource_conflict(&iomem_resource,
-						       &cfg->res);
-
-		if (tmp) {
-			dev_warn(dev,
-				 "MMCONFIG %pR conflicts with "
-				 "%s %pR\n",
-				 &cfg->res, tmp->name, tmp);
-		} else if (pci_mmcfg_arch_map(cfg)) {
-			dev_warn(dev, "fail to map MMCONFIG %pR.\n",
-				 &cfg->res);
-		} else {
-			list_add_sorted(cfg);
-			dev_info(dev, "MMCONFIG at %pR (base %#lx)\n",
-				 &cfg->res, (unsigned long)addr);
-			cfg = NULL;
-			rc = 0;
-		}
-	}
-
-	if (cfg) {
-		if (cfg->res.parent)
-			release_resource(&cfg->res);
-		kfree(cfg);
+		goto error;
 	}
 
-	mutex_unlock(&pci_mmcfg_lock);
+	/* Insert resource if it's not in boot stage */
+	if (pci_mmcfg_running_state)
+		tmp = insert_resource_conflict(&iomem_resource, &cfg->res);
 
-	return rc;
-}
+	if (tmp) {
+		dev_warn(dev,
+			 "MMCONFIG %pR conflicts with %s %pR\n",
+			 &cfg->res, tmp->name, tmp);
+		goto error;
+	}
 
-/* Delete MMCFG information for host bridges */
-int pci_mmconfig_delete(u16 seg, u8 start, u8 end)
-{
-	struct pci_mmcfg_region *cfg;
+	rc = pci_mmconfig_inject(cfg);
+	if (rc)
+		goto error;
 
-	mutex_lock(&pci_mmcfg_lock);
-	list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list)
-		if (cfg->segment == seg && cfg->start_bus == start &&
-		    cfg->end_bus == end) {
-			list_del_rcu(&cfg->list);
-			synchronize_rcu();
-			pci_mmcfg_arch_unmap(cfg);
-			if (cfg->res.parent)
-				release_resource(&cfg->res);
-			mutex_unlock(&pci_mmcfg_lock);
-			kfree(cfg);
-			return 0;
-		}
-	mutex_unlock(&pci_mmcfg_lock);
+	return 0;
 
-	return -ENOENT;
+error:
+	if (cfg->res.parent)
+		release_resource(&cfg->res);
+	kfree(cfg);
+	return rc;
 }
diff --git a/arch/x86/pci/mmconfig_32.c b/arch/x86/pci/mmconfig_32.c
index 43984bc..c0106a6 100644
--- a/arch/x86/pci/mmconfig_32.c
+++ b/arch/x86/pci/mmconfig_32.c
@@ -12,6 +12,7 @@
 #include <linux/pci.h>
 #include <linux/init.h>
 #include <linux/rcupdate.h>
+#include <linux/mmconfig.h>
 #include <asm/e820.h>
 #include <asm/pci_x86.h>
 
@@ -49,7 +50,7 @@ static void pci_exp_set_dev_base(unsigned int base, int bus, int devfn)
 	}
 }
 
-static int pci_mmcfg_read(unsigned int seg, unsigned int bus,
+int pci_mmcfg_read(unsigned int seg, unsigned int bus,
 			  unsigned int devfn, int reg, int len, u32 *value)
 {
 	unsigned long flags;
@@ -88,7 +89,7 @@ err:		*value = -1;
 	return 0;
 }
 
-static int pci_mmcfg_write(unsigned int seg, unsigned int bus,
+int pci_mmcfg_write(unsigned int seg, unsigned int bus,
 			   unsigned int devfn, int reg, int len, u32 value)
 {
 	unsigned long flags;
@@ -125,15 +126,9 @@ static int pci_mmcfg_write(unsigned int seg, unsigned int bus,
 	return 0;
 }
 
-const struct pci_raw_ops pci_mmcfg = {
-	.read =		pci_mmcfg_read,
-	.write =	pci_mmcfg_write,
-};
-
 int __init pci_mmcfg_arch_init(void)
 {
 	printk(KERN_INFO "PCI: Using MMCONFIG for extended config space\n");
-	raw_pci_ext_ops = &pci_mmcfg;
 	return 1;
 }
 
diff --git a/arch/x86/pci/mmconfig_64.c b/arch/x86/pci/mmconfig_64.c
deleted file mode 100644
index bea5249..0000000
--- a/arch/x86/pci/mmconfig_64.c
+++ /dev/null
@@ -1,153 +0,0 @@
-/*
- * mmconfig.c - Low-level direct PCI config space access via MMCONFIG
- *
- * This is an 64bit optimized version that always keeps the full mmconfig
- * space mapped. This allows lockless config space operation.
- */
-
-#include <linux/pci.h>
-#include <linux/init.h>
-#include <linux/acpi.h>
-#include <linux/bitmap.h>
-#include <linux/rcupdate.h>
-#include <asm/e820.h>
-#include <asm/pci_x86.h>
-
-#define PREFIX "PCI: "
-
-static char __iomem *pci_dev_base(unsigned int seg, unsigned int bus, unsigned int devfn)
-{
-	struct pci_mmcfg_region *cfg = pci_mmconfig_lookup(seg, bus);
-
-	if (cfg && cfg->virt)
-		return cfg->virt + (PCI_MMCFG_BUS_OFFSET(bus) | (devfn << 12));
-	return NULL;
-}
-
-static int pci_mmcfg_read(unsigned int seg, unsigned int bus,
-			  unsigned int devfn, int reg, int len, u32 *value)
-{
-	char __iomem *addr;
-
-	/* Why do we have this when nobody checks it. How about a BUG()!? -AK */
-	if (unlikely((bus > 255) || (devfn > 255) || (reg > 4095))) {
-err:		*value = -1;
-		return -EINVAL;
-	}
-
-	rcu_read_lock();
-	addr = pci_dev_base(seg, bus, devfn);
-	if (!addr) {
-		rcu_read_unlock();
-		goto err;
-	}
-
-	switch (len) {
-	case 1:
-		*value = mmio_config_readb(addr + reg);
-		break;
-	case 2:
-		*value = mmio_config_readw(addr + reg);
-		break;
-	case 4:
-		*value = mmio_config_readl(addr + reg);
-		break;
-	}
-	rcu_read_unlock();
-
-	return 0;
-}
-
-static int pci_mmcfg_write(unsigned int seg, unsigned int bus,
-			   unsigned int devfn, int reg, int len, u32 value)
-{
-	char __iomem *addr;
-
-	/* Why do we have this when nobody checks it. How about a BUG()!? -AK */
-	if (unlikely((bus > 255) || (devfn > 255) || (reg > 4095)))
-		return -EINVAL;
-
-	rcu_read_lock();
-	addr = pci_dev_base(seg, bus, devfn);
-	if (!addr) {
-		rcu_read_unlock();
-		return -EINVAL;
-	}
-
-	switch (len) {
-	case 1:
-		mmio_config_writeb(addr + reg, value);
-		break;
-	case 2:
-		mmio_config_writew(addr + reg, value);
-		break;
-	case 4:
-		mmio_config_writel(addr + reg, value);
-		break;
-	}
-	rcu_read_unlock();
-
-	return 0;
-}
-
-const struct pci_raw_ops pci_mmcfg = {
-	.read =		pci_mmcfg_read,
-	.write =	pci_mmcfg_write,
-};
-
-static void __iomem *mcfg_ioremap(struct pci_mmcfg_region *cfg)
-{
-	void __iomem *addr;
-	u64 start, size;
-	int num_buses;
-
-	start = cfg->address + PCI_MMCFG_BUS_OFFSET(cfg->start_bus);
-	num_buses = cfg->end_bus - cfg->start_bus + 1;
-	size = PCI_MMCFG_BUS_OFFSET(num_buses);
-	addr = ioremap_nocache(start, size);
-	if (addr)
-		addr -= PCI_MMCFG_BUS_OFFSET(cfg->start_bus);
-	return addr;
-}
-
-int __init pci_mmcfg_arch_init(void)
-{
-	struct pci_mmcfg_region *cfg;
-
-	list_for_each_entry(cfg, &pci_mmcfg_list, list)
-		if (pci_mmcfg_arch_map(cfg)) {
-			pci_mmcfg_arch_free();
-			return 0;
-		}
-
-	raw_pci_ext_ops = &pci_mmcfg;
-
-	return 1;
-}
-
-void __init pci_mmcfg_arch_free(void)
-{
-	struct pci_mmcfg_region *cfg;
-
-	list_for_each_entry(cfg, &pci_mmcfg_list, list)
-		pci_mmcfg_arch_unmap(cfg);
-}
-
-int pci_mmcfg_arch_map(struct pci_mmcfg_region *cfg)
-{
-	cfg->virt = mcfg_ioremap(cfg);
-	if (!cfg->virt) {
-		pr_err(PREFIX "can't map MMCONFIG at %pR\n", &cfg->res);
-		return -ENOMEM;
-	}
-
-	return 0;
-}
-
-void pci_mmcfg_arch_unmap(struct pci_mmcfg_region *cfg)
-{
-	if (cfg && cfg->virt) {
-		iounmap(cfg->virt + PCI_MMCFG_BUS_OFFSET(cfg->start_bus));
-		cfg->virt = NULL;
-	}
-}
diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index e6c3ddd..aad0a08 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -5,7 +5,7 @@
 menuconfig ACPI
 	bool "ACPI (Advanced Configuration and Power Interface) Support"
 	depends on !IA64_HP_SIM
-	depends on IA64 || X86
+	depends on IA64 || X86 || ARM64
 	depends on PCI
 	select PNP
 	default y
@@ -163,6 +163,7 @@ config ACPI_PROCESSOR
 	tristate "Processor"
 	select THERMAL
 	select CPU_IDLE
+	depends on X86 || IA64
 	default y
 	help
 	  This driver installs ACPI as the idle handler for Linux and uses
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index b18cd21..0e6abf9 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -23,7 +23,11 @@ acpi-y				+= nvs.o
 
 # Power management related files
 acpi-y				+= wakeup.o
+ifeq ($(ARCH), arm64)
+acpi-y				+= sleep_arm.o
+else # X86, IA64
 acpi-y				+= sleep.o
+endif
 acpi-y				+= device_pm.o
 acpi-$(CONFIG_ACPI_SLEEP)	+= proc.o
 
@@ -66,6 +70,7 @@ obj-$(CONFIG_ACPI_BUTTON)	+= button.o
 obj-$(CONFIG_ACPI_FAN)		+= fan.o
 obj-$(CONFIG_ACPI_VIDEO)	+= video.o
 obj-$(CONFIG_ACPI_PCI_SLOT)	+= pci_slot.o
+obj-$(CONFIG_PCI_MMCONFIG)	+= mmconfig.o
 obj-$(CONFIG_ACPI_PROCESSOR)	+= processor.o
 obj-y				+= container.o
 obj-$(CONFIG_ACPI_THERMAL)	+= thermal.o
diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
index 8b67bd0..6d5412ab 100644
--- a/drivers/acpi/bus.c
+++ b/drivers/acpi/bus.c
@@ -41,6 +41,7 @@
 #include <acpi/apei.h>
 #include <linux/dmi.h>
 #include <linux/suspend.h>
+#include <linux/mmconfig.h>
 
 #include "internal.h"
 
@@ -448,6 +449,9 @@ static int __init acpi_bus_init_irq(void)
 	case ACPI_IRQ_MODEL_IOSAPIC:
 		message = "IOSAPIC";
 		break;
+	case ACPI_IRQ_MODEL_GIC:
+		message = "GIC";
+		break;
 	case ACPI_IRQ_MODEL_PLATFORM:
 		message = "platform specific model";
 		break;
diff --git a/drivers/acpi/mmconfig.c b/drivers/acpi/mmconfig.c
new file mode 100644
index 0000000..b13a9e4
--- /dev/null
+++ b/drivers/acpi/mmconfig.c
@@ -0,0 +1,414 @@
+/*
+ * Arch agnostic low-level direct PCI config space access via MMCONFIG
+ *
+ * Per-architecture code takes care of the mappings, region validation and
+ * accesses themselves.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/mutex.h>
+#include <linux/rculist.h>
+#include <linux/pci.h>
+#include <linux/mmconfig.h>
+
+#include <asm/pci.h>
+
+#define PREFIX "PCI: "
+
+static DEFINE_MUTEX(pci_mmcfg_lock);
+
+LIST_HEAD(pci_mmcfg_list);
+
+extern struct acpi_mcfg_fixup __start_acpi_mcfg_fixups[];
+extern struct acpi_mcfg_fixup __end_acpi_mcfg_fixups[];
+
+/*
+ * raw_pci_read/write - ACPI PCI config space accessors.
+ *
+ * The ACPI spec defines MMCFG as the way we can access PCI config space,
+ * so let MMCFG be the default (__weak).
+ *
+ * If a platform needs anything fancier, it should provide its own
+ * implementation.
+ */
+int __weak raw_pci_read(unsigned int domain, unsigned int bus,
+			unsigned int devfn, int reg, int len, u32 *val)
+{
+	return pci_mmcfg_read(domain, bus, devfn, reg, len, val);
+}
+
+int __weak raw_pci_write(unsigned int domain, unsigned int bus,
+			 unsigned int devfn, int reg, int len, u32 val)
+{
+	return pci_mmcfg_write(domain, bus, devfn, reg, len, val);
+}
+
+int __weak pci_mmcfg_read(unsigned int seg, unsigned int bus,
+			  unsigned int devfn, int reg, int len, u32 *value)
+{
+	struct pci_mmcfg_region *cfg;
+	char __iomem *addr;
+
+	/* Why do we have this when nobody checks it. How about a BUG()!? -AK */
+	if (unlikely((bus > 255) || (devfn > 255) || (reg > 4095))) {
+err:		*value = -1;
+		return -EINVAL;
+	}
+
+	rcu_read_lock();
+	cfg = pci_mmconfig_lookup(seg, bus);
+	if (!cfg || !cfg->virt) {
+		rcu_read_unlock();
+		goto err;
+	}
+	if (cfg->read)
+		(*cfg->read)(cfg, bus, devfn, reg, len, value);
+	else {
+		addr = cfg->virt + (PCI_MMCFG_BUS_OFFSET(bus) | (devfn << 12));
+
+		switch (len) {
+		case 1:
+			*value = mmio_config_readb(addr + reg);
+			break;
+		case 2:
+			*value = mmio_config_readw(addr + reg);
+			break;
+		case 4:
+			*value = mmio_config_readl(addr + reg);
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return 0;
+}
+
+int __weak pci_mmcfg_write(unsigned int seg, unsigned int bus,
+			   unsigned int devfn, int reg, int len, u32 value)
+{
+	struct pci_mmcfg_region *cfg;
+	char __iomem *addr;
+
+	/* Why do we have this when nobody checks it. How about a BUG()!? -AK */
+	if (unlikely((bus > 255) || (devfn > 255) || (reg > 4095)))
+		return -EINVAL;
+
+	rcu_read_lock();
+	cfg = pci_mmconfig_lookup(seg, bus);
+	if (!cfg || !cfg->virt) {
+		rcu_read_unlock();
+		return -EINVAL;
+	}
+	if (cfg->write)
+		(*cfg->write)(cfg, bus, devfn, reg, len, value);
+	else {
+		addr = cfg->virt + (PCI_MMCFG_BUS_OFFSET(bus) | (devfn << 12));
+
+		switch (len) {
+		case 1:
+			mmio_config_writeb(addr + reg, value);
+			break;
+		case 2:
+			mmio_config_writew(addr + reg, value);
+			break;
+		case 4:
+			mmio_config_writel(addr + reg, value);
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return 0;
+}
+
+static void __iomem *mcfg_ioremap(struct pci_mmcfg_region *cfg)
+{
+	void __iomem *addr;
+	u64 start, size;
+	int num_buses;
+
+	start = cfg->address + PCI_MMCFG_BUS_OFFSET(cfg->start_bus);
+	num_buses = cfg->end_bus - cfg->start_bus + 1;
+	size = PCI_MMCFG_BUS_OFFSET(num_buses);
+	addr = ioremap_nocache(start, size);
+	if (addr)
+		addr -= PCI_MMCFG_BUS_OFFSET(cfg->start_bus);
+	return addr;
+}
+
+int __init __weak pci_mmcfg_arch_init(void)
+{
+	struct pci_mmcfg_region *cfg;
+
+	list_for_each_entry(cfg, &pci_mmcfg_list, list)
+		if (pci_mmcfg_arch_map(cfg)) {
+			pci_mmcfg_arch_free();
+			return 0;
+		}
+
+	return 1;
+}
+
+void __init __weak pci_mmcfg_arch_free(void)
+{
+	struct pci_mmcfg_region *cfg;
+
+	list_for_each_entry(cfg, &pci_mmcfg_list, list)
+		pci_mmcfg_arch_unmap(cfg);
+}
+
+int __weak pci_mmcfg_arch_map(struct pci_mmcfg_region *cfg)
+{
+	cfg->virt = mcfg_ioremap(cfg);
+	if (!cfg->virt) {
+		pr_err(PREFIX "can't map MMCONFIG at %pR\n", &cfg->res);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+void __weak pci_mmcfg_arch_unmap(struct pci_mmcfg_region *cfg)
+{
+	if (cfg && cfg->virt) {
+		iounmap(cfg->virt + PCI_MMCFG_BUS_OFFSET(cfg->start_bus));
+		cfg->virt = NULL;
+	}
+}
+
+static void __init pci_mmconfig_remove(struct pci_mmcfg_region *cfg)
+{
+	if (cfg->res.parent)
+		release_resource(&cfg->res);
+	list_del(&cfg->list);
+	kfree(cfg);
+}
+
+void __init free_all_mmcfg(void)
+{
+	struct pci_mmcfg_region *cfg, *tmp;
+
+	pci_mmcfg_arch_free();
+	list_for_each_entry_safe(cfg, tmp, &pci_mmcfg_list, list)
+		pci_mmconfig_remove(cfg);
+}
+
+void list_add_sorted(struct pci_mmcfg_region *new)
+{
+	struct pci_mmcfg_region *cfg;
+
+	/* keep list sorted by segment and starting bus number */
+	list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) {
+		if (cfg->segment > new->segment ||
+		    (cfg->segment == new->segment &&
+		     cfg->start_bus >= new->start_bus)) {
+			list_add_tail_rcu(&new->list, &cfg->list);
+			return;
+		}
+	}
+	list_add_tail_rcu(&new->list, &pci_mmcfg_list);
+}
+
+struct pci_mmcfg_region *pci_mmconfig_alloc(int segment, int start,
+					    int end, u64 addr)
+{
+	struct pci_mmcfg_region *new;
+	struct resource *res;
+
+	if (addr == 0)
+		return NULL;
+
+	new = kzalloc(sizeof(*new), GFP_KERNEL);
+	if (!new)
+		return NULL;
+
+	new->address = addr;
+	new->segment = segment;
+	new->start_bus = start;
+	new->end_bus = end;
+
+	res = &new->res;
+	res->start = addr + PCI_MMCFG_BUS_OFFSET(start);
+	res->end = addr + PCI_MMCFG_BUS_OFFSET(end + 1) - 1;
+	res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+	snprintf(new->name, PCI_MMCFG_RESOURCE_NAME_LEN,
+		 "PCI MMCONFIG %04x [bus %02x-%02x]", segment, start, end);
+	res->name = new->name;
+
+	return new;
+}
+
+struct pci_mmcfg_region *pci_mmconfig_add(int segment, int start,
+					  int end, u64 addr)
+{
+	struct pci_mmcfg_region *new;
+
+	new = pci_mmconfig_alloc(segment, start, end, addr);
+	if (new) {
+		mutex_lock(&pci_mmcfg_lock);
+		list_add_sorted(new);
+		mutex_unlock(&pci_mmcfg_lock);
+
+		pr_info(PREFIX
+		       "MMCONFIG for domain %04x [bus %02x-%02x] at %pR "
+		       "(base %#lx)\n",
+		       segment, start, end, &new->res, (unsigned long)addr);
+	}
+
+	return new;
+}
+
+int __init pci_mmconfig_inject(struct pci_mmcfg_region *cfg)
+{
+	struct pci_mmcfg_region *cfg_conflict;
+	int err = 0;
+
+	mutex_lock(&pci_mmcfg_lock);
+	cfg_conflict = pci_mmconfig_lookup(cfg->segment, cfg->start_bus);
+	if (cfg_conflict) {
+		if (cfg_conflict->end_bus < cfg->end_bus)
+			pr_info(FW_INFO "MMCONFIG for "
+				"domain %04x [bus %02x-%02x] "
+				"only partially covers this bridge\n",
+				cfg_conflict->segment, cfg_conflict->start_bus,
+				cfg_conflict->end_bus);
+		err = -EEXIST;
+		goto out;
+	}
+
+	if (pci_mmcfg_arch_map(cfg)) {
+		pr_warn("failed to map MMCONFIG %pR\n", &cfg->res);
+		err = -ENOMEM;
+		goto out;
+	}
+
+	list_add_sorted(cfg);
+	pr_info("MMCONFIG at %pR (base %#lx)\n",
+		&cfg->res, (unsigned long)cfg->address);
+out:
+	mutex_unlock(&pci_mmcfg_lock);
+	return err;
+}
+
+struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, int bus)
+{
+	struct pci_mmcfg_region *cfg;
+
+	list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list)
+		if (cfg->segment == segment &&
+		    cfg->start_bus <= bus && bus <= cfg->end_bus)
+			return cfg;
+
+	return NULL;
+}
+
+int __init __weak acpi_mcfg_check_entry(struct acpi_table_mcfg *mcfg,
+					struct acpi_mcfg_allocation *cfg)
+{
+	return 0;
+}
+
+int __init pci_parse_mcfg(struct acpi_table_header *header)
+{
+	struct acpi_table_mcfg *mcfg;
+	struct acpi_mcfg_allocation *cfg_table, *cfg;
+	struct acpi_mcfg_fixup *fixup;
+	struct pci_mmcfg_region *new;
+	unsigned long i;
+	int entries;
+
+	if (!header)
+		return -EINVAL;
+
+	mcfg = (struct acpi_table_mcfg *)header;
+
+	/* how many config structures do we have */
+	free_all_mmcfg();
+	entries = 0;
+	i = header->length - sizeof(struct acpi_table_mcfg);
+	while (i >= sizeof(struct acpi_mcfg_allocation)) {
+		entries++;
+		i -= sizeof(struct acpi_mcfg_allocation);
+	}
+	if (entries == 0) {
+		pr_err(PREFIX "MMCONFIG has no entries\n");
+		return -ENODEV;
+	}
+
+	fixup = __start_acpi_mcfg_fixups;
+	while (fixup < __end_acpi_mcfg_fixups) {
+		if (!strncmp(fixup->oem_id, header->oem_id, 6) &&
+		    !strncmp(fixup->oem_table_id, header->oem_table_id, 8))
+			break;
+		++fixup;
+	}
+
+	cfg_table = (struct acpi_mcfg_allocation *) &mcfg[1];
+	for (i = 0; i < entries; i++) {
+		cfg = &cfg_table[i];
+		if (acpi_mcfg_check_entry(mcfg, cfg)) {
+			free_all_mmcfg();
+			return -ENODEV;
+		}
+
+		new = pci_mmconfig_add(cfg->pci_segment, cfg->start_bus_number,
+				       cfg->end_bus_number, cfg->address);
+		if (!new) {
+			pr_warn(PREFIX "no memory for MCFG entries\n");
+			free_all_mmcfg();
+			return -ENOMEM;
+		}
+		if (fixup < __end_acpi_mcfg_fixups)
+			new->fixup = fixup->hook;
+	}
+
+	return 0;
+}
+
+/* Delete MMCFG information for host bridges */
+int pci_mmconfig_delete(u16 seg, u8 start, u8 end)
+{
+	struct pci_mmcfg_region *cfg;
+
+	mutex_lock(&pci_mmcfg_lock);
+	list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list)
+		if (cfg->segment == seg && cfg->start_bus == start &&
+		    cfg->end_bus == end) {
+			list_del_rcu(&cfg->list);
+			synchronize_rcu();
+			pci_mmcfg_arch_unmap(cfg);
+			if (cfg->res.parent)
+				release_resource(&cfg->res);
+			mutex_unlock(&pci_mmcfg_lock);
+			kfree(cfg);
+			return 0;
+		}
+	mutex_unlock(&pci_mmcfg_lock);
+
+	return -ENOENT;
+}
+
+void __init __weak pci_mmcfg_early_init(void)
+{
+}
+
+void __init __weak pci_mmcfg_late_init(void)
+{
+	struct pci_mmcfg_region *cfg;
+
+	acpi_table_parse(ACPI_SIG_MCFG, pci_parse_mcfg);
+
+	if (list_empty(&pci_mmcfg_list))
+		return;
+
+	if (!pci_mmcfg_arch_init())
+		free_all_mmcfg();
+
+	list_for_each_entry(cfg, &pci_mmcfg_list, list)
+		insert_resource(&iomem_resource, &cfg->res);
+}
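
A minimal sketch of how an init or hotplug path would consume the helpers
above; the 0xe0000000 base address is a made-up example value, and the
function name is illustrative:

    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/mmconfig.h>

    static int __init example_register_ecam(void)
    {
    	struct pci_mmcfg_region *cfg;
    	int err;

    	/* Segment 0, buses 0x00-0xff, at an example ECAM base. */
    	cfg = pci_mmconfig_alloc(0, 0x00, 0xff, 0xe0000000ULL);
    	if (!cfg)
    		return -ENOMEM;

    	/* Maps the region and adds it to pci_mmcfg_list under the lock. */
    	err = pci_mmconfig_inject(cfg);
    	if (err)
    		kfree(cfg);	/* inject does not free on failure */
    	return err;
    }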
diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index f9eeae8..39748bb 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -336,11 +336,11 @@ acpi_map_lookup_virt(void __iomem *virt, acpi_size size)
 	return NULL;
 }
 
-#ifndef CONFIG_IA64
-#define should_use_kmap(pfn)   page_is_ram(pfn)
-#else
+#if defined(CONFIG_IA64) || defined(CONFIG_ARM64)
 /* ioremap will take care of cache attributes */
 #define should_use_kmap(pfn)   0
+#else
+#define should_use_kmap(pfn)   page_is_ram(pfn)
 #endif
 
 static void __iomem *acpi_map(acpi_physical_address pg_off, unsigned long pg_sz)
diff --git a/drivers/acpi/processor_core.c b/drivers/acpi/processor_core.c
index 7962651..b289cb4 100644
--- a/drivers/acpi/processor_core.c
+++ b/drivers/acpi/processor_core.c
@@ -83,6 +83,38 @@ static int map_lsapic_id(struct acpi_subtable_header *entry,
 	return 0;
 }
 
+/*
+ * On ARM platforms, the MPIDR value is the hardware ID, much as the
+ * APIC ID is on Intel platforms.
+ */
+static int map_gicc_mpidr(struct acpi_subtable_header *entry,
+		int device_declaration, u32 acpi_id, int *mpidr)
+{
+	struct acpi_madt_generic_interrupt *gicc =
+	    container_of(entry, struct acpi_madt_generic_interrupt, header);
+
+	if (!(gicc->flags & ACPI_MADT_ENABLED))
+		return -ENODEV;
+
+	/*
+	 * In the GIC interrupt model, logical processors are required
+	 * to have a Processor Device object in the DSDT, so we should
+	 * check device_declaration here.
+	 */
+	if (device_declaration && (gicc->uid == acpi_id)) {
+		/*
+		 * Bits other than Aff0 [0:7], Aff1 [8:15], Aff2 [16:23] and
+		 * Aff3 [32:39] must be 0, as defined by ACPI 5.1, so pack
+		 * the Affx fields into a single 32-bit identifier to
+		 * accommodate the ACPI processor drivers.
+		 */
+		*mpidr = ((gicc->arm_mpidr & 0xff00000000) >> 8)
+			 | gicc->arm_mpidr;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
 static int map_madt_entry(int type, u32 acpi_id)
 {
 	unsigned long madt_end, entry;
@@ -111,6 +143,9 @@ static int map_madt_entry(int type, u32 acpi_id)
 		} else if (header->type == ACPI_MADT_TYPE_LOCAL_SAPIC) {
 			if (!map_lsapic_id(header, type, acpi_id, &phys_id))
 				break;
+		} else if (header->type == ACPI_MADT_TYPE_GENERIC_INTERRUPT) {
+			if (!map_gicc_mpidr(header, type, acpi_id, &phys_id))
+				break;
 		}
 		entry += header->length;
 	}
@@ -143,6 +178,8 @@ static int map_mat_entry(acpi_handle handle, int type, u32 acpi_id)
 		map_lsapic_id(header, type, acpi_id, &phys_id);
 	else if (header->type == ACPI_MADT_TYPE_LOCAL_X2APIC)
 		map_x2apic_id(header, type, acpi_id, &phys_id);
+	else if (header->type == ACPI_MADT_TYPE_GENERIC_INTERRUPT)
+		map_gicc_mpidr(header, type, acpi_id, &phys_id);
 
 exit:
 	kfree(buffer.pointer);
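
A worked example of the Aff-field packing above, for an assumed MPIDR value
of 0x0000000100000203 (Aff3 = 1, Aff2 = 0, Aff1 = 2, Aff0 = 3):

    #include <linux/types.h>

    /* Mirrors the expression in map_gicc_mpidr():
     *   (0x0000000100000203 & 0xff00000000) >> 8  == 0x01000000
     *   0x01000000 | 0x00000203                   == 0x01000203
     * i.e. Aff3 lands in bits [24:31] of the packed 32-bit identifier.
     */
    static inline u32 example_pack_mpidr(u64 mpidr)
    {
    	return ((mpidr & 0xff00000000ULL) >> 8) | (u32)mpidr;
    }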
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index bbca783..1b9cca3 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -987,13 +987,82 @@ static bool acpi_of_driver_match_device(struct device *dev,
 bool acpi_driver_match_device(struct device *dev,
 			      const struct device_driver *drv)
 {
-	if (!drv->acpi_match_table)
-		return acpi_of_driver_match_device(dev, drv);
+	bool ret = false;
 
-	return !!acpi_match_device(drv->acpi_match_table, dev);
+	if (drv->acpi_match_table)
+		ret = !!acpi_match_device(drv->acpi_match_table, dev);
+
+	/* Next, try to match with special "PRP0001" _HID */
+	if (!ret && drv->of_match_table)
+		ret = acpi_of_driver_match_device(dev, drv);
+
+	/* Next, try to match with PCI-defined class-code */
+	if (!ret && drv->acpi_match_cls)
+		ret = acpi_match_device_cls(drv->acpi_match_cls, dev);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(acpi_driver_match_device);
 
+/**
+ * acpi_match_device_cls - Match a struct device against an ACPI _CLS object
+ * @dev_cls: A pointer to the struct acpi_device_cls object to match against.
+ * @dev: The device structure to match.
+ *
+ * Check whether @dev has an ACPI companion that provides a _CLS object.
+ * If it does, evaluate _CLS and compare the result against the given
+ * struct acpi_device_cls object.
+ *
+ * Return true on a match, false otherwise.
+ */
+bool acpi_match_device_cls(const struct acpi_device_cls *dev_cls,
+			  const struct device *dev)
+{
+	acpi_status status;
+	union acpi_object *pkg;
+	struct acpi_device_cls cls;
+	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+	struct acpi_buffer format = { sizeof("NNN"), "NNN" };
+	struct acpi_buffer state = { 0, NULL };
+	struct acpi_device *adev = ACPI_COMPANION(dev);
+
+	if (!adev || !adev->status.present || !dev_cls)
+		return false;
+
+	status = acpi_evaluate_object(adev->handle, METHOD_NAME__CLS,
+				      NULL, &buffer);
+	if (ACPI_FAILURE(status))
+		return false;
+
+	/*
+	 * Note: ACPI 5.1 defines the package to contain 3 integers:
+	 * Base-Class code, Sub-Class code, and Programming Interface code.
+	 */
+	pkg = buffer.pointer;
+	if (!pkg ||
+	    (pkg->type != ACPI_TYPE_PACKAGE) ||
+	    (pkg->package.count != 3)) {
+		dev_dbg(&adev->dev, "Invalid _CLS data\n");
+		goto out;
+	}
+
+	state.length = sizeof(struct acpi_device_cls);
+	state.pointer = &cls;
+
+	status = acpi_extract_package(pkg, &format, &state);
+	if (ACPI_FAILURE(status))
+		goto out;
+
+	return (dev_cls->base_class == cls.base_class &&
+		dev_cls->sub_class == cls.sub_class &&
+		dev_cls->prog_interface == cls.prog_interface);
+out:
+	kfree(pkg);
+	return false;
+}
+EXPORT_SYMBOL_GPL(acpi_match_device_cls);
+
 static void acpi_free_power_resources_lists(struct acpi_device *device)
 {
 	int i;
diff --git a/drivers/acpi/sleep_arm.c b/drivers/acpi/sleep_arm.c
new file mode 100644
index 0000000..54578ef
--- /dev/null
+++ b/drivers/acpi/sleep_arm.c
@@ -0,0 +1,28 @@
+/*
+ *  ARM64 Specific Sleep Functionality
+ *
+ *  Copyright (C) 2013-2014, Linaro Ltd.
+ *      Author: Graeme Gregory <graeme.gregory@linaro.org>
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License version 2 as
+ *  published by the Free Software Foundation.
+ */
+
+#include <linux/acpi.h>
+
+/*
+ * Currently the ACPI 5.1 standard does not define S states in a
+ * manner that is usable for ARM64. These two stubs are sufficient
+ * for the system to initialise and for device PM to work.
+ */
+u32 acpi_target_system_state(void)
+{
+	return ACPI_STATE_S0;
+}
+EXPORT_SYMBOL_GPL(acpi_target_system_state);
+
+int __init acpi_sleep_init(void)
+{
+	return -ENOSYS;
+}
diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
index 93b8152..42d314f 100644
--- a/drivers/acpi/tables.c
+++ b/drivers/acpi/tables.c
@@ -183,6 +183,49 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header)
 		}
 		break;
 
+	case ACPI_MADT_TYPE_GENERIC_INTERRUPT:
+		{
+			struct acpi_madt_generic_interrupt *p =
+				(struct acpi_madt_generic_interrupt *)header;
+			pr_info("GICC (acpi_id[0x%04x] address[%p] MPIDR[0x%llx] %s)\n",
+				p->uid, (void *)(unsigned long)p->base_address,
+				p->arm_mpidr,
+				(p->flags & ACPI_MADT_ENABLED) ? "enabled" : "disabled");
+
+		}
+		break;
+
+	case ACPI_MADT_TYPE_GENERIC_DISTRIBUTOR:
+		{
+			struct acpi_madt_generic_distributor *p =
+				(struct acpi_madt_generic_distributor *)header;
+			pr_info("GIC Distributor (gic_id[0x%04x] address[%p] gsi_base[%d])\n",
+				p->gic_id,
+				(void *)(unsigned long)p->base_address,
+				p->global_irq_base);
+		}
+		break;
+
+	case ACPI_MADT_TYPE_GENERIC_MSI_FRAME:
+		{
+			struct acpi_madt_generic_msi_frame *p =
+				(struct acpi_madt_generic_msi_frame *)header;
+			pr_info("GIC MSI Frame (msi_frame_id[%d] address[%p])\n",
+				p->msi_frame_id,
+				(void *)(unsigned long)p->base_address);
+		}
+		break;
+
+	case ACPI_MADT_TYPE_GENERIC_REDISTRIBUTOR:
+		{
+			struct acpi_madt_generic_redistributor *p =
+				(struct acpi_madt_generic_redistributor *)header;
+			pr_info("GIC Redistributor (address[%p] region_size[0x%x])\n",
+				(void *)(unsigned long)p->base_address,
+				p->length);
+		}
+		break;
+
 	default:
 		pr_warn("Found unsupported MADT entry (type = 0x%x)\n",
 			header->type);
diff --git a/drivers/acpi/utils.c b/drivers/acpi/utils.c
index cd49a39..7f68f96 100644
--- a/drivers/acpi/utils.c
+++ b/drivers/acpi/utils.c
@@ -712,3 +712,29 @@ bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, int rev, u64 funcs)
 	return false;
 }
 EXPORT_SYMBOL(acpi_check_dsm);
+
+/**
+ * acpi_check_coherency - check for memory coherency of a device
+ * @handle: ACPI device handle
+ * @val:    Pointer to returned value
+ *
+ * Search a device and its parents for a _CCA method and return
+ * its value.
+ */
+acpi_status acpi_check_coherency(acpi_handle handle, int *val)
+{
+	unsigned long long data;
+	acpi_status status;
+
+	do {
+		status = acpi_evaluate_integer(handle, "_CCA", NULL, &data);
+		if (!ACPI_FAILURE(status)) {
+			*val = data;
+			break;
+		}
+		status = acpi_get_parent(handle, &handle);
+	} while (!ACPI_FAILURE(status));
+
+	return status;
+}
+EXPORT_SYMBOL(acpi_check_coherency);
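
A sketch of how a platform driver might consume this helper; using the
result only for a log message here is purely illustrative:

    #include <linux/acpi.h>
    #include <linux/device.h>

    static void example_report_cca(struct device *dev)
    {
    	int coherent = 0;

    	/* Walk the device and its ancestors looking for a _CCA value. */
    	if (ACPI_FAILURE(acpi_check_coherency(ACPI_HANDLE(dev), &coherent)))
    		coherent = 0;	/* treat a missing _CCA as non-coherent */

    	dev_info(dev, "DMA is %scoherent\n", coherent ? "" : "non-");
    }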
diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
index 5f60155..50305e3 100644
--- a/drivers/ata/Kconfig
+++ b/drivers/ata/Kconfig
@@ -48,7 +48,7 @@ config ATA_VERBOSE_ERROR
 
 config ATA_ACPI
 	bool "ATA ACPI Support"
-	depends on ACPI && PCI
+	depends on ACPI
 	default y
 	help
 	  This option adds support for ATA-related ACPI objects.
diff --git a/drivers/ata/ahci_platform.c b/drivers/ata/ahci_platform.c
index 78d6ae0..d110c95 100644
--- a/drivers/ata/ahci_platform.c
+++ b/drivers/ata/ahci_platform.c
@@ -78,12 +78,24 @@ static const struct of_device_id ahci_of_match[] = {
 };
 MODULE_DEVICE_TABLE(of, ahci_of_match);
 
+#ifdef CONFIG_ATA_ACPI
+static const struct acpi_device_id ahci_acpi_match[] = {
+	{ "AMDI0600", 0 }, /* AMD Seattle AHCI */
+	{ },
+};
+MODULE_DEVICE_TABLE(acpi, ahci_acpi_match);
+#endif
+
+static const struct acpi_device_cls ahci_cls = {0x01, 0x06, 0x01};
+
 static struct platform_driver ahci_driver = {
 	.probe = ahci_probe,
 	.remove = ata_platform_remove_one,
 	.driver = {
 		.name = DRV_NAME,
 		.of_match_table = ahci_of_match,
+		.acpi_match_cls = &ahci_cls,
+		.acpi_match_table = ACPI_PTR(ahci_acpi_match),
 		.pm = &ahci_pm_ops,
 	},
 };
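
The three initializers in ahci_cls follow the PCI class-code ordering
(base class, sub-class, programming interface), so the match corresponds to
the familiar 24-bit AHCI class code; the macro name below is illustrative:

    /* 0x01 = mass storage, 0x06 = SATA, 0x01 = AHCI */
    #define EXAMPLE_AHCI_CLASS	((0x01 << 16) | (0x06 << 8) | 0x01)	/* 0x010601 */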
diff --git a/drivers/ata/ahci_xgene.c b/drivers/ata/ahci_xgene.c
index 2e8bb60..33d7784 100644
--- a/drivers/ata/ahci_xgene.c
+++ b/drivers/ata/ahci_xgene.c
@@ -28,6 +28,7 @@
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 #include <linux/phy/phy.h>
+#include <linux/acpi.h>
 #include "ahci.h"
 
 #define DRV_NAME "xgene-ahci"
@@ -225,14 +226,6 @@ static unsigned int xgene_ahci_qc_issue(struct ata_queued_cmd *qc)
 	return rc;
 }
 
-static bool xgene_ahci_is_memram_inited(struct xgene_ahci_context *ctx)
-{
-	void __iomem *diagcsr = ctx->csr_diag;
-
-	return (readl(diagcsr + CFG_MEM_RAM_SHUTDOWN) == 0 &&
-	        readl(diagcsr + BLOCK_MEM_RDY) == 0xFFFFFFFF);
-}
-
 /**
  * xgene_ahci_read_id - Read ID data from the specified device
  * @dev: device
@@ -685,11 +678,6 @@ static int xgene_ahci_probe(struct platform_device *pdev)
 		return -ENODEV;
 	}
 
-	if (xgene_ahci_is_memram_inited(ctx)) {
-		dev_info(dev, "skip clock and PHY initialization\n");
-		goto skip_clk_phy;
-	}
-
 	/* Due to errata, HW requires full toggle transition */
 	rc = ahci_platform_enable_clks(hpriv);
 	if (rc)
@@ -702,7 +690,7 @@ static int xgene_ahci_probe(struct platform_device *pdev)
 
 	/* Configure the host controller */
 	xgene_ahci_hw_init(hpriv);
-skip_clk_phy:
+
 	hpriv->flags = AHCI_HFLAG_NO_PMP | AHCI_HFLAG_NO_NCQ;
 
 	rc = ahci_platform_init_host(pdev, hpriv, &xgene_ahci_port_info,
@@ -718,6 +706,16 @@ disable_resources:
 	return rc;
 }
 
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id xgene_ahci_acpi_match[] = {
+	{ "APMC0D00", },
+	{ "APMC0D0D", },
+	{ "APMC0D09", },
+	{ }
+};
+MODULE_DEVICE_TABLE(acpi, xgene_ahci_acpi_match);
+#endif
+
 static const struct of_device_id xgene_ahci_of_match[] = {
 	{.compatible = "apm,xgene-ahci"},
 	{},
@@ -730,6 +728,7 @@ static struct platform_driver xgene_ahci_driver = {
 	.driver = {
 		.name = DRV_NAME,
 		.of_match_table = xgene_ahci_of_match,
+		.acpi_match_table = ACPI_PTR(xgene_ahci_acpi_match),
 	},
 };
 
diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index a3025e7..3b2e2d0 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -22,6 +22,7 @@
 #include <linux/io.h>
 #include <linux/slab.h>
 #include <linux/sched_clock.h>
+#include <linux/acpi.h>
 
 #include <asm/arch_timer.h>
 #include <asm/virt.h>
@@ -62,7 +63,8 @@ enum ppi_nr {
 	MAX_TIMER_PPI
 };
 
-static int arch_timer_ppi[MAX_TIMER_PPI];
+int arch_timer_ppi[MAX_TIMER_PPI];
+EXPORT_SYMBOL(arch_timer_ppi);
 
 static struct clock_event_device __percpu *arch_timer_evt;
 
@@ -371,8 +373,12 @@ arch_timer_detect_rate(void __iomem *cntbase, struct device_node *np)
 	if (arch_timer_rate)
 		return;
 
-	/* Try to determine the frequency from the device tree or CNTFRQ */
-	if (of_property_read_u32(np, "clock-frequency", &arch_timer_rate)) {
+	/*
+	 * Try to determine the frequency from the device tree or CNTFRQ;
+	 * if ACPI is enabled, get the frequency from CNTFRQ only.
+	 */
+	if (!acpi_disabled ||
+	    of_property_read_u32(np, "clock-frequency", &arch_timer_rate)) {
 		if (cntbase)
 			arch_timer_rate = readl_relaxed(cntbase + CNTFRQ);
 		else
@@ -691,28 +697,8 @@ static void __init arch_timer_common_init(void)
 	arch_timer_arch_init();
 }
 
-static void __init arch_timer_init(struct device_node *np)
+static void __init arch_timer_init(void)
 {
-	int i;
-
-	if (arch_timers_present & ARCH_CP15_TIMER) {
-		pr_warn("arch_timer: multiple nodes in dt, skipping\n");
-		return;
-	}
-
-	arch_timers_present |= ARCH_CP15_TIMER;
-	for (i = PHYS_SECURE_PPI; i < MAX_TIMER_PPI; i++)
-		arch_timer_ppi[i] = irq_of_parse_and_map(np, i);
-	arch_timer_detect_rate(NULL, np);
-
-	/*
-	 * If we cannot rely on firmware initializing the timer registers then
-	 * we should use the physical timers instead.
-	 */
-	if (IS_ENABLED(CONFIG_ARM) &&
-	    of_property_read_bool(np, "arm,cpu-registers-not-fw-configured"))
-			arch_timer_use_virtual = false;
-
 	/*
 	 * If HYP mode is available, we know that the physical timer
 	 * has been configured to be accessible from PL1. Use it, so
@@ -731,13 +717,39 @@ static void __init arch_timer_init(struct device_node *np)
 		}
 	}
 
-	arch_timer_c3stop = !of_property_read_bool(np, "always-on");
-
 	arch_timer_register();
 	arch_timer_common_init();
 }
-CLOCKSOURCE_OF_DECLARE(armv7_arch_timer, "arm,armv7-timer", arch_timer_init);
-CLOCKSOURCE_OF_DECLARE(armv8_arch_timer, "arm,armv8-timer", arch_timer_init);
+
+static void __init arch_timer_of_init(struct device_node *np)
+{
+	int i;
+
+	if (arch_timers_present & ARCH_CP15_TIMER) {
+		pr_warn("arch_timer: multiple nodes in dt, skipping\n");
+		return;
+	}
+
+	arch_timers_present |= ARCH_CP15_TIMER;
+	for (i = PHYS_SECURE_PPI; i < MAX_TIMER_PPI; i++)
+		arch_timer_ppi[i] = irq_of_parse_and_map(np, i);
+
+	arch_timer_detect_rate(NULL, np);
+
+	arch_timer_c3stop = !of_property_read_bool(np, "always-on");
+
+	/*
+	 * If we cannot rely on firmware initializing the timer registers then
+	 * we should use the physical timers instead.
+	 */
+	if (IS_ENABLED(CONFIG_ARM) &&
+	    of_property_read_bool(np, "arm,cpu-registers-not-fw-configured"))
+			arch_timer_use_virtual = false;
+
+	arch_timer_init();
+}
+CLOCKSOURCE_OF_DECLARE(armv7_arch_timer, "arm,armv7-timer", arch_timer_of_init);
+CLOCKSOURCE_OF_DECLARE(armv8_arch_timer, "arm,armv8-timer", arch_timer_of_init);
 
 static void __init arch_timer_mem_init(struct device_node *np)
 {
@@ -804,3 +816,70 @@ static void __init arch_timer_mem_init(struct device_node *np)
 }
 CLOCKSOURCE_OF_DECLARE(armv7_arch_timer_mem, "arm,armv7-timer-mem",
 		       arch_timer_mem_init);
+
+#ifdef CONFIG_ACPI
+static int __init map_generic_timer_interrupt(u32 interrupt, u32 flags)
+{
+	int trigger, polarity;
+
+	if (!interrupt)
+		return 0;
+
+	trigger = (flags & ACPI_GTDT_INTERRUPT_MODE) ? ACPI_EDGE_SENSITIVE
+			: ACPI_LEVEL_SENSITIVE;
+
+	polarity = (flags & ACPI_GTDT_INTERRUPT_POLARITY) ? ACPI_ACTIVE_LOW
+			: ACPI_ACTIVE_HIGH;
+
+	return acpi_register_gsi(NULL, interrupt, trigger, polarity);
+}
+
+/* Initialize per-processor generic timer */
+static int __init arch_timer_acpi_init(struct acpi_table_header *table)
+{
+	struct acpi_table_gtdt *gtdt;
+
+	if (arch_timers_present & ARCH_CP15_TIMER) {
+		pr_warn("arch_timer: already initialized, skipping\n");
+		return -EINVAL;
+	}
+
+	gtdt = container_of(table, struct acpi_table_gtdt, header);
+
+	arch_timers_present |= ARCH_CP15_TIMER;
+
+	arch_timer_ppi[PHYS_SECURE_PPI] =
+		map_generic_timer_interrupt(gtdt->secure_el1_interrupt,
+		gtdt->secure_el1_flags);
+
+	arch_timer_ppi[PHYS_NONSECURE_PPI] =
+		map_generic_timer_interrupt(gtdt->non_secure_el1_interrupt,
+		gtdt->non_secure_el1_flags);
+
+	arch_timer_ppi[VIRT_PPI] =
+		map_generic_timer_interrupt(gtdt->virtual_timer_interrupt,
+		gtdt->virtual_timer_flags);
+
+	arch_timer_ppi[HYP_PPI] =
+		map_generic_timer_interrupt(gtdt->non_secure_el2_interrupt,
+		gtdt->non_secure_el2_flags);
+
+	/* Get the frequency from CNTFRQ */
+	arch_timer_detect_rate(NULL, NULL);
+
+	/* Always-on capability */
+	arch_timer_c3stop = !(gtdt->non_secure_el1_flags & ACPI_GTDT_ALWAYS_ON);
+
+	arch_timer_init();
+	return 0;
+}
+
+/* Initialize all the generic timers presented in GTDT */
+void __init acpi_generic_timer_init(void)
+{
+	if (acpi_disabled)
+		return;
+
+	acpi_table_parse(ACPI_SIG_GTDT, arch_timer_acpi_init);
+}
+#endif
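
As a worked example of the flag decoding above (not additional code in the
patch), a GTDT flags value of 0x1 maps as follows:

    /* flags = 0x1:
     *   ACPI_GTDT_INTERRUPT_MODE set       -> ACPI_EDGE_SENSITIVE
     *   ACPI_GTDT_INTERRUPT_POLARITY clear -> ACPI_ACTIVE_HIGH
     * map_generic_timer_interrupt() hands these to acpi_register_gsi(),
     * which returns the Linux IRQ number stored in arch_timer_ppi[].
     */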
diff --git a/drivers/firmware/dmi-sysfs.c b/drivers/firmware/dmi-sysfs.c
index e0f1cb3..9b396d7 100644
--- a/drivers/firmware/dmi-sysfs.c
+++ b/drivers/firmware/dmi-sysfs.c
@@ -29,6 +29,8 @@
 #define MAX_ENTRY_TYPE 255 /* Most of these aren't used, but we consider
 			      the top entry type is only 8 bits */
 
+static const u8 *smbios_raw_header;
+
 struct dmi_sysfs_entry {
 	struct dmi_header dh;
 	struct kobject kobj;
@@ -646,9 +648,37 @@ static void cleanup_entry_list(void)
 	}
 }
 
+static ssize_t smbios_entry_area_raw_read(struct file *filp,
+					  struct kobject *kobj,
+					  struct bin_attribute *bin_attr,
+					  char *buf, loff_t pos, size_t count)
+{
+	ssize_t size;
+
+	size = bin_attr->size;
+
+	if (size > pos)
+		size -= pos;
+	else
+		return 0;
+
+	if (count < size)
+		size = count;
+
+	memcpy(buf, &smbios_raw_header[pos], size);
+
+	return size;
+}
+
+static struct bin_attribute smbios_raw_area_attr = {
+	.read = smbios_entry_area_raw_read,
+	.attr = {.name = "smbios_raw_header", .mode = 0400},
+};
+
 static int __init dmi_sysfs_init(void)
 {
 	int error = -ENOMEM;
+	int size;
 	int val;
 
 	/* Set up our directory */
@@ -669,6 +699,18 @@ static int __init dmi_sysfs_init(void)
 		goto err;
 	}
 
+	smbios_raw_header = dmi_get_smbios_entry_area(&size);
+	if (!smbios_raw_header) {
+		pr_debug("dmi-sysfs: SMBIOS raw data is not available.\n");
+		error = -EINVAL;
+		goto err;
+	}
+
+	/* Create the raw binary file to access the entry area */
+	smbios_raw_area_attr.size = size;
+	if (sysfs_create_bin_file(dmi_kobj, &smbios_raw_area_attr))
+		goto err;
+
 	pr_debug("dmi-sysfs: loaded.\n");
 
 	return 0;
diff --git a/drivers/firmware/dmi_scan.c b/drivers/firmware/dmi_scan.c
index c5f7b4e..d55c712 100644
--- a/drivers/firmware/dmi_scan.c
+++ b/drivers/firmware/dmi_scan.c
@@ -113,6 +113,8 @@ static void dmi_table(u8 *buf, int len, int num,
 	}
 }
 
+static u8 smbios_header[32];
+static int smbios_header_size;
 static phys_addr_t dmi_base;
 static u16 dmi_len;
 static u16 dmi_num;
@@ -474,6 +476,8 @@ static int __init dmi_present(const u8 *buf)
 	if (memcmp(buf, "_SM_", 4) == 0 &&
 	    buf[5] < 32 && dmi_checksum(buf, buf[5])) {
 		smbios_ver = get_unaligned_be16(buf + 6);
+		smbios_header_size = buf[5];
+		memcpy(smbios_header, buf, smbios_header_size);
 
 		/* Some BIOS report weird SMBIOS version, fix that up */
 		switch (smbios_ver) {
@@ -505,6 +509,8 @@ static int __init dmi_present(const u8 *buf)
 				pr_info("SMBIOS %d.%d present.\n",
 				       dmi_ver >> 8, dmi_ver & 0xFF);
 			} else {
+				smbios_header_size = 15;
+				memcpy(smbios_header, buf, smbios_header_size);
 				dmi_ver = (buf[14] & 0xF0) << 4 |
 					   (buf[14] & 0x0F);
 				pr_info("Legacy DMI %d.%d present.\n",
@@ -530,6 +536,8 @@ static int __init dmi_smbios3_present(const u8 *buf)
 		dmi_ver = get_unaligned_be16(buf + 7);
 		dmi_len = get_unaligned_le32(buf + 12);
 		dmi_base = get_unaligned_le64(buf + 16);
+		smbios_header_size = buf[6];
+		memcpy(smbios_header, buf, smbios_header_size);
 
 		/*
 		 * The 64-bit SMBIOS 3.0 entry point no longer has a field
@@ -941,3 +949,21 @@ void dmi_memdev_name(u16 handle, const char **bank, const char **device)
 	}
 }
 EXPORT_SYMBOL_GPL(dmi_memdev_name);
+
+/**
+ * dmi_get_smbios_entry_area - copy the SMBIOS entry point area to an array.
+ * @size: pointer through which the actual size of the SMBIOS entry point
+ *        area is returned.
+ *
+ * Returns NULL if the table is not available, otherwise returns a pointer
+ * to the SMBIOS entry point area array.
+ */
+const u8 *dmi_get_smbios_entry_area(int *size)
+{
+	if (!smbios_header_size || !dmi_available)
+		return NULL;
+
+	*size = smbios_header_size;
+
+	return smbios_header;
+}
+EXPORT_SYMBOL_GPL(dmi_get_smbios_entry_area);
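
A small userspace sketch that consumes the new attribute; the path assumes
the standard /sys/firmware/dmi location, and since the file is mode 0400 it
must be read as root:

    /* Build: cc -o smbios_hdr smbios_hdr.c */
    #include <stdio.h>

    int main(void)
    {
    	FILE *f = fopen("/sys/firmware/dmi/smbios_raw_header", "rb");
    	unsigned char buf[32];	/* matches smbios_header[32] above */
    	size_t n, i;

    	if (!f) {
    		perror("smbios_raw_header");
    		return 1;
    	}
    	n = fread(buf, 1, sizeof(buf), f);
    	for (i = 0; i < n; i++)
    		printf("%02x%c", buf[i], (i % 16 == 15) ? '\n' : ' ');
    	printf("\n");
    	fclose(f);
    	return 0;
    }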
diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
index 91da56c..7c62760 100644
--- a/drivers/firmware/efi/libstub/fdt.c
+++ b/drivers/firmware/efi/libstub/fdt.c
@@ -156,6 +156,14 @@ efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt,
 	if (status)
 		goto fdt_set_fail;
 
+	/* Add a property to show that the DTB was generated by the UEFI stub */
+	if (!orig_fdt) {
+		status = fdt_setprop(fdt, node,
+				     "linux,uefi-stub-generated-dtb", NULL, 0);
+		if (status)
+			goto fdt_set_fail;
+	}
+
 	return EFI_SUCCESS;
 
 fdt_set_fail:
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index fc13dd5..3143a6e 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -381,7 +381,10 @@ static struct device_node *dev_get_dev_node(struct device *dev)
 
 		while (!pci_is_root_bus(bus))
 			bus = bus->parent;
-		return bus->bridge->parent->of_node;
+		if (bus->bridge->parent)
+			return bus->bridge->parent->of_node;
+		else
+			return NULL;
 	}
 
 	return dev->of_node;
@@ -497,6 +500,9 @@ static struct arm_smmu_device *find_smmu_for_device(struct device *dev)
 	struct arm_smmu_master *master = NULL;
 	struct device_node *dev_node = dev_get_dev_node(dev);
 
+	if (!dev_node)
+		return NULL;
+
 	spin_lock(&arm_smmu_devices_lock);
 	list_for_each_entry(smmu, &arm_smmu_devices, list) {
 		master = find_smmu_master(smmu, dev_node);
diff --git a/drivers/irqchip/irq-gic-v2m.c b/drivers/irqchip/irq-gic-v2m.c
index fdf7065..235524d 100644
--- a/drivers/irqchip/irq-gic-v2m.c
+++ b/drivers/irqchip/irq-gic-v2m.c
@@ -15,6 +15,7 @@
 
 #define pr_fmt(fmt) "GICv2m: " fmt
 
+#include <linux/acpi.h>
 #include <linux/irq.h>
 #include <linux/irqdomain.h>
 #include <linux/kernel.h>
@@ -45,7 +46,6 @@
 
 struct v2m_data {
 	spinlock_t msi_cnt_lock;
-	struct msi_controller mchip;
 	struct resource res;	/* GICv2m resource */
 	void __iomem *base;	/* GICv2m virt address */
 	u32 spi_start;		/* The SPI number that MSIs start */
@@ -192,7 +192,7 @@ static void gicv2m_irq_domain_free(struct irq_domain *domain,
 	irq_domain_free_irqs_parent(domain, virq, nr_irqs);
 }
 
-static const struct irq_domain_ops gicv2m_domain_ops = {
+static struct irq_domain_ops gicv2m_domain_ops = {
 	.alloc			= gicv2m_irq_domain_alloc,
 	.free			= gicv2m_irq_domain_free,
 };
@@ -213,11 +213,17 @@ static bool is_msi_spi_valid(u32 base, u32 num)
 	return true;
 }
 
-static int __init gicv2m_init_one(struct device_node *node,
-				  struct irq_domain *parent)
+char gicv2m_msi_domain_name[] = "V2M-MSI";
+char gicv2m_domain_name[] = "GICV2M";
+static int __init gicv2m_init_one(struct irq_domain *parent,
+				  u32 spi_start, u32 nr_spis,
+				  struct resource *res,
+				  struct device_node *node,
+				  u32 msi_frame_id)
 {
 	int ret;
 	struct v2m_data *v2m;
+	struct irq_domain *inner_domain;
 
 	v2m = kzalloc(sizeof(struct v2m_data), GFP_KERNEL);
 	if (!v2m) {
@@ -225,23 +231,17 @@ static int __init gicv2m_init_one(struct device_node *node,
 		return -ENOMEM;
 	}
 
-	ret = of_address_to_resource(node, 0, &v2m->res);
-	if (ret) {
-		pr_err("Failed to allocate v2m resource.\n");
-		goto err_free_v2m;
-	}
-
-	v2m->base = ioremap(v2m->res.start, resource_size(&v2m->res));
+	v2m->base = ioremap(res->start, resource_size(res));
 	if (!v2m->base) {
 		pr_err("Failed to map GICv2m resource\n");
 		ret = -ENOMEM;
 		goto err_free_v2m;
 	}
+	memcpy(&v2m->res,res, sizeof(struct resource));
 
-	if (!of_property_read_u32(node, "arm,msi-base-spi", &v2m->spi_start) &&
-	    !of_property_read_u32(node, "arm,msi-num-spis", &v2m->nr_spis)) {
-		pr_info("Overriding V2M MSI_TYPER (base:%u, num:%u)\n",
-			v2m->spi_start, v2m->nr_spis);
+	if (spi_start && nr_spis) {
+		v2m->spi_start = spi_start;
+		v2m->nr_spis = nr_spis;
 	} else {
 		u32 typer = readl_relaxed(v2m->base + V2M_MSI_TYPER);
 
@@ -261,43 +261,35 @@ static int __init gicv2m_init_one(struct device_node *node,
 		goto err_iounmap;
 	}
 
-	v2m->domain = irq_domain_add_tree(NULL, &gicv2m_domain_ops, v2m);
-	if (!v2m->domain) {
+	inner_domain = irq_domain_add_tree(NULL, &gicv2m_domain_ops, v2m);
+	if (!inner_domain) {
 		pr_err("Failed to create GICv2m domain\n");
 		ret = -ENOMEM;
 		goto err_free_bm;
 	}
 
-	v2m->domain->parent = parent;
-	v2m->mchip.of_node = node;
-	v2m->mchip.domain = pci_msi_create_irq_domain(node,
-						      &gicv2m_msi_domain_info,
-						      v2m->domain);
-	if (!v2m->mchip.domain) {
+	inner_domain->parent = parent;
+	inner_domain->name = gicv2m_domain_name;
+	gicv2m_msi_domain_info.acpi_msi_frame_id = msi_frame_id;
+	v2m->domain = pci_msi_create_irq_domain(node, &gicv2m_msi_domain_info,
+						inner_domain);
+	if (!v2m->domain) {
 		pr_err("Failed to create MSI domain\n");
 		ret = -ENOMEM;
 		goto err_free_domains;
 	}
 
-	spin_lock_init(&v2m->msi_cnt_lock);
+	v2m->domain->name = gicv2m_msi_domain_name;
 
-	ret = of_pci_msi_chip_add(&v2m->mchip);
-	if (ret) {
-		pr_err("Failed to add msi_chip.\n");
-		goto err_free_domains;
-	}
-
-	pr_info("Node %s: range[%#lx:%#lx], SPI[%d:%d]\n", node->name,
-		(unsigned long)v2m->res.start, (unsigned long)v2m->res.end,
-		v2m->spi_start, (v2m->spi_start + v2m->nr_spis));
+	spin_lock_init(&v2m->msi_cnt_lock);
 
 	return 0;
 
 err_free_domains:
-	if (v2m->mchip.domain)
-		irq_domain_remove(v2m->mchip.domain);
 	if (v2m->domain)
 		irq_domain_remove(v2m->domain);
+	if (inner_domain)
+		irq_domain_remove(inner_domain);
 err_free_bm:
 	kfree(v2m->bm);
 err_iounmap:
@@ -319,15 +311,99 @@ int __init gicv2m_of_init(struct device_node *node, struct irq_domain *parent)
 
 	for (child = of_find_matching_node(node, gicv2m_device_id); child;
 	     child = of_find_matching_node(child, gicv2m_device_id)) {
+		u32 spi_start = 0, nr_spis = 0;
+		struct resource res;
+
 		if (!of_find_property(child, "msi-controller", NULL))
 			continue;
 
-		ret = gicv2m_init_one(child, parent);
+		ret = of_address_to_resource(child, 0, &res);
+		if (ret) {
+			pr_err("Failed to allocate v2m resource.\n");
+			break;
+		}
+
+		if (!of_property_read_u32(child, "arm,msi-base-spi", &spi_start) &&
+		    !of_property_read_u32(child, "arm,msi-num-spis", &nr_spis))
+			pr_info("Overriding V2M MSI_TYPER (base:%u, num:%u)\n",
+				spi_start, nr_spis);
+
+		ret = gicv2m_init_one(parent, spi_start, nr_spis, &res, child, 0);
 		if (ret) {
 			of_node_put(node);
 			break;
 		}
+
+		pr_info("Node %s: range[%#lx:%#lx], SPI[%d:%d]\n", child->name,
+			(unsigned long)res.start, (unsigned long)res.end,
+			spi_start, (spi_start + nr_spis));
+	}
+
+	return ret;
+}
+
+#ifdef CONFIG_ACPI
+static struct acpi_madt_generic_msi_frame *msi_frame;
+
+static int __init
+gic_acpi_parse_madt_msi(struct acpi_subtable_header *header,
+			const unsigned long end)
+{
+	struct acpi_madt_generic_msi_frame *frame;
+
+	frame = (struct acpi_madt_generic_msi_frame *)header;
+	if (BAD_MADT_ENTRY(frame, end))
+		return -EINVAL;
+
+	if (msi_frame)
+		pr_warn("Only one GIC MSI frame is supported.\n");
+	else
+		msi_frame = frame;
+
+	return 0;
+}
+
+int __init gicv2m_acpi_init(struct acpi_table_header *table,
+			    struct irq_domain *parent)
+{
+	int ret = 0;
+	int count, i;
+	static struct acpi_madt_generic_msi_frame *cur;
+
+	count = acpi_parse_entries(ACPI_SIG_MADT, sizeof(struct acpi_table_madt),
+				   gic_acpi_parse_madt_msi, table,
+				   ACPI_MADT_TYPE_GENERIC_MSI_FRAME, 0);
+	if ((count <= 0) || !msi_frame) {
+		pr_debug("No valid ACPI GIC MSI frames exist\n");
+		return 0;
 	}
 
+	for (i = 0, cur = msi_frame; i < count; i++, cur++) {
+		struct resource res;
+		u32 spi_start = 0, nr_spis = 0;
+
+		res.start = cur->base_address;
+		res.end = cur->base_address + 0x1000 - 1; /* resource ends are inclusive */
+
+		if (cur->flags & ACPI_MADT_OVERRIDE_SPI_VALUES) {
+			spi_start = cur->spi_base;
+			nr_spis = cur->spi_count;
+
+			pr_info("ACPI overriding V2M MSI_TYPER (base:%u, num:%u)\n",
+				spi_start, nr_spis);
+		}
+
+		ret = gicv2m_init_one(parent, spi_start, nr_spis, &res, NULL,
+				      cur->msi_frame_id);
+		if (ret)
+			break;
+
+		pr_info("MSI frame ID %u: range[%#lx:%#lx], SPI[%d:%d]\n",
+			cur->msi_frame_id,
+			(unsigned long)res.start, (unsigned long)res.end,
+			spi_start, (spi_start + nr_spis));
+	}
 	return ret;
 }
+
+#endif /* CONFIG_ACPI */
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index d8996bd..c76884b 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -54,13 +54,12 @@ struct its_collection {
 
 /*
  * The ITS structure - contains most of the infrastructure, with the
- * msi_controller, the command queue, the collections, and the list of
- * devices writing to it.
+ * top-level MSI domain, the command queue, the collections, and the
+ * list of devices writing to it.
  */
 struct its_node {
 	raw_spinlock_t		lock;
 	struct list_head	entry;
-	struct msi_controller	msi_chip;
 	struct irq_domain	*domain;
 	void __iomem		*base;
 	unsigned long		phys_base;
@@ -875,7 +874,7 @@ retry_baser:
 
 		if (val != tmp) {
 			pr_err("ITS: %s: GITS_BASER%d doesn't stick: %lx %lx\n",
-			       its->msi_chip.of_node->full_name, i,
+			       its->domain->of_node->full_name, i,
 			       (unsigned long) val, (unsigned long) tmp);
 			err = -ENXIO;
 			goto out_free;
@@ -1260,6 +1259,7 @@ static int its_probe(struct device_node *node, struct irq_domain *parent)
 	struct resource res;
 	struct its_node *its;
 	void __iomem *its_base;
+	struct irq_domain *inner_domain = NULL;
 	u32 val;
 	u64 baser, tmp;
 	int err;
@@ -1296,7 +1296,6 @@ static int its_probe(struct device_node *node, struct irq_domain *parent)
 	INIT_LIST_HEAD(&its->its_device_list);
 	its->base = its_base;
 	its->phys_base = res.start;
-	its->msi_chip.of_node = node;
 	its->ite_size = ((readl_relaxed(its_base + GITS_TYPER) >> 4) & 0xf) + 1;
 
 	its->cmd_base = kzalloc(ITS_CMD_QUEUE_SZ, GFP_KERNEL);
@@ -1330,26 +1329,22 @@ static int its_probe(struct device_node *node, struct irq_domain *parent)
 		its->flags |= ITS_FLAGS_CMDQ_NEEDS_FLUSHING;
 	}
 
-	if (of_property_read_bool(its->msi_chip.of_node, "msi-controller")) {
-		its->domain = irq_domain_add_tree(NULL, &its_domain_ops, its);
-		if (!its->domain) {
+	if (of_property_read_bool(node, "msi-controller")) {
+		inner_domain = irq_domain_add_tree(NULL, &its_domain_ops, its);
+		if (!inner_domain) {
 			err = -ENOMEM;
 			goto out_free_tables;
 		}
 
-		its->domain->parent = parent;
+		inner_domain->parent = parent;
 
-		its->msi_chip.domain = pci_msi_create_irq_domain(node,
-								 &its_pci_msi_domain_info,
-								 its->domain);
-		if (!its->msi_chip.domain) {
+		its->domain = pci_msi_create_irq_domain(node,
+							&its_pci_msi_domain_info,
+							inner_domain);
+		if (!its->domain) {
 			err = -ENOMEM;
 			goto out_free_domains;
 		}
-
-		err = of_pci_msi_chip_add(&its->msi_chip);
-		if (err)
-			goto out_free_domains;
 	}
 
 	spin_lock(&its_lock);
@@ -1359,10 +1354,10 @@ static int its_probe(struct device_node *node, struct irq_domain *parent)
 	return 0;
 
 out_free_domains:
-	if (its->msi_chip.domain)
-		irq_domain_remove(its->msi_chip.domain);
 	if (its->domain)
 		irq_domain_remove(its->domain);
+	if (inner_domain)
+		irq_domain_remove(inner_domain);
 out_free_tables:
 	its_free_tables(its);
 out_free_cmd:
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 1c6dea2..7f073f0 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -524,9 +524,19 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	isb();
 }
 
+#ifdef CONFIG_ARM_PARKING_PROTOCOL
+static void gic_wakeup_parked_cpu(int cpu)
+{
+	gic_raise_softirq(cpumask_of(cpu), 0);
+}
+#endif
+
 static void gic_smp_init(void)
 {
 	set_smp_cross_call(gic_raise_softirq);
+#ifdef CONFIG_ARM_PARKING_PROTOCOL
+	set_smp_boot_wakeup_call(gic_wakeup_parked_cpu);
+#endif
 	register_cpu_notifier(&gic_cpu_notifier);
 }
 
diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 4634cf7..d2df23b 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -33,12 +33,14 @@
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
+#include <linux/acpi.h>
 #include <linux/irqdomain.h>
 #include <linux/interrupt.h>
 #include <linux/percpu.h>
 #include <linux/slab.h>
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqchip/arm-gic.h>
+#include <linux/irqchip/arm-gic-acpi.h>
 
 #include <asm/cputype.h>
 #include <asm/irq.h>
@@ -644,6 +646,13 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 
 	raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
 }
+
+#ifdef CONFIG_ARM_PARKING_PROTOCOL
+static void gic_wakeup_parked_cpu(int cpu)
+{
+	gic_raise_softirq(cpumask_of(cpu), GIC_DIST_SOFTINT_NSATT);
+}
+#endif
 #endif
 
 #ifdef CONFIG_BL_SWITCHER
@@ -984,7 +993,10 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
 		gic_irqs = 1020;
 	gic->gic_irqs = gic_irqs;
 
-	if (node) {		/* DT case */
+	if (!acpi_disabled) {	/* ACPI case */
+		gic->domain = irq_domain_add_linear(node, gic_irqs,
+					    &gic_irq_domain_hierarchy_ops, gic);
+	} else if (node) {	/* DT case */
 		const struct irq_domain_ops *ops = &gic_irq_domain_hierarchy_ops;
 
 		if (!of_property_read_u32(node, "arm,routable-irqs",
@@ -1028,6 +1040,9 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
 #ifdef CONFIG_SMP
 		set_smp_cross_call(gic_raise_softirq);
 		register_cpu_notifier(&gic_cpu_notifier);
+#ifdef CONFIG_ARM_PARKING_PROTOCOL
+		set_smp_boot_wakeup_call(gic_wakeup_parked_cpu);
+#endif
 #endif
 		set_handle_irq(gic_handle_irq);
 	}
@@ -1038,9 +1053,9 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
 	gic_pm_init(gic);
 }
 
-#ifdef CONFIG_OF
 static int gic_cnt __initdata;
 
+#ifdef CONFIG_OF
 static int __init
 gic_of_init(struct device_node *node, struct device_node *parent)
 {
@@ -1086,3 +1101,109 @@ IRQCHIP_DECLARE(msm_8660_qgic, "qcom,msm-8660-qgic", gic_of_init);
 IRQCHIP_DECLARE(msm_qgic2, "qcom,msm-qgic2", gic_of_init);
 
 #endif
+
+#ifdef CONFIG_ACPI
+static phys_addr_t dist_phy_base, cpu_phy_base;
+static int cpu_base_assigned;
+
+static int __init
+gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
+			const unsigned long end)
+{
+	struct acpi_madt_generic_interrupt *processor;
+	phys_addr_t gic_cpu_base;
+
+	processor = (struct acpi_madt_generic_interrupt *)header;
+
+	if (BAD_MADT_ENTRY(processor, end))
+		return -EINVAL;
+
+	/*
+	 * There is no support for non-banked GICv1/2 registers in the ACPI
+	 * spec, so all CPU interface addresses have to be the same.
+	 */
+	gic_cpu_base = processor->base_address;
+	if (cpu_base_assigned && gic_cpu_base != cpu_phy_base)
+		return -EINVAL;
+
+	cpu_phy_base = gic_cpu_base;
+	cpu_base_assigned = 1;
+	return 0;
+}
+
+static int __init
+gic_acpi_parse_madt_distributor(struct acpi_subtable_header *header,
+				const unsigned long end)
+{
+	struct acpi_madt_generic_distributor *dist;
+
+	dist = (struct acpi_madt_generic_distributor *)header;
+
+	if (BAD_MADT_ENTRY(dist, end))
+		return -EINVAL;
+
+	dist_phy_base = dist->base_address;
+	return 0;
+}
+
+int __init
+gic_v2_acpi_init(struct acpi_table_header *table, struct irq_domain **domain)
+{
+	void __iomem *cpu_base, *dist_base;
+	int count;
+
+	/* Collect CPU base addresses */
+	count = acpi_parse_entries(ACPI_SIG_MADT,
+				   sizeof(struct acpi_table_madt),
+				   gic_acpi_parse_madt_cpu, table,
+				   ACPI_MADT_TYPE_GENERIC_INTERRUPT, 0);
+	if (count <= 0) {
+		pr_err("No valid GICC entries exist\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Find the distributor base address. We expect a single distributor
+	 * entry, since the ACPI 5.1 spec supports neither multiple GIC
+	 * instances nor GIC cascades.
+	 */
+	count = acpi_parse_entries(ACPI_SIG_MADT,
+				   sizeof(struct acpi_table_madt),
+				   gic_acpi_parse_madt_distributor, table,
+				   ACPI_MADT_TYPE_GENERIC_DISTRIBUTOR, 0);
+	if (count <= 0) {
+		pr_err("No valid GICD entries exist\n");
+		return -EINVAL;
+	} else if (count > 1) {
+		pr_err("More than one GICD entry detected\n");
+		return -EINVAL;
+	}
+
+	cpu_base = ioremap(cpu_phy_base, ACPI_GIC_CPU_IF_MEM_SIZE);
+	if (!cpu_base) {
+		pr_err("Unable to map GICC registers\n");
+		return -ENOMEM;
+	}
+
+	dist_base = ioremap(dist_phy_base, ACPI_GICV2_DIST_MEM_SIZE);
+	if (!dist_base) {
+		pr_err("Unable to map GICD registers\n");
+		iounmap(cpu_base);
+		return -ENOMEM;
+	}
+
+	gic_init_bases(gic_cnt, -1, dist_base, cpu_base, 0, NULL);
+	*domain = gic_data[gic_cnt].domain;
+
+	if (!*domain) {
+		pr_err("Unable to create domain\n");
+		return -EFAULT;
+	}
+
+	if (IS_ENABLED(CONFIG_ARM_GIC_V2M))
+		gicv2m_acpi_init(table, gic_data[gic_cnt].domain);
+
+	gic_cnt++;
+	return 0;
+}
+#endif
diff --git a/drivers/irqchip/irqchip.c b/drivers/irqchip/irqchip.c
index 0fe2f71..5855240 100644
--- a/drivers/irqchip/irqchip.c
+++ b/drivers/irqchip/irqchip.c
@@ -8,6 +8,7 @@
  * warranty of any kind, whether express or implied.
  */
 
+#include <linux/acpi.h>
 #include <linux/init.h>
 #include <linux/of_irq.h>
 #include <linux/irqchip.h>
@@ -26,4 +27,6 @@ extern struct of_device_id __irqchip_of_table[];
 void __init irqchip_init(void)
 {
 	of_irq_init(__irqchip_of_table);
+
+	acpi_irq_init();
 }
diff --git a/drivers/net/ethernet/amd/Makefile b/drivers/net/ethernet/amd/Makefile
index a38a2dc..bf0cf2f 100644
--- a/drivers/net/ethernet/amd/Makefile
+++ b/drivers/net/ethernet/amd/Makefile
@@ -18,3 +18,4 @@ obj-$(CONFIG_PCNET32) += pcnet32.o
 obj-$(CONFIG_SUN3LANCE) += sun3lance.o
 obj-$(CONFIG_SUNLANCE) += sunlance.o
 obj-$(CONFIG_AMD_XGBE) += xgbe/
+obj-$(CONFIG_AMD_XGBE) += xgbe-a0/
diff --git a/drivers/net/ethernet/amd/xgbe-a0/Makefile b/drivers/net/ethernet/amd/xgbe-a0/Makefile
new file mode 100644
index 0000000..561116f
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/Makefile
@@ -0,0 +1,8 @@
+obj-$(CONFIG_AMD_XGBE) += amd-xgbe-a0.o
+
+amd-xgbe-a0-objs := xgbe-main.o xgbe-drv.o xgbe-dev.o \
+		 xgbe-desc.o xgbe-ethtool.o xgbe-mdio.o \
+		 xgbe-ptp.o
+
+amd-xgbe-a0-$(CONFIG_AMD_XGBE_DCB) += xgbe-dcb.o
+amd-xgbe-a0-$(CONFIG_DEBUG_FS) += xgbe-debugfs.o
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe-common.h b/drivers/net/ethernet/amd/xgbe-a0/xgbe-common.h
new file mode 100644
index 0000000..75b08c6
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe-common.h
@@ -0,0 +1,1142 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __XGBE_COMMON_H__
+#define __XGBE_COMMON_H__
+
+/* DMA register offsets */
+#define DMA_MR				0x3000
+#define DMA_SBMR			0x3004
+#define DMA_ISR				0x3008
+#define DMA_AXIARCR			0x3010
+#define DMA_AXIAWCR			0x3018
+#define DMA_DSR0			0x3020
+#define DMA_DSR1			0x3024
+
+/* DMA register entry bit positions and sizes */
+#define DMA_AXIARCR_DRC_INDEX		0
+#define DMA_AXIARCR_DRC_WIDTH		4
+#define DMA_AXIARCR_DRD_INDEX		4
+#define DMA_AXIARCR_DRD_WIDTH		2
+#define DMA_AXIARCR_TEC_INDEX		8
+#define DMA_AXIARCR_TEC_WIDTH		4
+#define DMA_AXIARCR_TED_INDEX		12
+#define DMA_AXIARCR_TED_WIDTH		2
+#define DMA_AXIARCR_THC_INDEX		16
+#define DMA_AXIARCR_THC_WIDTH		4
+#define DMA_AXIARCR_THD_INDEX		20
+#define DMA_AXIARCR_THD_WIDTH		2
+#define DMA_AXIAWCR_DWC_INDEX		0
+#define DMA_AXIAWCR_DWC_WIDTH		4
+#define DMA_AXIAWCR_DWD_INDEX		4
+#define DMA_AXIAWCR_DWD_WIDTH		2
+#define DMA_AXIAWCR_RPC_INDEX		8
+#define DMA_AXIAWCR_RPC_WIDTH		4
+#define DMA_AXIAWCR_RPD_INDEX		12
+#define DMA_AXIAWCR_RPD_WIDTH		2
+#define DMA_AXIAWCR_RHC_INDEX		16
+#define DMA_AXIAWCR_RHC_WIDTH		4
+#define DMA_AXIAWCR_RHD_INDEX		20
+#define DMA_AXIAWCR_RHD_WIDTH		2
+#define DMA_AXIAWCR_TDC_INDEX		24
+#define DMA_AXIAWCR_TDC_WIDTH		4
+#define DMA_AXIAWCR_TDD_INDEX		28
+#define DMA_AXIAWCR_TDD_WIDTH		2
+#define DMA_ISR_MACIS_INDEX		17
+#define DMA_ISR_MACIS_WIDTH		1
+#define DMA_ISR_MTLIS_INDEX		16
+#define DMA_ISR_MTLIS_WIDTH		1
+#define DMA_MR_SWR_INDEX		0
+#define DMA_MR_SWR_WIDTH		1
+#define DMA_SBMR_EAME_INDEX		11
+#define DMA_SBMR_EAME_WIDTH		1
+#define DMA_SBMR_BLEN_256_INDEX		7
+#define DMA_SBMR_BLEN_256_WIDTH		1
+#define DMA_SBMR_UNDEF_INDEX		0
+#define DMA_SBMR_UNDEF_WIDTH		1
+
+/* DMA register values */
+#define DMA_DSR_RPS_WIDTH		4
+#define DMA_DSR_TPS_WIDTH		4
+#define DMA_DSR_Q_WIDTH			(DMA_DSR_RPS_WIDTH + DMA_DSR_TPS_WIDTH)
+#define DMA_DSR0_RPS_START		8
+#define DMA_DSR0_TPS_START		12
+#define DMA_DSRX_FIRST_QUEUE		3
+#define DMA_DSRX_INC			4
+#define DMA_DSRX_QPR			4
+#define DMA_DSRX_RPS_START		0
+#define DMA_DSRX_TPS_START		4
+#define DMA_TPS_STOPPED			0x00
+#define DMA_TPS_SUSPENDED		0x06
+
+/* DMA channel register offsets
+ *   Multiple channels can be active.  The first channel has registers
+ *   that begin at 0x3100.  Each subsequent channel has registers that
+ *   are accessed using an offset of 0x80 from the previous channel.
+ */
+#define DMA_CH_BASE			0x3100
+#define DMA_CH_INC			0x80
+
+#define DMA_CH_CR			0x00
+#define DMA_CH_TCR			0x04
+#define DMA_CH_RCR			0x08
+#define DMA_CH_TDLR_HI			0x10
+#define DMA_CH_TDLR_LO			0x14
+#define DMA_CH_RDLR_HI			0x18
+#define DMA_CH_RDLR_LO			0x1c
+#define DMA_CH_TDTR_LO			0x24
+#define DMA_CH_RDTR_LO			0x2c
+#define DMA_CH_TDRLR			0x30
+#define DMA_CH_RDRLR			0x34
+#define DMA_CH_IER			0x38
+#define DMA_CH_RIWT			0x3c
+#define DMA_CH_CATDR_LO			0x44
+#define DMA_CH_CARDR_LO			0x4c
+#define DMA_CH_CATBR_HI			0x50
+#define DMA_CH_CATBR_LO			0x54
+#define DMA_CH_CARBR_HI			0x58
+#define DMA_CH_CARBR_LO			0x5c
+#define DMA_CH_SR			0x60
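+
+/* For example, channel 2's DMA_CH_SR register lives at
+ * DMA_CH_BASE + 2 * DMA_CH_INC + DMA_CH_SR = 0x3100 + 0x100 + 0x60 = 0x3260.
+ */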
+
+/* DMA channel register entry bit positions and sizes */
+#define DMA_CH_CR_PBLX8_INDEX		16
+#define DMA_CH_CR_PBLX8_WIDTH		1
+#define DMA_CH_CR_SPH_INDEX		24
+#define DMA_CH_CR_SPH_WIDTH		1
+#define DMA_CH_IER_AIE_INDEX		15
+#define DMA_CH_IER_AIE_WIDTH		1
+#define DMA_CH_IER_FBEE_INDEX		12
+#define DMA_CH_IER_FBEE_WIDTH		1
+#define DMA_CH_IER_NIE_INDEX		16
+#define DMA_CH_IER_NIE_WIDTH		1
+#define DMA_CH_IER_RBUE_INDEX		7
+#define DMA_CH_IER_RBUE_WIDTH		1
+#define DMA_CH_IER_RIE_INDEX		6
+#define DMA_CH_IER_RIE_WIDTH		1
+#define DMA_CH_IER_RSE_INDEX		8
+#define DMA_CH_IER_RSE_WIDTH		1
+#define DMA_CH_IER_TBUE_INDEX		2
+#define DMA_CH_IER_TBUE_WIDTH		1
+#define DMA_CH_IER_TIE_INDEX		0
+#define DMA_CH_IER_TIE_WIDTH		1
+#define DMA_CH_IER_TXSE_INDEX		1
+#define DMA_CH_IER_TXSE_WIDTH		1
+#define DMA_CH_RCR_PBL_INDEX		16
+#define DMA_CH_RCR_PBL_WIDTH		6
+#define DMA_CH_RCR_RBSZ_INDEX		1
+#define DMA_CH_RCR_RBSZ_WIDTH		14
+#define DMA_CH_RCR_SR_INDEX		0
+#define DMA_CH_RCR_SR_WIDTH		1
+#define DMA_CH_RIWT_RWT_INDEX		0
+#define DMA_CH_RIWT_RWT_WIDTH		8
+#define DMA_CH_SR_FBE_INDEX		12
+#define DMA_CH_SR_FBE_WIDTH		1
+#define DMA_CH_SR_RBU_INDEX		7
+#define DMA_CH_SR_RBU_WIDTH		1
+#define DMA_CH_SR_RI_INDEX		6
+#define DMA_CH_SR_RI_WIDTH		1
+#define DMA_CH_SR_RPS_INDEX		8
+#define DMA_CH_SR_RPS_WIDTH		1
+#define DMA_CH_SR_TBU_INDEX		2
+#define DMA_CH_SR_TBU_WIDTH		1
+#define DMA_CH_SR_TI_INDEX		0
+#define DMA_CH_SR_TI_WIDTH		1
+#define DMA_CH_SR_TPS_INDEX		1
+#define DMA_CH_SR_TPS_WIDTH		1
+#define DMA_CH_TCR_OSP_INDEX		4
+#define DMA_CH_TCR_OSP_WIDTH		1
+#define DMA_CH_TCR_PBL_INDEX		16
+#define DMA_CH_TCR_PBL_WIDTH		6
+#define DMA_CH_TCR_ST_INDEX		0
+#define DMA_CH_TCR_ST_WIDTH		1
+#define DMA_CH_TCR_TSE_INDEX		12
+#define DMA_CH_TCR_TSE_WIDTH		1
+
+/* DMA channel register values */
+#define DMA_OSP_DISABLE			0x00
+#define DMA_OSP_ENABLE			0x01
+#define DMA_PBL_1			1
+#define DMA_PBL_2			2
+#define DMA_PBL_4			4
+#define DMA_PBL_8			8
+#define DMA_PBL_16			16
+#define DMA_PBL_32			32
+#define DMA_PBL_64			64      /* 8 x 8 */
+#define DMA_PBL_128			128     /* 8 x 16 */
+#define DMA_PBL_256			256     /* 8 x 32 */
+#define DMA_PBL_X8_DISABLE		0x00
+#define DMA_PBL_X8_ENABLE		0x01
+
+/* MAC register offsets */
+#define MAC_TCR				0x0000
+#define MAC_RCR				0x0004
+#define MAC_PFR				0x0008
+#define MAC_WTR				0x000c
+#define MAC_HTR0			0x0010
+#define MAC_VLANTR			0x0050
+#define MAC_VLANHTR			0x0058
+#define MAC_VLANIR			0x0060
+#define MAC_IVLANIR			0x0064
+#define MAC_RETMR			0x006c
+#define MAC_Q0TFCR			0x0070
+#define MAC_RFCR			0x0090
+#define MAC_RQC0R			0x00a0
+#define MAC_RQC1R			0x00a4
+#define MAC_RQC2R			0x00a8
+#define MAC_RQC3R			0x00ac
+#define MAC_ISR				0x00b0
+#define MAC_IER				0x00b4
+#define MAC_RTSR			0x00b8
+#define MAC_PMTCSR			0x00c0
+#define MAC_RWKPFR			0x00c4
+#define MAC_LPICSR			0x00d0
+#define MAC_LPITCR			0x00d4
+#define MAC_VR				0x0110
+#define MAC_DR				0x0114
+#define MAC_HWF0R			0x011c
+#define MAC_HWF1R			0x0120
+#define MAC_HWF2R			0x0124
+#define MAC_GPIOCR			0x0278
+#define MAC_GPIOSR			0x027c
+#define MAC_MACA0HR			0x0300
+#define MAC_MACA0LR			0x0304
+#define MAC_MACA1HR			0x0308
+#define MAC_MACA1LR			0x030c
+#define MAC_RSSCR			0x0c80
+#define MAC_RSSAR			0x0c88
+#define MAC_RSSDR			0x0c8c
+#define MAC_TSCR			0x0d00
+#define MAC_SSIR			0x0d04
+#define MAC_STSR			0x0d08
+#define MAC_STNR			0x0d0c
+#define MAC_STSUR			0x0d10
+#define MAC_STNUR			0x0d14
+#define MAC_TSAR			0x0d18
+#define MAC_TSSR			0x0d20
+#define MAC_TXSNR			0x0d30
+#define MAC_TXSSR			0x0d34
+
+#define MAC_QTFCR_INC			4
+#define MAC_MACA_INC			4
+#define MAC_HTR_INC			4
+
+#define MAC_RQC2_INC			4
+#define MAC_RQC2_Q_PER_REG		4
+
+/* MAC register entry bit positions and sizes */
+#define MAC_HWF0R_ADDMACADRSEL_INDEX	18
+#define MAC_HWF0R_ADDMACADRSEL_WIDTH	5
+#define MAC_HWF0R_ARPOFFSEL_INDEX	9
+#define MAC_HWF0R_ARPOFFSEL_WIDTH	1
+#define MAC_HWF0R_EEESEL_INDEX		13
+#define MAC_HWF0R_EEESEL_WIDTH		1
+#define MAC_HWF0R_GMIISEL_INDEX		1
+#define MAC_HWF0R_GMIISEL_WIDTH		1
+#define MAC_HWF0R_MGKSEL_INDEX		7
+#define MAC_HWF0R_MGKSEL_WIDTH		1
+#define MAC_HWF0R_MMCSEL_INDEX		8
+#define MAC_HWF0R_MMCSEL_WIDTH		1
+#define MAC_HWF0R_RWKSEL_INDEX		6
+#define MAC_HWF0R_RWKSEL_WIDTH		1
+#define MAC_HWF0R_RXCOESEL_INDEX	16
+#define MAC_HWF0R_RXCOESEL_WIDTH	1
+#define MAC_HWF0R_SAVLANINS_INDEX	27
+#define MAC_HWF0R_SAVLANINS_WIDTH	1
+#define MAC_HWF0R_SMASEL_INDEX		5
+#define MAC_HWF0R_SMASEL_WIDTH		1
+#define MAC_HWF0R_TSSEL_INDEX		12
+#define MAC_HWF0R_TSSEL_WIDTH		1
+#define MAC_HWF0R_TSSTSSEL_INDEX	25
+#define MAC_HWF0R_TSSTSSEL_WIDTH	2
+#define MAC_HWF0R_TXCOESEL_INDEX	14
+#define MAC_HWF0R_TXCOESEL_WIDTH	1
+#define MAC_HWF0R_VLHASH_INDEX		4
+#define MAC_HWF0R_VLHASH_WIDTH		1
+#define MAC_HWF1R_ADVTHWORD_INDEX	13
+#define MAC_HWF1R_ADVTHWORD_WIDTH	1
+#define MAC_HWF1R_DBGMEMA_INDEX		19
+#define MAC_HWF1R_DBGMEMA_WIDTH		1
+#define MAC_HWF1R_DCBEN_INDEX		16
+#define MAC_HWF1R_DCBEN_WIDTH		1
+#define MAC_HWF1R_HASHTBLSZ_INDEX	24
+#define MAC_HWF1R_HASHTBLSZ_WIDTH	3
+#define MAC_HWF1R_L3L4FNUM_INDEX	27
+#define MAC_HWF1R_L3L4FNUM_WIDTH	4
+#define MAC_HWF1R_NUMTC_INDEX		21
+#define MAC_HWF1R_NUMTC_WIDTH		3
+#define MAC_HWF1R_RSSEN_INDEX		20
+#define MAC_HWF1R_RSSEN_WIDTH		1
+#define MAC_HWF1R_RXFIFOSIZE_INDEX	0
+#define MAC_HWF1R_RXFIFOSIZE_WIDTH	5
+#define MAC_HWF1R_SPHEN_INDEX		17
+#define MAC_HWF1R_SPHEN_WIDTH		1
+#define MAC_HWF1R_TSOEN_INDEX		18
+#define MAC_HWF1R_TSOEN_WIDTH		1
+#define MAC_HWF1R_TXFIFOSIZE_INDEX	6
+#define MAC_HWF1R_TXFIFOSIZE_WIDTH	5
+#define MAC_HWF2R_AUXSNAPNUM_INDEX	28
+#define MAC_HWF2R_AUXSNAPNUM_WIDTH	3
+#define MAC_HWF2R_PPSOUTNUM_INDEX	24
+#define MAC_HWF2R_PPSOUTNUM_WIDTH	3
+#define MAC_HWF2R_RXCHCNT_INDEX		12
+#define MAC_HWF2R_RXCHCNT_WIDTH		4
+#define MAC_HWF2R_RXQCNT_INDEX		0
+#define MAC_HWF2R_RXQCNT_WIDTH		4
+#define MAC_HWF2R_TXCHCNT_INDEX		18
+#define MAC_HWF2R_TXCHCNT_WIDTH		4
+#define MAC_HWF2R_TXQCNT_INDEX		6
+#define MAC_HWF2R_TXQCNT_WIDTH		4
+#define MAC_IER_TSIE_INDEX		12
+#define MAC_IER_TSIE_WIDTH		1
+#define MAC_ISR_MMCRXIS_INDEX		9
+#define MAC_ISR_MMCRXIS_WIDTH		1
+#define MAC_ISR_MMCTXIS_INDEX		10
+#define MAC_ISR_MMCTXIS_WIDTH		1
+#define MAC_ISR_PMTIS_INDEX		4
+#define MAC_ISR_PMTIS_WIDTH		1
+#define MAC_ISR_TSIS_INDEX		12
+#define MAC_ISR_TSIS_WIDTH		1
+#define MAC_MACA1HR_AE_INDEX		31
+#define MAC_MACA1HR_AE_WIDTH		1
+#define MAC_PFR_HMC_INDEX		2
+#define MAC_PFR_HMC_WIDTH		1
+#define MAC_PFR_HPF_INDEX		10
+#define MAC_PFR_HPF_WIDTH		1
+#define MAC_PFR_HUC_INDEX		1
+#define MAC_PFR_HUC_WIDTH		1
+#define MAC_PFR_PM_INDEX		4
+#define MAC_PFR_PM_WIDTH		1
+#define MAC_PFR_PR_INDEX		0
+#define MAC_PFR_PR_WIDTH		1
+#define MAC_PFR_VTFE_INDEX		16
+#define MAC_PFR_VTFE_WIDTH		1
+#define MAC_PMTCSR_MGKPKTEN_INDEX	1
+#define MAC_PMTCSR_MGKPKTEN_WIDTH	1
+#define MAC_PMTCSR_PWRDWN_INDEX		0
+#define MAC_PMTCSR_PWRDWN_WIDTH		1
+#define MAC_PMTCSR_RWKFILTRST_INDEX	31
+#define MAC_PMTCSR_RWKFILTRST_WIDTH	1
+#define MAC_PMTCSR_RWKPKTEN_INDEX	2
+#define MAC_PMTCSR_RWKPKTEN_WIDTH	1
+#define MAC_Q0TFCR_PT_INDEX		16
+#define MAC_Q0TFCR_PT_WIDTH		16
+#define MAC_Q0TFCR_TFE_INDEX		1
+#define MAC_Q0TFCR_TFE_WIDTH		1
+#define MAC_RCR_ACS_INDEX		1
+#define MAC_RCR_ACS_WIDTH		1
+#define MAC_RCR_CST_INDEX		2
+#define MAC_RCR_CST_WIDTH		1
+#define MAC_RCR_DCRCC_INDEX		3
+#define MAC_RCR_DCRCC_WIDTH		1
+#define MAC_RCR_HDSMS_INDEX		12
+#define MAC_RCR_HDSMS_WIDTH		3
+#define MAC_RCR_IPC_INDEX		9
+#define MAC_RCR_IPC_WIDTH		1
+#define MAC_RCR_JE_INDEX		8
+#define MAC_RCR_JE_WIDTH		1
+#define MAC_RCR_LM_INDEX		10
+#define MAC_RCR_LM_WIDTH		1
+#define MAC_RCR_RE_INDEX		0
+#define MAC_RCR_RE_WIDTH		1
+#define MAC_RFCR_PFCE_INDEX		8
+#define MAC_RFCR_PFCE_WIDTH		1
+#define MAC_RFCR_RFE_INDEX		0
+#define MAC_RFCR_RFE_WIDTH		1
+#define MAC_RFCR_UP_INDEX		1
+#define MAC_RFCR_UP_WIDTH		1
+#define MAC_RQC0R_RXQ0EN_INDEX		0
+#define MAC_RQC0R_RXQ0EN_WIDTH		2
+#define MAC_RSSAR_ADDRT_INDEX		2
+#define MAC_RSSAR_ADDRT_WIDTH		1
+#define MAC_RSSAR_CT_INDEX		1
+#define MAC_RSSAR_CT_WIDTH		1
+#define MAC_RSSAR_OB_INDEX		0
+#define MAC_RSSAR_OB_WIDTH		1
+#define MAC_RSSAR_RSSIA_INDEX		8
+#define MAC_RSSAR_RSSIA_WIDTH		8
+#define MAC_RSSCR_IP2TE_INDEX		1
+#define MAC_RSSCR_IP2TE_WIDTH		1
+#define MAC_RSSCR_RSSE_INDEX		0
+#define MAC_RSSCR_RSSE_WIDTH		1
+#define MAC_RSSCR_TCP4TE_INDEX		2
+#define MAC_RSSCR_TCP4TE_WIDTH		1
+#define MAC_RSSCR_UDP4TE_INDEX		3
+#define MAC_RSSCR_UDP4TE_WIDTH		1
+#define MAC_RSSDR_DMCH_INDEX		0
+#define MAC_RSSDR_DMCH_WIDTH		4
+#define MAC_SSIR_SNSINC_INDEX		8
+#define MAC_SSIR_SNSINC_WIDTH		8
+#define MAC_SSIR_SSINC_INDEX		16
+#define MAC_SSIR_SSINC_WIDTH		8
+#define MAC_TCR_SS_INDEX		29
+#define MAC_TCR_SS_WIDTH		2
+#define MAC_TCR_TE_INDEX		0
+#define MAC_TCR_TE_WIDTH		1
+#define MAC_TSCR_AV8021ASMEN_INDEX	28
+#define MAC_TSCR_AV8021ASMEN_WIDTH	1
+#define MAC_TSCR_SNAPTYPSEL_INDEX	16
+#define MAC_TSCR_SNAPTYPSEL_WIDTH	2
+#define MAC_TSCR_TSADDREG_INDEX		5
+#define MAC_TSCR_TSADDREG_WIDTH		1
+#define MAC_TSCR_TSCFUPDT_INDEX		1
+#define MAC_TSCR_TSCFUPDT_WIDTH		1
+#define MAC_TSCR_TSCTRLSSR_INDEX	9
+#define MAC_TSCR_TSCTRLSSR_WIDTH	1
+#define MAC_TSCR_TSENA_INDEX		0
+#define MAC_TSCR_TSENA_WIDTH		1
+#define MAC_TSCR_TSENALL_INDEX		8
+#define MAC_TSCR_TSENALL_WIDTH		1
+#define MAC_TSCR_TSEVNTENA_INDEX	14
+#define MAC_TSCR_TSEVNTENA_WIDTH	1
+#define MAC_TSCR_TSINIT_INDEX		2
+#define MAC_TSCR_TSINIT_WIDTH		1
+#define MAC_TSCR_TSIPENA_INDEX		11
+#define MAC_TSCR_TSIPENA_WIDTH		1
+#define MAC_TSCR_TSIPV4ENA_INDEX	13
+#define MAC_TSCR_TSIPV4ENA_WIDTH	1
+#define MAC_TSCR_TSIPV6ENA_INDEX	12
+#define MAC_TSCR_TSIPV6ENA_WIDTH	1
+#define MAC_TSCR_TSMSTRENA_INDEX	15
+#define MAC_TSCR_TSMSTRENA_WIDTH	1
+#define MAC_TSCR_TSVER2ENA_INDEX	10
+#define MAC_TSCR_TSVER2ENA_WIDTH	1
+#define MAC_TSCR_TXTSSTSM_INDEX		24
+#define MAC_TSCR_TXTSSTSM_WIDTH		1
+#define MAC_TSSR_TXTSC_INDEX		15
+#define MAC_TSSR_TXTSC_WIDTH		1
+#define MAC_TXSNR_TXTSSTSMIS_INDEX	31
+#define MAC_TXSNR_TXTSSTSMIS_WIDTH	1
+#define MAC_VLANHTR_VLHT_INDEX		0
+#define MAC_VLANHTR_VLHT_WIDTH		16
+#define MAC_VLANIR_VLTI_INDEX		20
+#define MAC_VLANIR_VLTI_WIDTH		1
+#define MAC_VLANIR_CSVL_INDEX		19
+#define MAC_VLANIR_CSVL_WIDTH		1
+#define MAC_VLANTR_DOVLTC_INDEX		20
+#define MAC_VLANTR_DOVLTC_WIDTH		1
+#define MAC_VLANTR_ERSVLM_INDEX		19
+#define MAC_VLANTR_ERSVLM_WIDTH		1
+#define MAC_VLANTR_ESVL_INDEX		18
+#define MAC_VLANTR_ESVL_WIDTH		1
+#define MAC_VLANTR_ETV_INDEX		16
+#define MAC_VLANTR_ETV_WIDTH		1
+#define MAC_VLANTR_EVLS_INDEX		21
+#define MAC_VLANTR_EVLS_WIDTH		2
+#define MAC_VLANTR_EVLRXS_INDEX		24
+#define MAC_VLANTR_EVLRXS_WIDTH		1
+#define MAC_VLANTR_VL_INDEX		0
+#define MAC_VLANTR_VL_WIDTH		16
+#define MAC_VLANTR_VTHM_INDEX		25
+#define MAC_VLANTR_VTHM_WIDTH		1
+#define MAC_VLANTR_VTIM_INDEX		17
+#define MAC_VLANTR_VTIM_WIDTH		1
+#define MAC_VR_DEVID_INDEX		8
+#define MAC_VR_DEVID_WIDTH		8
+#define MAC_VR_SNPSVER_INDEX		0
+#define MAC_VR_SNPSVER_WIDTH		8
+#define MAC_VR_USERVER_INDEX		16
+#define MAC_VR_USERVER_WIDTH		8
+
+/* MMC register offsets */
+#define MMC_CR				0x0800
+#define MMC_RISR			0x0804
+#define MMC_TISR			0x0808
+#define MMC_RIER			0x080c
+#define MMC_TIER			0x0810
+#define MMC_TXOCTETCOUNT_GB_LO		0x0814
+#define MMC_TXOCTETCOUNT_GB_HI		0x0818
+#define MMC_TXFRAMECOUNT_GB_LO		0x081c
+#define MMC_TXFRAMECOUNT_GB_HI		0x0820
+#define MMC_TXBROADCASTFRAMES_G_LO	0x0824
+#define MMC_TXBROADCASTFRAMES_G_HI	0x0828
+#define MMC_TXMULTICASTFRAMES_G_LO	0x082c
+#define MMC_TXMULTICASTFRAMES_G_HI	0x0830
+#define MMC_TX64OCTETS_GB_LO		0x0834
+#define MMC_TX64OCTETS_GB_HI		0x0838
+#define MMC_TX65TO127OCTETS_GB_LO	0x083c
+#define MMC_TX65TO127OCTETS_GB_HI	0x0840
+#define MMC_TX128TO255OCTETS_GB_LO	0x0844
+#define MMC_TX128TO255OCTETS_GB_HI	0x0848
+#define MMC_TX256TO511OCTETS_GB_LO	0x084c
+#define MMC_TX256TO511OCTETS_GB_HI	0x0850
+#define MMC_TX512TO1023OCTETS_GB_LO	0x0854
+#define MMC_TX512TO1023OCTETS_GB_HI	0x0858
+#define MMC_TX1024TOMAXOCTETS_GB_LO	0x085c
+#define MMC_TX1024TOMAXOCTETS_GB_HI	0x0860
+#define MMC_TXUNICASTFRAMES_GB_LO	0x0864
+#define MMC_TXUNICASTFRAMES_GB_HI	0x0868
+#define MMC_TXMULTICASTFRAMES_GB_LO	0x086c
+#define MMC_TXMULTICASTFRAMES_GB_HI	0x0870
+#define MMC_TXBROADCASTFRAMES_GB_LO	0x0874
+#define MMC_TXBROADCASTFRAMES_GB_HI	0x0878
+#define MMC_TXUNDERFLOWERROR_LO		0x087c
+#define MMC_TXUNDERFLOWERROR_HI		0x0880
+#define MMC_TXOCTETCOUNT_G_LO		0x0884
+#define MMC_TXOCTETCOUNT_G_HI		0x0888
+#define MMC_TXFRAMECOUNT_G_LO		0x088c
+#define MMC_TXFRAMECOUNT_G_HI		0x0890
+#define MMC_TXPAUSEFRAMES_LO		0x0894
+#define MMC_TXPAUSEFRAMES_HI		0x0898
+#define MMC_TXVLANFRAMES_G_LO		0x089c
+#define MMC_TXVLANFRAMES_G_HI		0x08a0
+#define MMC_RXFRAMECOUNT_GB_LO		0x0900
+#define MMC_RXFRAMECOUNT_GB_HI		0x0904
+#define MMC_RXOCTETCOUNT_GB_LO		0x0908
+#define MMC_RXOCTETCOUNT_GB_HI		0x090c
+#define MMC_RXOCTETCOUNT_G_LO		0x0910
+#define MMC_RXOCTETCOUNT_G_HI		0x0914
+#define MMC_RXBROADCASTFRAMES_G_LO	0x0918
+#define MMC_RXBROADCASTFRAMES_G_HI	0x091c
+#define MMC_RXMULTICASTFRAMES_G_LO	0x0920
+#define MMC_RXMULTICASTFRAMES_G_HI	0x0924
+#define MMC_RXCRCERROR_LO		0x0928
+#define MMC_RXCRCERROR_HI		0x092c
+#define MMC_RXRUNTERROR			0x0930
+#define MMC_RXJABBERERROR		0x0934
+#define MMC_RXUNDERSIZE_G		0x0938
+#define MMC_RXOVERSIZE_G		0x093c
+#define MMC_RX64OCTETS_GB_LO		0x0940
+#define MMC_RX64OCTETS_GB_HI		0x0944
+#define MMC_RX65TO127OCTETS_GB_LO	0x0948
+#define MMC_RX65TO127OCTETS_GB_HI	0x094c
+#define MMC_RX128TO255OCTETS_GB_LO	0x0950
+#define MMC_RX128TO255OCTETS_GB_HI	0x0954
+#define MMC_RX256TO511OCTETS_GB_LO	0x0958
+#define MMC_RX256TO511OCTETS_GB_HI	0x095c
+#define MMC_RX512TO1023OCTETS_GB_LO	0x0960
+#define MMC_RX512TO1023OCTETS_GB_HI	0x0964
+#define MMC_RX1024TOMAXOCTETS_GB_LO	0x0968
+#define MMC_RX1024TOMAXOCTETS_GB_HI	0x096c
+#define MMC_RXUNICASTFRAMES_G_LO	0x0970
+#define MMC_RXUNICASTFRAMES_G_HI	0x0974
+#define MMC_RXLENGTHERROR_LO		0x0978
+#define MMC_RXLENGTHERROR_HI		0x097c
+#define MMC_RXOUTOFRANGETYPE_LO		0x0980
+#define MMC_RXOUTOFRANGETYPE_HI		0x0984
+#define MMC_RXPAUSEFRAMES_LO		0x0988
+#define MMC_RXPAUSEFRAMES_HI		0x098c
+#define MMC_RXFIFOOVERFLOW_LO		0x0990
+#define MMC_RXFIFOOVERFLOW_HI		0x0994
+#define MMC_RXVLANFRAMES_GB_LO		0x0998
+#define MMC_RXVLANFRAMES_GB_HI		0x099c
+#define MMC_RXWATCHDOGERROR		0x09a0
+
+/* MMC register entry bit positions and sizes */
+#define MMC_CR_CR_INDEX				0
+#define MMC_CR_CR_WIDTH				1
+#define MMC_CR_CSR_INDEX			1
+#define MMC_CR_CSR_WIDTH			1
+#define MMC_CR_ROR_INDEX			2
+#define MMC_CR_ROR_WIDTH			1
+#define MMC_CR_MCF_INDEX			3
+#define MMC_CR_MCF_WIDTH			1
+#define MMC_CR_MCT_INDEX			4
+#define MMC_CR_MCT_WIDTH			2
+#define MMC_RIER_ALL_INTERRUPTS_INDEX		0
+#define MMC_RIER_ALL_INTERRUPTS_WIDTH		23
+#define MMC_RISR_RXFRAMECOUNT_GB_INDEX		0
+#define MMC_RISR_RXFRAMECOUNT_GB_WIDTH		1
+#define MMC_RISR_RXOCTETCOUNT_GB_INDEX		1
+#define MMC_RISR_RXOCTETCOUNT_GB_WIDTH		1
+#define MMC_RISR_RXOCTETCOUNT_G_INDEX		2
+#define MMC_RISR_RXOCTETCOUNT_G_WIDTH		1
+#define MMC_RISR_RXBROADCASTFRAMES_G_INDEX	3
+#define MMC_RISR_RXBROADCASTFRAMES_G_WIDTH	1
+#define MMC_RISR_RXMULTICASTFRAMES_G_INDEX	4
+#define MMC_RISR_RXMULTICASTFRAMES_G_WIDTH	1
+#define MMC_RISR_RXCRCERROR_INDEX		5
+#define MMC_RISR_RXCRCERROR_WIDTH		1
+#define MMC_RISR_RXRUNTERROR_INDEX		6
+#define MMC_RISR_RXRUNTERROR_WIDTH		1
+#define MMC_RISR_RXJABBERERROR_INDEX		7
+#define MMC_RISR_RXJABBERERROR_WIDTH		1
+#define MMC_RISR_RXUNDERSIZE_G_INDEX		8
+#define MMC_RISR_RXUNDERSIZE_G_WIDTH		1
+#define MMC_RISR_RXOVERSIZE_G_INDEX		9
+#define MMC_RISR_RXOVERSIZE_G_WIDTH		1
+#define MMC_RISR_RX64OCTETS_GB_INDEX		10
+#define MMC_RISR_RX64OCTETS_GB_WIDTH		1
+#define MMC_RISR_RX65TO127OCTETS_GB_INDEX	11
+#define MMC_RISR_RX65TO127OCTETS_GB_WIDTH	1
+#define MMC_RISR_RX128TO255OCTETS_GB_INDEX	12
+#define MMC_RISR_RX128TO255OCTETS_GB_WIDTH	1
+#define MMC_RISR_RX256TO511OCTETS_GB_INDEX	13
+#define MMC_RISR_RX256TO511OCTETS_GB_WIDTH	1
+#define MMC_RISR_RX512TO1023OCTETS_GB_INDEX	14
+#define MMC_RISR_RX512TO1023OCTETS_GB_WIDTH	1
+#define MMC_RISR_RX1024TOMAXOCTETS_GB_INDEX	15
+#define MMC_RISR_RX1024TOMAXOCTETS_GB_WIDTH	1
+#define MMC_RISR_RXUNICASTFRAMES_G_INDEX	16
+#define MMC_RISR_RXUNICASTFRAMES_G_WIDTH	1
+#define MMC_RISR_RXLENGTHERROR_INDEX		17
+#define MMC_RISR_RXLENGTHERROR_WIDTH		1
+#define MMC_RISR_RXOUTOFRANGETYPE_INDEX		18
+#define MMC_RISR_RXOUTOFRANGETYPE_WIDTH		1
+#define MMC_RISR_RXPAUSEFRAMES_INDEX		19
+#define MMC_RISR_RXPAUSEFRAMES_WIDTH		1
+#define MMC_RISR_RXFIFOOVERFLOW_INDEX		20
+#define MMC_RISR_RXFIFOOVERFLOW_WIDTH		1
+#define MMC_RISR_RXVLANFRAMES_GB_INDEX		21
+#define MMC_RISR_RXVLANFRAMES_GB_WIDTH		1
+#define MMC_RISR_RXWATCHDOGERROR_INDEX		22
+#define MMC_RISR_RXWATCHDOGERROR_WIDTH		1
+#define MMC_TIER_ALL_INTERRUPTS_INDEX		0
+#define MMC_TIER_ALL_INTERRUPTS_WIDTH		18
+#define MMC_TISR_TXOCTETCOUNT_GB_INDEX		0
+#define MMC_TISR_TXOCTETCOUNT_GB_WIDTH		1
+#define MMC_TISR_TXFRAMECOUNT_GB_INDEX		1
+#define MMC_TISR_TXFRAMECOUNT_GB_WIDTH		1
+#define MMC_TISR_TXBROADCASTFRAMES_G_INDEX	2
+#define MMC_TISR_TXBROADCASTFRAMES_G_WIDTH	1
+#define MMC_TISR_TXMULTICASTFRAMES_G_INDEX	3
+#define MMC_TISR_TXMULTICASTFRAMES_G_WIDTH	1
+#define MMC_TISR_TX64OCTETS_GB_INDEX		4
+#define MMC_TISR_TX64OCTETS_GB_WIDTH		1
+#define MMC_TISR_TX65TO127OCTETS_GB_INDEX	5
+#define MMC_TISR_TX65TO127OCTETS_GB_WIDTH	1
+#define MMC_TISR_TX128TO255OCTETS_GB_INDEX	6
+#define MMC_TISR_TX128TO255OCTETS_GB_WIDTH	1
+#define MMC_TISR_TX256TO511OCTETS_GB_INDEX	7
+#define MMC_TISR_TX256TO511OCTETS_GB_WIDTH	1
+#define MMC_TISR_TX512TO1023OCTETS_GB_INDEX	8
+#define MMC_TISR_TX512TO1023OCTETS_GB_WIDTH	1
+#define MMC_TISR_TX1024TOMAXOCTETS_GB_INDEX	9
+#define MMC_TISR_TX1024TOMAXOCTETS_GB_WIDTH	1
+#define MMC_TISR_TXUNICASTFRAMES_GB_INDEX	10
+#define MMC_TISR_TXUNICASTFRAMES_GB_WIDTH	1
+#define MMC_TISR_TXMULTICASTFRAMES_GB_INDEX	11
+#define MMC_TISR_TXMULTICASTFRAMES_GB_WIDTH	1
+#define MMC_TISR_TXBROADCASTFRAMES_GB_INDEX	12
+#define MMC_TISR_TXBROADCASTFRAMES_GB_WIDTH	1
+#define MMC_TISR_TXUNDERFLOWERROR_INDEX		13
+#define MMC_TISR_TXUNDERFLOWERROR_WIDTH		1
+#define MMC_TISR_TXOCTETCOUNT_G_INDEX		14
+#define MMC_TISR_TXOCTETCOUNT_G_WIDTH		1
+#define MMC_TISR_TXFRAMECOUNT_G_INDEX		15
+#define MMC_TISR_TXFRAMECOUNT_G_WIDTH		1
+#define MMC_TISR_TXPAUSEFRAMES_INDEX		16
+#define MMC_TISR_TXPAUSEFRAMES_WIDTH		1
+#define MMC_TISR_TXVLANFRAMES_G_INDEX		17
+#define MMC_TISR_TXVLANFRAMES_G_WIDTH		1
+
+/* MTL register offsets */
+#define MTL_OMR				0x1000
+#define MTL_FDCR			0x1008
+#define MTL_FDSR			0x100c
+#define MTL_FDDR			0x1010
+#define MTL_ISR				0x1020
+#define MTL_RQDCM0R			0x1030
+#define MTL_TCPM0R			0x1040
+#define MTL_TCPM1R			0x1044
+
+#define MTL_RQDCM_INC			4
+#define MTL_RQDCM_Q_PER_REG		4
+#define MTL_TCPM_INC			4
+#define MTL_TCPM_TC_PER_REG		4
+
+/* MTL register entry bit positions and sizes */
+#define MTL_OMR_ETSALG_INDEX		5
+#define MTL_OMR_ETSALG_WIDTH		2
+#define MTL_OMR_RAA_INDEX		2
+#define MTL_OMR_RAA_WIDTH		1
+
+/* MTL queue register offsets
+ *   Multiple queues can be active.  The first queue has registers
+ *   that begin at 0x1100.  Each subsequent queue has registers that
+ *   are accessed using an offset of 0x80 from the previous queue.
+ */
+#define MTL_Q_BASE			0x1100
+#define MTL_Q_INC			0x80
+
+#define MTL_Q_TQOMR			0x00
+#define MTL_Q_TQUR			0x04
+#define MTL_Q_TQDR			0x08
+#define MTL_Q_RQOMR			0x40
+#define MTL_Q_RQMPOCR			0x44
+#define MTL_Q_RQDR			0x4c
+#define MTL_Q_IER			0x70
+#define MTL_Q_ISR			0x74
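+
+/* For example, queue 3's MTL_Q_RQOMR register lives at
+ * MTL_Q_BASE + 3 * MTL_Q_INC + MTL_Q_RQOMR = 0x1100 + 0x180 + 0x40 = 0x12c0.
+ */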
+
+/* MTL queue register entry bit positions and sizes */
+#define MTL_Q_RQOMR_EHFC_INDEX		7
+#define MTL_Q_RQOMR_EHFC_WIDTH		1
+#define MTL_Q_RQOMR_RFA_INDEX		8
+#define MTL_Q_RQOMR_RFA_WIDTH		3
+#define MTL_Q_RQOMR_RFD_INDEX		13
+#define MTL_Q_RQOMR_RFD_WIDTH		3
+#define MTL_Q_RQOMR_RQS_INDEX		16
+#define MTL_Q_RQOMR_RQS_WIDTH		9
+#define MTL_Q_RQOMR_RSF_INDEX		5
+#define MTL_Q_RQOMR_RSF_WIDTH		1
+#define MTL_Q_RQOMR_RTC_INDEX		0
+#define MTL_Q_RQOMR_RTC_WIDTH		2
+#define MTL_Q_TQOMR_FTQ_INDEX		0
+#define MTL_Q_TQOMR_FTQ_WIDTH		1
+#define MTL_Q_TQOMR_Q2TCMAP_INDEX	8
+#define MTL_Q_TQOMR_Q2TCMAP_WIDTH	3
+#define MTL_Q_TQOMR_TQS_INDEX		16
+#define MTL_Q_TQOMR_TQS_WIDTH		10
+#define MTL_Q_TQOMR_TSF_INDEX		1
+#define MTL_Q_TQOMR_TSF_WIDTH		1
+#define MTL_Q_TQOMR_TTC_INDEX		4
+#define MTL_Q_TQOMR_TTC_WIDTH		3
+#define MTL_Q_TQOMR_TXQEN_INDEX		2
+#define MTL_Q_TQOMR_TXQEN_WIDTH		2
+
+/* MTL queue register value */
+#define MTL_RSF_DISABLE			0x00
+#define MTL_RSF_ENABLE			0x01
+#define MTL_TSF_DISABLE			0x00
+#define MTL_TSF_ENABLE			0x01
+
+#define MTL_RX_THRESHOLD_64		0x00
+#define MTL_RX_THRESHOLD_96		0x02
+#define MTL_RX_THRESHOLD_128		0x03
+#define MTL_TX_THRESHOLD_32		0x01
+#define MTL_TX_THRESHOLD_64		0x00
+#define MTL_TX_THRESHOLD_96		0x02
+#define MTL_TX_THRESHOLD_128		0x03
+#define MTL_TX_THRESHOLD_192		0x04
+#define MTL_TX_THRESHOLD_256		0x05
+#define MTL_TX_THRESHOLD_384		0x06
+#define MTL_TX_THRESHOLD_512		0x07
+
+#define MTL_ETSALG_WRR			0x00
+#define MTL_ETSALG_WFQ			0x01
+#define MTL_ETSALG_DWRR			0x02
+#define MTL_RAA_SP			0x00
+#define MTL_RAA_WSP			0x01
+
+#define MTL_Q_DISABLED			0x00
+#define MTL_Q_ENABLED			0x02
+
+/* MTL traffic class register offsets
+ *   Multiple traffic classes can be active.  The first class has registers
+ *   that begin at 0x1100.  Each subsequent traffic class has registers
+ *   that are accessed using an offset of 0x80 from the previous class.
+ */
+#define MTL_TC_BASE			MTL_Q_BASE
+#define MTL_TC_INC			MTL_Q_INC
+
+#define MTL_TC_ETSCR			0x10
+#define MTL_TC_ETSSR			0x14
+#define MTL_TC_QWR			0x18
+
+/* MTL traffic class register entry bit positions and sizes */
+#define MTL_TC_ETSCR_TSA_INDEX		0
+#define MTL_TC_ETSCR_TSA_WIDTH		2
+#define MTL_TC_QWR_QW_INDEX		0
+#define MTL_TC_QWR_QW_WIDTH		21
+
+/* MTL traffic class register value */
+#define MTL_TSA_SP			0x00
+#define MTL_TSA_ETS			0x02
+
+/* PCS MMD select register offset
+ *  The MMD select register is used for accessing PCS registers
+ *  when the underlying APB3 interface is using indirect addressing.
+ *  Indirect addressing requires accessing registers in two phases,
+ *  an address phase and a data phase.  The address phase requires
+ *  writing an address selection value to the MMD select register.
+ */
+#define PCS_MMD_SELECT			0xff
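+
+/* A typical indirect access sequence (a sketch of what the driver's MMD
+ * helpers are expected to do): write the upper bits of the combined
+ * (MMD << 16 | register) address to PCS_MMD_SELECT << 2 for the address
+ * phase, then access ((address & 0xff) << 2) for the data phase.
+ */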
+
+/* Descriptor/Packet entry bit positions and sizes */
+#define RX_PACKET_ERRORS_CRC_INDEX		2
+#define RX_PACKET_ERRORS_CRC_WIDTH		1
+#define RX_PACKET_ERRORS_FRAME_INDEX		3
+#define RX_PACKET_ERRORS_FRAME_WIDTH		1
+#define RX_PACKET_ERRORS_LENGTH_INDEX		0
+#define RX_PACKET_ERRORS_LENGTH_WIDTH		1
+#define RX_PACKET_ERRORS_OVERRUN_INDEX		1
+#define RX_PACKET_ERRORS_OVERRUN_WIDTH		1
+
+#define RX_PACKET_ATTRIBUTES_CSUM_DONE_INDEX	0
+#define RX_PACKET_ATTRIBUTES_CSUM_DONE_WIDTH	1
+#define RX_PACKET_ATTRIBUTES_VLAN_CTAG_INDEX	1
+#define RX_PACKET_ATTRIBUTES_VLAN_CTAG_WIDTH	1
+#define RX_PACKET_ATTRIBUTES_INCOMPLETE_INDEX	2
+#define RX_PACKET_ATTRIBUTES_INCOMPLETE_WIDTH	1
+#define RX_PACKET_ATTRIBUTES_CONTEXT_NEXT_INDEX	3
+#define RX_PACKET_ATTRIBUTES_CONTEXT_NEXT_WIDTH	1
+#define RX_PACKET_ATTRIBUTES_CONTEXT_INDEX	4
+#define RX_PACKET_ATTRIBUTES_CONTEXT_WIDTH	1
+#define RX_PACKET_ATTRIBUTES_RX_TSTAMP_INDEX	5
+#define RX_PACKET_ATTRIBUTES_RX_TSTAMP_WIDTH	1
+#define RX_PACKET_ATTRIBUTES_RSS_HASH_INDEX	6
+#define RX_PACKET_ATTRIBUTES_RSS_HASH_WIDTH	1
+
+#define RX_NORMAL_DESC0_OVT_INDEX		0
+#define RX_NORMAL_DESC0_OVT_WIDTH		16
+#define RX_NORMAL_DESC2_HL_INDEX		0
+#define RX_NORMAL_DESC2_HL_WIDTH		10
+#define RX_NORMAL_DESC3_CDA_INDEX		27
+#define RX_NORMAL_DESC3_CDA_WIDTH		1
+#define RX_NORMAL_DESC3_CTXT_INDEX		30
+#define RX_NORMAL_DESC3_CTXT_WIDTH		1
+#define RX_NORMAL_DESC3_ES_INDEX		15
+#define RX_NORMAL_DESC3_ES_WIDTH		1
+#define RX_NORMAL_DESC3_ETLT_INDEX		16
+#define RX_NORMAL_DESC3_ETLT_WIDTH		4
+#define RX_NORMAL_DESC3_FD_INDEX		29
+#define RX_NORMAL_DESC3_FD_WIDTH		1
+#define RX_NORMAL_DESC3_INTE_INDEX		30
+#define RX_NORMAL_DESC3_INTE_WIDTH		1
+#define RX_NORMAL_DESC3_L34T_INDEX		20
+#define RX_NORMAL_DESC3_L34T_WIDTH		4
+#define RX_NORMAL_DESC3_LD_INDEX		28
+#define RX_NORMAL_DESC3_LD_WIDTH		1
+#define RX_NORMAL_DESC3_OWN_INDEX		31
+#define RX_NORMAL_DESC3_OWN_WIDTH		1
+#define RX_NORMAL_DESC3_PL_INDEX		0
+#define RX_NORMAL_DESC3_PL_WIDTH		14
+#define RX_NORMAL_DESC3_RSV_INDEX		26
+#define RX_NORMAL_DESC3_RSV_WIDTH		1
+
+#define RX_DESC3_L34T_IPV4_TCP			1
+#define RX_DESC3_L34T_IPV4_UDP			2
+#define RX_DESC3_L34T_IPV4_ICMP			3
+#define RX_DESC3_L34T_IPV6_TCP			9
+#define RX_DESC3_L34T_IPV6_UDP			10
+#define RX_DESC3_L34T_IPV6_ICMP			11
+
+#define RX_CONTEXT_DESC3_TSA_INDEX		4
+#define RX_CONTEXT_DESC3_TSA_WIDTH		1
+#define RX_CONTEXT_DESC3_TSD_INDEX		6
+#define RX_CONTEXT_DESC3_TSD_WIDTH		1
+
+#define TX_PACKET_ATTRIBUTES_CSUM_ENABLE_INDEX	0
+#define TX_PACKET_ATTRIBUTES_CSUM_ENABLE_WIDTH	1
+#define TX_PACKET_ATTRIBUTES_TSO_ENABLE_INDEX	1
+#define TX_PACKET_ATTRIBUTES_TSO_ENABLE_WIDTH	1
+#define TX_PACKET_ATTRIBUTES_VLAN_CTAG_INDEX	2
+#define TX_PACKET_ATTRIBUTES_VLAN_CTAG_WIDTH	1
+#define TX_PACKET_ATTRIBUTES_PTP_INDEX		3
+#define TX_PACKET_ATTRIBUTES_PTP_WIDTH		1
+
+#define TX_CONTEXT_DESC2_MSS_INDEX		0
+#define TX_CONTEXT_DESC2_MSS_WIDTH		15
+#define TX_CONTEXT_DESC3_CTXT_INDEX		30
+#define TX_CONTEXT_DESC3_CTXT_WIDTH		1
+#define TX_CONTEXT_DESC3_TCMSSV_INDEX		26
+#define TX_CONTEXT_DESC3_TCMSSV_WIDTH		1
+#define TX_CONTEXT_DESC3_VLTV_INDEX		16
+#define TX_CONTEXT_DESC3_VLTV_WIDTH		1
+#define TX_CONTEXT_DESC3_VT_INDEX		0
+#define TX_CONTEXT_DESC3_VT_WIDTH		16
+
+#define TX_NORMAL_DESC2_HL_B1L_INDEX		0
+#define TX_NORMAL_DESC2_HL_B1L_WIDTH		14
+#define TX_NORMAL_DESC2_IC_INDEX		31
+#define TX_NORMAL_DESC2_IC_WIDTH		1
+#define TX_NORMAL_DESC2_TTSE_INDEX		30
+#define TX_NORMAL_DESC2_TTSE_WIDTH		1
+#define TX_NORMAL_DESC2_VTIR_INDEX		14
+#define TX_NORMAL_DESC2_VTIR_WIDTH		2
+#define TX_NORMAL_DESC3_CIC_INDEX		16
+#define TX_NORMAL_DESC3_CIC_WIDTH		2
+#define TX_NORMAL_DESC3_CPC_INDEX		26
+#define TX_NORMAL_DESC3_CPC_WIDTH		2
+#define TX_NORMAL_DESC3_CTXT_INDEX		30
+#define TX_NORMAL_DESC3_CTXT_WIDTH		1
+#define TX_NORMAL_DESC3_FD_INDEX		29
+#define TX_NORMAL_DESC3_FD_WIDTH		1
+#define TX_NORMAL_DESC3_FL_INDEX		0
+#define TX_NORMAL_DESC3_FL_WIDTH		15
+#define TX_NORMAL_DESC3_LD_INDEX		28
+#define TX_NORMAL_DESC3_LD_WIDTH		1
+#define TX_NORMAL_DESC3_OWN_INDEX		31
+#define TX_NORMAL_DESC3_OWN_WIDTH		1
+#define TX_NORMAL_DESC3_TCPHDRLEN_INDEX		19
+#define TX_NORMAL_DESC3_TCPHDRLEN_WIDTH		4
+#define TX_NORMAL_DESC3_TCPPL_INDEX		0
+#define TX_NORMAL_DESC3_TCPPL_WIDTH		18
+#define TX_NORMAL_DESC3_TSE_INDEX		18
+#define TX_NORMAL_DESC3_TSE_WIDTH		1
+
+#define TX_NORMAL_DESC2_VLAN_INSERT		0x2
+
+/* MDIO undefined or vendor specific registers */
+#ifndef MDIO_AN_COMP_STAT
+#define MDIO_AN_COMP_STAT		0x0030
+#endif
+
+/* Bit setting and getting macros
+ *  The get macro will extract the current bit field value from within
+ *  the variable
+ *
+ *  The set macro will clear the current bit field value within the
+ *  variable and then set the bit field of the variable to the
+ *  specified value
+ */
+#define GET_BITS(_var, _index, _width)					\
+	(((_var) >> (_index)) & ((0x1 << (_width)) - 1))
+
+#define SET_BITS(_var, _index, _width, _val)				\
+do {									\
+	(_var) &= ~(((0x1 << (_width)) - 1) << (_index));		\
+	(_var) |= (((_val) & ((0x1 << (_width)) - 1)) << (_index));	\
+} while (0)
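+
+/* For example, GET_BITS(0xabcd, 4, 8) extracts bits 11..4 and yields
+ * 0xbc, and SET_BITS(var, 4, 8, 0xbc) stores 0xbc back into that field.
+ */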
+
+#define GET_BITS_LE(_var, _index, _width)				\
+	((le32_to_cpu((_var)) >> (_index)) & ((0x1 << (_width)) - 1))
+
+#define SET_BITS_LE(_var, _index, _width, _val)				\
+do {									\
+	(_var) &= cpu_to_le32(~(((0x1 << (_width)) - 1) << (_index)));	\
+	(_var) |= cpu_to_le32((((_val) &				\
+			      ((0x1 << (_width)) - 1)) << (_index)));	\
+} while (0)
+
+/* Bit setting and getting macros based on register fields
+ *  The get macro uses the bit field definitions formed using the input
+ *  names to extract the current bit field value from within the
+ *  variable
+ *
+ *  The set macro uses the bit field definitions formed using the input
+ *  names to set the bit field of the variable to the specified value
+ */
+#define XGMAC_GET_BITS(_var, _prefix, _field)				\
+	GET_BITS((_var),						\
+		 _prefix##_##_field##_INDEX,				\
+		 _prefix##_##_field##_WIDTH)
+
+#define XGMAC_SET_BITS(_var, _prefix, _field, _val)			\
+	SET_BITS((_var),						\
+		 _prefix##_##_field##_INDEX,				\
+		 _prefix##_##_field##_WIDTH, (_val))
+
+#define XGMAC_GET_BITS_LE(_var, _prefix, _field)			\
+	GET_BITS_LE((_var),						\
+		 _prefix##_##_field##_INDEX,				\
+		 _prefix##_##_field##_WIDTH)
+
+#define XGMAC_SET_BITS_LE(_var, _prefix, _field, _val)			\
+	SET_BITS_LE((_var),						\
+		 _prefix##_##_field##_INDEX,				\
+		 _prefix##_##_field##_WIDTH, (_val))
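+
+/* For example, XGMAC_GET_BITS(reg_val, MAC_RCR, JE) expands to
+ * GET_BITS(reg_val, MAC_RCR_JE_INDEX, MAC_RCR_JE_WIDTH), i.e. it reads
+ * the single JE bit at position 8 out of a cached MAC_RCR value.
+ */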
+
+/* Macros for reading or writing registers
+ *  The ioread macros will get bit fields or full values using the
+ *  register definitions formed using the input names
+ *
+ *  The iowrite macros will set bit fields or full values using the
+ *  register definitions formed using the input names
+ */
+#define XGMAC_IOREAD(_pdata, _reg)					\
+	ioread32((_pdata)->xgmac_regs + _reg)
+
+#define XGMAC_IOREAD_BITS(_pdata, _reg, _field)				\
+	GET_BITS(XGMAC_IOREAD((_pdata), _reg),				\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH)
+
+#define XGMAC_IOWRITE(_pdata, _reg, _val)				\
+	iowrite32((_val), (_pdata)->xgmac_regs + _reg)
+
+#define XGMAC_IOWRITE_BITS(_pdata, _reg, _field, _val)			\
+do {									\
+	u32 reg_val = XGMAC_IOREAD((_pdata), _reg);			\
+	SET_BITS(reg_val,						\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH, (_val));			\
+	XGMAC_IOWRITE((_pdata), _reg, reg_val);				\
+} while (0)
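+
+/* For example, XGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, 1) performs a
+ * read-modify-write of MAC_RCR (offset 0x0004) that touches only the
+ * JE bit and leaves the rest of the register intact.
+ */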
+
+/* Macros for reading or writing MTL queue or traffic class registers
+ *  Similar to the standard read and write macros except that the
+ *  base register value is calculated by the queue or traffic class number
+ */
+#define XGMAC_MTL_IOREAD(_pdata, _n, _reg)				\
+	ioread32((_pdata)->xgmac_regs +					\
+		 MTL_Q_BASE + ((_n) * MTL_Q_INC) + _reg)
+
+#define XGMAC_MTL_IOREAD_BITS(_pdata, _n, _reg, _field)			\
+	GET_BITS(XGMAC_MTL_IOREAD((_pdata), (_n), _reg),		\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH)
+
+#define XGMAC_MTL_IOWRITE(_pdata, _n, _reg, _val)			\
+	iowrite32((_val), (_pdata)->xgmac_regs +			\
+		  MTL_Q_BASE + ((_n) * MTL_Q_INC) + _reg)
+
+#define XGMAC_MTL_IOWRITE_BITS(_pdata, _n, _reg, _field, _val)		\
+do {									\
+	u32 reg_val = XGMAC_MTL_IOREAD((_pdata), (_n), _reg);		\
+	SET_BITS(reg_val,						\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH, (_val));			\
+	XGMAC_MTL_IOWRITE((_pdata), (_n), _reg, reg_val);		\
+} while (0)
+
+/* Macros for reading or writing DMA channel registers
+ *  Similar to the standard read and write macros except that the
+ *  base register value is obtained from the ring
+ */
+#define XGMAC_DMA_IOREAD(_channel, _reg)				\
+	ioread32((_channel)->dma_regs + _reg)
+
+#define XGMAC_DMA_IOREAD_BITS(_channel, _reg, _field)			\
+	GET_BITS(XGMAC_DMA_IOREAD((_channel), _reg),			\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH)
+
+#define XGMAC_DMA_IOWRITE(_channel, _reg, _val)				\
+	iowrite32((_val), (_channel)->dma_regs + _reg)
+
+#define XGMAC_DMA_IOWRITE_BITS(_channel, _reg, _field, _val)		\
+do {									\
+	u32 reg_val = XGMAC_DMA_IOREAD((_channel), _reg);		\
+	SET_BITS(reg_val,						\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH, (_val));			\
+	XGMAC_DMA_IOWRITE((_channel), _reg, reg_val);			\
+} while (0)
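+
+/* The ring's dma_regs pointer is assumed to be precomputed elsewhere in
+ * the driver as xgmac_regs + DMA_CH_BASE + channel * DMA_CH_INC, which is
+ * why no per-channel address arithmetic appears in these macros.
+ */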
+
+/* Macros for building, reading or writing register values or bits
+ * within the register values of XPCS registers.
+ */
+#define XPCS_IOWRITE(_pdata, _off, _val)				\
+	iowrite32(_val, (_pdata)->xpcs_regs + (_off))
+
+#define XPCS_IOREAD(_pdata, _off)					\
+	ioread32((_pdata)->xpcs_regs + (_off))
+
+/* Macros for building, reading or writing register values or bits
+ * using MDIO.  Different from above because of the use of standardized
+ * Linux include values.  No shifting is performed with the bit
+ * operations, everything works on mask values.
+ */
+#define XMDIO_READ(_pdata, _mmd, _reg)					\
+	((_pdata)->hw_if.read_mmd_regs((_pdata), 0,			\
+		MII_ADDR_C45 | ((_mmd) << 16) | ((_reg) & 0xffff)))
+
+#define XMDIO_READ_BITS(_pdata, _mmd, _reg, _mask)			\
+	(XMDIO_READ((_pdata), _mmd, _reg) & _mask)
+
+#define XMDIO_WRITE(_pdata, _mmd, _reg, _val)				\
+	((_pdata)->hw_if.write_mmd_regs((_pdata), 0,			\
+		MII_ADDR_C45 | ((_mmd) << 16) | ((_reg) & 0xffff), (_val)))
+
+#define XMDIO_WRITE_BITS(_pdata, _mmd, _reg, _mask, _val)		\
+do {									\
+	u32 mmd_val = XMDIO_READ((_pdata), _mmd, _reg);			\
+	mmd_val &= ~_mask;						\
+	mmd_val |= (_val);						\
+	XMDIO_WRITE((_pdata), _mmd, _reg, mmd_val);			\
+} while (0)
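+
+/* MII_ADDR_C45 (from <linux/mdio.h>) marks the access as IEEE 802.3
+ * clause 45; the device address goes in bits 20:16 and the register
+ * number in bits 15:0, matching the encoding built above.
+ */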
+
+#endif
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe-dcb.c b/drivers/net/ethernet/amd/xgbe-a0/xgbe-dcb.c
new file mode 100644
index 0000000..343301c
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe-dcb.c
@@ -0,0 +1,269 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/netdevice.h>
+#include <net/dcbnl.h>
+
+#include "xgbe.h"
+#include "xgbe-common.h"
+
+static int xgbe_dcb_ieee_getets(struct net_device *netdev,
+				struct ieee_ets *ets)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+
+	/* Set number of supported traffic classes */
+	ets->ets_cap = pdata->hw_feat.tc_cnt;
+
+	if (pdata->ets) {
+		ets->cbs = pdata->ets->cbs;
+		memcpy(ets->tc_tx_bw, pdata->ets->tc_tx_bw,
+		       sizeof(ets->tc_tx_bw));
+		memcpy(ets->tc_tsa, pdata->ets->tc_tsa,
+		       sizeof(ets->tc_tsa));
+		memcpy(ets->prio_tc, pdata->ets->prio_tc,
+		       sizeof(ets->prio_tc));
+	}
+
+	return 0;
+}
+
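+/*
+ * Validate and apply an IEEE 802.1Qaz ETS configuration: bandwidth and
+ * TSA settings may only target supported traffic classes, priority to
+ * traffic class mappings must stay in range, and the weights of all
+ * classes using the ETS algorithm must sum to 100%.
+ */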
+static int xgbe_dcb_ieee_setets(struct net_device *netdev,
+				struct ieee_ets *ets)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	unsigned int i, tc_ets, tc_ets_weight;
+
+	tc_ets = 0;
+	tc_ets_weight = 0;
+	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+		DBGPR("  TC%u: tx_bw=%hhu, rx_bw=%hhu, tsa=%hhu\n", i,
+		      ets->tc_tx_bw[i], ets->tc_rx_bw[i], ets->tc_tsa[i]);
+		DBGPR("  PRIO%u: TC=%hhu\n", i, ets->prio_tc[i]);
+
+		if ((ets->tc_tx_bw[i] || ets->tc_tsa[i]) &&
+		    (i >= pdata->hw_feat.tc_cnt))
+			return -EINVAL;
+
+		if (ets->prio_tc[i] >= pdata->hw_feat.tc_cnt)
+			return -EINVAL;
+
+		switch (ets->tc_tsa[i]) {
+		case IEEE_8021QAZ_TSA_STRICT:
+			break;
+		case IEEE_8021QAZ_TSA_ETS:
+			tc_ets = 1;
+			tc_ets_weight += ets->tc_tx_bw[i];
+			break;
+
+		default:
+			return -EINVAL;
+		}
+	}
+
+	/* Weights must add up to 100% */
+	if (tc_ets && (tc_ets_weight != 100))
+		return -EINVAL;
+
+	if (!pdata->ets) {
+		pdata->ets = devm_kzalloc(pdata->dev, sizeof(*pdata->ets),
+					  GFP_KERNEL);
+		if (!pdata->ets)
+			return -ENOMEM;
+	}
+
+	memcpy(pdata->ets, ets, sizeof(*pdata->ets));
+
+	pdata->hw_if.config_dcb_tc(pdata);
+
+	return 0;
+}
+
+static int xgbe_dcb_ieee_getpfc(struct net_device *netdev,
+				struct ieee_pfc *pfc)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+
+	/* Set number of supported PFC traffic classes */
+	pfc->pfc_cap = pdata->hw_feat.tc_cnt;
+
+	if (pdata->pfc) {
+		pfc->pfc_en = pdata->pfc->pfc_en;
+		pfc->mbc = pdata->pfc->mbc;
+		pfc->delay = pdata->pfc->delay;
+	}
+
+	return 0;
+}
+
+static int xgbe_dcb_ieee_setpfc(struct net_device *netdev,
+				struct ieee_pfc *pfc)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+
+	DBGPR("  cap=%hhu, en=%hhx, mbc=%hhu, delay=%hhu\n",
+	      pfc->pfc_cap, pfc->pfc_en, pfc->mbc, pfc->delay);
+
+	if (!pdata->pfc) {
+		pdata->pfc = devm_kzalloc(pdata->dev, sizeof(*pdata->pfc),
+					  GFP_KERNEL);
+		if (!pdata->pfc)
+			return -ENOMEM;
+	}
+
+	memcpy(pdata->pfc, pfc, sizeof(*pdata->pfc));
+
+	pdata->hw_if.config_dcb_pfc(pdata);
+
+	return 0;
+}
+
+static u8 xgbe_dcb_getdcbx(struct net_device *netdev)
+{
+	return DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE;
+}
+
+static u8 xgbe_dcb_setdcbx(struct net_device *netdev, u8 dcbx)
+{
+	u8 support = xgbe_dcb_getdcbx(netdev);
+
+	DBGPR("  DCBX=%#hhx\n", dcbx);
+
+	if (dcbx & ~support)
+		return 1;
+
+	if ((dcbx & support) != support)
+		return 1;
+
+	return 0;
+}
+
+static const struct dcbnl_rtnl_ops xgbe_dcbnl_ops = {
+	/* IEEE 802.1Qaz std */
+	.ieee_getets = xgbe_dcb_ieee_getets,
+	.ieee_setets = xgbe_dcb_ieee_setets,
+	.ieee_getpfc = xgbe_dcb_ieee_getpfc,
+	.ieee_setpfc = xgbe_dcb_ieee_setpfc,
+
+	/* DCBX configuration */
+	.getdcbx     = xgbe_dcb_getdcbx,
+	.setdcbx     = xgbe_dcb_setdcbx,
+};
+
+const struct dcbnl_rtnl_ops *xgbe_a0_get_dcbnl_ops(void)
+{
+	return &xgbe_dcbnl_ops;
+}
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe-debugfs.c b/drivers/net/ethernet/amd/xgbe-a0/xgbe-debugfs.c
new file mode 100644
index 0000000..ecfa6f9
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe-debugfs.c
@@ -0,0 +1,373 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include "xgbe.h"
+#include "xgbe-common.h"
+
+static ssize_t xgbe_common_read(char __user *buffer, size_t count,
+				loff_t *ppos, unsigned int value)
+{
+	char *buf;
+	ssize_t len;
+
+	if (*ppos != 0)
+		return 0;
+
+	buf = kasprintf(GFP_KERNEL, "0x%08x\n", value);
+	if (!buf)
+		return -ENOMEM;
+
+	if (count < strlen(buf)) {
+		kfree(buf);
+		return -ENOSPC;
+	}
+
+	len = simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf));
+	kfree(buf);
+
+	return len;
+}
+
+static ssize_t xgbe_common_write(const char __user *buffer, size_t count,
+				 loff_t *ppos, unsigned int *value)
+{
+	char workarea[32];
+	ssize_t len;
+	int ret;
+
+	if (*ppos != 0)
+		return 0;
+
+	if (count >= sizeof(workarea))
+		return -ENOSPC;
+
+	len = simple_write_to_buffer(workarea, sizeof(workarea) - 1, ppos,
+				     buffer, count);
+	if (len < 0)
+		return len;
+
+	workarea[len] = '\0';
+	ret = kstrtouint(workarea, 16, value);
+	if (ret)
+		return -EIO;
+
+	return len;
+}
+
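+/*
+ * The files created below form a simple indirect register interface.
+ * Illustrative use from user space, assuming debugfs is mounted at
+ * /sys/kernel/debug and the interface is named eth0:
+ *
+ *   echo 0x0070 > /sys/kernel/debug/amd-xgbe-a0-eth0/xgmac_register
+ *   cat /sys/kernel/debug/amd-xgbe-a0-eth0/xgmac_register_value
+ *
+ * Writing an *_register file selects a register offset; reading or
+ * writing the matching *_register_value file accesses that register.
+ */
+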
+static ssize_t xgmac_reg_addr_read(struct file *filp, char __user *buffer,
+				   size_t count, loff_t *ppos)
+{
+	struct xgbe_prv_data *pdata = filp->private_data;
+
+	return xgbe_common_read(buffer, count, ppos, pdata->debugfs_xgmac_reg);
+}
+
+static ssize_t xgmac_reg_addr_write(struct file *filp,
+				    const char __user *buffer,
+				    size_t count, loff_t *ppos)
+{
+	struct xgbe_prv_data *pdata = filp->private_data;
+
+	return xgbe_common_write(buffer, count, ppos,
+				 &pdata->debugfs_xgmac_reg);
+}
+
+static ssize_t xgmac_reg_value_read(struct file *filp, char __user *buffer,
+				    size_t count, loff_t *ppos)
+{
+	struct xgbe_prv_data *pdata = filp->private_data;
+	unsigned int value;
+
+	value = XGMAC_IOREAD(pdata, pdata->debugfs_xgmac_reg);
+
+	return xgbe_common_read(buffer, count, ppos, value);
+}
+
+static ssize_t xgmac_reg_value_write(struct file *filp,
+				     const char __user *buffer,
+				     size_t count, loff_t *ppos)
+{
+	struct xgbe_prv_data *pdata = filp->private_data;
+	unsigned int value;
+	ssize_t len;
+
+	len = xgbe_common_write(buffer, count, ppos, &value);
+	if (len < 0)
+		return len;
+
+	XGMAC_IOWRITE(pdata, pdata->debugfs_xgmac_reg, value);
+
+	return len;
+}
+
+static const struct file_operations xgmac_reg_addr_fops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.read = xgmac_reg_addr_read,
+	.write = xgmac_reg_addr_write,
+};
+
+static const struct file_operations xgmac_reg_value_fops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.read = xgmac_reg_value_read,
+	.write = xgmac_reg_value_write,
+};
+
+static ssize_t xpcs_mmd_read(struct file *filp, char __user *buffer,
+			     size_t count, loff_t *ppos)
+{
+	struct xgbe_prv_data *pdata = filp->private_data;
+
+	return xgbe_common_read(buffer, count, ppos, pdata->debugfs_xpcs_mmd);
+}
+
+static ssize_t xpcs_mmd_write(struct file *filp, const char __user *buffer,
+			      size_t count, loff_t *ppos)
+{
+	struct xgbe_prv_data *pdata = filp->private_data;
+
+	return xgbe_common_write(buffer, count, ppos,
+				 &pdata->debugfs_xpcs_mmd);
+}
+
+static ssize_t xpcs_reg_addr_read(struct file *filp, char __user *buffer,
+				  size_t count, loff_t *ppos)
+{
+	struct xgbe_prv_data *pdata = filp->private_data;
+
+	return xgbe_common_read(buffer, count, ppos, pdata->debugfs_xpcs_reg);
+}
+
+static ssize_t xpcs_reg_addr_write(struct file *filp, const char __user *buffer,
+				   size_t count, loff_t *ppos)
+{
+	struct xgbe_prv_data *pdata = filp->private_data;
+
+	return xgbe_common_write(buffer, count, ppos,
+				 &pdata->debugfs_xpcs_reg);
+}
+
+static ssize_t xpcs_reg_value_read(struct file *filp, char __user *buffer,
+				   size_t count, loff_t *ppos)
+{
+	struct xgbe_prv_data *pdata = filp->private_data;
+	unsigned int value;
+
+	value = XMDIO_READ(pdata, pdata->debugfs_xpcs_mmd,
+			   pdata->debugfs_xpcs_reg);
+
+	return xgbe_common_read(buffer, count, ppos, value);
+}
+
+static ssize_t xpcs_reg_value_write(struct file *filp,
+				    const char __user *buffer,
+				    size_t count, loff_t *ppos)
+{
+	struct xgbe_prv_data *pdata = filp->private_data;
+	unsigned int value;
+	ssize_t len;
+
+	len = xgbe_common_write(buffer, count, ppos, &value);
+	if (len < 0)
+		return len;
+
+	XMDIO_WRITE(pdata, pdata->debugfs_xpcs_mmd, pdata->debugfs_xpcs_reg,
+		    value);
+
+	return len;
+}
+
+static const struct file_operations xpcs_mmd_fops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.read = xpcs_mmd_read,
+	.write = xpcs_mmd_write,
+};
+
+static const struct file_operations xpcs_reg_addr_fops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.read = xpcs_reg_addr_read,
+	.write = xpcs_reg_addr_write,
+};
+
+static const struct file_operations xpcs_reg_value_fops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.read = xpcs_reg_value_read,
+	.write = xpcs_reg_value_write,
+};
+
+void xgbe_a0_debugfs_init(struct xgbe_prv_data *pdata)
+{
+	struct dentry *pfile;
+	char *buf;
+
+	/* Set defaults */
+	pdata->debugfs_xgmac_reg = 0;
+	pdata->debugfs_xpcs_mmd = 1;
+	pdata->debugfs_xpcs_reg = 0;
+
+	buf = kasprintf(GFP_KERNEL, "amd-xgbe-a0-%s", pdata->netdev->name);
+	if (!buf)
+		return;
+
+	pdata->xgbe_debugfs = debugfs_create_dir(buf, NULL);
+	if (!pdata->xgbe_debugfs) {
+		netdev_err(pdata->netdev, "debugfs_create_dir failed\n");
+		kfree(buf);
+		return;
+	}
+
+	pfile = debugfs_create_file("xgmac_register", 0600,
+				    pdata->xgbe_debugfs, pdata,
+				    &xgmac_reg_addr_fops);
+	if (!pfile)
+		netdev_err(pdata->netdev, "debugfs_create_file failed\n");
+
+	pfile = debugfs_create_file("xgmac_register_value", 0600,
+				    pdata->xgbe_debugfs, pdata,
+				    &xgmac_reg_value_fops);
+	if (!pfile)
+		netdev_err(pdata->netdev, "debugfs_create_file failed\n");
+
+	pfile = debugfs_create_file("xpcs_mmd", 0600,
+				    pdata->xgbe_debugfs, pdata,
+				    &xpcs_mmd_fops);
+	if (!pfile)
+		netdev_err(pdata->netdev, "debugfs_create_file failed\n");
+
+	pfile = debugfs_create_file("xpcs_register", 0600,
+				    pdata->xgbe_debugfs, pdata,
+				    &xpcs_reg_addr_fops);
+	if (!pfile)
+		netdev_err(pdata->netdev, "debugfs_create_file failed\n");
+
+	pfile = debugfs_create_file("xpcs_register_value", 0600,
+				    pdata->xgbe_debugfs, pdata,
+				    &xpcs_reg_value_fops);
+	if (!pfile)
+		netdev_err(pdata->netdev, "debugfs_create_file failed\n");
+
+	kfree(buf);
+}
+
+void xgbe_a0_debugfs_exit(struct xgbe_prv_data *pdata)
+{
+	debugfs_remove_recursive(pdata->xgbe_debugfs);
+	pdata->xgbe_debugfs = NULL;
+}
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe-a0/xgbe-desc.c
new file mode 100644
index 0000000..5dd5777
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe-desc.c
@@ -0,0 +1,636 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "xgbe.h"
+#include "xgbe-common.h"
+
+static void xgbe_unmap_rdata(struct xgbe_prv_data *, struct xgbe_ring_data *);
+
+static void xgbe_free_ring(struct xgbe_prv_data *pdata,
+			   struct xgbe_ring *ring)
+{
+	struct xgbe_ring_data *rdata;
+	unsigned int i;
+
+	if (!ring)
+		return;
+
+	if (ring->rdata) {
+		for (i = 0; i < ring->rdesc_count; i++) {
+			rdata = XGBE_GET_DESC_DATA(ring, i);
+			xgbe_unmap_rdata(pdata, rdata);
+		}
+
+		kfree(ring->rdata);
+		ring->rdata = NULL;
+	}
+
+	if (ring->rx_hdr_pa.pages) {
+		dma_unmap_page(pdata->dev, ring->rx_hdr_pa.pages_dma,
+			       ring->rx_hdr_pa.pages_len, DMA_FROM_DEVICE);
+		put_page(ring->rx_hdr_pa.pages);
+
+		ring->rx_hdr_pa.pages = NULL;
+		ring->rx_hdr_pa.pages_len = 0;
+		ring->rx_hdr_pa.pages_offset = 0;
+		ring->rx_hdr_pa.pages_dma = 0;
+	}
+
+	if (ring->rx_buf_pa.pages) {
+		dma_unmap_page(pdata->dev, ring->rx_buf_pa.pages_dma,
+			       ring->rx_buf_pa.pages_len, DMA_FROM_DEVICE);
+		put_page(ring->rx_buf_pa.pages);
+
+		ring->rx_buf_pa.pages = NULL;
+		ring->rx_buf_pa.pages_len = 0;
+		ring->rx_buf_pa.pages_offset = 0;
+		ring->rx_buf_pa.pages_dma = 0;
+	}
+
+	if (ring->rdesc) {
+		dma_free_coherent(pdata->dev,
+				  (sizeof(struct xgbe_ring_desc) *
+				   ring->rdesc_count),
+				  ring->rdesc, ring->rdesc_dma);
+		ring->rdesc = NULL;
+	}
+}
+
+static void xgbe_free_ring_resources(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	DBGPR("-->xgbe_free_ring_resources\n");
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		xgbe_free_ring(pdata, channel->tx_ring);
+		xgbe_free_ring(pdata, channel->rx_ring);
+	}
+
+	DBGPR("<--xgbe_free_ring_resources\n");
+}
+
+static int xgbe_init_ring(struct xgbe_prv_data *pdata,
+			  struct xgbe_ring *ring, unsigned int rdesc_count)
+{
+	DBGPR("-->xgbe_init_ring\n");
+
+	if (!ring)
+		return 0;
+
+	/* Descriptors */
+	ring->rdesc_count = rdesc_count;
+	ring->rdesc = dma_alloc_coherent(pdata->dev,
+					 (sizeof(struct xgbe_ring_desc) *
+					  rdesc_count), &ring->rdesc_dma,
+					 GFP_KERNEL);
+	if (!ring->rdesc)
+		return -ENOMEM;
+
+	/* Descriptor information */
+	ring->rdata = kcalloc(rdesc_count, sizeof(struct xgbe_ring_data),
+			      GFP_KERNEL);
+	if (!ring->rdata)
+		return -ENOMEM;
+
+	DBGPR("    rdesc=0x%p, rdesc_dma=0x%llx, rdata=0x%p\n",
+	      ring->rdesc, ring->rdesc_dma, ring->rdata);
+
+	DBGPR("<--xgbe_init_ring\n");
+
+	return 0;
+}
+
+static int xgbe_alloc_ring_resources(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+	int ret;
+
+	DBGPR("-->xgbe_alloc_ring_resources\n");
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		DBGPR("  %s - tx_ring:\n", channel->name);
+		ret = xgbe_init_ring(pdata, channel->tx_ring,
+				     pdata->tx_desc_count);
+		if (ret) {
+			netdev_alert(pdata->netdev,
+				     "error initializing Tx ring\n");
+			goto err_ring;
+		}
+
+		DBGPR("  %s - rx_ring:\n", channel->name);
+		ret = xgbe_init_ring(pdata, channel->rx_ring,
+				     pdata->rx_desc_count);
+		if (ret) {
+			netdev_alert(pdata->netdev,
+				     "error initializing Rx ring\n");
+			goto err_ring;
+		}
+	}
+
+	DBGPR("<--xgbe_alloc_ring_resources\n");
+
+	return 0;
+
+err_ring:
+	xgbe_free_ring_resources(pdata);
+
+	return ret;
+}
+
+static int xgbe_alloc_pages(struct xgbe_prv_data *pdata,
+			    struct xgbe_page_alloc *pa, gfp_t gfp, int order)
+{
+	struct page *pages = NULL;
+	dma_addr_t pages_dma;
+	int ret;
+
+	/* Try to obtain pages, decreasing order if necessary */
+	gfp |= __GFP_COLD | __GFP_COMP;
+	while (order >= 0) {
+		pages = alloc_pages(gfp, order);
+		if (pages)
+			break;
+
+		order--;
+	}
+	if (!pages)
+		return -ENOMEM;
+
+	/* Map the pages */
+	pages_dma = dma_map_page(pdata->dev, pages, 0,
+				 PAGE_SIZE << order, DMA_FROM_DEVICE);
+	ret = dma_mapping_error(pdata->dev, pages_dma);
+	if (ret) {
+		put_page(pages);
+		return ret;
+	}
+
+	pa->pages = pages;
+	pa->pages_len = PAGE_SIZE << order;
+	pa->pages_offset = 0;
+	pa->pages_dma = pages_dma;
+
+	return 0;
+}
+
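+/*
+ * Carve a buffer of @len bytes out of the current page allocation and
+ * record it in @bd.  Once the allocation cannot supply another slice
+ * of the same length, responsibility for unmapping the page(s) is
+ * handed to this descriptor (pa_unmap) and a fresh allocation is
+ * forced on the next call.
+ */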
+static void xgbe_set_buffer_data(struct xgbe_buffer_data *bd,
+				 struct xgbe_page_alloc *pa,
+				 unsigned int len)
+{
+	get_page(pa->pages);
+	bd->pa = *pa;
+
+	bd->dma = pa->pages_dma + pa->pages_offset;
+	bd->dma_len = len;
+
+	pa->pages_offset += len;
+	if ((pa->pages_offset + len) > pa->pages_len) {
+		/* This data descriptor is responsible for unmapping page(s) */
+		bd->pa_unmap = *pa;
+
+		/* Get a new allocation next time */
+		pa->pages = NULL;
+		pa->pages_len = 0;
+		pa->pages_offset = 0;
+		pa->pages_dma = 0;
+	}
+}
+
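+/*
+ * Make sure the ring has live header and data page allocations, then
+ * carve this descriptor's header and buffer slices out of them.
+ */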
+static int xgbe_map_rx_buffer(struct xgbe_prv_data *pdata,
+			      struct xgbe_ring *ring,
+			      struct xgbe_ring_data *rdata)
+{
+	int order, ret;
+
+	if (!ring->rx_hdr_pa.pages) {
+		ret = xgbe_alloc_pages(pdata, &ring->rx_hdr_pa, GFP_ATOMIC, 0);
+		if (ret)
+			return ret;
+	}
+
+	if (!ring->rx_buf_pa.pages) {
+		order = max_t(int, PAGE_ALLOC_COSTLY_ORDER - 1, 0);
+		ret = xgbe_alloc_pages(pdata, &ring->rx_buf_pa, GFP_ATOMIC,
+				       order);
+		if (ret)
+			return ret;
+	}
+
+	/* Set up the header page info */
+	xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa,
+			     XGBE_SKB_ALLOC_SIZE);
+
+	/* Set up the buffer page info */
+	xgbe_set_buffer_data(&rdata->rx.buf, &ring->rx_buf_pa,
+			     pdata->rx_buf_size);
+
+	return 0;
+}
+
+static void xgbe_wrapper_tx_descriptor_init(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_channel *channel;
+	struct xgbe_ring *ring;
+	struct xgbe_ring_data *rdata;
+	struct xgbe_ring_desc *rdesc;
+	dma_addr_t rdesc_dma;
+	unsigned int i, j;
+
+	DBGPR("-->xgbe_wrapper_tx_descriptor_init\n");
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		ring = channel->tx_ring;
+		if (!ring)
+			break;
+
+		rdesc = ring->rdesc;
+		rdesc_dma = ring->rdesc_dma;
+
+		for (j = 0; j < ring->rdesc_count; j++) {
+			rdata = XGBE_GET_DESC_DATA(ring, j);
+
+			rdata->rdesc = rdesc;
+			rdata->rdesc_dma = rdesc_dma;
+
+			rdesc++;
+			rdesc_dma += sizeof(struct xgbe_ring_desc);
+		}
+
+		ring->cur = 0;
+		ring->dirty = 0;
+		memset(&ring->tx, 0, sizeof(ring->tx));
+
+		hw_if->tx_desc_init(channel);
+	}
+
+	DBGPR("<--xgbe_wrapper_tx_descriptor_init\n");
+}
+
+static void xgbe_wrapper_rx_descriptor_init(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_channel *channel;
+	struct xgbe_ring *ring;
+	struct xgbe_ring_desc *rdesc;
+	struct xgbe_ring_data *rdata;
+	dma_addr_t rdesc_dma;
+	unsigned int i, j;
+
+	DBGPR("-->xgbe_wrapper_rx_descriptor_init\n");
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		ring = channel->rx_ring;
+		if (!ring)
+			break;
+
+		rdesc = ring->rdesc;
+		rdesc_dma = ring->rdesc_dma;
+
+		for (j = 0; j < ring->rdesc_count; j++) {
+			rdata = XGBE_GET_DESC_DATA(ring, j);
+
+			rdata->rdesc = rdesc;
+			rdata->rdesc_dma = rdesc_dma;
+
+			if (xgbe_map_rx_buffer(pdata, ring, rdata))
+				break;
+
+			rdesc++;
+			rdesc_dma += sizeof(struct xgbe_ring_desc);
+		}
+
+		ring->cur = 0;
+		ring->dirty = 0;
+
+		hw_if->rx_desc_init(channel);
+	}
+
+	DBGPR("<--xgbe_wrapper_rx_descriptor_init\n");
+}
+
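+/*
+ * Release the DMA mappings, skb and page references held by a ring
+ * descriptor entry and reset its saved Tx/Rx state.
+ */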
+static void xgbe_unmap_rdata(struct xgbe_prv_data *pdata,
+			     struct xgbe_ring_data *rdata)
+{
+	if (rdata->skb_dma) {
+		if (rdata->mapped_as_page) {
+			dma_unmap_page(pdata->dev, rdata->skb_dma,
+				       rdata->skb_dma_len, DMA_TO_DEVICE);
+		} else {
+			dma_unmap_single(pdata->dev, rdata->skb_dma,
+					 rdata->skb_dma_len, DMA_TO_DEVICE);
+		}
+		rdata->skb_dma = 0;
+		rdata->skb_dma_len = 0;
+	}
+
+	if (rdata->skb) {
+		dev_kfree_skb_any(rdata->skb);
+		rdata->skb = NULL;
+	}
+
+	if (rdata->rx.hdr.pa.pages)
+		put_page(rdata->rx.hdr.pa.pages);
+
+	if (rdata->rx.hdr.pa_unmap.pages) {
+		dma_unmap_page(pdata->dev, rdata->rx.hdr.pa_unmap.pages_dma,
+			       rdata->rx.hdr.pa_unmap.pages_len,
+			       DMA_FROM_DEVICE);
+		put_page(rdata->rx.hdr.pa_unmap.pages);
+	}
+
+	if (rdata->rx.buf.pa.pages)
+		put_page(rdata->rx.buf.pa.pages);
+
+	if (rdata->rx.buf.pa_unmap.pages) {
+		dma_unmap_page(pdata->dev, rdata->rx.buf.pa_unmap.pages_dma,
+			       rdata->rx.buf.pa_unmap.pages_len,
+			       DMA_FROM_DEVICE);
+		put_page(rdata->rx.buf.pa_unmap.pages);
+	}
+
+	memset(&rdata->tx, 0, sizeof(rdata->tx));
+	memset(&rdata->rx, 0, sizeof(rdata->rx));
+
+	rdata->mapped_as_page = 0;
+
+	if (rdata->state_saved) {
+		rdata->state_saved = 0;
+		rdata->state.incomplete = 0;
+		rdata->state.context_next = 0;
+		rdata->state.skb = NULL;
+		rdata->state.len = 0;
+		rdata->state.error = 0;
+	}
+}
+
+static int xgbe_map_tx_skb(struct xgbe_channel *channel, struct sk_buff *skb)
+{
+	struct xgbe_prv_data *pdata = channel->pdata;
+	struct xgbe_ring *ring = channel->tx_ring;
+	struct xgbe_ring_data *rdata;
+	struct xgbe_packet_data *packet;
+	struct skb_frag_struct *frag;
+	dma_addr_t skb_dma;
+	unsigned int start_index, cur_index;
+	unsigned int offset, tso, vlan, datalen, len;
+	unsigned int i;
+
+	DBGPR("-->xgbe_map_tx_skb: cur = %u\n", ring->cur);
+
+	offset = 0;
+	start_index = ring->cur;
+	cur_index = ring->cur;
+
+	packet = &ring->packet_data;
+	packet->rdesc_count = 0;
+	packet->length = 0;
+
+	tso = XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			     TSO_ENABLE);
+	vlan = XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			      VLAN_CTAG);
+
+	/* Save space for a context descriptor if needed */
+	if ((tso && (packet->mss != ring->tx.cur_mss)) ||
+	    (vlan && (packet->vlan_ctag != ring->tx.cur_vlan_ctag)))
+		cur_index++;
+	rdata = XGBE_GET_DESC_DATA(ring, cur_index);
+
+	if (tso) {
+		DBGPR("  TSO packet\n");
+
+		/* Map the TSO header */
+		skb_dma = dma_map_single(pdata->dev, skb->data,
+					 packet->header_len, DMA_TO_DEVICE);
+		if (dma_mapping_error(pdata->dev, skb_dma)) {
+			netdev_alert(pdata->netdev, "dma_map_single failed\n");
+			goto err_out;
+		}
+		rdata->skb_dma = skb_dma;
+		rdata->skb_dma_len = packet->header_len;
+
+		offset = packet->header_len;
+
+		packet->length += packet->header_len;
+
+		cur_index++;
+		rdata = XGBE_GET_DESC_DATA(ring, cur_index);
+	}
+
+	/* Map the (remainder of the) packet */
+	for (datalen = skb_headlen(skb) - offset; datalen; ) {
+		len = min_t(unsigned int, datalen, XGBE_TX_MAX_BUF_SIZE);
+
+		skb_dma = dma_map_single(pdata->dev, skb->data + offset, len,
+					 DMA_TO_DEVICE);
+		if (dma_mapping_error(pdata->dev, skb_dma)) {
+			netdev_alert(pdata->netdev, "dma_map_single failed\n");
+			goto err_out;
+		}
+		rdata->skb_dma = skb_dma;
+		rdata->skb_dma_len = len;
+		DBGPR("  skb data: index=%u, dma=0x%llx, len=%u\n",
+		      cur_index, skb_dma, len);
+
+		datalen -= len;
+		offset += len;
+
+		packet->length += len;
+
+		cur_index++;
+		rdata = XGBE_GET_DESC_DATA(ring, cur_index);
+	}
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		DBGPR("  mapping frag %u\n", i);
+
+		frag = &skb_shinfo(skb)->frags[i];
+		offset = 0;
+
+		for (datalen = skb_frag_size(frag); datalen; ) {
+			len = min_t(unsigned int, datalen,
+				    XGBE_TX_MAX_BUF_SIZE);
+
+			skb_dma = skb_frag_dma_map(pdata->dev, frag, offset,
+						   len, DMA_TO_DEVICE);
+			if (dma_mapping_error(pdata->dev, skb_dma)) {
+				netdev_alert(pdata->netdev,
+					     "skb_frag_dma_map failed\n");
+				goto err_out;
+			}
+			rdata->skb_dma = skb_dma;
+			rdata->skb_dma_len = len;
+			rdata->mapped_as_page = 1;
+			DBGPR("  skb data: index=%u, dma=0x%llx, len=%u\n",
+			      cur_index, skb_dma, len);
+
+			datalen -= len;
+			offset += len;
+
+			packet->length += len;
+
+			cur_index++;
+			rdata = XGBE_GET_DESC_DATA(ring, cur_index);
+		}
+	}
+
+	/* Save the skb address in the last entry. We always have some data
+	 * that has been mapped so rdata is always advanced past the last
+	 * piece of mapped data - use the entry pointed to by cur_index - 1.
+	 */
+	rdata = XGBE_GET_DESC_DATA(ring, cur_index - 1);
+	rdata->skb = skb;
+
+	/* Save the number of descriptor entries used */
+	packet->rdesc_count = cur_index - start_index;
+
+	DBGPR("<--xgbe_map_tx_skb: count=%u\n", packet->rdesc_count);
+
+	return packet->rdesc_count;
+
+err_out:
+	while (start_index < cur_index) {
+		rdata = XGBE_GET_DESC_DATA(ring, start_index++);
+		xgbe_unmap_rdata(pdata, rdata);
+	}
+
+	DBGPR("<--xgbe_map_tx_skb: count=0\n");
+
+	return 0;
+}
+
+void xgbe_a0_init_function_ptrs_desc(struct xgbe_desc_if *desc_if)
+{
+	DBGPR("-->xgbe_a0_init_function_ptrs_desc\n");
+
+	desc_if->alloc_ring_resources = xgbe_alloc_ring_resources;
+	desc_if->free_ring_resources = xgbe_free_ring_resources;
+	desc_if->map_tx_skb = xgbe_map_tx_skb;
+	desc_if->map_rx_buffer = xgbe_map_rx_buffer;
+	desc_if->unmap_rdata = xgbe_unmap_rdata;
+	desc_if->wrapper_tx_desc_init = xgbe_wrapper_tx_descriptor_init;
+	desc_if->wrapper_rx_desc_init = xgbe_wrapper_rx_descriptor_init;
+
+	DBGPR("<--xgbe_a0_init_function_ptrs_desc\n");
+}
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe-a0/xgbe-dev.c
new file mode 100644
index 0000000..f6a3a58
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe-dev.c
@@ -0,0 +1,2964 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/phy.h>
+#include <linux/mdio.h>
+#include <linux/clk.h>
+#include <linux/bitrev.h>
+#include <linux/crc32.h>
+
+#include "xgbe.h"
+#include "xgbe-common.h"
+
+static unsigned int xgbe_usec_to_riwt(struct xgbe_prv_data *pdata,
+				      unsigned int usec)
+{
+	unsigned long rate;
+	unsigned int ret;
+
+	DBGPR("-->xgbe_usec_to_riwt\n");
+
+	rate = pdata->sysclk_rate;
+
+	/*
+	 * Convert the input usec value to the watchdog timer value. Each
+	 * watchdog timer value is equivalent to 256 clock cycles.
+	 * Calculate the required value as:
+	 *   ( usec * ( system_clock_mhz / 10^6 ) ) / 256
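+	 *
+	 * For example, assuming an illustrative 125 MHz system clock,
+	 * 200 usec converts to (200 * 125) / 256 = 97 timer units.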
+	 */
+	ret = (usec * (rate / 1000000)) / 256;
+
+	DBGPR("<--xgbe_usec_to_riwt\n");
+
+	return ret;
+}
+
+static unsigned int xgbe_riwt_to_usec(struct xgbe_prv_data *pdata,
+				      unsigned int riwt)
+{
+	unsigned long rate;
+	unsigned int ret;
+
+	DBGPR("-->xgbe_riwt_to_usec\n");
+
+	rate = pdata->sysclk_rate;
+
+	/*
+	 * Convert the input watchdog timer value to the usec value. Each
+	 * watchdog timer value is equivalent to 256 clock cycles.
+	 * Calculate the required value as:
+	 *   ( riwt * 256 ) / ( system_clock_mhz / 10^6 )
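+	 *
+	 * For example, assuming the same illustrative 125 MHz system
+	 * clock, a riwt of 97 converts back to (97 * 256) / 125 = 198 usec.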
+	 */
+	ret = (riwt * 256) / (rate / 1000000);
+
+	DBGPR("<--xgbe_riwt_to_usec\n");
+
+	return ret;
+}
+
+static int xgbe_config_pblx8(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++)
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_CR, PBLX8,
+				       pdata->pblx8);
+
+	return 0;
+}
+
+static int xgbe_get_tx_pbl_val(struct xgbe_prv_data *pdata)
+{
+	return XGMAC_DMA_IOREAD_BITS(pdata->channel, DMA_CH_TCR, PBL);
+}
+
+static int xgbe_config_tx_pbl_val(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, PBL,
+				       pdata->tx_pbl);
+	}
+
+	return 0;
+}
+
+static int xgbe_get_rx_pbl_val(struct xgbe_prv_data *pdata)
+{
+	return XGMAC_DMA_IOREAD_BITS(pdata->channel, DMA_CH_RCR, PBL);
+}
+
+static int xgbe_config_rx_pbl_val(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->rx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, PBL,
+				       pdata->rx_pbl);
+	}
+
+	return 0;
+}
+
+static int xgbe_config_osp_mode(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, OSP,
+				       pdata->tx_osp_mode);
+	}
+
+	return 0;
+}
+
+static int xgbe_config_rsf_mode(struct xgbe_prv_data *pdata, unsigned int val)
+{
+	unsigned int i;
+
+	for (i = 0; i < pdata->rx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, RSF, val);
+
+	return 0;
+}
+
+static int xgbe_config_tsf_mode(struct xgbe_prv_data *pdata, unsigned int val)
+{
+	unsigned int i;
+
+	for (i = 0; i < pdata->tx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TSF, val);
+
+	return 0;
+}
+
+static int xgbe_config_rx_threshold(struct xgbe_prv_data *pdata,
+				    unsigned int val)
+{
+	unsigned int i;
+
+	for (i = 0; i < pdata->rx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, RTC, val);
+
+	return 0;
+}
+
+static int xgbe_config_tx_threshold(struct xgbe_prv_data *pdata,
+				    unsigned int val)
+{
+	unsigned int i;
+
+	for (i = 0; i < pdata->tx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TTC, val);
+
+	return 0;
+}
+
+static int xgbe_config_rx_coalesce(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->rx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RIWT, RWT,
+				       pdata->rx_riwt);
+	}
+
+	return 0;
+}
+
+static int xgbe_config_tx_coalesce(struct xgbe_prv_data *pdata)
+{
+	return 0;
+}
+
+static void xgbe_config_rx_buffer_size(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->rx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, RBSZ,
+				       pdata->rx_buf_size);
+	}
+}
+
+static void xgbe_config_tso_mode(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, TSE, 1);
+	}
+}
+
+static void xgbe_config_sph_mode(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->rx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_CR, SPH, 1);
+	}
+
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, HDSMS, XGBE_SPH_HDSMS_SIZE);
+}
+
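+/*
+ * Write one RSS hash key word or lookup table entry through the
+ * indirect MAC_RSSAR/MAC_RSSDR interface: load the data register,
+ * program the index and type, set the OB (operation busy) bit and
+ * poll until the hardware clears it.
+ */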
+static int xgbe_write_rss_reg(struct xgbe_prv_data *pdata, unsigned int type,
+			      unsigned int index, unsigned int val)
+{
+	unsigned int wait;
+	int ret = 0;
+
+	mutex_lock(&pdata->rss_mutex);
+
+	if (XGMAC_IOREAD_BITS(pdata, MAC_RSSAR, OB)) {
+		ret = -EBUSY;
+		goto unlock;
+	}
+
+	XGMAC_IOWRITE(pdata, MAC_RSSDR, val);
+
+	XGMAC_IOWRITE_BITS(pdata, MAC_RSSAR, RSSIA, index);
+	XGMAC_IOWRITE_BITS(pdata, MAC_RSSAR, ADDRT, type);
+	XGMAC_IOWRITE_BITS(pdata, MAC_RSSAR, CT, 0);
+	XGMAC_IOWRITE_BITS(pdata, MAC_RSSAR, OB, 1);
+
+	wait = 1000;
+	while (wait--) {
+		if (!XGMAC_IOREAD_BITS(pdata, MAC_RSSAR, OB))
+			goto unlock;
+
+		usleep_range(1000, 1500);
+	}
+
+	ret = -EBUSY;
+
+unlock:
+	mutex_unlock(&pdata->rss_mutex);
+
+	return ret;
+}
+
+static int xgbe_write_rss_hash_key(struct xgbe_prv_data *pdata)
+{
+	unsigned int key_regs = sizeof(pdata->rss_key) / sizeof(u32);
+	unsigned int *key = (unsigned int *)&pdata->rss_key;
+	int ret;
+
+	while (key_regs--) {
+		ret = xgbe_write_rss_reg(pdata, XGBE_RSS_HASH_KEY_TYPE,
+					 key_regs, *key++);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int xgbe_write_rss_lookup_table(struct xgbe_prv_data *pdata)
+{
+	unsigned int i;
+	int ret;
+
+	for (i = 0; i < ARRAY_SIZE(pdata->rss_table); i++) {
+		ret = xgbe_write_rss_reg(pdata,
+					 XGBE_RSS_LOOKUP_TABLE_TYPE, i,
+					 pdata->rss_table[i]);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int xgbe_set_rss_hash_key(struct xgbe_prv_data *pdata, const u8 *key)
+{
+	memcpy(pdata->rss_key, key, sizeof(pdata->rss_key));
+
+	return xgbe_write_rss_hash_key(pdata);
+}
+
+static int xgbe_set_rss_lookup_table(struct xgbe_prv_data *pdata,
+				     const u32 *table)
+{
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(pdata->rss_table); i++)
+		XGMAC_SET_BITS(pdata->rss_table[i], MAC_RSSDR, DMCH, table[i]);
+
+	return xgbe_write_rss_lookup_table(pdata);
+}
+
+static int xgbe_enable_rss(struct xgbe_prv_data *pdata)
+{
+	int ret;
+
+	if (!pdata->hw_feat.rss)
+		return -EOPNOTSUPP;
+
+	/* Program the hash key */
+	ret = xgbe_write_rss_hash_key(pdata);
+	if (ret)
+		return ret;
+
+	/* Program the lookup table */
+	ret = xgbe_write_rss_lookup_table(pdata);
+	if (ret)
+		return ret;
+
+	/* Set the RSS options */
+	XGMAC_IOWRITE(pdata, MAC_RSSCR, pdata->rss_options);
+
+	/* Enable RSS */
+	XGMAC_IOWRITE_BITS(pdata, MAC_RSSCR, RSSE, 1);
+
+	return 0;
+}
+
+static int xgbe_disable_rss(struct xgbe_prv_data *pdata)
+{
+	if (!pdata->hw_feat.rss)
+		return -EOPNOTSUPP;
+
+	XGMAC_IOWRITE_BITS(pdata, MAC_RSSCR, RSSE, 0);
+
+	return 0;
+}
+
+static void xgbe_config_rss(struct xgbe_prv_data *pdata)
+{
+	int ret;
+
+	if (!pdata->hw_feat.rss)
+		return;
+
+	if (pdata->netdev->features & NETIF_F_RXHASH)
+		ret = xgbe_enable_rss(pdata);
+	else
+		ret = xgbe_disable_rss(pdata);
+
+	if (ret)
+		netdev_err(pdata->netdev,
+			   "error configuring RSS, RSS disabled\n");
+}
+
+static int xgbe_disable_tx_flow_control(struct xgbe_prv_data *pdata)
+{
+	unsigned int max_q_count, q_count;
+	unsigned int reg, reg_val;
+	unsigned int i;
+
+	/* Clear MTL flow control */
+	for (i = 0; i < pdata->rx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, EHFC, 0);
+
+	/* Clear MAC flow control */
+	max_q_count = XGMAC_MAX_FLOW_CONTROL_QUEUES;
+	q_count = min_t(unsigned int, pdata->tx_q_count, max_q_count);
+	reg = MAC_Q0TFCR;
+	for (i = 0; i < q_count; i++) {
+		reg_val = XGMAC_IOREAD(pdata, reg);
+		XGMAC_SET_BITS(reg_val, MAC_Q0TFCR, TFE, 0);
+		XGMAC_IOWRITE(pdata, reg, reg_val);
+
+		reg += MAC_QTFCR_INC;
+	}
+
+	return 0;
+}
+
+static int xgbe_enable_tx_flow_control(struct xgbe_prv_data *pdata)
+{
+	unsigned int max_q_count, q_count;
+	unsigned int reg, reg_val;
+	unsigned int i;
+
+	/* Set MTL flow control */
+	for (i = 0; i < pdata->rx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, EHFC, 1);
+
+	/* Set MAC flow control */
+	max_q_count = XGMAC_MAX_FLOW_CONTROL_QUEUES;
+	q_count = min_t(unsigned int, pdata->tx_q_count, max_q_count);
+	reg = MAC_Q0TFCR;
+	for (i = 0; i < q_count; i++) {
+		reg_val = XGMAC_IOREAD(pdata, reg);
+
+		/* Enable transmit flow control */
+		XGMAC_SET_BITS(reg_val, MAC_Q0TFCR, TFE, 1);
+		/* Set pause time */
+		XGMAC_SET_BITS(reg_val, MAC_Q0TFCR, PT, 0xffff);
+
+		XGMAC_IOWRITE(pdata, reg, reg_val);
+
+		reg += MAC_QTFCR_INC;
+	}
+
+	return 0;
+}
+
+static int xgbe_disable_rx_flow_control(struct xgbe_prv_data *pdata)
+{
+	XGMAC_IOWRITE_BITS(pdata, MAC_RFCR, RFE, 0);
+
+	return 0;
+}
+
+static int xgbe_enable_rx_flow_control(struct xgbe_prv_data *pdata)
+{
+	XGMAC_IOWRITE_BITS(pdata, MAC_RFCR, RFE, 1);
+
+	return 0;
+}
+
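+/*
+ * Legacy pause and priority-based flow control share the same MAC
+ * controls: flow control is enabled when either plain Tx/Rx pause is
+ * requested or any PFC priority is enabled.
+ */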
+static int xgbe_config_tx_flow_control(struct xgbe_prv_data *pdata)
+{
+	struct ieee_pfc *pfc = pdata->pfc;
+
+	if (pdata->tx_pause || (pfc && pfc->pfc_en))
+		xgbe_enable_tx_flow_control(pdata);
+	else
+		xgbe_disable_tx_flow_control(pdata);
+
+	return 0;
+}
+
+static int xgbe_config_rx_flow_control(struct xgbe_prv_data *pdata)
+{
+	struct ieee_pfc *pfc = pdata->pfc;
+
+	if (pdata->rx_pause || (pfc && pfc->pfc_en))
+		xgbe_enable_rx_flow_control(pdata);
+	else
+		xgbe_disable_rx_flow_control(pdata);
+
+	return 0;
+}
+
+static void xgbe_config_flow_control(struct xgbe_prv_data *pdata)
+{
+	struct ieee_pfc *pfc = pdata->pfc;
+
+	xgbe_config_tx_flow_control(pdata);
+	xgbe_config_rx_flow_control(pdata);
+
+	XGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE,
+			   (pfc && pfc->pfc_en) ? 1 : 0);
+}
+
+static void xgbe_enable_dma_interrupts(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int dma_ch_isr, dma_ch_ier;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		/* Clear all the interrupts which are set */
+		dma_ch_isr = XGMAC_DMA_IOREAD(channel, DMA_CH_SR);
+		XGMAC_DMA_IOWRITE(channel, DMA_CH_SR, dma_ch_isr);
+
+		/* Clear all interrupt enable bits */
+		dma_ch_ier = 0;
+
+		/* Enable the following interrupts
+		 *   NIE  - Normal Interrupt Summary Enable
+		 *   AIE  - Abnormal Interrupt Summary Enable
+		 *   FBEE - Fatal Bus Error Enable
+		 */
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, NIE, 1);
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, AIE, 1);
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, FBEE, 1);
+
+		if (channel->tx_ring) {
+			/* Enable the following Tx interrupts
+			 *   TIE  - Transmit Interrupt Enable (unless using
+			 *          per channel interrupts)
+			 */
+			if (!pdata->per_channel_irq)
+				XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 1);
+		}
+		if (channel->rx_ring) {
+			/* Enable the following Rx interrupts
+			 *   RBUE - Receive Buffer Unavailable Enable
+			 *   RIE  - Receive Interrupt Enable (unless using
+			 *          per channel interrupts)
+			 */
+			XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RBUE, 1);
+			if (!pdata->per_channel_irq)
+				XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RIE, 1);
+		}
+
+		XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, dma_ch_ier);
+	}
+}
+
+static void xgbe_enable_mtl_interrupts(struct xgbe_prv_data *pdata)
+{
+	unsigned int mtl_q_isr;
+	unsigned int q_count, i;
+
+	q_count = max(pdata->hw_feat.tx_q_cnt, pdata->hw_feat.rx_q_cnt);
+	for (i = 0; i < q_count; i++) {
+		/* Clear all the interrupts which are set */
+		mtl_q_isr = XGMAC_MTL_IOREAD(pdata, i, MTL_Q_ISR);
+		XGMAC_MTL_IOWRITE(pdata, i, MTL_Q_ISR, mtl_q_isr);
+
+		/* No MTL interrupts to be enabled */
+		XGMAC_MTL_IOWRITE(pdata, i, MTL_Q_IER, 0);
+	}
+}
+
+static void xgbe_enable_mac_interrupts(struct xgbe_prv_data *pdata)
+{
+	unsigned int mac_ier = 0;
+
+	/* Enable Timestamp interrupt */
+	XGMAC_SET_BITS(mac_ier, MAC_IER, TSIE, 1);
+
+	XGMAC_IOWRITE(pdata, MAC_IER, mac_ier);
+
+	/* Enable all counter interrupts */
+	XGMAC_IOWRITE_BITS(pdata, MMC_RIER, ALL_INTERRUPTS, 0xffffffff);
+	XGMAC_IOWRITE_BITS(pdata, MMC_TIER, ALL_INTERRUPTS, 0xffffffff);
+}
+
+static int xgbe_set_gmii_speed(struct xgbe_prv_data *pdata)
+{
+	if (XGMAC_IOREAD_BITS(pdata, MAC_TCR, SS) == 0x3)
+		return 0;
+
+	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, SS, 0x3);
+
+	return 0;
+}
+
+static int xgbe_set_gmii_2500_speed(struct xgbe_prv_data *pdata)
+{
+	if (XGMAC_IOREAD_BITS(pdata, MAC_TCR, SS) == 0x2)
+		return 0;
+
+	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, SS, 0x2);
+
+	return 0;
+}
+
+static int xgbe_set_xgmii_speed(struct xgbe_prv_data *pdata)
+{
+	if (XGMAC_IOREAD_BITS(pdata, MAC_TCR, SS) == 0)
+		return 0;
+
+	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, SS, 0);
+
+	return 0;
+}
+
+static int xgbe_set_promiscuous_mode(struct xgbe_prv_data *pdata,
+				     unsigned int enable)
+{
+	unsigned int val = enable ? 1 : 0;
+
+	if (XGMAC_IOREAD_BITS(pdata, MAC_PFR, PR) == val)
+		return 0;
+
+	DBGPR("  %s promiscuous mode\n", enable ? "entering" : "leaving");
+	XGMAC_IOWRITE_BITS(pdata, MAC_PFR, PR, val);
+
+	return 0;
+}
+
+static int xgbe_set_all_multicast_mode(struct xgbe_prv_data *pdata,
+				       unsigned int enable)
+{
+	unsigned int val = enable ? 1 : 0;
+
+	if (XGMAC_IOREAD_BITS(pdata, MAC_PFR, PM) == val)
+		return 0;
+
+	DBGPR("  %s allmulti mode\n", enable ? "entering" : "leaving");
+	XGMAC_IOWRITE_BITS(pdata, MAC_PFR, PM, val);
+
+	return 0;
+}
+
+static void xgbe_set_mac_reg(struct xgbe_prv_data *pdata,
+			     struct netdev_hw_addr *ha, unsigned int *mac_reg)
+{
+	unsigned int mac_addr_hi, mac_addr_lo;
+	u8 *mac_addr;
+
+	mac_addr_lo = 0;
+	mac_addr_hi = 0;
+
+	if (ha) {
+		mac_addr = (u8 *)&mac_addr_lo;
+		mac_addr[0] = ha->addr[0];
+		mac_addr[1] = ha->addr[1];
+		mac_addr[2] = ha->addr[2];
+		mac_addr[3] = ha->addr[3];
+		mac_addr = (u8 *)&mac_addr_hi;
+		mac_addr[0] = ha->addr[4];
+		mac_addr[1] = ha->addr[5];
+
+		DBGPR("  adding mac address %pM at 0x%04x\n", ha->addr,
+		      *mac_reg);
+
+		XGMAC_SET_BITS(mac_addr_hi, MAC_MACA1HR, AE, 1);
+	}
+
+	XGMAC_IOWRITE(pdata, *mac_reg, mac_addr_hi);
+	*mac_reg += MAC_MACA_INC;
+	XGMAC_IOWRITE(pdata, *mac_reg, mac_addr_lo);
+	*mac_reg += MAC_MACA_INC;
+}
+
+static void xgbe_set_mac_addn_addrs(struct xgbe_prv_data *pdata)
+{
+	struct net_device *netdev = pdata->netdev;
+	struct netdev_hw_addr *ha;
+	unsigned int mac_reg;
+	unsigned int addn_macs;
+
+	mac_reg = MAC_MACA1HR;
+	addn_macs = pdata->hw_feat.addn_mac;
+
+	if (netdev_uc_count(netdev) > addn_macs) {
+		xgbe_set_promiscuous_mode(pdata, 1);
+	} else {
+		netdev_for_each_uc_addr(ha, netdev) {
+			xgbe_set_mac_reg(pdata, ha, &mac_reg);
+			addn_macs--;
+		}
+
+		if (netdev_mc_count(netdev) > addn_macs) {
+			xgbe_set_all_multicast_mode(pdata, 1);
+		} else {
+			netdev_for_each_mc_addr(ha, netdev) {
+				xgbe_set_mac_reg(pdata, ha, &mac_reg);
+				addn_macs--;
+			}
+		}
+	}
+
+	/* Clear remaining additional MAC address entries */
+	while (addn_macs--)
+		xgbe_set_mac_reg(pdata, NULL, &mac_reg);
+}
+
+static void xgbe_set_mac_hash_table(struct xgbe_prv_data *pdata)
+{
+	struct net_device *netdev = pdata->netdev;
+	struct netdev_hw_addr *ha;
+	unsigned int hash_reg;
+	unsigned int hash_table_shift, hash_table_count;
+	u32 hash_table[XGBE_MAC_HASH_TABLE_SIZE];
+	u32 crc;
+	unsigned int i;
+
+	hash_table_shift = 26 - (pdata->hw_feat.hash_table_size >> 7);
+	hash_table_count = pdata->hw_feat.hash_table_size / 32;
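+	/* e.g. a 256-bit hash table gives a shift of 24, so the upper
+	 * 8 bits of the CRC select one bit across eight 32-bit registers
+	 */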
+	memset(hash_table, 0, sizeof(hash_table));
+
+	/* Build the MAC Hash Table register values */
+	netdev_for_each_uc_addr(ha, netdev) {
+		crc = bitrev32(~crc32_le(~0, ha->addr, ETH_ALEN));
+		crc >>= hash_table_shift;
+		hash_table[crc >> 5] |= (1 << (crc & 0x1f));
+	}
+
+	netdev_for_each_mc_addr(ha, netdev) {
+		crc = bitrev32(~crc32_le(~0, ha->addr, ETH_ALEN));
+		crc >>= hash_table_shift;
+		hash_table[crc >> 5] |= (1 << (crc & 0x1f));
+	}
+
+	/* Set the MAC Hash Table registers */
+	hash_reg = MAC_HTR0;
+	for (i = 0; i < hash_table_count; i++) {
+		XGMAC_IOWRITE(pdata, hash_reg, hash_table[i]);
+		hash_reg += MAC_HTR_INC;
+	}
+}
+
+static int xgbe_add_mac_addresses(struct xgbe_prv_data *pdata)
+{
+	if (pdata->hw_feat.hash_table_size)
+		xgbe_set_mac_hash_table(pdata);
+	else
+		xgbe_set_mac_addn_addrs(pdata);
+
+	return 0;
+}
+
+static int xgbe_set_mac_address(struct xgbe_prv_data *pdata, u8 *addr)
+{
+	unsigned int mac_addr_hi, mac_addr_lo;
+
+	mac_addr_hi = (addr[5] <<  8) | (addr[4] <<  0);
+	mac_addr_lo = (addr[3] << 24) | (addr[2] << 16) |
+		      (addr[1] <<  8) | (addr[0] <<  0);
+
+	XGMAC_IOWRITE(pdata, MAC_MACA0HR, mac_addr_hi);
+	XGMAC_IOWRITE(pdata, MAC_MACA0LR, mac_addr_lo);
+
+	return 0;
+}
+
+static int xgbe_read_mmd_regs(struct xgbe_prv_data *pdata, int prtad,
+			      int mmd_reg)
+{
+	unsigned int mmd_address;
+	int mmd_data;
+
+	if (mmd_reg & MII_ADDR_C45)
+		mmd_address = mmd_reg & ~MII_ADDR_C45;
+	else
+		mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff);
+
+	/* The PCS implementation has reversed the devices in
+	 * package registers, so we need to change 05 to 06 and
+	 * 06 to 05 when reading (these registers are read-only,
+	 * so there is no need to do this in the write function)
+	 */
+	if ((mmd_address & 0xffff) == 0x05)
+		mmd_address = (mmd_address & ~0xffff) | 0x06;
+	else if ((mmd_address & 0xffff) == 0x06)
+		mmd_address = (mmd_address & ~0xffff) | 0x05;
+
+	/* The PCS registers are accessed using mmio. The underlying APB3
+	 * management interface uses indirect addressing to access the MMD
+	 * register sets. This requires accessing of the PCS register in two
+	 * phases, an address phase and a data phase.
+	 *
+	 * The mmio interface is based on 32-bit offsets and values. All
+	 * register offsets must therefore be adjusted by left shifting the
+	 * offset 2 bits and reading 32 bits of data.
+	 */
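+	/* For example, an mmd_address of 0x1ABCD selects MMD window 0x1AB
+	 * in the address phase and reads from offset (0xCD << 2) in the
+	 * data phase.
+	 */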
+	mutex_lock(&pdata->xpcs_mutex);
+	XPCS_IOWRITE(pdata, PCS_MMD_SELECT << 2, mmd_address >> 8);
+	mmd_data = XPCS_IOREAD(pdata, (mmd_address & 0xff) << 2);
+	mutex_unlock(&pdata->xpcs_mutex);
+
+	return mmd_data;
+}
+
+static void xgbe_write_mmd_regs(struct xgbe_prv_data *pdata, int prtad,
+				int mmd_reg, int mmd_data)
+{
+	unsigned int mmd_address;
+
+	if (mmd_reg & MII_ADDR_C45)
+		mmd_address = mmd_reg & ~MII_ADDR_C45;
+	else
+		mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff);
+
+	/* If the PCS is changing modes, match the MAC speed to it */
+	if (((mmd_address >> 16) == MDIO_MMD_PCS) &&
+	    ((mmd_address & 0xffff) == MDIO_CTRL2)) {
+		struct phy_device *phydev = pdata->phydev;
+
+		if (mmd_data & MDIO_PCS_CTRL2_TYPE) {
+			/* KX mode */
+			if (phydev->supported & SUPPORTED_1000baseKX_Full)
+				xgbe_set_gmii_speed(pdata);
+			else
+				xgbe_set_gmii_2500_speed(pdata);
+		} else {
+			/* KR mode */
+			xgbe_set_xgmii_speed(pdata);
+		}
+	}
+
+	/* The PCS registers are accessed using mmio. The underlying APB3
+	 * management interface uses indirect addressing to access the MMD
+	 * register sets. This requires accessing of the PCS register in two
+	 * phases, an address phase and a data phase.
+	 *
+	 * The mmio interface is based on 32-bit offsets and values. All
+	 * register offsets must therefore be adjusted by left shifting the
+	 * offset 2 bits and reading 32 bits of data.
+	 */
+	mutex_lock(&pdata->xpcs_mutex);
+	XPCS_IOWRITE(pdata, PCS_MMD_SELECT << 2, mmd_address >> 8);
+	XPCS_IOWRITE(pdata, (mmd_address & 0xff) << 2, mmd_data);
+	mutex_unlock(&pdata->xpcs_mutex);
+}
+
+static int xgbe_tx_complete(struct xgbe_ring_desc *rdesc)
+{
+	return !XGMAC_GET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, OWN);
+}
+
+static int xgbe_disable_rx_csum(struct xgbe_prv_data *pdata)
+{
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, IPC, 0);
+
+	return 0;
+}
+
+static int xgbe_enable_rx_csum(struct xgbe_prv_data *pdata)
+{
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, IPC, 1);
+
+	return 0;
+}
+
+static int xgbe_enable_rx_vlan_stripping(struct xgbe_prv_data *pdata)
+{
+	/* Put the VLAN tag in the Rx descriptor */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANTR, EVLRXS, 1);
+
+	/* Don't check the VLAN type */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANTR, DOVLTC, 1);
+
+	/* Check only C-TAG (0x8100) packets */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANTR, ERSVLM, 0);
+
+	/* Don't consider an S-TAG (0x88A8) packet as a VLAN packet */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANTR, ESVL, 0);
+
+	/* Enable VLAN tag stripping */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANTR, EVLS, 0x3);
+
+	return 0;
+}
+
+static int xgbe_disable_rx_vlan_stripping(struct xgbe_prv_data *pdata)
+{
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANTR, EVLS, 0);
+
+	return 0;
+}
+
+static int xgbe_enable_rx_vlan_filtering(struct xgbe_prv_data *pdata)
+{
+	/* Enable VLAN filtering */
+	XGMAC_IOWRITE_BITS(pdata, MAC_PFR, VTFE, 1);
+
+	/* Enable VLAN Hash Table filtering */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANTR, VTHM, 1);
+
+	/* Disable VLAN tag inverse matching */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANTR, VTIM, 0);
+
+	/* Only filter on the lower 12-bits of the VLAN tag */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANTR, ETV, 1);
+
+	/* In order for the VLAN Hash Table filtering to be effective,
+	 * the VLAN tag identifier in the VLAN Tag Register must not
+	 * be zero.  Set the VLAN tag identifier to "1" to enable the
+	 * VLAN Hash Table filtering.  This implies that a VLAN tag of
+	 * 1 will always pass filtering.
+	 */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANTR, VL, 1);
+
+	return 0;
+}
+
+static int xgbe_disable_rx_vlan_filtering(struct xgbe_prv_data *pdata)
+{
+	/* Disable VLAN filtering */
+	XGMAC_IOWRITE_BITS(pdata, MAC_PFR, VTFE, 0);
+
+	return 0;
+}
+
+#ifndef CRCPOLY_LE
+#define CRCPOLY_LE 0xedb88320
+#endif
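+/* Compute the little-endian CRC32 of the low 12 bits (VLAN_VID_MASK)
+ * of a VLAN ID, one bit at a time
+ */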
+static u32 xgbe_vid_crc32_le(__le16 vid_le)
+{
+	u32 poly = CRCPOLY_LE;
+	u32 crc = ~0;
+	u32 temp = 0;
+	unsigned char *data = (unsigned char *)&vid_le;
+	unsigned char data_byte = 0;
+	int i, bits;
+
+	bits = get_bitmask_order(VLAN_VID_MASK);
+	for (i = 0; i < bits; i++) {
+		if ((i % 8) == 0)
+			data_byte = data[i / 8];
+
+		temp = ((crc & 1) ^ data_byte) & 1;
+		crc >>= 1;
+		data_byte >>= 1;
+
+		if (temp)
+			crc ^= poly;
+	}
+
+	return crc;
+}
+
+static int xgbe_update_vlan_hash_table(struct xgbe_prv_data *pdata)
+{
+	u32 crc;
+	u16 vid;
+	__le16 vid_le;
+	u16 vlan_hash_table = 0;
+
+	/* Generate the VLAN Hash Table value */
+	for_each_set_bit(vid, pdata->active_vlans, VLAN_N_VID) {
+		/* Get the CRC32 value of the VLAN ID */
+		vid_le = cpu_to_le16(vid);
+		crc = bitrev32(~xgbe_vid_crc32_le(vid_le)) >> 28;
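+		/* Keep only the upper 4 bits of the reversed CRC so the
+		 * result indexes the 16-bit hash table
+		 */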
+
+		vlan_hash_table |= (1 << crc);
+	}
+
+	/* Set the VLAN Hash Table filtering register */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANHTR, VLHT, vlan_hash_table);
+
+	return 0;
+}
+
+static void xgbe_tx_desc_reset(struct xgbe_ring_data *rdata)
+{
+	struct xgbe_ring_desc *rdesc = rdata->rdesc;
+
+	/* Reset the Tx descriptor
+	 *   Set buffer 1 (lo) address to zero
+	 *   Set buffer 1 (hi) address to zero
+	 *   Reset all other control bits (IC, TTSE, B2L & B1L)
+	 *   Reset all other control bits (OWN, CTXT, FD, LD, CPC, CIC, etc)
+	 */
+	rdesc->desc0 = 0;
+	rdesc->desc1 = 0;
+	rdesc->desc2 = 0;
+	rdesc->desc3 = 0;
+
+	/* Make sure ownership is written to the descriptor */
+	wmb();
+}
+
+static void xgbe_tx_desc_init(struct xgbe_channel *channel)
+{
+	struct xgbe_ring *ring = channel->tx_ring;
+	struct xgbe_ring_data *rdata;
+	int i;
+	int start_index = ring->cur;
+
+	DBGPR("-->tx_desc_init\n");
+
+	/* Initialize all descriptors */
+	for (i = 0; i < ring->rdesc_count; i++) {
+		rdata = XGBE_GET_DESC_DATA(ring, i);
+
+		/* Initialize Tx descriptor */
+		xgbe_tx_desc_reset(rdata);
+	}
+
+	/* Update the total number of Tx descriptors */
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_TDRLR, ring->rdesc_count - 1);
+
+	/* Update the starting address of descriptor ring */
+	rdata = XGBE_GET_DESC_DATA(ring, start_index);
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_TDLR_HI,
+			  upper_32_bits(rdata->rdesc_dma));
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_TDLR_LO,
+			  lower_32_bits(rdata->rdesc_dma));
+
+	DBGPR("<--tx_desc_init\n");
+}
+
+static void xgbe_rx_desc_reset(struct xgbe_ring_data *rdata)
+{
+	struct xgbe_ring_desc *rdesc = rdata->rdesc;
+
+	/* Reset the Rx descriptor
+	 *   Set buffer 1 (lo) address to header dma address (lo)
+	 *   Set buffer 1 (hi) address to header dma address (hi)
+	 *   Set buffer 2 (lo) address to buffer dma address (lo)
+	 *   Set buffer 2 (hi) address to buffer dma address (hi) and
+	 *     set control bits OWN and INTE
+	 */
+	rdesc->desc0 = cpu_to_le32(lower_32_bits(rdata->rx.hdr.dma));
+	rdesc->desc1 = cpu_to_le32(upper_32_bits(rdata->rx.hdr.dma));
+	rdesc->desc2 = cpu_to_le32(lower_32_bits(rdata->rx.buf.dma));
+	rdesc->desc3 = cpu_to_le32(upper_32_bits(rdata->rx.buf.dma));
+
+	XGMAC_SET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, INTE,
+			  rdata->interrupt ? 1 : 0);
+
+	/* Since the Rx DMA engine is likely running, make sure everything
+	 * is written to the descriptor(s) before setting the OWN bit
+	 * for the descriptor
+	 */
+	wmb();
+
+	XGMAC_SET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, OWN, 1);
+
+	/* Make sure ownership is written to the descriptor */
+	wmb();
+}
+
+static void xgbe_rx_desc_init(struct xgbe_channel *channel)
+{
+	struct xgbe_prv_data *pdata = channel->pdata;
+	struct xgbe_ring *ring = channel->rx_ring;
+	struct xgbe_ring_data *rdata;
+	unsigned int start_index = ring->cur;
+	unsigned int rx_coalesce, rx_frames;
+	unsigned int i;
+
+	DBGPR("-->rx_desc_init\n");
+
+	rx_coalesce = (pdata->rx_riwt || pdata->rx_frames) ? 1 : 0;
+	rx_frames = pdata->rx_frames;
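+	/* When Rx coalescing is active, only every rx_frames-th descriptor
+	 * requests an interrupt on completion (none at all when coalescing
+	 * is purely timer based)
+	 */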
+
+	/* Initialize all descriptors */
+	for (i = 0; i < ring->rdesc_count; i++) {
+		rdata = XGBE_GET_DESC_DATA(ring, i);
+
+		/* Set interrupt on completion bit as appropriate */
+		if (rx_coalesce && (!rx_frames || ((i + 1) % rx_frames)))
+			rdata->interrupt = 0;
+		else
+			rdata->interrupt = 1;
+
+		/* Initialize Rx descriptor */
+		xgbe_rx_desc_reset(rdata);
+	}
+
+	/* Update the total number of Rx descriptors */
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_RDRLR, ring->rdesc_count - 1);
+
+	/* Update the starting address of descriptor ring */
+	rdata = XGBE_GET_DESC_DATA(ring, start_index);
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_RDLR_HI,
+			  upper_32_bits(rdata->rdesc_dma));
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_RDLR_LO,
+			  lower_32_bits(rdata->rdesc_dma));
+
+	/* Update the Rx Descriptor Tail Pointer */
+	rdata = XGBE_GET_DESC_DATA(ring, start_index + ring->rdesc_count - 1);
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_RDTR_LO,
+			  lower_32_bits(rdata->rdesc_dma));
+
+	DBGPR("<--rx_desc_init\n");
+}
+
+static void xgbe_update_tstamp_addend(struct xgbe_prv_data *pdata,
+				      unsigned int addend)
+{
+	/* Set the addend register value and tell the device */
+	XGMAC_IOWRITE(pdata, MAC_TSAR, addend);
+	XGMAC_IOWRITE_BITS(pdata, MAC_TSCR, TSADDREG, 1);
+
+	/* Wait for addend update to complete */
+	while (XGMAC_IOREAD_BITS(pdata, MAC_TSCR, TSADDREG))
+		udelay(5);
+}
+
+static void xgbe_set_tstamp_time(struct xgbe_prv_data *pdata, unsigned int sec,
+				 unsigned int nsec)
+{
+	/* Set the time values and tell the device */
+	XGMAC_IOWRITE(pdata, MAC_STSUR, sec);
+	XGMAC_IOWRITE(pdata, MAC_STNUR, nsec);
+	XGMAC_IOWRITE_BITS(pdata, MAC_TSCR, TSINIT, 1);
+
+	/* Wait for time update to complete */
+	while (XGMAC_IOREAD_BITS(pdata, MAC_TSCR, TSINIT))
+		udelay(5);
+}
+
+static u64 xgbe_get_tstamp_time(struct xgbe_prv_data *pdata)
+{
+	u64 nsec;
+
+	nsec = XGMAC_IOREAD(pdata, MAC_STSR);
+	nsec *= NSEC_PER_SEC;
+	nsec += XGMAC_IOREAD(pdata, MAC_STNR);
+
+	return nsec;
+}
+
+static u64 xgbe_get_tx_tstamp(struct xgbe_prv_data *pdata)
+{
+	unsigned int tx_snr;
+	u64 nsec;
+
+	tx_snr = XGMAC_IOREAD(pdata, MAC_TXSNR);
+	if (XGMAC_GET_BITS(tx_snr, MAC_TXSNR, TXTSSTSMIS))
+		return 0;
+
+	nsec = XGMAC_IOREAD(pdata, MAC_TXSSR);
+	nsec *= NSEC_PER_SEC;
+	nsec += tx_snr;
+
+	return nsec;
+}
+
+static void xgbe_get_rx_tstamp(struct xgbe_packet_data *packet,
+			       struct xgbe_ring_desc *rdesc)
+{
+	u64 nsec;
+
+	if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_CONTEXT_DESC3, TSA) &&
+	    !XGMAC_GET_BITS_LE(rdesc->desc3, RX_CONTEXT_DESC3, TSD)) {
+		nsec = le32_to_cpu(rdesc->desc1);
+		nsec <<= 32;
+		nsec |= le32_to_cpu(rdesc->desc0);
+		if (nsec != 0xffffffffffffffffULL) {
+			packet->rx_tstamp = nsec;
+			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+				       RX_TSTAMP, 1);
+		}
+	}
+}
+
+static int xgbe_config_tstamp(struct xgbe_prv_data *pdata,
+			      unsigned int mac_tscr)
+{
+	/* Set one-nanosecond accuracy */
+	XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSCTRLSSR, 1);
+
+	/* Set fine timestamp update */
+	XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSCFUPDT, 1);
+
+	/* Overwrite earlier timestamps */
+	XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TXTSSTSM, 1);
+
+	XGMAC_IOWRITE(pdata, MAC_TSCR, mac_tscr);
+
+	/* Exit if timestamping is not enabled */
+	if (!XGMAC_GET_BITS(mac_tscr, MAC_TSCR, TSENA))
+		return 0;
+
+	/* Initialize time registers */
+	XGMAC_IOWRITE_BITS(pdata, MAC_SSIR, SSINC, XGBE_TSTAMP_SSINC);
+	XGMAC_IOWRITE_BITS(pdata, MAC_SSIR, SNSINC, XGBE_TSTAMP_SNSINC);
+	xgbe_update_tstamp_addend(pdata, pdata->tstamp_addend);
+	xgbe_set_tstamp_time(pdata, 0, 0);
+
+	/* Initialize the timecounter */
+	timecounter_init(&pdata->tstamp_tc, &pdata->tstamp_cc,
+			 ktime_to_ns(ktime_get_real()));
+
+	return 0;
+}
+
+static void xgbe_config_dcb_tc(struct xgbe_prv_data *pdata)
+{
+	struct ieee_ets *ets = pdata->ets;
+	unsigned int total_weight, min_weight, weight;
+	unsigned int i;
+
+	if (!ets)
+		return;
+
+	/* Set Tx to deficit weighted round robin scheduling algorithm (when
+	 * traffic class is using ETS algorithm)
+	 */
+	XGMAC_IOWRITE_BITS(pdata, MTL_OMR, ETSALG, MTL_ETSALG_DWRR);
+
+	/* Set Traffic Class algorithms */
+	total_weight = pdata->netdev->mtu * pdata->hw_feat.tc_cnt;
+	min_weight = total_weight / 100;
+	if (!min_weight)
+		min_weight = 1;
+
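+	/* e.g. an MTU of 1500 with 8 traffic classes gives a total_weight
+	 * of 12000, so a class assigned 25% bandwidth gets a DWRR weight
+	 * of 3000
+	 */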
+	for (i = 0; i < pdata->hw_feat.tc_cnt; i++) {
+		switch (ets->tc_tsa[i]) {
+		case IEEE_8021QAZ_TSA_STRICT:
+			DBGPR("  TC%u using SP\n", i);
+			XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_ETSCR, TSA,
+					       MTL_TSA_SP);
+			break;
+		case IEEE_8021QAZ_TSA_ETS:
+			weight = total_weight * ets->tc_tx_bw[i] / 100;
+			weight = clamp(weight, min_weight, total_weight);
+
+			DBGPR("  TC%u using DWRR (weight %u)\n", i, weight);
+			XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_ETSCR, TSA,
+					       MTL_TSA_ETS);
+			XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_QWR, QW,
+					       weight);
+			break;
+		}
+	}
+}
+
+static void xgbe_config_dcb_pfc(struct xgbe_prv_data *pdata)
+{
+	struct ieee_pfc *pfc = pdata->pfc;
+	struct ieee_ets *ets = pdata->ets;
+	unsigned int mask, reg, reg_val;
+	unsigned int tc, prio;
+
+	if (!pfc || !ets)
+		return;
+
+	for (tc = 0; tc < pdata->hw_feat.tc_cnt; tc++) {
+		mask = 0;
+		for (prio = 0; prio < IEEE_8021QAZ_MAX_TCS; prio++) {
+			if ((pfc->pfc_en & (1 << prio)) &&
+			    (ets->prio_tc[prio] == tc))
+				mask |= (1 << prio);
+		}
+		mask &= 0xff;
+
+		DBGPR("  TC%u PFC mask=%#x\n", tc, mask);
+		reg = MTL_TCPM0R + (MTL_TCPM_INC * (tc / MTL_TCPM_TC_PER_REG));
+		reg_val = XGMAC_IOREAD(pdata, reg);
+
+		reg_val &= ~(0xff << ((tc % MTL_TCPM_TC_PER_REG) << 3));
+		reg_val |= (mask << ((tc % MTL_TCPM_TC_PER_REG) << 3));
+
+		XGMAC_IOWRITE(pdata, reg, reg_val);
+	}
+
+	xgbe_config_flow_control(pdata);
+}
+
+static void xgbe_tx_start_xmit(struct xgbe_channel *channel,
+			       struct xgbe_ring *ring)
+{
+	struct xgbe_prv_data *pdata = channel->pdata;
+	struct xgbe_ring_data *rdata;
+
+	/* Issue a poll command to Tx DMA by writing address
+	 * of next immediate free descriptor
+	 */
+	rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_TDTR_LO,
+			  lower_32_bits(rdata->rdesc_dma));
+
+	/* Start the Tx coalescing timer */
+	if (pdata->tx_usecs && !channel->tx_timer_active) {
+		channel->tx_timer_active = 1;
+		hrtimer_start(&channel->tx_timer,
+			      ktime_set(0, pdata->tx_usecs * NSEC_PER_USEC),
+			      HRTIMER_MODE_REL);
+	}
+
+	ring->tx.xmit_more = 0;
+}
+
+static void xgbe_dev_xmit(struct xgbe_channel *channel)
+{
+	struct xgbe_prv_data *pdata = channel->pdata;
+	struct xgbe_ring *ring = channel->tx_ring;
+	struct xgbe_ring_data *rdata;
+	struct xgbe_ring_desc *rdesc;
+	struct xgbe_packet_data *packet = &ring->packet_data;
+	unsigned int csum, tso, vlan;
+	unsigned int tso_context, vlan_context;
+	unsigned int tx_set_ic;
+	int start_index = ring->cur;
+	int cur_index = ring->cur;
+	int i;
+
+	DBGPR("-->xgbe_dev_xmit\n");
+
+	csum = XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			      CSUM_ENABLE);
+	tso = XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			     TSO_ENABLE);
+	vlan = XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			      VLAN_CTAG);
+
+	if (tso && (packet->mss != ring->tx.cur_mss))
+		tso_context = 1;
+	else
+		tso_context = 0;
+
+	if (vlan && (packet->vlan_ctag != ring->tx.cur_vlan_ctag))
+		vlan_context = 1;
+	else
+		vlan_context = 0;
+
+	/* Determine if an interrupt should be generated for this Tx:
+	 *   Interrupt:
+	 *     - Tx frame count exceeds the frame count setting
+	 *     - Addition of Tx frame count to the frame count since the
+	 *       last interrupt was set exceeds the frame count setting
+	 *   No interrupt:
+	 *     - No frame count setting specified (ethtool -C ethX tx-frames 0)
+	 *     - Addition of Tx frame count to the frame count since the
+	 *       last interrupt was set does not exceed the frame count setting
+	 */
+	ring->coalesce_count += packet->tx_packets;
+	if (!pdata->tx_frames)
+		tx_set_ic = 0;
+	else if (packet->tx_packets > pdata->tx_frames)
+		tx_set_ic = 1;
+	else if ((ring->coalesce_count % pdata->tx_frames) <
+		 packet->tx_packets)
+		tx_set_ic = 1;
+	else
+		tx_set_ic = 0;
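+	/* e.g. with "ethtool -C ethX tx-frames 25", the IC bit is requested
+	 * on the descriptor whose packets carry coalesce_count across a
+	 * multiple of 25
+	 */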
+
+	rdata = XGBE_GET_DESC_DATA(ring, cur_index);
+	rdesc = rdata->rdesc;
+
+	/* Create a context descriptor if this is a TSO packet */
+	if (tso_context || vlan_context) {
+		if (tso_context) {
+			DBGPR("  TSO context descriptor, mss=%u\n",
+			      packet->mss);
+
+			/* Set the MSS size */
+			XGMAC_SET_BITS_LE(rdesc->desc2, TX_CONTEXT_DESC2,
+					  MSS, packet->mss);
+
+			/* Mark it as a CONTEXT descriptor */
+			XGMAC_SET_BITS_LE(rdesc->desc3, TX_CONTEXT_DESC3,
+					  CTXT, 1);
+
+			/* Indicate this descriptor contains the MSS */
+			XGMAC_SET_BITS_LE(rdesc->desc3, TX_CONTEXT_DESC3,
+					  TCMSSV, 1);
+
+			ring->tx.cur_mss = packet->mss;
+		}
+
+		if (vlan_context) {
+			DBGPR("  VLAN context descriptor, ctag=%u\n",
+			      packet->vlan_ctag);
+
+			/* Mark it as a CONTEXT descriptor */
+			XGMAC_SET_BITS_LE(rdesc->desc3, TX_CONTEXT_DESC3,
+					  CTXT, 1);
+
+			/* Set the VLAN tag */
+			XGMAC_SET_BITS_LE(rdesc->desc3, TX_CONTEXT_DESC3,
+					  VT, packet->vlan_ctag);
+
+			/* Indicate this descriptor contains the VLAN tag */
+			XGMAC_SET_BITS_LE(rdesc->desc3, TX_CONTEXT_DESC3,
+					  VLTV, 1);
+
+			ring->tx.cur_vlan_ctag = packet->vlan_ctag;
+		}
+
+		cur_index++;
+		rdata = XGBE_GET_DESC_DATA(ring, cur_index);
+		rdesc = rdata->rdesc;
+	}
+
+	/* Update buffer address (for TSO this is the header) */
+	rdesc->desc0 = cpu_to_le32(lower_32_bits(rdata->skb_dma));
+	rdesc->desc1 = cpu_to_le32(upper_32_bits(rdata->skb_dma));
+
+	/* Update the buffer length */
+	XGMAC_SET_BITS_LE(rdesc->desc2, TX_NORMAL_DESC2, HL_B1L,
+			  rdata->skb_dma_len);
+
+	/* VLAN tag insertion check */
+	if (vlan)
+		XGMAC_SET_BITS_LE(rdesc->desc2, TX_NORMAL_DESC2, VTIR,
+				  TX_NORMAL_DESC2_VLAN_INSERT);
+
+	/* Timestamp enablement check */
+	if (XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES, PTP))
+		XGMAC_SET_BITS_LE(rdesc->desc2, TX_NORMAL_DESC2, TTSE, 1);
+
+	/* Mark it as First Descriptor */
+	XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, FD, 1);
+
+	/* Mark it as a NORMAL descriptor */
+	XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, CTXT, 0);
+
+	/* Set OWN bit if not the first descriptor */
+	if (cur_index != start_index)
+		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, OWN, 1);
+
+	if (tso) {
+		/* Enable TSO */
+		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, TSE, 1);
+		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, TCPPL,
+				  packet->tcp_payload_len);
+		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, TCPHDRLEN,
+				  packet->tcp_header_len / 4);
+	} else {
+		/* Enable CRC and pad insertion (a CPC value of 0 enables both) */
+		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, CPC, 0);
+
+		/* Enable HW CSUM */
+		if (csum)
+			XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3,
+					  CIC, 0x3);
+
+		/* Set the total length to be transmitted */
+		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, FL,
+				  packet->length);
+	}
+
+	for (i = cur_index - start_index + 1; i < packet->rdesc_count; i++) {
+		cur_index++;
+		rdata = XGBE_GET_DESC_DATA(ring, cur_index);
+		rdesc = rdata->rdesc;
+
+		/* Update buffer address */
+		rdesc->desc0 = cpu_to_le32(lower_32_bits(rdata->skb_dma));
+		rdesc->desc1 = cpu_to_le32(upper_32_bits(rdata->skb_dma));
+
+		/* Update the buffer length */
+		XGMAC_SET_BITS_LE(rdesc->desc2, TX_NORMAL_DESC2, HL_B1L,
+				  rdata->skb_dma_len);
+
+		/* Set OWN bit */
+		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, OWN, 1);
+
+		/* Mark it as NORMAL descriptor */
+		XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, CTXT, 0);
+
+		/* Enable HW CSUM */
+		if (csum)
+			XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3,
+					  CIC, 0x3);
+	}
+
+	/* Set LAST bit for the last descriptor */
+	XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, LD, 1);
+
+	/* Set IC bit based on Tx coalescing settings */
+	if (tx_set_ic)
+		XGMAC_SET_BITS_LE(rdesc->desc2, TX_NORMAL_DESC2, IC, 1);
+
+	/* Save the Tx info to report back during cleanup */
+	rdata->tx.packets = packet->tx_packets;
+	rdata->tx.bytes = packet->tx_bytes;
+
+	/* In case the Tx DMA engine is running, make sure everything
+	 * is written to the descriptor(s) before setting the OWN bit
+	 * for the first descriptor
+	 */
+	wmb();
+
+	/* Set OWN bit for the first descriptor */
+	rdata = XGBE_GET_DESC_DATA(ring, start_index);
+	rdesc = rdata->rdesc;
+	XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, OWN, 1);
+
+#ifdef XGMAC_ENABLE_TX_DESC_DUMP
+	xgbe_a0_dump_tx_desc(ring, start_index, packet->rdesc_count, 1);
+#endif
+
+	/* Make sure ownership is written to the descriptor */
+	wmb();
+
+	ring->cur = cur_index + 1;
+	if (!packet->skb->xmit_more ||
+	    netif_xmit_stopped(netdev_get_tx_queue(pdata->netdev,
+						   channel->queue_index)))
+		xgbe_tx_start_xmit(channel, ring);
+	else
+		ring->tx.xmit_more = 1;
+
+	DBGPR("  %s: descriptors %u to %u written\n",
+	      channel->name, start_index & (ring->rdesc_count - 1),
+	      (ring->cur - 1) & (ring->rdesc_count - 1));
+
+	DBGPR("<--xgbe_dev_xmit\n");
+}
+
+static int xgbe_dev_read(struct xgbe_channel *channel)
+{
+	struct xgbe_ring *ring = channel->rx_ring;
+	struct xgbe_ring_data *rdata;
+	struct xgbe_ring_desc *rdesc;
+	struct xgbe_packet_data *packet = &ring->packet_data;
+	struct net_device *netdev = channel->pdata->netdev;
+	unsigned int err, etlt, l34t;
+
+	DBGPR("-->xgbe_dev_read: cur = %d\n", ring->cur);
+
+	rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
+	rdesc = rdata->rdesc;
+
+	/* Check for data availability */
+	if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, OWN))
+		return 1;
+
+	/* Make sure descriptor fields are read after reading the OWN bit */
+	rmb();
+
+#ifdef XGMAC_ENABLE_RX_DESC_DUMP
+	xgbe_a0_dump_rx_desc(ring, rdesc, ring->cur);
+#endif
+
+	if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, CTXT)) {
+		/* Timestamp Context Descriptor */
+		xgbe_get_rx_tstamp(packet, rdesc);
+
+		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+			       CONTEXT, 1);
+		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+			       CONTEXT_NEXT, 0);
+		return 0;
+	}
+
+	/* Normal Descriptor, be sure Context Descriptor bit is off */
+	XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES, CONTEXT, 0);
+
+	/* Indicate if a Context Descriptor is next */
+	if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, CDA))
+		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+			       CONTEXT_NEXT, 1);
+
+	/* Get the header length */
+	if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, FD))
+		rdata->rx.hdr_len = XGMAC_GET_BITS_LE(rdesc->desc2,
+						      RX_NORMAL_DESC2, HL);
+
+	/* Get the RSS hash */
+	if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, RSV)) {
+		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+			       RSS_HASH, 1);
+
+		packet->rss_hash = le32_to_cpu(rdesc->desc1);
+
+		l34t = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, L34T);
+		switch (l34t) {
+		case RX_DESC3_L34T_IPV4_TCP:
+		case RX_DESC3_L34T_IPV4_UDP:
+		case RX_DESC3_L34T_IPV6_TCP:
+		case RX_DESC3_L34T_IPV6_UDP:
+			packet->rss_hash_type = PKT_HASH_TYPE_L4;
+			break;
+		default:
+			packet->rss_hash_type = PKT_HASH_TYPE_L3;
+		}
+	}
+
+	/* Get the packet length */
+	rdata->rx.len = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, PL);
+
+	if (!XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, LD)) {
+		/* Not all the data has been transferred for this packet */
+		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+			       INCOMPLETE, 1);
+		return 0;
+	}
+
+	/* This is the last of the data for this packet */
+	XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+		       INCOMPLETE, 0);
+
+	/* Set checksum done indicator as appropriate */
+	if (channel->pdata->netdev->features & NETIF_F_RXCSUM)
+		XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+			       CSUM_DONE, 1);
+
+	/* Check for errors (only valid in last descriptor) */
+	err = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, ES);
+	etlt = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, ETLT);
+	DBGPR("  err=%u, etlt=%#x\n", err, etlt);
+
+	if (!err || !etlt) {
+		/* No error if err is 0 or etlt is 0 */
+		if ((etlt == 0x09) &&
+		    (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) {
+			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+				       VLAN_CTAG, 1);
+			packet->vlan_ctag = XGMAC_GET_BITS_LE(rdesc->desc0,
+							      RX_NORMAL_DESC0,
+							      OVT);
+			DBGPR("  vlan-ctag=0x%04x\n", packet->vlan_ctag);
+		}
+	} else {
+		if ((etlt == 0x05) || (etlt == 0x06))
+			XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+				       CSUM_DONE, 0);
+		else
+			XGMAC_SET_BITS(packet->errors, RX_PACKET_ERRORS,
+				       FRAME, 1);
+	}
+
+	DBGPR("<--xgbe_dev_read: %s - descriptor=%u (cur=%d)\n", channel->name,
+	      ring->cur & (ring->rdesc_count - 1), ring->cur);
+
+	return 0;
+}
+
+static int xgbe_is_context_desc(struct xgbe_ring_desc *rdesc)
+{
+	/* Rx and Tx share CTXT bit, so check TDES3.CTXT bit */
+	return XGMAC_GET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, CTXT);
+}
+
+static int xgbe_is_last_desc(struct xgbe_ring_desc *rdesc)
+{
+	/* Rx and Tx share LD bit, so check TDES3.LD bit */
+	return XGMAC_GET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, LD);
+}
+
+static int xgbe_enable_int(struct xgbe_channel *channel,
+			   enum xgbe_int int_id)
+{
+	unsigned int dma_ch_ier;
+
+	dma_ch_ier = XGMAC_DMA_IOREAD(channel, DMA_CH_IER);
+
+	switch (int_id) {
+	case XGMAC_INT_DMA_CH_SR_TI:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 1);
+		break;
+	case XGMAC_INT_DMA_CH_SR_TPS:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TXSE, 1);
+		break;
+	case XGMAC_INT_DMA_CH_SR_TBU:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TBUE, 1);
+		break;
+	case XGMAC_INT_DMA_CH_SR_RI:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RIE, 1);
+		break;
+	case XGMAC_INT_DMA_CH_SR_RBU:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RBUE, 1);
+		break;
+	case XGMAC_INT_DMA_CH_SR_RPS:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RSE, 1);
+		break;
+	case XGMAC_INT_DMA_CH_SR_TI_RI:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 1);
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RIE, 1);
+		break;
+	case XGMAC_INT_DMA_CH_SR_FBE:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, FBEE, 1);
+		break;
+	case XGMAC_INT_DMA_ALL:
+		dma_ch_ier |= channel->saved_ier;
+		break;
+	default:
+		return -1;
+	}
+
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, dma_ch_ier);
+
+	return 0;
+}
+
+static int xgbe_disable_int(struct xgbe_channel *channel,
+			    enum xgbe_int int_id)
+{
+	unsigned int dma_ch_ier;
+
+	dma_ch_ier = XGMAC_DMA_IOREAD(channel, DMA_CH_IER);
+
+	switch (int_id) {
+	case XGMAC_INT_DMA_CH_SR_TI:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 0);
+		break;
+	case XGMAC_INT_DMA_CH_SR_TPS:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TXSE, 0);
+		break;
+	case XGMAC_INT_DMA_CH_SR_TBU:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TBUE, 0);
+		break;
+	case XGMAC_INT_DMA_CH_SR_RI:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RIE, 0);
+		break;
+	case XGMAC_INT_DMA_CH_SR_RBU:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RBUE, 0);
+		break;
+	case XGMAC_INT_DMA_CH_SR_RPS:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RSE, 0);
+		break;
+	case XGMAC_INT_DMA_CH_SR_TI_RI:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, TIE, 0);
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, RIE, 0);
+		break;
+	case XGMAC_INT_DMA_CH_SR_FBE:
+		XGMAC_SET_BITS(dma_ch_ier, DMA_CH_IER, FBEE, 0);
+		break;
+	case XGMAC_INT_DMA_ALL:
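+		/* Save the currently enabled bits so that a later
+		 * XGMAC_INT_DMA_ALL enable can restore them
+		 */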
+		channel->saved_ier = dma_ch_ier & XGBE_DMA_INTERRUPT_MASK;
+		dma_ch_ier &= ~XGBE_DMA_INTERRUPT_MASK;
+		break;
+	default:
+		return -1;
+	}
+
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_IER, dma_ch_ier);
+
+	return 0;
+}
+
+static int xgbe_exit(struct xgbe_prv_data *pdata)
+{
+	unsigned int count = 2000;
+
+	DBGPR("-->xgbe_exit\n");
+
+	/* Issue a software reset */
+	XGMAC_IOWRITE_BITS(pdata, DMA_MR, SWR, 1);
+	usleep_range(10, 15);
+
+	/* Poll Until Poll Condition */
+	while (--count && XGMAC_IOREAD_BITS(pdata, DMA_MR, SWR))
+		usleep_range(500, 600);
+
+	if (!count)
+		return -EBUSY;
+
+	DBGPR("<--xgbe_exit\n");
+
+	return 0;
+}
+
+static int xgbe_flush_tx_queues(struct xgbe_prv_data *pdata)
+{
+	unsigned int i, count;
+
+	if (XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER) < 0x21)
+		return 0;
+
+	for (i = 0; i < pdata->tx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, FTQ, 1);
+
+	/* Poll Until Poll Condition */
+	for (i = 0; i < pdata->tx_q_count; i++) {
+		count = 2000;
+		while (--count && XGMAC_MTL_IOREAD_BITS(pdata, i,
+							MTL_Q_TQOMR, FTQ))
+			usleep_range(500, 600);
+
+		if (!count)
+			return -EBUSY;
+	}
+
+	return 0;
+}
+
+static void xgbe_config_dma_bus(struct xgbe_prv_data *pdata)
+{
+	/* Set enhanced addressing mode */
+	XGMAC_IOWRITE_BITS(pdata, DMA_SBMR, EAME, 1);
+
+	/* Set the System Bus mode */
+	XGMAC_IOWRITE_BITS(pdata, DMA_SBMR, UNDEF, 1);
+	XGMAC_IOWRITE_BITS(pdata, DMA_SBMR, BLEN_256, 1);
+}
+
+static void xgbe_config_dma_cache(struct xgbe_prv_data *pdata)
+{
+	unsigned int arcache, awcache;
+
+	arcache = 0;
+	XGMAC_SET_BITS(arcache, DMA_AXIARCR, DRC, pdata->arcache);
+	XGMAC_SET_BITS(arcache, DMA_AXIARCR, DRD, pdata->axdomain);
+	XGMAC_SET_BITS(arcache, DMA_AXIARCR, TEC, pdata->arcache);
+	XGMAC_SET_BITS(arcache, DMA_AXIARCR, TED, pdata->axdomain);
+	XGMAC_SET_BITS(arcache, DMA_AXIARCR, THC, pdata->arcache);
+	XGMAC_SET_BITS(arcache, DMA_AXIARCR, THD, pdata->axdomain);
+	XGMAC_IOWRITE(pdata, DMA_AXIARCR, arcache);
+
+	awcache = 0;
+	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, DWC, pdata->awcache);
+	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, DWD, pdata->axdomain);
+	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, RPC, pdata->awcache);
+	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, RPD, pdata->axdomain);
+	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, RHC, pdata->awcache);
+	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, RHD, pdata->axdomain);
+	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, TDC, pdata->awcache);
+	XGMAC_SET_BITS(awcache, DMA_AXIAWCR, TDD, pdata->axdomain);
+	XGMAC_IOWRITE(pdata, DMA_AXIAWCR, awcache);
+}
+
+static void xgbe_config_mtl_mode(struct xgbe_prv_data *pdata)
+{
+	unsigned int i;
+
+	/* Set Tx to weighted round robin scheduling algorithm */
+	XGMAC_IOWRITE_BITS(pdata, MTL_OMR, ETSALG, MTL_ETSALG_WRR);
+
+	/* Set Tx traffic classes to use WRR algorithm with equal weights */
+	for (i = 0; i < pdata->hw_feat.tc_cnt; i++) {
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_ETSCR, TSA,
+				       MTL_TSA_ETS);
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_TC_QWR, QW, 1);
+	}
+
+	/* Set Rx to strict priority algorithm */
+	XGMAC_IOWRITE_BITS(pdata, MTL_OMR, RAA, MTL_RAA_SP);
+}
+
+static unsigned int xgbe_calculate_per_queue_fifo(unsigned int fifo_size,
+						  unsigned int queue_count)
+{
+	unsigned int q_fifo_size = 0;
+	enum xgbe_mtl_fifo_size p_fifo = XGMAC_MTL_FIFO_SIZE_256;
+
+	/* Calculate Tx/Rx fifo share per queue */
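+	/* The hardware encodes the total fifo size as 128 bytes shifted
+	 * left by this value
+	 */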
+	switch (fifo_size) {
+	case 0:
+		q_fifo_size = XGBE_FIFO_SIZE_B(128);
+		break;
+	case 1:
+		q_fifo_size = XGBE_FIFO_SIZE_B(256);
+		break;
+	case 2:
+		q_fifo_size = XGBE_FIFO_SIZE_B(512);
+		break;
+	case 3:
+		q_fifo_size = XGBE_FIFO_SIZE_KB(1);
+		break;
+	case 4:
+		q_fifo_size = XGBE_FIFO_SIZE_KB(2);
+		break;
+	case 5:
+		q_fifo_size = XGBE_FIFO_SIZE_KB(4);
+		break;
+	case 6:
+		q_fifo_size = XGBE_FIFO_SIZE_KB(8);
+		break;
+	case 7:
+		q_fifo_size = XGBE_FIFO_SIZE_KB(16);
+		break;
+	case 8:
+		q_fifo_size = XGBE_FIFO_SIZE_KB(32);
+		break;
+	case 9:
+		q_fifo_size = XGBE_FIFO_SIZE_KB(64);
+		break;
+	case 10:
+		q_fifo_size = XGBE_FIFO_SIZE_KB(128);
+		break;
+	case 11:
+		q_fifo_size = XGBE_FIFO_SIZE_KB(256);
+		break;
+	}
+
+	/* The configured value may exceed the usable fifo RAM, so clamp it */
+	q_fifo_size = min_t(unsigned int, XGBE_FIFO_MAX, q_fifo_size);
+
+	q_fifo_size = q_fifo_size / queue_count;
+
+	/* Set the queue fifo size programmable value */
+	if (q_fifo_size >= XGBE_FIFO_SIZE_KB(256))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_256K;
+	else if (q_fifo_size >= XGBE_FIFO_SIZE_KB(128))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_128K;
+	else if (q_fifo_size >= XGBE_FIFO_SIZE_KB(64))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_64K;
+	else if (q_fifo_size >= XGBE_FIFO_SIZE_KB(32))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_32K;
+	else if (q_fifo_size >= XGBE_FIFO_SIZE_KB(16))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_16K;
+	else if (q_fifo_size >= XGBE_FIFO_SIZE_KB(8))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_8K;
+	else if (q_fifo_size >= XGBE_FIFO_SIZE_KB(4))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_4K;
+	else if (q_fifo_size >= XGBE_FIFO_SIZE_KB(2))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_2K;
+	else if (q_fifo_size >= XGBE_FIFO_SIZE_KB(1))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_1K;
+	else if (q_fifo_size >= XGBE_FIFO_SIZE_B(512))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_512;
+	else if (q_fifo_size >= XGBE_FIFO_SIZE_B(256))
+		p_fifo = XGMAC_MTL_FIFO_SIZE_256;
+
+	return p_fifo;
+}
+
+static void xgbe_config_tx_fifo_size(struct xgbe_prv_data *pdata)
+{
+	enum xgbe_mtl_fifo_size fifo_size;
+	unsigned int i;
+
+	fifo_size = xgbe_calculate_per_queue_fifo(pdata->hw_feat.tx_fifo_size,
+						  pdata->tx_q_count);
+
+	for (i = 0; i < pdata->tx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TQS, fifo_size);
+
+	netdev_notice(pdata->netdev, "%d Tx queues, %d byte fifo per queue\n",
+		      pdata->tx_q_count, ((fifo_size + 1) * 256));
+}
+
+static void xgbe_config_rx_fifo_size(struct xgbe_prv_data *pdata)
+{
+	enum xgbe_mtl_fifo_size fifo_size;
+	unsigned int i;
+
+	fifo_size = xgbe_calculate_per_queue_fifo(pdata->hw_feat.rx_fifo_size,
+						  pdata->rx_q_count);
+
+	for (i = 0; i < pdata->rx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, RQS, fifo_size);
+
+	netdev_notice(pdata->netdev, "%d Rx queues, %d byte fifo per queue\n",
+		      pdata->rx_q_count, ((fifo_size + 1) * 256));
+}
+
+static void xgbe_config_queue_mapping(struct xgbe_prv_data *pdata)
+{
+	unsigned int qptc, qptc_extra, queue;
+	unsigned int prio_queues;
+	unsigned int ppq, ppq_extra, prio;
+	unsigned int mask;
+	unsigned int i, j, reg, reg_val;
+
+	/* Map the MTL Tx Queues to Traffic Classes
+	 *   Note: Tx Queues >= Traffic Classes
+	 */
+	qptc = pdata->tx_q_count / pdata->hw_feat.tc_cnt;
+	qptc_extra = pdata->tx_q_count % pdata->hw_feat.tc_cnt;
+
+	for (i = 0, queue = 0; i < pdata->hw_feat.tc_cnt; i++) {
+		for (j = 0; j < qptc; j++) {
+			DBGPR("  TXq%u mapped to TC%u\n", queue, i);
+			XGMAC_MTL_IOWRITE_BITS(pdata, queue, MTL_Q_TQOMR,
+					       Q2TCMAP, i);
+			pdata->q2tc_map[queue++] = i;
+		}
+
+		if (i < qptc_extra) {
+			DBGPR("  TXq%u mapped to TC%u\n", queue, i);
+			XGMAC_MTL_IOWRITE_BITS(pdata, queue, MTL_Q_TQOMR,
+					       Q2TCMAP, i);
+			pdata->q2tc_map[queue++] = i;
+		}
+	}
+
+	/* Map the 8 VLAN priority values to available MTL Rx queues */
+	prio_queues = min_t(unsigned int, IEEE_8021QAZ_MAX_TCS,
+			    pdata->rx_q_count);
+	ppq = IEEE_8021QAZ_MAX_TCS / prio_queues;
+	ppq_extra = IEEE_8021QAZ_MAX_TCS % prio_queues;
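+	/* e.g. with three Rx queues, ppq = 2 and ppq_extra = 2, so queues
+	 * 0 and 1 each take three priorities and queue 2 takes two
+	 */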
+
+	reg = MAC_RQC2R;
+	reg_val = 0;
+	for (i = 0, prio = 0; i < prio_queues;) {
+		mask = 0;
+		for (j = 0; j < ppq; j++) {
+			DBGPR("  PRIO%u mapped to RXq%u\n", prio, i);
+			mask |= (1 << prio);
+			pdata->prio2q_map[prio++] = i;
+		}
+
+		if (i < ppq_extra) {
+			DBGPR("  PRIO%u mapped to RXq%u\n", prio, i);
+			mask |= (1 << prio);
+			pdata->prio2q_map[prio++] = i;
+		}
+
+		reg_val |= (mask << ((i++ % MAC_RQC2_Q_PER_REG) << 3));
+
+		if ((i % MAC_RQC2_Q_PER_REG) && (i != prio_queues))
+			continue;
+
+		XGMAC_IOWRITE(pdata, reg, reg_val);
+		reg += MAC_RQC2_INC;
+		reg_val = 0;
+	}
+
+	/* Select dynamic mapping of MTL Rx queue to DMA Rx channel */
+	reg = MTL_RQDCM0R;
+	reg_val = 0;
+	for (i = 0; i < pdata->rx_q_count;) {
+		reg_val |= (0x80 << ((i++ % MTL_RQDCM_Q_PER_REG) << 3));
+
+		if ((i % MTL_RQDCM_Q_PER_REG) && (i != pdata->rx_q_count))
+			continue;
+
+		XGMAC_IOWRITE(pdata, reg, reg_val);
+
+		reg += MTL_RQDCM_INC;
+		reg_val = 0;
+	}
+}
+
+static void xgbe_config_flow_control_threshold(struct xgbe_prv_data *pdata)
+{
+	unsigned int i;
+
+	for (i = 0; i < pdata->rx_q_count; i++) {
+		/* Activate flow control when less than 4k left in fifo */
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, RFA, 2);
+
+		/* De-activate flow control when more than 6k left in fifo */
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, RFD, 4);
+	}
+}
+
+static void xgbe_config_mac_address(struct xgbe_prv_data *pdata)
+{
+	xgbe_set_mac_address(pdata, pdata->netdev->dev_addr);
+
+	/* Filtering is done using perfect filtering and hash filtering */
+	if (pdata->hw_feat.hash_table_size) {
+		XGMAC_IOWRITE_BITS(pdata, MAC_PFR, HPF, 1);
+		XGMAC_IOWRITE_BITS(pdata, MAC_PFR, HUC, 1);
+		XGMAC_IOWRITE_BITS(pdata, MAC_PFR, HMC, 1);
+	}
+}
+
+static void xgbe_config_jumbo_enable(struct xgbe_prv_data *pdata)
+{
+	unsigned int val;
+
+	val = (pdata->netdev->mtu > XGMAC_STD_PACKET_MTU) ? 1 : 0;
+
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
+}
+
+static void xgbe_config_mac_speed(struct xgbe_prv_data *pdata)
+{
+	switch (pdata->phy_speed) {
+	case SPEED_10000:
+		xgbe_set_xgmii_speed(pdata);
+		break;
+
+	case SPEED_2500:
+		xgbe_set_gmii_2500_speed(pdata);
+		break;
+
+	case SPEED_1000:
+		xgbe_set_gmii_speed(pdata);
+		break;
+	}
+}
+
+static void xgbe_config_checksum_offload(struct xgbe_prv_data *pdata)
+{
+	if (pdata->netdev->features & NETIF_F_RXCSUM)
+		xgbe_enable_rx_csum(pdata);
+	else
+		xgbe_disable_rx_csum(pdata);
+}
+
+static void xgbe_config_vlan_support(struct xgbe_prv_data *pdata)
+{
+	/* Indicate that VLAN Tx CTAGs come from context descriptors */
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
+	XGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
+
+	/* Set the current VLAN Hash Table register value */
+	xgbe_update_vlan_hash_table(pdata);
+
+	if (pdata->netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
+		xgbe_enable_rx_vlan_filtering(pdata);
+	else
+		xgbe_disable_rx_vlan_filtering(pdata);
+
+	if (pdata->netdev->features & NETIF_F_HW_VLAN_CTAG_RX)
+		xgbe_enable_rx_vlan_stripping(pdata);
+	else
+		xgbe_disable_rx_vlan_stripping(pdata);
+}
+
+static u64 xgbe_mmc_read(struct xgbe_prv_data *pdata, unsigned int reg_lo)
+{
+	bool read_hi;
+	u64 val;
+
+	switch (reg_lo) {
+	/* These registers are always 64 bit */
+	case MMC_TXOCTETCOUNT_GB_LO:
+	case MMC_TXOCTETCOUNT_G_LO:
+	case MMC_RXOCTETCOUNT_GB_LO:
+	case MMC_RXOCTETCOUNT_G_LO:
+		read_hi = true;
+		break;
+
+	default:
+		read_hi = false;
+	}
+
+	val = XGMAC_IOREAD(pdata, reg_lo);
+
+	if (read_hi)
+		val |= ((u64)XGMAC_IOREAD(pdata, reg_lo + 4) << 32);
+
+	return val;
+}
+
+static void xgbe_tx_mmc_int(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_mmc_stats *stats = &pdata->mmc_stats;
+	unsigned int mmc_isr = XGMAC_IOREAD(pdata, MMC_TISR);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXOCTETCOUNT_GB))
+		stats->txoctetcount_gb +=
+			xgbe_mmc_read(pdata, MMC_TXOCTETCOUNT_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXFRAMECOUNT_GB))
+		stats->txframecount_gb +=
+			xgbe_mmc_read(pdata, MMC_TXFRAMECOUNT_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXBROADCASTFRAMES_G))
+		stats->txbroadcastframes_g +=
+			xgbe_mmc_read(pdata, MMC_TXBROADCASTFRAMES_G_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXMULTICASTFRAMES_G))
+		stats->txmulticastframes_g +=
+			xgbe_mmc_read(pdata, MMC_TXMULTICASTFRAMES_G_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX64OCTETS_GB))
+		stats->tx64octets_gb +=
+			xgbe_mmc_read(pdata, MMC_TX64OCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX65TO127OCTETS_GB))
+		stats->tx65to127octets_gb +=
+			xgbe_mmc_read(pdata, MMC_TX65TO127OCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX128TO255OCTETS_GB))
+		stats->tx128to255octets_gb +=
+			xgbe_mmc_read(pdata, MMC_TX128TO255OCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX256TO511OCTETS_GB))
+		stats->tx256to511octets_gb +=
+			xgbe_mmc_read(pdata, MMC_TX256TO511OCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX512TO1023OCTETS_GB))
+		stats->tx512to1023octets_gb +=
+			xgbe_mmc_read(pdata, MMC_TX512TO1023OCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TX1024TOMAXOCTETS_GB))
+		stats->tx1024tomaxoctets_gb +=
+			xgbe_mmc_read(pdata, MMC_TX1024TOMAXOCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXUNICASTFRAMES_GB))
+		stats->txunicastframes_gb +=
+			xgbe_mmc_read(pdata, MMC_TXUNICASTFRAMES_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXMULTICASTFRAMES_GB))
+		stats->txmulticastframes_gb +=
+			xgbe_mmc_read(pdata, MMC_TXMULTICASTFRAMES_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXBROADCASTFRAMES_GB))
+		stats->txbroadcastframes_g +=
+			xgbe_mmc_read(pdata, MMC_TXBROADCASTFRAMES_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXUNDERFLOWERROR))
+		stats->txunderflowerror +=
+			xgbe_mmc_read(pdata, MMC_TXUNDERFLOWERROR_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXOCTETCOUNT_G))
+		stats->txoctetcount_g +=
+			xgbe_mmc_read(pdata, MMC_TXOCTETCOUNT_G_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXFRAMECOUNT_G))
+		stats->txframecount_g +=
+			xgbe_mmc_read(pdata, MMC_TXFRAMECOUNT_G_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXPAUSEFRAMES))
+		stats->txpauseframes +=
+			xgbe_mmc_read(pdata, MMC_TXPAUSEFRAMES_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_TISR, TXVLANFRAMES_G))
+		stats->txvlanframes_g +=
+			xgbe_mmc_read(pdata, MMC_TXVLANFRAMES_G_LO);
+}
+
+static void xgbe_rx_mmc_int(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_mmc_stats *stats = &pdata->mmc_stats;
+	unsigned int mmc_isr = XGMAC_IOREAD(pdata, MMC_RISR);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXFRAMECOUNT_GB))
+		stats->rxframecount_gb +=
+			xgbe_mmc_read(pdata, MMC_RXFRAMECOUNT_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXOCTETCOUNT_GB))
+		stats->rxoctetcount_gb +=
+			xgbe_mmc_read(pdata, MMC_RXOCTETCOUNT_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXOCTETCOUNT_G))
+		stats->rxoctetcount_g +=
+			xgbe_mmc_read(pdata, MMC_RXOCTETCOUNT_G_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXBROADCASTFRAMES_G))
+		stats->rxbroadcastframes_g +=
+			xgbe_mmc_read(pdata, MMC_RXBROADCASTFRAMES_G_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXMULTICASTFRAMES_G))
+		stats->rxmulticastframes_g +=
+			xgbe_mmc_read(pdata, MMC_RXMULTICASTFRAMES_G_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXCRCERROR))
+		stats->rxcrcerror +=
+			xgbe_mmc_read(pdata, MMC_RXCRCERROR_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXRUNTERROR))
+		stats->rxrunterror +=
+			xgbe_mmc_read(pdata, MMC_RXRUNTERROR);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXJABBERERROR))
+		stats->rxjabbererror +=
+			xgbe_mmc_read(pdata, MMC_RXJABBERERROR);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXUNDERSIZE_G))
+		stats->rxundersize_g +=
+			xgbe_mmc_read(pdata, MMC_RXUNDERSIZE_G);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXOVERSIZE_G))
+		stats->rxoversize_g +=
+			xgbe_mmc_read(pdata, MMC_RXOVERSIZE_G);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX64OCTETS_GB))
+		stats->rx64octets_gb +=
+			xgbe_mmc_read(pdata, MMC_RX64OCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX65TO127OCTETS_GB))
+		stats->rx65to127octets_gb +=
+			xgbe_mmc_read(pdata, MMC_RX65TO127OCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX128TO255OCTETS_GB))
+		stats->rx128to255octets_gb +=
+			xgbe_mmc_read(pdata, MMC_RX128TO255OCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX256TO511OCTETS_GB))
+		stats->rx256to511octets_gb +=
+			xgbe_mmc_read(pdata, MMC_RX256TO511OCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX512TO1023OCTETS_GB))
+		stats->rx512to1023octets_gb +=
+			xgbe_mmc_read(pdata, MMC_RX512TO1023OCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RX1024TOMAXOCTETS_GB))
+		stats->rx1024tomaxoctets_gb +=
+			xgbe_mmc_read(pdata, MMC_RX1024TOMAXOCTETS_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXUNICASTFRAMES_G))
+		stats->rxunicastframes_g +=
+			xgbe_mmc_read(pdata, MMC_RXUNICASTFRAMES_G_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXLENGTHERROR))
+		stats->rxlengtherror +=
+			xgbe_mmc_read(pdata, MMC_RXLENGTHERROR_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXOUTOFRANGETYPE))
+		stats->rxoutofrangetype +=
+			xgbe_mmc_read(pdata, MMC_RXOUTOFRANGETYPE_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXPAUSEFRAMES))
+		stats->rxpauseframes +=
+			xgbe_mmc_read(pdata, MMC_RXPAUSEFRAMES_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXFIFOOVERFLOW))
+		stats->rxfifooverflow +=
+			xgbe_mmc_read(pdata, MMC_RXFIFOOVERFLOW_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXVLANFRAMES_GB))
+		stats->rxvlanframes_gb +=
+			xgbe_mmc_read(pdata, MMC_RXVLANFRAMES_GB_LO);
+
+	if (XGMAC_GET_BITS(mmc_isr, MMC_RISR, RXWATCHDOGERROR))
+		stats->rxwatchdogerror +=
+			xgbe_mmc_read(pdata, MMC_RXWATCHDOGERROR);
+}
+
+static void xgbe_read_mmc_stats(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_mmc_stats *stats = &pdata->mmc_stats;
+
+	/* Freeze counters */
+	XGMAC_IOWRITE_BITS(pdata, MMC_CR, MCF, 1);
+
+	stats->txoctetcount_gb +=
+		xgbe_mmc_read(pdata, MMC_TXOCTETCOUNT_GB_LO);
+
+	stats->txframecount_gb +=
+		xgbe_mmc_read(pdata, MMC_TXFRAMECOUNT_GB_LO);
+
+	stats->txbroadcastframes_g +=
+		xgbe_mmc_read(pdata, MMC_TXBROADCASTFRAMES_G_LO);
+
+	stats->txmulticastframes_g +=
+		xgbe_mmc_read(pdata, MMC_TXMULTICASTFRAMES_G_LO);
+
+	stats->tx64octets_gb +=
+		xgbe_mmc_read(pdata, MMC_TX64OCTETS_GB_LO);
+
+	stats->tx65to127octets_gb +=
+		xgbe_mmc_read(pdata, MMC_TX65TO127OCTETS_GB_LO);
+
+	stats->tx128to255octets_gb +=
+		xgbe_mmc_read(pdata, MMC_TX128TO255OCTETS_GB_LO);
+
+	stats->tx256to511octets_gb +=
+		xgbe_mmc_read(pdata, MMC_TX256TO511OCTETS_GB_LO);
+
+	stats->tx512to1023octets_gb +=
+		xgbe_mmc_read(pdata, MMC_TX512TO1023OCTETS_GB_LO);
+
+	stats->tx1024tomaxoctets_gb +=
+		xgbe_mmc_read(pdata, MMC_TX1024TOMAXOCTETS_GB_LO);
+
+	stats->txunicastframes_gb +=
+		xgbe_mmc_read(pdata, MMC_TXUNICASTFRAMES_GB_LO);
+
+	stats->txmulticastframes_gb +=
+		xgbe_mmc_read(pdata, MMC_TXMULTICASTFRAMES_GB_LO);
+
+	stats->txbroadcastframes_g +=
+		xgbe_mmc_read(pdata, MMC_TXBROADCASTFRAMES_GB_LO);
+
+	stats->txunderflowerror +=
+		xgbe_mmc_read(pdata, MMC_TXUNDERFLOWERROR_LO);
+
+	stats->txoctetcount_g +=
+		xgbe_mmc_read(pdata, MMC_TXOCTETCOUNT_G_LO);
+
+	stats->txframecount_g +=
+		xgbe_mmc_read(pdata, MMC_TXFRAMECOUNT_G_LO);
+
+	stats->txpauseframes +=
+		xgbe_mmc_read(pdata, MMC_TXPAUSEFRAMES_LO);
+
+	stats->txvlanframes_g +=
+		xgbe_mmc_read(pdata, MMC_TXVLANFRAMES_G_LO);
+
+	stats->rxframecount_gb +=
+		xgbe_mmc_read(pdata, MMC_RXFRAMECOUNT_GB_LO);
+
+	stats->rxoctetcount_gb +=
+		xgbe_mmc_read(pdata, MMC_RXOCTETCOUNT_GB_LO);
+
+	stats->rxoctetcount_g +=
+		xgbe_mmc_read(pdata, MMC_RXOCTETCOUNT_G_LO);
+
+	stats->rxbroadcastframes_g +=
+		xgbe_mmc_read(pdata, MMC_RXBROADCASTFRAMES_G_LO);
+
+	stats->rxmulticastframes_g +=
+		xgbe_mmc_read(pdata, MMC_RXMULTICASTFRAMES_G_LO);
+
+	stats->rxcrcerror +=
+		xgbe_mmc_read(pdata, MMC_RXCRCERROR_LO);
+
+	stats->rxrunterror +=
+		xgbe_mmc_read(pdata, MMC_RXRUNTERROR);
+
+	stats->rxjabbererror +=
+		xgbe_mmc_read(pdata, MMC_RXJABBERERROR);
+
+	stats->rxundersize_g +=
+		xgbe_mmc_read(pdata, MMC_RXUNDERSIZE_G);
+
+	stats->rxoversize_g +=
+		xgbe_mmc_read(pdata, MMC_RXOVERSIZE_G);
+
+	stats->rx64octets_gb +=
+		xgbe_mmc_read(pdata, MMC_RX64OCTETS_GB_LO);
+
+	stats->rx65to127octets_gb +=
+		xgbe_mmc_read(pdata, MMC_RX65TO127OCTETS_GB_LO);
+
+	stats->rx128to255octets_gb +=
+		xgbe_mmc_read(pdata, MMC_RX128TO255OCTETS_GB_LO);
+
+	stats->rx256to511octets_gb +=
+		xgbe_mmc_read(pdata, MMC_RX256TO511OCTETS_GB_LO);
+
+	stats->rx512to1023octets_gb +=
+		xgbe_mmc_read(pdata, MMC_RX512TO1023OCTETS_GB_LO);
+
+	stats->rx1024tomaxoctets_gb +=
+		xgbe_mmc_read(pdata, MMC_RX1024TOMAXOCTETS_GB_LO);
+
+	stats->rxunicastframes_g +=
+		xgbe_mmc_read(pdata, MMC_RXUNICASTFRAMES_G_LO);
+
+	stats->rxlengtherror +=
+		xgbe_mmc_read(pdata, MMC_RXLENGTHERROR_LO);
+
+	stats->rxoutofrangetype +=
+		xgbe_mmc_read(pdata, MMC_RXOUTOFRANGETYPE_LO);
+
+	stats->rxpauseframes +=
+		xgbe_mmc_read(pdata, MMC_RXPAUSEFRAMES_LO);
+
+	stats->rxfifooverflow +=
+		xgbe_mmc_read(pdata, MMC_RXFIFOOVERFLOW_LO);
+
+	stats->rxvlanframes_gb +=
+		xgbe_mmc_read(pdata, MMC_RXVLANFRAMES_GB_LO);
+
+	stats->rxwatchdogerror +=
+		xgbe_mmc_read(pdata, MMC_RXWATCHDOGERROR);
+
+	/* Un-freeze counters */
+	XGMAC_IOWRITE_BITS(pdata, MMC_CR, MCF, 0);
+}
+
+static void xgbe_config_mmc(struct xgbe_prv_data *pdata)
+{
+	/* Set counters to reset on read */
+	XGMAC_IOWRITE_BITS(pdata, MMC_CR, ROR, 1);
+
+	/* Reset the counters */
+	XGMAC_IOWRITE_BITS(pdata, MMC_CR, CR, 1);
+}
+
+static void xgbe_prepare_tx_stop(struct xgbe_prv_data *pdata,
+				 struct xgbe_channel *channel)
+{
+	unsigned int tx_dsr, tx_pos, tx_qidx;
+	unsigned int tx_status;
+	unsigned long tx_timeout;
+
+	/* Calculate the status register to read and the position within */
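+	/* DMA_DSR0 covers the first DMA_DSRX_FIRST_QUEUE channels; later
+	 * channels are packed DMA_DSRX_QPR per DSRx register
+	 */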
+	if (channel->queue_index < DMA_DSRX_FIRST_QUEUE) {
+		tx_dsr = DMA_DSR0;
+		tx_pos = (channel->queue_index * DMA_DSR_Q_WIDTH) +
+			 DMA_DSR0_TPS_START;
+	} else {
+		tx_qidx = channel->queue_index - DMA_DSRX_FIRST_QUEUE;
+
+		tx_dsr = DMA_DSR1 + ((tx_qidx / DMA_DSRX_QPR) * DMA_DSRX_INC);
+		tx_pos = ((tx_qidx % DMA_DSRX_QPR) * DMA_DSR_Q_WIDTH) +
+			 DMA_DSRX_TPS_START;
+	}
+
+	/* The Tx engine cannot be stopped if it is actively processing
+	 * descriptors. Wait for the Tx engine to enter the stopped or
+	 * suspended state.  Don't wait forever though...
+	 */
+	tx_timeout = jiffies + (XGBE_DMA_STOP_TIMEOUT * HZ);
+	while (time_before(jiffies, tx_timeout)) {
+		tx_status = XGMAC_IOREAD(pdata, tx_dsr);
+		tx_status = GET_BITS(tx_status, tx_pos, DMA_DSR_TPS_WIDTH);
+		if ((tx_status == DMA_TPS_STOPPED) ||
+		    (tx_status == DMA_TPS_SUSPENDED))
+			break;
+
+		usleep_range(500, 1000);
+	}
+
+	if (!time_before(jiffies, tx_timeout))
+		netdev_info(pdata->netdev,
+			    "timed out waiting for Tx DMA channel %u to stop\n",
+			    channel->queue_index);
+}
+
+static void xgbe_enable_tx(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	/* Enable each Tx DMA channel */
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, ST, 1);
+	}
+
+	/* Enable each Tx queue */
+	for (i = 0; i < pdata->tx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TXQEN,
+				       MTL_Q_ENABLED);
+
+	/* Enable MAC Tx */
+	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 1);
+}
+
+static void xgbe_disable_tx(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	/* Prepare for Tx DMA channel stop */
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		xgbe_prepare_tx_stop(pdata, channel);
+	}
+
+	/* Disable MAC Tx */
+	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 0);
+
+	/* Disable each Tx queue */
+	for (i = 0; i < pdata->tx_q_count; i++)
+		XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TXQEN, 0);
+
+	/* Disable each Tx DMA channel */
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, ST, 0);
+	}
+
+	/* TODO: Poll to be sure the channels have stopped?
+	while (count--) {
+		if (XGMAC_IOREAD_BITS(pdata, DMA_DSR0, TPS) == 6)
+			break;
+		mdelay(1);
+	}
+	*/
+}
+
+static void xgbe_enable_rx(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int reg_val, i;
+
+	/* Enable each Rx DMA channel */
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->rx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, SR, 1);
+	}
+
+	/* Enable each Rx queue */
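+	/* Each Rx queue has a two-bit enable field in MAC_RQC0R; writing
+	 * 0x02 enables the queue for generic (DCB) traffic.
+	 */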
+	reg_val = 0;
+	for (i = 0; i < pdata->rx_q_count; i++)
+		reg_val |= (0x02 << (i << 1));
+	XGMAC_IOWRITE(pdata, MAC_RQC0R, reg_val);
+
+	/* Enable MAC Rx */
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, DCRCC, 1);
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, CST, 1);
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, ACS, 1);
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, RE, 1);
+}
+
+static void xgbe_disable_rx(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	/* Disable MAC Rx */
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, DCRCC, 0);
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, CST, 0);
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, ACS, 0);
+	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, RE, 0);
+
+	/* Disable each Rx queue */
+	XGMAC_IOWRITE(pdata, MAC_RQC0R, 0);
+
+	/* Disable each Rx DMA channel */
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->rx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, SR, 0);
+	}
+
+	/* TODO: Poll to be sure the channels have stopped?
+	while (count--) {
+		dma_sr0 = XGMAC_IOREAD_BITS(pdata, DMA_DSR0, RPS);
+		if (dma_sr0 == 3 || dma_sr0 == 4)
+			break;
+		mdelay(1);
+	}
+	*/
+}
+
+static void xgbe_powerup_tx(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	/* Enable each Tx DMA channel */
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, ST, 1);
+	}
+
+	/* Enable MAC Tx */
+	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 1);
+}
+
+static void xgbe_powerdown_tx(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	/* Prepare for Tx DMA channel stop */
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		xgbe_prepare_tx_stop(pdata, channel);
+	}
+
+	/* Disable MAC Tx */
+	XGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 0);
+
+	/* Disable each Tx DMA channel */
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_TCR, ST, 0);
+	}
+
+	/* TODO: Poll to be sure the channels have stopped?
+	while (count--) {
+		if (XGMAC_IOREAD_BITS(pdata, DMA_DSR0, TPS) == 6)
+			break;
+		mdelay(1);
+	}
+	*/
+}
+
+static void xgbe_powerup_rx(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	/* Enable each Rx DMA channel */
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->rx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, SR, 1);
+	}
+}
+
+static void xgbe_powerdown_rx(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	/* Disable each Rx DMA channel */
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->rx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(channel, DMA_CH_RCR, SR, 0);
+	}
+
+	/* TODO: Poll to be sure the channels have stopped?
+	while (count--) {
+		dma_sr0 = XGMAC_IOREAD_BITS(pdata, DMA_DSR0, RPS);
+		if (dma_sr0 == 3 || dma_sr0 == 4)
+			break;
+		mdelay(1);
+	}
+	*/
+}
+
+static int xgbe_init(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_desc_if *desc_if = &pdata->desc_if;
+	int ret;
+
+	DBGPR("-->xgbe_init\n");
+
+	/* Flush Tx queues */
+	ret = xgbe_flush_tx_queues(pdata);
+	if (ret)
+		return ret;
+
+	/*
+	 * Initialize DMA related features
+	 */
+	xgbe_config_dma_bus(pdata);
+	xgbe_config_dma_cache(pdata);
+	xgbe_config_osp_mode(pdata);
+	xgbe_config_pblx8(pdata);
+	xgbe_config_tx_pbl_val(pdata);
+	xgbe_config_rx_pbl_val(pdata);
+	xgbe_config_rx_coalesce(pdata);
+	xgbe_config_tx_coalesce(pdata);
+	xgbe_config_rx_buffer_size(pdata);
+	xgbe_config_tso_mode(pdata);
+	xgbe_config_sph_mode(pdata);
+	xgbe_config_rss(pdata);
+	desc_if->wrapper_tx_desc_init(pdata);
+	desc_if->wrapper_rx_desc_init(pdata);
+	xgbe_enable_dma_interrupts(pdata);
+
+	/*
+	 * Initialize MTL related features
+	 */
+	xgbe_config_mtl_mode(pdata);
+	xgbe_config_queue_mapping(pdata);
+	xgbe_config_tsf_mode(pdata, pdata->tx_sf_mode);
+	xgbe_config_rsf_mode(pdata, pdata->rx_sf_mode);
+	xgbe_config_tx_threshold(pdata, pdata->tx_threshold);
+	xgbe_config_rx_threshold(pdata, pdata->rx_threshold);
+	xgbe_config_tx_fifo_size(pdata);
+	xgbe_config_rx_fifo_size(pdata);
+	xgbe_config_flow_control_threshold(pdata);
+	/* TODO: enable error packet and undersized good packet
+	 * forwarding (FEP and FUP)
+	 */
+	xgbe_config_dcb_tc(pdata);
+	xgbe_config_dcb_pfc(pdata);
+	xgbe_enable_mtl_interrupts(pdata);
+
+	/*
+	 * Initialize MAC related features
+	 */
+	xgbe_config_mac_address(pdata);
+	xgbe_config_jumbo_enable(pdata);
+	xgbe_config_flow_control(pdata);
+	xgbe_config_mac_speed(pdata);
+	xgbe_config_checksum_offload(pdata);
+	xgbe_config_vlan_support(pdata);
+	xgbe_config_mmc(pdata);
+	xgbe_enable_mac_interrupts(pdata);
+
+	DBGPR("<--xgbe_init\n");
+
+	return 0;
+}
+
+void xgbe_a0_init_function_ptrs_dev(struct xgbe_hw_if *hw_if)
+{
+	DBGPR("-->xgbe_a0_init_function_ptrs\n");
+
+	hw_if->tx_complete = xgbe_tx_complete;
+
+	hw_if->set_promiscuous_mode = xgbe_set_promiscuous_mode;
+	hw_if->set_all_multicast_mode = xgbe_set_all_multicast_mode;
+	hw_if->add_mac_addresses = xgbe_add_mac_addresses;
+	hw_if->set_mac_address = xgbe_set_mac_address;
+
+	hw_if->enable_rx_csum = xgbe_enable_rx_csum;
+	hw_if->disable_rx_csum = xgbe_disable_rx_csum;
+
+	hw_if->enable_rx_vlan_stripping = xgbe_enable_rx_vlan_stripping;
+	hw_if->disable_rx_vlan_stripping = xgbe_disable_rx_vlan_stripping;
+	hw_if->enable_rx_vlan_filtering = xgbe_enable_rx_vlan_filtering;
+	hw_if->disable_rx_vlan_filtering = xgbe_disable_rx_vlan_filtering;
+	hw_if->update_vlan_hash_table = xgbe_update_vlan_hash_table;
+
+	hw_if->read_mmd_regs = xgbe_read_mmd_regs;
+	hw_if->write_mmd_regs = xgbe_write_mmd_regs;
+
+	hw_if->set_gmii_speed = xgbe_set_gmii_speed;
+	hw_if->set_gmii_2500_speed = xgbe_set_gmii_2500_speed;
+	hw_if->set_xgmii_speed = xgbe_set_xgmii_speed;
+
+	hw_if->enable_tx = xgbe_enable_tx;
+	hw_if->disable_tx = xgbe_disable_tx;
+	hw_if->enable_rx = xgbe_enable_rx;
+	hw_if->disable_rx = xgbe_disable_rx;
+
+	hw_if->powerup_tx = xgbe_powerup_tx;
+	hw_if->powerdown_tx = xgbe_powerdown_tx;
+	hw_if->powerup_rx = xgbe_powerup_rx;
+	hw_if->powerdown_rx = xgbe_powerdown_rx;
+
+	hw_if->dev_xmit = xgbe_dev_xmit;
+	hw_if->dev_read = xgbe_dev_read;
+	hw_if->enable_int = xgbe_enable_int;
+	hw_if->disable_int = xgbe_disable_int;
+	hw_if->init = xgbe_init;
+	hw_if->exit = xgbe_exit;
+
+	/* Descriptor-related sequences have to be initialized here */
+	hw_if->tx_desc_init = xgbe_tx_desc_init;
+	hw_if->rx_desc_init = xgbe_rx_desc_init;
+	hw_if->tx_desc_reset = xgbe_tx_desc_reset;
+	hw_if->rx_desc_reset = xgbe_rx_desc_reset;
+	hw_if->is_last_desc = xgbe_is_last_desc;
+	hw_if->is_context_desc = xgbe_is_context_desc;
+	hw_if->tx_start_xmit = xgbe_tx_start_xmit;
+
+	/* For flow control */
+	hw_if->config_tx_flow_control = xgbe_config_tx_flow_control;
+	hw_if->config_rx_flow_control = xgbe_config_rx_flow_control;
+
+	/* For RX and TX coalescing */
+	hw_if->config_rx_coalesce = xgbe_config_rx_coalesce;
+	hw_if->config_tx_coalesce = xgbe_config_tx_coalesce;
+	hw_if->usec_to_riwt = xgbe_usec_to_riwt;
+	hw_if->riwt_to_usec = xgbe_riwt_to_usec;
+
+	/* For RX and TX threshold config */
+	hw_if->config_rx_threshold = xgbe_config_rx_threshold;
+	hw_if->config_tx_threshold = xgbe_config_tx_threshold;
+
+	/* For RX and TX Store and Forward Mode config */
+	hw_if->config_rsf_mode = xgbe_config_rsf_mode;
+	hw_if->config_tsf_mode = xgbe_config_tsf_mode;
+
+	/* For TX DMA Operating on Second Frame config */
+	hw_if->config_osp_mode = xgbe_config_osp_mode;
+
+	/* For RX and TX PBL config */
+	hw_if->config_rx_pbl_val = xgbe_config_rx_pbl_val;
+	hw_if->get_rx_pbl_val = xgbe_get_rx_pbl_val;
+	hw_if->config_tx_pbl_val = xgbe_config_tx_pbl_val;
+	hw_if->get_tx_pbl_val = xgbe_get_tx_pbl_val;
+	hw_if->config_pblx8 = xgbe_config_pblx8;
+
+	/* For MMC statistics support */
+	hw_if->tx_mmc_int = xgbe_tx_mmc_int;
+	hw_if->rx_mmc_int = xgbe_rx_mmc_int;
+	hw_if->read_mmc_stats = xgbe_read_mmc_stats;
+
+	/* For PTP config */
+	hw_if->config_tstamp = xgbe_config_tstamp;
+	hw_if->update_tstamp_addend = xgbe_update_tstamp_addend;
+	hw_if->set_tstamp_time = xgbe_set_tstamp_time;
+	hw_if->get_tstamp_time = xgbe_get_tstamp_time;
+	hw_if->get_tx_tstamp = xgbe_get_tx_tstamp;
+
+	/* For Data Center Bridging config */
+	hw_if->config_dcb_tc = xgbe_config_dcb_tc;
+	hw_if->config_dcb_pfc = xgbe_config_dcb_pfc;
+
+	/* For Receive Side Scaling */
+	hw_if->enable_rss = xgbe_enable_rss;
+	hw_if->disable_rss = xgbe_disable_rss;
+	hw_if->set_rss_hash_key = xgbe_set_rss_hash_key;
+	hw_if->set_rss_lookup_table = xgbe_set_rss_lookup_table;
+
+	DBGPR("<--xgbe_a0_init_function_ptrs\n");
+}
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe-a0/xgbe-drv.c
new file mode 100644
index 0000000..acaeaf5
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe-drv.c
@@ -0,0 +1,2204 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/platform_device.h>
+#include <linux/spinlock.h>
+#include <linux/tcp.h>
+#include <linux/if_vlan.h>
+#include <net/busy_poll.h>
+#include <linux/clk.h>
+#include <linux/if_ether.h>
+#include <linux/net_tstamp.h>
+#include <linux/phy.h>
+
+#include "xgbe.h"
+#include "xgbe-common.h"
+
+static int xgbe_one_poll(struct napi_struct *, int);
+static int xgbe_all_poll(struct napi_struct *, int);
+static void xgbe_set_rx_mode(struct net_device *);
+
+static int xgbe_alloc_channels(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel_mem, *channel;
+	struct xgbe_ring *tx_ring, *rx_ring;
+	unsigned int count, i;
+	int ret = -ENOMEM;
+
+	count = max_t(unsigned int, pdata->tx_ring_count, pdata->rx_ring_count);
+
+	channel_mem = kcalloc(count, sizeof(struct xgbe_channel), GFP_KERNEL);
+	if (!channel_mem)
+		goto err_channel;
+
+	tx_ring = kcalloc(pdata->tx_ring_count, sizeof(struct xgbe_ring),
+			  GFP_KERNEL);
+	if (!tx_ring)
+		goto err_tx_ring;
+
+	rx_ring = kcalloc(pdata->rx_ring_count, sizeof(struct xgbe_ring),
+			  GFP_KERNEL);
+	if (!rx_ring)
+		goto err_rx_ring;
+
+	for (i = 0, channel = channel_mem; i < count; i++, channel++) {
+		snprintf(channel->name, sizeof(channel->name), "channel-%u", i);
+		channel->pdata = pdata;
+		channel->queue_index = i;
+		channel->dma_regs = pdata->xgmac_regs + DMA_CH_BASE +
+				    (DMA_CH_INC * i);
+
+		if (pdata->per_channel_irq) {
+			/* Get the DMA interrupt (the device interrupt is
+			 * resource 0, so the per-channel DMA interrupts
+			 * start at offset 1)
+			 */
+			ret = platform_get_irq(pdata->pdev, i + 1);
+			if (ret < 0) {
+				netdev_err(pdata->netdev,
+					   "platform_get_irq %u failed\n",
+					   i + 1);
+				goto err_irq;
+			}
+
+			channel->dma_irq = ret;
+		}
+
+		if (i < pdata->tx_ring_count) {
+			spin_lock_init(&tx_ring->lock);
+			channel->tx_ring = tx_ring++;
+		}
+
+		if (i < pdata->rx_ring_count) {
+			spin_lock_init(&rx_ring->lock);
+			channel->rx_ring = rx_ring++;
+		}
+
+		DBGPR("  %s: queue=%u, dma_regs=%p, dma_irq=%d, tx=%p, rx=%p\n",
+		      channel->name, channel->queue_index, channel->dma_regs,
+		      channel->dma_irq, channel->tx_ring, channel->rx_ring);
+	}
+
+	pdata->channel = channel_mem;
+	pdata->channel_count = count;
+
+	return 0;
+
+err_irq:
+	kfree(rx_ring);
+
+err_rx_ring:
+	kfree(tx_ring);
+
+err_tx_ring:
+	kfree(channel_mem);
+
+err_channel:
+	return ret;
+}
+
+static void xgbe_free_channels(struct xgbe_prv_data *pdata)
+{
+	if (!pdata->channel)
+		return;
+
+	kfree(pdata->channel->rx_ring);
+	kfree(pdata->channel->tx_ring);
+	kfree(pdata->channel);
+
+	pdata->channel = NULL;
+	pdata->channel_count = 0;
+}
+
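+/* Ring bookkeeping: 'cur' and 'dirty' are free-running indices, so the
+ * unsigned subtractions below remain correct even after the counters wrap.
+ */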
+static inline unsigned int xgbe_tx_avail_desc(struct xgbe_ring *ring)
+{
+	return (ring->rdesc_count - (ring->cur - ring->dirty));
+}
+
+static inline unsigned int xgbe_rx_dirty_desc(struct xgbe_ring *ring)
+{
+	return (ring->cur - ring->dirty);
+}
+
+static int xgbe_maybe_stop_tx_queue(struct xgbe_channel *channel,
+				    struct xgbe_ring *ring, unsigned int count)
+{
+	struct xgbe_prv_data *pdata = channel->pdata;
+
+	if (count > xgbe_tx_avail_desc(ring)) {
+		DBGPR("  Tx queue stopped, not enough descriptors available\n");
+		netif_stop_subqueue(pdata->netdev, channel->queue_index);
+		ring->tx.queue_stopped = 1;
+
+		/* If descriptors were held back because of xmit_more
+		 * support, notify the hardware about them now
+		 */
+		if (ring->tx.xmit_more)
+			pdata->hw_if.tx_start_xmit(channel, ring);
+
+		return NETDEV_TX_BUSY;
+	}
+
+	return 0;
+}
+
+static int xgbe_calc_rx_buf_size(struct net_device *netdev, unsigned int mtu)
+{
+	unsigned int rx_buf_size;
+
+	if (mtu > XGMAC_JUMBO_PACKET_MTU) {
+		netdev_alert(netdev, "MTU exceeds maximum supported value\n");
+		return -EINVAL;
+	}
+
+	rx_buf_size = mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+	rx_buf_size = clamp_val(rx_buf_size, XGBE_RX_MIN_BUF_SIZE, PAGE_SIZE);
+
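+	/* Round up to the next XGBE_RX_BUF_ALIGN boundary (the alignment
+	 * must be a power of two for this mask trick to work)
+	 */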
+	rx_buf_size = (rx_buf_size + XGBE_RX_BUF_ALIGN - 1) &
+		      ~(XGBE_RX_BUF_ALIGN - 1);
+
+	return rx_buf_size;
+}
+
+static void xgbe_enable_rx_tx_ints(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_channel *channel;
+	enum xgbe_int int_id;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (channel->tx_ring && channel->rx_ring)
+			int_id = XGMAC_INT_DMA_CH_SR_TI_RI;
+		else if (channel->tx_ring)
+			int_id = XGMAC_INT_DMA_CH_SR_TI;
+		else if (channel->rx_ring)
+			int_id = XGMAC_INT_DMA_CH_SR_RI;
+		else
+			continue;
+
+		hw_if->enable_int(channel, int_id);
+	}
+}
+
+static void xgbe_disable_rx_tx_ints(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_channel *channel;
+	enum xgbe_int int_id;
+	unsigned int i;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (channel->tx_ring && channel->rx_ring)
+			int_id = XGMAC_INT_DMA_CH_SR_TI_RI;
+		else if (channel->tx_ring)
+			int_id = XGMAC_INT_DMA_CH_SR_TI;
+		else if (channel->rx_ring)
+			int_id = XGMAC_INT_DMA_CH_SR_RI;
+		else
+			continue;
+
+		hw_if->disable_int(channel, int_id);
+	}
+}
+
+static irqreturn_t xgbe_isr(int irq, void *data)
+{
+	struct xgbe_prv_data *pdata = data;
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_channel *channel;
+	unsigned int dma_isr, dma_ch_isr;
+	unsigned int mac_isr, mac_tssr;
+	unsigned int i;
+
+	/* The DMA interrupt status register also reports MAC and MTL
+	 * interrupts, so checking this single register for a non-zero
+	 * value is enough to know whether anything needs servicing
+	 */
+	dma_isr = XGMAC_IOREAD(pdata, DMA_ISR);
+	if (!dma_isr)
+		goto isr_done;
+
+	DBGPR("  DMA_ISR = %08x\n", dma_isr);
+
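+	/* DMA_ISR carries one summary bit per DMA channel */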
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!(dma_isr & (1 << i)))
+			continue;
+
+		channel = pdata->channel + i;
+
+		dma_ch_isr = XGMAC_DMA_IOREAD(channel, DMA_CH_SR);
+		DBGPR("  DMA_CH%u_ISR = %08x\n", i, dma_ch_isr);
+
+		/* If we get a TI or RI interrupt here, per-channel DMA
+		 * interrupts are not enabled, so we use the private data
+		 * napi structure, not the per-channel napi structure
+		 */
+		if (XGMAC_GET_BITS(dma_ch_isr, DMA_CH_SR, TI) ||
+		    XGMAC_GET_BITS(dma_ch_isr, DMA_CH_SR, RI)) {
+			if (napi_schedule_prep(&pdata->napi)) {
+				/* Disable Tx and Rx interrupts */
+				xgbe_disable_rx_tx_ints(pdata);
+
+				/* Turn on polling */
+				__napi_schedule(&pdata->napi);
+			}
+		}
+
+		/* Restart the device on a Fatal Bus Error */
+		if (XGMAC_GET_BITS(dma_ch_isr, DMA_CH_SR, FBE))
+			schedule_work(&pdata->restart_work);
+
+		/* Clear all interrupt signals */
+		XGMAC_DMA_IOWRITE(channel, DMA_CH_SR, dma_ch_isr);
+	}
+
+	if (XGMAC_GET_BITS(dma_isr, DMA_ISR, MACIS)) {
+		mac_isr = XGMAC_IOREAD(pdata, MAC_ISR);
+
+		if (XGMAC_GET_BITS(mac_isr, MAC_ISR, MMCTXIS))
+			hw_if->tx_mmc_int(pdata);
+
+		if (XGMAC_GET_BITS(mac_isr, MAC_ISR, MMCRXIS))
+			hw_if->rx_mmc_int(pdata);
+
+		if (XGMAC_GET_BITS(mac_isr, MAC_ISR, TSIS)) {
+			mac_tssr = XGMAC_IOREAD(pdata, MAC_TSSR);
+
+			if (XGMAC_GET_BITS(mac_tssr, MAC_TSSR, TXTSC)) {
+				/* Read Tx Timestamp to clear interrupt */
+				pdata->tx_tstamp =
+					hw_if->get_tx_tstamp(pdata);
+				schedule_work(&pdata->tx_tstamp_work);
+			}
+		}
+	}
+
+	DBGPR("  DMA_ISR = %08x\n", XGMAC_IOREAD(pdata, DMA_ISR));
+
+isr_done:
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t xgbe_dma_isr(int irq, void *data)
+{
+	struct xgbe_channel *channel = data;
+
+	/* Per-channel DMA interrupts are enabled, so we use the
+	 * per-channel napi structure and not the private data napi
+	 * structure
+	 */
+	if (napi_schedule_prep(&channel->napi)) {
+		/* Disable Tx and Rx interrupts */
+		disable_irq_nosync(channel->dma_irq);
+
+		/* Turn on polling */
+		__napi_schedule(&channel->napi);
+	}
+
+	return IRQ_HANDLED;
+}
+
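+/* Tx activity timer: when it expires, schedule NAPI so that completed
+ * Tx descriptors are processed even if no further interrupt arrives.
+ */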
+static enum hrtimer_restart xgbe_tx_timer(struct hrtimer *timer)
+{
+	struct xgbe_channel *channel = container_of(timer,
+						    struct xgbe_channel,
+						    tx_timer);
+	struct xgbe_prv_data *pdata = channel->pdata;
+	struct napi_struct *napi;
+
+	DBGPR("-->xgbe_tx_timer\n");
+
+	napi = (pdata->per_channel_irq) ? &channel->napi : &pdata->napi;
+
+	if (napi_schedule_prep(napi)) {
+		/* Disable Tx and Rx interrupts */
+		if (pdata->per_channel_irq)
+			disable_irq(channel->dma_irq);
+		else
+			xgbe_disable_rx_tx_ints(pdata);
+
+		/* Turn on polling */
+		__napi_schedule(napi);
+	}
+
+	channel->tx_timer_active = 0;
+
+	DBGPR("<--xgbe_tx_timer\n");
+
+	return HRTIMER_NORESTART;
+}
+
+static void xgbe_init_tx_timers(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	DBGPR("-->xgbe_init_tx_timers\n");
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		DBGPR("  %s adding tx timer\n", channel->name);
+		hrtimer_init(&channel->tx_timer, CLOCK_MONOTONIC,
+			     HRTIMER_MODE_REL);
+		channel->tx_timer.function = xgbe_tx_timer;
+	}
+
+	DBGPR("<--xgbe_init_tx_timers\n");
+}
+
+static void xgbe_stop_tx_timers(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	DBGPR("-->xgbe_stop_tx_timers\n");
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			break;
+
+		DBGPR("  %s deleting tx timer\n", channel->name);
+		channel->tx_timer_active = 0;
+		hrtimer_cancel(&channel->tx_timer);
+	}
+
+	DBGPR("<--xgbe_stop_tx_timers\n");
+}
+
+void xgbe_a0_get_all_hw_features(struct xgbe_prv_data *pdata)
+{
+	unsigned int mac_hfr0, mac_hfr1, mac_hfr2;
+	struct xgbe_hw_features *hw_feat = &pdata->hw_feat;
+
+	DBGPR("-->xgbe_a0_get_all_hw_features\n");
+
+	mac_hfr0 = XGMAC_IOREAD(pdata, MAC_HWF0R);
+	mac_hfr1 = XGMAC_IOREAD(pdata, MAC_HWF1R);
+	mac_hfr2 = XGMAC_IOREAD(pdata, MAC_HWF2R);
+
+	memset(hw_feat, 0, sizeof(*hw_feat));
+
+	hw_feat->version = XGMAC_IOREAD(pdata, MAC_VR);
+
+	/* Hardware feature register 0 */
+	hw_feat->gmii        = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, GMIISEL);
+	hw_feat->vlhash      = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, VLHASH);
+	hw_feat->sma         = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, SMASEL);
+	hw_feat->rwk         = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, RWKSEL);
+	hw_feat->mgk         = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, MGKSEL);
+	hw_feat->mmc         = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, MMCSEL);
+	hw_feat->aoe         = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, ARPOFFSEL);
+	hw_feat->ts          = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, TSSEL);
+	hw_feat->eee         = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, EEESEL);
+	hw_feat->tx_coe      = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, TXCOESEL);
+	hw_feat->rx_coe      = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, RXCOESEL);
+	hw_feat->addn_mac    = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R,
+					      ADDMACADRSEL);
+	hw_feat->ts_src      = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, TSSTSSEL);
+	hw_feat->sa_vlan_ins = XGMAC_GET_BITS(mac_hfr0, MAC_HWF0R, SAVLANINS);
+
+	/* Hardware feature register 1 */
+	hw_feat->rx_fifo_size  = XGMAC_GET_BITS(mac_hfr1, MAC_HWF1R,
+						RXFIFOSIZE);
+	hw_feat->tx_fifo_size  = XGMAC_GET_BITS(mac_hfr1, MAC_HWF1R,
+						TXFIFOSIZE);
+	hw_feat->dcb           = XGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, DCBEN);
+	hw_feat->sph           = XGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, SPHEN);
+	hw_feat->tso           = XGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, TSOEN);
+	hw_feat->dma_debug     = XGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, DBGMEMA);
+	hw_feat->tc_cnt	       = XGMAC_GET_BITS(mac_hfr1, MAC_HWF1R, NUMTC);
+	hw_feat->hash_table_size = XGMAC_GET_BITS(mac_hfr1, MAC_HWF1R,
+						  HASHTBLSZ);
+	hw_feat->l3l4_filter_num = XGMAC_GET_BITS(mac_hfr1, MAC_HWF1R,
+						  L3L4FNUM);
+
+	/* Hardware feature register 2 */
+	hw_feat->rx_q_cnt     = XGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, RXQCNT);
+	hw_feat->tx_q_cnt     = XGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, TXQCNT);
+	hw_feat->rx_ch_cnt    = XGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, RXCHCNT);
+	hw_feat->tx_ch_cnt    = XGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, TXCHCNT);
+	hw_feat->pps_out_num  = XGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, PPSOUTNUM);
+	hw_feat->aux_snap_num = XGMAC_GET_BITS(mac_hfr2, MAC_HWF2R, AUXSNAPNUM);
+
+	/* Translate the Hash Table size into actual number */
+	switch (hw_feat->hash_table_size) {
+	case 0:
+		break;
+	case 1:
+		hw_feat->hash_table_size = 64;
+		break;
+	case 2:
+		hw_feat->hash_table_size = 128;
+		break;
+	case 3:
+		hw_feat->hash_table_size = 256;
+		break;
+	}
+
+	/* The queue and channel counts are zero-based, so increment them
+	 * to get the actual counts
+	 */
+	hw_feat->rx_q_cnt++;
+	hw_feat->tx_q_cnt++;
+	hw_feat->rx_ch_cnt++;
+	hw_feat->tx_ch_cnt++;
+
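+	/* Force the traffic class count for this A0 driver, overriding
+	 * the value reported in the hardware features.
+	 */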
+#define XGBE_TC_CNT		2
+	hw_feat->tc_cnt = XGBE_TC_CNT;
+
+	DBGPR("<--xgbe_a0_get_all_hw_features\n");
+}
+
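+/* When 'add' (or 'del' below) is set, the NAPI instances are also
+ * registered with (or removed from) the stack; the powerup/powerdown
+ * paths pass 0 so existing registrations survive a power transition.
+ */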
+static void xgbe_napi_enable(struct xgbe_prv_data *pdata, unsigned int add)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	if (pdata->per_channel_irq) {
+		channel = pdata->channel;
+		for (i = 0; i < pdata->channel_count; i++, channel++) {
+			if (add)
+				netif_napi_add(pdata->netdev, &channel->napi,
+					       xgbe_one_poll, NAPI_POLL_WEIGHT);
+
+			napi_enable(&channel->napi);
+		}
+	} else {
+		if (add)
+			netif_napi_add(pdata->netdev, &pdata->napi,
+				       xgbe_all_poll, NAPI_POLL_WEIGHT);
+
+		napi_enable(&pdata->napi);
+	}
+}
+
+static void xgbe_napi_disable(struct xgbe_prv_data *pdata, unsigned int del)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	if (pdata->per_channel_irq) {
+		channel = pdata->channel;
+		for (i = 0; i < pdata->channel_count; i++, channel++) {
+			napi_disable(&channel->napi);
+
+			if (del)
+				netif_napi_del(&channel->napi);
+		}
+	} else {
+		napi_disable(&pdata->napi);
+
+		if (del)
+			netif_napi_del(&pdata->napi);
+	}
+}
+
+void xgbe_a0_init_tx_coalesce(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+
+	DBGPR("-->xgbe_a0_init_tx_coalesce\n");
+
+	pdata->tx_usecs = XGMAC_INIT_DMA_TX_USECS;
+	pdata->tx_frames = XGMAC_INIT_DMA_TX_FRAMES;
+
+	hw_if->config_tx_coalesce(pdata);
+
+	DBGPR("<--xgbe_a0_init_tx_coalesce\n");
+}
+
+void xgbe_a0_init_rx_coalesce(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+
+	DBGPR("-->xgbe_a0_init_rx_coalesce\n");
+
+	pdata->rx_riwt = hw_if->usec_to_riwt(pdata, XGMAC_INIT_DMA_RX_USECS);
+	pdata->rx_frames = XGMAC_INIT_DMA_RX_FRAMES;
+
+	hw_if->config_rx_coalesce(pdata);
+
+	DBGPR("<--xgbe_a0_init_rx_coalesce\n");
+}
+
+static void xgbe_free_tx_data(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_desc_if *desc_if = &pdata->desc_if;
+	struct xgbe_channel *channel;
+	struct xgbe_ring *ring;
+	struct xgbe_ring_data *rdata;
+	unsigned int i, j;
+
+	DBGPR("-->xgbe_free_tx_data\n");
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		ring = channel->tx_ring;
+		if (!ring)
+			break;
+
+		for (j = 0; j < ring->rdesc_count; j++) {
+			rdata = XGBE_GET_DESC_DATA(ring, j);
+			desc_if->unmap_rdata(pdata, rdata);
+		}
+	}
+
+	DBGPR("<--xgbe_free_tx_data\n");
+}
+
+static void xgbe_free_rx_data(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_desc_if *desc_if = &pdata->desc_if;
+	struct xgbe_channel *channel;
+	struct xgbe_ring *ring;
+	struct xgbe_ring_data *rdata;
+	unsigned int i, j;
+
+	DBGPR("-->xgbe_free_rx_data\n");
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		ring = channel->rx_ring;
+		if (!ring)
+			break;
+
+		for (j = 0; j < ring->rdesc_count; j++) {
+			rdata = XGBE_GET_DESC_DATA(ring, j);
+			desc_if->unmap_rdata(pdata, rdata);
+		}
+	}
+
+	DBGPR("<--xgbe_free_rx_data\n");
+}
+
+static void xgbe_adjust_link(struct net_device *netdev)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct phy_device *phydev = pdata->phydev;
+	int new_state = 0;
+
+	if (!phydev)
+		return;
+
+	if (phydev->link) {
+		/* Flow control support */
+		if (pdata->pause_autoneg) {
+			if (phydev->pause || phydev->asym_pause) {
+				pdata->tx_pause = 1;
+				pdata->rx_pause = 1;
+			} else {
+				pdata->tx_pause = 0;
+				pdata->rx_pause = 0;
+			}
+		}
+
+		if (pdata->tx_pause != pdata->phy_tx_pause) {
+			hw_if->config_tx_flow_control(pdata);
+			pdata->phy_tx_pause = pdata->tx_pause;
+		}
+
+		if (pdata->rx_pause != pdata->phy_rx_pause) {
+			hw_if->config_rx_flow_control(pdata);
+			pdata->phy_rx_pause = pdata->rx_pause;
+		}
+
+		/* Speed support */
+		if (phydev->speed != pdata->phy_speed) {
+			new_state = 1;
+
+			switch (phydev->speed) {
+			case SPEED_10000:
+				hw_if->set_xgmii_speed(pdata);
+				break;
+
+			case SPEED_2500:
+				hw_if->set_gmii_2500_speed(pdata);
+				break;
+
+			case SPEED_1000:
+				hw_if->set_gmii_speed(pdata);
+				break;
+			}
+			pdata->phy_speed = phydev->speed;
+		}
+
+		if (phydev->link != pdata->phy_link) {
+			new_state = 1;
+			pdata->phy_link = 1;
+		}
+	} else if (pdata->phy_link) {
+		new_state = 1;
+		pdata->phy_link = 0;
+		pdata->phy_speed = SPEED_UNKNOWN;
+	}
+
+	if (new_state)
+		phy_print_status(phydev);
+}
+
+static int xgbe_phy_init(struct xgbe_prv_data *pdata)
+{
+	struct net_device *netdev = pdata->netdev;
+	struct phy_device *phydev = pdata->phydev;
+	int ret;
+
+	pdata->phy_link = -1;
+	pdata->phy_speed = SPEED_UNKNOWN;
+	pdata->phy_tx_pause = pdata->tx_pause;
+	pdata->phy_rx_pause = pdata->rx_pause;
+
+	ret = phy_connect_direct(netdev, phydev, &xgbe_adjust_link,
+				 pdata->phy_mode);
+	if (ret) {
+		netdev_err(netdev, "phy_connect_direct failed\n");
+		return ret;
+	}
+
+	if (!phydev->drv || (phydev->drv->phy_id == 0)) {
+		netdev_err(netdev, "phy_id not valid\n");
+		ret = -ENODEV;
+		goto err_phy_connect;
+	}
+	DBGPR("  phy_connect_direct succeeded for PHY %s, link=%d\n",
+	      dev_name(&phydev->dev), phydev->link);
+
+	return 0;
+
+err_phy_connect:
+	phy_disconnect(phydev);
+
+	return ret;
+}
+
+static void xgbe_phy_exit(struct xgbe_prv_data *pdata)
+{
+	if (!pdata->phydev)
+		return;
+
+	phy_disconnect(pdata->phydev);
+}
+
+int xgbe_a0_powerdown(struct net_device *netdev, unsigned int caller)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	unsigned long flags;
+
+	DBGPR("-->xgbe_a0_powerdown\n");
+
+	if (!netif_running(netdev) ||
+	    (caller == XGMAC_IOCTL_CONTEXT && pdata->power_down)) {
+		netdev_alert(netdev, "Device is already powered down\n");
+		DBGPR("<--xgbe_a0_powerdown\n");
+		return -EINVAL;
+	}
+
+	phy_stop(pdata->phydev);
+
+	spin_lock_irqsave(&pdata->lock, flags);
+
+	if (caller == XGMAC_DRIVER_CONTEXT)
+		netif_device_detach(netdev);
+
+	netif_tx_stop_all_queues(netdev);
+	xgbe_napi_disable(pdata, 0);
+
+	/* Powerdown Tx/Rx */
+	hw_if->powerdown_tx(pdata);
+	hw_if->powerdown_rx(pdata);
+
+	pdata->power_down = 1;
+
+	spin_unlock_irqrestore(&pdata->lock, flags);
+
+	DBGPR("<--xgbe_a0_powerdown\n");
+
+	return 0;
+}
+
+int xgbe_a0_powerup(struct net_device *netdev, unsigned int caller)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	unsigned long flags;
+
+	DBGPR("-->xgbe_a0_powerup\n");
+
+	if (!netif_running(netdev) ||
+	    (caller == XGMAC_IOCTL_CONTEXT && !pdata->power_down)) {
+		netdev_alert(netdev, "Device is already powered up\n");
+		DBGPR("<--xgbe_a0_powerup\n");
+		return -EINVAL;
+	}
+
+	spin_lock_irqsave(&pdata->lock, flags);
+
+	pdata->power_down = 0;
+
+	phy_start(pdata->phydev);
+
+	/* Enable Tx/Rx */
+	hw_if->powerup_tx(pdata);
+	hw_if->powerup_rx(pdata);
+
+	if (caller == XGMAC_DRIVER_CONTEXT)
+		netif_device_attach(netdev);
+
+	xgbe_napi_enable(pdata, 0);
+	netif_tx_start_all_queues(netdev);
+
+	spin_unlock_irqrestore(&pdata->lock, flags);
+
+	DBGPR("<--xgbe_a0_powerup\n");
+
+	return 0;
+}
+
+static int xgbe_start(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct net_device *netdev = pdata->netdev;
+
+	DBGPR("-->xgbe_start\n");
+
+	xgbe_set_rx_mode(netdev);
+
+	hw_if->init(pdata);
+
+	phy_start(pdata->phydev);
+
+	hw_if->enable_tx(pdata);
+	hw_if->enable_rx(pdata);
+
+	xgbe_init_tx_timers(pdata);
+
+	xgbe_napi_enable(pdata, 1);
+	netif_tx_start_all_queues(netdev);
+
+	DBGPR("<--xgbe_start\n");
+
+	return 0;
+}
+
+static void xgbe_stop(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_channel *channel;
+	struct net_device *netdev = pdata->netdev;
+	struct netdev_queue *txq;
+	unsigned int i;
+
+	DBGPR("-->xgbe_stop\n");
+
+	phy_stop(pdata->phydev);
+
+	netif_tx_stop_all_queues(netdev);
+	xgbe_napi_disable(pdata, 1);
+
+	xgbe_stop_tx_timers(pdata);
+
+	hw_if->disable_tx(pdata);
+	hw_if->disable_rx(pdata);
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		if (!channel->tx_ring)
+			continue;
+
+		txq = netdev_get_tx_queue(netdev, channel->queue_index);
+		netdev_tx_reset_queue(txq);
+	}
+
+	DBGPR("<--xgbe_stop\n");
+}
+
+static void xgbe_restart_dev(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	unsigned int i;
+
+	DBGPR("-->xgbe_restart_dev\n");
+
+	/* If not running, "restart" will happen on open */
+	if (!netif_running(pdata->netdev))
+		return;
+
+	xgbe_stop(pdata);
+	synchronize_irq(pdata->dev_irq);
+	if (pdata->per_channel_irq) {
+		channel = pdata->channel;
+		for (i = 0; i < pdata->channel_count; i++, channel++)
+			synchronize_irq(channel->dma_irq);
+	}
+
+	xgbe_free_tx_data(pdata);
+	xgbe_free_rx_data(pdata);
+
+	/* Issue software reset to device */
+	hw_if->exit(pdata);
+
+	xgbe_start(pdata);
+
+	DBGPR("<--xgbe_restart_dev\n");
+}
+
+static void xgbe_restart(struct work_struct *work)
+{
+	struct xgbe_prv_data *pdata = container_of(work,
+						   struct xgbe_prv_data,
+						   restart_work);
+
+	rtnl_lock();
+
+	xgbe_restart_dev(pdata);
+
+	rtnl_unlock();
+}
+
+static void xgbe_tx_tstamp(struct work_struct *work)
+{
+	struct xgbe_prv_data *pdata = container_of(work,
+						   struct xgbe_prv_data,
+						   tx_tstamp_work);
+	struct skb_shared_hwtstamps hwtstamps;
+	u64 nsec;
+	unsigned long flags;
+
+	if (pdata->tx_tstamp) {
+		nsec = timecounter_cyc2time(&pdata->tstamp_tc,
+					    pdata->tx_tstamp);
+
+		memset(&hwtstamps, 0, sizeof(hwtstamps));
+		hwtstamps.hwtstamp = ns_to_ktime(nsec);
+		skb_tstamp_tx(pdata->tx_tstamp_skb, &hwtstamps);
+	}
+
+	dev_kfree_skb_any(pdata->tx_tstamp_skb);
+
+	spin_lock_irqsave(&pdata->tstamp_lock, flags);
+	pdata->tx_tstamp_skb = NULL;
+	spin_unlock_irqrestore(&pdata->tstamp_lock, flags);
+}
+
+static int xgbe_get_hwtstamp_settings(struct xgbe_prv_data *pdata,
+				      struct ifreq *ifreq)
+{
+	if (copy_to_user(ifreq->ifr_data, &pdata->tstamp_config,
+			 sizeof(pdata->tstamp_config)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int xgbe_set_hwtstamp_settings(struct xgbe_prv_data *pdata,
+				      struct ifreq *ifreq)
+{
+	struct hwtstamp_config config;
+	unsigned int mac_tscr;
+
+	if (copy_from_user(&config, ifreq->ifr_data, sizeof(config)))
+		return -EFAULT;
+
+	if (config.flags)
+		return -EINVAL;
+
+	mac_tscr = 0;
+
+	switch (config.tx_type) {
+	case HWTSTAMP_TX_OFF:
+		break;
+
+	case HWTSTAMP_TX_ON:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	default:
+		return -ERANGE;
+	}
+
+	switch (config.rx_filter) {
+	case HWTSTAMP_FILTER_NONE:
+		break;
+
+	case HWTSTAMP_FILTER_ALL:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENALL, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	/* PTP v2, UDP, any kind of event packet */
+	case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSVER2ENA, 1);
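+		/* Fall through - v2 shares the v1 settings below */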
+	/* PTP v1, UDP, any kind of event packet */
+	case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV4ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV6ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, SNAPTYPSEL, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	/* PTP v2, UDP, Sync packet */
+	case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSVER2ENA, 1);
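+		/* Fall through - v2 shares the v1 settings below */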
+	/* PTP v1, UDP, Sync packet */
+	case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV4ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV6ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSEVNTENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	/* PTP v2, UDP, Delay_req packet */
+	case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSVER2ENA, 1);
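+		/* Fall through - v2 shares the v1 settings below */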
+	/* PTP v1, UDP, Delay_req packet */
+	case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV4ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV6ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSEVNTENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSMSTRENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	/* 802.1AS, Ethernet, any kind of event packet */
+	case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, AV8021ASMEN, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, SNAPTYPSEL, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	/* 802.1AS, Ethernet, Sync packet */
+	case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, AV8021ASMEN, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSEVNTENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	/* 802.1AS, Ethernet, Delay_req packet */
+	case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, AV8021ASMEN, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSMSTRENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSEVNTENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	/* PTP v2/802.1AS, any layer, any kind of event packet */
+	case HWTSTAMP_FILTER_PTP_V2_EVENT:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSVER2ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV4ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV6ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, SNAPTYPSEL, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	/* PTP v2/802.1AS, any layer, Sync packet */
+	case HWTSTAMP_FILTER_PTP_V2_SYNC:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSVER2ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV4ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV6ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSEVNTENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	/* PTP v2/802.1AS, any layer, Delay_req packet */
+	case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSVER2ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV4ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSIPV6ENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSMSTRENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSEVNTENA, 1);
+		XGMAC_SET_BITS(mac_tscr, MAC_TSCR, TSENA, 1);
+		break;
+
+	default:
+		return -ERANGE;
+	}
+
+	pdata->hw_if.config_tstamp(pdata, mac_tscr);
+
+	memcpy(&pdata->tstamp_config, &config, sizeof(config));
+
+	return 0;
+}
+
+static void xgbe_prep_tx_tstamp(struct xgbe_prv_data *pdata,
+				struct sk_buff *skb,
+				struct xgbe_packet_data *packet)
+{
+	unsigned long flags;
+
+	if (XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES, PTP)) {
+		spin_lock_irqsave(&pdata->tstamp_lock, flags);
+		if (pdata->tx_tstamp_skb) {
+			/* Another timestamp in progress, ignore this one */
+			XGMAC_SET_BITS(packet->attributes,
+				       TX_PACKET_ATTRIBUTES, PTP, 0);
+		} else {
+			pdata->tx_tstamp_skb = skb_get(skb);
+			skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+		}
+		spin_unlock_irqrestore(&pdata->tstamp_lock, flags);
+	}
+
+	if (!XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES, PTP))
+		skb_tx_timestamp(skb);
+}
+
+static void xgbe_prep_vlan(struct sk_buff *skb, struct xgbe_packet_data *packet)
+{
+	if (skb_vlan_tag_present(skb))
+		packet->vlan_ctag = skb_vlan_tag_get(skb);
+}
+
+static int xgbe_prep_tso(struct sk_buff *skb, struct xgbe_packet_data *packet)
+{
+	int ret;
+
+	if (!XGMAC_GET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			    TSO_ENABLE))
+		return 0;
+
+	ret = skb_cow_head(skb, 0);
+	if (ret)
+		return ret;
+
+	packet->header_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+	packet->tcp_header_len = tcp_hdrlen(skb);
+	packet->tcp_payload_len = skb->len - packet->header_len;
+	packet->mss = skb_shinfo(skb)->gso_size;
+	DBGPR("  packet->header_len=%u\n", packet->header_len);
+	DBGPR("  packet->tcp_header_len=%u, packet->tcp_payload_len=%u\n",
+	      packet->tcp_header_len, packet->tcp_payload_len);
+	DBGPR("  packet->mss=%u\n", packet->mss);
+
+	/* Update the number of packets that will ultimately be transmitted
+	 * along with the extra bytes for each extra packet
+	 */
+	packet->tx_packets = skb_shinfo(skb)->gso_segs;
+	packet->tx_bytes += (packet->tx_packets - 1) * packet->header_len;
+
+	return 0;
+}
+
+static int xgbe_is_tso(struct sk_buff *skb)
+{
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		return 0;
+
+	if (!skb_is_gso(skb))
+		return 0;
+
+	DBGPR("  TSO packet to be processed\n");
+
+	return 1;
+}
+
+static void xgbe_packet_info(struct xgbe_prv_data *pdata,
+			     struct xgbe_ring *ring, struct sk_buff *skb,
+			     struct xgbe_packet_data *packet)
+{
+	struct skb_frag_struct *frag;
+	unsigned int context_desc;
+	unsigned int len;
+	unsigned int i;
+
+	packet->skb = skb;
+
+	context_desc = 0;
+	packet->rdesc_count = 0;
+
+	packet->tx_packets = 1;
+	packet->tx_bytes = skb->len;
+
+	if (xgbe_is_tso(skb)) {
+		/* TSO requires an extra descriptor if mss is different */
+		if (skb_shinfo(skb)->gso_size != ring->tx.cur_mss) {
+			context_desc = 1;
+			packet->rdesc_count++;
+		}
+
+		/* TSO requires an extra descriptor for TSO header */
+		packet->rdesc_count++;
+
+		XGMAC_SET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			       TSO_ENABLE, 1);
+		XGMAC_SET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			       CSUM_ENABLE, 1);
+	} else if (skb->ip_summed == CHECKSUM_PARTIAL)
+		XGMAC_SET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			       CSUM_ENABLE, 1);
+
+	if (skb_vlan_tag_present(skb)) {
+		/* VLAN requires an extra descriptor if tag is different */
+		if (skb_vlan_tag_get(skb) != ring->tx.cur_vlan_ctag)
+			/* We can share with the TSO context descriptor */
+			if (!context_desc) {
+				context_desc = 1;
+				packet->rdesc_count++;
+			}
+
+		XGMAC_SET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			       VLAN_CTAG, 1);
+	}
+
+	if ((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+	    (pdata->tstamp_config.tx_type == HWTSTAMP_TX_ON))
+		XGMAC_SET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
+			       PTP, 1);
+
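+	/* Each buffer is carved into XGBE_TX_MAX_BUF_SIZE chunks, and
+	 * every chunk consumes one Tx descriptor.
+	 */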
+	for (len = skb_headlen(skb); len;) {
+		packet->rdesc_count++;
+		len -= min_t(unsigned int, len, XGBE_TX_MAX_BUF_SIZE);
+	}
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		frag = &skb_shinfo(skb)->frags[i];
+		for (len = skb_frag_size(frag); len; ) {
+			packet->rdesc_count++;
+			len -= min_t(unsigned int, len, XGBE_TX_MAX_BUF_SIZE);
+		}
+	}
+}
+
+static int xgbe_open(struct net_device *netdev)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_desc_if *desc_if = &pdata->desc_if;
+	struct xgbe_channel *channel = NULL;
+	unsigned int i = 0;
+	int ret;
+
+	DBGPR("-->xgbe_open\n");
+
+	/* Initialize the phy */
+	ret = xgbe_phy_init(pdata);
+	if (ret)
+		return ret;
+
+	/* Enable the clocks */
+	ret = clk_prepare_enable(pdata->sysclk);
+	if (ret) {
+		netdev_alert(netdev, "dma clk_prepare_enable failed\n");
+		goto err_phy_init;
+	}
+
+	ret = clk_prepare_enable(pdata->ptpclk);
+	if (ret) {
+		netdev_alert(netdev, "ptp clk_prepare_enable failed\n");
+		goto err_sysclk;
+	}
+
+	/* Calculate the Rx buffer size before allocating rings */
+	ret = xgbe_calc_rx_buf_size(netdev, netdev->mtu);
+	if (ret < 0)
+		goto err_ptpclk;
+	pdata->rx_buf_size = ret;
+
+	/* Allocate the channel and ring structures */
+	ret = xgbe_alloc_channels(pdata);
+	if (ret)
+		goto err_ptpclk;
+
+	/* Allocate the ring descriptors and buffers */
+	ret = desc_if->alloc_ring_resources(pdata);
+	if (ret)
+		goto err_channels;
+
+	/* Initialize the device restart and Tx timestamp work struct */
+	INIT_WORK(&pdata->restart_work, xgbe_restart);
+	INIT_WORK(&pdata->tx_tstamp_work, xgbe_tx_tstamp);
+
+	/* Request interrupts */
+	ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0,
+			       netdev->name, pdata);
+	if (ret) {
+		netdev_alert(netdev, "error requesting irq %d\n",
+			     pdata->dev_irq);
+		goto err_rings;
+	}
+
+	if (pdata->per_channel_irq) {
+		channel = pdata->channel;
+		for (i = 0; i < pdata->channel_count; i++, channel++) {
+			snprintf(channel->dma_irq_name,
+				 sizeof(channel->dma_irq_name) - 1,
+				 "%s-TxRx-%u", netdev_name(netdev),
+				 channel->queue_index);
+
+			ret = devm_request_irq(pdata->dev, channel->dma_irq,
+					       xgbe_dma_isr, 0,
+					       channel->dma_irq_name, channel);
+			if (ret) {
+				netdev_alert(netdev,
+					     "error requesting irq %d\n",
+					     channel->dma_irq);
+				goto err_irq;
+			}
+		}
+	}
+
+	ret = xgbe_start(pdata);
+	if (ret)
+		goto err_start;
+
+	DBGPR("<--xgbe_open\n");
+
+	return 0;
+
+err_start:
+	hw_if->exit(pdata);
+
+err_irq:
+	if (pdata->per_channel_irq) {
+		/* Using an unsigned int, 'i' will go to UINT_MAX and exit */
+		for (i--, channel--; i < pdata->channel_count; i--, channel--)
+			devm_free_irq(pdata->dev, channel->dma_irq, channel);
+	}
+
+	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
+
+err_rings:
+	desc_if->free_ring_resources(pdata);
+
+err_channels:
+	xgbe_free_channels(pdata);
+
+err_ptpclk:
+	clk_disable_unprepare(pdata->ptpclk);
+
+err_sysclk:
+	clk_disable_unprepare(pdata->sysclk);
+
+err_phy_init:
+	xgbe_phy_exit(pdata);
+
+	return ret;
+}
+
+static int xgbe_close(struct net_device *netdev)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_desc_if *desc_if = &pdata->desc_if;
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	DBGPR("-->xgbe_close\n");
+
+	/* Stop the device */
+	xgbe_stop(pdata);
+
+	/* Issue software reset to device */
+	hw_if->exit(pdata);
+
+	/* Free the ring descriptors and buffers */
+	desc_if->free_ring_resources(pdata);
+
+	/* Release the interrupts */
+	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
+	if (pdata->per_channel_irq) {
+		channel = pdata->channel;
+		for (i = 0; i < pdata->channel_count; i++, channel++)
+			devm_free_irq(pdata->dev, channel->dma_irq, channel);
+	}
+
+	/* Free the channel and ring structures */
+	xgbe_free_channels(pdata);
+
+	/* Disable the clocks */
+	clk_disable_unprepare(pdata->ptpclk);
+	clk_disable_unprepare(pdata->sysclk);
+
+	/* Release the phy */
+	xgbe_phy_exit(pdata);
+
+	DBGPR("<--xgbe_close\n");
+
+	return 0;
+}
+
+static int xgbe_xmit(struct sk_buff *skb, struct net_device *netdev)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_desc_if *desc_if = &pdata->desc_if;
+	struct xgbe_channel *channel;
+	struct xgbe_ring *ring;
+	struct xgbe_packet_data *packet;
+	struct netdev_queue *txq;
+	int ret;
+
+	DBGPR("-->xgbe_xmit: skb->len = %d\n", skb->len);
+
+	channel = pdata->channel + skb->queue_mapping;
+	txq = netdev_get_tx_queue(netdev, channel->queue_index);
+	ring = channel->tx_ring;
+	packet = &ring->packet_data;
+
+	ret = NETDEV_TX_OK;
+
+	if (skb->len == 0) {
+		netdev_err(netdev, "empty skb received from stack\n");
+		dev_kfree_skb_any(skb);
+		goto tx_netdev_return;
+	}
+
+	/* Calculate preliminary packet info */
+	memset(packet, 0, sizeof(*packet));
+	xgbe_packet_info(pdata, ring, skb, packet);
+
+	/* Check that there are enough descriptors available */
+	ret = xgbe_maybe_stop_tx_queue(channel, ring, packet->rdesc_count);
+	if (ret)
+		goto tx_netdev_return;
+
+	ret = xgbe_prep_tso(skb, packet);
+	if (ret) {
+		netdev_err(netdev, "error processing TSO packet\n");
+		dev_kfree_skb_any(skb);
+		goto tx_netdev_return;
+	}
+	xgbe_prep_vlan(skb, packet);
+
+	if (!desc_if->map_tx_skb(channel, skb)) {
+		dev_kfree_skb_any(skb);
+		goto tx_netdev_return;
+	}
+
+	xgbe_prep_tx_tstamp(pdata, skb, packet);
+
+	/* Report on the actual number of bytes (to be) sent */
+	netdev_tx_sent_queue(txq, packet->tx_bytes);
+
+	/* Configure required descriptor fields for transmission */
+	hw_if->dev_xmit(channel);
+
+#ifdef XGMAC_ENABLE_TX_PKT_DUMP
+	xgbe_a0_print_pkt(netdev, skb, true);
+#endif
+
+	/* Stop the queue in advance if there may not be enough descriptors */
+	xgbe_maybe_stop_tx_queue(channel, ring, XGBE_TX_MAX_DESCS);
+
+	ret = NETDEV_TX_OK;
+
+tx_netdev_return:
+	return ret;
+}
+
+static void xgbe_set_rx_mode(struct net_device *netdev)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	unsigned int pr_mode, am_mode;
+
+	DBGPR("-->xgbe_set_rx_mode\n");
+
+	pr_mode = ((netdev->flags & IFF_PROMISC) != 0);
+	am_mode = ((netdev->flags & IFF_ALLMULTI) != 0);
+
+	hw_if->set_promiscuous_mode(pdata, pr_mode);
+	hw_if->set_all_multicast_mode(pdata, am_mode);
+
+	hw_if->add_mac_addresses(pdata);
+
+	DBGPR("<--xgbe_set_rx_mode\n");
+}
+
+static int xgbe_set_mac_address(struct net_device *netdev, void *addr)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct sockaddr *saddr = addr;
+
+	DBGPR("-->xgbe_set_mac_address\n");
+
+	if (!is_valid_ether_addr(saddr->sa_data))
+		return -EADDRNOTAVAIL;
+
+	memcpy(netdev->dev_addr, saddr->sa_data, netdev->addr_len);
+
+	hw_if->set_mac_address(pdata, netdev->dev_addr);
+
+	DBGPR("<--xgbe_set_mac_address\n");
+
+	return 0;
+}
+
+static int xgbe_ioctl(struct net_device *netdev, struct ifreq *ifreq, int cmd)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	int ret;
+
+	switch (cmd) {
+	case SIOCGHWTSTAMP:
+		ret = xgbe_get_hwtstamp_settings(pdata, ifreq);
+		break;
+
+	case SIOCSHWTSTAMP:
+		ret = xgbe_set_hwtstamp_settings(pdata, ifreq);
+		break;
+
+	default:
+		ret = -EOPNOTSUPP;
+	}
+
+	return ret;
+}
+
+static int xgbe_change_mtu(struct net_device *netdev, int mtu)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	int ret;
+
+	DBGPR("-->xgbe_change_mtu\n");
+
+	ret = xgbe_calc_rx_buf_size(netdev, mtu);
+	if (ret < 0)
+		return ret;
+
+	pdata->rx_buf_size = ret;
+	netdev->mtu = mtu;
+
+	xgbe_restart_dev(pdata);
+
+	DBGPR("<--xgbe_change_mtu\n");
+
+	return 0;
+}
+
+static struct rtnl_link_stats64 *xgbe_get_stats64(struct net_device *netdev,
+						  struct rtnl_link_stats64 *s)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_mmc_stats *pstats = &pdata->mmc_stats;
+
+	DBGPR("-->%s\n", __func__);
+
+	pdata->hw_if.read_mmc_stats(pdata);
+
+	s->rx_packets = pstats->rxframecount_gb;
+	s->rx_bytes = pstats->rxoctetcount_gb;
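+	/* Approximate Rx errors as all frames received (good and bad)
+	 * minus the good unicast, multicast and broadcast counts.
+	 */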
+	s->rx_errors = pstats->rxframecount_gb -
+		       pstats->rxbroadcastframes_g -
+		       pstats->rxmulticastframes_g -
+		       pstats->rxunicastframes_g;
+	s->multicast = pstats->rxmulticastframes_g;
+	s->rx_length_errors = pstats->rxlengtherror;
+	s->rx_crc_errors = pstats->rxcrcerror;
+	s->rx_fifo_errors = pstats->rxfifooverflow;
+
+	s->tx_packets = pstats->txframecount_gb;
+	s->tx_bytes = pstats->txoctetcount_gb;
+	s->tx_errors = pstats->txframecount_gb - pstats->txframecount_g;
+	s->tx_dropped = netdev->stats.tx_dropped;
+
+	DBGPR("<--%s\n", __func__);
+
+	return s;
+}
+
+static int xgbe_vlan_rx_add_vid(struct net_device *netdev, __be16 proto,
+				u16 vid)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+
+	DBGPR("-->%s\n", __func__);
+
+	set_bit(vid, pdata->active_vlans);
+	hw_if->update_vlan_hash_table(pdata);
+
+	DBGPR("<--%s\n", __func__);
+
+	return 0;
+}
+
+static int xgbe_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto,
+				 u16 vid)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+
+	DBGPR("-->%s\n", __func__);
+
+	clear_bit(vid, pdata->active_vlans);
+	hw_if->update_vlan_hash_table(pdata);
+
+	DBGPR("<--%s\n", __func__);
+
+	return 0;
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void xgbe_poll_controller(struct net_device *netdev)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	DBGPR("-->xgbe_poll_controller\n");
+
+	if (pdata->per_channel_irq) {
+		channel = pdata->channel;
+		for (i = 0; i < pdata->channel_count; i++, channel++)
+			xgbe_dma_isr(channel->dma_irq, channel);
+	} else {
+		disable_irq(pdata->dev_irq);
+		xgbe_isr(pdata->dev_irq, pdata);
+		enable_irq(pdata->dev_irq);
+	}
+
+	DBGPR("<--xgbe_poll_controller\n");
+}
+#endif /* End CONFIG_NET_POLL_CONTROLLER */
+
+static int xgbe_setup_tc(struct net_device *netdev, u8 tc)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	unsigned int offset, queue;
+	u8 i;
+
+	if (tc && (tc != pdata->hw_feat.tc_cnt))
+		return -EINVAL;
+
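+	/* Each traffic class owns a contiguous run of Tx queues; walk the
+	 * queue-to-TC map to find where each run ends and report its
+	 * (count, offset) pair to the stack.
+	 */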
+	if (tc) {
+		netdev_set_num_tc(netdev, tc);
+		for (i = 0, queue = 0, offset = 0; i < tc; i++) {
+			while ((queue < pdata->tx_q_count) &&
+			       (pdata->q2tc_map[queue] == i))
+				queue++;
+
+			DBGPR("  TC%u using TXq%u-%u\n", i, offset, queue - 1);
+			netdev_set_tc_queue(netdev, i, queue - offset, offset);
+			offset = queue;
+		}
+	} else {
+		netdev_reset_tc(netdev);
+	}
+
+	return 0;
+}
+
+static int xgbe_set_features(struct net_device *netdev,
+			     netdev_features_t features)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	netdev_features_t rxhash, rxcsum, rxvlan, rxvlan_filter;
+	int ret = 0;
+
+	rxhash = pdata->netdev_features & NETIF_F_RXHASH;
+	rxcsum = pdata->netdev_features & NETIF_F_RXCSUM;
+	rxvlan = pdata->netdev_features & NETIF_F_HW_VLAN_CTAG_RX;
+	rxvlan_filter = pdata->netdev_features & NETIF_F_HW_VLAN_CTAG_FILTER;
+
+	if ((features & NETIF_F_RXHASH) && !rxhash)
+		ret = hw_if->enable_rss(pdata);
+	else if (!(features & NETIF_F_RXHASH) && rxhash)
+		ret = hw_if->disable_rss(pdata);
+	if (ret)
+		return ret;
+
+	if ((features & NETIF_F_RXCSUM) && !rxcsum)
+		hw_if->enable_rx_csum(pdata);
+	else if (!(features & NETIF_F_RXCSUM) && rxcsum)
+		hw_if->disable_rx_csum(pdata);
+
+	if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !rxvlan)
+		hw_if->enable_rx_vlan_stripping(pdata);
+	else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && rxvlan)
+		hw_if->disable_rx_vlan_stripping(pdata);
+
+	if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) && !rxvlan_filter)
+		hw_if->enable_rx_vlan_filtering(pdata);
+	else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && rxvlan_filter)
+		hw_if->disable_rx_vlan_filtering(pdata);
+
+	pdata->netdev_features = features;
+
+	DBGPR("<--xgbe_set_features\n");
+
+	return 0;
+}
+
+static const struct net_device_ops xgbe_netdev_ops = {
+	.ndo_open		= xgbe_open,
+	.ndo_stop		= xgbe_close,
+	.ndo_start_xmit		= xgbe_xmit,
+	.ndo_set_rx_mode	= xgbe_set_rx_mode,
+	.ndo_set_mac_address	= xgbe_set_mac_address,
+	.ndo_validate_addr	= eth_validate_addr,
+	.ndo_do_ioctl		= xgbe_ioctl,
+	.ndo_change_mtu		= xgbe_change_mtu,
+	.ndo_get_stats64	= xgbe_get_stats64,
+	.ndo_vlan_rx_add_vid	= xgbe_vlan_rx_add_vid,
+	.ndo_vlan_rx_kill_vid	= xgbe_vlan_rx_kill_vid,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+	.ndo_poll_controller	= xgbe_poll_controller,
+#endif
+	.ndo_setup_tc		= xgbe_setup_tc,
+	.ndo_set_features	= xgbe_set_features,
+};
+
+struct net_device_ops *xgbe_a0_get_netdev_ops(void)
+{
+	return (struct net_device_ops *)&xgbe_netdev_ops;
+}
+
+static void xgbe_rx_refresh(struct xgbe_channel *channel)
+{
+	struct xgbe_prv_data *pdata = channel->pdata;
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_desc_if *desc_if = &pdata->desc_if;
+	struct xgbe_ring *ring = channel->rx_ring;
+	struct xgbe_ring_data *rdata;
+
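+	/* Every descriptor between ring->dirty and ring->cur has been handed
+	 * to the stack; give each one a fresh Rx buffer and reset it so the
+	 * hardware can own it again.
+	 */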
+	while (ring->dirty != ring->cur) {
+		rdata = XGBE_GET_DESC_DATA(ring, ring->dirty);
+
+		/* Reset rdata values */
+		desc_if->unmap_rdata(pdata, rdata);
+
+		if (desc_if->map_rx_buffer(pdata, ring, rdata))
+			break;
+
+		hw_if->rx_desc_reset(rdata);
+
+		ring->dirty++;
+	}
+
+	/* Update the Rx Tail Pointer Register with address of
+	 * the last cleaned entry */
+	rdata = XGBE_GET_DESC_DATA(ring, ring->dirty - 1);
+	XGMAC_DMA_IOWRITE(channel, DMA_CH_RDTR_LO,
+			  lower_32_bits(rdata->rdesc_dma));
+}
+
+static struct sk_buff *xgbe_create_skb(struct xgbe_prv_data *pdata,
+				       struct xgbe_ring_data *rdata,
+				       unsigned int *len)
+{
+	struct net_device *netdev = pdata->netdev;
+	struct sk_buff *skb;
+	u8 *packet;
+	unsigned int copy_len;
+
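+	/* The hardware splits each packet into a header buffer and a data
+	 * buffer.  Size the skb for the header buffer, copy the header (or
+	 * the whole packet, if it fit there) into the linear area, and let
+	 * the caller attach any remaining data as page fragments.
+	 */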
+	skb = netdev_alloc_skb_ip_align(netdev, rdata->rx.hdr.dma_len);
+	if (!skb)
+		return NULL;
+
+	packet = page_address(rdata->rx.hdr.pa.pages) +
+		 rdata->rx.hdr.pa.pages_offset;
+	copy_len = (rdata->rx.hdr_len) ? rdata->rx.hdr_len : *len;
+	copy_len = min(rdata->rx.hdr.dma_len, copy_len);
+	skb_copy_to_linear_data(skb, packet, copy_len);
+	skb_put(skb, copy_len);
+
+	*len -= copy_len;
+
+	return skb;
+}
+
+static int xgbe_tx_poll(struct xgbe_channel *channel)
+{
+	struct xgbe_prv_data *pdata = channel->pdata;
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_desc_if *desc_if = &pdata->desc_if;
+	struct xgbe_ring *ring = channel->tx_ring;
+	struct xgbe_ring_data *rdata;
+	struct xgbe_ring_desc *rdesc;
+	struct net_device *netdev = pdata->netdev;
+	struct netdev_queue *txq;
+	int processed = 0;
+	unsigned int tx_packets = 0, tx_bytes = 0;
+
+	DBGPR("-->xgbe_tx_poll\n");
+
+	/* Nothing to do if there isn't a Tx ring for this channel */
+	if (!ring)
+		return 0;
+
+	txq = netdev_get_tx_queue(netdev, channel->queue_index);
+
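+	/* Reclaim completed descriptors oldest-first, capping the work at
+	 * XGBE_TX_DESC_MAX_PROC so a single poll stays bounded.
+	 */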
+	while ((processed < XGBE_TX_DESC_MAX_PROC) &&
+	       (ring->dirty != ring->cur)) {
+		rdata = XGBE_GET_DESC_DATA(ring, ring->dirty);
+		rdesc = rdata->rdesc;
+
+		if (!hw_if->tx_complete(rdesc))
+			break;
+
+		/* Make sure descriptor fields are read after reading the OWN
+		 * bit */
+		rmb();
+
+#ifdef XGMAC_ENABLE_TX_DESC_DUMP
+		xgbe_a0_dump_tx_desc(ring, ring->dirty, 1, 0);
+#endif
+
+		if (hw_if->is_last_desc(rdesc)) {
+			tx_packets += rdata->tx.packets;
+			tx_bytes += rdata->tx.bytes;
+		}
+
+		/* Free the SKB and reset the descriptor for re-use */
+		desc_if->unmap_rdata(pdata, rdata);
+		hw_if->tx_desc_reset(rdata);
+
+		processed++;
+		ring->dirty++;
+	}
+
+	if (!processed)
+		return 0;
+
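+	/* Report the completed work to BQL, then restart the Tx queue if it
+	 * was stopped and enough descriptors are free again.
+	 */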
+	netdev_tx_completed_queue(txq, tx_packets, tx_bytes);
+
+	if ((ring->tx.queue_stopped == 1) &&
+	    (xgbe_tx_avail_desc(ring) > XGBE_TX_DESC_MIN_FREE)) {
+		ring->tx.queue_stopped = 0;
+		netif_tx_wake_queue(txq);
+	}
+
+	DBGPR("<--xgbe_tx_poll: processed=%d\n", processed);
+
+	return processed;
+}
+
+static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
+{
+	struct xgbe_prv_data *pdata = channel->pdata;
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	struct xgbe_ring *ring = channel->rx_ring;
+	struct xgbe_ring_data *rdata;
+	struct xgbe_packet_data *packet;
+	struct net_device *netdev = pdata->netdev;
+	struct napi_struct *napi;
+	struct sk_buff *skb;
+	struct skb_shared_hwtstamps *hwtstamps;
+	unsigned int incomplete, error, context_next, context;
+	unsigned int len, put_len, max_len;
+	unsigned int received = 0;
+	int packet_count = 0;
+
+	DBGPR("-->xgbe_rx_poll: budget=%d\n", budget);
+
+	/* Nothing to do if there isn't a Rx ring for this channel */
+	if (!ring)
+		return 0;
+
+	napi = (pdata->per_channel_irq) ? &channel->napi : &pdata->napi;
+
+	rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
+	packet = &ring->packet_data;
+	while (packet_count < budget) {
+		DBGPR("  cur = %d\n", ring->cur);
+
+		/* First time in loop see if we need to restore state */
+		if (!received && rdata->state_saved) {
+			incomplete = rdata->state.incomplete;
+			context_next = rdata->state.context_next;
+			skb = rdata->state.skb;
+			error = rdata->state.error;
+			len = rdata->state.len;
+		} else {
+			memset(packet, 0, sizeof(*packet));
+			incomplete = 0;
+			context_next = 0;
+			skb = NULL;
+			error = 0;
+			len = 0;
+		}
+
+read_again:
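+		/* A packet may span several descriptors (split header/data,
+		 * jumbo frames, or a trailing context descriptor), so keep
+		 * reading until INCOMPLETE and CONTEXT_NEXT both clear.
+		 */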
+		rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
+
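+		/* Replenish Rx buffers in batches: only refresh once more
+		 * than an eighth of the ring is dirty, so the tail pointer
+		 * is not rewritten for every packet.
+		 */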
+		if (xgbe_rx_dirty_desc(ring) > (XGBE_RX_DESC_CNT >> 3))
+			xgbe_rx_refresh(channel);
+
+		if (hw_if->dev_read(channel))
+			break;
+
+		received++;
+		ring->cur++;
+
+		incomplete = XGMAC_GET_BITS(packet->attributes,
+					    RX_PACKET_ATTRIBUTES,
+					    INCOMPLETE);
+		context_next = XGMAC_GET_BITS(packet->attributes,
+					      RX_PACKET_ATTRIBUTES,
+					      CONTEXT_NEXT);
+		context = XGMAC_GET_BITS(packet->attributes,
+					 RX_PACKET_ATTRIBUTES,
+					 CONTEXT);
+
+		/* Earlier error, just drain the remaining data */
+		if ((incomplete || context_next) && error)
+			goto read_again;
+
+		if (error || packet->errors) {
+			if (packet->errors)
+				DBGPR("Error in received packet\n");
+			dev_kfree_skb(skb);
+			goto next_packet;
+		}
+
+		if (!context) {
+			put_len = rdata->rx.len - len;
+			len += put_len;
+
+			if (!skb) {
+				dma_sync_single_for_cpu(pdata->dev,
+							rdata->rx.hdr.dma,
+							rdata->rx.hdr.dma_len,
+							DMA_FROM_DEVICE);
+
+				skb = xgbe_create_skb(pdata, rdata, &put_len);
+				if (!skb) {
+					error = 1;
+					goto skip_data;
+				}
+			}
+
+			if (put_len) {
+				dma_sync_single_for_cpu(pdata->dev,
+							rdata->rx.buf.dma,
+							rdata->rx.buf.dma_len,
+							DMA_FROM_DEVICE);
+
+				skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+						rdata->rx.buf.pa.pages,
+						rdata->rx.buf.pa.pages_offset,
+						put_len, rdata->rx.buf.dma_len);
+				rdata->rx.buf.pa.pages = NULL;
+			}
+		}
+
+skip_data:
+		if (incomplete || context_next)
+			goto read_again;
+
+		if (!skb)
+			goto next_packet;
+
+		/* Be sure we don't exceed the configured MTU */
+		max_len = netdev->mtu + ETH_HLEN;
+		if (!(netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+		    (skb->protocol == htons(ETH_P_8021Q)))
+			max_len += VLAN_HLEN;
+
+		if (skb->len > max_len) {
+			DBGPR("packet length exceeds configured MTU\n");
+			dev_kfree_skb(skb);
+			goto next_packet;
+		}
+
+#ifdef XGMAC_ENABLE_RX_PKT_DUMP
+		xgbe_a0_print_pkt(netdev, skb, false);
+#endif
+
+		skb_checksum_none_assert(skb);
+		if (XGMAC_GET_BITS(packet->attributes,
+				   RX_PACKET_ATTRIBUTES, CSUM_DONE))
+			skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+		if (XGMAC_GET_BITS(packet->attributes,
+				   RX_PACKET_ATTRIBUTES, VLAN_CTAG))
+			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+					       packet->vlan_ctag);
+
+		if (XGMAC_GET_BITS(packet->attributes,
+				   RX_PACKET_ATTRIBUTES, RX_TSTAMP)) {
+			u64 nsec;
+
+			nsec = timecounter_cyc2time(&pdata->tstamp_tc,
+						    packet->rx_tstamp);
+			hwtstamps = skb_hwtstamps(skb);
+			hwtstamps->hwtstamp = ns_to_ktime(nsec);
+		}
+
+		if (XGMAC_GET_BITS(packet->attributes,
+				   RX_PACKET_ATTRIBUTES, RSS_HASH))
+			skb_set_hash(skb, packet->rss_hash,
+				     packet->rss_hash_type);
+
+		skb->dev = netdev;
+		skb->protocol = eth_type_trans(skb, netdev);
+		skb_record_rx_queue(skb, channel->queue_index);
+		skb_mark_napi_id(skb, napi);
+
+		netdev->last_rx = jiffies;
+		napi_gro_receive(napi, skb);
+
+next_packet:
+		packet_count++;
+	}
+
+	/* Check if we need to save state before leaving */
+	if (received && (incomplete || context_next)) {
+		rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
+		rdata->state_saved = 1;
+		rdata->state.incomplete = incomplete;
+		rdata->state.context_next = context_next;
+		rdata->state.skb = skb;
+		rdata->state.len = len;
+		rdata->state.error = error;
+	}
+
+	DBGPR("<--xgbe_rx_poll: packet_count = %d\n", packet_count);
+
+	return packet_count;
+}
+
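+/* Two NAPI poll routines: xgbe_one_poll services a single channel when each
+ * DMA channel has its own interrupt; xgbe_all_poll divides the budget across
+ * all channels when the device shares one interrupt.
+ */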
+static int xgbe_one_poll(struct napi_struct *napi, int budget)
+{
+	struct xgbe_channel *channel = container_of(napi, struct xgbe_channel,
+						    napi);
+	int processed = 0;
+
+	DBGPR("-->xgbe_one_poll: budget=%d\n", budget);
+
+	/* Cleanup Tx ring first */
+	xgbe_tx_poll(channel);
+
+	/* Process Rx ring next */
+	processed = xgbe_rx_poll(channel, budget);
+
+	/* If we processed everything, we are done */
+	if (processed < budget) {
+		/* Turn off polling */
+		napi_complete(napi);
+
+		/* Enable Tx and Rx interrupts */
+		enable_irq(channel->dma_irq);
+	}
+
+	DBGPR("<--xgbe_one_poll: received = %d\n", processed);
+
+	return processed;
+}
+
+static int xgbe_all_poll(struct napi_struct *napi, int budget)
+{
+	struct xgbe_prv_data *pdata = container_of(napi, struct xgbe_prv_data,
+						   napi);
+	struct xgbe_channel *channel;
+	int ring_budget;
+	int processed, last_processed;
+	unsigned int i;
+
+	DBGPR("-->xgbe_all_poll: budget=%d\n", budget);
+
+	processed = 0;
+	ring_budget = budget / pdata->rx_ring_count;
+	do {
+		last_processed = processed;
+
+		channel = pdata->channel;
+		for (i = 0; i < pdata->channel_count; i++, channel++) {
+			/* Cleanup Tx ring first */
+			xgbe_tx_poll(channel);
+
+			/* Process Rx ring next */
+			if (ring_budget > (budget - processed))
+				ring_budget = budget - processed;
+			processed += xgbe_rx_poll(channel, ring_budget);
+		}
+	} while ((processed < budget) && (processed != last_processed));
+
+	/* If we processed everything, we are done */
+	if (processed < budget) {
+		/* Turn off polling */
+		napi_complete(napi);
+
+		/* Enable Tx and Rx interrupts */
+		xgbe_enable_rx_tx_ints(pdata);
+	}
+
+	DBGPR("<--xgbe_all_poll: received = %d\n", processed);
+
+	return processed;
+}
+
+void xgbe_a0_dump_tx_desc(struct xgbe_ring *ring, unsigned int idx,
+		       unsigned int count, unsigned int flag)
+{
+	struct xgbe_ring_data *rdata;
+	struct xgbe_ring_desc *rdesc;
+
+	while (count--) {
+		rdata = XGBE_GET_DESC_DATA(ring, idx);
+		rdesc = rdata->rdesc;
+		pr_alert("TX_NORMAL_DESC[%d %s] = %08x:%08x:%08x:%08x\n", idx,
+			 (flag == 1) ? "QUEUED FOR TX" : "TX BY DEVICE",
+			 le32_to_cpu(rdesc->desc0), le32_to_cpu(rdesc->desc1),
+			 le32_to_cpu(rdesc->desc2), le32_to_cpu(rdesc->desc3));
+		idx++;
+	}
+}
+
+void xgbe_a0_dump_rx_desc(struct xgbe_ring *ring, struct xgbe_ring_desc *desc,
+		       unsigned int idx)
+{
+	pr_alert("RX_NORMAL_DESC[%d RX BY DEVICE] = %08x:%08x:%08x:%08x\n", idx,
+		 le32_to_cpu(desc->desc0), le32_to_cpu(desc->desc1),
+		 le32_to_cpu(desc->desc2), le32_to_cpu(desc->desc3));
+}
+
+void xgbe_a0_print_pkt(struct net_device *netdev, struct sk_buff *skb, bool tx_rx)
+{
+	struct ethhdr *eth = (struct ethhdr *)skb->data;
+	unsigned char *buf = skb->data;
+	unsigned char buffer[128];
+	unsigned int i, j;
+
+	netdev_alert(netdev, "\n************** SKB dump ****************\n");
+
+	netdev_alert(netdev, "%s packet of %d bytes\n",
+		     (tx_rx ? "TX" : "RX"), skb->len);
+
+	netdev_alert(netdev, "Dst MAC addr: %pM\n", eth->h_dest);
+	netdev_alert(netdev, "Src MAC addr: %pM\n", eth->h_source);
+	netdev_alert(netdev, "Protocol: 0x%04hx\n", ntohs(eth->h_proto));
+
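+	/* Hex-dump the payload 32 bytes per line, with extra spacing every
+	 * four and sixteen bytes for readability.
+	 */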
+	for (i = 0, j = 0; i < skb->len;) {
+		j += snprintf(buffer + j, sizeof(buffer) - j, "%02hhx",
+			      buf[i++]);
+
+		if ((i % 32) == 0) {
+			netdev_alert(netdev, "  0x%04x: %s\n", i - 32, buffer);
+			j = 0;
+		} else if ((i % 16) == 0) {
+			buffer[j++] = ' ';
+			buffer[j++] = ' ';
+		} else if ((i % 4) == 0) {
+			buffer[j++] = ' ';
+		}
+	}
+	if (i % 32)
+		netdev_alert(netdev, "  0x%04x: %s\n", i - (i % 32), buffer);
+
+	netdev_alert(netdev, "\n************** SKB dump ****************\n");
+}
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe-ethtool.c b/drivers/net/ethernet/amd/xgbe-a0/xgbe-ethtool.c
new file mode 100644
index 0000000..165ff1c
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe-ethtool.c
@@ -0,0 +1,616 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/spinlock.h>
+#include <linux/phy.h>
+#include <linux/net_tstamp.h>
+
+#include "xgbe.h"
+#include "xgbe-common.h"
+
+struct xgbe_stats {
+	char stat_string[ETH_GSTRING_LEN];
+	int stat_size;
+	int stat_offset;
+};
+
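+/* Describe one MMC counter for ethtool: its display name, the size of the
+ * field in struct xgbe_mmc_stats, and its byte offset within the private
+ * data so xgbe_get_ethtool_stats() can copy it out generically.
+ */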
+#define XGMAC_MMC_STAT(_string, _var)				\
+	{ _string,						\
+	  FIELD_SIZEOF(struct xgbe_mmc_stats, _var),		\
+	  offsetof(struct xgbe_prv_data, mmc_stats._var),	\
+	}
+
+static const struct xgbe_stats xgbe_gstring_stats[] = {
+	XGMAC_MMC_STAT("tx_bytes", txoctetcount_gb),
+	XGMAC_MMC_STAT("tx_packets", txframecount_gb),
+	XGMAC_MMC_STAT("tx_unicast_packets", txunicastframes_gb),
+	XGMAC_MMC_STAT("tx_broadcast_packets", txbroadcastframes_gb),
+	XGMAC_MMC_STAT("tx_multicast_packets", txmulticastframes_gb),
+	XGMAC_MMC_STAT("tx_vlan_packets", txvlanframes_g),
+	XGMAC_MMC_STAT("tx_64_byte_packets", tx64octets_gb),
+	XGMAC_MMC_STAT("tx_65_to_127_byte_packets", tx65to127octets_gb),
+	XGMAC_MMC_STAT("tx_128_to_255_byte_packets", tx128to255octets_gb),
+	XGMAC_MMC_STAT("tx_256_to_511_byte_packets", tx256to511octets_gb),
+	XGMAC_MMC_STAT("tx_512_to_1023_byte_packets", tx512to1023octets_gb),
+	XGMAC_MMC_STAT("tx_1024_to_max_byte_packets", tx1024tomaxoctets_gb),
+	XGMAC_MMC_STAT("tx_underflow_errors", txunderflowerror),
+	XGMAC_MMC_STAT("tx_pause_frames", txpauseframes),
+
+	XGMAC_MMC_STAT("rx_bytes", rxoctetcount_gb),
+	XGMAC_MMC_STAT("rx_packets", rxframecount_gb),
+	XGMAC_MMC_STAT("rx_unicast_packets", rxunicastframes_g),
+	XGMAC_MMC_STAT("rx_broadcast_packets", rxbroadcastframes_g),
+	XGMAC_MMC_STAT("rx_multicast_packets", rxmulticastframes_g),
+	XGMAC_MMC_STAT("rx_vlan_packets", rxvlanframes_gb),
+	XGMAC_MMC_STAT("rx_64_byte_packets", rx64octets_gb),
+	XGMAC_MMC_STAT("rx_65_to_127_byte_packets", rx65to127octets_gb),
+	XGMAC_MMC_STAT("rx_128_to_255_byte_packets", rx128to255octets_gb),
+	XGMAC_MMC_STAT("rx_256_to_511_byte_packets", rx256to511octets_gb),
+	XGMAC_MMC_STAT("rx_512_to_1023_byte_packets", rx512to1023octets_gb),
+	XGMAC_MMC_STAT("rx_1024_to_max_byte_packets", rx1024tomaxoctets_gb),
+	XGMAC_MMC_STAT("rx_undersize_packets", rxundersize_g),
+	XGMAC_MMC_STAT("rx_oversize_packets", rxoversize_g),
+	XGMAC_MMC_STAT("rx_crc_errors", rxcrcerror),
+	XGMAC_MMC_STAT("rx_crc_errors_small_packets", rxrunterror),
+	XGMAC_MMC_STAT("rx_crc_errors_giant_packets", rxjabbererror),
+	XGMAC_MMC_STAT("rx_length_errors", rxlengtherror),
+	XGMAC_MMC_STAT("rx_out_of_range_errors", rxoutofrangetype),
+	XGMAC_MMC_STAT("rx_fifo_overflow_errors", rxfifooverflow),
+	XGMAC_MMC_STAT("rx_watchdog_errors", rxwatchdogerror),
+	XGMAC_MMC_STAT("rx_pause_frames", rxpauseframes),
+};
+
+#define XGBE_STATS_COUNT	ARRAY_SIZE(xgbe_gstring_stats)
+
+static void xgbe_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+{
+	int i;
+
+	DBGPR("-->%s\n", __func__);
+
+	switch (stringset) {
+	case ETH_SS_STATS:
+		for (i = 0; i < XGBE_STATS_COUNT; i++) {
+			memcpy(data, xgbe_gstring_stats[i].stat_string,
+			       ETH_GSTRING_LEN);
+			data += ETH_GSTRING_LEN;
+		}
+		break;
+	}
+
+	DBGPR("<--%s\n", __func__);
+}
+
+static void xgbe_get_ethtool_stats(struct net_device *netdev,
+				   struct ethtool_stats *stats, u64 *data)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	u8 *stat;
+	int i;
+
+	DBGPR("-->%s\n", __func__);
+
+	pdata->hw_if.read_mmc_stats(pdata);
+	for (i = 0; i < XGBE_STATS_COUNT; i++) {
+		stat = (u8 *)pdata + xgbe_gstring_stats[i].stat_offset;
+		*data++ = *(u64 *)stat;
+	}
+
+	DBGPR("<--%s\n", __func__);
+}
+
+static int xgbe_get_sset_count(struct net_device *netdev, int stringset)
+{
+	int ret;
+
+	DBGPR("-->%s\n", __func__);
+
+	switch (stringset) {
+	case ETH_SS_STATS:
+		ret = XGBE_STATS_COUNT;
+		break;
+
+	default:
+		ret = -EOPNOTSUPP;
+	}
+
+	DBGPR("<--%s\n", __func__);
+
+	return ret;
+}
+
+static void xgbe_get_pauseparam(struct net_device *netdev,
+				struct ethtool_pauseparam *pause)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+
+	DBGPR("-->xgbe_get_pauseparam\n");
+
+	pause->autoneg = pdata->pause_autoneg;
+	pause->tx_pause = pdata->tx_pause;
+	pause->rx_pause = pdata->rx_pause;
+
+	DBGPR("<--xgbe_get_pauseparam\n");
+}
+
+static int xgbe_set_pauseparam(struct net_device *netdev,
+			       struct ethtool_pauseparam *pause)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct phy_device *phydev = pdata->phydev;
+	int ret = 0;
+
+	DBGPR("-->xgbe_set_pauseparam\n");
+
+	DBGPR("  autoneg = %d, tx_pause = %d, rx_pause = %d\n",
+	      pause->autoneg, pause->tx_pause, pause->rx_pause);
+
+	pdata->pause_autoneg = pause->autoneg;
+	if (pause->autoneg) {
+		phydev->advertising |= ADVERTISED_Pause;
+		phydev->advertising |= ADVERTISED_Asym_Pause;
+
+	} else {
+		phydev->advertising &= ~ADVERTISED_Pause;
+		phydev->advertising &= ~ADVERTISED_Asym_Pause;
+
+		pdata->tx_pause = pause->tx_pause;
+		pdata->rx_pause = pause->rx_pause;
+	}
+
+	if (netif_running(netdev))
+		ret = phy_start_aneg(phydev);
+
+	DBGPR("<--xgbe_set_pauseparam\n");
+
+	return ret;
+}
+
+static int xgbe_get_settings(struct net_device *netdev,
+			     struct ethtool_cmd *cmd)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	int ret;
+
+	DBGPR("-->xgbe_get_settings\n");
+
+	if (!pdata->phydev)
+		return -ENODEV;
+
+	ret = phy_ethtool_gset(pdata->phydev, cmd);
+	cmd->transceiver = XCVR_EXTERNAL;
+
+	DBGPR("<--xgbe_get_settings\n");
+
+	return ret;
+}
+
+static int xgbe_set_settings(struct net_device *netdev,
+			     struct ethtool_cmd *cmd)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct phy_device *phydev = pdata->phydev;
+	u32 speed;
+	int ret;
+
+	DBGPR("-->xgbe_set_settings\n");
+
+	if (!pdata->phydev)
+		return -ENODEV;
+
+	speed = ethtool_cmd_speed(cmd);
+
+	if (cmd->phy_address != phydev->addr)
+		return -EINVAL;
+
+	if ((cmd->autoneg != AUTONEG_ENABLE) &&
+	    (cmd->autoneg != AUTONEG_DISABLE))
+		return -EINVAL;
+
+	if (cmd->autoneg == AUTONEG_DISABLE) {
+		switch (speed) {
+		case SPEED_10000:
+		case SPEED_2500:
+		case SPEED_1000:
+			break;
+		default:
+			return -EINVAL;
+		}
+
+		if (cmd->duplex != DUPLEX_FULL)
+			return -EINVAL;
+	}
+
+	cmd->advertising &= phydev->supported;
+	if ((cmd->autoneg == AUTONEG_ENABLE) && !cmd->advertising)
+		return -EINVAL;
+
+	ret = 0;
+	phydev->autoneg = cmd->autoneg;
+	phydev->speed = speed;
+	phydev->duplex = cmd->duplex;
+	phydev->advertising = cmd->advertising;
+
+	if (cmd->autoneg == AUTONEG_ENABLE)
+		phydev->advertising |= ADVERTISED_Autoneg;
+	else
+		phydev->advertising &= ~ADVERTISED_Autoneg;
+
+	if (netif_running(netdev))
+		ret = phy_start_aneg(phydev);
+
+	DBGPR("<--xgbe_set_settings\n");
+
+	return ret;
+}
+
+static void xgbe_get_drvinfo(struct net_device *netdev,
+			     struct ethtool_drvinfo *drvinfo)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_features *hw_feat = &pdata->hw_feat;
+
+	strlcpy(drvinfo->driver, XGBE_DRV_NAME, sizeof(drvinfo->driver));
+	strlcpy(drvinfo->version, XGBE_DRV_VERSION, sizeof(drvinfo->version));
+	strlcpy(drvinfo->bus_info, dev_name(pdata->dev),
+		sizeof(drvinfo->bus_info));
+	snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version), "%d.%d.%d",
+		 XGMAC_GET_BITS(hw_feat->version, MAC_VR, USERVER),
+		 XGMAC_GET_BITS(hw_feat->version, MAC_VR, DEVID),
+		 XGMAC_GET_BITS(hw_feat->version, MAC_VR, SNPSVER));
+	drvinfo->n_stats = XGBE_STATS_COUNT;
+}
+
+static int xgbe_get_coalesce(struct net_device *netdev,
+			     struct ethtool_coalesce *ec)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	unsigned int riwt;
+
+	DBGPR("-->xgbe_get_coalesce\n");
+
+	memset(ec, 0, sizeof(struct ethtool_coalesce));
+
+	riwt = pdata->rx_riwt;
+	ec->rx_coalesce_usecs = hw_if->riwt_to_usec(pdata, riwt);
+	ec->rx_max_coalesced_frames = pdata->rx_frames;
+
+	ec->tx_coalesce_usecs = pdata->tx_usecs;
+	ec->tx_max_coalesced_frames = pdata->tx_frames;
+
+	DBGPR("<--xgbe_get_coalesce\n");
+
+	return 0;
+}
+
+static int xgbe_set_coalesce(struct net_device *netdev,
+			     struct ethtool_coalesce *ec)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	unsigned int rx_frames, rx_riwt, rx_usecs;
+	unsigned int tx_frames, tx_usecs;
+
+	DBGPR("-->xgbe_set_coalesce\n");
+
+	/* Reject coalescing parameters the hardware does not support */
+	if ((ec->rx_coalesce_usecs_irq) ||
+	    (ec->rx_max_coalesced_frames_irq) ||
+	    (ec->tx_coalesce_usecs_irq) ||
+	    (ec->tx_max_coalesced_frames_irq) ||
+	    (ec->stats_block_coalesce_usecs) ||
+	    (ec->use_adaptive_rx_coalesce) ||
+	    (ec->use_adaptive_tx_coalesce) ||
+	    (ec->pkt_rate_low) ||
+	    (ec->rx_coalesce_usecs_low) ||
+	    (ec->rx_max_coalesced_frames_low) ||
+	    (ec->tx_coalesce_usecs_low) ||
+	    (ec->tx_max_coalesced_frames_low) ||
+	    (ec->pkt_rate_high) ||
+	    (ec->rx_coalesce_usecs_high) ||
+	    (ec->rx_max_coalesced_frames_high) ||
+	    (ec->tx_coalesce_usecs_high) ||
+	    (ec->tx_max_coalesced_frames_high) ||
+	    (ec->rate_sample_interval))
+		return -EOPNOTSUPP;
+
+	/* Can only change rx-frames when interface is down (see
+	 * rx_descriptor_init in xgbe-dev.c)
+	 */
+	rx_frames = pdata->rx_frames;
+	if (rx_frames != ec->rx_max_coalesced_frames && netif_running(netdev)) {
+		netdev_alert(netdev,
+			     "interface must be down to change rx-frames\n");
+		return -EINVAL;
+	}
+
+	rx_riwt = hw_if->usec_to_riwt(pdata, ec->rx_coalesce_usecs);
+	rx_frames = ec->rx_max_coalesced_frames;
+
+	/* Use smallest possible value if conversion resulted in zero */
+	if (ec->rx_coalesce_usecs && !rx_riwt)
+		rx_riwt = 1;
+
+	/* Check the bounds of values for Rx */
+	if (rx_riwt > XGMAC_MAX_DMA_RIWT) {
+		rx_usecs = hw_if->riwt_to_usec(pdata, XGMAC_MAX_DMA_RIWT);
+		netdev_alert(netdev, "rx-usec is limited to %d usecs\n",
+			     rx_usecs);
+		return -EINVAL;
+	}
+	if (rx_frames > pdata->rx_desc_count) {
+		netdev_alert(netdev, "rx-frames is limited to %d frames\n",
+			     pdata->rx_desc_count);
+		return -EINVAL;
+	}
+
+	tx_usecs = ec->tx_coalesce_usecs;
+	tx_frames = ec->tx_max_coalesced_frames;
+
+	/* Check the bounds of values for Tx */
+	if (tx_frames > pdata->tx_desc_count) {
+		netdev_alert(netdev, "tx-frames is limited to %d frames\n",
+			     pdata->tx_desc_count);
+		return -EINVAL;
+	}
+
+	pdata->rx_riwt = rx_riwt;
+	pdata->rx_frames = rx_frames;
+	hw_if->config_rx_coalesce(pdata);
+
+	pdata->tx_usecs = tx_usecs;
+	pdata->tx_frames = tx_frames;
+	hw_if->config_tx_coalesce(pdata);
+
+	DBGPR("<--xgbe_set_coalesce\n");
+
+	return 0;
+}
+
+static int xgbe_get_rxnfc(struct net_device *netdev,
+			  struct ethtool_rxnfc *rxnfc, u32 *rule_locs)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+
+	switch (rxnfc->cmd) {
+	case ETHTOOL_GRXRINGS:
+		rxnfc->data = pdata->rx_ring_count;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
+static u32 xgbe_get_rxfh_key_size(struct net_device *netdev)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+
+	return sizeof(pdata->rss_key);
+}
+
+static u32 xgbe_get_rxfh_indir_size(struct net_device *netdev)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+
+	return ARRAY_SIZE(pdata->rss_table);
+}
+
+static int xgbe_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
+			 u8 *hfunc)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	unsigned int i;
+
+	if (indir) {
+		for (i = 0; i < ARRAY_SIZE(pdata->rss_table); i++)
+			indir[i] = XGMAC_GET_BITS(pdata->rss_table[i],
+						  MAC_RSSDR, DMCH);
+	}
+
+	if (key)
+		memcpy(key, pdata->rss_key, sizeof(pdata->rss_key));
+
+	if (hfunc)
+		*hfunc = ETH_RSS_HASH_TOP;
+
+	return 0;
+}
+
+static int xgbe_set_rxfh(struct net_device *netdev, const u32 *indir,
+			 const u8 *key, const u8 hfunc)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	unsigned int ret;
+
+	if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP)
+		return -EOPNOTSUPP;
+
+	if (indir) {
+		ret = hw_if->set_rss_lookup_table(pdata, indir);
+		if (ret)
+			return ret;
+	}
+
+	if (key) {
+		ret = hw_if->set_rss_hash_key(pdata, key);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int xgbe_get_ts_info(struct net_device *netdev,
+			    struct ethtool_ts_info *ts_info)
+{
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+
+	ts_info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE |
+				   SOF_TIMESTAMPING_RX_SOFTWARE |
+				   SOF_TIMESTAMPING_SOFTWARE |
+				   SOF_TIMESTAMPING_TX_HARDWARE |
+				   SOF_TIMESTAMPING_RX_HARDWARE |
+				   SOF_TIMESTAMPING_RAW_HARDWARE;
+
+	if (pdata->ptp_clock)
+		ts_info->phc_index = ptp_clock_index(pdata->ptp_clock);
+	else
+		ts_info->phc_index = -1;
+
+	ts_info->tx_types = (1 << HWTSTAMP_TX_OFF) | (1 << HWTSTAMP_TX_ON);
+	ts_info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+			      (1 << HWTSTAMP_FILTER_PTP_V1_L4_EVENT) |
+			      (1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+			      (1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+			      (1 << HWTSTAMP_FILTER_PTP_V2_L4_EVENT) |
+			      (1 << HWTSTAMP_FILTER_PTP_V2_L4_SYNC) |
+			      (1 << HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ) |
+			      (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
+			      (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
+			      (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ) |
+			      (1 << HWTSTAMP_FILTER_ALL);
+
+	return 0;
+}
+
+static const struct ethtool_ops xgbe_ethtool_ops = {
+	.get_settings = xgbe_get_settings,
+	.set_settings = xgbe_set_settings,
+	.get_drvinfo = xgbe_get_drvinfo,
+	.get_link = ethtool_op_get_link,
+	.get_coalesce = xgbe_get_coalesce,
+	.set_coalesce = xgbe_set_coalesce,
+	.get_pauseparam = xgbe_get_pauseparam,
+	.set_pauseparam = xgbe_set_pauseparam,
+	.get_strings = xgbe_get_strings,
+	.get_ethtool_stats = xgbe_get_ethtool_stats,
+	.get_sset_count = xgbe_get_sset_count,
+	.get_rxnfc = xgbe_get_rxnfc,
+	.get_rxfh_key_size = xgbe_get_rxfh_key_size,
+	.get_rxfh_indir_size = xgbe_get_rxfh_indir_size,
+	.get_rxfh = xgbe_get_rxfh,
+	.set_rxfh = xgbe_set_rxfh,
+	.get_ts_info = xgbe_get_ts_info,
+};
+
+struct ethtool_ops *xgbe_a0_get_ethtool_ops(void)
+{
+	return (struct ethtool_ops *)&xgbe_ethtool_ops;
+}
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe-main.c b/drivers/net/ethernet/amd/xgbe-a0/xgbe-main.c
new file mode 100644
index 0000000..a85fb49
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe-main.c
@@ -0,0 +1,643 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/spinlock.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_net.h>
+#include <linux/of_address.h>
+#include <linux/clk.h>
+#include <linux/property.h>
+#include <linux/acpi.h>
+
+#include "xgbe.h"
+#include "xgbe-common.h"
+
+MODULE_AUTHOR("Tom Lendacky <thomas.lendacky@amd.com>");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_VERSION(XGBE_DRV_VERSION);
+MODULE_DESCRIPTION(XGBE_DRV_DESC);
+
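+/* Optional module parameter to force the link speed; the default of 0 (or
+ * any unlisted value) leaves auto-negotiation enabled.
+ */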
+unsigned int speed;
+module_param(speed, uint, 0444);
+MODULE_PARM_DESC(speed, "Select operating speed (1=1GbE, 2=2.5GbE, 10=10GbE; any other value implies auto-negotiation)");
+
+static void xgbe_default_config(struct xgbe_prv_data *pdata)
+{
+	DBGPR("-->xgbe_default_config\n");
+
+	pdata->pblx8 = DMA_PBL_X8_ENABLE;
+	pdata->tx_sf_mode = MTL_TSF_ENABLE;
+	pdata->tx_threshold = MTL_TX_THRESHOLD_64;
+	pdata->tx_pbl = DMA_PBL_16;
+	pdata->tx_osp_mode = DMA_OSP_ENABLE;
+	pdata->rx_sf_mode = MTL_RSF_DISABLE;
+	pdata->rx_threshold = MTL_RX_THRESHOLD_64;
+	pdata->rx_pbl = DMA_PBL_16;
+	pdata->pause_autoneg = 1;
+	pdata->tx_pause = 1;
+	pdata->rx_pause = 1;
+	pdata->phy_speed = SPEED_UNKNOWN;
+	pdata->power_down = 0;
+
+	if (speed == 10) {
+		pdata->default_autoneg = AUTONEG_DISABLE;
+		pdata->default_speed = SPEED_10000;
+	} else if (speed == 2) {
+		pdata->default_autoneg = AUTONEG_DISABLE;
+		pdata->default_speed = SPEED_2500;
+	} else if (speed == 1) {
+		pdata->default_autoneg = AUTONEG_DISABLE;
+		pdata->default_speed = SPEED_1000;
+	} else {
+		pdata->default_autoneg = AUTONEG_ENABLE;
+		pdata->default_speed = SPEED_10000;
+	}
+
+	DBGPR("<--xgbe_default_config\n");
+}
+
+static void xgbe_init_all_fptrs(struct xgbe_prv_data *pdata)
+{
+	xgbe_a0_init_function_ptrs_dev(&pdata->hw_if);
+	xgbe_a0_init_function_ptrs_desc(&pdata->desc_if);
+}
+
+#ifdef CONFIG_ACPI
+static int xgbe_acpi_support(struct xgbe_prv_data *pdata)
+{
+	struct acpi_device *adev = pdata->adev;
+	struct device *dev = pdata->dev;
+	u32 property;
+	acpi_handle handle;
+	acpi_status status;
+	unsigned long long data;
+	int cca;
+	int ret;
+
+	/* Obtain the system clock setting */
+	ret = device_property_read_u32(dev, XGBE_ACPI_DMA_FREQ, &property);
+	if (ret) {
+		dev_err(dev, "unable to obtain %s property\n",
+			XGBE_ACPI_DMA_FREQ);
+		return ret;
+	}
+	pdata->sysclk_rate = property;
+
+	/* Obtain the PTP clock setting */
+	ret = device_property_read_u32(dev, XGBE_ACPI_PTP_FREQ, &property);
+	if (ret) {
+		dev_err(dev, "unable to obtain %s property\n",
+			XGBE_ACPI_PTP_FREQ);
+		return ret;
+	}
+	pdata->ptpclk_rate = property;
+
+	/* Retrieve the device cache coherency value */
+	handle = adev->handle;
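+	/* _CCA may be declared on an ancestor device, so walk up the ACPI
+	 * namespace until it is found or the root is reached.
+	 */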
+	do {
+		status = acpi_evaluate_integer(handle, "_CCA", NULL, &data);
+		if (!ACPI_FAILURE(status)) {
+			cca = data;
+			break;
+		}
+
+		status = acpi_get_parent(handle, &handle);
+	} while (!ACPI_FAILURE(status));
+
+	if (ACPI_FAILURE(status)) {
+		dev_err(dev, "error obtaining acpi coherency value\n");
+		return -EINVAL;
+	}
+	pdata->coherent = !!cca;
+
+	return 0;
+}
+#else   /* CONFIG_ACPI */
+static int xgbe_acpi_support(struct xgbe_prv_data *pdata)
+{
+	return -EINVAL;
+}
+#endif  /* CONFIG_ACPI */
+
+#ifdef CONFIG_OF
+static int xgbe_of_support(struct xgbe_prv_data *pdata)
+{
+	struct device *dev = pdata->dev;
+
+	/* Obtain the system clock setting */
+	pdata->sysclk = devm_clk_get(dev, XGBE_DMA_CLOCK);
+	if (IS_ERR(pdata->sysclk)) {
+		dev_err(dev, "dma devm_clk_get failed\n");
+		return PTR_ERR(pdata->sysclk);
+	}
+	pdata->sysclk_rate = clk_get_rate(pdata->sysclk);
+
+	/* Obtain the PTP clock setting */
+	pdata->ptpclk = devm_clk_get(dev, XGBE_PTP_CLOCK);
+	if (IS_ERR(pdata->ptpclk)) {
+		dev_err(dev, "ptp devm_clk_get failed\n");
+		return PTR_ERR(pdata->ptpclk);
+	}
+	pdata->ptpclk_rate = clk_get_rate(pdata->ptpclk);
+
+	/* Retrieve the device cache coherency value */
+	pdata->coherent = of_dma_is_coherent(dev->of_node);
+
+	return 0;
+}
+#else   /* CONFIG_OF */
+static int xgbe_of_support(struct xgbe_prv_data *pdata)
+{
+	return -EINVAL;
+}
+#endif  /* CONFIG_OF */
+
+static int xgbe_probe(struct platform_device *pdev)
+{
+	struct xgbe_prv_data *pdata;
+	struct xgbe_hw_if *hw_if;
+	struct xgbe_desc_if *desc_if;
+	struct net_device *netdev;
+	struct device *dev = &pdev->dev;
+	struct resource *res;
+	const char *phy_mode;
+	unsigned int i;
+	int ret;
+
+	DBGPR("--> xgbe_probe\n");
+
+	netdev = alloc_etherdev_mq(sizeof(struct xgbe_prv_data),
+				   XGBE_MAX_DMA_CHANNELS);
+	if (!netdev) {
+		dev_err(dev, "alloc_etherdev failed\n");
+		ret = -ENOMEM;
+		goto err_alloc;
+	}
+	SET_NETDEV_DEV(netdev, dev);
+	pdata = netdev_priv(netdev);
+	pdata->netdev = netdev;
+	pdata->pdev = pdev;
+	pdata->adev = ACPI_COMPANION(dev);
+	pdata->dev = dev;
+	platform_set_drvdata(pdev, netdev);
+
+	spin_lock_init(&pdata->lock);
+	mutex_init(&pdata->xpcs_mutex);
+	mutex_init(&pdata->rss_mutex);
+	spin_lock_init(&pdata->tstamp_lock);
+
+	/* Check if we should use ACPI or DT */
+	pdata->use_acpi = (!pdata->adev || acpi_disabled) ? 0 : 1;
+
+	/* Set and validate the number of descriptors for a ring */
+	BUILD_BUG_ON_NOT_POWER_OF_2(XGBE_TX_DESC_CNT);
+	pdata->tx_desc_count = XGBE_TX_DESC_CNT;
+	if (pdata->tx_desc_count & (pdata->tx_desc_count - 1)) {
+		dev_err(dev, "tx descriptor count (%d) is not valid\n",
+			pdata->tx_desc_count);
+		ret = -EINVAL;
+		goto err_io;
+	}
+	BUILD_BUG_ON_NOT_POWER_OF_2(XGBE_RX_DESC_CNT);
+	pdata->rx_desc_count = XGBE_RX_DESC_CNT;
+	if (pdata->rx_desc_count & (pdata->rx_desc_count - 1)) {
+		dev_err(dev, "rx descriptor count (%d) is not valid\n",
+			pdata->rx_desc_count);
+		ret = -EINVAL;
+		goto err_io;
+	}
+
+	/* Obtain the mmio areas for the device */
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	pdata->xgmac_regs = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pdata->xgmac_regs)) {
+		dev_err(dev, "xgmac ioremap failed\n");
+		ret = PTR_ERR(pdata->xgmac_regs);
+		goto err_io;
+	}
+	DBGPR("  xgmac_regs = %p\n", pdata->xgmac_regs);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+	pdata->xpcs_regs = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pdata->xpcs_regs)) {
+		dev_err(dev, "xpcs ioremap failed\n");
+		ret = PTR_ERR(pdata->xpcs_regs);
+		goto err_io;
+	}
+	DBGPR("  xpcs_regs  = %p\n", pdata->xpcs_regs);
+
+	/* Retrieve the MAC address */
+	ret = device_property_read_u8_array(dev, XGBE_MAC_ADDR_PROPERTY,
+					    pdata->mac_addr,
+					    sizeof(pdata->mac_addr));
+	if (ret || !is_valid_ether_addr(pdata->mac_addr)) {
+		dev_err(dev, "invalid %s property\n", XGBE_MAC_ADDR_PROPERTY);
+		if (!ret)
+			ret = -EINVAL;
+		goto err_io;
+	}
+
+	/* Retrieve the PHY mode - it must be "xgmii" */
+	ret = device_property_read_string(dev, XGBE_PHY_MODE_PROPERTY,
+					  &phy_mode);
+	if (ret || strcmp(phy_mode, phy_modes(PHY_INTERFACE_MODE_XGMII))) {
+		dev_err(dev, "invalid %s property\n", XGBE_PHY_MODE_PROPERTY);
+		if (!ret)
+			ret = -EINVAL;
+		goto err_io;
+	}
+	pdata->phy_mode = PHY_INTERFACE_MODE_XGMII;
+
+	/* Check for per channel interrupt support */
+	if (device_property_present(dev, XGBE_DMA_IRQS_PROPERTY))
+		pdata->per_channel_irq = 1;
+
+	/* Obtain device settings unique to ACPI/OF */
+	if (pdata->use_acpi)
+		ret = xgbe_acpi_support(pdata);
+	else
+		ret = xgbe_of_support(pdata);
+	if (ret)
+		goto err_io;
+
+	/* Set the DMA coherency values */
+	if (pdata->coherent) {
+		pdata->axdomain = XGBE_DMA_OS_AXDOMAIN;
+		pdata->arcache = XGBE_DMA_OS_ARCACHE;
+		pdata->awcache = XGBE_DMA_OS_AWCACHE;
+	} else {
+		pdata->axdomain = XGBE_DMA_SYS_AXDOMAIN;
+		pdata->arcache = XGBE_DMA_SYS_ARCACHE;
+		pdata->awcache = XGBE_DMA_SYS_AWCACHE;
+	}
+
+	/* Set the DMA mask */
+	if (!dev->dma_mask)
+		dev->dma_mask = &dev->coherent_dma_mask;
+	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
+	if (ret) {
+		dev_err(dev, "dma_set_mask_and_coherent failed\n");
+		goto err_io;
+	}
+
+	/* Get the device interrupt */
+	ret = platform_get_irq(pdev, 0);
+	if (ret < 0) {
+		dev_err(dev, "platform_get_irq 0 failed\n");
+		goto err_io;
+	}
+	pdata->dev_irq = ret;
+
+	netdev->irq = pdata->dev_irq;
+	netdev->base_addr = (unsigned long)pdata->xgmac_regs;
+	memcpy(netdev->dev_addr, pdata->mac_addr, netdev->addr_len);
+
+	/* Set all the function pointers */
+	xgbe_init_all_fptrs(pdata);
+	hw_if = &pdata->hw_if;
+	desc_if = &pdata->desc_if;
+
+	/* Issue software reset to device */
+	hw_if->exit(pdata);
+
+	/* Populate the hardware features */
+	xgbe_a0_get_all_hw_features(pdata);
+
+	/* Set default configuration data */
+	xgbe_default_config(pdata);
+
+	/* Calculate the number of Tx and Rx rings to be created
+	 *  -Tx (DMA) Channels map 1-to-1 to Tx Queues so set
+	 *   the number of Tx queues to the number of Tx channels
+	 *   enabled
+	 *  -Rx (DMA) Channels do not map 1-to-1 so use the actual
+	 *   number of Rx queues
+	 */
+	pdata->tx_ring_count = min_t(unsigned int, num_online_cpus(),
+				     pdata->hw_feat.tx_ch_cnt);
+	pdata->tx_q_count = pdata->tx_ring_count;
+	ret = netif_set_real_num_tx_queues(netdev, pdata->tx_ring_count);
+	if (ret) {
+		dev_err(dev, "error setting real tx queue count\n");
+		goto err_io;
+	}
+
+	pdata->rx_ring_count = min_t(unsigned int,
+				     netif_get_num_default_rss_queues(),
+				     pdata->hw_feat.rx_ch_cnt);
+	pdata->rx_q_count = pdata->hw_feat.rx_q_cnt;
+	ret = netif_set_real_num_rx_queues(netdev, pdata->rx_ring_count);
+	if (ret) {
+		dev_err(dev, "error setting real rx queue count\n");
+		goto err_io;
+	}
+
+	/* Initialize RSS hash key and lookup table */
+	netdev_rss_key_fill(pdata->rss_key, sizeof(pdata->rss_key));
+
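+	/* Spread the indirection table evenly over the active Rx rings:
+	 * entry i steers its hash bucket to ring (i % rx_ring_count).
+	 */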
+	for (i = 0; i < XGBE_RSS_MAX_TABLE_SIZE; i++)
+		XGMAC_SET_BITS(pdata->rss_table[i], MAC_RSSDR, DMCH,
+			       i % pdata->rx_ring_count);
+
+	XGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
+	XGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
+	XGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
+
+	/* Prepare to register with MDIO */
+	pdata->mii_bus_id = kasprintf(GFP_KERNEL, "%s", pdev->name);
+	if (!pdata->mii_bus_id) {
+		dev_err(dev, "failed to allocate mii bus id\n");
+		ret = -ENOMEM;
+		goto err_io;
+	}
+	ret = xgbe_a0_mdio_register(pdata);
+	if (ret)
+		goto err_bus_id;
+
+	/* Set device operations */
+	netdev->netdev_ops = xgbe_a0_get_netdev_ops();
+	netdev->ethtool_ops = xgbe_a0_get_ethtool_ops();
+#ifdef CONFIG_AMD_XGBE_DCB
+	netdev->dcbnl_ops = xgbe_a0_get_dcbnl_ops();
+#endif
+
+	/* Set device features */
+	netdev->hw_features = NETIF_F_SG |
+			      NETIF_F_IP_CSUM |
+			      NETIF_F_IPV6_CSUM |
+			      NETIF_F_RXCSUM |
+			      NETIF_F_TSO |
+			      NETIF_F_TSO6 |
+			      NETIF_F_GRO |
+			      NETIF_F_HW_VLAN_CTAG_RX |
+			      NETIF_F_HW_VLAN_CTAG_TX |
+			      NETIF_F_HW_VLAN_CTAG_FILTER;
+
+	if (pdata->hw_feat.rss)
+		netdev->hw_features |= NETIF_F_RXHASH;
+
+	netdev->vlan_features |= NETIF_F_SG |
+				 NETIF_F_IP_CSUM |
+				 NETIF_F_IPV6_CSUM |
+				 NETIF_F_TSO |
+				 NETIF_F_TSO6;
+
+	netdev->features |= netdev->hw_features;
+	pdata->netdev_features = netdev->features;
+
+	netdev->priv_flags |= IFF_UNICAST_FLT;
+
+	xgbe_a0_init_rx_coalesce(pdata);
+	xgbe_a0_init_tx_coalesce(pdata);
+
+	netif_carrier_off(netdev);
+	ret = register_netdev(netdev);
+	if (ret) {
+		dev_err(dev, "net device registration failed\n");
+		goto err_reg_netdev;
+	}
+
+	xgbe_a0_ptp_register(pdata);
+
+	xgbe_a0_debugfs_init(pdata);
+
+	netdev_notice(netdev, "net device enabled\n");
+
+	DBGPR("<-- xgbe_probe\n");
+
+	return 0;
+
+err_reg_netdev:
+	xgbe_a0_mdio_unregister(pdata);
+
+err_bus_id:
+	kfree(pdata->mii_bus_id);
+
+err_io:
+	free_netdev(netdev);
+
+err_alloc:
+	dev_notice(dev, "net device not enabled\n");
+
+	return ret;
+}
+
+static int xgbe_remove(struct platform_device *pdev)
+{
+	struct net_device *netdev = platform_get_drvdata(pdev);
+	struct xgbe_prv_data *pdata = netdev_priv(netdev);
+
+	DBGPR("-->xgbe_remove\n");
+
+	xgbe_a0_debugfs_exit(pdata);
+
+	xgbe_a0_ptp_unregister(pdata);
+
+	unregister_netdev(netdev);
+
+	xgbe_a0_mdio_unregister(pdata);
+
+	kfree(pdata->mii_bus_id);
+
+	free_netdev(netdev);
+
+	DBGPR("<--xgbe_remove\n");
+
+	return 0;
+}
+
+#ifdef CONFIG_PM
+static int xgbe_suspend(struct device *dev)
+{
+	struct net_device *netdev = dev_get_drvdata(dev);
+	int ret;
+
+	DBGPR("-->xgbe_suspend\n");
+
+	if (!netif_running(netdev)) {
+		DBGPR("<--xgbe_dev_suspend\n");
+		return -EINVAL;
+	}
+
+	ret = xgbe_a0_powerdown(netdev, XGMAC_DRIVER_CONTEXT);
+
+	DBGPR("<--xgbe_suspend\n");
+
+	return ret;
+}
+
+static int xgbe_resume(struct device *dev)
+{
+	struct net_device *netdev = dev_get_drvdata(dev);
+	int ret;
+
+	DBGPR("-->xgbe_resume\n");
+
+	if (!netif_running(netdev)) {
+		DBGPR("<--xgbe_dev_resume\n");
+		return -EINVAL;
+	}
+
+	ret = xgbe_a0_powerup(netdev, XGMAC_DRIVER_CONTEXT);
+
+	DBGPR("<--xgbe_resume\n");
+
+	return ret;
+}
+#endif /* CONFIG_PM */
+
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id xgbe_a0_acpi_match[] = {
+	{ "AMDI8000", 0 },
+	{},
+};
+
+MODULE_DEVICE_TABLE(acpi, xgbe_a0_acpi_match);
+#endif
+
+#ifdef CONFIG_OF
+static const struct of_device_id xgbe_a0_of_match[] = {
+	{ .compatible = "amd,xgbe-seattle-v0a", },
+	{},
+};
+
+MODULE_DEVICE_TABLE(of, xgbe_a0_of_match);
+#endif
+
+static SIMPLE_DEV_PM_OPS(xgbe_pm_ops, xgbe_suspend, xgbe_resume);
+
+static struct platform_driver xgbe_a0_driver = {
+	.driver = {
+		.name = "amd-xgbe-a0",
+#ifdef CONFIG_ACPI
+		.acpi_match_table = xgbe_a0_acpi_match,
+#endif
+#ifdef CONFIG_OF
+		.of_match_table = xgbe_a0_of_match,
+#endif
+		.pm = &xgbe_pm_ops,
+	},
+	.probe = xgbe_probe,
+	.remove = xgbe_remove,
+};
+
+module_platform_driver(xgbe_a0_driver);
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe-a0/xgbe-mdio.c
new file mode 100644
index 0000000..b84d048
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe-mdio.c
@@ -0,0 +1,312 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/module.h>
+#include <linux/kmod.h>
+#include <linux/mdio.h>
+#include <linux/phy.h>
+#include <linux/of.h>
+
+#include "xgbe.h"
+#include "xgbe-common.h"
+
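+/* mii_bus accessors: no Clause 22 PHYs live on this bus (phy_mask is set
+ * to ~0 at registration), so reads and writes are forwarded straight to
+ * the hardware MMD register helpers.
+ */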
+static int xgbe_mdio_read(struct mii_bus *mii, int prtad, int mmd_reg)
+{
+	struct xgbe_prv_data *pdata = mii->priv;
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	int mmd_data;
+
+	DBGPR_MDIO("-->xgbe_mdio_read: prtad=%#x mmd_reg=%#x\n",
+		   prtad, mmd_reg);
+
+	mmd_data = hw_if->read_mmd_regs(pdata, prtad, mmd_reg);
+
+	DBGPR_MDIO("<--xgbe_mdio_read: mmd_data=%#x\n", mmd_data);
+
+	return mmd_data;
+}
+
+static int xgbe_mdio_write(struct mii_bus *mii, int prtad, int mmd_reg,
+			   u16 mmd_val)
+{
+	struct xgbe_prv_data *pdata = mii->priv;
+	struct xgbe_hw_if *hw_if = &pdata->hw_if;
+	int mmd_data = mmd_val;
+
+	DBGPR_MDIO("-->xgbe_mdio_write: prtad=%#x mmd_reg=%#x mmd_data=%#x\n",
+		   prtad, mmd_reg, mmd_data);
+
+	hw_if->write_mmd_regs(pdata, prtad, mmd_reg, mmd_data);
+
+	DBGPR_MDIO("<--xgbe_mdio_write\n");
+
+	return 0;
+}
+
+void xgbe_a0_dump_phy_registers(struct xgbe_prv_data *pdata)
+{
+	struct device *dev = pdata->dev;
+	struct phy_device *phydev = pdata->mii->phy_map[XGBE_PRTAD];
+	int i;
+
+	dev_alert(dev, "\n************* PHY Reg dump **********************\n");
+
+	dev_alert(dev, "PCS Control Reg (%#04x) = %#04x\n", MDIO_CTRL1,
+		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_CTRL1));
+	dev_alert(dev, "PCS Status Reg (%#04x) = %#04x\n", MDIO_STAT1,
+		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1));
+	dev_alert(dev, "Phy Id (PHYS ID 1 %#04x) = %#04x\n", MDIO_DEVID1,
+		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVID1));
+	dev_alert(dev, "Phy Id (PHYS ID 2 %#04x) = %#04x\n", MDIO_DEVID2,
+		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVID2));
+	dev_alert(dev, "Devices in Package (%#04x) = %#04x\n", MDIO_DEVS1,
+		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVS1));
+	dev_alert(dev, "Devices in Package (%#04x) = %#04x\n", MDIO_DEVS2,
+		  XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_DEVS2));
+
+	dev_alert(dev, "Auto-Neg Control Reg (%#04x) = %#04x\n", MDIO_CTRL1,
+		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_CTRL1));
+	dev_alert(dev, "Auto-Neg Status Reg (%#04x) = %#04x\n", MDIO_STAT1,
+		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_STAT1));
+	dev_alert(dev, "Auto-Neg Ad Reg 1 (%#04x) = %#04x\n",
+		  MDIO_AN_ADVERTISE,
+		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE));
+	dev_alert(dev, "Auto-Neg Ad Reg 2 (%#04x) = %#04x\n",
+		  MDIO_AN_ADVERTISE + 1,
+		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1));
+	dev_alert(dev, "Auto-Neg Ad Reg 3 (%#04x) = %#04x\n",
+		  MDIO_AN_ADVERTISE + 2,
+		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2));
+	dev_alert(dev, "Auto-Neg Completion Reg (%#04x) = %#04x\n",
+		  MDIO_AN_COMP_STAT,
+		  XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_AN_COMP_STAT));
+
+	dev_alert(dev, "MMD Device Mask = %#x\n",
+		  phydev->c45_ids.devices_in_package);
+	for (i = 0; i < ARRAY_SIZE(phydev->c45_ids.device_ids); i++)
+		dev_alert(dev, "  MMD %d: ID = %#08x\n", i,
+			  phydev->c45_ids.device_ids[i]);
+
+	dev_alert(dev, "\n*************************************************\n");
+}
+
+int xgbe_a0_mdio_register(struct xgbe_prv_data *pdata)
+{
+	struct mii_bus *mii;
+	struct phy_device *phydev;
+	int ret = 0;
+
+	DBGPR("-->xgbe_a0_mdio_register\n");
+
+	mii = mdiobus_alloc();
+	if (!mii) {
+		dev_err(pdata->dev, "mdiobus_alloc failed\n");
+		return -ENOMEM;
+	}
+
+	/* Register on the MDIO bus (don't probe any PHYs) */
+	mii->name = XGBE_PHY_NAME;
+	mii->read = xgbe_mdio_read;
+	mii->write = xgbe_mdio_write;
+	snprintf(mii->id, sizeof(mii->id), "%s", pdata->mii_bus_id);
+	mii->priv = pdata;
+	mii->phy_mask = ~0;
+	mii->parent = pdata->dev;
+	ret = mdiobus_register(mii);
+	if (ret) {
+		dev_err(pdata->dev, "mdiobus_register failed\n");
+		goto err_mdiobus_alloc;
+	}
+	DBGPR("  mdiobus_register succeeded for %s\n", pdata->mii_bus_id);
+
+	/* Probe the PCS using Clause 45 */
+	phydev = get_phy_device(mii, XGBE_PRTAD, true);
+	if (IS_ERR(phydev) || !phydev ||
+	    !phydev->c45_ids.device_ids[MDIO_MMD_PCS]) {
+		dev_err(pdata->dev, "get_phy_device failed\n");
+		ret = IS_ERR(phydev) ? PTR_ERR(phydev) : -ENOLINK;
+		goto err_mdiobus_register;
+	}
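+	/* Load the matching PHY driver via its Clause 45 device-ID module alias */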
+	request_module(MDIO_MODULE_PREFIX MDIO_ID_FMT,
+		       MDIO_ID_ARGS(phydev->c45_ids.device_ids[MDIO_MMD_PCS]));
+
+	ret = phy_device_register(phydev);
+	if (ret) {
+		dev_err(pdata->dev, "phy_device_register failed\n");
+		goto err_phy_device;
+	}
+	if (!phydev->dev.driver) {
+		dev_err(pdata->dev, "phy driver probe failed\n");
+		ret = -EIO;
+		goto err_phy_device;
+	}
+
+	/* Add a reference to the PHY driver so it can't be unloaded */
+	pdata->phy_module = phydev->dev.driver->owner;
+	if (!try_module_get(pdata->phy_module)) {
+		dev_err(pdata->dev, "try_module_get failed\n");
+		ret = -EIO;
+		goto err_phy_device;
+	}
+
+	pdata->mii = mii;
+	pdata->mdio_mmd = MDIO_MMD_PCS;
+
+	phydev->autoneg = pdata->default_autoneg;
+	if (phydev->autoneg == AUTONEG_DISABLE) {
+		phydev->speed = pdata->default_speed;
+		phydev->duplex = DUPLEX_FULL;
+
+		phydev->advertising &= ~ADVERTISED_Autoneg;
+	}
+
+	pdata->phydev = phydev;
+
+	DBGPHY_REGS(pdata);
+
+	DBGPR("<--xgbe_a0_mdio_register\n");
+
+	return 0;
+
+err_phy_device:
+	phy_device_free(phydev);
+
+err_mdiobus_register:
+	mdiobus_unregister(mii);
+
+err_mdiobus_alloc:
+	mdiobus_free(mii);
+
+	return ret;
+}
+
+void xgbe_a0_mdio_unregister(struct xgbe_prv_data *pdata)
+{
+	DBGPR("-->xgbe_a0_mdio_unregister\n");
+
+	pdata->phydev = NULL;
+
+	module_put(pdata->phy_module);
+	pdata->phy_module = NULL;
+
+	mdiobus_unregister(pdata->mii);
+	pdata->mii->priv = NULL;
+
+	mdiobus_free(pdata->mii);
+	pdata->mii = NULL;
+
+	DBGPR("<--xgbe_a0_mdio_unregister\n");
+}
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe-ptp.c b/drivers/net/ethernet/amd/xgbe-a0/xgbe-ptp.c
new file mode 100644
index 0000000..c53c7b2
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe-ptp.c
@@ -0,0 +1,284 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/clk.h>
+#include <linux/clocksource.h>
+#include <linux/ptp_clock_kernel.h>
+#include <linux/net_tstamp.h>
+
+#include "xgbe.h"
+#include "xgbe-common.h"
+
+static cycle_t xgbe_cc_read(const struct cyclecounter *cc)
+{
+	struct xgbe_prv_data *pdata = container_of(cc,
+						   struct xgbe_prv_data,
+						   tstamp_cc);
+	u64 nsec;
+
+	nsec = pdata->hw_if.get_tstamp_time(pdata);
+
+	return nsec;
+}
+
+static int xgbe_adjfreq(struct ptp_clock_info *info, s32 delta)
+{
+	struct xgbe_prv_data *pdata = container_of(info,
+						   struct xgbe_prv_data,
+						   ptp_clock_info);
+	unsigned long flags;
+	u64 adjust;
+	u32 addend, diff;
+	unsigned int neg_adjust = 0;
+
+	if (delta < 0) {
+		neg_adjust = 1;
+		delta = -delta;
+	}
+
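+	/* Scale the base addend by |delta| in parts per billion:
+	 *   diff = addend * |delta| / 10^9
+	 */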
+	adjust = pdata->tstamp_addend;
+	adjust *= delta;
+	diff = div_u64(adjust, 1000000000UL);
+
+	addend = (neg_adjust) ? pdata->tstamp_addend - diff :
+				pdata->tstamp_addend + diff;
+
+	spin_lock_irqsave(&pdata->tstamp_lock, flags);
+
+	pdata->hw_if.update_tstamp_addend(pdata, addend);
+
+	spin_unlock_irqrestore(&pdata->tstamp_lock, flags);
+
+	return 0;
+}
+
+static int xgbe_adjtime(struct ptp_clock_info *info, s64 delta)
+{
+	struct xgbe_prv_data *pdata = container_of(info,
+						   struct xgbe_prv_data,
+						   ptp_clock_info);
+	unsigned long flags;
+	u64 nsec;
+
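+	/* Apply the offset purely in software: re-seed the timecounter at
+	 * the shifted time; the hardware counter is left untouched.
+	 */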
+	spin_lock_irqsave(&pdata->tstamp_lock, flags);
+
+	nsec = timecounter_read(&pdata->tstamp_tc);
+
+	nsec += delta;
+	timecounter_init(&pdata->tstamp_tc, &pdata->tstamp_cc, nsec);
+
+	spin_unlock_irqrestore(&pdata->tstamp_lock, flags);
+
+	return 0;
+}
+
+static int xgbe_gettime(struct ptp_clock_info *info, struct timespec *ts)
+{
+	struct xgbe_prv_data *pdata = container_of(info,
+						   struct xgbe_prv_data,
+						   ptp_clock_info);
+	unsigned long flags;
+	u64 nsec;
+
+	spin_lock_irqsave(&pdata->tstamp_lock, flags);
+
+	nsec = timecounter_read(&pdata->tstamp_tc);
+
+	spin_unlock_irqrestore(&pdata->tstamp_lock, flags);
+
+	*ts = ns_to_timespec(nsec);
+
+	return 0;
+}
+
+static int xgbe_settime(struct ptp_clock_info *info, const struct timespec *ts)
+{
+	struct xgbe_prv_data *pdata = container_of(info,
+						   struct xgbe_prv_data,
+						   ptp_clock_info);
+	unsigned long flags;
+	u64 nsec;
+
+	nsec = timespec_to_ns(ts);
+
+	spin_lock_irqsave(&pdata->tstamp_lock, flags);
+
+	timecounter_init(&pdata->tstamp_tc, &pdata->tstamp_cc, nsec);
+
+	spin_unlock_irqrestore(&pdata->tstamp_lock, flags);
+
+	return 0;
+}
+
+static int xgbe_enable(struct ptp_clock_info *info,
+		       struct ptp_clock_request *request, int on)
+{
+	return -EOPNOTSUPP;
+}
+
+void xgbe_a0_ptp_register(struct xgbe_prv_data *pdata)
+{
+	struct ptp_clock_info *info = &pdata->ptp_clock_info;
+	struct ptp_clock *clock;
+	struct cyclecounter *cc = &pdata->tstamp_cc;
+	u64 dividend;
+
+	snprintf(info->name, sizeof(info->name), "%s",
+		 netdev_name(pdata->netdev));
+	info->owner = THIS_MODULE;
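+	/* The PTP core rejects adjfreq requests beyond +/-max_adj ppb;
+	 * bound it by the PTP reference clock rate.
+	 */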
+	info->max_adj = pdata->ptpclk_rate;
+	info->adjfreq = xgbe_adjfreq;
+	info->adjtime = xgbe_adjtime;
+	info->gettime = xgbe_gettime;
+	info->settime = xgbe_settime;
+	info->enable = xgbe_enable;
+
+	clock = ptp_clock_register(info, pdata->dev);
+	if (IS_ERR(clock)) {
+		dev_err(pdata->dev, "ptp_clock_register failed\n");
+		return;
+	}
+
+	pdata->ptp_clock = clock;
+
+	/* Calculate the addend:
+	 *   addend = 2^32 / (PTP ref clock / 50MHz)
+	 *          = (2^32 * 50MHz) / PTP ref clock
+	 */
+	dividend = 50000000;
+	dividend <<= 32;
+	pdata->tstamp_addend = div_u64(dividend, pdata->ptpclk_rate);
+
+	/* Setup the timecounter */
+	cc->read = xgbe_cc_read;
+	cc->mask = CLOCKSOURCE_MASK(64);
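+	/* Identity mult/shift: xgbe_cc_read() already returns nanoseconds */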
+	cc->mult = 1;
+	cc->shift = 0;
+
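+	/* Seed the PTP time from the current wall-clock time */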
+	timecounter_init(&pdata->tstamp_tc, &pdata->tstamp_cc,
+			 ktime_to_ns(ktime_get_real()));
+
+	/* Disable all timestamping to start */
+	XGMAC_IOWRITE(pdata, MAC_TCR, 0);
+	pdata->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
+	pdata->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
+}
+
+void xgbe_a0_ptp_unregister(struct xgbe_prv_data *pdata)
+{
+	if (pdata->ptp_clock)
+		ptp_clock_unregister(pdata->ptp_clock);
+}
diff --git a/drivers/net/ethernet/amd/xgbe-a0/xgbe.h b/drivers/net/ethernet/amd/xgbe-a0/xgbe.h
new file mode 100644
index 0000000..dd8500d
--- /dev/null
+++ b/drivers/net/ethernet/amd/xgbe-a0/xgbe.h
@@ -0,0 +1,868 @@
+/*
+ * AMD 10Gb Ethernet driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * This file incorporates work covered by the following copyright and
+ * permission notice:
+ *     The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *     (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *     Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *     and you.
+ *
+ *     The Software IS NOT an item of Licensed Software or Licensed Product
+ *     under any End User Software License Agreement or Agreement for Licensed
+ *     Product with Synopsys or any supplement thereto.  Permission is hereby
+ *     granted, free of charge, to any person obtaining a copy of this software
+ *     annotated with this license and the Software, to deal in the Software
+ *     without restriction, including without limitation the rights to use,
+ *     copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *     of the Software, and to permit persons to whom the Software is furnished
+ *     to do so, subject to the following conditions:
+ *
+ *     The above copyright notice and this permission notice shall be included
+ *     in all copies or substantial portions of the Software.
+ *
+ *     THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *     BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *     TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *     PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *     BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *     CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *     SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *     INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *     CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *     THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __XGBE_H__
+#define __XGBE_H__
+
+#include <linux/dma-mapping.h>
+#include <linux/netdevice.h>
+#include <linux/workqueue.h>
+#include <linux/phy.h>
+#include <linux/if_vlan.h>
+#include <linux/bitops.h>
+#include <linux/ptp_clock_kernel.h>
+#include <linux/clocksource.h>
+#include <linux/net_tstamp.h>
+#include <net/dcbnl.h>
+
+#define XGBE_DRV_NAME		"amd-xgbe"
+#define XGBE_DRV_VERSION	"0.0.0-a"
+#define XGBE_DRV_DESC		"AMD 10 Gigabit Ethernet Driver"
+
+/* Descriptor related defines */
+#define XGBE_TX_DESC_CNT	512
+#define XGBE_TX_DESC_MIN_FREE	(XGBE_TX_DESC_CNT >> 3)
+#define XGBE_TX_DESC_MAX_PROC	(XGBE_TX_DESC_CNT >> 1)
+#define XGBE_RX_DESC_CNT	512
+
+#define XGBE_TX_MAX_BUF_SIZE	(0x3fff & ~(64 - 1))
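+/* i.e. 16320: the 14-bit maximum (0x3fff) rounded down to 64-byte alignment */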
+
+/* Descriptors required for maximum contiguous TSO/GSO packet */
+#define XGBE_TX_MAX_SPLIT	((GSO_MAX_SIZE / XGBE_TX_MAX_BUF_SIZE) + 1)
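+/* e.g. with the default GSO_MAX_SIZE of 65536: 65536 / 16320 + 1 = 5 descriptors */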
+
+/* Maximum possible descriptors needed for an SKB:
+ * - Maximum number of SKB frags
+ * - Maximum descriptors for contiguous TSO/GSO packet
+ * - Possible context descriptor
+ * - Possible TSO header descriptor
+ */
+#define XGBE_TX_MAX_DESCS	(MAX_SKB_FRAGS + XGBE_TX_MAX_SPLIT + 2)
+
+#define XGBE_RX_MIN_BUF_SIZE	(ETH_FRAME_LEN + ETH_FCS_LEN + VLAN_HLEN)
+#define XGBE_RX_BUF_ALIGN	64
+#define XGBE_SKB_ALLOC_SIZE	256
+#define XGBE_SPH_HDSMS_SIZE	2	/* Keep in sync with SKB_ALLOC_SIZE */
+
+#define XGBE_MAX_DMA_CHANNELS	16
+#define XGBE_MAX_QUEUES		16
+#define XGBE_DMA_STOP_TIMEOUT	5
+
+/* DMA cache settings - Outer shareable, write-back, write-allocate */
+#define XGBE_DMA_OS_AXDOMAIN	0x2
+#define XGBE_DMA_OS_ARCACHE	0xb
+#define XGBE_DMA_OS_AWCACHE	0xf
+
+/* DMA cache settings - System, no caches used */
+#define XGBE_DMA_SYS_AXDOMAIN	0x3
+#define XGBE_DMA_SYS_ARCACHE	0x0
+#define XGBE_DMA_SYS_AWCACHE	0x0
+
+#define XGBE_DMA_INTERRUPT_MASK	0x31c7
+
+#define XGMAC_MIN_PACKET	60
+#define XGMAC_STD_PACKET_MTU	1500
+#define XGMAC_MAX_STD_PACKET	1518
+#define XGMAC_JUMBO_PACKET_MTU	9000
+#define XGMAC_MAX_JUMBO_PACKET	9018
+
+/* MDIO bus phy name */
+#define XGBE_PHY_NAME		"amd_xgbe_phy_a0"
+#define XGBE_PRTAD		0
+
+/* Common property names */
+#define XGBE_MAC_ADDR_PROPERTY	"mac-address"
+#define XGBE_PHY_MODE_PROPERTY	"phy-mode"
+#define XGBE_DMA_IRQS_PROPERTY	"amd,per-channel-interrupt"
+
+/* Device-tree clock names */
+#define XGBE_DMA_CLOCK		"dma_clk"
+#define XGBE_PTP_CLOCK		"ptp_clk"
+
+/* ACPI property names */
+#define XGBE_ACPI_DMA_FREQ	"amd,dma-freq"
+#define XGBE_ACPI_PTP_FREQ	"amd,ptp-freq"
+
+/* Timestamp support - values based on 50MHz PTP clock
+ *   50MHz => 20 nsec
+ */
+#define XGBE_TSTAMP_SSINC	20
+#define XGBE_TSTAMP_SNSINC	0
+
+/* Driver PMT macros */
+#define XGMAC_DRIVER_CONTEXT	1
+#define XGMAC_IOCTL_CONTEXT	2
+
+#define XGBE_FIFO_MAX		81920
+#define XGBE_FIFO_SIZE_B(x)	(x)
+#define XGBE_FIFO_SIZE_KB(x)	((x) * 1024)
+
+#define XGBE_TC_MIN_QUANTUM	10
+
+/* Helper macro for descriptor handling
+ *  Always use XGBE_GET_DESC_DATA to access the descriptor data
+ *  since the index is free-running and must be masked with
+ *  (descriptor count - 1) to index the proper descriptor
+ *  data in the ring.
+ */
+#define XGBE_GET_DESC_DATA(_ring, _idx)				\
+	((_ring)->rdata +					\
+	 ((_idx) & ((_ring)->rdesc_count - 1)))
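+/* Example: with rdesc_count = 512, a free-running index of 512 maps back
+ * to rdata[0] and 1023 maps to rdata[511], so cur/dirty never need an
+ * explicit modulo.
+ */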
+
+/* Default coalescing parameters */
+#define XGMAC_INIT_DMA_TX_USECS		50
+#define XGMAC_INIT_DMA_TX_FRAMES	25
+
+#define XGMAC_MAX_DMA_RIWT		0xff
+#define XGMAC_INIT_DMA_RX_USECS		30
+#define XGMAC_INIT_DMA_RX_FRAMES	25
+
+/* Flow control queue count */
+#define XGMAC_MAX_FLOW_CONTROL_QUEUES	8
+
+/* Maximum MAC address hash table size (256 bits = 8 bytes) */
+#define XGBE_MAC_HASH_TABLE_SIZE	8
+
+/* Receive Side Scaling */
+#define XGBE_RSS_HASH_KEY_SIZE		40
+#define XGBE_RSS_MAX_TABLE_SIZE		256
+#define XGBE_RSS_LOOKUP_TABLE_TYPE	0
+#define XGBE_RSS_HASH_KEY_TYPE		1
+
+struct xgbe_prv_data;
+
+struct xgbe_packet_data {
+	struct sk_buff *skb;
+
+	unsigned int attributes;
+
+	unsigned int errors;
+
+	unsigned int rdesc_count;
+	unsigned int length;
+
+	unsigned int header_len;
+	unsigned int tcp_header_len;
+	unsigned int tcp_payload_len;
+	unsigned short mss;
+
+	unsigned short vlan_ctag;
+
+	u64 rx_tstamp;
+
+	u32 rss_hash;
+	enum pkt_hash_types rss_hash_type;
+
+	unsigned int tx_packets;
+	unsigned int tx_bytes;
+};
+
+/* Common Rx and Tx descriptor mapping */
+struct xgbe_ring_desc {
+	__le32 desc0;
+	__le32 desc1;
+	__le32 desc2;
+	__le32 desc3;
+};
+
+/* Page allocation related values */
+struct xgbe_page_alloc {
+	struct page *pages;
+	unsigned int pages_len;
+	unsigned int pages_offset;
+
+	dma_addr_t pages_dma;
+};
+
+/* Ring entry buffer data */
+struct xgbe_buffer_data {
+	struct xgbe_page_alloc pa;
+	struct xgbe_page_alloc pa_unmap;
+
+	dma_addr_t dma;
+	unsigned int dma_len;
+};
+
+/* Tx-related ring data */
+struct xgbe_tx_ring_data {
+	unsigned int packets;		/* BQL packet count */
+	unsigned int bytes;		/* BQL byte count */
+};
+
+/* Rx-related ring data */
+struct xgbe_rx_ring_data {
+	struct xgbe_buffer_data hdr;	/* Header locations */
+	struct xgbe_buffer_data buf;	/* Payload locations */
+
+	unsigned short hdr_len;		/* Length of received header */
+	unsigned short len;		/* Length of received packet */
+};
+
+/* Structure used to hold information related to the descriptor
+ * and the packet associated with the descriptor (always use
+ * the XGBE_GET_DESC_DATA macro to access this data from the ring)
+ */
+struct xgbe_ring_data {
+	struct xgbe_ring_desc *rdesc;	/* Virtual address of descriptor */
+	dma_addr_t rdesc_dma;		/* DMA address of descriptor */
+
+	struct sk_buff *skb;		/* Virtual address of SKB */
+	dma_addr_t skb_dma;		/* DMA address of SKB data */
+	unsigned int skb_dma_len;	/* Length of SKB DMA area */
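+	/* XGMAC_DRIVER_CONTEXT: power-down requested by the driver itself
+	 * (system suspend), as opposed to an ioctl request
+	 */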
+
+	struct xgbe_tx_ring_data tx;	/* Tx-related data */
+	struct xgbe_rx_ring_data rx;	/* Rx-related data */
+
+	unsigned int interrupt;		/* Interrupt indicator */
+
+	unsigned int mapped_as_page;
+
+	/* Incomplete receive save location.  If the budget is exhausted
+	 * or the last descriptor (last normal descriptor or a following
+	 * context descriptor) has not been DMA'd yet, the current state
+	 * of the receive processing needs to be saved.
+	 */
+	unsigned int state_saved;
+	struct {
+		unsigned int incomplete;
+		unsigned int context_next;
+		struct sk_buff *skb;
+		unsigned int len;
+		unsigned int error;
+	} state;
+};
+
+struct xgbe_ring {
+	/* Ring lock - used just for TX rings at the moment */
+	spinlock_t lock;
+
+	/* Per packet related information */
+	struct xgbe_packet_data packet_data;
+
+	/* Virtual/DMA addresses and count of allocated descriptor memory */
+	struct xgbe_ring_desc *rdesc;
+	dma_addr_t rdesc_dma;
+	unsigned int rdesc_count;
+
+	/* Array of descriptor data corresponding to the descriptor memory
+	 * (always use the XGBE_GET_DESC_DATA macro to access this data)
+	 */
+	struct xgbe_ring_data *rdata;
+
+	/* Page allocation for RX buffers */
+	struct xgbe_page_alloc rx_hdr_pa;
+	struct xgbe_page_alloc rx_buf_pa;
+
+	/* Ring index values
+	 *  cur   - Tx: index of descriptor to be used for current transfer
+	 *          Rx: index of descriptor to check for packet availability
+	 *  dirty - Tx: index of descriptor to check for transfer complete
+	 *          Rx: index of descriptor to check for buffer reallocation
+	 */
+	unsigned int cur;
+	unsigned int dirty;
+
+	/* Coalesce frame count used for interrupt bit setting */
+	unsigned int coalesce_count;
+
+	union {
+		struct {
+			unsigned int queue_stopped;
+			unsigned int xmit_more;
+			unsigned short cur_mss;
+			unsigned short cur_vlan_ctag;
+		} tx;
+	};
+} ____cacheline_aligned;
+
+/* Structure used to describe the descriptor rings associated with
+ * a DMA channel.
+ */
+struct xgbe_channel {
+	char name[16];
+
+	/* Address of private data area for device */
+	struct xgbe_prv_data *pdata;
+
+	/* Queue index and base address of queue's DMA registers */
+	unsigned int queue_index;
+	void __iomem *dma_regs;
+
+	/* Per-channel interrupt (IRQ) number */
+	int dma_irq;
+	char dma_irq_name[IFNAMSIZ + 32];
+
+	/* Netdev related settings */
+	struct napi_struct napi;
+
+	unsigned int saved_ier;
+
+	unsigned int tx_timer_active;
+	struct hrtimer tx_timer;
+
+	struct xgbe_ring *tx_ring;
+	struct xgbe_ring *rx_ring;
+} ____cacheline_aligned;
+
+enum xgbe_int {
+	XGMAC_INT_DMA_CH_SR_TI,
+	XGMAC_INT_DMA_CH_SR_TPS,
+	XGMAC_INT_DMA_CH_SR_TBU,
+	XGMAC_INT_DMA_CH_SR_RI,
+	XGMAC_INT_DMA_CH_SR_RBU,
+	XGMAC_INT_DMA_CH_SR_RPS,
+	XGMAC_INT_DMA_CH_SR_TI_RI,
+	XGMAC_INT_DMA_CH_SR_FBE,
+	XGMAC_INT_DMA_ALL,
+};
+
+enum xgbe_int_state {
+	XGMAC_INT_STATE_SAVE,
+	XGMAC_INT_STATE_RESTORE,
+};
+
+enum xgbe_mtl_fifo_size {
+	XGMAC_MTL_FIFO_SIZE_256  = 0x00,
+	XGMAC_MTL_FIFO_SIZE_512  = 0x01,
+	XGMAC_MTL_FIFO_SIZE_1K   = 0x03,
+	XGMAC_MTL_FIFO_SIZE_2K   = 0x07,
+	XGMAC_MTL_FIFO_SIZE_4K   = 0x0f,
+	XGMAC_MTL_FIFO_SIZE_8K   = 0x1f,
+	XGMAC_MTL_FIFO_SIZE_16K  = 0x3f,
+	XGMAC_MTL_FIFO_SIZE_32K  = 0x7f,
+	XGMAC_MTL_FIFO_SIZE_64K  = 0xff,
+	XGMAC_MTL_FIFO_SIZE_128K = 0x1ff,
+	XGMAC_MTL_FIFO_SIZE_256K = 0x3ff,
+};
+
+struct xgbe_mmc_stats {
+	/* Tx Stats */
+	u64 txoctetcount_gb;
+	u64 txframecount_gb;
+	u64 txbroadcastframes_g;
+	u64 txmulticastframes_g;
+	u64 tx64octets_gb;
+	u64 tx65to127octets_gb;
+	u64 tx128to255octets_gb;
+	u64 tx256to511octets_gb;
+	u64 tx512to1023octets_gb;
+	u64 tx1024tomaxoctets_gb;
+	u64 txunicastframes_gb;
+	u64 txmulticastframes_gb;
+	u64 txbroadcastframes_gb;
+	u64 txunderflowerror;
+	u64 txoctetcount_g;
+	u64 txframecount_g;
+	u64 txpauseframes;
+	u64 txvlanframes_g;
+
+	/* Rx Stats */
+	u64 rxframecount_gb;
+	u64 rxoctetcount_gb;
+	u64 rxoctetcount_g;
+	u64 rxbroadcastframes_g;
+	u64 rxmulticastframes_g;
+	u64 rxcrcerror;
+	u64 rxrunterror;
+	u64 rxjabbererror;
+	u64 rxundersize_g;
+	u64 rxoversize_g;
+	u64 rx64octets_gb;
+	u64 rx65to127octets_gb;
+	u64 rx128to255octets_gb;
+	u64 rx256to511octets_gb;
+	u64 rx512to1023octets_gb;
+	u64 rx1024tomaxoctets_gb;
+	u64 rxunicastframes_g;
+	u64 rxlengtherror;
+	u64 rxoutofrangetype;
+	u64 rxpauseframes;
+	u64 rxfifooverflow;
+	u64 rxvlanframes_gb;
+	u64 rxwatchdogerror;
+};
+
+struct xgbe_hw_if {
+	int (*tx_complete)(struct xgbe_ring_desc *);
+
+	int (*set_promiscuous_mode)(struct xgbe_prv_data *, unsigned int);
+	int (*set_all_multicast_mode)(struct xgbe_prv_data *, unsigned int);
+	int (*add_mac_addresses)(struct xgbe_prv_data *);
+	int (*set_mac_address)(struct xgbe_prv_data *, u8 *addr);
+
+	int (*enable_rx_csum)(struct xgbe_prv_data *);
+	int (*disable_rx_csum)(struct xgbe_prv_data *);
+
+	int (*enable_rx_vlan_stripping)(struct xgbe_prv_data *);
+	int (*disable_rx_vlan_stripping)(struct xgbe_prv_data *);
+	int (*enable_rx_vlan_filtering)(struct xgbe_prv_data *);
+	int (*disable_rx_vlan_filtering)(struct xgbe_prv_data *);
+	int (*update_vlan_hash_table)(struct xgbe_prv_data *);
+
+	int (*read_mmd_regs)(struct xgbe_prv_data *, int, int);
+	void (*write_mmd_regs)(struct xgbe_prv_data *, int, int, int);
+	int (*set_gmii_speed)(struct xgbe_prv_data *);
+	int (*set_gmii_2500_speed)(struct xgbe_prv_data *);
+	int (*set_xgmii_speed)(struct xgbe_prv_data *);
+
+	void (*enable_tx)(struct xgbe_prv_data *);
+	void (*disable_tx)(struct xgbe_prv_data *);
+	void (*enable_rx)(struct xgbe_prv_data *);
+	void (*disable_rx)(struct xgbe_prv_data *);
+
+	void (*powerup_tx)(struct xgbe_prv_data *);
+	void (*powerdown_tx)(struct xgbe_prv_data *);
+	void (*powerup_rx)(struct xgbe_prv_data *);
+	void (*powerdown_rx)(struct xgbe_prv_data *);
+
+	int (*init)(struct xgbe_prv_data *);
+	int (*exit)(struct xgbe_prv_data *);
+
+	int (*enable_int)(struct xgbe_channel *, enum xgbe_int);
+	int (*disable_int)(struct xgbe_channel *, enum xgbe_int);
+	void (*dev_xmit)(struct xgbe_channel *);
+	int (*dev_read)(struct xgbe_channel *);
+	void (*tx_desc_init)(struct xgbe_channel *);
+	void (*rx_desc_init)(struct xgbe_channel *);
+	void (*rx_desc_reset)(struct xgbe_ring_data *);
+	void (*tx_desc_reset)(struct xgbe_ring_data *);
+	int (*is_last_desc)(struct xgbe_ring_desc *);
+	int (*is_context_desc)(struct xgbe_ring_desc *);
+	void (*tx_start_xmit)(struct xgbe_channel *, struct xgbe_ring *);
+
+	/* For FLOW ctrl */
+	int (*config_tx_flow_control)(struct xgbe_prv_data *);
+	int (*config_rx_flow_control)(struct xgbe_prv_data *);
+
+	/* For RX coalescing */
+	int (*config_rx_coalesce)(struct xgbe_prv_data *);
+	int (*config_tx_coalesce)(struct xgbe_prv_data *);
+	unsigned int (*usec_to_riwt)(struct xgbe_prv_data *, unsigned int);
+	unsigned int (*riwt_to_usec)(struct xgbe_prv_data *, unsigned int);
+
+	/* For RX and TX threshold config */
+	int (*config_rx_threshold)(struct xgbe_prv_data *, unsigned int);
+	int (*config_tx_threshold)(struct xgbe_prv_data *, unsigned int);
+
+	/* For RX and TX Store and Forward Mode config */
+	int (*config_rsf_mode)(struct xgbe_prv_data *, unsigned int);
+	int (*config_tsf_mode)(struct xgbe_prv_data *, unsigned int);
+
+	/* For TX DMA Operate on Second Frame config */
+	int (*config_osp_mode)(struct xgbe_prv_data *);
+
+	/* For RX and TX PBL config */
+	int (*config_rx_pbl_val)(struct xgbe_prv_data *);
+	int (*get_rx_pbl_val)(struct xgbe_prv_data *);
+	int (*config_tx_pbl_val)(struct xgbe_prv_data *);
+	int (*get_tx_pbl_val)(struct xgbe_prv_data *);
+	int (*config_pblx8)(struct xgbe_prv_data *);
+
+	/* For MMC statistics */
+	void (*rx_mmc_int)(struct xgbe_prv_data *);
+	void (*tx_mmc_int)(struct xgbe_prv_data *);
+	void (*read_mmc_stats)(struct xgbe_prv_data *);
+
+	/* For Timestamp config */
+	int (*config_tstamp)(struct xgbe_prv_data *, unsigned int);
+	void (*update_tstamp_addend)(struct xgbe_prv_data *, unsigned int);
+	void (*set_tstamp_time)(struct xgbe_prv_data *, unsigned int sec,
+				unsigned int nsec);
+	u64 (*get_tstamp_time)(struct xgbe_prv_data *);
+	u64 (*get_tx_tstamp)(struct xgbe_prv_data *);
+
+	/* For Data Center Bridging config */
+	void (*config_dcb_tc)(struct xgbe_prv_data *);
+	void (*config_dcb_pfc)(struct xgbe_prv_data *);
+
+	/* For Receive Side Scaling */
+	int (*enable_rss)(struct xgbe_prv_data *);
+	int (*disable_rss)(struct xgbe_prv_data *);
+	int (*set_rss_hash_key)(struct xgbe_prv_data *, const u8 *);
+	int (*set_rss_lookup_table)(struct xgbe_prv_data *, const u32 *);
+};
+
+struct xgbe_desc_if {
+	int (*alloc_ring_resources)(struct xgbe_prv_data *);
+	void (*free_ring_resources)(struct xgbe_prv_data *);
+	int (*map_tx_skb)(struct xgbe_channel *, struct sk_buff *);
+	int (*map_rx_buffer)(struct xgbe_prv_data *, struct xgbe_ring *,
+			     struct xgbe_ring_data *);
+	void (*unmap_rdata)(struct xgbe_prv_data *, struct xgbe_ring_data *);
+	void (*wrapper_tx_desc_init)(struct xgbe_prv_data *);
+	void (*wrapper_rx_desc_init)(struct xgbe_prv_data *);
+};
+
+/* This structure contains flags that indicate what hardware features
+ * or configurations are present in the device.
+ */
+struct xgbe_hw_features {
+	/* HW Version */
+	unsigned int version;
+
+	/* HW Feature Register0 */
+	unsigned int gmii;		/* 1000 Mbps support */
+	unsigned int vlhash;		/* VLAN Hash Filter */
+	unsigned int sma;		/* SMA(MDIO) Interface */
+	unsigned int rwk;		/* PMT remote wake-up packet */
+	unsigned int mgk;		/* PMT magic packet */
+	unsigned int mmc;		/* RMON module */
+	unsigned int aoe;		/* ARP Offload */
+	unsigned int ts;		/* IEEE 1588-2008 Advanced Timestamp */
+	unsigned int eee;		/* Energy Efficient Ethernet */
+	unsigned int tx_coe;		/* Tx Checksum Offload */
+	unsigned int rx_coe;		/* Rx Checksum Offload */
+	unsigned int addn_mac;		/* Additional MAC Addresses */
+	unsigned int ts_src;		/* Timestamp Source */
+	unsigned int sa_vlan_ins;	/* Source Address or VLAN Insertion */
+
+	/* HW Feature Register1 */
+	unsigned int rx_fifo_size;	/* MTL Receive FIFO Size */
+	unsigned int tx_fifo_size;	/* MTL Transmit FIFO Size */
+	unsigned int adv_ts_hi;		/* Advanced Timestamping High Word */
+	unsigned int dcb;		/* DCB Feature */
+	unsigned int sph;		/* Split Header Feature */
+	unsigned int tso;		/* TCP Segmentation Offload */
+	unsigned int dma_debug;		/* DMA Debug Registers */
+	unsigned int rss;		/* Receive Side Scaling */
+	unsigned int tc_cnt;		/* Number of Traffic Classes */
+	unsigned int hash_table_size;	/* Hash Table Size */
+	unsigned int l3l4_filter_num;	/* Number of L3-L4 Filters */
+
+	/* HW Feature Register2 */
+	unsigned int rx_q_cnt;		/* Number of MTL Receive Queues */
+	unsigned int tx_q_cnt;		/* Number of MTL Transmit Queues */
+	unsigned int rx_ch_cnt;		/* Number of DMA Receive Channels */
+	unsigned int tx_ch_cnt;		/* Number of DMA Transmit Channels */
+	unsigned int pps_out_num;	/* Number of PPS outputs */
+	unsigned int aux_snap_num;	/* Number of Aux snapshot inputs */
+};
+
+struct xgbe_prv_data {
+	struct net_device *netdev;
+	struct platform_device *pdev;
+	struct acpi_device *adev;
+	struct device *dev;
+
+	/* ACPI or DT flag */
+	unsigned int use_acpi;
+
+	/* XGMAC/XPCS related mmio registers */
+	void __iomem *xgmac_regs;	/* XGMAC CSRs */
+	void __iomem *xpcs_regs;	/* XPCS MMD registers */
+
+	/* Overall device lock */
+	spinlock_t lock;
+
+	/* XPCS indirect addressing mutex */
+	struct mutex xpcs_mutex;
+
+	/* RSS addressing mutex */
+	struct mutex rss_mutex;
+
+	int dev_irq;
+	unsigned int per_channel_irq;
+
+	struct xgbe_hw_if hw_if;
+	struct xgbe_desc_if desc_if;
+
+	/* AXI DMA settings */
+	unsigned int coherent;
+	unsigned int axdomain;
+	unsigned int arcache;
+	unsigned int awcache;
+
+	/* Rings for Tx/Rx on a DMA channel */
+	struct xgbe_channel *channel;
+	unsigned int channel_count;
+	unsigned int tx_ring_count;
+	unsigned int tx_desc_count;
+	unsigned int rx_ring_count;
+	unsigned int rx_desc_count;
+
+	unsigned int tx_q_count;
+	unsigned int rx_q_count;
+
+	/* Tx/Rx common settings */
+	unsigned int pblx8;
+
+	/* Tx settings */
+	unsigned int tx_sf_mode;
+	unsigned int tx_threshold;
+	unsigned int tx_pbl;
+	unsigned int tx_osp_mode;
+
+	/* Rx settings */
+	unsigned int rx_sf_mode;
+	unsigned int rx_threshold;
+	unsigned int rx_pbl;
+
+	/* Tx coalescing settings */
+	unsigned int tx_usecs;
+	unsigned int tx_frames;
+
+	/* Rx coalescing settings */
+	unsigned int rx_riwt;
+	unsigned int rx_frames;
+
+	/* Current Rx buffer size */
+	unsigned int rx_buf_size;
+
+	/* Flow control settings */
+	unsigned int pause_autoneg;
+	unsigned int tx_pause;
+	unsigned int rx_pause;
+
+	/* Receive Side Scaling settings */
+	u8 rss_key[XGBE_RSS_HASH_KEY_SIZE];
+	u32 rss_table[XGBE_RSS_MAX_TABLE_SIZE];
+	u32 rss_options;
+
+	/* MDIO settings */
+	struct module *phy_module;
+	char *mii_bus_id;
+	struct mii_bus *mii;
+	int mdio_mmd;
+	struct phy_device *phydev;
+	int default_autoneg;
+	int default_speed;
+
+	/* Current PHY settings */
+	phy_interface_t phy_mode;
+	int phy_link;
+	int phy_speed;
+	unsigned int phy_tx_pause;
+	unsigned int phy_rx_pause;
+
+	/* Netdev related settings */
+	unsigned char mac_addr[ETH_ALEN];
+	netdev_features_t netdev_features;
+	struct napi_struct napi;
+	struct xgbe_mmc_stats mmc_stats;
+
+	/* Filtering support */
+	unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
+
+	/* Device clocks */
+	struct clk *sysclk;
+	unsigned long sysclk_rate;
+	struct clk *ptpclk;
+	unsigned long ptpclk_rate;
+
+	/* Timestamp support */
+	spinlock_t tstamp_lock;
+	struct ptp_clock_info ptp_clock_info;
+	struct ptp_clock *ptp_clock;
+	struct hwtstamp_config tstamp_config;
+	struct cyclecounter tstamp_cc;
+	struct timecounter tstamp_tc;
+	unsigned int tstamp_addend;
+	struct work_struct tx_tstamp_work;
+	struct sk_buff *tx_tstamp_skb;
+	u64 tx_tstamp;
+
+	/* DCB support */
+	struct ieee_ets *ets;
+	struct ieee_pfc *pfc;
+	unsigned int q2tc_map[XGBE_MAX_QUEUES];
+	unsigned int prio2q_map[IEEE_8021QAZ_MAX_TCS];
+
+	/* Hardware features of the device */
+	struct xgbe_hw_features hw_feat;
+
+	/* Device restart work structure */
+	struct work_struct restart_work;
+
+	/* Keeps track of power mode */
+	unsigned int power_down;
+
+#ifdef CONFIG_DEBUG_FS
+	struct dentry *xgbe_debugfs;
+
+	unsigned int debugfs_xgmac_reg;
+
+	unsigned int debugfs_xpcs_mmd;
+	unsigned int debugfs_xpcs_reg;
+#endif
+};
+
+/* Function prototypes */
+
+void xgbe_a0_init_function_ptrs_dev(struct xgbe_hw_if *);
+void xgbe_a0_init_function_ptrs_desc(struct xgbe_desc_if *);
+struct net_device_ops *xgbe_a0_get_netdev_ops(void);
+struct ethtool_ops *xgbe_a0_get_ethtool_ops(void);
+#ifdef CONFIG_AMD_XGBE_DCB
+const struct dcbnl_rtnl_ops *xgbe_a0_get_dcbnl_ops(void);
+#endif
+
+int xgbe_a0_mdio_register(struct xgbe_prv_data *);
+void xgbe_a0_mdio_unregister(struct xgbe_prv_data *);
+void xgbe_a0_dump_phy_registers(struct xgbe_prv_data *);
+void xgbe_a0_ptp_register(struct xgbe_prv_data *);
+void xgbe_a0_ptp_unregister(struct xgbe_prv_data *);
+void xgbe_a0_dump_tx_desc(struct xgbe_ring *, unsigned int, unsigned int,
+		       unsigned int);
+void xgbe_a0_dump_rx_desc(struct xgbe_ring *, struct xgbe_ring_desc *,
+		       unsigned int);
+void xgbe_a0_print_pkt(struct net_device *, struct sk_buff *, bool);
+void xgbe_a0_get_all_hw_features(struct xgbe_prv_data *);
+int xgbe_a0_powerup(struct net_device *, unsigned int);
+int xgbe_a0_powerdown(struct net_device *, unsigned int);
+void xgbe_a0_init_rx_coalesce(struct xgbe_prv_data *);
+void xgbe_a0_init_tx_coalesce(struct xgbe_prv_data *);
+
+#ifdef CONFIG_DEBUG_FS
+void xgbe_a0_debugfs_init(struct xgbe_prv_data *);
+void xgbe_a0_debugfs_exit(struct xgbe_prv_data *);
+#else
+static inline void xgbe_a0_debugfs_init(struct xgbe_prv_data *pdata) {}
+static inline void xgbe_a0_debugfs_exit(struct xgbe_prv_data *pdata) {}
+#endif /* CONFIG_DEBUG_FS */
+
+/* NOTE: Uncomment for TX and RX DESCRIPTOR DUMP in KERNEL LOG */
+#if 0
+#define XGMAC_ENABLE_TX_DESC_DUMP
+#define XGMAC_ENABLE_RX_DESC_DUMP
+#endif
+
+/* NOTE: Uncomment for TX and RX PACKET DUMP in KERNEL LOG */
+#if 0
+#define XGMAC_ENABLE_TX_PKT_DUMP
+#define XGMAC_ENABLE_RX_PKT_DUMP
+#endif
+
+/* NOTE: Uncomment for function trace log messages in KERNEL LOG */
+#if 0
+#define YDEBUG
+#define YDEBUG_MDIO
+#endif
+
+/* For debug prints */
+#ifdef YDEBUG
+#define DBGPR(x...) pr_alert(x)
+#define DBGPHY_REGS(x...) xgbe_a0_dump_phy_registers(x)
+#else
+#define DBGPR(x...) do { } while (0)
+#define DBGPHY_REGS(x...) do { } while (0)
+#endif
+
+#ifdef YDEBUG_MDIO
+#define DBGPR_MDIO(x...) pr_alert(x)
+#else
+#define DBGPR_MDIO(x...) do { } while (0)
+#endif
+
+#endif
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_hw.c b/drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
index 869d97f..29aad5e 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
@@ -593,12 +593,10 @@ static int xgene_enet_reset(struct xgene_enet_pdata *pdata)
 	if (!xgene_ring_mgr_init(pdata))
 		return -ENODEV;
 
-	if (!efi_enabled(EFI_BOOT)) {
-		clk_prepare_enable(pdata->clk);
-		clk_disable_unprepare(pdata->clk);
-		clk_prepare_enable(pdata->clk);
-		xgene_enet_ecc_init(pdata);
-	}
+	clk_prepare_enable(pdata->clk);
+	clk_disable_unprepare(pdata->clk);
+	clk_prepare_enable(pdata->clk);
+	xgene_enet_ecc_init(pdata);
 	xgene_enet_config_ring_if_assoc(pdata);
 
 	/* Enable auto-incr for scanning */
@@ -676,7 +674,7 @@ static int xgene_enet_phy_connect(struct net_device *ndev)
 
 	phy_dev = pdata->phy_dev;
 
-	if (!phy_dev ||
+	if (phy_dev == NULL ||
 	    phy_connect_direct(ndev, phy_dev, &xgene_enet_adjust_link,
 			       pdata->phy_mode)) {
 		netdev_err(ndev, "Could not connect to PHY\n");
@@ -692,37 +690,23 @@ static int xgene_enet_phy_connect(struct net_device *ndev)
 	return 0;
 }
 
-static int xgene_mdiobus_register(struct xgene_enet_pdata *pdata,
-				  struct mii_bus *mdio)
+#ifdef CONFIG_ACPI
+static int xgene_acpi_mdiobus_register(struct xgene_enet_pdata *pdata,
+	struct mii_bus *mdio)
 {
 	struct device *dev = &pdata->pdev->dev;
-	struct net_device *ndev = pdata->ndev;
 	struct phy_device *phy;
-	struct device_node *child_np;
-	struct device_node *mdio_np = NULL;
-	int ret;
+	int i, ret;
 	u32 phy_id;
 
-	if (dev->of_node) {
-		for_each_child_of_node(dev->of_node, child_np) {
-			if (of_device_is_compatible(child_np,
-						    "apm,xgene-mdio")) {
-				mdio_np = child_np;
-				break;
-			}
-		}
-
-		if (!mdio_np) {
-			netdev_dbg(ndev, "No mdio node in the dts\n");
-			return -ENXIO;
-		}
-
-		return of_mdiobus_register(mdio, mdio_np);
-	}
-
 	/* Mask out all PHYs from auto probing. */
 	mdio->phy_mask = ~0;
 
+	/* No interrupt lines are provided; mark every PHY as polled */
+	if (mdio->irq)
+		for (i = 0; i < PHY_MAX_ADDR; i++)
+			mdio->irq[i] = PHY_POLL;
+
 	/* Register the MDIO bus */
 	ret = mdiobus_register(mdio);
 	if (ret)
@@ -730,8 +714,6 @@ static int xgene_mdiobus_register(struct xgene_enet_pdata *pdata,
 
 	ret = device_property_read_u32(dev, "phy-channel", &phy_id);
 	if (ret)
-		ret = device_property_read_u32(dev, "phy-addr", &phy_id);
-	if (ret)
 		return -EINVAL;
 
 	phy = get_phy_device(mdio, phy_id, true);
@@ -746,13 +728,31 @@ static int xgene_mdiobus_register(struct xgene_enet_pdata *pdata,
 
 	return ret;
 }
+#else
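+/* !CONFIG_ACPI: the DT path registers through of_mdiobus_register() instead */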
+#define xgene_acpi_mdiobus_register(a, b) (-ENODEV)
+#endif
 
 int xgene_enet_mdio_config(struct xgene_enet_pdata *pdata)
 {
 	struct net_device *ndev = pdata->ndev;
+	struct device *dev = &pdata->pdev->dev;
+	struct device_node *child_np;
+	struct device_node *mdio_np = NULL;
 	struct mii_bus *mdio_bus;
 	int ret;
 
+	for_each_child_of_node(dev->of_node, child_np) {
+		if (of_device_is_compatible(child_np, "apm,xgene-mdio")) {
+			mdio_np = child_np;
+			break;
+		}
+	}
+
+	if (dev->of_node && !mdio_np) {
+		netdev_dbg(ndev, "No mdio node in the dts\n");
+		return -ENXIO;
+	}
+
 	mdio_bus = mdiobus_alloc();
 	if (!mdio_bus)
 		return -ENOMEM;
@@ -766,7 +766,10 @@ int xgene_enet_mdio_config(struct xgene_enet_pdata *pdata)
 	mdio_bus->priv = pdata;
 	mdio_bus->parent = &ndev->dev;
 
-	ret = xgene_mdiobus_register(pdata, mdio_bus);
+	if (dev->of_node)
+		ret = of_mdiobus_register(mdio_bus, mdio_np);
+	else
+		ret = xgene_acpi_mdiobus_register(pdata, mdio_bus);
 	if (ret) {
 		netdev_err(ndev, "Failed to register MDIO bus\n");
 		mdiobus_free(mdio_bus);
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
index 44b1537..a4a53a7 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
@@ -24,10 +24,6 @@
 #include "xgene_enet_sgmac.h"
 #include "xgene_enet_xgmac.h"
 
-#define RES_ENET_CSR	0
-#define RES_RING_CSR	1
-#define RES_RING_CMD	2
-
 static void xgene_enet_init_bufpool(struct xgene_enet_desc_ring *buf_pool)
 {
 	struct xgene_enet_raw_desc16 *raw_desc;
@@ -752,40 +748,41 @@ static const struct net_device_ops xgene_ndev_ops = {
 	.ndo_set_mac_address = xgene_enet_set_mac_address,
 };
 
-static int xgene_get_mac_address(struct device *dev,
-				 unsigned char *addr)
+#ifdef CONFIG_ACPI
+static int acpi_get_mac_address(struct device *dev,
+				unsigned char *addr)
 {
 	int ret;
 
-	ret = device_property_read_u8_array(dev, "local-mac-address", addr, 6);
-	if (ret)
-		ret = device_property_read_u8_array(dev, "mac-address",
-						    addr, 6);
+	ret = device_property_read_u8_array(dev, "mac-address", addr, 6);
 	if (ret)
-		return -ENODEV;
+		return 0;
 
-	return ETH_ALEN;
+	return 6;
 }
 
-static int xgene_get_phy_mode(struct device *dev)
+static int acpi_get_phy_mode(struct device *dev)
 {
-	int i, ret;
+	int i, ret, phy_mode;
 	char *modestr;
 
-	ret = device_property_read_string(dev, "phy-connection-type",
-					  (const char **)&modestr);
-	if (ret)
-		ret = device_property_read_string(dev, "phy-mode",
-						  (const char **)&modestr);
+	ret = device_property_read_string(dev, "phy-mode", &modestr);
 	if (ret)
-		return -ENODEV;
+		return -1;
 
+	phy_mode = -1;
 	for (i = 0; i < PHY_INTERFACE_MODE_MAX; i++) {
-		if (!strcasecmp(modestr, phy_modes(i)))
-			return i;
+		if (!strcasecmp(modestr, phy_modes(i))) {
+			phy_mode = i;
+			break;
+		}
 	}
-	return -ENODEV;
+	return phy_mode;
 }
+#else
+#define acpi_get_mac_address(a, b) 0
+#define acpi_get_phy_mode(a) -1
+#endif
 
 static int xgene_enet_get_resources(struct xgene_enet_pdata *pdata)
 {
@@ -794,45 +791,38 @@ static int xgene_enet_get_resources(struct xgene_enet_pdata *pdata)
 	struct device *dev;
 	struct resource *res;
 	void __iomem *base_addr;
+	const char *mac;
 	int ret;
 
 	pdev = pdata->pdev;
 	dev = &pdev->dev;
 	ndev = pdata->ndev;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, RES_ENET_CSR);
-	if (!res) {
-		dev_err(dev, "Resource enet_csr not defined\n");
-		return -ENODEV;
-	}
-	pdata->base_addr = devm_ioremap(dev, res->start, resource_size(res));
-	if (!pdata->base_addr) {
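+	/* DT can name its reg entries; ACPI resources are unnamed, so
+	 * fall back to positional lookup when a named entry is absent.
+	 */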
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "enet_csr");
+	if (!res)
+		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	pdata->base_addr = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pdata->base_addr)) {
 		dev_err(dev, "Unable to retrieve ENET Port CSR region\n");
-		return -ENOMEM;
+		return PTR_ERR(pdata->base_addr);
 	}
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, RES_RING_CSR);
-	if (!res) {
-		dev_err(dev, "Resource ring_csr not defined\n");
-		return -ENODEV;
-	}
-	pdata->ring_csr_addr = devm_ioremap(dev, res->start,
-							resource_size(res));
-	if (!pdata->ring_csr_addr) {
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ring_csr");
+	if (!res)
+		res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+	pdata->ring_csr_addr = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pdata->ring_csr_addr)) {
 		dev_err(dev, "Unable to retrieve ENET Ring CSR region\n");
-		return -ENOMEM;
+		return PTR_ERR(pdata->ring_csr_addr);
 	}
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, RES_RING_CMD);
-	if (!res) {
-		dev_err(dev, "Resource ring_cmd not defined\n");
-		return -ENODEV;
-	}
-	pdata->ring_cmd_addr = devm_ioremap(dev, res->start,
-							resource_size(res));
-	if (!pdata->ring_cmd_addr) {
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ring_cmd");
+	if (!res)
+		res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
+	pdata->ring_cmd_addr = devm_ioremap_resource(dev, res);
+	if (IS_ERR(pdata->ring_cmd_addr)) {
 		dev_err(dev, "Unable to retrieve ENET Ring command region\n");
-		return -ENOMEM;
+		return PTR_ERR(pdata->ring_cmd_addr);
 	}
 
 	ret = platform_get_irq(pdev, 0);
@@ -843,12 +833,16 @@ static int xgene_enet_get_resources(struct xgene_enet_pdata *pdata)
 	}
 	pdata->rx_irq = ret;
 
-	if (xgene_get_mac_address(dev, ndev->dev_addr) != ETH_ALEN)
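+	/* Prefer the DT MAC address, then an ACPI "mac-address" property,
+	 * and finally fall back to a random address.
+	 */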
+	mac = of_get_mac_address(dev->of_node);
+	if (mac)
+		memcpy(ndev->dev_addr, mac, ndev->addr_len);
+	else if (!acpi_get_mac_address(dev, ndev->dev_addr))
 		eth_hw_addr_random(ndev);
-
 	memcpy(ndev->perm_addr, ndev->dev_addr, ndev->addr_len);
 
-	pdata->phy_mode = xgene_get_phy_mode(dev);
+	pdata->phy_mode = of_get_phy_mode(pdev->dev.of_node);
+	if (pdata->phy_mode < 0)
+		pdata->phy_mode = acpi_get_phy_mode(dev);
 	if (pdata->phy_mode < 0) {
 		dev_err(dev, "Unable to get phy-connection-type\n");
 		return pdata->phy_mode;
@@ -862,7 +856,10 @@ static int xgene_enet_get_resources(struct xgene_enet_pdata *pdata)
 
 	pdata->clk = devm_clk_get(&pdev->dev, NULL);
 	if (IS_ERR(pdata->clk)) {
-		/* Firmware may have set up the clock already. */
+		/*
+		 * Not necessarily an error. Firmware may have
+		 * set up the clock already.
+		 */
 		pdata->clk = NULL;
 	}
 
@@ -913,7 +910,7 @@ static int xgene_enet_init_hw(struct xgene_enet_pdata *pdata)
 	pdata->port_ops->cle_bypass(pdata, dst_ring_num, buf_pool->id);
 	pdata->mac_ops->init(pdata);
 
-	return ret;
+	return 0;
 }
 
 static void xgene_enet_setup_ops(struct xgene_enet_pdata *pdata)
@@ -1031,18 +1031,18 @@ MODULE_DEVICE_TABLE(acpi, xgene_enet_acpi_match);
 #endif
 
 #ifdef CONFIG_OF
-static struct of_device_id xgene_enet_of_match[] = {
+static struct of_device_id xgene_enet_match[] = {
 	{.compatible = "apm,xgene-enet",},
 	{},
 };
 
-MODULE_DEVICE_TABLE(of, xgene_enet_of_match);
+MODULE_DEVICE_TABLE(of, xgene_enet_match);
 #endif
 
 static struct platform_driver xgene_enet_driver = {
 	.driver = {
 		   .name = "xgene-enet",
-		   .of_match_table = of_match_ptr(xgene_enet_of_match),
+		   .of_match_table = xgene_enet_match,
 		   .acpi_match_table = ACPI_PTR(xgene_enet_acpi_match),
 	},
 	.probe = xgene_enet_probe,
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.h b/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
index c2d465c..0e06cad 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
@@ -22,10 +22,7 @@
 #ifndef __XGENE_ENET_MAIN_H__
 #define __XGENE_ENET_MAIN_H__
 
-#include <linux/acpi.h>
 #include <linux/clk.h>
-#include <linux/efi.h>
-#include <linux/io.h>
 #include <linux/of_platform.h>
 #include <linux/of_net.h>
 #include <linux/of_mdio.h>
@@ -34,6 +31,7 @@
 #include <linux/prefetch.h>
 #include <linux/if_vlan.h>
 #include <linux/phy.h>
+#include <linux/acpi.h>
 #include "xgene_enet_hw.h"
 
 #define XGENE_DRV_VERSION	"v1.0"
diff --git a/drivers/net/ethernet/smsc/smc91x.c b/drivers/net/ethernet/smsc/smc91x.c
index 88a55f9..944b177 100644
--- a/drivers/net/ethernet/smsc/smc91x.c
+++ b/drivers/net/ethernet/smsc/smc91x.c
@@ -82,6 +82,7 @@ static const char version[] =
 #include <linux/of.h>
 #include <linux/of_device.h>
 #include <linux/of_gpio.h>
+#include <linux/acpi.h>
 
 #include <linux/netdevice.h>
 #include <linux/etherdevice.h>
@@ -2467,6 +2468,14 @@ static struct dev_pm_ops smc_drv_pm_ops = {
 	.resume		= smc_drv_resume,
 };
 
+#ifdef CONFIG_ACPI
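+/* "LNRO0003" is a Linaro-prefixed _HID used by early arm64 reference
+ * firmware for the on-board SMC91x.
+ */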
+static const struct acpi_device_id smc91x_acpi_match[] = {
+	{ "LNRO0003", },
+	{ }
+};
+MODULE_DEVICE_TABLE(acpi, smc91x_acpi_match);
+#endif
+
 static struct platform_driver smc_driver = {
 	.probe		= smc_drv_probe,
 	.remove		= smc_drv_remove,
@@ -2474,6 +2483,7 @@ static struct platform_driver smc_driver = {
 		.name	= CARDNAME,
 		.pm	= &smc_drv_pm_ops,
 		.of_match_table = of_match_ptr(smc91x_match),
+		.acpi_match_table = ACPI_PTR(smc91x_acpi_match),
 	},
 };
 
diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile
index 501ea76..92e7644 100644
--- a/drivers/net/phy/Makefile
+++ b/drivers/net/phy/Makefile
@@ -34,4 +34,5 @@ obj-$(CONFIG_MDIO_BUS_MUX_MMIOREG) += mdio-mux-mmioreg.o
 obj-$(CONFIG_MDIO_SUN4I)	+= mdio-sun4i.o
 obj-$(CONFIG_MDIO_MOXART)	+= mdio-moxart.o
 obj-$(CONFIG_AMD_XGBE_PHY)	+= amd-xgbe-phy.o
+obj-$(CONFIG_AMD_XGBE_PHY)	+= amd-xgbe-phy-a0.o
 obj-$(CONFIG_MDIO_BCM_UNIMAC)	+= mdio-bcm-unimac.o
diff --git a/drivers/net/phy/amd-xgbe-phy-a0.c b/drivers/net/phy/amd-xgbe-phy-a0.c
new file mode 100644
index 0000000..ab6414a
--- /dev/null
+++ b/drivers/net/phy/amd-xgbe-phy-a0.c
@@ -0,0 +1,1829 @@
+/*
+ * AMD 10Gb Ethernet PHY driver
+ *
+ * This file is available to you under your choice of the following two
+ * licenses:
+ *
+ * License 1: GPLv2
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ *
+ * This file is free software; you may copy, redistribute and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ *
+ * License 2: Modified BSD
+ *
+ * Copyright (c) 2014 Advanced Micro Devices, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/unistd.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/workqueue.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+#include <linux/phy.h>
+#include <linux/mdio.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/of_device.h>
+#include <linux/uaccess.h>
+#include <linux/bitops.h>
+#include <linux/property.h>
+#include <linux/acpi.h>
+#include <linux/irq.h>
+
+MODULE_AUTHOR("Tom Lendacky <thomas.lendacky@amd.com>");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_VERSION("0.0.0-a");
+MODULE_DESCRIPTION("AMD 10GbE (amd-xgbe) PHY driver");
+
+#define XGBE_PHY_ID	0x7996ced0
+#define XGBE_PHY_MASK	0xfffffff0
+
+#define XGBE_PHY_SERDES_RETRY		32
+#define XGBE_PHY_CHANNEL_PROPERTY	"amd,serdes-channel"
+#define XGBE_PHY_SPEEDSET_PROPERTY	"amd,speed-set"
+#define XGBE_PHY_BLWC_PROPERTY		"amd,serdes-blwc"
+#define XGBE_PHY_CDR_RATE_PROPERTY	"amd,serdes-cdr-rate"
+#define XGBE_PHY_PQ_SKEW_PROPERTY	"amd,serdes-pq-skew"
+#define XGBE_PHY_TX_AMP_PROPERTY	"amd,serdes-tx-amp"
+
+#define XGBE_PHY_SPEEDS			3
+#define XGBE_PHY_SPEED_1000		0
+#define XGBE_PHY_SPEED_2500		1
+#define XGBE_PHY_SPEED_10000		2
+
+#define XGBE_AN_INT_CMPLT		0x01
+#define XGBE_AN_INC_LINK		0x02
+#define XGBE_AN_PG_RCV			0x04
+#define XGBE_AN_INT_MASK		0x07
+
+#define XNP_MCF_NULL_MESSAGE		0x001
+#define XNP_ACK_PROCESSED		BIT(12)
+#define XNP_MP_FORMATTED		BIT(13)
+#define XNP_NP_EXCHANGE			BIT(15)
+
+#define XGBE_PHY_RATECHANGE_COUNT	500
+
+#define XGBE_PHY_KR_TRAINING_START	0x01
+#define XGBE_PHY_KR_TRAINING_ENABLE	0x02
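+/* 10GBASE-R PMD control bits (802.3 clause 72): bit 0 restarts link
+ * training, bit 1 enables it.
+ */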
+
+#define XGBE_PHY_FEC_ENABLE		0x01
+#define XGBE_PHY_FEC_FORWARD		0x02
+#define XGBE_PHY_FEC_MASK		0x03
+
+#ifndef MDIO_PMA_10GBR_PMD_CTRL
+#define MDIO_PMA_10GBR_PMD_CTRL		0x0096
+#endif
+
+#ifndef MDIO_PMA_10GBR_FEC_ABILITY
+#define MDIO_PMA_10GBR_FEC_ABILITY	0x00aa
+#endif
+
+#ifndef MDIO_PMA_10GBR_FEC_CTRL
+#define MDIO_PMA_10GBR_FEC_CTRL		0x00ab
+#endif
+
+#ifndef MDIO_AN_XNP
+#define MDIO_AN_XNP			0x0016
+#endif
+
+#ifndef MDIO_AN_LPX
+#define MDIO_AN_LPX			0x0019
+#endif
+
+#ifndef MDIO_AN_INTMASK
+#define MDIO_AN_INTMASK			0x8001
+#endif
+
+#ifndef MDIO_AN_INT
+#define MDIO_AN_INT			0x8002
+#endif
+
+#ifndef MDIO_AN_KR_CTRL
+#define MDIO_AN_KR_CTRL			0x8003
+#endif
+
+#ifndef MDIO_CTRL1_SPEED1G
+#define MDIO_CTRL1_SPEED1G		(MDIO_CTRL1_SPEED10G & ~BMCR_SPEED100)
+#endif
+
+#ifndef MDIO_KR_CTRL_PDETECT
+#define MDIO_KR_CTRL_PDETECT		0x01
+#endif
+
+#define GET_BITS(_var, _index, _width)					\
+	(((_var) >> (_index)) & ((0x1 << (_width)) - 1))
+
+#define SET_BITS(_var, _index, _width, _val)				\
+do {									\
+	(_var) &= ~(((0x1 << (_width)) - 1) << (_index));		\
+	(_var) |= (((_val) & ((0x1 << (_width)) - 1)) << (_index));	\
+} while (0)
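+
+/*
+ * Example: with _var = 0x00f0, GET_BITS(_var, 4, 4) extracts 0xf, and
+ * SET_BITS(_var, 4, 4, 0x3) rewrites only that field, leaving 0x0030.
+ */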
+
+#define XCMU_IOREAD(_priv, _reg)					\
+	ioread16((_priv)->cmu_regs + _reg)
+
+#define XCMU_IOWRITE(_priv, _reg, _val)					\
+	iowrite16((_val), (_priv)->cmu_regs + _reg)
+
+#define XRXTX_IOREAD(_priv, _reg)					\
+	ioread16((_priv)->rxtx_regs + _reg)
+
+#define XRXTX_IOREAD_BITS(_priv, _reg, _field)				\
+	GET_BITS(XRXTX_IOREAD((_priv), _reg),				\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH)
+
+#define XRXTX_IOWRITE(_priv, _reg, _val)				\
+	iowrite16((_val), (_priv)->rxtx_regs + _reg)
+
+#define XRXTX_IOWRITE_BITS(_priv, _reg, _field, _val)			\
+do {									\
+	u16 reg_val = XRXTX_IOREAD((_priv), _reg);			\
+	SET_BITS(reg_val,						\
+		 _reg##_##_field##_INDEX,				\
+		 _reg##_##_field##_WIDTH, (_val));			\
+	XRXTX_IOWRITE((_priv), _reg, reg_val);				\
+} while (0)
+
+/* SerDes CMU register offsets */
+#define CMU_REG15			0x003c
+#define CMU_REG16			0x0040
+
+/* SerDes CMU register entry bit positions and sizes */
+#define CMU_REG16_TX_RATE_CHANGE_BASE	15
+#define CMU_REG16_RX_RATE_CHANGE_BASE	14
+#define CMU_REG16_RATE_CHANGE_DECR	2
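+/* Each SerDes channel owns one Tx and one Rx rate-change bit in
+ * CMU_reg16: channel 0 uses bits 15/14, channel 1 bits 13/12, and so
+ * on, i.e. bit = BASE - channel * CMU_REG16_RATE_CHANGE_DECR.
+ */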
+
+/* SerDes RxTx register offsets */
+#define RXTX_REG2			0x0008
+#define RXTX_REG3			0x000c
+#define RXTX_REG5			0x0014
+#define RXTX_REG6			0x0018
+#define RXTX_REG20			0x0050
+#define RXTX_REG53			0x00d4
+#define RXTX_REG114			0x01c8
+#define RXTX_REG115			0x01cc
+#define RXTX_REG142			0x0238
+
+/* SerDes RxTx register entry bit positions and sizes */
+#define RXTX_REG2_RESETB_INDEX			15
+#define RXTX_REG2_RESETB_WIDTH			1
+#define RXTX_REG3_TX_DATA_RATE_INDEX		14
+#define RXTX_REG3_TX_DATA_RATE_WIDTH		2
+#define RXTX_REG3_TX_WORD_MODE_INDEX		11
+#define RXTX_REG3_TX_WORD_MODE_WIDTH		3
+#define RXTX_REG5_TXAMP_CNTL_INDEX		7
+#define RXTX_REG5_TXAMP_CNTL_WIDTH		4
+#define RXTX_REG6_RX_DATA_RATE_INDEX		9
+#define RXTX_REG6_RX_DATA_RATE_WIDTH		2
+#define RXTX_REG6_RX_WORD_MODE_INDEX		11
+#define RXTX_REG6_RX_WORD_MODE_WIDTH		3
+#define RXTX_REG20_BLWC_ENA_INDEX		2
+#define RXTX_REG20_BLWC_ENA_WIDTH		1
+#define RXTX_REG53_RX_PLLSELECT_INDEX		15
+#define RXTX_REG53_RX_PLLSELECT_WIDTH		1
+#define RXTX_REG53_TX_PLLSELECT_INDEX		14
+#define RXTX_REG53_TX_PLLSELECT_WIDTH		1
+#define RXTX_REG53_PI_SPD_SEL_CDR_INDEX		10
+#define RXTX_REG53_PI_SPD_SEL_CDR_WIDTH		4
+#define RXTX_REG114_PQ_REG_INDEX		9
+#define RXTX_REG114_PQ_REG_WIDTH		7
+#define RXTX_REG115_FORCE_LAT_CAL_START_INDEX	2
+#define RXTX_REG115_FORCE_LAT_CAL_START_WIDTH	1
+#define RXTX_REG115_FORCE_SUM_CAL_START_INDEX	1
+#define RXTX_REG115_FORCE_SUM_CAL_START_WIDTH	1
+#define RXTX_REG142_SUM_CALIB_DONE_INDEX	15
+#define RXTX_REG142_SUM_CALIB_DONE_WIDTH	1
+#define RXTX_REG142_SUM_CALIB_ERR_INDEX		14
+#define RXTX_REG142_SUM_CALIB_ERR_WIDTH		1
+#define RXTX_REG142_LAT_CALIB_DONE_INDEX	11
+#define RXTX_REG142_LAT_CALIB_DONE_WIDTH	1
+
+#define RXTX_FULL_RATE				0x0
+#define RXTX_HALF_RATE				0x1
+#define RXTX_FIFTH_RATE				0x3
+#define RXTX_66BIT_WORD				0x7
+#define RXTX_10BIT_WORD				0x1
+#define RXTX_10G_BLWC				0x0
+#define RXTX_1G_BLWC				0x1
+#define RXTX_10G_TX_AMP				0xa
+#define RXTX_1G_TX_AMP				0xf
+#define RXTX_10G_CDR				0x7
+#define RXTX_1G_CDR				0x2
+#define RXTX_10G_PLL				0x1
+#define RXTX_1G_PLL				0x0
+#define RXTX_10G_PQ				0x1e
+#define RXTX_1G_PQ				0xa
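+
+/* 10G runs the SerDes at full rate with 66-bit words; 2.5G uses half
+ * rate and 1G one-fifth rate, both with 10-bit words (see the *_mode
+ * functions below).
+ */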
+
+static DEFINE_SPINLOCK(cmu_lock);
+
+static const u32 amd_xgbe_phy_serdes_blwc[] = {
+	RXTX_1G_BLWC,
+	RXTX_1G_BLWC,
+	RXTX_10G_BLWC,
+};
+
+static const u32 amd_xgbe_phy_serdes_cdr_rate[] = {
+	RXTX_1G_CDR,
+	RXTX_1G_CDR,
+	RXTX_10G_CDR,
+};
+
+static const u32 amd_xgbe_phy_serdes_pq_skew[] = {
+	RXTX_1G_PQ,
+	RXTX_1G_PQ,
+	RXTX_10G_PQ,
+};
+
+static const u32 amd_xgbe_phy_serdes_tx_amp[] = {
+	RXTX_1G_TX_AMP,
+	RXTX_1G_TX_AMP,
+	RXTX_10G_TX_AMP,
+};
+
+enum amd_xgbe_phy_an {
+	AMD_XGBE_AN_READY = 0,
+	AMD_XGBE_AN_PAGE_RECEIVED,
+	AMD_XGBE_AN_INCOMPAT_LINK,
+	AMD_XGBE_AN_COMPLETE,
+	AMD_XGBE_AN_NO_LINK,
+	AMD_XGBE_AN_ERROR,
+};
+
+enum amd_xgbe_phy_rx {
+	AMD_XGBE_RX_BPA = 0,
+	AMD_XGBE_RX_XNP,
+	AMD_XGBE_RX_COMPLETE,
+	AMD_XGBE_RX_ERROR,
+};
+
+enum amd_xgbe_phy_mode {
+	AMD_XGBE_MODE_KR,
+	AMD_XGBE_MODE_KX,
+};
+
+enum amd_xgbe_phy_speedset {
+	AMD_XGBE_PHY_SPEEDSET_1000_10000 = 0,
+	AMD_XGBE_PHY_SPEEDSET_2500_10000,
+};
+
+struct amd_xgbe_phy_priv {
+	struct platform_device *pdev;
+	struct acpi_device *adev;
+	struct device *dev;
+
+	struct phy_device *phydev;
+
+	/* SerDes related mmio resources */
+	struct resource *rxtx_res;
+	struct resource *cmu_res;
+
+	/* SerDes related mmio registers */
+	void __iomem *rxtx_regs;	/* SerDes Rx/Tx CSRs */
+	void __iomem *cmu_regs;		/* SerDes CMU CSRs */
+
+	int an_irq;
+	char an_irq_name[IFNAMSIZ + 32];
+	struct work_struct an_irq_work;
+	unsigned int an_irq_allocated;
+
+	unsigned int serdes_channel;
+	unsigned int speed_set;
+
+	/* Maintain link status for re-starting auto-negotiation */
+	unsigned int link;
+
+	/* SerDes UEFI configurable settings.
+	 *   Switching between modes/speeds requires new values for some
+	 *   SerDes settings.  The values can be supplied as device
+	 *   properties in array format.  The first array entry is for
+	 *   1GbE, second for 2.5GbE and third for 10GbE
+	 */
+	u32 serdes_blwc[XGBE_PHY_SPEEDS];
+	u32 serdes_cdr_rate[XGBE_PHY_SPEEDS];
+	u32 serdes_pq_skew[XGBE_PHY_SPEEDS];
+	u32 serdes_tx_amp[XGBE_PHY_SPEEDS];
+
+	/* Auto-negotiation state machine support */
+	struct mutex an_mutex;
+	enum amd_xgbe_phy_an an_result;
+	enum amd_xgbe_phy_an an_state;
+	enum amd_xgbe_phy_rx kr_state;
+	enum amd_xgbe_phy_rx kx_state;
+	struct work_struct an_work;
+	struct workqueue_struct *an_workqueue;
+	unsigned int an_supported;
+	unsigned int parallel_detect;
+	unsigned int fec_ability;
+
+	unsigned int lpm_ctrl;		/* CTRL1 for resume */
+};
+
+static int amd_xgbe_an_enable_kr_training(struct phy_device *phydev)
+{
+	int ret;
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL);
+	if (ret < 0)
+		return ret;
+
+	ret |= XGBE_PHY_KR_TRAINING_ENABLE;
+	phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, ret);
+
+	return 0;
+}
+
+static int amd_xgbe_an_disable_kr_training(struct phy_device *phydev)
+{
+	int ret;
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL);
+	if (ret < 0)
+		return ret;
+
+	ret &= ~XGBE_PHY_KR_TRAINING_ENABLE;
+	phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, ret);
+
+	return 0;
+}
+
+static int amd_xgbe_phy_pcs_power_cycle(struct phy_device *phydev)
+{
+	int ret;
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
+	if (ret < 0)
+		return ret;
+
+	ret |= MDIO_CTRL1_LPOWER;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
+
+	usleep_range(75, 100);
+
+	ret &= ~MDIO_CTRL1_LPOWER;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
+
+	return 0;
+}
+
+static void amd_xgbe_phy_serdes_start_ratechange(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	u16 val, mask;
+
+	/* Assert Rx and Tx ratechange in CMU_reg16 */
+	val = XCMU_IOREAD(priv, CMU_REG16);
+
+	mask = (1 << (CMU_REG16_TX_RATE_CHANGE_BASE -
+		      (priv->serdes_channel * CMU_REG16_RATE_CHANGE_DECR))) |
+	       (1 << (CMU_REG16_RX_RATE_CHANGE_BASE -
+		      (priv->serdes_channel * CMU_REG16_RATE_CHANGE_DECR)));
+	val |= mask;
+
+	XCMU_IOWRITE(priv, CMU_REG16, val);
+}
+
+static void amd_xgbe_phy_serdes_complete_ratechange(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	u16 val, mask;
+	unsigned int wait;
+
+	/* Release Rx and Tx ratechange for proper channel in CMU_reg16 */
+	val = XCMU_IOREAD(priv, CMU_REG16);
+
+	mask = (1 << (CMU_REG16_TX_RATE_CHANGE_BASE -
+		      (priv->serdes_channel * CMU_REG16_RATE_CHANGE_DECR))) |
+	       (1 << (CMU_REG16_RX_RATE_CHANGE_BASE -
+		      (priv->serdes_channel * CMU_REG16_RATE_CHANGE_DECR)));
+	val &= ~mask;
+
+	XCMU_IOWRITE(priv, CMU_REG16, val);
+
+	/* Wait for Rx and Tx ready in CMU_reg15 */
+	mask = (1 << priv->serdes_channel) |
+	       (1 << (priv->serdes_channel + 8));
+	wait = XGBE_PHY_RATECHANGE_COUNT;
+	while (wait--) {
+		udelay(50);
+
+		val = XCMU_IOREAD(priv, CMU_REG15);
+		if ((val & mask) == mask)
+			return;
+	}
+
+	netdev_dbg(phydev->attached_dev, "SerDes rx/tx not ready (%#hx)\n",
+		   val);
+}
+
+static int amd_xgbe_phy_xgmii_mode(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	int ret;
+
+	/* Disable KR training */
+	ret = amd_xgbe_an_disable_kr_training(phydev);
+	if (ret < 0)
+		return ret;
+
+	/* Set PCS to KR/10G speed */
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2);
+	if (ret < 0)
+		return ret;
+
+	ret &= ~MDIO_PCS_CTRL2_TYPE;
+	ret |= MDIO_PCS_CTRL2_10GBR;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2, ret);
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
+	if (ret < 0)
+		return ret;
+
+	ret &= ~MDIO_CTRL1_SPEEDSEL;
+	ret |= MDIO_CTRL1_SPEED10G;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
+
+	ret = amd_xgbe_phy_pcs_power_cycle(phydev);
+	if (ret < 0)
+		return ret;
+
+	/* Set SerDes to 10G speed */
+	spin_lock(&cmu_lock);
+
+	amd_xgbe_phy_serdes_start_ratechange(phydev);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG3, TX_DATA_RATE, RXTX_FULL_RATE);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG3, TX_WORD_MODE, RXTX_66BIT_WORD);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG5, TXAMP_CNTL,
+			   priv->serdes_tx_amp[XGBE_PHY_SPEED_10000]);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RX_DATA_RATE, RXTX_FULL_RATE);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RX_WORD_MODE, RXTX_66BIT_WORD);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA,
+			   priv->serdes_blwc[XGBE_PHY_SPEED_10000]);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG53, RX_PLLSELECT, RXTX_10G_PLL);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG53, TX_PLLSELECT, RXTX_10G_PLL);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG53, PI_SPD_SEL_CDR,
+			   priv->serdes_cdr_rate[XGBE_PHY_SPEED_10000]);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG,
+			   priv->serdes_pq_skew[XGBE_PHY_SPEED_10000]);
+
+	amd_xgbe_phy_serdes_complete_ratechange(phydev);
+
+	spin_unlock(&cmu_lock);
+
+	return 0;
+}
+
+static int amd_xgbe_phy_gmii_2500_mode(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	int ret;
+
+	/* Disable KR training */
+	ret = amd_xgbe_an_disable_kr_training(phydev);
+	if (ret < 0)
+		return ret;
+
+	/* Set PCS to KX/1G speed */
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2);
+	if (ret < 0)
+		return ret;
+
+	ret &= ~MDIO_PCS_CTRL2_TYPE;
+	ret |= MDIO_PCS_CTRL2_10GBX;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2, ret);
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
+	if (ret < 0)
+		return ret;
+
+	ret &= ~MDIO_CTRL1_SPEEDSEL;
+	ret |= MDIO_CTRL1_SPEED1G;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
+
+	ret = amd_xgbe_phy_pcs_power_cycle(phydev);
+	if (ret < 0)
+		return ret;
+
+	/* Set SerDes to 2.5G speed */
+	spin_lock(&cmu_lock);
+
+	amd_xgbe_phy_serdes_start_ratechange(phydev);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG3, TX_DATA_RATE, RXTX_HALF_RATE);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG3, TX_WORD_MODE, RXTX_10BIT_WORD);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG5, TXAMP_CNTL,
+			   priv->serdes_tx_amp[XGBE_PHY_SPEED_2500]);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RX_DATA_RATE, RXTX_HALF_RATE);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RX_WORD_MODE, RXTX_10BIT_WORD);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA,
+			   priv->serdes_blwc[XGBE_PHY_SPEED_2500]);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG53, RX_PLLSELECT, RXTX_1G_PLL);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG53, TX_PLLSELECT, RXTX_1G_PLL);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG53, PI_SPD_SEL_CDR,
+			   priv->serdes_cdr_rate[XGBE_PHY_SPEED_2500]);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG,
+			   priv->serdes_pq_skew[XGBE_PHY_SPEED_2500]);
+
+	amd_xgbe_phy_serdes_complete_ratechange(phydev);
+
+	spin_unlock(&cmu_lock);
+
+	return 0;
+}
+
+static int amd_xgbe_phy_gmii_mode(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	int ret;
+
+	/* Disable KR training */
+	ret = amd_xgbe_an_disable_kr_training(phydev);
+	if (ret < 0)
+		return ret;
+
+	/* Set PCS to KX/1G speed */
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2);
+	if (ret < 0)
+		return ret;
+
+	ret &= ~MDIO_PCS_CTRL2_TYPE;
+	ret |= MDIO_PCS_CTRL2_10GBX;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2, ret);
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
+	if (ret < 0)
+		return ret;
+
+	ret &= ~MDIO_CTRL1_SPEEDSEL;
+	ret |= MDIO_CTRL1_SPEED1G;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
+
+	ret = amd_xgbe_phy_pcs_power_cycle(phydev);
+	if (ret < 0)
+		return ret;
+
+	/* Set SerDes to 1G speed */
+	spin_lock(&cmu_lock);
+
+	amd_xgbe_phy_serdes_start_ratechange(phydev);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG3, TX_DATA_RATE, RXTX_FIFTH_RATE);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG3, TX_WORD_MODE, RXTX_10BIT_WORD);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG5, TXAMP_CNTL,
+			   priv->serdes_tx_amp[XGBE_PHY_SPEED_1000]);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RX_DATA_RATE, RXTX_FIFTH_RATE);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RX_WORD_MODE, RXTX_10BIT_WORD);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA,
+			   priv->serdes_blwc[XGBE_PHY_SPEED_1000]);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG53, RX_PLLSELECT, RXTX_1G_PLL);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG53, TX_PLLSELECT, RXTX_1G_PLL);
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG53, PI_SPD_SEL_CDR,
+			   priv->serdes_cdr_rate[XGBE_PHY_SPEED_1000]);
+
+	XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG,
+			   priv->serdes_pq_skew[XGBE_PHY_SPEED_1000]);
+
+	amd_xgbe_phy_serdes_complete_ratechange(phydev);
+
+	spin_unlock(&cmu_lock);
+
+	return 0;
+}
+
+static int amd_xgbe_phy_cur_mode(struct phy_device *phydev,
+				 enum amd_xgbe_phy_mode *mode)
+{
+	int ret;
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL2);
+	if (ret < 0)
+		return ret;
+
+	if ((ret & MDIO_PCS_CTRL2_TYPE) == MDIO_PCS_CTRL2_10GBR)
+		*mode = AMD_XGBE_MODE_KR;
+	else
+		*mode = AMD_XGBE_MODE_KX;
+
+	return 0;
+}
+
+static bool amd_xgbe_phy_in_kr_mode(struct phy_device *phydev)
+{
+	enum amd_xgbe_phy_mode mode;
+
+	if (amd_xgbe_phy_cur_mode(phydev, &mode))
+		return false;
+
+	return (mode == AMD_XGBE_MODE_KR);
+}
+
+static int amd_xgbe_phy_switch_mode(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	int ret;
+
+	/* If we are in KR switch to KX, and vice-versa */
+	if (amd_xgbe_phy_in_kr_mode(phydev)) {
+		if (priv->speed_set == AMD_XGBE_PHY_SPEEDSET_1000_10000)
+			ret = amd_xgbe_phy_gmii_mode(phydev);
+		else
+			ret = amd_xgbe_phy_gmii_2500_mode(phydev);
+	} else {
+		ret = amd_xgbe_phy_xgmii_mode(phydev);
+	}
+
+	return ret;
+}
+
+static int amd_xgbe_phy_set_mode(struct phy_device *phydev,
+				 enum amd_xgbe_phy_mode mode)
+{
+	enum amd_xgbe_phy_mode cur_mode;
+	int ret;
+
+	ret = amd_xgbe_phy_cur_mode(phydev, &cur_mode);
+	if (ret)
+		return ret;
+
+	if (mode != cur_mode)
+		ret = amd_xgbe_phy_switch_mode(phydev);
+
+	return ret;
+}
+
+static int amd_xgbe_phy_set_an(struct phy_device *phydev, bool enable,
+			       bool restart)
+{
+	int ret;
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1);
+	if (ret < 0)
+		return ret;
+
+	ret &= ~MDIO_AN_CTRL1_ENABLE;
+
+	if (enable)
+		ret |= MDIO_AN_CTRL1_ENABLE;
+
+	if (restart)
+		ret |= MDIO_AN_CTRL1_RESTART;
+
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1, ret);
+
+	return 0;
+}
+
+static int amd_xgbe_phy_restart_an(struct phy_device *phydev)
+{
+	return amd_xgbe_phy_set_an(phydev, true, true);
+}
+
+static int amd_xgbe_phy_disable_an(struct phy_device *phydev)
+{
+	return amd_xgbe_phy_set_an(phydev, false, false);
+}
+
+static enum amd_xgbe_phy_an amd_xgbe_an_tx_training(struct phy_device *phydev,
+						    enum amd_xgbe_phy_rx *state)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	int ad_reg, lp_reg, ret;
+
+	*state = AMD_XGBE_RX_COMPLETE;
+
+	/* If we're not in KR mode then we're done */
+	if (!amd_xgbe_phy_in_kr_mode(phydev))
+		return AMD_XGBE_AN_PAGE_RECEIVED;
+
+	/* Enable/Disable FEC */
+	ad_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
+	if (ad_reg < 0)
+		return AMD_XGBE_AN_ERROR;
+
+	lp_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA + 2);
+	if (lp_reg < 0)
+		return AMD_XGBE_AN_ERROR;
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_FEC_CTRL);
+	if (ret < 0)
+		return AMD_XGBE_AN_ERROR;
+
+	ret &= ~XGBE_PHY_FEC_MASK;
+	if ((ad_reg & 0xc000) && (lp_reg & 0xc000))
+		ret |= priv->fec_ability;
+
+	phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_FEC_CTRL, ret);
+
+	/* Start KR training */
+	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL);
+	if (ret < 0)
+		return AMD_XGBE_AN_ERROR;
+
+	if (ret & XGBE_PHY_KR_TRAINING_ENABLE) {
+		ret |= XGBE_PHY_KR_TRAINING_START;
+		phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL,
+			      ret);
+	}
+
+	return AMD_XGBE_AN_PAGE_RECEIVED;
+}
+
+static enum amd_xgbe_phy_an amd_xgbe_an_tx_xnp(struct phy_device *phydev,
+					       enum amd_xgbe_phy_rx *state)
+{
+	u16 msg;
+
+	*state = AMD_XGBE_RX_XNP;
+
+	msg = XNP_MCF_NULL_MESSAGE;
+	msg |= XNP_MP_FORMATTED;
+
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP + 2, 0);
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP + 1, 0);
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP, msg);
+
+	return AMD_XGBE_AN_PAGE_RECEIVED;
+}
+
+static enum amd_xgbe_phy_an amd_xgbe_an_rx_bpa(struct phy_device *phydev,
+					       enum amd_xgbe_phy_rx *state)
+{
+	unsigned int link_support;
+	int ret, ad_reg, lp_reg;
+
+	/* Read Base Ability register 2 first */
+	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA + 1);
+	if (ret < 0)
+		return AMD_XGBE_AN_ERROR;
+
+	/* Check for a supported mode, otherwise restart in a different one */
+	link_support = amd_xgbe_phy_in_kr_mode(phydev) ? 0x80 : 0x20;
+	if (!(ret & link_support))
+		return AMD_XGBE_AN_INCOMPAT_LINK;
+
+	/* Check Extended Next Page support */
+	ad_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
+	if (ad_reg < 0)
+		return AMD_XGBE_AN_ERROR;
+
+	lp_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA);
+	if (lp_reg < 0)
+		return AMD_XGBE_AN_ERROR;
+
+	return ((ad_reg & XNP_NP_EXCHANGE) || (lp_reg & XNP_NP_EXCHANGE)) ?
+	       amd_xgbe_an_tx_xnp(phydev, state) :
+	       amd_xgbe_an_tx_training(phydev, state);
+}
+
+static enum amd_xgbe_phy_an amd_xgbe_an_rx_xnp(struct phy_device *phydev,
+					       enum amd_xgbe_phy_rx *state)
+{
+	int ad_reg, lp_reg;
+
+	/* Check Extended Next Page support */
+	ad_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP);
+	if (ad_reg < 0)
+		return AMD_XGBE_AN_ERROR;
+
+	lp_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPX);
+	if (lp_reg < 0)
+		return AMD_XGBE_AN_ERROR;
+
+	return ((ad_reg & XNP_NP_EXCHANGE) || (lp_reg & XNP_NP_EXCHANGE)) ?
+	       amd_xgbe_an_tx_xnp(phydev, state) :
+	       amd_xgbe_an_tx_training(phydev, state);
+}
+
+static enum amd_xgbe_phy_an amd_xgbe_an_page_received(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	enum amd_xgbe_phy_rx *state;
+	int ret;
+
+	state = amd_xgbe_phy_in_kr_mode(phydev) ? &priv->kr_state
+						: &priv->kx_state;
+
+	switch (*state) {
+	case AMD_XGBE_RX_BPA:
+		ret = amd_xgbe_an_rx_bpa(phydev, state);
+		break;
+
+	case AMD_XGBE_RX_XNP:
+		ret = amd_xgbe_an_rx_xnp(phydev, state);
+		break;
+
+	default:
+		ret = AMD_XGBE_AN_ERROR;
+	}
+
+	return ret;
+}
+
+static enum amd_xgbe_phy_an amd_xgbe_an_incompat_link(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	int ret;
+
+	/* Be sure we aren't looping trying to negotiate */
+	if (amd_xgbe_phy_in_kr_mode(phydev)) {
+		priv->kr_state = AMD_XGBE_RX_ERROR;
+
+		if (!(phydev->supported & SUPPORTED_1000baseKX_Full) &&
+		    !(phydev->supported & SUPPORTED_2500baseX_Full))
+			return AMD_XGBE_AN_NO_LINK;
+
+		if (priv->kx_state != AMD_XGBE_RX_BPA)
+			return AMD_XGBE_AN_NO_LINK;
+	} else {
+		priv->kx_state = AMD_XGBE_RX_ERROR;
+
+		if (!(phydev->supported & SUPPORTED_10000baseKR_Full))
+			return AMD_XGBE_AN_NO_LINK;
+
+		if (priv->kr_state != AMD_XGBE_RX_BPA)
+			return AMD_XGBE_AN_NO_LINK;
+	}
+
+	ret = amd_xgbe_phy_disable_an(phydev);
+	if (ret)
+		return AMD_XGBE_AN_ERROR;
+
+	ret = amd_xgbe_phy_switch_mode(phydev);
+	if (ret)
+		return AMD_XGBE_AN_ERROR;
+
+	ret = amd_xgbe_phy_restart_an(phydev);
+	if (ret)
+		return AMD_XGBE_AN_ERROR;
+
+	return AMD_XGBE_AN_INCOMPAT_LINK;
+}
+
+static irqreturn_t amd_xgbe_an_isr(int irq, void *data)
+{
+	struct amd_xgbe_phy_priv *priv = (struct amd_xgbe_phy_priv *)data;
+
+	/* Interrupt reason must be read and cleared outside of IRQ context */
+	disable_irq_nosync(priv->an_irq);
+
+	queue_work(priv->an_workqueue, &priv->an_irq_work);
+
+	return IRQ_HANDLED;
+}
+
+static void amd_xgbe_an_irq_work(struct work_struct *work)
+{
+	struct amd_xgbe_phy_priv *priv = container_of(work,
+						      struct amd_xgbe_phy_priv,
+						      an_irq_work);
+
+	/* Avoid a race between enabling the IRQ and exiting the work by
+	 * waiting for the work to finish and then queueing it
+	 */
+	flush_work(&priv->an_work);
+	queue_work(priv->an_workqueue, &priv->an_work);
+}
+
+static void amd_xgbe_an_state_machine(struct work_struct *work)
+{
+	struct amd_xgbe_phy_priv *priv = container_of(work,
+						      struct amd_xgbe_phy_priv,
+						      an_work);
+	struct phy_device *phydev = priv->phydev;
+	enum amd_xgbe_phy_an cur_state = priv->an_state;
+	int int_reg, int_mask;
+
+	mutex_lock(&priv->an_mutex);
+
+	/* Read the interrupt */
+	int_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT);
+	if (!int_reg)
+		goto out;
+
+next_int:
+	if (int_reg < 0) {
+		priv->an_state = AMD_XGBE_AN_ERROR;
+		int_mask = XGBE_AN_INT_MASK;
+	} else if (int_reg & XGBE_AN_PG_RCV) {
+		priv->an_state = AMD_XGBE_AN_PAGE_RECEIVED;
+		int_mask = XGBE_AN_PG_RCV;
+	} else if (int_reg & XGBE_AN_INC_LINK) {
+		priv->an_state = AMD_XGBE_AN_INCOMPAT_LINK;
+		int_mask = XGBE_AN_INC_LINK;
+	} else if (int_reg & XGBE_AN_INT_CMPLT) {
+		priv->an_state = AMD_XGBE_AN_COMPLETE;
+		int_mask = XGBE_AN_INT_CMPLT;
+	} else {
+		priv->an_state = AMD_XGBE_AN_ERROR;
+		int_mask = 0;
+	}
+
+	/* Clear the interrupt to be processed */
+	int_reg &= ~int_mask;
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, int_reg);
+
+	priv->an_result = priv->an_state;
+
+again:
+	cur_state = priv->an_state;
+
+	switch (priv->an_state) {
+	case AMD_XGBE_AN_READY:
+		priv->an_supported = 0;
+		break;
+
+	case AMD_XGBE_AN_PAGE_RECEIVED:
+		priv->an_state = amd_xgbe_an_page_received(phydev);
+		priv->an_supported++;
+		break;
+
+	case AMD_XGBE_AN_INCOMPAT_LINK:
+		priv->an_supported = 0;
+		priv->parallel_detect = 0;
+		priv->an_state = amd_xgbe_an_incompat_link(phydev);
+		break;
+
+	case AMD_XGBE_AN_COMPLETE:
+		priv->parallel_detect = priv->an_supported ? 0 : 1;
+		netdev_dbg(phydev->attached_dev, "%s successful\n",
+			   priv->an_supported ? "Auto negotiation"
+					      : "Parallel detection");
+		break;
+
+	case AMD_XGBE_AN_NO_LINK:
+		break;
+
+	default:
+		priv->an_state = AMD_XGBE_AN_ERROR;
+	}
+
+	if (priv->an_state == AMD_XGBE_AN_NO_LINK) {
+		/* Disable auto-negotiation for now - it will be
+		 * re-enabled once a link is established
+		 */
+		amd_xgbe_phy_disable_an(phydev);
+
+		int_reg = 0;
+		phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+	} else if (priv->an_state == AMD_XGBE_AN_ERROR) {
+		netdev_err(phydev->attached_dev,
+			   "error during auto-negotiation, state=%u\n",
+			   cur_state);
+
+		int_reg = 0;
+		phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+	}
+
+	if (priv->an_state >= AMD_XGBE_AN_COMPLETE) {
+		priv->an_result = priv->an_state;
+		priv->an_state = AMD_XGBE_AN_READY;
+		priv->kr_state = AMD_XGBE_RX_BPA;
+		priv->kx_state = AMD_XGBE_RX_BPA;
+	}
+
+	if (cur_state != priv->an_state)
+		goto again;
+
+	if (int_reg)
+		goto next_int;
+
+out:
+	enable_irq(priv->an_irq);
+
+	mutex_unlock(&priv->an_mutex);
+}
+
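+/* Clause 73 advertisement bits used below: 0x20 = 1000BASE-KX,
+ * 0x80 = 10GBASE-KR, 0x400/0x800 = pause/asym pause, 0xc000 = FEC
+ * ability/requested.
+ */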
+static int amd_xgbe_an_init(struct phy_device *phydev)
+{
+	int ret;
+
+	/* Set up Advertisement register 3 first */
+	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
+	if (ret < 0)
+		return ret;
+
+	if (phydev->supported & SUPPORTED_10000baseR_FEC)
+		ret |= 0xc000;
+	else
+		ret &= ~0xc000;
+
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2, ret);
+
+	/* Set up Advertisement register 2 next */
+	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
+	if (ret < 0)
+		return ret;
+
+	if (phydev->supported & SUPPORTED_10000baseKR_Full)
+		ret |= 0x80;
+	else
+		ret &= ~0x80;
+
+	if ((phydev->supported & SUPPORTED_1000baseKX_Full) ||
+	    (phydev->supported & SUPPORTED_2500baseX_Full))
+		ret |= 0x20;
+	else
+		ret &= ~0x20;
+
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1, ret);
+
+	/* Set up Advertisement register 1 last */
+	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
+	if (ret < 0)
+		return ret;
+
+	if (phydev->supported & SUPPORTED_Pause)
+		ret |= 0x400;
+	else
+		ret &= ~0x400;
+
+	if (phydev->supported & SUPPORTED_Asym_Pause)
+		ret |= 0x800;
+	else
+		ret &= ~0x800;
+
+	/* We don't intend to perform XNP */
+	ret &= ~XNP_NP_EXCHANGE;
+
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE, ret);
+
+	return 0;
+}
+
+static int amd_xgbe_phy_soft_reset(struct phy_device *phydev)
+{
+	int count, ret;
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
+	if (ret < 0)
+		return ret;
+
+	ret |= MDIO_CTRL1_RESET;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
+
+	count = 50;
+	do {
+		msleep(20);
+		ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
+		if (ret < 0)
+			return ret;
+	} while ((ret & MDIO_CTRL1_RESET) && --count);
+
+	if (ret & MDIO_CTRL1_RESET)
+		return -ETIMEDOUT;
+
+	/* Disable auto-negotiation for now */
+	ret = amd_xgbe_phy_disable_an(phydev);
+	if (ret < 0)
+		return ret;
+
+	/* Clear auto-negotiation interrupts */
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+
+	return 0;
+}
+
+static int amd_xgbe_phy_config_init(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	struct net_device *netdev = phydev->attached_dev;
+	int ret;
+
+	if (!priv->an_irq_allocated) {
+		/* Allocate the auto-negotiation workqueue and interrupt */
+		snprintf(priv->an_irq_name, sizeof(priv->an_irq_name) - 1,
+			 "%s-pcs", netdev_name(netdev));
+
+		priv->an_workqueue =
+			create_singlethread_workqueue(priv->an_irq_name);
+		if (!priv->an_workqueue) {
+			netdev_err(netdev, "phy workqueue creation failed\n");
+			return -ENOMEM;
+		}
+
+		ret = devm_request_irq(priv->dev, priv->an_irq,
+				       amd_xgbe_an_isr, 0, priv->an_irq_name,
+				       priv);
+		if (ret) {
+			netdev_err(netdev, "phy irq request failed\n");
+			destroy_workqueue(priv->an_workqueue);
+			return ret;
+		}
+
+		priv->an_irq_allocated = 1;
+	}
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_FEC_ABILITY);
+	if (ret < 0)
+		return ret;
+	priv->fec_ability = ret & XGBE_PHY_FEC_MASK;
+
+	/* Initialize supported features */
+	phydev->supported = SUPPORTED_Autoneg;
+	phydev->supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
+	phydev->supported |= SUPPORTED_Backplane;
+	phydev->supported |= SUPPORTED_10000baseKR_Full;
+	switch (priv->speed_set) {
+	case AMD_XGBE_PHY_SPEEDSET_1000_10000:
+		phydev->supported |= SUPPORTED_1000baseKX_Full;
+		break;
+	case AMD_XGBE_PHY_SPEEDSET_2500_10000:
+		phydev->supported |= SUPPORTED_2500baseX_Full;
+		break;
+	}
+
+	if (priv->fec_ability & XGBE_PHY_FEC_ENABLE)
+		phydev->supported |= SUPPORTED_10000baseR_FEC;
+
+	phydev->advertising = phydev->supported;
+
+	/* Set initial mode - call the mode setting routines
+	 * directly to ensure we are properly configured
+	 */
+	if (phydev->supported & SUPPORTED_10000baseKR_Full)
+		ret = amd_xgbe_phy_xgmii_mode(phydev);
+	else if (phydev->supported & SUPPORTED_1000baseKX_Full)
+		ret = amd_xgbe_phy_gmii_mode(phydev);
+	else if (phydev->supported & SUPPORTED_2500baseX_Full)
+		ret = amd_xgbe_phy_gmii_2500_mode(phydev);
+	else
+		ret = -EINVAL;
+	if (ret < 0)
+		return ret;
+
+	/* Set up advertisement registers based on current settings */
+	ret = amd_xgbe_an_init(phydev);
+	if (ret)
+		return ret;
+
+	/* Enable auto-negotiation interrupts */
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INTMASK, 0x07);
+
+	return 0;
+}
+
+static int amd_xgbe_phy_setup_forced(struct phy_device *phydev)
+{
+	int ret;
+
+	/* Disable auto-negotiation */
+	ret = amd_xgbe_phy_disable_an(phydev);
+	if (ret < 0)
+		return ret;
+
+	/* Validate/Set specified speed */
+	switch (phydev->speed) {
+	case SPEED_10000:
+		ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KR);
+		break;
+
+	case SPEED_2500:
+	case SPEED_1000:
+		ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KX);
+		break;
+
+	default:
+		ret = -EINVAL;
+	}
+
+	if (ret < 0)
+		return ret;
+
+	/* Validate duplex mode */
+	if (phydev->duplex != DUPLEX_FULL)
+		return -EINVAL;
+
+	phydev->pause = 0;
+	phydev->asym_pause = 0;
+
+	return 0;
+}
+
+static int __amd_xgbe_phy_config_aneg(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	u32 mmd_mask = phydev->c45_ids.devices_in_package;
+	int ret;
+
+	if (phydev->autoneg != AUTONEG_ENABLE)
+		return amd_xgbe_phy_setup_forced(phydev);
+
+	/* Make sure we have the AN MMD present */
+	if (!(mmd_mask & MDIO_DEVS_AN))
+		return -EINVAL;
+
+	/* Disable auto-negotiation interrupt */
+	disable_irq(priv->an_irq);
+
+	/* Start auto-negotiation in a supported mode */
+	if (phydev->supported & SUPPORTED_10000baseKR_Full)
+		ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KR);
+	else if ((phydev->supported & SUPPORTED_1000baseKX_Full) ||
+		 (phydev->supported & SUPPORTED_2500baseX_Full))
+		ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KX);
+	else
+		ret = -EINVAL;
+	if (ret < 0) {
+		enable_irq(priv->an_irq);
+		return ret;
+	}
+
+	/* Disable and stop any in progress auto-negotiation */
+	ret = amd_xgbe_phy_disable_an(phydev);
+	if (ret < 0)
+		return ret;
+
+	/* Clear any auto-negotiation interrupts */
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+
+	priv->an_result = AMD_XGBE_AN_READY;
+	priv->an_state = AMD_XGBE_AN_READY;
+	priv->kr_state = AMD_XGBE_RX_BPA;
+	priv->kx_state = AMD_XGBE_RX_BPA;
+
+	/* Re-enable auto-negotiation interrupt */
+	enable_irq(priv->an_irq);
+
+	/* Set up advertisement registers based on current settings */
+	ret = amd_xgbe_an_init(phydev);
+	if (ret)
+		return ret;
+
+	/* Enable and start auto-negotiation */
+	ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_KR_CTRL);
+	if (ret < 0)
+		return ret;
+
+	ret |= MDIO_KR_CTRL_PDETECT;
+	phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_KR_CTRL, ret);
+
+	return amd_xgbe_phy_restart_an(phydev);
+}
+
+static int amd_xgbe_phy_config_aneg(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	int ret;
+
+	mutex_lock(&priv->an_mutex);
+
+	ret = __amd_xgbe_phy_config_aneg(phydev);
+
+	mutex_unlock(&priv->an_mutex);
+
+	return ret;
+}
+
+static int amd_xgbe_phy_aneg_done(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+
+	return (priv->an_result == AMD_XGBE_AN_COMPLETE);
+}
+
+static int amd_xgbe_phy_update_link(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	unsigned int check_again, autoneg;
+	int ret;
+
+	/* If we're doing auto-negotiation don't report link down */
+	if (priv->an_state != AMD_XGBE_AN_READY) {
+		phydev->link = 1;
+		return 0;
+	}
+
+	/* Since the device can be in the wrong mode when a link is
+	 * (re-)established (cable connected after the interface is
+	 * up, etc.), the link status may report no link. If there
+	 * is no link, try switching modes and checking the status
+	 * again if auto negotiation is enabled.
+	 */
+	check_again = (phydev->autoneg == AUTONEG_ENABLE) ? 1 : 0;
+again:
+	/* Link status is latched low, so read once to clear
+	 * and then read again to get current state
+	 */
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_STAT1);
+	if (ret < 0)
+		return ret;
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_STAT1);
+	if (ret < 0)
+		return ret;
+
+	phydev->link = (ret & MDIO_STAT1_LSTATUS) ? 1 : 0;
+
+	if (!phydev->link) {
+		if (check_again) {
+			ret = amd_xgbe_phy_switch_mode(phydev);
+			if (ret < 0)
+				return ret;
+			check_again = 0;
+			goto again;
+		}
+	}
+
+	autoneg = (phydev->link && !priv->link) ? 1 : 0;
+	priv->link = phydev->link;
+	if (autoneg) {
+		/* Link is (back) up, re-start auto-negotiation */
+		ret = amd_xgbe_phy_config_aneg(phydev);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int amd_xgbe_phy_read_status(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	u32 mmd_mask = phydev->c45_ids.devices_in_package;
+	int ret, ad_ret, lp_ret;
+
+	ret = amd_xgbe_phy_update_link(phydev);
+	if (ret)
+		return ret;
+
+	if ((phydev->autoneg == AUTONEG_ENABLE) &&
+	    !priv->parallel_detect) {
+		if (!(mmd_mask & MDIO_DEVS_AN))
+			return -EINVAL;
+
+		if (!amd_xgbe_phy_aneg_done(phydev))
+			return 0;
+
+		/* Compare Advertisement and Link Partner register 1 */
+		ad_ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
+		if (ad_ret < 0)
+			return ad_ret;
+		lp_ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA);
+		if (lp_ret < 0)
+			return lp_ret;
+
+		ad_ret &= lp_ret;
+		phydev->pause = (ad_ret & 0x400) ? 1 : 0;
+		phydev->asym_pause = (ad_ret & 0x800) ? 1 : 0;
+
+		/* Compare Advertisement and Link Partner register 2 */
+		ad_ret = phy_read_mmd(phydev, MDIO_MMD_AN,
+				      MDIO_AN_ADVERTISE + 1);
+		if (ad_ret < 0)
+			return ad_ret;
+		lp_ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA + 1);
+		if (lp_ret < 0)
+			return lp_ret;
+
+		ad_ret &= lp_ret;
+		if (ad_ret & 0x80) {
+			phydev->speed = SPEED_10000;
+			ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KR);
+			if (ret)
+				return ret;
+		} else {
+			switch (priv->speed_set) {
+			case AMD_XGBE_PHY_SPEEDSET_1000_10000:
+				phydev->speed = SPEED_1000;
+				break;
+
+			case AMD_XGBE_PHY_SPEEDSET_2500_10000:
+				phydev->speed = SPEED_2500;
+				break;
+			}
+
+			ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KX);
+			if (ret)
+				return ret;
+		}
+
+		phydev->duplex = DUPLEX_FULL;
+	} else {
+		if (amd_xgbe_phy_in_kr_mode(phydev)) {
+			phydev->speed = SPEED_10000;
+		} else {
+			switch (priv->speed_set) {
+			case AMD_XGBE_PHY_SPEEDSET_1000_10000:
+				phydev->speed = SPEED_1000;
+				break;
+
+			case AMD_XGBE_PHY_SPEEDSET_2500_10000:
+				phydev->speed = SPEED_2500;
+				break;
+			}
+		}
+		phydev->duplex = DUPLEX_FULL;
+		phydev->pause = 0;
+		phydev->asym_pause = 0;
+	}
+
+	return 0;
+}
+
+static int amd_xgbe_phy_suspend(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	int ret;
+
+	mutex_lock(&phydev->lock);
+
+	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
+	if (ret < 0)
+		goto unlock;
+
+	priv->lpm_ctrl = ret;
+
+	ret |= MDIO_CTRL1_LPOWER;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
+
+	ret = 0;
+
+unlock:
+	mutex_unlock(&phydev->lock);
+
+	return ret;
+}
+
+static int amd_xgbe_phy_resume(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+
+	mutex_lock(&phydev->lock);
+
+	priv->lpm_ctrl &= ~MDIO_CTRL1_LPOWER;
+	phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, priv->lpm_ctrl);
+
+	mutex_unlock(&phydev->lock);
+
+	return 0;
+}
+
+static unsigned int amd_xgbe_phy_resource_count(struct platform_device *pdev,
+						unsigned int type)
+{
+	unsigned int count;
+	int i;
+
+	for (i = 0, count = 0; i < pdev->num_resources; i++) {
+		struct resource *r = &pdev->resource[i];
+
+		if (type == resource_type(r))
+			count++;
+	}
+
+	return count;
+}
+
+static int amd_xgbe_phy_probe(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv;
+	struct platform_device *phy_pdev;
+	struct device *dev, *phy_dev;
+	unsigned int phy_resnum, phy_irqnum;
+	int ret;
+
+	if (!phydev->bus || !phydev->bus->parent)
+		return -EINVAL;
+
+	dev = phydev->bus->parent;
+
+	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	priv->pdev = to_platform_device(dev);
+	priv->adev = ACPI_COMPANION(dev);
+	priv->dev = dev;
+	priv->phydev = phydev;
+	mutex_init(&priv->an_mutex);
+	INIT_WORK(&priv->an_irq_work, amd_xgbe_an_irq_work);
+	INIT_WORK(&priv->an_work, amd_xgbe_an_state_machine);
+
+	if (!priv->adev || acpi_disabled) {
+		struct device_node *bus_node;
+		struct device_node *phy_node;
+
+		bus_node = priv->dev->of_node;
+		phy_node = of_parse_phandle(bus_node, "phy-handle", 0);
+		if (!phy_node) {
+			dev_err(dev, "unable to parse phy-handle\n");
+			ret = -EINVAL;
+			goto err_priv;
+		}
+
+		phy_pdev = of_find_device_by_node(phy_node);
+		of_node_put(phy_node);
+
+		if (!phy_pdev) {
+			dev_err(dev, "unable to obtain phy device\n");
+			ret = -EINVAL;
+			goto err_priv;
+		}
+
+		phy_resnum = 0;
+		phy_irqnum = 0;
+	} else {
+		/* In ACPI, the XGBE and PHY resources are grouped
+		 * together, with the PHY resources at the end
+		 */
+		phy_pdev = priv->pdev;
+		phy_resnum = amd_xgbe_phy_resource_count(phy_pdev,
+							 IORESOURCE_MEM) - 2;
+		phy_irqnum = amd_xgbe_phy_resource_count(phy_pdev,
+							 IORESOURCE_IRQ) - 1;
+	}
+	phy_dev = &phy_pdev->dev;
+
+	/* Get the device mmio areas */
+	priv->rxtx_res = platform_get_resource(phy_pdev, IORESOURCE_MEM,
+					       phy_resnum++);
+	priv->rxtx_regs = devm_ioremap_resource(dev, priv->rxtx_res);
+	if (IS_ERR(priv->rxtx_regs)) {
+		dev_err(dev, "rxtx ioremap failed\n");
+		ret = PTR_ERR(priv->rxtx_regs);
+		goto err_put;
+	}
+
+	/* All xgbe phy devices share the CMU registers so retrieve
+	 * the resource and do the ioremap directly rather than
+	 * the devm_ioremap_resource call
+	 */
+	priv->cmu_res = platform_get_resource(phy_pdev, IORESOURCE_MEM,
+					      phy_resnum++);
+	if (!priv->cmu_res) {
+		dev_err(dev, "cmu invalid resource\n");
+		ret = -EINVAL;
+		goto err_rxtx;
+	}
+	priv->cmu_regs = devm_ioremap_nocache(dev, priv->cmu_res->start,
+					      resource_size(priv->cmu_res));
+	if (!priv->cmu_regs) {
+		dev_err(dev, "cmu ioremap failed\n");
+		ret = -ENOMEM;
+		goto err_rxtx;
+	}
+
+	/* Get the auto-negotiation interrupt */
+	ret = platform_get_irq(phy_pdev, phy_irqnum);
+	if (ret < 0) {
+		dev_err(dev, "platform_get_irq failed\n");
+		goto err_cmu;
+	}
+	if (!phy_irqnum) {
+		struct irq_data *d = irq_get_irq_data(ret);
+
+		if (!d) {
+			dev_err(dev, "unable to set AN interrupt\n");
+			ret = -EINVAL;
+			goto err_cmu;
+		}
+
+#ifdef CONFIG_ACPI
+		ret = acpi_register_gsi(dev, d->hwirq - 2,
+					ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_HIGH);
+#else
+		ret = -EINVAL;
+#endif
+		if (ret < 0) {
+			dev_err(dev, "unable to set AN interrupt\n");
+			ret = -EINVAL;
+			goto err_cmu;
+		}
+	}
+	priv->an_irq = ret;
+
+	/* Get the device serdes channel property */
+	ret = device_property_read_u32(phy_dev, XGBE_PHY_CHANNEL_PROPERTY,
+				       &priv->serdes_channel);
+	if (ret) {
+		dev_err(dev, "invalid %s property\n",
+			XGBE_PHY_CHANNEL_PROPERTY);
+		goto err_cmu;
+	}
+
+	/* Get the device speed set property */
+	ret = device_property_read_u32(phy_dev, XGBE_PHY_SPEEDSET_PROPERTY,
+				       &priv->speed_set);
+	if (ret) {
+		dev_err(dev, "invalid %s property\n",
+			XGBE_PHY_SPEEDSET_PROPERTY);
+		goto err_cmu;
+	}
+
+	switch (priv->speed_set) {
+	case AMD_XGBE_PHY_SPEEDSET_1000_10000:
+	case AMD_XGBE_PHY_SPEEDSET_2500_10000:
+		break;
+	default:
+		dev_err(dev, "invalid %s property\n",
+			XGBE_PHY_SPEEDSET_PROPERTY);
+		ret = -EINVAL;
+		goto err_cmu;
+	}
+
+	if (device_property_present(phy_dev, XGBE_PHY_BLWC_PROPERTY)) {
+		ret = device_property_read_u32_array(phy_dev,
+						     XGBE_PHY_BLWC_PROPERTY,
+						     priv->serdes_blwc,
+						     XGBE_PHY_SPEEDS);
+		if (ret) {
+			dev_err(dev, "invalid %s property\n",
+				XGBE_PHY_BLWC_PROPERTY);
+			goto err_cmu;
+		}
+	} else {
+		memcpy(priv->serdes_blwc, amd_xgbe_phy_serdes_blwc,
+		       sizeof(priv->serdes_blwc));
+	}
+
+	if (device_property_present(phy_dev, XGBE_PHY_CDR_RATE_PROPERTY)) {
+		ret = device_property_read_u32_array(phy_dev,
+						     XGBE_PHY_CDR_RATE_PROPERTY,
+						     priv->serdes_cdr_rate,
+						     XGBE_PHY_SPEEDS);
+		if (ret) {
+			dev_err(dev, "invalid %s property\n",
+				XGBE_PHY_CDR_RATE_PROPERTY);
+			goto err_cmu;
+		}
+	} else {
+		memcpy(priv->serdes_cdr_rate, amd_xgbe_phy_serdes_cdr_rate,
+		       sizeof(priv->serdes_cdr_rate));
+	}
+
+	if (device_property_present(phy_dev, XGBE_PHY_PQ_SKEW_PROPERTY)) {
+		ret = device_property_read_u32_array(phy_dev,
+						     XGBE_PHY_PQ_SKEW_PROPERTY,
+						     priv->serdes_pq_skew,
+						     XGBE_PHY_SPEEDS);
+		if (ret) {
+			dev_err(dev, "invalid %s property\n",
+				XGBE_PHY_PQ_SKEW_PROPERTY);
+			goto err_cmu;
+		}
+	} else {
+		memcpy(priv->serdes_pq_skew, amd_xgbe_phy_serdes_pq_skew,
+		       sizeof(priv->serdes_pq_skew));
+	}
+
+	if (device_property_present(phy_dev, XGBE_PHY_TX_AMP_PROPERTY)) {
+		ret = device_property_read_u32_array(phy_dev,
+						     XGBE_PHY_TX_AMP_PROPERTY,
+						     priv->serdes_tx_amp,
+						     XGBE_PHY_SPEEDS);
+		if (ret) {
+			dev_err(dev, "invalid %s property\n",
+				XGBE_PHY_TX_AMP_PROPERTY);
+			goto err_cmu;
+		}
+	} else {
+		memcpy(priv->serdes_tx_amp, amd_xgbe_phy_serdes_tx_amp,
+		       sizeof(priv->serdes_tx_amp));
+	}
+
+	priv->link = 1;
+
+	phydev->priv = priv;
+
+	if (!priv->adev || acpi_disabled)
+		platform_device_put(phy_pdev);
+
+	return 0;
+
+err_cmu:
+	devm_iounmap(dev, priv->cmu_regs);
+
+err_rxtx:
+	devm_iounmap(dev, priv->rxtx_regs);
+	devm_release_mem_region(dev, priv->rxtx_res->start,
+				resource_size(priv->rxtx_res));
+
+err_put:
+	if (!priv->adev || acpi_disabled)
+		platform_device_put(phy_pdev);
+
+err_priv:
+	devm_kfree(dev, priv);
+
+	return ret;
+}
+
+static void amd_xgbe_phy_remove(struct phy_device *phydev)
+{
+	struct amd_xgbe_phy_priv *priv = phydev->priv;
+	struct device *dev = priv->dev;
+
+	if (priv->an_irq_allocated) {
+		devm_free_irq(dev, priv->an_irq, priv);
+
+		flush_workqueue(priv->an_workqueue);
+		destroy_workqueue(priv->an_workqueue);
+	}
+
+	devm_iounmap(dev, priv->cmu_regs);
+
+	devm_iounmap(dev, priv->rxtx_regs);
+	devm_release_mem_region(dev, priv->rxtx_res->start,
+				resource_size(priv->rxtx_res));
+
+	devm_kfree(dev, priv);
+}
+
+static int amd_xgbe_match_phy_device(struct phy_device *phydev)
+{
+	return phydev->c45_ids.device_ids[MDIO_MMD_PCS] == XGBE_PHY_ID;
+}
+
+static struct phy_driver amd_xgbe_phy_a0_driver[] = {
+	{
+		.phy_id			= XGBE_PHY_ID,
+		.phy_id_mask		= XGBE_PHY_MASK,
+		.name			= "AMD XGBE PHY A0",
+		.features		= 0,
+		.probe			= amd_xgbe_phy_probe,
+		.remove			= amd_xgbe_phy_remove,
+		.soft_reset		= amd_xgbe_phy_soft_reset,
+		.config_init		= amd_xgbe_phy_config_init,
+		.suspend		= amd_xgbe_phy_suspend,
+		.resume			= amd_xgbe_phy_resume,
+		.config_aneg		= amd_xgbe_phy_config_aneg,
+		.aneg_done		= amd_xgbe_phy_aneg_done,
+		.read_status		= amd_xgbe_phy_read_status,
+		.match_phy_device	= amd_xgbe_match_phy_device,
+		.driver			= {
+			.owner = THIS_MODULE,
+		},
+	},
+};
+
+module_phy_driver(amd_xgbe_phy_a0_driver);
+
+static struct mdio_device_id __maybe_unused amd_xgbe_phy_ids_a0[] = {
+	{ XGBE_PHY_ID, XGBE_PHY_MASK },
+	{ }
+};
+MODULE_DEVICE_TABLE(mdio, amd_xgbe_phy_ids_a0);
diff --git a/drivers/pci/host/pci-xgene.c b/drivers/pci/host/pci-xgene.c
index aab5547..967ad80 100644
--- a/drivers/pci/host/pci-xgene.c
+++ b/drivers/pci/host/pci-xgene.c
@@ -29,6 +29,8 @@
 #include <linux/pci.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
+#include <linux/acpi.h>
+#include <linux/mmconfig.h>
 
 #define PCIECORE_CTLANDSTATUS		0x50
 #define PIM1_1L				0x80
@@ -468,6 +470,252 @@ static int xgene_pcie_setup(struct xgene_pcie_port *port,
 	return 0;
 }
 
+#ifdef CONFIG_ACPI
+
+/* PCIe Configuration Out/In */
+static inline void xgene_pcie_cfg_out32(void __iomem *addr, int offset, u32 val)
+{
+	writel(val, addr + offset);
+}
+
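+/* The accessors below issue only 32-bit MMIO, emulating 16- and 8-bit
+ * config writes with a read-modify-write of the aligned 32-bit word.
+ */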
+static inline void xgene_pcie_cfg_out16(void __iomem *addr, int offset, u16 val)
+{
+	u32 val32 = readl(addr + (offset & ~0x3));
+
+	switch (offset & 0x3) {
+	case 2:
+		val32 &= ~0xFFFF0000;
+		val32 |= (u32)val << 16;
+		break;
+	case 0:
+	default:
+		val32 &= ~0xFFFF;
+		val32 |= val;
+		break;
+	}
+	writel(val32, addr + (offset & ~0x3));
+}
+
+static inline void xgene_pcie_cfg_out8(void __iomem *addr, int offset, u8 val)
+{
+	u32 val32 = readl(addr + (offset & ~0x3));
+
+	switch (offset & 0x3) {
+	case 0:
+		val32 &= ~0xFF;
+		val32 |= val;
+		break;
+	case 1:
+		val32 &= ~0xFF00;
+		val32 |= (u32)val << 8;
+		break;
+	case 2:
+		val32 &= ~0xFF0000;
+		val32 |= (u32)val << 16;
+		break;
+	case 3:
+	default:
+		val32 &= ~0xFF000000;
+		val32 |= (u32)val << 24;
+		break;
+	}
+	writel(val32, addr + (offset & ~0x3));
+}
+
+static inline void xgene_pcie_cfg_in32(void __iomem *addr, int offset, u32 *val)
+{
+	*val = readl(addr + offset);
+}
+
+static inline void xgene_pcie_cfg_in16(void __iomem *addr, int offset, u32 *val)
+{
+	*val = readl(addr + (offset & ~0x3));
+
+	switch (offset & 0x3) {
+	case 2:
+		*val >>= 16;
+		break;
+	}
+
+	*val &= 0xFFFF;
+}
+
+static inline void xgene_pcie_cfg_in8(void __iomem *addr, int offset, u32 *val)
+{
+	*val = readl(addr + (offset & ~0x3));
+
+	switch (offset & 0x3) {
+	case 3:
+		*val = *val >> 24;
+		break;
+	case 2:
+		*val = *val >> 16;
+		break;
+	case 1:
+		*val = *val >> 8;
+		break;
+	}
+	*val &= 0xFF;
+}
+
+struct xgene_mcfg_info {
+	void __iomem *csr_base;
+};
+
+/*
+ * When address bits [17:16] are 2'b01, the configuration access is
+ * treated as Type 1 and forwarded to the external PCIe device.
+ */
+static void __iomem *__get_cfg_base(struct pci_mmcfg_region *cfg,
+				    unsigned int bus)
+{
+	if (bus > cfg->start_bus)
+		return cfg->virt + AXI_EP_CFG_ACCESS;
+
+	return cfg->virt;
+}
+
+/*
+ * For a configuration request, the RTDID register supplies the bus,
+ * device and function numbers for the header fields.
+ */
+static void __set_rtdid_reg(struct pci_mmcfg_region *cfg,
+			    unsigned int bus, unsigned int devfn)
+{
+	struct xgene_mcfg_info *info = cfg->data;
+	unsigned int b, d, f;
+	u32 rtdid_val = 0;
+
+	b = bus;
+	d = PCI_SLOT(devfn);
+	f = PCI_FUNC(devfn);
+
+	if (bus != cfg->start_bus)
+		rtdid_val = (b << 8) | (d << 3) | f;
+
+	writel(rtdid_val, info->csr_base + RTDID);
+	/* read the register back to ensure flush */
+	readl(info->csr_base + RTDID);
+}
+
+static int xgene_raw_pci_read(struct pci_mmcfg_region *cfg, unsigned int bus,
+			      unsigned int devfn, int offset, int len, u32 *val)
+{
+	void __iomem *addr;
+
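+	/* the root bus only exposes the root port at devfn 0 */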
+	if (bus == cfg->start_bus) {
+		if (devfn != 0) {
+			*val = 0xffffffff;
+			return PCIBIOS_DEVICE_NOT_FOUND;
+		}
+
+		/* see xgene_pcie_hide_rc_bars() above */
+		if (offset == PCI_BASE_ADDRESS_0 ||
+		    offset == PCI_BASE_ADDRESS_1) {
+			*val = 0;
+			return PCIBIOS_SUCCESSFUL;
+		}
+	}
+
+	__set_rtdid_reg(cfg, bus, devfn);
+	addr = __get_cfg_base(cfg, bus);
+	switch (len) {
+	case 1:
+		xgene_pcie_cfg_in8(addr, offset, val);
+		break;
+	case 2:
+		xgene_pcie_cfg_in16(addr, offset, val);
+		/* FIXME.
+		 * Something wrong with Configuration Request Retry Status
+		 * on this hw. Pretend it isn't supported until the problem
+		 * gets sorted out properly.
+		 */
+		if (bus == cfg->start_bus && offset == (0x40 + PCI_EXP_RTCAP))
+			*val &= ~PCI_EXP_RTCAP_CRSVIS;
+		break;
+	default:
+		xgene_pcie_cfg_in32(addr, offset, val);
+		break;
+	}
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static int xgene_raw_pci_write(struct pci_mmcfg_region *cfg, unsigned int bus,
+			       unsigned int devfn, int offset, int len, u32 val)
+{
+	void __iomem *addr;
+
+	if (bus == cfg->start_bus && devfn != 0)
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	__set_rtdid_reg(cfg, bus, devfn);
+	addr = __get_cfg_base(cfg, bus);
+	switch (len) {
+	case 1:
+		xgene_pcie_cfg_out8(addr, offset, (u8)val);
+		break;
+	case 2:
+		xgene_pcie_cfg_out16(addr, offset, (u16)val);
+		break;
+	default:
+		xgene_pcie_cfg_out32(addr, offset, val);
+		break;
+	}
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static acpi_status find_csr_base(struct acpi_resource *acpi_res, void *data)
+{
+	struct pci_mmcfg_region *cfg = data;
+	struct xgene_mcfg_info *info = cfg->data;
+	struct acpi_resource_fixed_memory32 *fixed32;
+
+	if (acpi_res->type == ACPI_RESOURCE_TYPE_FIXED_MEMORY32) {
+		fixed32 = &acpi_res->data.fixed_memory32;
+		info->csr_base = ioremap(fixed32->address,
+					 fixed32->address_length);
+		return AE_CTRL_TERMINATE;
+	}
+	return AE_OK;
+}
+
+static int xgene_mcfg_fixup(struct acpi_pci_root *root,
+			    struct pci_mmcfg_region *cfg)
+{
+	struct acpi_device *device = root->device;
+	struct xgene_mcfg_info *info;
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (info == NULL)
+		return -ENOMEM;
+
+	cfg->data = info;
+
+	acpi_walk_resources(device->handle, METHOD_NAME__CRS,
+			    find_csr_base, cfg);
+
+	if (!info->csr_base) {
+		kfree(info);
+		cfg->data = NULL;
+		return -ENODEV;
+	}
+
+	cfg->read = xgene_raw_pci_read;
+	cfg->write = xgene_raw_pci_write;
+
+	/* actual last bus reachable through this mmconfig */
+	cfg->end_bus = root->secondary.end;
+
+	/* firmware should have done this */
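+	/* the value packs primary | (secondary << 8) | (subordinate << 16) */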
+	xgene_raw_pci_write(cfg, cfg->start_bus, 0, PCI_PRIMARY_BUS, 4,
+			    cfg->start_bus | ((cfg->start_bus + 1) << 8) |
+			    (cfg->end_bus << 16));
+
+	return 0;
+}
+DECLARE_ACPI_MCFG_FIXUP("APM   ", "XGENE   ", xgene_mcfg_fixup);
+#endif /* CONFIG_ACPI */
+
 static int xgene_pcie_probe_bridge(struct platform_device *pdev)
 {
 	struct device_node *dn = pdev->dev.of_node;
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index c3e7dfc..1bed857 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -41,8 +41,7 @@ static struct irq_domain *pci_msi_get_domain(struct pci_dev *dev)
 {
 	struct irq_domain *domain = NULL;
 
-	if (dev->bus->msi)
-		domain = dev->bus->msi->domain;
+	domain = dev_get_msi_domain(&dev->dev);
 	if (!domain)
 		domain = arch_get_pci_msi_domain(dev);
 
diff --git a/drivers/pci/of.c b/drivers/pci/of.c
index f092993..75bfb85 100644
--- a/drivers/pci/of.c
+++ b/drivers/pci/of.c
@@ -9,6 +9,7 @@
  * 2 of the License, or (at your option) any later version.
  */
 
+#include <linux/irqdomain.h>
 #include <linux/kernel.h>
 #include <linux/pci.h>
 #include <linux/of.h>
@@ -59,3 +60,22 @@ struct device_node * __weak pcibios_get_phb_of_node(struct pci_bus *bus)
 		return of_node_get(bus->bridge->parent->of_node);
 	return NULL;
 }
+
+void pci_set_phb_of_msi_domain(struct pci_bus *bus)
+{
+#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
+	struct device_node *np;
+
+	if (!bus->dev.of_node)
+		return;
+	/* Start looking for a phandle to an MSI controller. */
+	np = of_parse_phandle(bus->dev.of_node, "msi-parent", 0);
+	/*
+	 * If we don't have an msi-parent property, look for a domain
+	 * directly attached to the host bridge.
+	 */
+	if (!np)
+		np = bus->dev.of_node;
+	dev_set_msi_domain(&bus->dev, irq_find_host(np));
+#endif
+}
diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
index 4890639..04b676b 100644
--- a/drivers/pci/pci-acpi.c
+++ b/drivers/pci/pci-acpi.c
@@ -9,9 +9,11 @@
 
 #include <linux/delay.h>
 #include <linux/init.h>
+#include <linux/irqdomain.h>
 #include <linux/pci.h>
 #include <linux/pci_hotplug.h>
 #include <linux/module.h>
+#include <linux/of.h>
 #include <linux/pci-aspm.h>
 #include <linux/pci-acpi.h>
 #include <linux/pm_runtime.h>
@@ -628,3 +630,37 @@ static int __init acpi_pci_init(void)
 	return 0;
 }
 arch_initcall(acpi_pci_init);
+
+#ifdef CONFIG_PCI_MSI
+void pci_acpi_set_phb_msi_domain(struct pci_bus *bus)
+{
+#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
+	struct device_node np;
+
+	if (acpi_disabled)
+		return;
+
+	np.data = kzalloc(sizeof(unsigned int), GFP_KERNEL);
+	if (!np.data)
+		return;
+
+	/*
+	 * ACPI 5.1 currently does not define a way to associate an MSI
+	 * frame ID with a device, so we can only support a single MSI
+	 * frame; the ID 0 is used as a default.
+	 *
+	 * Alternatively, we should query the ID from the device's DSDT.
+	 *
+	 * FIXME when the ACPI spec is fixed!
+	 */
+	*((u32 *)(np.data)) = 0;
+
+	/*
+	 * FIXME: this is a hack until we have a better way to find the
+	 * MSI domain from msi_frame_id.
+	 */
+	dev_set_msi_domain(&bus->dev, irq_find_host(&np));
+	kfree(np.data);	/* the frame ID is only needed during the lookup */
+#endif
+}
+#endif /* CONFIG_PCI_MSI */
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 8d2f400..324cdce 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -10,6 +10,7 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/cpumask.h>
+#include <linux/pci-acpi.h>
 #include <linux/pci-aspm.h>
 #include <asm-generic/pci-bridge.h>
 #include "pci.h"
@@ -660,6 +661,26 @@ static void pci_set_bus_speed(struct pci_bus *bus)
 	}
 }
 
+void __weak pcibios_set_phb_msi_domain(struct pci_bus *bus)
+{
+	pci_set_phb_of_msi_domain(bus);
+	pci_acpi_set_phb_msi_domain(bus);
+}
+
+static void pci_set_bus_msi_domain(struct pci_bus *bus)
+{
+	struct pci_dev *bridge = bus->self;
+
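+	/*
+	 * A root bus takes its MSI domain from the host bridge hook;
+	 * child buses inherit the domain of their bridge device.
+	 */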
+	if (!bridge)
+		pcibios_set_phb_msi_domain(bus);
+	else
+		dev_set_msi_domain(&bus->dev, dev_get_msi_domain(&bridge->dev));
+}
+
 static struct pci_bus *pci_alloc_child_bus(struct pci_bus *parent,
 					   struct pci_dev *bridge, int busnr)
 {
@@ -713,6 +730,7 @@ static struct pci_bus *pci_alloc_child_bus(struct pci_bus *parent,
 	bridge->subordinate = child;
 
 add_dev:
+	pci_set_bus_msi_domain(child);
 	ret = device_register(&child->dev);
 	WARN_ON(ret < 0);
 
@@ -1507,6 +1525,17 @@ static void pci_init_capabilities(struct pci_dev *dev)
 	pci_enable_acs(dev);
 }
 
+static void pci_set_msi_domain(struct pci_dev *dev)
+{
+	/*
+	 * If no domain has been set through the pcibios callback,
+	 * inherit the default from the bus device.
+	 */
+	if (!dev_get_msi_domain(&dev->dev))
+		dev_set_msi_domain(&dev->dev,
+				   dev_get_msi_domain(&dev->bus->dev));
+}
+
 void pci_device_add(struct pci_dev *dev, struct pci_bus *bus)
 {
 	int ret;
@@ -1547,6 +1576,9 @@ void pci_device_add(struct pci_dev *dev, struct pci_bus *bus)
 	ret = pcibios_add_device(dev);
 	WARN_ON(ret < 0);
 
+	/* Setup MSI irq domain */
+	pci_set_msi_domain(dev);
+
 	/* Notifier could use PCI capabilities */
 	dev->match_driver = false;
 	ret = device_add(&dev->dev);
@@ -1937,6 +1969,7 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
 	b->bridge = get_device(&bridge->dev);
 	device_enable_async_suspend(b->bridge);
 	pci_set_bus_of_node(b);
+	pci_set_bus_msi_domain(b);
 
 	if (!parent)
 		set_dev_node(b->bridge, pcibus_to_node(b));
diff --git a/drivers/tty/Kconfig b/drivers/tty/Kconfig
index b24aa01..50fe279 100644
--- a/drivers/tty/Kconfig
+++ b/drivers/tty/Kconfig
@@ -419,4 +419,10 @@ config DA_CONSOLE
 	help
 	  This enables a console on a Dash channel.
 
+config SBSAUART_TTY
+	tristate "SBSA UART TTY Driver"
+	help
+	  Console and system TTY driver for the SBSA UART which is defined
+	  in the Server Base System Architecture document for ARM64 servers.
+
 endif # TTY
diff --git a/drivers/tty/Makefile b/drivers/tty/Makefile
index 58ad1c0..c3211c0 100644
--- a/drivers/tty/Makefile
+++ b/drivers/tty/Makefile
@@ -29,5 +29,6 @@ obj-$(CONFIG_SYNCLINK)		+= synclink.o
 obj-$(CONFIG_PPC_EPAPR_HV_BYTECHAN) += ehv_bytechan.o
 obj-$(CONFIG_GOLDFISH_TTY)	+= goldfish.o
 obj-$(CONFIG_DA_TTY)		+= metag_da.o
+obj-$(CONFIG_SBSAUART_TTY)	+= sbsauart.o
 
 obj-y += ipwireless/
diff --git a/drivers/tty/sbsauart.c b/drivers/tty/sbsauart.c
new file mode 100644
index 0000000..0f44624
--- /dev/null
+++ b/drivers/tty/sbsauart.c
@@ -0,0 +1,361 @@
+/*
+ * SBSA (Server Base System Architecture) Compatible UART driver
+ *
+ * Copyright (C) 2014 Linaro Ltd
+ *
+ * Author: Graeme Gregory <graeme.gregory@linaro.org>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/acpi.h>
+#include <linux/amba/serial.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/serial_core.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
+
+struct sbsa_tty {
+	struct tty_port port;
+	spinlock_t lock;
+	void __iomem *base;
+	u32 irq;
+	int opencount;
+	struct console console;
+};
+
+static struct tty_driver *sbsa_tty_driver;
+static struct sbsa_tty *sbsa_tty;
+
+#define SBSAUART_CHAR_MASK	0xFF
+
+static void sbsa_raw_putc(struct uart_port *port, int c)
+{
+	while (readw(port->membase + UART01x_FR) & UART01x_FR_TXFF)
+		;
+	writew(c & 0xFF, port->membase + UART01x_DR);
+}
+
+static void sbsa_uart_early_write(struct console *con, const char *buf,
+				  unsigned count)
+{
+	struct earlycon_device *dev = con->data;
+
+	uart_console_write(&dev->port, buf, count, sbsa_raw_putc);
+}
+
+static int __init sbsa_uart_early_console_setup(struct earlycon_device *device,
+						const char *opt)
+{
+	if (!device->port.membase)
+		return -ENODEV;
+
+	device->con->write = sbsa_uart_early_write;
+	return 0;
+}
+EARLYCON_DECLARE(sbsauart, sbsa_uart_early_console_setup);
+
+static void sbsa_tty_do_write(const char *buf, unsigned count)
+{
+	unsigned long irq_flags;
+	struct sbsa_tty *qtty = sbsa_tty;
+	void __iomem *base = qtty->base;
+	unsigned n;
+
+	spin_lock_irqsave(&qtty->lock, irq_flags);
+	for (n = 0; n < count; n++) {
+		while (readw(base + UART01x_FR) & UART01x_FR_TXFF)
+			;
+		writew(buf[n], base + UART01x_DR);
+	}
+	spin_unlock_irqrestore(&qtty->lock, irq_flags);
+}
+
+static void sbsauart_fifo_to_tty(struct sbsa_tty *qtty)
+{
+	void __iomem *base = qtty->base;
+	unsigned int flag, max_count = 32;
+	u16 status, ch;
+
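+	/* bound the drain so the handler can't spin forever on a busy line */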
+	while (max_count--) {
+		status = readw(base + UART01x_FR);
+		if (status & UART01x_FR_RXFE)
+			break;
+
+		/* Take chars from the FIFO and update status */
+		ch = readw(base + UART01x_DR);
+		flag = TTY_NORMAL;
+
+		if (ch & UART011_DR_BE)
+			flag = TTY_BREAK;
+		else if (ch & UART011_DR_PE)
+			flag = TTY_PARITY;
+		else if (ch & UART011_DR_FE)
+			flag = TTY_FRAME;
+		else if (ch & UART011_DR_OE)
+			flag = TTY_OVERRUN;
+
+		ch &= SBSAUART_CHAR_MASK;
+
+		tty_insert_flip_char(&qtty->port, ch, flag);
+	}
+
+	tty_schedule_flip(&qtty->port);
+
+	/* Clear the RX and receive-timeout IRQs */
+	writew(UART011_RXIC | UART011_RTIC, base + UART011_ICR);
+}
+
+static irqreturn_t sbsa_tty_interrupt(int irq, void *dev_id)
+{
+	struct sbsa_tty *qtty = sbsa_tty;
+	unsigned long irq_flags;
+
+	spin_lock_irqsave(&qtty->lock, irq_flags);
+	sbsauart_fifo_to_tty(qtty);
+	spin_unlock_irqrestore(&qtty->lock, irq_flags);
+
+	return IRQ_HANDLED;
+}
+
+static int sbsa_tty_open(struct tty_struct *tty, struct file *filp)
+{
+	struct sbsa_tty *qtty = sbsa_tty;
+
+	return tty_port_open(&qtty->port, tty, filp);
+}
+
+static void sbsa_tty_close(struct tty_struct *tty, struct file *filp)
+{
+	tty_port_close(tty->port, tty, filp);
+}
+
+static void sbsa_tty_hangup(struct tty_struct *tty)
+{
+	tty_port_hangup(tty->port);
+}
+
+static int sbsa_tty_write(struct tty_struct *tty, const unsigned char *buf,
+								int count)
+{
+	sbsa_tty_do_write(buf, count);
+	return count;
+}
+
+static int sbsa_tty_write_room(struct tty_struct *tty)
+{
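+	/* fixed estimate of TX FIFO space; no real accounting is done */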
+	return 32;
+}
+
+static void sbsa_tty_console_write(struct console *co, const char *b,
+								unsigned count)
+{
+	sbsa_tty_do_write(b, count);
+
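+	/* crude LF -> CRLF handling: append a CR only for a trailing LF */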
+	if (b[count - 1] == '\n')
+		sbsa_tty_do_write("\r", 1);
+}
+
+static struct tty_driver *sbsa_tty_console_device(struct console *c,
+								int *index)
+{
+	*index = c->index;
+	return sbsa_tty_driver;
+}
+
+static int sbsa_tty_console_setup(struct console *co, char *options)
+{
+	if ((unsigned)co->index > 0)
+		return -ENODEV;
+	if (sbsa_tty->base == NULL)
+		return -ENODEV;
+	return 0;
+}
+
+static struct tty_port_operations sbsa_port_ops = {
+};
+
+static const struct tty_operations sbsa_tty_ops = {
+	.open = sbsa_tty_open,
+	.close = sbsa_tty_close,
+	.hangup = sbsa_tty_hangup,
+	.write = sbsa_tty_write,
+	.write_room = sbsa_tty_write_room,
+};
+
+static int sbsa_tty_create_driver(void)
+{
+	int ret;
+	struct tty_driver *tty;
+
+	sbsa_tty = kzalloc(sizeof(*sbsa_tty), GFP_KERNEL);
+	if (sbsa_tty == NULL) {
+		ret = -ENOMEM;
+		goto err_alloc_sbsa_tty_failed;
+	}
+	tty = alloc_tty_driver(1);
+	if (tty == NULL) {
+		ret = -ENOMEM;
+		goto err_alloc_tty_driver_failed;
+	}
+	tty->driver_name = "sbsauart";
+	tty->name = "ttySBSA";
+	tty->type = TTY_DRIVER_TYPE_SERIAL;
+	tty->subtype = SERIAL_TYPE_NORMAL;
+	tty->init_termios = tty_std_termios;
+	tty->flags = TTY_DRIVER_RESET_TERMIOS | TTY_DRIVER_REAL_RAW |
+						TTY_DRIVER_DYNAMIC_DEV;
+	tty_set_operations(tty, &sbsa_tty_ops);
+	ret = tty_register_driver(tty);
+	if (ret)
+		goto err_tty_register_driver_failed;
+
+	sbsa_tty_driver = tty;
+	return 0;
+
+err_tty_register_driver_failed:
+	put_tty_driver(tty);
+err_alloc_tty_driver_failed:
+	kfree(sbsa_tty);
+	sbsa_tty = NULL;
+err_alloc_sbsa_tty_failed:
+	return ret;
+}
+
+static void sbsa_tty_delete_driver(void)
+{
+	tty_unregister_driver(sbsa_tty_driver);
+	put_tty_driver(sbsa_tty_driver);
+	sbsa_tty_driver = NULL;
+	kfree(sbsa_tty);
+	sbsa_tty = NULL;
+}
+
+static int sbsa_tty_probe(struct platform_device *pdev)
+{
+	struct sbsa_tty *qtty;
+	int ret = -EINVAL;
+	struct resource *r;
+	struct device *ttydev;
+	void __iomem *base;
+	u32 irq;
+
+	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (r == NULL)
+		return -EINVAL;
+
+	base = ioremap(r->start, resource_size(r));
+	if (base == NULL) {
+		pr_err("sbsa_tty: unable to remap base\n");
+		return -ENOMEM;
+	}
+
+	r = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+	if (r == NULL)
+		goto err_unmap;
+
+	irq = r->start;
+
+	if (pdev->id > 0)
+		goto err_unmap;
+
+	ret = sbsa_tty_create_driver();
+	if (ret)
+		goto err_unmap;
+
+	qtty = sbsa_tty;
+	spin_lock_init(&qtty->lock);
+	tty_port_init(&qtty->port);
+	qtty->port.ops = &sbsa_port_ops;
+	qtty->base = base;
+	qtty->irq = irq;
+
+	/* Clear and Mask all IRQs */
+	writew(0, base + UART011_IMSC);
+	writew(0xFFFF, base + UART011_ICR);
+
+	ret = request_irq(irq, sbsa_tty_interrupt, IRQF_SHARED,
+						"sbsa_tty", pdev);
+	if (ret)
+		goto err_request_irq_failed;
+
+	/* Unmask the RX IRQ */
+	writew(UART011_RXIM | UART011_RTIM, base + UART011_IMSC);
+
+	ttydev = tty_port_register_device(&qtty->port, sbsa_tty_driver,
+							0, &pdev->dev);
+	if (IS_ERR(ttydev)) {
+		ret = PTR_ERR(ttydev);
+		goto err_tty_register_device_failed;
+	}
+
+	strcpy(qtty->console.name, "ttySBSA");
+	qtty->console.write = sbsa_tty_console_write;
+	qtty->console.device = sbsa_tty_console_device;
+	qtty->console.setup = sbsa_tty_console_setup;
+	qtty->console.flags = CON_PRINTBUFFER;
+	/* if no console= on cmdline, make this the console device */
+	if (!console_set_on_cmdline)
+		qtty->console.flags |= CON_CONSDEV;
+	qtty->console.index = pdev->id;
+	register_console(&qtty->console);
+
+	return 0;
+
+err_tty_register_device_failed:
+	free_irq(irq, pdev);
+err_request_irq_failed:
+	sbsa_tty_delete_driver();
+err_unmap:
+	iounmap(base);
+	return ret;
+}
+
+static int sbsa_tty_remove(struct platform_device *pdev)
+{
+	struct sbsa_tty *qtty;
+
+	qtty = sbsa_tty;
+	unregister_console(&qtty->console);
+	tty_unregister_device(sbsa_tty_driver, pdev->id);
+	iounmap(qtty->base);
+	qtty->base = NULL;
+	free_irq(qtty->irq, pdev);
+	sbsa_tty_delete_driver();
+	return 0;
+}
+
+static const struct acpi_device_id sbsa_acpi_match[] = {
+	{ "ARMH0011", 0 },
+	{ }
+};
+
+static struct platform_driver sbsa_tty_platform_driver = {
+	.probe = sbsa_tty_probe,
+	.remove = sbsa_tty_remove,
+	.driver = {
+		.name = "sbsa_tty",
+		.acpi_match_table = ACPI_PTR(sbsa_acpi_match),
+	}
+};
+
+module_platform_driver(sbsa_tty_platform_driver);
+
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
index e601162..3991aa0 100644
--- a/drivers/tty/serial/8250/8250_dw.c
+++ b/drivers/tty/serial/8250/8250_dw.c
@@ -361,9 +361,7 @@ static int dw8250_probe_acpi(struct uart_8250_port *up,
 		return -ENODEV;
 
 	if (!p->uartclk)
-		if (device_property_read_u32(p->dev, "clock-frequency",
-					     &p->uartclk))
-			return -EINVAL;
+		p->uartclk = (unsigned int)id->driver_data;
 
 	p->iotype = UPIO_MEM32;
 	p->serial_in = dw8250_serial_in32;
@@ -587,7 +585,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
 	{ "INT3435", 0 },
 	{ "80860F0A", 0 },
 	{ "8086228A", 0 },
-	{ "APMC0D08", 0},
+	{ "APMC0D08", 50000000},
 	{ },
 };
 MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);
diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
index 8d94c19..04930e2 100644
--- a/drivers/tty/serial/amba-pl011.c
+++ b/drivers/tty/serial/amba-pl011.c
@@ -2238,7 +2238,15 @@ static int pl011_probe(struct amba_device *dev, const struct amba_id *id)
 		}
 	}
 
+	/*
+	 * Temporary hack to avoid needing console= on the command line;
+	 * this can go away once we switch completely to ACPI.
+	 */
+	if (amba_reg.cons && !console_set_on_cmdline && uap->port.line == 0)
+		amba_reg.cons->flags |= CON_CONSDEV;
 	ret = uart_add_one_port(&amba_reg, &uap->port);
+	if (amba_reg.cons && !console_set_on_cmdline && uap->port.line == 0)
+		amba_reg.cons->flags &= ~CON_CONSDEV;
 	if (ret) {
 		amba_ports[i] = NULL;
 		uart_unregister_driver(&amba_reg);
diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
index 08d402b..ac52698 100644
--- a/drivers/usb/host/xhci-plat.c
+++ b/drivers/usb/host/xhci-plat.c
@@ -93,14 +93,13 @@ static int xhci_plat_probe(struct platform_device *pdev)
 			return ret;
 	}
 
-	/* Initialize dma_mask and coherent_dma_mask to 32-bits */
-	ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
-	if (ret)
-		return ret;
-	if (!pdev->dev.dma_mask)
-		pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
-	else
-		dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+	/* Try setting the coherent_dma_mask to 64 bits, then try 32 bits */
+	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret) {
+		ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+		if (ret)
+			return ret;
+	}
 
 	hcd = usb_create_hcd(driver, &pdev->dev, dev_name(&pdev->dev));
 	if (!hcd)
diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 00d115b..cd9b974 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -100,8 +100,7 @@
 #include <linux/virtio_config.h>
 #include <linux/virtio_mmio.h>
 #include <linux/virtio_ring.h>
-
-
+#include <linux/acpi.h>
 
 /* The alignment to use between consumer and producer parts of vring.
  * Currently hardcoded to the page size. */
@@ -635,12 +634,21 @@ static struct of_device_id virtio_mmio_match[] = {
 };
 MODULE_DEVICE_TABLE(of, virtio_mmio_match);
 
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id virtio_mmio_acpi_match[] = {
+	{ "LNRO0005", },
+	{ }
+};
+MODULE_DEVICE_TABLE(acpi, virtio_mmio_acpi_match);
+#endif
+
 static struct platform_driver virtio_mmio_driver = {
 	.probe		= virtio_mmio_probe,
 	.remove		= virtio_mmio_remove,
 	.driver		= {
 		.name	= "virtio-mmio",
 		.of_match_table	= virtio_mmio_match,
+		.acpi_match_table = ACPI_PTR(virtio_mmio_acpi_match),
 	},
 };
 
diff --git a/include/acpi/acnames.h b/include/acpi/acnames.h
index 273de70..b52c0dc 100644
--- a/include/acpi/acnames.h
+++ b/include/acpi/acnames.h
@@ -51,6 +51,7 @@
 #define METHOD_NAME__BBN        "_BBN"
 #define METHOD_NAME__CBA        "_CBA"
 #define METHOD_NAME__CID        "_CID"
+#define METHOD_NAME__CLS        "_CLS"
 #define METHOD_NAME__CRS        "_CRS"
 #define METHOD_NAME__DDN        "_DDN"
 #define METHOD_NAME__HID        "_HID"
diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
index 61e32ec..1fec6f5 100644
--- a/include/acpi/acpi_bus.h
+++ b/include/acpi/acpi_bus.h
@@ -69,6 +69,8 @@ bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, int rev, u64 funcs);
 union acpi_object *acpi_evaluate_dsm(acpi_handle handle, const u8 *uuid,
 			int rev, int func, union acpi_object *argv4);
 
+acpi_status acpi_check_coherency(acpi_handle handle, int *val);
+
 static inline union acpi_object *
 acpi_evaluate_dsm_typed(acpi_handle handle, const u8 *uuid, int rev, int func,
 			union acpi_object *argv4, acpi_object_type type)
diff --git a/include/acpi/acpi_io.h b/include/acpi/acpi_io.h
index 444671e..48f504a 100644
--- a/include/acpi/acpi_io.h
+++ b/include/acpi/acpi_io.h
@@ -2,12 +2,15 @@
 #define _ACPI_IO_H_
 
 #include <linux/io.h>
+#include <asm/acpi.h>
 
+#ifndef acpi_os_ioremap
 static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
 					    acpi_size size)
 {
        return ioremap_cache(phys, size);
 }
+#endif
 
 void __iomem *__init_refok
 acpi_os_map_iomem(acpi_physical_address phys, acpi_size size);
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index ac78910..472d6b8 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -276,6 +276,13 @@
 		VMLINUX_SYMBOL(__end_pci_fixups_suspend_late) = .;	\
 	}								\
 									\
+	/* ACPI quirks */						\
+	.acpi_fixup        : AT(ADDR(.acpi_fixup) - LOAD_OFFSET) {	\
+		VMLINUX_SYMBOL(__start_acpi_mcfg_fixups) = .;		\
+		*(.acpi_fixup_mcfg)					\
+		VMLINUX_SYMBOL(__end_acpi_mcfg_fixups) = .;		\
+	}								\
+									\
 	/* Built-in firmware blobs */					\
 	.builtin_fw        : AT(ADDR(.builtin_fw) - LOAD_OFFSET) {	\
 		VMLINUX_SYMBOL(__start_builtin_fw) = .;			\
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 7c55dd5..d7fcc50 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -318,17 +318,19 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
 #define vgic_initialized(k)	(!!((k)->arch.vgic.nr_cpus))
 #define vgic_ready(k)		((k)->arch.vgic.ready)
 
-int vgic_v2_probe(struct device_node *vgic_node,
-		  const struct vgic_ops **ops,
-		  const struct vgic_params **params);
+int vgic_v2_dt_probe(struct device_node *vgic_node,
+		     const struct vgic_ops **ops,
+		     const struct vgic_params **params);
+int vgic_v2_acpi_probe(const struct vgic_ops **ops,
+		       const struct vgic_params **params);
 #ifdef CONFIG_ARM_GIC_V3
-int vgic_v3_probe(struct device_node *vgic_node,
-		  const struct vgic_ops **ops,
-		  const struct vgic_params **params);
+int vgic_v3_dt_probe(struct device_node *vgic_node,
+		     const struct vgic_ops **ops,
+		     const struct vgic_params **params);
 #else
-static inline int vgic_v3_probe(struct device_node *vgic_node,
-				const struct vgic_ops **ops,
-				const struct vgic_params **params)
+static inline int vgic_v3_dt_probe(struct device_node *vgic_node,
+				   const struct vgic_ops **ops,
+				   const struct vgic_params **params)
 {
 	return -ENODEV;
 }
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 24c7aa8..23c807d 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -73,6 +73,7 @@ enum acpi_irq_model_id {
 	ACPI_IRQ_MODEL_IOAPIC,
 	ACPI_IRQ_MODEL_IOSAPIC,
 	ACPI_IRQ_MODEL_PLATFORM,
+	ACPI_IRQ_MODEL_GIC,
 	ACPI_IRQ_MODEL_COUNT
 };
 
@@ -166,6 +167,16 @@ extern u32 acpi_irq_not_handled;
 extern int sbf_port;
 extern unsigned long acpi_realmode_flags;
 
+static inline void acpi_irq_init(void)
+{
+	/*
+	 * Hardcode ACPI IRQ chip initialization to GICv2 for now.
+	 * Proper irqchip infrastructure will be implemented along with
+	 * incoming GICv2m|GICv3|ITS bits.
+	 */
+	acpi_gic_init();
+}
+
 int acpi_register_gsi (struct device *dev, u32 gsi, int triggering, int polarity);
 int acpi_gsi_to_irq (u32 gsi, unsigned int *irq);
 int acpi_isa_irq_to_gsi (unsigned isa_irq, u32 *gsi);
@@ -439,6 +450,10 @@ const struct acpi_device_id *acpi_match_device(const struct acpi_device_id *ids,
 
 extern bool acpi_driver_match_device(struct device *dev,
 				     const struct device_driver *drv);
+
+bool acpi_match_device_cls(const struct acpi_device_cls *dev_cls,
+			   const struct device *dev);
+
 int acpi_device_uevent_modalias(struct device *, struct kobj_uevent_env *);
 int acpi_device_modalias(struct device *, char *, int);
 void acpi_walk_dep_device_list(acpi_handle handle);
@@ -515,6 +530,11 @@ static inline int acpi_table_parse(char *id,
 	return -ENODEV;
 }
 
+static inline void acpi_irq_init(void)
+{
+	return;
+}
+
 static inline int acpi_nvs_register(__u64 start, __u64 size)
 {
 	return 0;
@@ -534,6 +554,12 @@ static inline const struct acpi_device_id *acpi_match_device(
 	return NULL;
 }
 
+static inline bool acpi_match_device_cls(const struct acpi_device_cls *dev_cls,
+					 const struct device *dev)
+{
+	return false;
+}
+
 static inline bool acpi_driver_match_device(struct device *dev,
 					    const struct device_driver *drv)
 {
diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
index 9c78d15..2b2e1f8 100644
--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -244,4 +244,10 @@ extern void clocksource_of_init(void);
 static inline void clocksource_of_init(void) {}
 #endif
 
+#ifdef CONFIG_ACPI
+void acpi_generic_timer_init(void);
+#else
+static inline void acpi_generic_timer_init(void) { }
+#endif
+
 #endif /* _LINUX_CLOCKSOURCE_H */
diff --git a/include/linux/device.h b/include/linux/device.h
index 0eb8ee2..4972a5c 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -237,6 +237,7 @@ struct device_driver {
 
 	const struct of_device_id	*of_match_table;
 	const struct acpi_device_id	*acpi_match_table;
+	const struct acpi_device_cls	*acpi_match_cls;
 
 	int (*probe) (struct device *dev);
 	int (*remove) (struct device *dev);
@@ -690,6 +691,7 @@ struct acpi_dev_node {
  * 		along with subsystem-level and driver-level callbacks.
  * @pins:	For device pin management.
  *		See Documentation/pinctrl.txt for details.
+ * @msi_domain: The generic MSI domain this device is using.
  * @numa_node:	NUMA node this device is close to.
  * @dma_mask:	Dma mask (if dma'ble device).
  * @coherent_dma_mask: Like dma_mask, but for alloc_coherent mapping as not all
@@ -750,6 +752,9 @@ struct device {
 	struct dev_pm_info	power;
 	struct dev_pm_domain	*pm_domain;
 
+#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
+	struct irq_domain	*msi_domain; /* MSI domain device uses */
+#endif
 #ifdef CONFIG_PINCTRL
 	struct dev_pin_info	*pins;
 #endif
@@ -837,6 +842,22 @@ static inline void set_dev_node(struct device *dev, int node)
 }
 #endif
 
+static inline struct irq_domain *dev_get_msi_domain(const struct device *dev)
+{
+#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
+	return dev->msi_domain;
+#else
+	return NULL;
+#endif
+}
+
+static inline void dev_set_msi_domain(struct device *dev, struct irq_domain *d)
+{
+#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
+	dev->msi_domain = d;
+#endif
+}
+
 static inline void *dev_get_drvdata(const struct device *dev)
 {
 	return dev->driver_data;
diff --git a/include/linux/dmi.h b/include/linux/dmi.h
index f820f0a..8e1a28d 100644
--- a/include/linux/dmi.h
+++ b/include/linux/dmi.h
@@ -109,6 +109,7 @@ extern int dmi_walk(void (*decode)(const struct dmi_header *, void *),
 	void *private_data);
 extern bool dmi_match(enum dmi_field f, const char *str);
 extern void dmi_memdev_name(u16 handle, const char **bank, const char **device);
+const u8 *dmi_get_smbios_entry_area(int *size);
 
 #else
 
@@ -140,6 +141,8 @@ static inline void dmi_memdev_name(u16 handle, const char **bank,
 		const char **device) { }
 static inline const struct dmi_system_id *
 	dmi_first_match(const struct dmi_system_id *list) { return NULL; }
+static inline const u8 *dmi_get_smbios_entry_area(int *size)
+	{ return NULL; }
 
 #endif
 
diff --git a/include/linux/irqchip/arm-gic-acpi.h b/include/linux/irqchip/arm-gic-acpi.h
new file mode 100644
index 0000000..cc7753d
--- /dev/null
+++ b/include/linux/irqchip/arm-gic-acpi.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2014, Linaro Ltd.
+ *	Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef ARM_GIC_ACPI_H_
+#define ARM_GIC_ACPI_H_
+
+#ifdef CONFIG_ACPI
+
+/*
+ * The MADT does not provide memory region sizes (the FDT does), so
+ * hard-code them here; the sizes are fixed by the GIC architecture
+ * specification anyway.
+ */
+#define ACPI_GICV2_DIST_MEM_SIZE	(SZ_4K)
+#define ACPI_GIC_CPU_IF_MEM_SIZE	(SZ_8K)
+
+struct acpi_table_header;
+
+void acpi_gic_init(void);
+int gic_v2_acpi_init(struct acpi_table_header *table, struct irq_domain **domain);
+#else
+static inline void acpi_gic_init(void) { }
+#endif
+
+#endif /* ARM_GIC_ACPI_H_ */
diff --git a/include/linux/irqchip/arm-gic.h b/include/linux/irqchip/arm-gic.h
index 71d706d..0b45062 100644
--- a/include/linux/irqchip/arm-gic.h
+++ b/include/linux/irqchip/arm-gic.h
@@ -55,6 +55,8 @@
 					(GICD_INT_DEF_PRI << 8) |\
 					GICD_INT_DEF_PRI)
 
+#define GIC_DIST_SOFTINT_NSATT		0x8000
+
 #define GICH_HCR			0x0
 #define GICH_VTR			0x4
 #define GICH_VMCR			0x8
@@ -110,6 +112,11 @@ static inline void gic_init(unsigned int nr, int start,
 
 int gicv2m_of_init(struct device_node *node, struct irq_domain *parent);
 
+struct acpi_table_header;
+
+int gicv2m_acpi_init(struct acpi_table_header *table,
+		     struct irq_domain *parent);
+
 void gic_send_sgi(unsigned int cpu_id, unsigned int irq);
 int gic_get_cpu_id(unsigned int cpu);
 void gic_migrate_target(unsigned int new_cpu_id);
diff --git a/include/linux/mmconfig.h b/include/linux/mmconfig.h
new file mode 100644
index 0000000..4360e9a
--- /dev/null
+++ b/include/linux/mmconfig.h
@@ -0,0 +1,87 @@
+#ifndef __MMCONFIG_H
+#define __MMCONFIG_H
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/acpi.h>
+
+#ifdef CONFIG_PCI_MMCONFIG
+/* "PCI MMCONFIG %04x [bus %02x-%02x]" */
+#define PCI_MMCFG_RESOURCE_NAME_LEN (22 + 4 + 2 + 2)
+
+struct acpi_pci_root;
+struct pci_mmcfg_region;
+
+typedef int (*acpi_mcfg_fixup_t)(struct acpi_pci_root *root,
+				 struct pci_mmcfg_region *cfg);
+
+struct pci_mmcfg_region {
+	struct list_head list;
+	struct resource res;
+	int (*read)(struct pci_mmcfg_region *cfg, unsigned int bus,
+		    unsigned int devfn, int reg, int len, u32 *value);
+	int (*write)(struct pci_mmcfg_region *cfg, unsigned int bus,
+		     unsigned int devfn, int reg, int len, u32 value);
+	acpi_mcfg_fixup_t fixup;
+	void *data;
+	u64 address;
+	char __iomem *virt;
+	u16 segment;
+	u8 start_bus;
+	u8 end_bus;
+	char name[PCI_MMCFG_RESOURCE_NAME_LEN];
+};
+
+struct acpi_mcfg_fixup {
+	char oem_id[7];
+	char oem_table_id[9];
+	acpi_mcfg_fixup_t hook;
+};
+
+/* Designate a routine to fix up buggy MCFG */
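+/* e.g. DECLARE_ACPI_MCFG_FIXUP("APM   ", "XGENE   ", xgene_mcfg_fixup); */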
+#define DECLARE_ACPI_MCFG_FIXUP(oem_id, table_id, hook)	\
+	static const struct acpi_mcfg_fixup __acpi_fixup_##hook __used	\
+	__attribute__((__section__(".acpi_fixup_mcfg"), aligned((sizeof(void *))))) \
+	= { {oem_id}, {table_id}, hook };
+
+void pci_mmcfg_early_init(void);
+void pci_mmcfg_late_init(void);
+struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, int bus);
+
+int pci_parse_mcfg(struct acpi_table_header *header);
+struct pci_mmcfg_region *pci_mmconfig_alloc(int segment, int start,
+						   int end, u64 addr);
+int pci_mmconfig_inject(struct pci_mmcfg_region *cfg);
+struct pci_mmcfg_region *pci_mmconfig_add(int segment, int start,
+						 int end, u64 addr);
+void list_add_sorted(struct pci_mmcfg_region *new);
+int acpi_mcfg_check_entry(struct acpi_table_mcfg *mcfg,
+			  struct acpi_mcfg_allocation *cfg);
+void free_all_mmcfg(void);
+int pci_mmconfig_insert(struct device *dev, u16 seg, u8 start, u8 end,
+		       phys_addr_t addr);
+int pci_mmconfig_delete(u16 seg, u8 start, u8 end);
+
+/* Arch specific calls */
+int pci_mmcfg_arch_init(void);
+void pci_mmcfg_arch_free(void);
+int pci_mmcfg_arch_map(struct pci_mmcfg_region *cfg);
+void pci_mmcfg_arch_unmap(struct pci_mmcfg_region *cfg);
+int pci_mmcfg_read(unsigned int seg, unsigned int bus,
+		   unsigned int devfn, int reg, int len, u32 *value);
+int pci_mmcfg_write(unsigned int seg, unsigned int bus,
+		    unsigned int devfn, int reg, int len, u32 value);
+
+extern struct list_head pci_mmcfg_list;
+
+#define PCI_MMCFG_BUS_OFFSET(bus)      ((bus) << 20)
+#else /* CONFIG_PCI_MMCONFIG */
+static inline void pci_mmcfg_late_init(void) { }
+static inline void pci_mmcfg_early_init(void) { }
+static inline struct pci_mmcfg_region *pci_mmconfig_lookup(int seg, int bus)
+{ return NULL; }
+#endif /* CONFIG_PCI_MMCONFIG */
+
+#endif  /* __KERNEL__ */
+#endif  /* __MMCONFIG_H */
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index 2e75ab0..9da3162 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -191,6 +191,12 @@ struct acpi_device_id {
 	kernel_ulong_t driver_data;
 };
 
+struct acpi_device_cls {
+	kernel_ulong_t base_class;
+	kernel_ulong_t sub_class;
+	kernel_ulong_t prog_interface;
+};
+
 #define PNP_ID_LEN	8
 #define PNP_MAX_DEVICES	8
 
diff --git a/include/linux/msi.h b/include/linux/msi.h
index 8ac4a68..01b648f 100644
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -108,9 +108,6 @@ struct msi_controller {
 	struct device *dev;
 	struct device_node *of_node;
 	struct list_head list;
-#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
-	struct irq_domain *domain;
-#endif
 
 	int (*setup_irq)(struct msi_controller *chip, struct pci_dev *dev,
 			 struct msi_desc *desc);
@@ -188,6 +185,7 @@ struct msi_domain_info {
 	void			*handler_data;
 	const char		*handler_name;
 	void			*data;
+	u32			acpi_msi_frame_id;
 };
 
 /* Flags for msi_domain_info */
diff --git a/include/linux/pci-acpi.h b/include/linux/pci-acpi.h
index 24c7728..3e95ec8 100644
--- a/include/linux/pci-acpi.h
+++ b/include/linux/pci-acpi.h
@@ -77,9 +77,12 @@ static inline void acpiphp_remove_slots(struct pci_bus *bus) { }
 static inline void acpiphp_check_host_bridge(struct acpi_device *adev) { }
 #endif
 
+void pci_acpi_set_phb_msi_domain(struct pci_bus *bus);
+
 #else	/* CONFIG_ACPI */
 static inline void acpi_pci_add_bus(struct pci_bus *bus) { }
 static inline void acpi_pci_remove_bus(struct pci_bus *bus) { }
static inline void pci_acpi_set_phb_msi_domain(struct pci_bus *bus) { }
 #endif	/* CONFIG_ACPI */
 
 #ifdef CONFIG_ACPI_APEI
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 211e9da..36e5b57 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1651,19 +1651,12 @@ int pcibios_set_pcie_reset_state(struct pci_dev *dev,
 int pcibios_add_device(struct pci_dev *dev);
 void pcibios_release_device(struct pci_dev *dev);
 void pcibios_penalize_isa_irq(int irq, int active);
+void pcibios_set_phb_msi_domain(struct pci_bus *bus);
 
 #ifdef CONFIG_HIBERNATE_CALLBACKS
 extern struct dev_pm_ops pcibios_pm_ops;
 #endif
 
-#ifdef CONFIG_PCI_MMCONFIG
-void __init pci_mmcfg_early_init(void);
-void __init pci_mmcfg_late_init(void);
-#else
-static inline void pci_mmcfg_early_init(void) { }
-static inline void pci_mmcfg_late_init(void) { }
-#endif
-
 int pci_ext_cfg_avail(void);
 
 void __iomem *pci_ioremap_bar(struct pci_dev *pdev, int bar);
@@ -1838,6 +1831,7 @@ void pci_set_of_node(struct pci_dev *dev);
 void pci_release_of_node(struct pci_dev *dev);
 void pci_set_bus_of_node(struct pci_bus *bus);
 void pci_release_bus_of_node(struct pci_bus *bus);
+void pci_set_phb_of_msi_domain(struct pci_bus *bus);
 
 /* Arch may override this (weak) */
 struct device_node *pcibios_get_phb_of_node(struct pci_bus *bus);
@@ -1858,6 +1852,7 @@ static inline void pci_set_of_node(struct pci_dev *dev) { }
 static inline void pci_release_of_node(struct pci_dev *dev) { }
 static inline void pci_set_bus_of_node(struct pci_bus *bus) { }
 static inline void pci_release_bus_of_node(struct pci_bus *bus) { }
+static inline void pci_set_phb_of_msi_domain(struct pci_bus *bus) {}
 static inline struct device_node *
 pci_device_to_OF_node(const struct pci_dev *pdev) { return NULL; }
 #endif  /* CONFIG_OF */
diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
index 3e18163..2c43e96 100644
--- a/kernel/irq/msi.c
+++ b/kernel/irq/msi.c
@@ -9,11 +9,13 @@
  * This file contains common code to support Message Signalled Interrupt for
  * PCI compatible and non PCI compatible devices.
  */
+#include <linux/acpi.h>
 #include <linux/types.h>
 #include <linux/device.h>
 #include <linux/irq.h>
 #include <linux/irqdomain.h>
 #include <linux/msi.h>
+#include <linux/of.h>
 
 /* Temparory solution for building, will be removed later */
 #include <linux/pci.h>
@@ -124,11 +126,33 @@ static void msi_domain_free(struct irq_domain *domain, unsigned int virq,
 	irq_domain_free_irqs_top(domain, virq, nr_irqs);
 }
 
+/*
+ * TODO: SURAVEE: This is a hack to match the msi_frame_id.
+ * A better solution is being discussed with Marc.
+ */
+static int msi_domain_match(struct irq_domain *d, struct device_node *node)
+{
+	struct msi_domain_info *info;
+
+	if (acpi_disabled)
+		return (d->of_node != NULL) && (d->of_node == node);
+
+	info = msi_get_domain_info(d);
+	if (!info || !(node->data))
+		return 0;
+
+	if (info->acpi_msi_frame_id == *((u32 *)(node->data)))
+		return 1;
+
+	return 0;
+}
+
 static struct irq_domain_ops msi_domain_ops = {
 	.alloc		= msi_domain_alloc,
 	.free		= msi_domain_free,
 	.activate	= msi_domain_activate,
 	.deactivate	= msi_domain_deactivate,
+	.match		= msi_domain_match,
 };
 
 #ifdef GENERIC_MSI_DOMAIN_OPS
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 6e54f35..691e868 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -21,9 +21,11 @@
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
 #include <linux/interrupt.h>
+#include <linux/acpi.h>
 
 #include <clocksource/arm_arch_timer.h>
 #include <asm/arch_timer.h>
+#include <asm/acpi.h>
 
 #include <kvm/arm_vgic.h>
 #include <kvm/arm_arch_timer.h>
@@ -247,60 +249,91 @@ static const struct of_device_id arch_timer_of_match[] = {
 	{},
 };
 
-int kvm_timer_hyp_init(void)
+static int kvm_timer_ppi_parse_dt(unsigned int *ppi)
 {
 	struct device_node *np;
-	unsigned int ppi;
-	int err;
-
-	timecounter = arch_timer_get_timecounter();
-	if (!timecounter)
-		return -ENODEV;
 
 	np = of_find_matching_node(NULL, arch_timer_of_match);
 	if (!np) {
-		kvm_err("kvm_arch_timer: can't find DT node\n");
 		return -ENODEV;
 	}
 
-	ppi = irq_of_parse_and_map(np, 2);
-	if (!ppi) {
-		kvm_err("kvm_arch_timer: no virtual timer interrupt\n");
-		err = -EINVAL;
-		goto out;
+	*ppi = irq_of_parse_and_map(np, 2);
+	if (*ppi == 0) {
+		of_node_put(np);
+		return -EINVAL;
 	}
 
-	err = request_percpu_irq(ppi, kvm_arch_timer_handler,
-				 "kvm guest timer", kvm_get_running_vcpus());
-	if (err) {
-		kvm_err("kvm_arch_timer: can't request interrupt %d (%d)\n",
-			ppi, err);
-		goto out;
-	}
+	return 0;
+}
 
-	host_vtimer_irq = ppi;
+extern int arch_timer_ppi[];
 
-	err = __register_cpu_notifier(&kvm_timer_cpu_nb);
-	if (err) {
-		kvm_err("Cannot register timer CPU notifier\n");
-		goto out_free;
-	}
+static int kvm_timer_ppi_parse_acpi(unsigned int *ppi)
 
-	wqueue = create_singlethread_workqueue("kvm_arch_timer");
-	if (!wqueue) {
-		err = -ENOMEM;
-		goto out_free;
-	}
+{
+	/* arch_timer_ppi[2] is the virtual timer PPI (VIRT_PPI) */
+	*ppi = arch_timer_ppi[2];
 
-	kvm_info("%s IRQ%d\n", np->name, ppi);
-	on_each_cpu(kvm_timer_init_interrupt, NULL, 1);
+	if (*ppi == 0)
+		return -EINVAL;
+
+	return 0;
+}
+
+int kvm_timer_hyp_init(void)
+{
+	unsigned int ppi;
+	int err;
+
+	timecounter = arch_timer_get_timecounter();
+	if (!timecounter)
+		return -ENODEV;
+
+	/* PPI DT parsing */
+	err = kvm_timer_ppi_parse_dt(&ppi);
 
-	goto out;
+	/* if DT parsing fails, try ACPI next */
+	if (err && !acpi_disabled)
+		err = kvm_timer_ppi_parse_acpi(&ppi);
+
+	if (err) {
+		kvm_err("kvm_timer_hyp_init: can't find virtual timer info or "
+			"config virtual timer interrupt\n");
+		return err;
+	}
+
+	/* configure IRQ handler */
+	err = request_percpu_irq(ppi, kvm_arch_timer_handler,
+				 "kvm guest timer", kvm_get_running_vcpus());
+	if (err) {
+		kvm_err("kvm_arch_timer: can't request interrupt %d (%d)\n",
+			ppi, err);
+		goto out;
+	}
+
+	host_vtimer_irq = ppi;
+
+	err = __register_cpu_notifier(&kvm_timer_cpu_nb);
+	if (err) {
+		kvm_err("Cannot register timer CPU notifier\n");
+		goto out_free;
+	}
+
+	wqueue = create_singlethread_workqueue("kvm_arch_timer");
+	if (!wqueue) {
+		err = -ENOMEM;
+		goto out_free;
+	}
+
+	kvm_info("timer IRQ%d\n", ppi);
+	on_each_cpu(kvm_timer_init_interrupt, NULL, 1);
+
+	goto out;
 out_free:
-	free_percpu_irq(ppi, kvm_get_running_vcpus());
+	free_percpu_irq(ppi, kvm_get_running_vcpus());
 out:
-	of_node_put(np);
-	return err;
+	return err;
 }
 
 void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/vgic-v2.c b/virt/kvm/arm/vgic-v2.c
index a0a7b5d..964363f 100644
--- a/virt/kvm/arm/vgic-v2.c
+++ b/virt/kvm/arm/vgic-v2.c
@@ -19,6 +19,7 @@
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
 #include <linux/interrupt.h>
+#include <linux/acpi.h>
 #include <linux/io.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
@@ -26,6 +27,7 @@
 
 #include <linux/irqchip/arm-gic.h>
 
+#include <asm/acpi.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_mmu.h>
@@ -159,7 +161,7 @@ static const struct vgic_ops vgic_v2_ops = {
 static struct vgic_params vgic_v2_params;
 
 /**
- * vgic_v2_probe - probe for a GICv2 compatible interrupt controller in DT
+ * vgic_v2_dt_probe - probe for a GICv2 compatible interrupt controller in DT
  * @node:	pointer to the DT node
  * @ops: 	address of a pointer to the GICv2 operations
  * @params:	address of a pointer to HW-specific parameters
@@ -168,7 +170,7 @@ static struct vgic_params vgic_v2_params;
  * in *ops and the HW parameters in *params. Returns an error code
  * otherwise.
  */
-int vgic_v2_probe(struct device_node *vgic_node,
+int vgic_v2_dt_probe(struct device_node *vgic_node,
 		  const struct vgic_ops **ops,
 		  const struct vgic_params **params)
 {
@@ -222,11 +224,22 @@ int vgic_v2_probe(struct device_node *vgic_node,
 	}
 
 	if (!PAGE_ALIGNED(resource_size(&vcpu_res))) {
+#if 0
 		kvm_err("GICV size 0x%llx not a multiple of page size 0x%lx\n",
 			(unsigned long long)resource_size(&vcpu_res),
 			PAGE_SIZE);
 		ret = -ENXIO;
 		goto out_unmap;
+#else
+		/*
+		 * This check fails for arm64 with 64K pages and certain
+		 * firmware; ignore it until the firmware is fixed.
+		 */
+		kvm_info("GICV size 0x%llx not a multiple of page size 0x%lx\n",
+			(unsigned long long)resource_size(&vcpu_res),
+			PAGE_SIZE);
+		kvm_info("Update DT to assign GICV a multiple of kernel page size \n");
+#endif
 	}
 
 	vgic->can_emulate_gicv2 = true;
@@ -249,3 +262,73 @@ out:
 	of_node_put(vgic_node);
 	return ret;
 }
+
+static struct acpi_madt_generic_interrupt *vgic_acpi;
+static void gic_get_acpi_header(struct acpi_subtable_header *header)
+{
+	vgic_acpi = (struct acpi_madt_generic_interrupt *)header;
+}
+
+int vgic_v2_acpi_probe(const struct vgic_ops **ops,
+		       const struct vgic_params **params)
+{
+	struct vgic_params *vgic = &vgic_v2_params;
+	int irq_mode, ret;
+
+	/* MADT table */
+	ret = acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_INTERRUPT,
+			(acpi_tbl_entry_handler)gic_get_acpi_header, 0);
+	if (ret <= 0) {
+		pr_err("Failed to get MADT VGIC CPU entry\n");
+		ret = -ENODEV;
+		goto out;
+	}
+
+	/* IRQ trigger mode */
+	irq_mode = (vgic_acpi->flags & ACPI_MADT_VGIC_IRQ_MODE) ?
+		ACPI_EDGE_SENSITIVE : ACPI_LEVEL_SENSITIVE;
+	/* According to the GIC-400 manual, all PPIs are active-low and
+	 * level-sensitive, so we register the IRQ as active-low.
+	 */
+	vgic->maint_irq = acpi_register_gsi(NULL, vgic_acpi->vgic_interrupt,
+					    irq_mode, ACPI_ACTIVE_LOW);
+	if (vgic->maint_irq <= 0) {
+		pr_err("Cannot register VGIC ACPI maintenance irq\n");
+		ret = -ENXIO;
+		goto out;
+	}
+
+	/* GICH resource */
+	vgic->vctrl_base = ioremap(vgic_acpi->gich_base_address, SZ_8K);
+	if (!vgic->vctrl_base) {
+		pr_err("cannot ioremap GICH memory\n");
+		ret = -ENOMEM;
+		goto out;
+	}
+
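+	/* GICH_VTR bits [5:0] hold (number of list registers - 1) */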
+	vgic->nr_lr = readl_relaxed(vgic->vctrl_base + GICH_VTR);
+	vgic->nr_lr = (vgic->nr_lr & 0x3f) + 1;
+
+	ret = create_hyp_io_mappings(vgic->vctrl_base,
+				     vgic->vctrl_base + SZ_8K,
+				     vgic_acpi->gich_base_address);
+	if (ret) {
+		kvm_err("Cannot map GICH into hyp\n");
+		goto out;
+	}
+
+	vgic->vcpu_base = vgic_acpi->gicv_base_address;
+
+	kvm_info("GICH base=0x%llx, GICV base=0x%llx, IRQ=%d\n",
+		 (unsigned long long)vgic_acpi->gich_base_address,
+		 (unsigned long long)vgic_acpi->gicv_base_address,
+		 vgic->maint_irq);
+
+	vgic->type = VGIC_V2;
+	*ops = &vgic_v2_ops;
+	*params = vgic;
+
+out:
+	return ret;
+}
diff --git a/virt/kvm/arm/vgic-v3.c b/virt/kvm/arm/vgic-v3.c
index 3a62d8a..6f27fe1 100644
--- a/virt/kvm/arm/vgic-v3.c
+++ b/virt/kvm/arm/vgic-v3.c
@@ -203,7 +203,7 @@ static const struct vgic_ops vgic_v3_ops = {
 static struct vgic_params vgic_v3_params;
 
 /**
- * vgic_v3_probe - probe for a GICv3 compatible interrupt controller in DT
+ * vgic_v3_dt_probe - probe for a GICv3 compatible interrupt controller in DT
  * @node:	pointer to the DT node
  * @ops: 	address of a pointer to the GICv3 operations
  * @params:	address of a pointer to HW-specific parameters
@@ -212,9 +212,9 @@ static struct vgic_params vgic_v3_params;
  * in *ops and the HW parameters in *params. Returns an error code
  * otherwise.
  */
-int vgic_v3_probe(struct device_node *vgic_node,
-		  const struct vgic_ops **ops,
-		  const struct vgic_params **params)
+int vgic_v3_dt_probe(struct device_node *vgic_node,
+		     const struct vgic_ops **ops,
+		     const struct vgic_params **params)
 {
 	int ret = 0;
 	u32 gicv_idx;
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 0cc6ab6..c3fb126 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -25,9 +25,11 @@
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 #include <linux/uaccess.h>
+#include <linux/acpi.h>
 
 #include <linux/irqchip/arm-gic.h>
 
+#include <asm/acpi.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_mmu.h>
@@ -1865,8 +1867,8 @@ static struct notifier_block vgic_cpu_nb = {
 };
 
 static const struct of_device_id vgic_ids[] = {
-	{ .compatible = "arm,cortex-a15-gic", .data = vgic_v2_probe, },
-	{ .compatible = "arm,gic-v3", .data = vgic_v3_probe, },
+	{ .compatible = "arm,cortex-a15-gic", .data = vgic_v2_dt_probe, },
+	{ .compatible = "arm,gic-v3", .data = vgic_v3_dt_probe, },
 	{},
 };
 
@@ -1876,20 +1878,26 @@ int kvm_vgic_hyp_init(void)
 	const int (*vgic_probe)(struct device_node *,const struct vgic_ops **,
 				const struct vgic_params **);
 	struct device_node *vgic_node;
-	int ret;
-
-	vgic_node = of_find_matching_node_and_match(NULL,
-						    vgic_ids, &matched_id);
-	if (!vgic_node) {
-		kvm_err("error: no compatible GIC node found\n");
-		return -ENODEV;
+	int ret = -ENODEV;
+
+	/* probe VGIC */
+	vgic_node = of_find_matching_node_and_match(NULL, vgic_ids,
+						    &matched_id);
+	if (vgic_node) {
+		/* probe VGIC in DT */
+		vgic_probe = matched_id->data;
+		ret = vgic_probe(vgic_node, &vgic_ops, &vgic);
+	} else if (!acpi_disabled) {
+		/* probe VGIC in ACPI */
+		ret = vgic_v2_acpi_probe(&vgic_ops, &vgic);
 	}
 
-	vgic_probe = matched_id->data;
-	ret = vgic_probe(vgic_node, &vgic_ops, &vgic);
-	if (ret)
+	if (ret) {
+		kvm_err("error: no compatible GIC info found\n");
 		return ret;
+	}
 
+	/* configuration */
 	ret = request_percpu_irq(vgic->maint_irq, vgic_maintenance_handler,
 				 "vgic", kvm_get_running_vcpus());
 	if (ret) {