Zach Hilman (DarkLordZach) · United States of America · zachhilman.dev · Emulator Developer and Mathematician

DarkLordZach/yuzu 5

Nintendo Switch Emulator

yuzu-emu/build-environments 3

Dockerfile entries used for building yuzu binaries.

DarkLordZach/build-environments 0

Dockerfile entries used for building yuzu binaries.

DarkLordZach/drogon 0

Drogon: A C++14/17 based HTTP web application framework running on Linux/macOS/Unix/Windows

DarkLordZach/libsquash 0

Portable, user-land SquashFS that can be easily linked and embedded within your application.

DarkLordZach/libyuzutest 0

A library and example for the yuzu-tester utility.

DarkLordZach/libzip 0

A C library for reading, creating, and modifying zip archives.

DarkLordZach/NXEchoArguments 0

Basic Switch homebrew that prints its name and all console args to the console.

DarkLordZach/ssh-socks-tunnel-docker 0

A basic SSH server configuration to set up a SOCKS proxy.

push event DarkLordZach/drogon

Zach Hilman

commit sha 9dfb7ae9c5087dc3b61ab4a1a075507a5083bfa3

HttpUtils: Add messages for new status codes

Fix missing move

view details

push time in 13 days

push event DarkLordZach/drogon

Zach Hilman

commit sha 9f4fa68380fb1368f46e6bb0b2b77277fe45702f

HttpAppFrameworkImpl: Move definition of defaultErrorHandler to cpp file

view details

push time in 13 days

PR opened an-tao/drogon

Add additional HttpStatusCodes and implement a custom error handler

This adds:

  • Various lesser-used HTTP status codes to the HttpStatusCode enumeration.
  • Getter and setter for customErrorHandler, which is a function that generates an HttpResponsePtr given an HttpStatusCode. This is intended to be similar to the custom404 functionality, allowing a user to override the layout of the error page.

The custom404 functions were kept, even though this supersedes them, to avoid breaking existing code.

Finally, all of the Routers were updated to use the error handler for their 405/403 responses.

If no custom error handler is set, a default is used. The default behavior is identical to what exists now: an empty body with the status code set.
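A minimal usage sketch of the handler this PR describes, assuming it is registered through the app() singleton via the new setCustomErrorHandler (a sketch only; the listener setup and error-page layout are illustrative):

#include <drogon/drogon.h>

int main() {
    // Override the default empty-body error page with a custom layout.
    // The handler signature follows the PR description:
    // HttpStatusCode in, HttpResponsePtr out.
    drogon::app().setCustomErrorHandler([](drogon::HttpStatusCode code) {
        auto resp = drogon::HttpResponse::newHttpResponse();
        resp->setStatusCode(code);
        resp->setBody("<h1>Error " + std::to_string(static_cast<int>(code)) + "</h1>");
        return resp;
    });
    drogon::app().addListener("0.0.0.0", 8080).run();
}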

+68 -27

0 comments

8 changed files

pr created time in 13 days

push event DarkLordZach/drogon

Zach Hilman

commit sha 9a7668ac99f9f5199c295070c69b6921511b6c6e

HttpTypes: Add additional HTTP status codes

view details

Zach Hilman

commit sha d7851295e42411a1be449573b205d2d92a709b65

HttpAppFramework: Add methods for custom error handler

view details

Zach Hilman

commit sha 429953a6f4539275862abf5dba4df4ba777c9bef

HttpAppFrameworkImpl: Implement get/setCustomErrorHandler methods

view details

Zach Hilman

commit sha 2712dcbde9d9e616d5a9b7b46dffc5760b8e9b20

Routers: Use custom error handler where appropriate

view details

push time in 13 days

fork DarkLordZach/drogon

Drogon: A C++14/17 based HTTP web application framework running on Linux/macOS/Unix/Windows

fork in 13 days

issue comment an-tao/drogon

Add method to get DbClient connection status to HttpAppFramework

Thank you so much! This is exactly what I needed!

DarkLordZach

comment created time in 14 days

issue comment an-tao/drogon

Add method to get DbClient connection status to HttpAppFramework

Thanks! That looks good. But it seems I would still have to maintain a separate list of db client names elsewhere? I don't know how much work it would be, but is there a way to get a list of db client names from the JSON config?

DarkLordZach

comment created time in 15 days

issue opened an-tao/drogon

Add method to get DbClient connection status to HttpAppFramework

Is your feature request related to a problem? Please describe. I use drogon in a server-side application that runs on Kubernetes. Sometimes, the Kubernetes scheduler causes the web container (running drogon) to start before the postgres container. Usually this doesn't cause an issue, but sometimes it requires deletion of the web pod to force a reschedule. Kubernetes has a built-in API for detecting and automatically restarting pods in situations like this, called liveness probes, but drogon lacks the APIs necessary to make full use of them. Mainly, I would want to tie the liveness probe to DB connection status, so that if the DB doesn't connect, k8s will reschedule the pod and try to fix it.

Describe the solution you'd like A function in HttpAppFramework that returns true if all DbClients in the configuration file have successfully connected, and false otherwise.

Describe alternatives you've considered I considered adding a customConfig element which would be an array of db client names to check, but the DbClient interface doesn't have any method of getting status either.
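As a sketch of how the requested function would be consumed for the liveness probe, assuming a hypothetical areAllDbClientsAvailable() on HttpAppFramework (the name is illustrative, not an existing drogon API):

#include <drogon/drogon.h>

int main() {
    // /healthz is the endpoint a k8s livenessProbe would hit:
    // 200 while all DbClients are connected, 503 to trigger a pod restart.
    drogon::app().registerHandler(
        "/healthz",
        [](const drogon::HttpRequestPtr&,
           std::function<void(const drogon::HttpResponsePtr&)>&& callback) {
            auto resp = drogon::HttpResponse::newHttpResponse();
            // areAllDbClientsAvailable() is the hypothetical API requested above.
            resp->setStatusCode(drogon::app().areAllDbClientsAvailable()
                                    ? drogon::k200OK
                                    : drogon::k503ServiceUnavailable);
            callback(resp);
        });
    drogon::app().addListener("0.0.0.0", 8080).run();
}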

created time in 17 days

Pull request review comment yuzu-emu/yuzu

Replace externals with Conan

 endif()

 create_target_directory_groups(common)

-target_link_libraries(common PUBLIC Boost::boost fmt microprofile)
-target_link_libraries(common PRIVATE lz4_static libzstd_static)
+target_link_libraries(common PUBLIC Boost::boost fmt::fmt microprofile)

The FindBoost.cmake module built into CMake provides a target called Boost::headers, which only includes the header-only libraries from Boost. We should be using that here because nothing in common requires more than that.
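For reference, a sketch of the suggested one-line change in src/common/CMakeLists.txt (Boost::headers is provided by FindBoost in recent CMake releases):

target_link_libraries(common PUBLIC Boost::headers fmt::fmt microprofile)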

jroweboy

comment created time in 23 days

Pull request review comment yuzu-emu/yuzu

Replace externals with Conan

...
+    # Cmake has a *serious* lack of 2D array or associative array...
+    # Capitalization matters here. We need the naming to match the generated paths from Conan
+    set(REQUIRED_LIBS
+    #    Cmake Pkg Prefix  Version     Conan Pkg
+        "Boost             1.72        boost/1.72.0"
+        "Catch2            2.11        catch2/2.11.0"
+        "fmt               6.2         fmt/6.2.0"
+        "OpenSSL           1.1         openssl/1.1.1f"
+    # can't use until https://github.com/bincrafters/community/issues/1173
+        #"libzip            1.5         libzip/1.5.2@bincrafters/stable"

I meant the Libzip.cmake file, not the todo comment.

jroweboy

comment created time in 23 days

pull request comment yuzu-emu/build-environments

Update the docker images to run as yuzu user (UID 1027)

I am pretty sure that this PR isn't necessary for our use case of Docker. The Docker best practices are written with applications in mind, not build images. Many of the guidelines are very solid and apply to both, but some are either detrimental or unnecessary for build images.

One of them is worrying about privilege escalation. For one, we are not running the images on our own infrastructure; they're being run on Azure. I am confident Azure has protections in place, so we do not need to worry. Additionally, this guidance is targeted more towards runtime/runner containers, where the commands and code provided are more user-controlled. In our case, all of the code exec'd in these images is controlled by us.

Ultimately, while this isn't harmful, per se, I wouldn't say it's necessary either. It also has the potential to cause problems when root permissions are taken for granted.

jroweboy

comment created time in 23 days

Pull request review comment yuzu-emu/yuzu

Replace externals with Conan

 endif()

 create_target_directory_groups(common)

-target_link_libraries(common PUBLIC Boost::boost fmt microprofile)
-target_link_libraries(common PRIVATE lz4_static libzstd_static)
+target_link_libraries(common PUBLIC Boost::boost fmt::fmt microprofile)

should be Boost::headers

jroweboy

comment created time in a month

Pull request review comment yuzu-emu/yuzu

Replace externals with Conan

...
+    # Cmake has a *serious* lack of 2D array or associative array...
+    # Capitalization matters here. We need the naming to match the generated paths from Conan
+    set(REQUIRED_LIBS
+    #    Cmake Pkg Prefix  Version     Conan Pkg
+        "Boost             1.72        boost/1.72.0"
+        "Catch2            2.11        catch2/2.11.0"
+        "fmt               6.2         fmt/6.2.0"
+        "OpenSSL           1.1         openssl/1.1.1f"
+    # can't use until https://github.com/bincrafters/community/issues/1173
+        #"libzip            1.5         libzip/1.5.2@bincrafters/stable"

Either fix the todo or remove FindLIBZIP.cmake; it doesn't make sense to keep unused code around.

jroweboy

comment created time in a month

Pull request review comment yuzu-emu/yuzu

Replace externals with Conan

...
+    # Manually add iconv to fix a dep conflict between qt and sdl2
+    # We don't need to add it through find_package or anything since the other two can find it just fine
+    if (IS_MULTI_CONFIG)
+        conan_cmake_run(REQUIRES ${CONAN_REQUIRED_LIBS}
+                        "libiconv/1.16"
+                        OPTIONS ${CONAN_LIB_OPTIONS}
+                        BUILD missing
+                        CONFIGURATION_TYPES "Release;Debug"
+                        GENERATORS cmake_multi cmake_find_package_multi)
+        include(${CMAKE_BINARY_DIR}/conanbuildinfo_multi.cmake)
+    else()
+        conan_cmake_run(REQUIRES ${CONAN_REQUIRED_LIBS}
+            "libiconv/1.16"

match formatting with other side of if statement

jroweboy

comment created time in a month

Pull request review comment yuzu-emu/yuzu

Replace externals with Conan

...
+    conan_check(VERSION 1.24.0 REQUIRED)
+    # Add the bincrafters remote
+    conan_add_remote(NAME bincrafters INDEX 1

Don't specify an index; it isn't required, and it increases the likelihood of a conflict if a user already has remotes. It is fine with just name/url.
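For reference, the same call without the index (name and URL taken from the diff above):

conan_add_remote(NAME bincrafters
                 URL https://api.bintray.com/conan/bincrafters/public-conan)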

jroweboy

comment created time in a month

Pull request review comment yuzu-emu/yuzu

Replace externals with Conan

...
+    include(${CMAKE_BINARY_DIR}/conan.cmake)
+
+    set(CONAN_LIB_OPTIONS
+        libzip:with_openssl=False
+        libzip:enable_windows_crypto=False
+    )
+    conan_check(VERSION 1.24.0 REQUIRED)

Same reason as before

jroweboy

comment created time in a month

Pull request review comment yuzu-emu/yuzu

Replace externals with Conan

...
+set(CONAN_CMAKE_SILENT_OUTPUT TRUE)
+set(CMAKE_FIND_PACKAGE_PREFER_CONFIG TRUE)
+get_property(IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
+if (YUZU_CONAN_INSTALLED)
+    if (IS_MULTI_CONFIG)
+        include(${CMAKE_BINARY_DIR}/conanbuildinfo_multi.cmake)
+    else()
+        include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
+    endif()
+    list(APPEND CMAKE_MODULE_PATH "${CMAKE_BINARY_DIR}")
+    list(APPEND CMAKE_PREFIX_PATH "${CMAKE_BINARY_DIR}")
+    conan_check(VERSION 1.24.0 REQUIRED)

Is there a specific reason we need 1.24.0? conan_basic_setup will already fail if conan isn't installed.

jroweboy

comment created time in a month

push event yuzu-emu/build-environments

James Rowe

commit sha 4bf08307966f8bf7fdc9f8100d9c4898c99fac95

Update build envs to ubuntu 20.04. Change mingw build env to arch

view details

James Rowe

commit sha affcece6a6ffe9d98c399354d00bd26074d0995c

Remove any leftover apt caches

view details

James Rowe

commit sha 48dc64253f601d80b91b87e351494360ab8821db

Combine RUN statements into one and remove shell expansion

view details

Zach Hilman

commit sha 5ec9c4a00fc2278bdf03bd0e1942687f86d8d7d4

Merge pull request #8 from jroweboy/master

Update build envs to ubuntu 20.04. Change mingw build env to arch

view details

push time in a month

PR merged yuzu-emu/build-environments

Update build envs to ubuntu 20.04. Change mingw build env to arch

Thanks to @lat9nq for the base change to the mingw build env.

+72 -30

3 comments

5 changed files

jroweboy

pr closed time in a month

Pull request review comment yuzu-emu/yuzu

nifm: add structs and stub GetCurrentIpAddress

 class IGeneralService final : public ServiceFramework<IGeneralService> {             rb.Push<u8>(1);         }     }+    void GetCurrentIpAddress(Kernel::HLERequestContext& ctx) {+        LOG_WARNING(Service_NIFM, "(STUBBED) called");++        const auto current_ip_address =+            network_profile_data.ip_data.address_settings.ip_address.address;++        IPC::ResponseBuilder rb{ctx, 3};+        if (current_ip_address == 0) {

This check should remain for the future when we use actual IP addresses.

VolcaEM

comment created time in a month

pull request comment yuzu-emu/build-environments

Update build envs to ubuntu 20.04. Change mingw build env to arch

I conducted a test to verify the claims about removing the apt lists and combining everything into a single RUN command. Here are the results:

REPOSITORY      TAG                 IMAGE ID            CREATED             SIZE
linux-fresh     both                a059598cc380        52 seconds ago      854MB
linux-fresh     only_single_run     c64933a683a2        8 minutes ago       883MB
linux-fresh     only_rm_lists       341ff250849f        16 minutes ago      883MB
linux-fresh     master              736efbbcdad4        16 minutes ago      883MB

This was conducted on Ubuntu Server 18.04 using Docker 18.09.7. As the results show, neither of the two changes has any benefit on its own, but combined they result in a savings of ~30 MB.

jroweboy

comment created time in a month

push event yuzu-emu/build-environments

Roman Meier

commit sha 09245ff74e92919a26f0ca6212db90985e75cd34

Add Dockerfile for linux-flatpak

view details

Roman Meier

commit sha 9daa154e90c6f34d0d24bd5f2bb9bbbaff4ef81c

linux-flatpak: Merge apt-get RUN statements

view details

Zach Hilman

commit sha 27681f5297ff7af407892ee031b215035f04437c

Merge pull request #7 from meiro/master

Add Dockerfile for linux-flatpak

view details

push time in a month

PR merged yuzu-emu/build-environments

Add Dockerfile for linux-flatpak

Based on Citra's linux-flatpak Dockerfile (here), abderrahim's flatpak Dockerfile from PR #4 and yuzu's linux-fresh Dockerfile (here).

Should be published as yuzuemu/build-environments:linux-flatpak

+16 -0

1 comment

1 changed file

meiro

pr closed time in a month

Pull request review comment yuzu-emu/build-environments

Add Dockerfile for linux-flatpak

+FROM ubuntu:18.04
+MAINTAINER yuzu
+
+RUN useradd -m -s /bin/bash yuzu
+RUN apt-get update && apt-get -y full-upgrade && apt-get install --no-install-recommends -y flatpak flatpak-builder ca-certificates

I think you should convert this to ubuntu:20.04 for the base image, but you should also make everything a single RUN command. I explained this in more detail in #8, but for our purposes it makes more sense to optimize for storage.

meiro

comment created time in a month

Pull request review comment yuzu-emu/build-environments

Update build envs to ubuntu 20.04. Change mingw build env to arch

-FROM ubuntu:18.04
+FROM ubuntu:20.04
 MAINTAINER yuzu
 RUN useradd -m -s /bin/bash yuzu
-RUN apt-get update && apt-get -y full-upgrade
-RUN apt-get install --no-install-recommends -y build-essential libsdl2-dev libssl-dev python qtbase5-dev qtwebengine5-dev libqt5opengl5-dev wget git ccache cmake ninja-build
+RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y full-upgrade
+RUN DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \

Considering that these images are probably built once every two months, it doesn't make sense to optimize for build speed. Instead, we should optimize for storage size, as that can reduce build times on every build. Combining commands reduces the layer count, which reduces size, and adding the rm /var/lib/... cleanup does the same. Do note that removing the apt lists or cleaning up pacman only has tangible benefits if it happens in the same layer; otherwise it just adds storage space.
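An illustrative pattern for that point (not the exact yuzu Dockerfile): the cleanup only shrinks the image when it runs in the same RUN, i.e. the same layer, as the install.

FROM ubuntu:20.04
# Install and clean up in one layer so the apt lists never persist in any layer
RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
    apt-get -y full-upgrade && \
    apt-get install --no-install-recommends -y build-essential cmake ninja-build && \
    rm -rf /var/lib/apt/lists/*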

jroweboy

comment created time in a month

started yuzu-emu/yuzu

started time in a month

Pull request review comment yuzu-emu/build-environments

Update build envs to ubuntu 20.04. Change mingw build env to arch

-FROM ubuntu:18.04
+FROM archlinux:latest
 MAINTAINER yuzu
 RUN useradd -m -s /bin/bash yuzu && mkdir -p /tmp/pkgs
-RUN apt-get update && apt-get install -y gpg wget git python3-pip python ccache p7zip-full g++-mingw-w64-x86-64 gcc-mingw-w64-x86-64 mingw-w64-tools cmake ninja-build
-# workaround broken headers in Ubuntu MinGW package
-COPY errno.h /usr/x86_64-w64-mingw32/include/
-# add mingw-w64 auxiliary ppa repository
-RUN echo 'deb http://ppa.launchpad.net/tobydox/mingw-w64/ubuntu bionic main ' > /etc/apt/sources.list.d/extras.list
-RUN apt-key adv --keyserver keyserver.ubuntu.com --recv '72931B477E22FEFD47F8DECE02FE5F12ADDE29B2' && apt-get update
-RUN apt-get install -y sdl2-mingw-w64 qt5base-mingw-w64 qt5tools-mingw-w64 libsamplerate-mingw-w64 qt5multimedia-mingw-w64
+# Add mingw-repo "ownstuff" is a AUR with an up to date mingw64
+RUN echo "[ownstuff]" >> /etc/pacman.conf \
+    && echo "SigLevel = Optional TrustAll" >> /etc/pacman.conf \
+    && echo "Server = https://martchus.no-ip.biz/repo/arch/ownstuff/os/\$arch" >> /etc/pacman.conf
+RUN pacman -Syu --noconfirm
+RUN pacman -Syu --noconfirm

Is this duplicated on purpose?

jroweboy

comment created time in a month

Pull request review comment yuzu-emu/build-environments

Update build envs to ubuntu 20.04. Change mingw build env to arch

-FROM ubuntu:18.04
+FROM ubuntu:20.04
 MAINTAINER yuzu
 RUN useradd -m -s /bin/bash yuzu
-RUN apt-get update && apt-get -y full-upgrade
-RUN apt-get install --no-install-recommends -y build-essential libsdl2-dev libssl-dev python qtbase5-dev qtwebengine5-dev libqt5opengl5-dev wget git ccache cmake ninja-build
+RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y full-upgrade
+RUN DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
+    build-essential \
+    libsdl2-dev \
+    libssl-dev \
+    python \
+    qtbase5-dev \
+    qtwebengine5-dev \
+    libqt5opengl5-dev \
+    wget \
+    git \
+    ccache \
+    cmake \
+    ninja-build

Add a rm -rf /var/lib/apt/lists/* to reduce the final layer size.

jroweboy

comment created time in a month

Pull request review comment yuzu-emu/build-environments

Update build envs to ubuntu 20.04. Change mingw build env to arch

-FROM ubuntu:18.04
+FROM archlinux:latest

Use an explicit tag.
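For example (the exact tag here is illustrative; any pinned snapshot tag works):

# pin the base image instead of tracking :latest
FROM archlinux:20200505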

jroweboy

comment created time in a month

Pull request review comment yuzu-emu/build-environments

Update build envs to ubuntu 20.04. Change mingw build env to arch

-FROM ubuntu:18.04
+FROM archlinux:latest
 MAINTAINER yuzu
 RUN useradd -m -s /bin/bash yuzu && mkdir -p /tmp/pkgs
-RUN apt-get update && apt-get install -y gpg wget git python3-pip python ccache p7zip-full g++-mingw-w64-x86-64 gcc-mingw-w64-x86-64 mingw-w64-tools cmake ninja-build
-# workaround broken headers in Ubuntu MinGW package
-COPY errno.h /usr/x86_64-w64-mingw32/include/
-# add mingw-w64 auxiliary ppa repository
-RUN echo 'deb http://ppa.launchpad.net/tobydox/mingw-w64/ubuntu bionic main ' > /etc/apt/sources.list.d/extras.list
-RUN apt-key adv --keyserver keyserver.ubuntu.com --recv '72931B477E22FEFD47F8DECE02FE5F12ADDE29B2' && apt-get update
-RUN apt-get install -y sdl2-mingw-w64 qt5base-mingw-w64 qt5tools-mingw-w64 libsamplerate-mingw-w64 qt5multimedia-mingw-w64
+# Add mingw-repo "ownstuff" is a AUR with an up to date mingw64
+RUN echo "[ownstuff]" >> /etc/pacman.conf \

Combine everything into one RUN command.

jroweboy

comment created time in a month

Pull request review comment yuzu-emu/build-environments

Update build envs to ubuntu 20.04. Change mingw build env to arch

-FROM ubuntu:18.04
+FROM ubuntu:20.04
 MAINTAINER yuzu
 RUN useradd -m -s /bin/bash yuzu

Combine everything into one RUN command.

jroweboy

comment created time in a month

Pull request review comment yuzu-emu/build-environments

Update build envs to ubuntu 20.04. Change mingw build env to arch

-FROM ubuntu:18.04
+FROM ubuntu:20.04
 MAINTAINER yuzu
 RUN useradd -m -s /bin/bash yuzu
-RUN apt-get update && apt-get -y full-upgrade
-RUN apt-get install --no-install-recommends -y build-essential libsdl2-dev libssl-dev python qtbase5-dev qtwebengine5-dev libqt5opengl5-dev wget git ccache cmake ninja-build
+RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y full-upgrade
+RUN DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \

Combine everything into one RUN command.

jroweboy

comment created time in a month

issue closed yuzu-emu/yuzu

"-Werror=conversion" error after merging PR #3718 in Linux

Building yuzu on master and mainline in Linux, I get the following errors:

In file included from yuzu/src/./video_core/engines/kepler_compute.h:10,
                 from yuzu/src/video_core/renderer_vulkan/vk_rasterizer.cpp:20:
yuzu/src/./common/bit_field.h: In instantiation of ‘constexpr void BitField<Position, Bits, T, EndianTag>::Assign(const T&) [with long unsigned int Position = 0; long unsigned int Bits = 1; T = short unsigned int; EndianTag = KeepTag]’:
yuzu/src/./video_core/renderer_vulkan/fixed_pipeline_state.h:131:51:   required from here
yuzu/src/./common/bit_field.h:183:63: error: conversion from ‘int’ to ‘BitField<0, 1, short unsigned int>::StorageTypeWithEndian’ {aka ‘short unsigned int’} may change value [-Werror=conversion]
  183 |         storage = (static_cast<StorageType>(storage) & ~mask) | FormatValue(value);
      |                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
In file included from yuzu/src/./video_core/renderer_vulkan/vk_rasterizer.h:20,
                 from yuzu/src/video_core/renderer_vulkan/renderer_vulkan.cpp:31:
yuzu/src/./video_core/renderer_vulkan/fixed_pipeline_state.h: In member function ‘void Vulkan::FixedPipelineState::VertexInput::SetBinding(std::size_t, bool, u32, u32)’:
yuzu/src/./video_core/renderer_vulkan/fixed_pipeline_state.h:132:35: error: conversion from ‘u32’ {aka ‘unsigned int’} to ‘short unsigned int’ may change value [-Werror=conversion]
  132 |             binding.stride.Assign(stride);
      |                                   ^~~~~~

The second error was obtained by modifying the line from the first error to storage = (u_int16_t)(static_cast<StorageType>(storage) & ~mask) | (u_int16_t)FormatValue(value);. If I modify the second to read binding.stride.Assign((u_int16_t)stride);, the build completes, but games softlock immediately, so there is something I don't understand well enough to fix the error correctly.

My system specs are: Manjaro Linux 20.0, kernel 5.6.5-10-tkg-pds, Ryzen 7 2700X, gcc (Arch Linux 9.3.0-1) 9.3.0.

The error is NOT detected by the Linux:MinGW Docker build environment (-DENABLE_VULKAN=OFF is a CMake argument in .ci/scripts/windows/docker.sh). I do not have the Linux Docker environment set up.
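A minimal standalone reproduction of this warning class (not yuzu's actual BitField code), showing why the cast has to cover the whole expression:

#include <cstdint>

// Integer promotion turns the right-hand side into int, so assigning it
// back to a 16-bit field trips -Werror=conversion unless the whole
// expression is cast back down to the storage type.
struct Field {
    std::uint16_t storage{};
    static constexpr std::uint16_t mask{0x0001};

    void Assign(std::uint16_t value) {
        // storage = (storage & ~mask) | value;  // warns: int -> uint16_t
        storage = static_cast<std::uint16_t>((storage & ~mask) | value);
    }
};

int main() {
    Field f;
    f.Assign(1);
    return f.storage == 1 ? 0 : 1;
}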

closed time in a month

lat9nq

push event yuzu-emu/yuzu

Markus Wick

commit sha c499c22cf749e49d223b653b036e560a618bb6e2

Fix -Werror=conversion error.

view details

Markus Wick

commit sha e717a1df20b45dc9bd4b77bbbc3d54b1561189d3

Fix -Wdeprecated-copy warning.

view details

Zach Hilman

commit sha 6ec965ef91260d6eb3ec32d9bd0a4a00654e29c0

Merge pull request #3786 from degasus/fix_warnings

Fix -Werror=conversion and -Wdeprecated-copy issues

view details

push time in a month

PR merged yuzu-emu/yuzu

Fix -Werror=conversion and -Wdeprecated-copy issues

Fixes #3754

+3 -2

2 comments

3 changed files

degasus

pr closed time in a month

Pull request review comment yuzu-emu/build-environments

Add Dockerfile for linux-flatpak

+FROM ubuntu:18.04
+MAINTAINER yuzu
+
+RUN useradd -m -s /bin/bash yuzu
+RUN apt-get update && apt-get -y full-upgrade && apt-get install --no-install-recommends -y flatpak flatpak-builder ca-certificates

Add all the packages from the 2nd apt-get install to this one, and add a && rm /var/lib/apt/lists/* to the end to save space/layers. After that, use more && to attach the two flatpak commands below, and add the previous command to the beginning. There should only be one RUN command -- this is the most efficient approach because of how Docker layers work.
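A hedged sketch of the single-RUN layout being requested; the two flatpak commands referenced above aren't visible in this excerpt, so they are left out:

FROM ubuntu:20.04
MAINTAINER yuzu
# One layer: user setup, upgrade, install, and cleanup chained together
RUN useradd -m -s /bin/bash yuzu && \
    apt-get update && \
    apt-get -y full-upgrade && \
    apt-get install --no-install-recommends -y flatpak flatpak-builder ca-certificates && \
    rm -rf /var/lib/apt/lists/*
# the two flatpak commands from the original file would chain onto the same RUN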

meiro

comment created time in a month

issue closed Azure/azure-storage-cpp

Unresolved external symbols when linking to azure-storage-cpp in CMake via VCPkg

I installed azure-storage-cpp via vcpkg, and am using it in a cmake project with the following buildscript to add the library:

find_path(AZURESTORAGE_INCLUDE_DIRS was/blob.h REQUIRED)
find_library(AZURESTORAGE_LIBRARIES azurestorage REQUIRED)

add_library(AzureStorage STATIC IMPORTED GLOBAL)
set_target_properties(AzureStorage PROPERTIES IMPORTED_LOCATION ${AZURESTORAGE_LIBRARIES})
target_include_directories(AzureStorage INTERFACE ${AZURESTORAGE_INCLUDE_DIRS})

However, when I go to compile/link my executable, I get undefined reference errors to what seem to be web functions, boost, xml, and uuid, amongst others.

Am I missing additional target_link_library commands? If so, which libraries? Additionally, would it be possible to provide a CMake config file to make linking easier?

[This was WSL Ubuntu 18.04]

Thanks

closed time in a month

DarkLordZach

issue comment Azure/azure-storage-cpp

Unresolved external symbols when linking to azure-storage-cpp in CMake via VCPkg

Hi! Thanks for the help. I ended up using a file like this (there were other deps I needed to include):

find_path(AZURESTORAGE_INCLUDE_DIRS was/blob.h REQUIRED)
find_library(AZURESTORAGE_LIBRARIES azurestorage REQUIRED)

find_package(cpprestsdk CONFIG REQUIRED)

add_library(AzureStorage STATIC IMPORTED GLOBAL)
set_target_properties(AzureStorage PROPERTIES IMPORTED_LOCATION ${AZURESTORAGE_LIBRARIES})
target_include_directories(AzureStorage INTERFACE ${AZURESTORAGE_INCLUDE_DIRS})

if (UNIX)
    list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
    find_package(OpenSSL 1.0.0 REQUIRED)
    find_package(UUID REQUIRED)
    find_package(LibXml2 REQUIRED)
    find_package(Boost REQUIRED COMPONENTS log log_setup random system thread locale regex filesystem chrono date_time)
    find_package(Threads REQUIRED)
    target_link_libraries(AzureStorage INTERFACE ${Boost_LIBRARIES} ${Boost_FRAMEWORK} ${OPENSSL_LIBRARIES} ${UUID_LIBRARIES} ${LIBXML2_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
endif ()

target_link_libraries(AzureStorage INTERFACE cpprestsdk::cpprest cpprestsdk::cpprestsdk_zlib_internal cpprestsdk::cpprestsdk_boost_internal cpprestsdk::cpprestsdk_openssl_internal)

I'm posting this here for others who have a similar situation. This has only been tested on Linux.

DarkLordZach

comment created time in a month

issue opened Azure/azure-storage-cpp

Unresolved external symbols when linking to azure-storage-cpp in CMake via VCPkg

I installed azure-storage-cpp via vcpkg, and am using it in a cmake project with the following buildscript to add the library:

find_path(AZURESTORAGE_INCLUDE_DIRS was/blob.h REQUIRED)
find_library(AZURESTORAGE_LIBRARIES azurestorage REQUIRED)

add_library(AzureStorage STATIC IMPORTED GLOBAL)
set_target_properties(AzureStorage PROPERTIES IMPORTED_LOCATION ${AZURESTORAGE_LIBRARIES})
target_include_directories(AzureStorage INTERFACE ${AZURESTORAGE_INCLUDE_DIRS})

However, when I go to compile/link my executable, I get undefined reference errors to what seem to be web functions, boost, xml, and uuid, amongst others.

Am I missing additional target_link_library commands? If so, which libraries? Additionally, would it be possible to provide a CMake config file to make linking easier?

[This was WSL Ubuntu 18.04]

Thanks

created time in a month

Pull request review comment yuzu-emu/yuzu

Implement a new virtual memory manager

 std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable&
     // Unpredictable instructions
     config.define_unpredictable_behaviour = true;

+    config.detect_misaligned_access_via_page_table = 16 | 32 | 64 | 128;

Ah, I misinterpreted this as a bit flag (I mean, it technically is, but you know what I mean).

bunnei

comment created time in a month

Pull request review comment yuzu-emu/yuzu

Implement a new virtual memory manager

...
+        constexpr VAddr PushBlock(VAddr address) {
+            // Set the bit for the free block
+            std::size_t offset{(address - heap_address) >> GetShift()};
+            bitmap.SetBit(offset);
+
+            // If we have a next shift, try to clear the blocks below and return the address
+            if (GetNextShift()) {
+                const std::size_t diff{u64(1) << (GetNextShift() - GetShift())};

use static_cast instead of functional-style casts
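For reference, a minimal sketch of the change this comment asks for; the u64 alias and MakeMask function below are illustrative stand-ins, not yuzu code:

    #include <cstddef>
    #include <cstdint>

    using u64 = std::uint64_t; // stand-in for yuzu's common_types alias

    constexpr u64 MakeMask(std::size_t count, std::size_t shift) {
        // Functional-style cast, as flagged in the hunk above:
        // return ((u64(1) << count) - 1) << shift;

        // static_cast form requested by the review:
        return ((static_cast<u64>(1) << count) - 1) << shift;
    }

    static_assert(MakeMask(4, 2) == 0b111100, "same value either way");

static_cast spells out the conversion and cannot silently degrade into a C-style cast the way a functional-style cast on a pointer type can.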

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

+// Copyright 2020 yuzu Emulator Project
+// Licensed under GPLv2 or any later version
+// Refer to the license.txt file included.
+
+#include <algorithm>
+
+#include "common/alignment.h"
+#include "common/assert.h"
+#include "common/common_types.h"
+#include "common/scope_exit.h"
+#include "core/hle/kernel/errors.h"
+#include "core/hle/kernel/memory/memory_manager.h"
+#include "core/hle/kernel/memory/page_linked_list.h"
+
+namespace Kernel::Memory {
+
+std::size_t MemoryManager::Impl::Initialize(Pool new_pool, u64 start_address, u64 end_address) {
+    const std::size_t size{end_address - start_address};
+
+    // Calculate metadata sizes
+    const std::size_t ref_count_size{(size / PageSize) * sizeof(u16)};
+    const std::size_t optimize_map_size{(Common::AlignUp((size / PageSize), 64) / 64) *
+                                        sizeof(u64)};
+    const std::size_t manager_size{Common::AlignUp(optimize_map_size + ref_count_size, PageSize)};
+    const std::size_t page_heap_size{PageHeap::CalculateMetadataOverheadSize(size)};
+    const std::size_t total_metadata_size{manager_size + page_heap_size};
+    ASSERT(manager_size <= total_metadata_size);
+    ASSERT(Common::IsAligned(total_metadata_size, PageSize));
+
+    // Setup region
+    pool = new_pool;
+
+    // Initialize the manager's KPageHeap
+    heap.Initialize(start_address, size, page_heap_size);
+
+    // Free the memory to the heap
+    heap.Free(start_address, size / PageSize);
+
+    // Update the heap's used size
+    heap.UpdateUsedSize();
+
+    return total_metadata_size;
+}
+
+void MemoryManager::InitializeManager(Pool pool, u64 start_address, u64 end_address) {
+    ASSERT(pool < Pool::Count);
+    managers[static_cast<std::size_t>(pool)].Initialize(pool, start_address, end_address);
+}
+
+VAddr MemoryManager::AllocateContinuous(std::size_t num_pages, std::size_t align_pages, Pool pool,
+                                        Direction dir) {
+    // Early return if we're allocating no pages
+    if (num_pages == 0) {
+        return {};
+    }
+
+    // Lock the pool that we're allocating from
+    const std::size_t pool_index{static_cast<std::size_t>(pool)};
+    std::lock_guard lock{pool_locks[pool_index]};
+
+    // Choose a heap based on our page size request
+    const s32 heap_index{PageHeap::GetAlignedBlockIndex(num_pages, align_pages)};
+
+    // Loop, trying to iterate from each block
+    // TODO (bunnei): Support multiple managers
+    Impl& chosen_manager{managers[pool_index]};
+    VAddr allocated_block{chosen_manager.AllocateBlock(heap_index)};
+
+    // If we failed to allocate, quit now
+    if (!allocated_block) {
+        return {};
+    }
+
+    // If we allocated more than we need, free some
+    const std::size_t allocated_pages{PageHeap::GetBlockNumPages(heap_index)};
    const auto allocated_pages{PageHeap::GetBlockNumPages(heap_index)};
bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

 #include <string>
 
 #include "common/common_types.h"
+#include "core/device_memory.h"
+#include "core/hle/kernel/memory/memory_block.h"
+#include "core/hle/kernel/memory/page_linked_list.h"
 #include "core/hle/kernel/object.h"
-#include "core/hle/kernel/physical_memory.h"
 #include "core/hle/kernel/process.h"
 #include "core/hle/result.h"
 
 namespace Kernel {
 
 class KernelCore;
 
-/// Permissions for mapped shared memory blocks
-enum class MemoryPermission : u32 {
-    None = 0,
-    Read = (1u << 0),
-    Write = (1u << 1),
-    ReadWrite = (Read | Write),
-    Execute = (1u << 2),
-    ReadExecute = (Read | Execute),
-    WriteExecute = (Write | Execute),
-    ReadWriteExecute = (Read | Write | Execute),
-    DontCare = (1u << 28)
-};
-
 class SharedMemory final : public Object {
 public:
-    explicit SharedMemory(KernelCore& kernel);
+    explicit SharedMemory(KernelCore& kernel, Core::DeviceMemory& device_memory);
     ~SharedMemory() override;
 
-    /**
-     * Creates a shared memory object.
-     * @param kernel The kernel instance to create a shared memory instance under.
-     * @param owner_process Process that created this shared memory object.
-     * @param size Size of the memory block. Must be page-aligned.
-     * @param permissions Permission restrictions applied to the process which created the block.
-     * @param other_permissions Permission restrictions applied to other processes mapping the
-     * block.
-     * @param address The address from which to map the Shared Memory.
-     * @param region If the address is 0, the shared memory will be allocated in this region of the
-     * linear heap.
-     * @param name Optional object name, used for debugging purposes.
-     */
-    static std::shared_ptr<SharedMemory> Create(KernelCore& kernel, Process* owner_process,
-                                                u64 size, MemoryPermission permissions,
-                                                MemoryPermission other_permissions,
-                                                VAddr address = 0,
-                                                MemoryRegion region = MemoryRegion::BASE,
-                                                std::string name = "Unknown");
-
-    /**
-     * Creates a shared memory object from a block of memory managed by an HLE applet.
-     * @param kernel The kernel instance to create a shared memory instance under.
-     * @param heap_block Heap block of the HLE applet.
-     * @param offset The offset into the heap block that the SharedMemory will map.
-     * @param size Size of the memory block. Must be page-aligned.
-     * @param permissions Permission restrictions applied to the process which created the block.
-     * @param other_permissions Permission restrictions applied to other processes mapping the
-     * block.
-     * @param name Optional object name, used for debugging purposes.
-     */
-    static std::shared_ptr<SharedMemory> CreateForApplet(
-        KernelCore& kernel, std::shared_ptr<Kernel::PhysicalMemory> heap_block, std::size_t offset,
-        u64 size, MemoryPermission permissions, MemoryPermission other_permissions,
-        std::string name = "Unknown Applet");
+    static std::shared_ptr<SharedMemory> Create(
+        KernelCore& kernel, Core::DeviceMemory& device_memory, Process* owner_process,
+        Memory::PageLinkedList&& page_list, Memory::MemoryPermission owner_permission,
+        Memory::MemoryPermission user_permission, PAddr physical_address, std::size_t size,
+        std::string name = "Unknown");

Don't default the name -- clients should have to name their memory
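A small sketch of the point, using hypothetical stand-in types rather than yuzu's real Create() signature:

    #include <memory>
    #include <string>
    #include <utility>

    // Minimal sketch (these are stand-in types, not yuzu's API): dropping the
    // defaulted name forces every call site to label the memory it creates.
    struct SharedBlock {
        std::string name;
    };

    // Before: std::shared_ptr<SharedBlock> Create(std::string name = "Unknown");
    // After: the name parameter is mandatory.
    std::shared_ptr<SharedBlock> Create(std::string name) {
        return std::make_shared<SharedBlock>(SharedBlock{std::move(name)});
    }

    int main() {
        // Compiles only because a name is supplied; a bare Create() would not.
        const auto block = Create("hid:shared_mem");
        return block->name.empty() ? 1 : 0;
    }

Mandatory names keep debug output useful, since no block can fall back to a meaningless "Unknown" label.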

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

 static ResultCode QueryProcessMemory(Core::System& system, VAddr memory_info_add
         return ERR_INVALID_HANDLE;
     }
 
-    auto& memory = system.Memory();
-    const auto& vm_manager = process->VMManager();
-    const MemoryInfo memory_info = vm_manager.QueryMemory(address);
-
-    memory.Write64(memory_info_address, memory_info.base_address);
-    memory.Write64(memory_info_address + 8, memory_info.size);
-    memory.Write32(memory_info_address + 16, memory_info.state);
-    memory.Write32(memory_info_address + 20, memory_info.attributes);
-    memory.Write32(memory_info_address + 24, memory_info.permission);
-    memory.Write32(memory_info_address + 32, memory_info.ipc_ref_count);
-    memory.Write32(memory_info_address + 28, memory_info.device_ref_count);
-    memory.Write32(memory_info_address + 36, 0);
+    auto& memory{system.Memory()};
+    const Svc::MemoryInfo memory_info{process->PageTable().QueryInfo(address).GetSvcMemoryInfo()};

auto
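The comment suggests auto for the flagged declaration, since the initializer already names the type. A simplified sketch of the pattern; MemoryInfo and QueryInfo here are stand-ins, not yuzu's Svc types:

    #include <cstdint>

    struct MemoryInfo {
        std::uint64_t base_address{};
        std::uint64_t size{};
    };

    MemoryInfo QueryInfo() {
        return {0x1000, 0x2000};
    }

    int main() {
        // Spelling the type on both sides is redundant:
        const MemoryInfo info_explicit{QueryInfo()};

        // Deduced form, as the review suggests:
        const auto info_deduced{QueryInfo()};

        return (info_explicit.size == info_deduced.size) ? 0 : 1;
    }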

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

+// Copyright 2020 yuzu Emulator Project
+// Licensed under GPLv2 or any later version
+// Refer to the license.txt file included.
+
+#include "common/alignment.h"
+#include "common/assert.h"
+#include "common/scope_exit.h"
+#include "core/core.h"
+#include "core/device_memory.h"
+#include "core/hle/kernel/errors.h"
+#include "core/hle/kernel/kernel.h"
+#include "core/hle/kernel/memory/address_space_info.h"
+#include "core/hle/kernel/memory/memory_block.h"
+#include "core/hle/kernel/memory/memory_block_manager.h"
+#include "core/hle/kernel/memory/page_linked_list.h"
+#include "core/hle/kernel/memory/page_table.h"
+#include "core/hle/kernel/memory/system_control.h"
+#include "core/hle/kernel/process.h"
+#include "core/hle/kernel/resource_limit.h"
+#include "core/memory.h"
+
+namespace Kernel::Memory {
...
+void PageTable::MapPhysicalMemory(PageLinkedList& page_linked_list, VAddr start, VAddr end) {
+    auto node{page_linked_list.Nodes().begin()};
+    PAddr map_addr{node->GetAddress()};
+    std::size_t src_num_pages{node->GetNumPages()};
+
+    block_manager->IterateForRange(start, end, [&](const MemoryInfo& info) {

explicit captures
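A small sketch of swapping the catch-all [&] for an explicit capture list; the types and IterateForRange below are simplified stand-ins, not yuzu's real API:

    #include <cstddef>
    #include <functional>
    #include <vector>

    struct Info {
        std::size_t size;
    };

    void IterateForRange(const std::vector<Info>& infos,
                         const std::function<void(const Info&)>& fn) {
        for (const auto& info : infos) {
            fn(info);
        }
    }

    int main() {
        std::vector<Info> infos{{1}, {2}};
        std::size_t mapped_size{};

        // Implicit capture-everything, as in the hunk:
        IterateForRange(infos, [&](const Info& info) { mapped_size += info.size; });

        // Explicit capture, as the review requests: the lambda's dependencies
        // are visible at a glance.
        IterateForRange(infos, [&mapped_size](const Info& info) { mapped_size += info.size; });

        return mapped_size == 6 ? 0 : 1;
    }

Explicit captures also make accidental reference captures (and the dangling bugs they can cause) easier to spot in review.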

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

...
+    block_manager->IterateForRange(start, end, [&](const MemoryInfo& info) {
+        if (info.state != MemoryState::Free) {
+            return;
+        }
+
+        std::size_t dst_num_pages{GetSizeInRange(info, start, end) / PageSize};
+        VAddr dst_addr{GetAddressInRange(info, start)};
+
+        while (dst_num_pages) {
+            if (!src_num_pages) {
+                node = std::next(node);
+                map_addr = node->GetAddress();
+                src_num_pages = node->GetNumPages();
+            }
+
+            const std::size_t num_pages{std::min(src_num_pages, dst_num_pages)};
+            Operate(dst_addr, num_pages, MemoryPermission::ReadAndWrite, OperationType::Map,
+                    map_addr);
+
+            dst_addr += num_pages * PageSize;
+            map_addr += num_pages * PageSize;
+            src_num_pages -= num_pages;
+            dst_num_pages -= num_pages;
+        }
+    });
+}
+
+ResultCode PageTable::MapPhysicalMemory(VAddr addr, std::size_t size) {
+    std::lock_guard lock{page_table_lock};
+
+    std::size_t mapped_size{};
+    const VAddr end_addr{addr + size};
+
+    block_manager->IterateForRange(addr, end_addr, [&](const MemoryInfo& info) {

explicit captures

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

 // Licensed under GPLv2 or any later version
 // Refer to the license.txt file included.
 
-#include <utility>
-
 #include "common/assert.h"
-#include "common/logging/log.h"
-#include "core/hle/kernel/errors.h"
+#include "core/core.h"
 #include "core/hle/kernel/kernel.h"
+#include "core/hle/kernel/memory/page_table.h"
 #include "core/hle/kernel/shared_memory.h"
 
 namespace Kernel {
...
-ResultCode SharedMemory::Map(Process& target_process, VAddr address, MemoryPermission permissions,
-                             MemoryPermission other_permissions) {
-    const MemoryPermission own_other_permissions =
-        &target_process == owner_process ? this->permissions : this->other_permissions;
-
-    // Automatically allocated memory blocks can only be mapped with other_permissions = DontCare
-    if (base_address == 0 && other_permissions != MemoryPermission::DontCare) {
-        return ERR_INVALID_MEMORY_PERMISSIONS;
-    }
-
-    // Error out if the requested permissions don't match what the creator process allows.
-    if (static_cast<u32>(permissions) & ~static_cast<u32>(own_other_permissions)) {
-        LOG_ERROR(Kernel, "cannot map id={}, address=0x{:X} name={}, permissions don't match",
-                  GetObjectId(), address, name);
-        return ERR_INVALID_MEMORY_PERMISSIONS;
-    }
+ResultCode SharedMemory::Map(Process& target_process, VAddr address, std::size_t size,
+                             Memory::MemoryPermission permission) {
+    const u64 page_count{(size + Memory::PageSize - 1) / Memory::PageSize};
 
-    // Error out if the provided permissions are not compatible with what the creator process needs.
-    if (other_permissions != MemoryPermission::DontCare &&
-        static_cast<u32>(this->permissions) & ~static_cast<u32>(other_permissions)) {
-        LOG_ERROR(Kernel, "cannot map id={}, address=0x{:X} name={}, permissions don't match",
-                  GetObjectId(), address, name);
-        return ERR_INVALID_MEMORY_PERMISSIONS;
+    if (page_list.GetNumPages() != page_count) {
+        UNIMPLEMENTED();
     }
 
-    VAddr target_address = address;
+    Memory::MemoryPermission expected =
+        &target_process == owner_process ? owner_permission : user_permission;
 
-    // Map the memory block into the target process
-    auto result = target_process.VMManager().MapMemoryBlock(
-        target_address, backing_block, backing_block_offset, size, MemoryState::Shared);
-    if (result.Failed()) {
-        LOG_ERROR(
-            Kernel,
-            "cannot map id={}, target_address=0x{:X} name={}, error mapping to virtual memory",
-            GetObjectId(), target_address, name);
-        return result.Code();
+    if (permission != expected) {
+        UNIMPLEMENTED();

ditto

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

...
+ResultCode SharedMemory::Map(Process& target_process, VAddr address, std::size_t size,
+                             Memory::MemoryPermission permission) {
+    const u64 page_count{(size + Memory::PageSize - 1) / Memory::PageSize};
+
+    if (page_list.GetNumPages() != page_count) {
+        UNIMPLEMENTED();

provide a message for logs
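A sketch of what the comment asks for: fail loudly with context instead of a bare UNIMPLEMENTED(). The macro below is a printf stand-in; yuzu's assert header provides message-taking variants along these lines:

    #include <cstdio>

    // printf-based stand-in for a message-taking assert macro; yuzu's
    // common/assert.h offers such variants (exact macro name assumed here).
    #define UNIMPLEMENTED_MSG(...) std::fprintf(stderr, __VA_ARGS__)

    void MapChecked(unsigned long long actual_pages, unsigned long long page_count) {
        if (actual_pages != page_count) {
            // With a message, the log records what mismatched, not just that
            // an unimplemented path was hit.
            UNIMPLEMENTED_MSG("page_list pages (%llu) != requested page_count (%llu)\n",
                              actual_pages, page_count);
        }
    }

    int main() {
        MapChecked(3, 4);
        return 0;
    }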

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

...
+ResultCode MemoryManager::Allocate(PageLinkedList& page_list, std::size_t num_pages, Pool pool,
+                                   Direction dir) {
+    ASSERT(page_list.GetNumPages() == 0);
+
+    // Early return if we're allocating no pages
+    if (num_pages == 0) {
+        return RESULT_SUCCESS;
+    }
+
+    // Lock the pool that we're allocating from
+    const std::size_t pool_index{static_cast<std::size_t>(pool)};
    const auto pool_index{static_cast<std::size_t>(pool)};
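
(The cast already spells out the type, so auto avoids restating std::size_t; the behavior is unchanged.)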
bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

Review context (SVC_Table_64 in core/hle/kernel/svc.cpp):

 static const FunctionDef SVC_Table_64[] = {
     {0x74, nullptr, "MapProcessMemory"},
     {0x75, nullptr, "UnmapProcessMemory"},
     {0x76, SvcWrap64<QueryProcessMemory>, "QueryProcessMemory"},
-    {0x77, SvcWrap64<MapProcessCodeMemory>, "MapProcessCodeMemory"},
-    {0x78, SvcWrap64<UnmapProcessCodeMemory>, "UnmapProcessCodeMemory"},
+    {0x77, nullptr, "MapProcessCodeMemory"},

This seems like a mistake?
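
If so, restoring the removed wrappers (the deleted lines above) would look like:

    {0x77, SvcWrap64<MapProcessCodeMemory>, "MapProcessCodeMemory"},
    {0x78, SvcWrap64<UnmapProcessCodeMemory>, "UnmapProcessCodeMemory"},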

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

Review context (PageTable::SetHeapSize in core/hle/kernel/memory/page_table.cpp; the full diff of this new file is trimmed to the lines under discussion):

ResultVal<VAddr> PageTable::SetHeapSize(std::size_t size) {
    if (size > heap_region_end - heap_region_start) {
        return ERR_OUT_OF_MEMORY;
    }

    const u64 previous_heap_size{GetHeapSize()};

    UNIMPLEMENTED_IF_MSG(previous_heap_size > size, "Heap shrink is unimplemented");

Will this crash if this is bypassed by ignore-asserts?
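
If asserts are compiled out or ignored, execution falls through into the unimplemented shrink path. A sketch of failing gracefully instead (the error code choice is an assumption, not the PR's behavior):

    if (previous_heap_size > size) {
        // Heap shrink is unimplemented; report an error rather than continue
        // with inconsistent state when asserts are disabled.
        LOG_ERROR(Kernel, "Attempted heap shrink is unimplemented, size=0x{:X}", size);
        return ERR_INVALID_STATE;
    }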

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

Review context (PageTable::Map in core/hle/kernel/memory/page_table.cpp; full diff trimmed to a representative hunk):

    PageLinkedList page_linked_list;
    const std::size_t num_pages{size / PageSize};

    AddRegionToPages(src_addr, num_pages, page_linked_list);

    {
        auto block_guard = detail::ScopeExit([&] {
            Operate(src_addr, num_pages, MemoryPermission::ReadAndWrite,
                    OperationType::ChangePermissions);
        });

        if (const ResultCode result{Operate(src_addr, num_pages, MemoryPermission::None,

I'm going to stop listing each example in this file, but: auto and explicit captures.
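
For illustration, the hunk above rewritten in the requested style (a sketch; the capture list is an assumption about what is wanted):

    {
        // Name the captured state explicitly instead of using [&]
        auto block_guard = detail::ScopeExit([this, src_addr, num_pages] {
            Operate(src_addr, num_pages, MemoryPermission::ReadAndWrite,
                    OperationType::ChangePermissions);
        });
        // ...
        block_guard.Cancel();
    }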

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

Review context (core/hle/kernel/memory/system_control.cpp; full diff trimmed to the lines under discussion):

namespace Kernel::Memory::SystemControl {

u64 GenerateRandomU64ForInit() {
    std::random_device device;

It is not guaranteed by the standard that new instances of this won't produce the same value on each call. Consider making this, and the next two objects, static to avoid init costs and ensure it's random.
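
A sketch of that suggestion (the engine and distribution types are assumptions based on "the next two objects"; requires <limits>):

    u64 GenerateRandomU64ForInit() {
        // Constructed once: the standard does not guarantee that fresh
        // std::random_device instances produce different values on each call.
        static std::random_device device;
        static std::mt19937_64 gen(device());
        static std::uniform_int_distribution<u64> distribution(1, std::numeric_limits<u64>::max());
        return distribution(gen);
    }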

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

+void PageTable::MapPhysicalMemory(PageLinkedList& page_linked_list, VAddr start, VAddr end) {
+    auto node{page_linked_list.Nodes().begin()};
+    PAddr map_addr{node->GetAddress()};
+    std::size_t src_num_pages{node->GetNumPages()};
+
+    block_manager->IterateForRange(start, end, [&](const MemoryInfo& info) {
+        if (info.state != MemoryState::Free) {
+            return;
+        }
+
+        std::size_t dst_num_pages{GetSizeInRange(info, start, end) / PageSize};
+        VAddr dst_addr{GetAddressInRange(info, start)};
+
+        while (dst_num_pages) {
+            if (!src_num_pages) {
+                node = std::next(node);
+                map_addr = node->GetAddress();
+                src_num_pages = node->GetNumPages();
+            }
+
+            const std::size_t num_pages{std::min(src_num_pages, dst_num_pages)};

auto

bunnei

comment created time in a month
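
The one-word review above presumably asks for auto on the deduced line. A minimal standalone sketch of that reading (names are illustrative, not from the PR): since C++17, a single-element braced initializer lets auto deduce the operand's type, so the spelled-out std::size_t is redundant.

#include <algorithm>
#include <cstddef>
#include <type_traits>

void Sketch(std::size_t src_num_pages, std::size_t dst_num_pages) {
    // Before: const std::size_t num_pages{std::min(src_num_pages, dst_num_pages)};
    // After: auto deduces std::size_t from the single-element braced initializer.
    const auto num_pages{std::min(src_num_pages, dst_num_pages)};
    static_assert(std::is_same_v<decltype(num_pages), const std::size_t>);
    (void)num_pages;
}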

Pull request review comment yuzu-emu/yuzu

Implement a new virtual memory manager

+ResultCode PageTable::MapProcessCode(VAddr addr, std::size_t num_pages, MemoryState state,
+                                     MemoryPermission perm) {
+    std::lock_guard lock{page_table_lock};
+
+    const u64 size{num_pages * PageSize};
+
+    if (!CanContain(addr, size, state)) {
+        return ERR_INVALID_ADDRESS_STATE;
+    }
+
+    if (IsRegionMapped(addr, size)) {
+        return ERR_INVALID_ADDRESS_STATE;
+    }
+
+    PageLinkedList page_linked_list;
+    if (const ResultCode result{
    if (const auto result{
bunnei

comment created time in a month
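
The suggestion swaps the spelled-out ResultCode for auto inside a C++17 if-with-initializer. A minimal sketch under assumed types (ResultCode and Allocate here are simplified stand-ins, not the kernel's real definitions):

// Simplified stand-in for the kernel's ResultCode, so the sketch is self-contained.
struct ResultCode {
    int raw{};
    bool IsError() const { return raw != 0; }
};

ResultCode Allocate() { return {}; }  // stand-in for MemoryManager().Allocate(...)

ResultCode MapSketch() {
    // Declare the result in the if-initializer, test it, and return early on error;
    // auto deduces ResultCode without naming the type.
    if (const auto result{Allocate()}; result.IsError()) {
        return result;
    }
    return {};
}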

Pull request review comment yuzu-emu/yuzu

Implement a new virtual memory manager

+    PageLinkedList page_linked_list;
+    if (const ResultCode result{
+            system.Kernel().MemoryManager().Allocate(page_linked_list, num_pages, memory_pool)};
+        result.IsError()) {
+        return result;
+    }
+
+    if (const ResultCode result{
    if (const auto result{
bunnei

comment created time in a month

Pull request review comment yuzu-emu/yuzu

Implement a new virtual memory manager

+ResultCode PageTable::UnmapProcessCodeMemory(VAddr dst_addr, VAddr src_addr, std::size_t size) {
+    std::lock_guard lock{page_table_lock};
+
+    if (!size) {
+        return RESULT_SUCCESS;
+    }
+
+    const std::size_t num_pages{size / PageSize};
+
+    if (const ResultCode result{CheckMemoryState(
    if (const auto result{CheckMemoryState(
bunnei

comment created time in a month

Pull request review comment yuzu-emu/yuzu

Implement a new virtual memory manager

+        constexpr u64* Initialize(VAddr addr, std::size_t size, std::size_t bs, std::size_t nbs,
+                                  u64* bit_storage) {
+            // Set shifts
+            block_shift = bs;
+            next_block_shift = nbs;
+
+            // Align up the address
+            VAddr end{addr + size};
+            const std::size_t align{(next_block_shift != 0) ? (u64(1) << next_block_shift)
+                                                            : (u64(1) << block_shift)};
+            addr = Common::AlignDown((addr), align);
+            end = Common::AlignUp((end), align);
+
+            heap_address = addr;
+            end_offset = (end - addr) / (u64(1) << block_shift);

use static_cast instead of functional-style

bunnei

comment created time in a month
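For context, u64(1) above is a functional-style cast, which modern C++ style guides discourage because it permits the same unchecked conversions as a C-style cast. A minimal sketch of the requested change on the flagged line:

    // Before (functional-style cast, flagged in review):
    end_offset = (end - addr) / (u64(1) << block_shift);

    // After (explicit static_cast, as the review asks):
    end_offset = (end - addr) / (static_cast<u64>(1) << block_shift);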

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from PageTable::InitializeForProcess, truncated; the lambda flagged below:]

    const auto GetSpaceStart = [&](AddressSpaceInfo::Type type) {
        return AddressSpaceInfo::GetAddressSpaceStart(address_space_width, type);
    };

explicitly specify captures

bunnei

comment created time in a month
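To illustrate the point: [&] implicitly captures every referenced variable by reference, which hides what the lambda actually depends on. Since this lambda only reads the address_space_width member, an explicit capture list would name that dependency (a sketch; the exact list is up to the author):

    const auto GetSpaceStart = [this](AddressSpaceInfo::Type type) {
        return AddressSpaceInfo::GetAddressSpaceStart(address_space_width, type);
    };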

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from PageHeap::GetAlignedBlockIndex, truncated; the line flagged below:]

        const std::size_t target_pages{std::max(num_pages, align_pages)};
        const auto target_pages{std::max(num_pages, align_pages)};
bunnei

comment created time in a month
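The suggestion swaps the spelled-out std::size_t for auto: both arguments are already std::size_t, so std::max deduces that type and nothing is lost by letting the compiler name it.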

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from PageTable::UnmapProcessCodeMemory, truncated; the pattern flagged below:]

    if (const ResultCode result{CheckMemoryState(
            nullptr, nullptr, nullptr, src_addr, size, MemoryState::All, MemoryState::Normal,
            MemoryPermission::None, MemoryPermission::None, MemoryAttribute::Mask,
            MemoryAttribute::Locked, MemoryAttribute::IpcAndDeviceMapped)};
        result.IsError()) {
        return result;
    }

ditto

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from the PageHeap Block::Initialize review, truncated; the code flagged below:]

            const std::size_t align{(next_block_shift != 0) ? (u64(1) << next_block_shift)
                                                            : (u64(1) << block_shift)};

use static_cast instead of functional-style

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context, truncated; the namespace flagged below:]

namespace DramMemoryMap {

It's up to you, but imo it'd be more readable to do a class-less enum:

    namespace DramMemoryMap {
    enum : u64 {
        Base = XXX,
        Size = XXX,
        ...
    };
    }
bunnei

comment created time in a month
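The practical upside of the class-less form: the enumerators still sit behind the DramMemoryMap:: qualifier, but because the enum is unscoped they convert to u64 without a static_cast at every use site. A sketch with purely hypothetical values (the real constants are elided above):

    namespace DramMemoryMap {
    enum : u64 {
        Base = 0x80000000,  // hypothetical value, for illustration only
        Size = 0x100000000, // hypothetical value, for illustration only
    };
    }

    // No cast needed where a u64 is expected:
    const u64 dram_end{DramMemoryMap::Base + DramMemoryMap::Size};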

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from MemoryBlockManager::FindFreeArea, truncated; the loop flagged below:]

    for (const_iterator it{FindIterator(region_start)}; it != memory_block_tree.cend(); it++) {
    for (auto it{FindIterator(region_start)}; it != memory_block_tree.cend(); ++it) {
bunnei

comment created time in a month
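Two changes in one suggestion: auto replaces the spelled-out const_iterator, and ++it replaces it++. The latter matters for non-trivial iterators, where post-increment constructs a copy of the iterator only to throw it away.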

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from PageHeap::GetBlockSize, truncated; the line flagged below:]

        return std::size_t(1) << MemoryBlockPageShifts[index];

use static_cast instead of functional-style

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from MemoryBlockManager::FindFreeArea, truncated; the line flagged below:]

        const MemoryInfo info{it->GetMemoryInfo()};
        const auto info{it->GetMemoryInfo()};
bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from address_space_info.cpp, truncated; the constant flagged below:]

constexpr std::size_t Size_1_MB{0x100000};

Same as mentioned before, consider a classless enum

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from ARM_Dynarmic_64::MakeJit, truncated; the line flagged below:]

    config.detect_misaligned_access_via_page_table = 16 | 32 | 64 | 128;
    config.detect_misaligned_access_via_page_table = 0b11110000;
bunnei

comment created time in a month
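The two forms are bit-for-bit identical: 16 | 32 | 64 | 128 is 0b00010000 | 0b00100000 | 0b01000000 | 0b10000000, which is 0b11110000. The binary literal simply makes the four contiguous flag bits visible at a glance:

    static_assert((16 | 32 | 64 | 128) == 0b11110000);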

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context, truncated; the macro flagged below:]

#define DECLARE_ENUM_FLAG_OPERATORS(type)                                                          \
    constexpr type operator|(type a, type b) noexcept {                                            \
        using T = std::underlying_type_t<type>;                                                    \
        return type(static_cast<T>(a) | static_cast<T>(b));                                        \

You're doing a functional-style cast to type here

        return static_cast<type>(static_cast<T>(a) | static_cast<T>(b));                                        \

applies to rest of ops too

bunnei

comment created time in a month
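For context, with the suggested static_cast applied to every operator the macro generates, the | overload would read as follows (a sketch, abbreviated to one operator):

    #define DECLARE_ENUM_FLAG_OPERATORS(type)                                \
        constexpr type operator|(type a, type b) noexcept {                  \
            using T = std::underlying_type_t<type>;                          \
            return static_cast<type>(static_cast<T>(a) | static_cast<T>(b)); \
        }

    // Usage: lets scoped-enum flags combine naturally, e.g.
    // DECLARE_ENUM_FLAG_OPERATORS(MemoryPermission)
    // auto rw = MemoryPermission::Read | MemoryPermission::Write;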

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from the RelocatableObject service, truncated; the line flagged below:]

        const ResultVal<VAddr> map_result{MapNro(system.CurrentProcess(), nro_address, nro_size,
                                                 bss_address, bss_size, nro_size + bss_size)};

auto

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from the RelocatableObject service, truncated; MapProcessCodeMemory's retry loop, flagged below:]

            const ResultCode result{page_table.MapProcessCodeMemory(addr, baseAddress, size)};

            if (result == Kernel::ERR_INVALID_ADDRESS_STATE) {
                continue;
            }

            if (result.IsError()) {

CASCADE_CODE

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from the RelocatableObject service, truncated; the line flagged below:]

        // Load the NRO into the mapped memory
        if (const ResultCode result{

auto

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

 static const FunctionDef SVC_Table_64[] = {
     {0x4D, nullptr, "SleepSystem"},
     {0x4E, nullptr, "ReadWriteRegister"},
     {0x4F, nullptr, "SetProcessActivity"},
-    {0x50, SvcWrap64<CreateSharedMemory>, "CreateSharedMemory"},
-    {0x51, SvcWrap64<MapTransferMemory>, "MapTransferMemory"},
-    {0x52, SvcWrap64<UnmapTransferMemory>, "UnmapTransferMemory"},
+    {0x50, nullptr, "CreateSharedMemory"},
+    {0x51, nullptr, "MapTransferMemory"},

Same -- this seems weird? Do applets still work?

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from the RelocatableObject service, truncated; the line flagged below:]

        const ResultCode result{UnmapNro(iter->second)};

auto

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from PageTable::MapProcessCode, truncated; the pattern flagged below:]

    PageLinkedList page_linked_list;
    if (const ResultCode result{
            system.Kernel().MemoryManager().Allocate(page_linked_list, num_pages, memory_pool)};
        result.IsError()) {
        return result;
    }

I'm not going to point each one out, but whenever you have this pattern and all you are doing in the if is returning the code, you should be able to just CASCADE_CODE(function).

bunnei

comment created time in a month
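For readers outside the yuzu codebase: CASCADE_CODE is a convenience macro from yuzu's result-handling headers that evaluates an expression yielding a ResultCode and early-returns it on failure. Roughly (a paraphrase of the idea, not the macro's literal definition):

    // Sketch of what CASCADE_CODE does, conceptually:
    #define CASCADE_CODE(expr)                                          \
        do {                                                            \
            if (const auto cascade_rc = (expr); cascade_rc.IsError()) { \
                return cascade_rc;                                      \
            }                                                           \
        } while (false)

    // The flagged pattern above then collapses to a single line:
    CASCADE_CODE(system.Kernel().MemoryManager().Allocate(page_linked_list, num_pages, memory_pool));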

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

[diff context from MemoryManager::AllocateContinuous, truncated; the check flagged below:]

    VAddr allocated_block{chosen_manager.AllocateBlock(heap_index)};

    // If we failed to allocate, quit now
    if (!allocated_block) {
    if (allocated_block == 0) {
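
For context, a minimal self-contained sketch of what this suggestion changes, assuming VAddr is an unsigned integer alias as in yuzu's common types:

#include <cstdint>

using VAddr = std::uint64_t; // assumption: yuzu's VAddr is a 64-bit unsigned alias

bool FailedImplicit(VAddr allocated_block) {
    return !allocated_block; // truthiness test on an integer address
}

bool FailedExplicit(VAddr allocated_block) {
    return allocated_block == 0; // same behavior; the "0 means no block" convention is visible
}
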
bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

// This file references various implementation details from Atmosphère, an open-source firmware for
// the Nintendo Switch. Copyright 2018-2020 Atmosphère-NX.
// ...

class PageHeap final : NonCopyable {
    // ...
private:
    static constexpr std::size_t NumMemoryBlockPageShifts{7};
    static constexpr std::array<std::size_t, NumMemoryBlockPageShifts> MemoryBlockPageShifts{
        0xC, 0x10, 0x15, 0x16, 0x19, 0x1D, 0x1E};
        0xC, 0x10, 0x15, 0x16, 0x19, 0x1D, 0x1E,};
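
The suggestion adds a trailing comma. A minimal sketch of why that is a common convention (the array mirrors the one under review):

#include <array>
#include <cstddef>

// A trailing comma in a braced initializer is legal C++ and keeps any future
// addition to the list down to a one-line diff:
constexpr std::array<std::size_t, 7> MemoryBlockPageShifts{
    0xC, 0x10, 0x15, 0x16, 0x19, 0x1D, 0x1E,
};
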
bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

// ...

    static constexpr s32 GetBlockIndex(std::size_t num_pages) {
        for (s32 i{static_cast<s32>(NumMemoryBlockPageShifts) - 1}; i >= 0; i--) {
            if (num_pages >= (std::size_t(1) << MemoryBlockPageShifts[i]) / PageSize) {

Use static_cast instead of a functional-style cast.
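
A quick self-contained illustration of the requested change; the constants here are stand-ins, not the real values from the header:

#include <cstddef>

constexpr std::size_t PageSize = 0x1000; // stand-in value
constexpr std::size_t shift = 0x10;      // stand-in for MemoryBlockPageShifts[i]

// Functional-style cast, as currently written:
constexpr std::size_t functional = (std::size_t(1) << shift) / PageSize;

// static_cast, as requested: identical value, but explicit and easy to grep for.
constexpr std::size_t explicit_cast = (static_cast<std::size_t>(1) << shift) / PageSize;

static_assert(functional == explicit_cast);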

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

// ...

ResultCode PageTable::UnmapProcessCodeMemory(VAddr dst_addr, VAddr src_addr, std::size_t size) {
    // ...

    if (const ResultCode result{CheckMemoryState(dst_addr, size, MemoryState::All, state,
                                                 MemoryPermission::None, MemoryPermission::None,
                                                 MemoryAttribute::Mask, MemoryAttribute::None)};
        result.IsError()) {
        return result;
    }

    if (const ResultCode result{

ditto

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

// ...

ResultCode PageTable::UnmapProcessCodeMemory(VAddr dst_addr, VAddr src_addr, std::size_t size) {
    // ...

    if (const ResultCode result{CheckMemoryState(dst_addr, size, MemoryState::All, state,

ditto

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

// ...

ResultCode PageTable::InitializeForProcess(FileSys::ProgramAddressSpaceType as_type,
                                           bool enable_aslr, VAddr code_addr, std::size_t code_size,
                                           Memory::MemoryManager::Pool pool) {

    const auto GetSpaceStart = [&](AddressSpaceInfo::Type type) {
        return AddressSpaceInfo::GetAddressSpaceStart(address_space_width, type);
    };
    const auto GetSpaceSize = [&](AddressSpaceInfo::Type type) {

ditto

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

// This file references various implementation details from Atmosphère, an open-source firmware for
// the Nintendo Switch. Copyright 2018-2020 Atmosphère-NX.
// ...

class PageHeap final : NonCopyable {
public:
    static constexpr s32 GetAlignedBlockIndex(std::size_t num_pages, std::size_t align_pages) {
        const std::size_t target_pages{std::max(num_pages, align_pages)};
        for (std::size_t i = 0; i < NumMemoryBlockPageShifts; i++) {
            if (target_pages <= (std::size_t(1) << MemoryBlockPageShifts[i]) / PageSize) {

Use static_cast instead of a functional-style cast.

bunnei

comment created time in a month

Pull request review commentyuzu-emu/yuzu

Implement a new virtual memory manager

// ...

MemoryBlockManager::MemoryBlockManager(VAddr start_addr, VAddr end_addr)
    : start_addr{start_addr}, end_addr{end_addr} {
    const u64 num_pages{(end_addr - start_addr) / PageSize};
    memory_block_tree.emplace_back(start_addr, num_pages, MemoryState::Free, MemoryPermission::None,
                                   MemoryAttribute::None);
}

MemoryBlockManager::iterator MemoryBlockManager::FindIterator(VAddr addr) {
    iterator node{memory_block_tree.begin()};
    auto node{memory_block_tree.begin()};
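
A minimal self-contained illustration of the suggested change, using std::vector in place of the real memory_block_tree container:

#include <vector>

int main() {
    std::vector<int> memory_block_tree{1, 2, 3};

    // Spelling out the iterator type, as currently written:
    std::vector<int>::iterator verbose{memory_block_tree.begin()};

    // auto, as suggested: same deduced type, no repetition of the container type.
    auto node{memory_block_tree.begin()};

    (void)verbose;
    (void)node;
}
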
bunnei

comment created time in a month

push eventyuzu-emu/yuzu

bunnei

commit sha 598740f1ddc90a863357859dde3304886219adf6

service: friend: Stub IFriendService::GetBlockedUserListIds. - This is safe to stub, as there should be no adverse consequences from reporting no blocked users.

view details

Zach Hilman

commit sha e366b4ee1f3c29858614689396d302c96aee14f1

Merge pull request #3660 from bunnei/friend-blocked-users service: friend: Stub IFriendService::GetBlockedUserListIds.

view details

push time in 2 months

PR merged yuzu-emu/yuzu

service: friend: Stub IFriendService::GetBlockedUserListIds.
  • This is safe to stub, as there should be no adverse consequences from reporting no blocked users (see the sketch below).
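
A hedged sketch of what such a stub plausibly looks like, based on the IPC patterns visible in the review diffs elsewhere on this page; the exact signature and response layout of the real commit may differ:

void IFriendService::GetBlockedUserListIds(Kernel::HLERequestContext& ctx) {
    LOG_WARNING(Service_Friend, "(STUBBED) called");

    IPC::ResponseBuilder rb{ctx, 3};
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(0); // Assumption: a zero count reports no blocked users.
}
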
+10 -1

0 comment

1 changed file

bunnei

pr closed time in 2 months

push eventyuzu-emu/yuzu

bunnei

commit sha fc35803f9108711a1ba0e41cfe252ed74efca8a4

file_sys: patch_manager: Return early when there are no layers to apply.

view details

Zach Hilman

commit sha 8040f6d54430578e84ab60c2d219f23dfcf1862c

Merge pull request #3661 from bunnei/patch-manager-fix file_sys: patch_manager: Return early when there are no layers to apply.

view details

push time in 2 months

PR merged yuzu-emu/yuzu

file_sys: patch_manager: Return early when there are no layers to apply.

When there are no layers to apply, return early, as there is no need to rebuild the RomFS.
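
A minimal sketch of that guard; the types and the rebuild step below are hypothetical stand-ins for the patch_manager internals:

#include <vector>

struct Layer {};
struct RomFS {};

RomFS RebuildRomFS(const RomFS& base, const std::vector<Layer>& layers) {
    // ... expensive reconstruction of the layered filesystem ...
    return base;
}

RomFS ApplyLayeredFS(const RomFS& base, const std::vector<Layer>& layers) {
    // Return early when there are no layers: rebuilding would be pure
    // overhead with an identical result.
    if (layers.empty()) {
        return base;
    }
    return RebuildRomFS(base, layers);
}
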

+6 -0

0 comment

1 changed file

bunnei

pr closed time in 2 months

push eventyuzu-emu/yuzu

SilverBeamx

commit sha 22b5d5211e125f8f59c29caf21f16e6fc5d912ab

Hack BUILD_FULLNAME into GenerateSCMRev.cmake

view details

SilverBeamx

commit sha 6b512d78c994550ff653d519af7784023be5ebd2

Addressed feedback: removed CMake hack in favor of building the necessary strings via the supplied title format

view details

SilverBeamx

commit sha 5a66ca4697d5337d591a6a57ce1bb76db5f6fc1f

Removed leftover test code

view details

SilverBeamx

commit sha 863f7385dc6f7c232877bbabb5ff1068d06a7f96

Addressed feedback: switched to snake case and fixed clang-format errors

view details

Zach Hilman

commit sha 26ed65495d37f74afd12d2c7ed9b541189718bbd

Merge pull request #3621 from SilverBeamx/fullnamefix Log version and about section version fix

view details

push time in 2 months

PR merged yuzu-emu/yuzu

Log version and about section version fix frontend-fix

I noticed some time ago that the "human-readable" yuzu version was missing from the log and about section. This PR aims to fix that by "hacking together" the required parameter directly into GenerateSCMRev.cmake instead of modifying every single CI to pass the correct argument to cmake. I tested this against Mainline; I hope it works in the Patreon repo too, as I couldn't test it since that repo isn't public.

+15 -2

8 comments

2 changed files

SilverBeamx

pr closed time in 2 months

issue closedyuzu-emu/yuzu

Yuzu version missing from log file

Yuzu's log file is missing a "human-readable" version number. It looks like it is supposed to be placed before the git hash, but I was told this feature broke some time ago. Could someone look into it? I think this would help a lot in identifying reported issues stemming from outdated/EA builds.

closed time in 2 months

SilverBeamx

pull request commentyuzu-emu/yuzu

Log version and about section version fix

  1. We use snake_case, not camelCase, for variable names.
  2. Fix the clang formatting errors.

Then I'd be happy to merge.

SilverBeamx

comment created time in 2 months

pull request commentyuzu-emu/yuzu

Log version and about section version fix

Probably add something like 'development' or similar to indicate that it was not built on CI, but that's up to you.

SilverBeamx

comment created time in 2 months

pull request commentyuzu-emu/yuzu

Log version and about section version fix

For all intents and purposes, Common::g_title_bar_format_idle is the version of yuzu. I just double-checked on CI and it evaluates to either yuzu ### or yuzu Early Access ###. This is plenty sufficient for everywhere. Just make sure to have a fallback based on git rev if the variable doesn't exist (like what setWindowTitle does)
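
A minimal sketch of the fallback being described; the helper name and parameters are hypothetical, and only Common::g_title_bar_format_idle comes from the comment above:

#include <string>

// Prefer the CI-provided human-readable title (e.g. "yuzu 123" or
// "yuzu Early Access 123"); fall back to the git revision when the
// variable is empty, which indicates a non-CI build.
std::string GetReadableVersion(const std::string& title_bar_format_idle,
                               const std::string& scm_rev) {
    if (!title_bar_format_idle.empty()) {
        return title_bar_format_idle;
    }
    return "yuzu development " + scm_rev;
}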

SilverBeamx

comment created time in 2 months

push eventyuzu-emu/yuzu

Zach Hilman

commit sha 59e75f4372deb8bbdab8dd40ff78006b7cce6a10

ci: Update to Windows Server 2019 and Visual Studio 2019 This updates to the latest available toolchain for MSVC builds.

view details

push time in 2 months

Pull request review commentyuzu-emu/yuzu

am: Implement VR related APIs

void ICommonStateGetter::GetCurrentFocusState(Kernel::HLERequestContext& ctx) {
    rb.Push(static_cast<u8>(FocusState::InFocus));
}

void ICommonStateGetter::IsVrModeEnabled(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp{ctx};

    LOG_WARNING(Service_AM, "(STUBBED) called");

    IPC::ResponseBuilder rb{ctx, 3};
    rb.Push(RESULT_SUCCESS);
    // Yuzu does not have VR support yet.

This is wrong: the function doesn't implicitly describe what the 0 means. Please add the comment back.
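
A hedged sketch of what the reviewer is asking for; the rb.Push<u8>(0) line is an assumption about how the response continues past the quoted diff:

IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
// Yuzu does not have VR support yet, so report VR mode as disabled.
rb.Push<u8>(0);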

perillamint

comment created time in 2 months

fork DarkLordZach/Radiance

Mirror of the master Radiance cvs source repo, used for the creation of Radiance installers for the NREL OpenStudio project

fork in 2 months
