Content related to the keyword

multi-agent reinforcement learning (MARL)


1.

Drone-Based Network Coverage Expansion in 6G Networks (Scientific Research Article, Ministry of Science)

Keywords: UAV, 6G, network coverage, interference management, energy efficiency, multi-agent reinforcement learning (MARL), trajectory optimization, latency reduction, SINR, real-time optimization

Background: The emergence of 6G networks calls for new approaches to extending coverage, increasing network availability, and optimizing performance in difficult conditions, including both urban and rural areas. Unmanned aerial vehicles (UAVs) have emerged as a strong candidate for addressing these problems by offering on-demand coverage and diverse communication services.

Objective: This study examines the use of UAVs to extend network coverage, with a focus on energy efficiency, latency, and inter-UE interference in high-density 6G environments.

Methods: A three-layered optimization architecture was devised, combining multi-agent reinforcement learning (MARL) for interference control, trajectory optimization techniques, and energy-aware deployment schemes. Small-scale scenarios covering urban, suburban, and rural environments were considered, and the results were analyzed in terms of network coverage, energy efficiency, end-to-end latency, and interference experienced by the UAVs.

Results: The UAVs closed considerable coverage gaps and markedly improved spatial network coverage in rural and suburban regions. The reported gains include up to a 30.5% improvement in energy efficiency, more than a 50% reduction in latency, and interference management that yielded a 35.4% improvement in SINR.

Conclusion: Integrating drones into 6G networks is invaluable for extending coverage, offering wide-area service while providing scalable solutions to coverage gaps, power demands, and real-time network adjustment. Future studies should focus on improving real-time adaptability and energy consumption to suit large-scale deployments.
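
As a rough illustration of the MARL-based interference-management idea described in the abstract, the sketch below shows independent Q-learning agents, one per UAV, each choosing a transmit-power level to trade its own SINR against the interference it causes to neighbours and an energy penalty. This is a minimal, assumption-laden toy, not the paper's architecture: the number of UAVs, the discrete power levels, the random channel-gain matrix, the noise power, and the reward weights are all hypothetical values chosen for demonstration.

```python
# Minimal sketch (not from the paper): stateless independent Q-learning per UAV
# over discrete transmit-power levels, rewarded by log-SINR minus an energy cost.
# All constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

NUM_UAVS = 4                                  # assumed number of UAV agents
POWER_LEVELS = np.array([0.1, 0.5, 1.0])      # assumed discrete transmit powers (W)
NOISE = 1e-3                                  # assumed noise power
GAIN = rng.uniform(0.5, 1.5, size=(NUM_UAVS, NUM_UAVS))  # assumed channel gains

def sinr(powers, i):
    """SINR of UAV i's link under the assumed gain matrix."""
    signal = GAIN[i, i] * powers[i]
    interference = sum(GAIN[j, i] * powers[j] for j in range(NUM_UAVS) if j != i)
    return signal / (interference + NOISE)

# One Q-table per agent over its own action space (bandit-style, no shared state).
q_tables = np.zeros((NUM_UAVS, len(POWER_LEVELS)))
EPSILON, ALPHA = 0.1, 0.1

for episode in range(2000):
    # Each agent picks a power level epsilon-greedily from its own Q-table.
    actions = np.array([
        rng.integers(len(POWER_LEVELS)) if rng.random() < EPSILON
        else int(np.argmax(q_tables[i]))
        for i in range(NUM_UAVS)
    ])
    powers = POWER_LEVELS[actions]
    # Reward: spectral-efficiency proxy minus an assumed energy penalty, learned independently.
    for i in range(NUM_UAVS):
        reward = np.log2(1.0 + sinr(powers, i)) - 0.2 * powers[i]
        q_tables[i, actions[i]] += ALPHA * (reward - q_tables[i, actions[i]])

print("Learned power choices (W):", POWER_LEVELS[np.argmax(q_tables, axis=1)])
```

Independent learners are only the simplest MARL variant; a fuller treatment of the joint interference, trajectory, and energy objectives the abstract describes would typically require richer state (UAV positions, user demand) and a coordinated or centralized-training scheme.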