Balancer github


Balancer

An experimental, zero-allocation balancing machine that can be plugged in anywhere.

Balancer helps you balance jobs/requests/messages between workers. It is well tested and ready for production use under high throughput.

*Balancer is not a reverse proxy; it is a router, so you can balance anything across any type of node pool.

API

The interface of Balancer is as simple as:

// Balancer selects a node to send load to.
type Balancer struct {
    Policy SelectionPolicy
}

// Add adds one or more nodes to the pool.
func (b *Balancer) Add(node ...Node) { /* ... */ }

// Next selects a node for the given client ID using the configured policy.
func (b *Balancer) Next(clientID string) Node { /* ... */ return nil }

The selection policy is one of the following (a minimal round-robin sketch in Go follows the list):

  • RoundRobin - selects the next available upstream on every request
  • Hash - maps a client to an upstream using consistent hashing on its own ID (any string, such as an IP address or a user ID)
  • LeastConnection - selects the node with the fewest active connections (using Node.Load)
  • LeastTime - selects the node with the lowest response time (using Node.AverageResponseTime)
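
The repository's actual policy types aren't reproduced above, so the snippet below is only a rough sketch of how a round-robin policy could be written against a pared-down node interface. The type names and the Next signature are assumptions made for this example, not the library's API.

```go
package main

import (
    "fmt"
    "sync/atomic"
)

// node is a pared-down stand-in for the Node interface shown under
// "Experiments" below; only the methods this sketch needs are included.
type node interface {
    IsHealthy() bool
    Host() string
}

// RoundRobin cycles through the pool, skipping unhealthy nodes.
// Hypothetical sketch, not the library's actual implementation.
type RoundRobin struct {
    counter uint64
}

// Next returns the next healthy node, or nil if none is available.
func (r *RoundRobin) Next(nodes []node) node {
    for range nodes {
        idx := int((atomic.AddUint64(&r.counter, 1) - 1) % uint64(len(nodes)))
        if n := nodes[idx]; n.IsHealthy() {
            return n
        }
    }
    return nil
}

// upstream is a toy node used only for this example.
type upstream struct{ host string }

func (u upstream) IsHealthy() bool { return true }
func (u upstream) Host() string    { return u.host }

func main() {
    rr := &RoundRobin{}
    pool := []node{upstream{"worker-1"}, upstream{"worker-2"}}
    for i := 0; i < 3; i++ {
        fmt.Println(rr.Next(pool).Host()) // worker-1, worker-2, worker-1
    }
}
```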

Experiments

Any type that satisfies the Node interface will work as a node:

type Node interface {
    IsHealthy() bool
    TotalRequest() uint64
    AverageResponseTime() time.Duration
    Load() int64
    Host() string
}

You decide the selection policy on init like:

package main

balancer := Balancer{
    Policy: &RoundRobin{},
}

Add nodes, and the balancer will decide the next upstream based on the policy:

balancer.Add(&Upstream{Host: "worker-1"})
selectednode := balancer.Next("client-1")

Benchmark

Those results are for 10 upstreams where half of them are down but still in the pool (see balancer_test.go for details). I tried to keep the mock as close to a real example as possible (increasing current load and load times, and keeping half of the nodes down all the time), so these results represent the worst case of finding the best upstream in the pool.

Real World Examples

-- coming soon

TODO

Contributing

License

Source: https://github.com/tufanbarisyildirim/balancer

Introduction


dpvs-logo.png

DPVS is a high-performance Layer-4 load balancer based on DPDK. It's derived from Linux Virtual Server (LVS) and its modification alibaba/LVS.

Notes: The name DPVS comes from "DPDK-LVS".

dpvs.png

Several techniques are applied for high performance:

  • Kernel by-pass (user space implementation).
  • Share-nothing, per-CPU for key data (lockless).
  • RX Steering and CPU affinity (avoid context switch).
  • Batching TX/RX.
  • Zero Copy (avoid packet copy and syscalls).
  • Polling instead of interrupt.
  • Lockless message for high performance IPC.
  • Other techs enhanced by DPDK.

Major features of DPVS include:

  • L4 Load Balancer, including FNAT, DR, Tunnel, DNAT modes, etc.
  • SNAT mode for Internet access from internal network.
  • NAT64 forwarding in FNAT mode for quick IPv6 adaptation without application changes.
  • Different scheduling algorithms like RR, WLC, WRR, MH (Maglev Hashing), Conhash (Consistent Hashing), etc. (an illustrative consistent-hashing sketch follows this list).
  • User-space Lite IP stack (IPv4/IPv6, Routing, ARP, Neighbor, ICMP ...).
  • Support KNI, VLAN, Bonding, Tunneling for different IDC environments.
  • Security aspects: support for TCP syn-proxy, Conn-Limit, black-list/white-list.
  • QoS: Traffic Control.
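
The Conhash scheduler listed above can be illustrated with a short, self-contained Go sketch. This is a generic textbook formulation of consistent hashing with virtual nodes, purely for intuition; it is not DPVS's actual C implementation, and the server names are invented for the example.

```go
package main

import (
    "fmt"
    "hash/fnv"
    "sort"
)

// ring is a minimal consistent-hash ring with virtual nodes.
type ring struct {
    points []uint32          // sorted hash points on the ring
    owner  map[uint32]string // hash point -> real server
}

func hash32(s string) uint32 {
    h := fnv.New32a()
    h.Write([]byte(s))
    return h.Sum32()
}

func newRing(servers []string, vnodes int) *ring {
    r := &ring{owner: make(map[uint32]string)}
    for _, s := range servers {
        for i := 0; i < vnodes; i++ {
            p := hash32(fmt.Sprintf("%s#%d", s, i))
            r.owner[p] = s
            r.points = append(r.points, p)
        }
    }
    sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
    return r
}

// pick returns the server owning the first point clockwise from the key's hash.
func (r *ring) pick(key string) string {
    h := hash32(key)
    i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
    if i == len(r.points) {
        i = 0
    }
    return r.owner[r.points[i]]
}

func main() {
    r := newRing([]string{"rs1", "rs2", "rs3"}, 100)
    fmt.Println(r.pick("192.0.2.10:443")) // the same client always maps to the same RS
}
```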

The feature modules of DPVS are illustrated in the following picture.

modules

Test Environment

This quick start is tested with the environment below.

  • Linux Distribution: CentOS 7.2
  • Kernel: 3.10.0-327.el7.x86_64
  • CPU: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
  • NIC: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 03)
  • Memory: 64G with two NUMA nodes.
  • GCC: gcc version 4.8.5 20150623 (Red Hat 4.8.5-4)

Other environments should also be OK if DPDK works; please check dpdk.org for more info.

Notes: To let dpvs work properly with multiple cores, rte_flow items must support the four item types "ipv4, ipv6, tcp, udp", and rte_flow actions must support at least "drop" and "queue".

Clone DPVS

$ git clone https://github.com/iqiyi/dpvs.git
$ cd dpvs

Well, let's start from DPDK then.

DPDK setup.

Currently, dpdk-stable-20.11.1 is recommended for DPVS, and dpdk versions earlier than dpdk-20.11 are no longer supported. If you are still using an earlier dpdk version, please use an earlier dpvs release, such as v1.8.10.

Notes: You can skip this section if you are experienced with DPDK; refer to the link for details.

$ wget https://fast.dpdk.org/rel/dpdk-20.11.1.tar.xz   # download from dpdk.org if the link fails
$ tar xf dpdk-20.11.1.tar.xz

DPDK patches

There are some patches for DPDK to support extra features needed by DPVS. Apply them if needed. For example, there's a DPDK driver patch for hardware multicast; apply it if you plan to use that feature on your device.

Notes: We assume we are in the DPVS root directory and dpdk-stable-20.11.1 is under it. This layout is not mandatory, just convenient.

Tips: It's advised to apply all the patches if you are not sure what they are meant for.

DPDK build and install

Use meson-ninja to build the DPDK libraries, and export an environment variable for the DPDK app (DPVS). The DPVS build checks for the presence of libdpdk.

$ cd dpdk-stable-20.11.1
$ mkdir dpdklib                 # user desired install folder
$ mkdir dpdkbuild               # user desired build folder
$ meson -Denable_kmods=true -Dprefix=dpdklib dpdkbuild
$ ninja -C dpdkbuild
$ cd dpdkbuild; ninja install
$ export PKG_CONFIG_PATH=$(pwd)/../dpdklib/lib64/pkgconfig/libdpdk.pc

Tips: You can use the script dpdk-build.sh to facilitate the dpdk build; see the script itself for usage.

Next, set up DPDK hugepages. Our test environment is a NUMA system; for a single-node system please refer to the link.

$ # for NUMA machine
$ echo 8192 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
$ echo 8192 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
$ mkdir /mnt/huge
$ mount -t hugetlbfs nodev /mnt/huge

Install the kernel modules and bind the NIC with the driver. This quick start uses only one NIC; normally we use two for an FNAT cluster, or even four for bonding mode. For example, suppose the NIC we will use to run DPVS is eth0, while we keep another standalone NIC eth1 for debugging.

$ modprobe uio_pci_generic
$ cd dpdk-stable-20.11.1
$ insmod dpdkbuild/kernel/linux/kni/rte_kni.ko carrier=on
$ ./usertools/dpdk-devbind.py --status
$ ifconfig eth0 down                                    # assuming eth0 is 0000:06:00.0
$ ./usertools/dpdk-devbind.py -b uio_pci_generic 0000:06:00.0

Notes:

  1. An alternative to uio_pci_generic exists, but it has been moved to the separate repository dpdk-kmods.
  2. A kernel module parameter carrier was added to rte_kni.ko in DPDK v18.11, and its default value is "off". We need to load rte_kni.ko with the extra parameter carrier=on to make KNI devices work properly.

The dpdk-devbind.py tool can also be used to unbind the driver and switch the NIC back to its original Linux driver. You can check a NIC's PCI bus-id with standard system tools. Please refer to the DPDK site for more details.

Notes: The PMD of Mellanox NICs is built on top of libibverbs using the Raw Ethernet Accelerated Verbs API. It doesn't rely on a UIO/VFIO driver, so Mellanox NICs should not be bound to a UIO driver. Refer to Mellanox DPDK for details.

Build DPVS

It's simple: just set PKG_CONFIG_PATH and build it.

$ export PKG_CONFIG_PATH=<path-of-libdpdk.pc>   # normally located at dpdklib/lib64/pkgconfig/libdpdk.pc
$ cd <path-of-dpvs>
$ make              # or "make -j" to speed up
$ make install

Notes:

  1. Build dependencies may be needed, such as pkg-config (version 0.29.2+) and other common development tools and libraries. You can install the missing dependencies with the system's package manager, e.g., yum (CentOS).
  2. Early pkg-config versions (before v0.29.2) may cause dpvs build failures. If so, please upgrade this tool.

Output files are installed to the bin/ directory:

$ ls bin/
dpip  dpvs  ipvsadm  keepalived
  • dpvs is the main program.
  • dpip is the tool to set IP address, route, vlan, neigh, etc.
  • ipvsadm and keepalived come from LVS; both are modified.

Launch DPVS

Now, dpvs.conf must be located at /etc/dpvs.conf; just copy it from conf/dpvs.conf.single-nic.sample.

$ cp conf/dpvs.conf.single-nic.sample /etc/dpvs.conf

and start DPVS,

$ cd <path-of-dpvs>/bin
$ ./dpvs &

Check if it has started:

$ ./dpip link show
1: dpdk0: socket 0 mtu 1500 rx-queue 8 tx-queue 8
    UP 10000 Mbps full-duplex fixed-nego promisc-off
    addr A0:36:9F:9D:61:F4 OF_RX_IP_CSUM OF_TX_IP_CSUM OF_TX_TCP_CSUM OF_TX_UDP_CSUM

If you see this output, well done: DPVS is working with the NIC!

Don't worry if you see this error:

It means the NIC count seen by DPVS does not match the configuration in /etc/dpvs.conf. Please adjust the number of bound NICs or modify dpvs.conf accordingly. We'll improve this part to make DPVS "clever" enough to avoid editing the config file when the NIC count does not match.

What config items does DPVS support, and how do you configure them? DPVS maintains a config item file which lists all supported config entries and their feasible values. Besides, several sample config files are maintained to show the configuration of dpvs in some specific cases.

Test Full-NAT (FNAT) Load Balancer

The test topology looks like the following diagram.

fnat-single-nic

Set the VIP and Local IP (LIP, needed by FNAT mode) on DPVS. Let's put the commands into setup.sh. You can verify the result with the dpip and ipvsadm tools.

$ cat setup.sh
VIP=192.168.100.100
LIP=192.168.100.200
RS=192.168.100.2

./dpip addr add ${VIP}/24 dev dpdk0
./ipvsadm -A -t ${VIP}:80 -s rr
./ipvsadm -a -t ${VIP}:80 -r ${RS} -b
./ipvsadm --add-laddr -z ${LIP} -t ${VIP}:80 -F dpdk0
$
$ ./setup.sh

Access the VIP from the client; it looks good!

client $ curl 192.168.100.100
Your ip:port : 192.168.100.3:56890

Tutorial Docs

More configuration examples can be found in the Tutorial Document, including:

  • WAN-to-LAN reverse proxy.
  • Direct Route (DR) mode setup.
  • Master/Backup model (keepalived) setup.
  • OSPF/ECMP cluster model setup.
  • SNAT mode for Internet access from an internal network.
  • Virtual devices (Bonding, VLAN, Tunnel, KNI).
  • Getting the real UDP client IP/port in FNAT mode.
  • ... and more ...

We also listed some frequently asked questions in the FAQ Document. It may help when you run into problems with DPVS.

Our tests show the forwarding speed (pps) of DPVS is several times that of LVS and as good as Google's Maglev.

performance

Please refer to the License file for details.

Please refer to the CONTRIBUTING file for details.

Currently, DPVS has been widely accepted by dozens of community cooperators, who have successfully used and contributed a lot to DPVS. We just list some of them alphabetically as below.

DPVS has been developed by the iQiYi QLB team since April 2016. It's widely used in iQiYi IDCs for L4 load balancer and SNAT clusters, and we have already replaced nearly all of our LVS clusters with DPVS. We open-sourced DPVS in October 2017, and are excited to see more people get involved in this project. Welcome to try it, report issues, and submit pull requests. And please feel free to contact us through Github or Email.

  • github: https://github.com/iqiyi/dpvs
  • email: (Please remove the white-spaces and replace with ).
Source: https://github.com/iqiyi/dpvs

Welcome to Balancer Simulations!

Balancer Simulations is a project of TE-AMM and the Token Engineering Community, funded by grants from Balancer and PowerPool, and kicked off by EthicHub.
It aims to build infrastructure and knowledge for rigorous Balancer Token Engineering to leverage the full power of Balancer Pools as a core building block in DeFi. We invite any project building on Balancer Pools to join our Discord Channel, use the model, and benefit from Balancer Simulations.

  • Analyze existing Balancer Pools using on-chain transaction data, understand pool characteristics, strengths, and weaknesses
  • Gain intuition by exploring pool variants, observe system behavior over time, derive the most valuable monitoring metrics for your use case
  • Run experiments based on historical transactions, mix historical and synthetic transactions to model particular market scenarios
  • Develop and test adaptive Dynamic AMM solutions, like Dynamic Weights Changing, test and optimize controls and feedback loops
  • Model agent behavior and apply Reinforcement Learning to run stress tests for a proposed system design

All research and models are available through open source repositories, and will be further developed by TE-AMM.
For detailed information, please visit the Balancer Simulations Documentation.

Source: https://github.com/TokenEngineeringCommunity/BalancerPools_Model

Balancer


This repository contains Balancer Protocol V2's core smart contract, along with auxiliary contracts.

For a high-level introduction to Balancer V2, see Introducing Balancer V2: Generalized AMMs.

Structure

Active development occurs in this repository, which means some contracts in it may not be production-ready. Proceed with proper care.

Directories

  • Source code for all smart contracts in the system, organized as follows:
    • the core contract, which is split across many files for separation of concerns and clarity;
    • the code for the different Pool types and related contracts, such as factories;
    • contracts that are only used for testing purposes, often with lax access control patterns and other unsafe properties.
  • Unit tests for each smart contract, using ethers and waffle chai matchers. A subdirectory holds helper utilities used to simplify writing assertions, deploying test contracts, etc., with the overall goal of making tests more ergonomic and less verbose.
  • Miscellaneous files used for deployment, gas benchmarking, testing and so on.

This repository will soon be migrated into a monorepo, making the different contracts, interfaces and libraries easier to use by third parties. Stay tuned!

Security

Multiple independent reviews and audits were performed by Certora, OpenZeppelin and Trail of Bits. The latest reports from these engagements are included in the repository.

Bug bounties apply to most of the smart contracts hosted in this repository: head to Balancer V2 Bug Bounties to learn more.

Licensing

Most of the source code is licensed under the GNU General Public License Version 3 (GPL v3): see the LICENSE file.

Exceptions

  • Files based on the OpenZeppelin Contracts library are licensed under the MIT License: see LICENSE.
  • One additional file is licensed under the MIT License.
  • All other files in these directories are unlicensed.
Source: https://github.com/alcuadrado/balancer-core-v2


Scalelite

BigBlueButton is an open source web conferencing system for online learning.

Scalelite is an open source load balancer that manages a pool of BigBlueButton servers. It makes the pool of servers appear as a single (very scalable) BigBlueButton server. A front-end, such as Moodle or Greenlight, sends standard BigBlueButton API requests to the Scalelite server which, in turn, distributes those requests to the least loaded BigBlueButton server in the pool.

A single BigBlueButton server that meets the minimum configuration supports around 200 concurrent users.

For many schools and organizations, the ability to support 4 simultaneous classes of 50 users, or 8 simultaneous meetings of 25 users, is enough capacity. However, what if a school wants to support 1,500 users across 50 simultaneous classes? A single BigBlueButton server cannot handle such a load.

With Scalelite, a school can create a pool of 4 BigBlueButton servers and handle 16 simultaneous classes of 50 users. Want to scale higher? Add more BigBlueButton servers to the pool.

BigBlueButton has been in development for over 10 years now. The latest release is a pure HTML5 client, with extensive documentation. There is even a BigBlueButton install script, bbb-install.sh, that lets you set up a BigBlueButton server (with a Let's Encrypt certificate) in about 15 minutes. Using bbb-install.sh you can quickly set up a pool of servers for management by Scalelite.

To load balance the pool, Scalelite periodically polls each BigBlueButton server to check whether it is reachable online and ready to receive API requests, and to determine its current load (the number of currently running meetings). With this information, when Scalelite receives an incoming API call to create a new meeting, it places the new meeting on the least loaded server in the pool. In this way, Scalelite can balance the load of meeting requests evenly across the pool.
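
As a rough illustration of that selection step only (Scalelite itself is a Ruby on Rails application; the types and field names below are invented for the example), picking the least loaded reachable server can be sketched like this:

```go
package main

import "fmt"

// bbbServer is a simplified stand-in for the per-server state Scalelite tracks.
type bbbServer struct {
    URL      string
    Online   bool // reachable and ready to accept API requests
    Meetings int  // currently running meetings (the load measure)
}

// leastLoaded picks the reachable server with the fewest running meetings.
func leastLoaded(servers []bbbServer) (bbbServer, bool) {
    best := -1
    for i, s := range servers {
        if !s.Online {
            continue
        }
        if best == -1 || s.Meetings < servers[best].Meetings {
            best = i
        }
    }
    if best == -1 {
        return bbbServer{}, false
    }
    return servers[best], true
}

func main() {
    pool := []bbbServer{
        {URL: "https://bbb1.example.com", Online: true, Meetings: 12},
        {URL: "https://bbb2.example.com", Online: false, Meetings: 0},
        {URL: "https://bbb3.example.com", Online: true, Meetings: 7},
    }
    if s, ok := leastLoaded(pool); ok {
        fmt.Println("create the new meeting on", s.URL) // bbb3 in this example
    }
}
```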

Many BigBlueButton servers will create many recordings. Scalelite can serve a large set of recordings by consolidating them together, indexing them in a database, and, when receiving an incoming getRecordings request, using the database index to quickly return the list of available recordings.

Before you begin

The Scalelite installation process requires advanced technical knowledge. You should, at a minimum, be very familiar with:

  • Setup and administration of a BigBlueButton server
  • Setup and administration of a Linux server, and using common tools to manage processes on the server
  • How the BigBlueButton API works with a front-end
  • How docker containers work
  • How UDP and TCP/IP work together
  • How to administrate a Linux Firewall
  • How to setup a TURN server

If you are a beginner, you will have a difficult time getting any part of this deployment correct. If you require help, see Getting Help.

Architecture of Scalelite

There are several components required to get Scalelite up and running:

  1. Multiple BigBlueButton Servers
  2. Scalelite LoadBalancer Server
  3. NFS Shared Volume
  4. PostgreSQL Database
  5. Redis Cache

An example Scalelite deployment will look like this:

Minimum Server Requirements

For the Scalelite Server, the minimum recommended server requirements are:

For each BigBlueButton server, the minimum requirements can be found here.

For the external Postgres Database, the minimum recommended server requirements are:

  • 2 CPU Cores
  • 2 GB Memory
  • 20 GB Disk Space (should be good for tens of thousands of recordings)

For the external Redis Cache, the minimum recommended server requirements are:

  • 2 CPU Cores
  • 0.5GB Memory
  • Persistence must be enabled

Setup a pool of BigBlueButton servers

To set up a pool of BigBlueButton servers (the minimum recommended number is 3), we recommend using bbb-install.sh, as it can automate the steps to install, configure (with SSL + Let's Encrypt), and update the server when new versions of BigBlueButton are released.

To help users who are behind restrictive firewalls send/receive media (audio, video, and screen share) to your BigBlueButton server, you should set up a TURN server and configure each BigBlueButton server to use it.

Again, bbb-install.sh can automate this process for you.

Setup a shared volume for recordings

See Setting up a shared volume for recordings

Setup a PostgreSQL Database

Setting up a PostgreSQL Database depends heavily on the infrastructure you use to setup Scalelite. We recommend you refer to your infrastructure provider's documentation.

Ensure that the database connection URL you set in the Scalelite configuration (in the next step) matches the connection URL of your PostgreSQL database.

For more configuration options, see configuration.

Setup a Redis Cache

Setting up a Redis Cache depends heavily on the infrastructure you use to setup Scalelite. We recommend you refer to your infrastructure provider's documentation.

Ensure that the Redis connection URL you set in the Scalelite configuration (in the next step) matches the connection URL of your Redis cache.

For more configuration options, see configuration.

Deploying Scalelite Docker Containers

See Deploying Scalelite Docker Containers

Configure your Front-End to use Scalelite

To switch your front-end application to use Scalelite instead of a single BigBlueButton server, there are two changes that need to be made:

  • The BigBlueButton endpoint URL should be set to the URL of your Scalelite deployment
  • The BigBlueButton secret should be set to the value that you configured in Scalelite

Configuration

Source: https://github.com/blindsidenetworks/scalelite

Convey

Convey is a layer 4 load balancer with dynamic configuration loading, featuring proxy, passthrough, and direct server return modes.

Features

  • Stats page (at /stats) with basic connection/bytes counters and backend server pool statuses
  • Dynamic configuration re-loading of backend servers and associated weights. Configuration is loaded via a .toml file (see sample.toml for a full example).
  • Tcp-based health checking of backend servers at a configured interval (a small illustrative sketch follows this list). If a server fails its health check, it will be automatically removed from selection and added back once its health checks are successful.
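
The health-check idea in the last bullet can be illustrated with a few lines of Go. Convey itself is written in Rust; the interval, timeout, and addresses below are arbitrary values chosen for the example, not the project's defaults.

```go
package main

import (
    "fmt"
    "net"
    "time"
)

// tcpHealthy reports whether a backend accepts TCP connections within the timeout.
func tcpHealthy(addr string, timeout time.Duration) bool {
    conn, err := net.DialTimeout("tcp", addr, timeout)
    if err != nil {
        return false
    }
    conn.Close()
    return true
}

func main() {
    backends := []string{"10.0.0.1:8080", "10.0.0.2:8080"}
    ticker := time.NewTicker(5 * time.Second) // configured check interval (arbitrary here)
    defer ticker.Stop()

    for range ticker.C {
        for _, addr := range backends {
            // In a real balancer the result would move the backend in or out
            // of the selection pool; here we just print it.
            fmt.Printf("%s healthy=%v\n", addr, tcpHealthy(addr, 2*time.Second))
        }
    }
}
```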

Proxy Features

  • Event-driven TCP load balancer built on tokio.
  • Weighted round-robin load balancing (a short illustrative sketch of this scheme appears after this list). For uniform round robin, simply leave out the weights or set them to be equal.
  • TCP connection termination
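
Convey's Rust scheduler isn't reproduced here; as an illustration of the general idea, the Go sketch below shows one common formulation of weighted round-robin (the "smooth" variant popularized by nginx). It is not Convey's actual code, and Convey's weighting scheme may differ.

```go
package main

import "fmt"

// backend holds a server's configured weight and its running score.
type backend struct {
    addr    string
    weight  int
    current int
}

// next implements "smooth" weighted round-robin: on every pick, each backend's
// score grows by its weight, the highest score wins, and the winner is
// penalized by the total weight so heavy backends are interleaved rather than
// bunched together.
func next(pool []*backend) *backend {
    if len(pool) == 0 {
        return nil
    }
    total := 0
    var best *backend
    for _, b := range pool {
        b.current += b.weight
        total += b.weight
        if best == nil || b.current > best.current {
            best = b
        }
    }
    best.current -= total
    return best
}

func main() {
    pool := []*backend{
        {addr: "10.0.0.1:8080", weight: 3},
        {addr: "10.0.0.2:8080", weight: 1},
    }
    for i := 0; i < 4; i++ {
        fmt.Println(next(pool).addr) // .1, .1, .2, .1 with these weights
    }
}
```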

Passthrough and Direct Server Return (DSR) Features

  • Packet forwarding (no TCP termination)
  • Minimal internal connection tracking
  • NAT

Usage

Passthrough mode

For passthrough mode we need a couple of iptables rules on the convey load balancer to handle ingress packets from the client and responses from the backend load balanced servers. Since TCP is not terminated, we need to ensure the OS does not send a RST in response to packets destined for a port that does not have a process bound to it. We need to do the same for any packets coming back through from a backend server. Convey internally assigns ephemeral ports 32768-61000 to map connections to clients.
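
To make the ephemeral-port idea concrete, here is a hypothetical Go sketch of the kind of mapping table such a balancer could keep. Convey is written in Rust and its real connection tracking will differ; every name below is invented for the example.

```go
package main

import (
    "errors"
    "fmt"
)

const (
    portMin = 32768 // ephemeral range mentioned above
    portMax = 61000
)

// clientConn identifies an incoming client connection.
type clientConn struct {
    ip   string
    port uint16
}

// portMap hands out ephemeral ports that stand in for client connections.
type portMap struct {
    byPort map[uint16]clientConn
    byConn map[clientConn]uint16
    next   uint16
}

func newPortMap() *portMap {
    return &portMap{
        byPort: make(map[uint16]clientConn),
        byConn: make(map[clientConn]uint16),
        next:   portMin,
    }
}

// assign returns the port already mapped to this connection, or allocates
// the next free one in the 32768-61000 range.
func (m *portMap) assign(c clientConn) (uint16, error) {
    if p, ok := m.byConn[c]; ok {
        return p, nil
    }
    for i := 0; i <= portMax-portMin; i++ {
        p := m.next
        if m.next == portMax {
            m.next = portMin
        } else {
            m.next++
        }
        if _, used := m.byPort[p]; !used {
            m.byPort[p] = c
            m.byConn[c] = p
            return p, nil
        }
    }
    return 0, errors.New("ephemeral port range exhausted")
}

func main() {
    m := newPortMap()
    p, _ := m.assign(clientConn{ip: "203.0.113.7", port: 51234})
    fmt.Println("client mapped to local port", p) // 32768 on the first call
}
```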

passthrough

For passthrough mode on the convey load balancer

To run

DSR Mode

For dsr mode we need the same iptables rule for ingress packets. Responses from the backend load balanced servers go directly to the clients. The "listening" port on the convey load balancer must match the backend servers' listening ports in this mode.

dsr

For dsr mode on the convey load balancer

In dsr mode the backend servers "participate" in that their response packets must be sent directly to the client. Convey does not do any encapsulation, so, for example, a GRE tunnel is not an option. Instead, Traffic Control can be used as an egress NAT.

For dsr mode on backend servers

To run

Proxy

No special setup necessary

proxy

To run

Tests

The easiest way to run tests is to run them as superuser. This is because some of the tests spin up test servers as well as a convey load balancer instance.

AF_XDP

There is a WIP branch using AF_XDP to load balance in passthrough and DSR modes. For now it will be maintained as a separate branch since it requires kernel version 5.4 or greater.

Build

Source: https://github.com/bparli/convey


Ampl Elastic Configuration Rights Pool



An extension of Balancer Labs' configurable rights pool (smart pool).

When the Ampleforth protocol adjusts supply, it expects market actors to propagate this information back into price. However, un-informed AMMs like Uniswap and Balancer do this automatically, since they price assets purely by relative pool balances. This lets arbitrageurs extract value from liquidity providers on these platforms.

We aim to create a Balancer smart pool which mitigates this problem by adjusting pool weights proportionally to the rebase. This ensures that the price of Amples in the smart pool is unaffected by rebase-induced supply adjustments.
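
As a back-of-the-envelope sketch of why this works, assuming Balancer's standard spot-price formula and ignoring swap fees, the pool quotes

$$ SP = \frac{B_{\mathrm{AMPL}}/w_{\mathrm{AMPL}}}{B_{o}/w_{o}}. $$

A rebase scales the pool's AMPL balance by some factor $r$ ($B_{\mathrm{AMPL}} \to r\,B_{\mathrm{AMPL}}$). If the smart pool simultaneously scales the AMPL weight by the same factor ($w_{\mathrm{AMPL}} \to r\,w_{\mathrm{AMPL}}$; any renormalization of the weights cancels out of the ratio), then

$$ SP' = \frac{r\,B_{\mathrm{AMPL}}/(r\,w_{\mathrm{AMPL}})}{B_{o}/w_{o}} = SP, $$

so the quoted price of AMPL against the other pool token is unchanged by the rebase, which is exactly the property described above.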

The weight-adjustment method is invoked atomically, just after rebase, from Ampleforth's Orchestrator.

Getting started

Source: https://github.com/ampleforth/ampl-balancer

