RabbitMQ Channel reuse (SimpleMessageListenerContainer) - spring-rabbit

My Spring Boot application's functionality is to listen for messages on a RabbitMQ queue, do some processing in onMessage, and then publish the message to another RabbitMQ queue. We are using spring-rabbit (1.7.2.RELEASE). We have configured the listener using SimpleMessageListenerContainer.
My question is: can I publish using the same channel on which I am reading the messages? Does spring-rabbit provide access to the channel used by the listener, so that the same channel can be reused for publishing?
Thanks,
Smita

If you use transactions (in the listener container), any operations performed by a transactional RabbitTemplate on the container thread will participate in the transaction and use the same channel.
If you are not using transactions, you can use a ChannelAwareMessageListener to access the channel the message was received on. See Message Listeners.
If you are using @RabbitListener, you can add the Channel as a method parameter.
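For example, a minimal sketch of the ChannelAwareMessageListener approach (Java, since spring-rabbit is a Java API; the package location is the one used in the 1.7.x line, and the outbound exchange and routing key are placeholders):

import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.core.ChannelAwareMessageListener;
import com.rabbitmq.client.Channel;

public class ForwardingListener implements ChannelAwareMessageListener {

    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        // ... do the processing ...
        // Publish on the very channel the message was received on.
        channel.basicPublish("out.exchange", "out.key", null, message.getBody());
    }
}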
The current 1.7.x release is 1.7.9.

Related

Azure NodeJS Functions and Service Bus, DeadLetter message

Is it possible to move a message to the dead-letter queue in a Node.js function triggered by a Service Bus message from a topic subscription? It appears to be possible for C# functions, but I need the same functionality in a Node.js function. Azure WebJobs SDK Service Bus DeadLetter queue
As far as I know, there is no function we can use to move a message to the dead-letter queue from Node.js. See missing implementation of service bus message deadletter function #2019. It may be helpful to ask for this feature by giving feedback to the Azure team.

Using AMQP to maintain a long-term connection to a remote worker

I'm trying to model the following scenario.
The server dispatches a 'START' action to a worker process via AMQP as follows (assume channel and action are already in scope, action is a START action with some payload, and resolve/reject come from an enclosing Promise):
channel.assertQueue('', { exclusive: true }).then(({ queue }) => {
  const cId = uuid()
  // Listen on the exclusive reply queue for the response matching our ID.
  channel.consume(queue, (msg) => {
    if (msg.properties.correlationId === cId) {
      const response = JSON.parse(msg.content.toString())
      console.log('response', response)
      resolve(response)
    }
  }, { noAck: true })
  const msg = JSON.stringify(action)
  // Send the action, telling the worker where to reply.
  channel.sendToQueue(
    QUEUE_NAME,
    Buffer.from(msg), // new Buffer() is deprecated
    { correlationId: cId, replyTo: queue }
  )
}, reject)
The worker gets the START action along with a correlationId and replyTo queue name, adds the payload to its own internal list of things to do, and responds to the replyTo queue with a 'START_SUCCESS' action.
Now the worker is going to work through its internal list of things to do, and it emits an 'UPDATE' action back to the server via the same replyTo queue. The server therefore needs to keep listening to that queue for updates, and it needs to know which worker is handling the updates for any specific task. The server is smart enough to know that a particular task has started, so it will not dispatch it again.
But when it's time for the worker to stop doing the task, the server needs to know which worker to send a 'STOP' message to. Is there a way for the worker to send back to the server some sort of direct amqp channel to itself that the server can use to send STOP messages?
The simplest answer would seem to be for the worker to create a "reply" queue, send that identifier to the server in the 'START_SUCCESS' message, and have the server store that state somewhere.
However, I think much of the power of RabbitMQ comes from the fact that messages aren't published directly to queues, but to exchanges, and their ultimate destination is determined by their routing key. (Publishing by queue name is actually via an exchange that uses the routing key as the queue name.) If you're not familiar with the different types of exchange, read through the RabbitMQ Getting Started tutorials.
In this case, rather than thinking of the server and worker needing to know each other's identity, you can think in terms of them publishing and subscribing to each other's updates. If everything is published to an exchange, then the server and worker don't actually need to know anything about each other's identity.
Here's how I see that working (a minimal amqplib sketch follows the list):
1. The server generates a unique ID for a particular job.
2. The server publishes a START message to an exchange jobs.new, with a routing key classifying the type of job, and the job ID in the message.
3. The server binds an anonymous queue to the direct or topic exchange jobs.status with the binding key set to the job ID.
4. The worker starts up and takes one message from jobs.ready (or jobs.ready.some_type).
5. The worker binds an anonymous queue to the jobs.control exchange with the job ID as the binding key.
6. The worker starts the task, and publishes a START_SUCCESS message to the exchange jobs.status with the job ID as the routing key.
7. The server receives the START_SUCCESS message from the queue it bound at step 3, and updates its state for that job.
8. The worker periodically sends an UPDATE message to the jobs.status exchange; again, the routing key matches the job ID, so the server receives the message.
9. When the server wants to stop (or modify) the running job, it publishes a STOP message to the jobs.control exchange with the job ID as the routing key.
10. The worker receives this message on the queue bound at step 5, and stops the job.
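Since the steps above are just exchange declarations, bindings, and publishes, the server side is a short amqplib sketch (connection URL, exchange types, and message shapes are illustrative assumptions):

const amqp = require('amqplib')
const { v4: uuid } = require('uuid')
async function startJob(action) {
  const conn = await amqp.connect('amqp://localhost')
  const ch = await conn.createChannel()
  const jobId = uuid()
  await ch.assertExchange('jobs.new', 'topic', { durable: true })
  await ch.assertExchange('jobs.status', 'topic', { durable: true })
  await ch.assertExchange('jobs.control', 'topic', { durable: true })
  // Step 3: an anonymous queue receives status updates for this job only.
  const { queue } = await ch.assertQueue('', { exclusive: true })
  await ch.bindQueue(queue, 'jobs.status', jobId)
  ch.consume(queue, (msg) => {
    const update = JSON.parse(msg.content.toString())
    console.log('job %s update:', jobId, update) // START_SUCCESS, UPDATE, ...
  }, { noAck: true })
  // Step 2: publish START, routed by job type, with the job ID inside.
  ch.publish('jobs.new', action.type, Buffer.from(JSON.stringify({ jobId, ...action })))
  // Step 9, later: stop the job by routing a STOP message on its ID.
  // ch.publish('jobs.control', jobId, Buffer.from(JSON.stringify({ type: 'STOP' })))
}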
Viewed from the RabbitMQ side, you have these elements:
Three exchanges:
- jobs.new, where servers publish new jobs. This could be a simple fanout exchange if all workers can handle all jobs, or it could be a topic exchange which routes jobs into different queues for different types of worker.
- jobs.status, where updates are published by workers. This would be a direct or topic exchange, whose routing keys are, or contain, the job ID.
- jobs.control, where updates are published by the server to control existing jobs. Again, this would be a direct or topic exchange, whose routing keys are, or contain, the job ID.
Permanent queues:
- A single jobs.ready queue, or distinct jobs.ready.some_type queues, bound to the jobs.new exchange.
Anonymous queues:
- One queue per job, created by the server and bound to the jobs.status exchange using that job's ID. Alternatively, the server process could have a single queue for inbound traffic, and simply read the job ID out of the received message.
- One queue per worker, created by the worker and bound to the jobs.control exchange using the ID of the job it is currently processing.
Note that you can attach additional queues to any of these exchanges to get a copy of the traffic, e.g. for logging or debugging. For a topic exchange, just bind the extra queue with the binding key #, and it will get a copy of all messages, without interrupting any existing bindings.
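Mirroring those elements, the worker side could look like this (again an amqplib sketch; queue names and message shapes are illustrative, and the exchange declarations match the server sketch above):

const amqp = require('amqplib')
async function runWorker() {
  const conn = await amqp.connect('amqp://localhost')
  const ch = await conn.createChannel()
  await ch.assertExchange('jobs.new', 'topic', { durable: true })
  await ch.assertExchange('jobs.status', 'topic', { durable: true })
  await ch.assertExchange('jobs.control', 'topic', { durable: true })
  await ch.assertQueue('jobs.ready', { durable: true })
  await ch.bindQueue('jobs.ready', 'jobs.new', '#') // all job types in one queue
  ch.consume('jobs.ready', async (msg) => {
    const job = JSON.parse(msg.content.toString())
    // Step 5: bind an anonymous queue to jobs.control, keyed by this
    // job's ID, so STOP messages for the job reach this worker.
    const { queue } = await ch.assertQueue('', { exclusive: true })
    await ch.bindQueue(queue, 'jobs.control', job.jobId)
    ch.consume(queue, (ctl) => {
      // stop or modify the running job here
    }, { noAck: true })
    // Steps 6 and 8: status messages are routed back by job ID.
    const publishStatus = (type) =>
      ch.publish('jobs.status', job.jobId, Buffer.from(JSON.stringify({ type })))
    publishStatus('START_SUCCESS')
    // ... do the work, calling publishStatus('UPDATE') as it progresses ...
    ch.ack(msg)
  })
}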

Publishing message only if listener exist?

Using RabbitMQ, I'd like my (PHP) code to publish a message to a specific customer only if that specific user is currently listening.
The reason is that my connected user will open a websocket that waits for notifications from RabbitMQ and updates the UI when notifications arrive. But when they first load the page, all the previous notifications will be loaded, so there is no need to reload notifications that are in the queue.
Being new to message queuing, I don't know if it's possible, but I'd like my publisher to check whether user ID = X is currently listening (since the websocket will open a channel when it runs), and if he's currently listening, publish the message. If he's not, it won't publish that message (but will still add it to the database).
The workflow is like this:
Publisher:
- The endpoint receives an event
- It saves the event in the database
- It checks whether user ID = X has an open channel; if yes, it publishes the event to that channel
Subscriber:
- The customer connects to the app
- The frontend loads the last events from the database
- The frontend opens a websocket for that specific user that listens for events that may be published
- When events are published, the websocket tells the frontend.
Maybe what I'm asking is basic, but I lack the knowledge to tell.
One bonus question: is it better to open a channel PER customer, or to open a generic channel for all customers that the subscriber side will filter?
Thank you for your help :)
I believe you are running into this question because there is a distinction within AMQP between the publisher and the queue. There is not a one-to-one correspondence between the two.
If I understand your situation correctly, you have a central publisher that is firing messages off to the message broker. At the same time, you have a number of potential subscribers who are signing on and off of the broker (via websockets in this case). One of these subscribers is a database process, which archives all messages for later retrieval.
What I recommend is the following (a sketch follows the list):
- Publish these messages to a topic-style exchange.
- Subscribing consumers each create their own queue upon subscription. This would be done in the web server code, and it would funnel arriving messages to the websocket.
- When creating subscription queues (dynamically), set the queues to auto-delete after a (short) number of seconds, so small interruptions in connectivity do not cause lost messages.
- Create a persistent queue for the messages going to the database process(es).
It will be the responsibility of the consuming application to de-duplicate between messages loaded from the database and messages flowing in through the websocket; assigning each message a GUID should help with this.
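The question is PHP, but the shape of this is easiest to sketch with Node's amqplib (php-amqplib exposes the same AMQP operations). The exchange name, queue names, and the 10-second expiry are illustrative assumptions; the x-expires queue argument is one way to get the "auto-delete after a few seconds" behaviour:

const amqp = require('amqplib')
async function subscribe(userId, onNotification) {
  const conn = await amqp.connect('amqp://localhost')
  const ch = await conn.createChannel()
  await ch.assertExchange('notifications', 'topic', { durable: true })
  // Per-user queue: 'x-expires' removes it a few seconds after it stops
  // being used, so brief reconnects do not lose messages.
  const { queue } = await ch.assertQueue('notify.' + userId, { arguments: { 'x-expires': 10000 } })
  await ch.bindQueue(queue, 'notifications', 'user.' + userId)
  ch.consume(queue, (msg) => onNotification(JSON.parse(msg.content.toString())), { noAck: true })
}
// The database archiver consumes from one persistent queue bound to everything:
// await ch.assertQueue('notifications.db', { durable: true })
// await ch.bindQueue('notifications.db', 'notifications', 'user.#')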
To answer your second question, it depends on your web server architecture. I do not know enough about PHP, but in terms of the AMQP protocol itself, there is no impact. Channels are simply protocol-level constructs, so creating them is negligible in impact. Multiple consumers can share one channel, or you can create a channel per consumer. It really makes no difference.

How to send events to a single client with pusher pubnub socketio

I am building a multiplayer turn-based game, and communication between the clients and the server is established with Pusher. I can send events to all clients using game channels. The problem is: how do I send an event to a single client? Pusher has no documentation for it; the only apparent solution is to use authenticated channels. Is it viable to authenticate a dedicated channel for every client for sending events to a single client, or is there a better solution?
You touched on the best solution in your question. You should be able to quite easily set up channels programmatically for each individual user and then just broadcast messages to them over those channels.
e.g. (this is a Ruby example but it should be clear what's happening)
user = SOME_USER_OBJECT
Pusher.trigger("card-data-#{user.id}", 'card-update', {data: {card_id: 1, status: 'used'}})
or something like that. Obviously you'd then need to make sure that, on the client side, users are subscribing to the correct channels.
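For instance, a minimal client-side counterpart with pusher-js might look like this (the key, cluster, and currentUserId variable are placeholders):

const pusher = new Pusher('APP_KEY', { cluster: 'CLUSTER' })
const channel = pusher.subscribe('card-data-' + currentUserId)
channel.bind('card-update', (data) => {
  // update the UI for this player only
})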
Obviously if you need the channels to be secure then, as you said, you can use authenticated channels - probably using private channels makes sense in your case.
If you have any more questions then you can reply here again and I'll take a look, or you can email support at support@pusher.com.
Instead of creating an individual channel, you can subscribe to an individual event for each client.
PubNub Stream Filter
If you are using PubNub, you can either create a unique channel for each user and publish the proper message to each of those channels, or you can create a common channel for all of the users and use the Stream Filter feature so that each client only gets the messages they want on that channel. This filtering is performed on the server side, so the end user doesn't receive unwanted messages that have to be ignored.
These are the simple high-level steps for using Stream Filters:
- When you init PubNub on the client side, set a filter expression describing the messages this client wants.
- When you publish messages, add key/values to the meta parameter that will be used to filter messages on the PubNub network before sending them to the subscribers (based on each individual subscriber's filter).
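A minimal sketch of both sides with the PubNub JavaScript SDK (v4-style API; the keys, channel name, and the 'recipient' meta field are illustrative):

const PubNub = require('pubnub')
// Client side: only receive messages whose meta tags this user.
const pubnub = new PubNub({ publishKey: 'pub-key', subscribeKey: 'sub-key', uuid: 'user-42' })
pubnub.setFilterExpression("recipient == 'user-42'")
pubnub.addListener({ message: (m) => console.log(m.message) })
pubnub.subscribe({ channels: ['game'] })
// Publisher side: tag each message with its intended recipient.
pubnub.publish({ channel: 'game', message: { action: 'your-turn' }, meta: { recipient: 'user-42' } })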
For full Stream Filter docs per SDK (that you mentioned via tags):
PubNub JavaScript SDK Stream Filter
PubNub Node SDK Stream Filter
PubNub Ruby SDK Stream Filter
You could also use PubNub BLOCKS to route each message to the appropriate user channel (or however you map your channels to end users) using the on-before-publish event handler. I won't go into the details here since it is slightly more involved, but feel free to ask for additional insights as necessary. To get started, you can review the full BLOCKS docs.

Pubsub with Node.js and Socket.io for individual users

I want to check that I've got this the right way round for using pubsub to gather notifications for a specific user in a Node.js/Socket.io environment.
Here's my setup:
- Main application is written in PHP on CodeIgniter. Auth is handled using Ion_Auth for CI (session etc.)
- Realtime (currently just notifications) is handled with Node.js and Socket.io
- Authenticated users can invite friends to "groups" - an invite will send both an email and an internal notification if the invitee already has an account
- Authenticated users can leave comments and perform actions on shared content. Both will result in a notification to all subscribed users of that content.
I believe the correct way to handle this is for each user to be subscribed to a notification channel. This channel contains every notification for every user, pushed to the channel by a publish event fired any time we do one of the above actions. The subscription then checks this channel for specific data related to the user session, i.e.:
- For notifications related to invites, the published event would contain some uniquely identifying user data, and we would check for that.
- For notifications related to a specific piece of content, we would check the channel for published events containing identifying markers for that content.
Is this the right way to do it? I'm fairly new to Socket.io, Node.js and pubsub, but this seems to make sense to me. The part which is throwing me is that we should be pushing events to the clients, rather than the clients pulling events from the server; this solution seems to do both.
If there is a simpler solution (i.e. something more native to socket.io) i'd appreciate some insight. All I can really find in the way of tutorials or examples is the same chat client writeup over and over...
Edit: Alternatively, would it be more practical to maintain a hash of all connected client IDs alongside their corresponding user IDs, then, when a new message comes in, emit that message to the specific client using var socket = hash[userID]; socket.emit(message);
Anyone got any thoughts as to potential bottlenecks in this case? The site could potentially have many thousands of concurrent users being updated about multiple events.
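To make the edit concrete, here is a minimal sketch of that hash approach (assuming the user ID arrives in the connection handshake; names are illustrative):

const io = require('socket.io')(3000)
const socketsByUser = {}
io.on('connection', (socket) => {
  const userId = socket.handshake.query.userId // assumed to come from your auth flow
  socketsByUser[userId] = socket
  socket.on('disconnect', () => { delete socketsByUser[userId] })
})
// Called whenever a new message for a specific user arrives.
function notify(userId, message) {
  const socket = socketsByUser[userId]
  if (socket) socket.emit('notification', message)
}

Note that Socket.io's built-in rooms do the same bookkeeping for you: call socket.join(userId) on connection, then io.to(userId).emit('notification', message).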
I suggest not implementing the pubsub yourself. I had to do that kind of broadcast once and used RabbitMQ to handle the connections and routing (which includes broadcast).
Real-time messaging to a browser is done using reverse Ajax calls (long-held HTTP connections; see Comet on Wikipedia).
See RabbitMQ server and libraries for Node
RabbitMQ offers many advantages:
- The client will always be able to post a message as long as RabbitMQ is running.
- You will be able to scale your server easily by starting another instance of your application server whenever you need more throughput (scale out).
- Messages sent to RabbitMQ can be persisted.
- Posting messages to RabbitMQ can be done inside a transaction.
- An easy server interface to manage your queues and exchanges.
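One possible shape of this suggestion in Node, with a fanout exchange broadcasting to every app-server instance (the exchange name and message shape are illustrative; io is a socket.io server using rooms as sketched earlier):

const amqp = require('amqplib')
async function bridge(io) {
  const conn = await amqp.connect('amqp://localhost')
  const ch = await conn.createChannel()
  await ch.assertExchange('notifications', 'fanout', { durable: false })
  // Each server instance gets its own exclusive queue, so every instance
  // sees every message and forwards it to the clients connected to it.
  const { queue } = await ch.assertQueue('', { exclusive: true })
  await ch.bindQueue(queue, 'notifications', '')
  ch.consume(queue, (msg) => {
    const { userId, payload } = JSON.parse(msg.content.toString())
    io.to(userId).emit('notification', payload)
  }, { noAck: true })
}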
